Claude Code is a lock-in play. Use Cursor or OpenCode.
It might be some confirmation bias on my part, but it feels as if companies are becoming more and more hostile to their API users. Recently Spotify basically nuked their API with zero urgency to fix it, Reddit has a whole convoluted npm package you're obliged to use to create a bot, and Facebook requires you to provide registered company and tax details even for development with some permissions. Am I just an old man screaming at a cloud about how APIs used to actually be useful and intuitive?
I really hope someone from any of those companies (ideally all of them) would publish a very clear statement on the following question: if I build a commercial app that lets my users connect using the OAuth token from their own ChatGPT/Claude etc. account, do they allow me (and their users) to do this or not?
I totally understand that I should not reuse my own account to provide services to others, as direct API usage is the obvious choice here, but this is a different case.
I am currently developing something that would be a perfect fit for this OAuth-based flow, and I find it quite frustrating that in most cases I cannot find a clear answer to this question. I don't even know whom I'm supposed to contact to get an answer or to discuss this as an independent dev.
EDIT: Some replies to my comment have pointed out that Anthropic's ToS is clear. I'm not saying it isn't when taken in a vacuum, yet in practice, even after it was published, some confusion remained online, in particular regarding whether OAuth token usage was still OK with the Agent SDK for personal use. If it is, that would lead to other questions I personally cannot find a clear answer to, hence my original statement. Also, I am very interested in the stance of other companies on this subject.
Maybe I am being overly cautious here, but I want to be clear that this is just my personal opinion and an attempt to understand what exactly is allowed. This is not business or legal advice.
I'm only waiting for OpenAI to provide an equivalent ~100 USD subscription to entirely ditch Claude.
Opus has gone downhill continuously in the last week (and before you start flooding me with replies: I've been testing Opus/Codex in parallel for the last week, and I have plenty of examples of Claude going off track, then apologising, then saying "now it's all fixed!" and then only fixing part of it, while Codex nailed it on the first shot).
I can accept specific model limits, not ups and downs in reliability. And don't get me started on how bad the Claude client has become. Others are finally catching up, and gpt-5.3-codex is definitely better than opus-4.6.
Everyone else (Codex CLI, Copilot CLI, etc.) is going open source; they are going closed. Others (OpenAI, Copilot, etc.) explicitly allow using OpenCode; they explicitly forbid it.
This hostile behaviour is just the last straw.
The economic tension here is pretty clear: flat-rate subscriptions are loss leaders designed to hook developers into the ecosystem. Once third parties can piggyback on that flat rate, you get arbitrage - someone builds a wrapper that burns through $200/month worth of inference for $20/month of subscription cost, and Anthropic eats the difference.
What is interesting is that OpenAI and GitHub seem to be taking the opposite approach with Copilot/OpenCode, essentially treating third-party tool access as a feature that increases subscription stickiness. Different bets on whether the LTV of a retained subscriber outweighs the marginal inference cost.
Would not be surprised if this converges eventually. Either Anthropic opens up once their margins improve, or OpenAI tightens once they realize the arbitrage is too expensive at scale.
I don't think it's a secret that AI companies are losing a ton of money on subscription plans. Hence the stricter rate limits, new $200+ plans, push towards advertising etc. The real money is in per-token billing via the API (and large companies having enough AI FOMO that they blindly pay the enormous invoices every month).
That should be illegal. They used the excuse that it was just there for the taking, or literally burned the evidence of the pirated books.
What they are doing is implicitly changing the terms under which their services are used.
Not according to this guy who works on Claude Code: https://x.com/trq212/status/2024212378402095389?s=20
What a PR nightmare, on top of an already bad week. I’ve seen 20+ people on X complaining about this and the related confusion.
I've been paying for a Max subscription for a long time; I like their model but I hate their tools:
- Claude Desktop looks like a demo app. It's slow to use and so far behind the Codex app that it's embarrassing.
- Claude Code is buggy as hell, and I don't think I've ever used a CLI tool that consumes so much memory and CPU. Let's not even talk about feature parity with other agents.
- The Claude Agent SDK is poorly documented, half finished, and is just a thin wrapper around a CLI tool (rough sketch below)…
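To show what I mean by "thin wrapper": driving the claude binary directly gets you most of the way there. A minimal sketch, assuming the documented -p (non-interactive) and --output-format json flags; this is not the SDK's actual code:

    # Minimal sketch, not the SDK's real implementation: shell out to the
    # `claude` CLI in non-interactive mode and parse its JSON output.
    import json
    import subprocess

    def ask_claude(prompt: str) -> dict:
        result = subprocess.run(
            ["claude", "-p", prompt, "--output-format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    reply = ask_claude("List the TODO comments in this repo")
    print(reply)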
Oh and none of this is open source, so I can do nothing about it.
My only option to stay with their model is to build my own tool. And now I discover that using my subscription with the Agent SDK is against the terms of use?
I'm not going to pay 500 USD of API credits every month, no way. I have to move to a different provider.
Your core customers are clearly having a blast building their own custom interfaces, so obviously the thing to do is update TOS and put a stop to it! Good job lol.
I know, I know, customer experience, ecosystem, gardens, moats, CC isn't fat, just big boned, I get it. Still a dick move. This policy is souring the relationship, and basically saying that Claude isn't a keeper.
I'll keep my eye-watering sub for now because it's still working out, but this ensures I won't feel bad about leaving when the time comes.
Update: yes yes, API, I know. No, I don't want that. I just want the expensive predictable bill, not metered corporate pricing just to hack on my client.
Is it just me, or will this speed up the timeline where a 'good enough' open model (Qwen? DeepSeek? I'm sure the Chinese will see value in undermining OpenAI/Anthropic/Google) combined with good-enough/cheap hardware (10x inference improvement in an M7 MacBook Air?) makes running something like OpenCode locally a no-brainer?
Going to keep using the Agent SDK with my Pro subscription until I get banned. It's not OpenClaw, it's my own project. It started by just proxying requests to Claude Code through the command line; the SDK just made it easier. Not sure what difference it makes to them if I have a cron job sending Claude Code requests or an Agent SDK request. Maybe if it's just me and my toy they don't care. We'll see how they clarify it tomorrow.
I think I've made two good decisions in my life. The first was switching entirely to Linux around '05, even though it was a giant pain in the ass that was constantly behind the competition in terms of stability and hardware support. It took a while, but wow, no regrets.
The second appears to be hitching my wagon to Mistral, even though it's apparently nowhere near as powerful or featureful as the big guys. But do you know how many times they've screwed me over? Not once.
Maybe it's my use cases that make this possible. I definitely modified my behavior to accommodate Linux.
AI is the new high-end gym membership. They want you to pay the big fee and then not use what you paid for. We'll see more and more roadblocks to usage as time goes on.
This is funny. This change actually pushes me into using a competitor more (https://www.kimi.com). I was trying out this provider with oh-my-pi (https://github.com/can1357/oh-my-pi) and was lamenting that it didn't have web search implemented using kimi.
Well, a kind contributor just added that feature specifically because of this ban (https://github.com/can1357/oh-my-pi/pull/110).
I'm happy as a clam now. Yay for competition!
The pressure is to boost revenue by forcing more people onto the API, generating huge numbers of tokens they can charge more for. LLMs are becoming common commodities as open-weight models keep catching up. There are similarities with piracy in the '90s, when users realized they could Ctrl+C Ctrl+V a file/model and didn't need to buy a CD or use a paid API.
This is how you gift-wrap the agentic era for the open-source Chinese LLMs. Devs don't need the best model; they need one without lawyers attached.
There is a new breed of agent-agnostic tools that call the Claude Code CLI as if it's an API (I'm currently trying out vibe-kanban).
This could be used to adhere to Claude's TOS while still allowing the user to switch AI companies at a moment's notice.
Right now there's limited customizability in this approach, but I think it's not far-fetched to see FAR more integrated solutions in the future if the lock-in trend continues. For example: one MCP server that you configure into a coding agent like Claude Code and that overrides its entire behavior (tools, skills, etc.) with a different, unified open-source system. Think something similar to IntelliJ IDEA's existing MCP, which provides its own file-edit tool instead of the one the agent comes with.
Illustration of what I'm talking about:
- You install Claude Code with no configuration
- Then you install the meta-agent framework
- With one command, the meta-agent MCP is installed in Claude Code and the built-in tools are disabled via a permissions override
- You access the meta-agent through a different UI (similar to vibe-kanban's web UI)
- Everything you do gets routed directly to Claude Code, using your Claude subscription legally (input-level features like commands get resolved by the meta-agent UI before being sent to Claude Code)
- Claude Code has to use the tools and skills from the meta-agent MCP, both because the prompt instructs it to and because its own tools are permission-denied (result: very tight integration with the meta-agent UI)
- This would also work with any other CLI coding agent (Codex, Gemini CLI, Copilot CLI, etc.) should they start getting ideas about locking users in
- If Claude Code rug-pulls subscription quotas, just switch to a competitor instantly
All it requires is a CLI coding agent with MCP support and a ToS that allows automated use of its UI (disallowing that would be massive hypocrisy, as the AI companies themselves ship computer-use agents that automate other apps' UIs).
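To make the wiring concrete, a rough sketch in Python that writes the two project-level config files Claude Code reads (.mcp.json and .claude/settings.json); the "meta-agent" server and its binary are hypothetical:

    # Rough sketch of the setup described above. The .mcp.json and
    # .claude/settings.json layouts follow Claude Code's documented project
    # config; the "meta-agent" MCP server and its binary are hypothetical.
    import json
    from pathlib import Path

    mcp_config = {
        "mcpServers": {
            "meta-agent": {                   # hypothetical MCP server
                "command": "meta-agent-mcp",  # hypothetical binary
                "args": ["--serve"],
            }
        }
    }

    settings = {
        "permissions": {
            # Deny Claude Code's built-in editing/execution tools so all work
            # has to flow through the meta-agent MCP's tools instead.
            "deny": ["Edit", "Write", "Bash"],
        }
    }

    Path(".mcp.json").write_text(json.dumps(mcp_config, indent=2))
    Path(".claude").mkdir(exist_ok=True)
    Path(".claude/settings.json").write_text(json.dumps(settings, indent=2))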
For what it's worth, I built an alternative specifically because of the ToS risk. GhostClaw uses proper API keys stored in an AES-256-GCM + Argon2id encrypted vault: no OAuth session tokens, no subscription credentials, no middleman. Skills are signed with Ed25519 before execution. Code runs in a Landlock + seccomp kernel sandbox. If your key gets compromised you rotate it; if a session token gets compromised in someone else's app you might not even know.
It's open source, one Rust binary, ~6MB. https://github.com/Patrickschell609/ghostclaw
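For the curious, the general vault pattern looks roughly like this: derive a key from a passphrase with Argon2id, then seal the API key with AES-256-GCM. A generic Python sketch, not GhostClaw's actual Rust code:

    # Generic sketch of an Argon2id + AES-256-GCM secret vault; illustrative
    # only, not GhostClaw's implementation.
    # Requires: pip install argon2-cffi cryptography
    import os
    from argon2.low_level import Type, hash_secret_raw
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def _derive_key(passphrase: bytes, salt: bytes) -> bytes:
        # Argon2id stretches the passphrase into a 256-bit AES key.
        return hash_secret_raw(passphrase, salt, time_cost=3,
                               memory_cost=64 * 1024, parallelism=4,
                               hash_len=32, type=Type.ID)

    def seal(passphrase: bytes, api_key: bytes) -> dict:
        salt, nonce = os.urandom(16), os.urandom(12)
        ct = AESGCM(_derive_key(passphrase, salt)).encrypt(nonce, api_key, None)
        return {"salt": salt, "nonce": nonce, "ciphertext": ct}

    def unseal(passphrase: bytes, vault: dict) -> bytes:
        key = _derive_key(passphrase, vault["salt"])
        return AESGCM(key).decrypt(vault["nonce"], vault["ciphertext"], None)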
The fundamental tension here is that AI companies are selling compute at a loss to capture market share, while users are trying to maximize value from their subscriptions.
From a backend perspective, the subscription model creates perverse incentives. Heavy users (like developers running agentic workflows) consume far more compute than casual users, but pay the same price. Third-party tools amplify this asymmetry.
Anthropic's move is economically rational but strategically risky. Models are increasingly fungible - Gemini 3.1 and Claude 4.5 produce similar results for most tasks. The lock-in isn't the model; it's the tooling ecosystem.
By forcing users onto Claude Code exclusively, they're betting their tooling moat is stronger than competitor models. Given how quickly open-source harnesses like pi have caught up, that's a bold bet.
I just cancelled my Pro subscription. Turns out that Ollama Cloud with GLM-5 and qwen-coder-next is very close in quality to Opus, I never hit their rate limits even with two sessions running the whole day, and there's zero advantage for me in using Claude Code over OpenCode.
This feels perfectly justifiable to me. The subscription plans are super cheap, and if they insist you use their tool, I understand. Y'all seem a bit entitled, if I'm being honest.
Anthropic is dead. Long live open platforms and open-weight models. Why would I need Claude if I can get MiniMax, Kimi, and GLM for a fraction of the price?
There are a million small-scale AI apps that just aren't worth building because there's no way to do billing that makes sense. If Anthropic wanted to own that market, they could introduce a bring-your-own-Claude model, where you log in with Claude and token costs get billed to your personal account (after some reasonable monthly freebies from your subscription).
But the big guys don't seem interested in this; maybe some lesser-known model provider will carve out this space.
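A purely hypothetical sketch of what that could look like from the app developer's side; none of the endpoints or parameters below exist today, this is just the shape of the ask:

    # Purely hypothetical: the app forwards the user's own OAuth token and the
    # provider bills that user's subscription, not the developer's API key.
    # No such endpoint exists today.
    import requests

    def complete_for_user(user_oauth_token: str, prompt: str) -> str:
        resp = requests.post(
            "https://api.provider.example/v1/messages",  # hypothetical endpoint
            headers={"Authorization": f"Bearer {user_oauth_token}"},
            json={
                "model": "some-model",  # illustrative
                "max_tokens": 512,
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        # Usage beyond the user's monthly freebies would land on their own
        # bill, never on the developer's.
        return resp.json()["content"][0]["text"]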
I got banned for violating the terms of use, apparently, but I'm mystified as to what rule I broke, and my appeals just vanish into the ether.
In enterprise software, this is an embedded/OEM use case.
And historically, embedded/OEM use cases have always had different pricing models, for a variety of reasons.
How is this any different from that long-established practice?
From the legal docs:
> Authentication and credential use
> Claude Code authenticates with Anthropic’s servers using OAuth tokens or API keys. These authentication methods serve different purposes:
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.
> Developers building products or services that interact with Claude’s capabilities, including those using the Agent SDK, should use API key authentication through Claude Console or a supported cloud provider. Anthropic does not permit third-party developers to offer Claude.ai login or to route requests through Free, Pro, or Max plan credentials on behalf of their users.
> Anthropic reserves the right to take measures to enforce these restrictions and may do so without prior notice.
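In other words, the sanctioned path for anything that isn't Claude Code or Claude.ai itself is plain API-key billing through the Console, e.g. with the official anthropic Python SDK (model name illustrative):

    # The sanctioned path per the policy above: API-key auth billed via the
    # Claude Console, using the official `anthropic` Python SDK.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model id
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello from a third-party app"}],
    )
    print(message.content[0].text)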
OK, I hope someone from Anthropic reads this. Your API billing makes it really hard to work with in India. We've had to switch to OpenRouter because Anthropic keeps rejecting all the cards we have tried, and these are major Indian banks. This has been going on for MONTHS.
Thariq has clarified that there are no changes to how the SDK and Max subscriptions work:
https://x.com/i/status/2024212378402095389
---
On a different note, it's surprising that a company of that size has to clarify something as important as its ToS via X.
And because of this I'll obviously opt not to subscribe to a Claude plan when I can just use something like Copilot and access the models that way via OpenCode.
At this point, where Kimi K2.5 on Bedrock with a simple open-source harness like pi is almost as good, the big labs will soon have to compete for users... OpenAI seems to know that already? While Anthropic bans, bans, bans.
I would expect it will still only be enforced in a semi-strict way.
I think what they want to achieve here is less "kill OpenClaw" or similar and more "keep our losses under control in general". And now they have a clear criterion to refer to when they take action, and a good way to decide whom to act on.
If your usage is high, they would block you / take action. But if you have your Max subscription and aren't really losing them money, why should they push you away (the monopoly incentive sounds wrong in the current market)?
That page is... confusing.
> Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK.
This is literally the last sentence of the paragraph before "Authentication and credential use".
I feel like they want to be like Apple, and OpenCode + open-source models are Linux. The thing is, Apple is (for some) way better in user experience and quality. I think they can pull it off only if they keep their distance from the others. But if Google/Chinese models become as good as Claude, then there won't be a reason, at least for me, to pay 10x for the product.
Their moat is evaporating before our eyes. Anthropic is Microsoft's side piece, but Microsoft is married with kids to OpenAI.
And OpenAI just told Microsoft why they shouldn't be seeing Anthropic anymore: gpt-5.3-codex.
RIP Anthropic.
OpenClaw, NanoClaw, et al. all use the Agent SDK, which will from now on be forbidden.
They are literally alienating a large percentage of OpenClaw, NanoClaw, and PicoClaw customers, because those customers will surely not be willing to pay API pricing, which is at least 6-10x Max plan pricing (for my usage).
This isn't too surprising to me, since they probably have a direct competitor to OpenClaw et al. in the works right now, but until then I am cancelling my subscription and porting my NanoClaw fork with mem0 integration to work with OpenAI instead.
That's not a "that'll teach 'em" statement, it is just my own cost optimization. I am quite fond of Anthropic's coding models and might still subscribe again at the $20 level, but they just priced me out for personal assistant, research, and 90% of my token usage.
Is this a direct shot at things like OpenClaw, or am I reading it wrong?
This is a signal that everyone making AI apps should build on Gemini/OpenAI, and since there is a dance of code and model to get good results, Anthropic is inevitably writing itself out of being the backend for everyone else's AI apps going forward.
Quick reminder! We are in the golden era of big-company programming agents. Enjoy it while you can, because it is likely going to get worse over time. Hopefully there will be competitive open-source agents, and some benevolent nerds will put together a reasonable service. Otherwise I can see companies investing in their own AI infrastructure, and developers who build their own systems becoming the top performers.
This is the VC-funded startup playbook. It has been repeated many times, but maybe for the younger crowd it is new. Start a new service that is relatively permissive, then gradually restrict APIs and permissions. Finally, start throwing in ads and/or making it more expensive to use. Part of the reason is that in the beginning they are trying to get as many users as possible while burning VC money. Then, once the honeymoon is over, they need to make a profit, so they cut back on services, nerf stuff, increase prices, and start adding ads.
The telemetry from claude code must be immensely valuable for training. Using it is training your replacement!
Why does it matter to Anthropic if my $200 plan usage is coming from Claude Code or a third party?
Doesn't it all count towards my usage limits the same?
The analogy I like to use when people say "I paid" is that you can't pay for a buffet and then take all the food home for free.
What is the point of developing against the Agent SDK after this change?
How can they even enforce this? Can't you just spoof your network requests to appear like they're coming from Claude Code?
In any case, Codex is the better SOTA anyway, and they let you do this. And if you aren't interested in the best models, Mistral lets you use both Vibe and their API through your Vibe subscription API key, which is incredible.
Sounds like a panicking company grasping and clawing for a moat
Not sure what the problem is. I am on Max and use Claude Code, never hit usage issues; that's what I pay for, and I want that to always be an option (capped monthly cost). For other uses it makes sense to go through their API service. This is less confusing and provides clarity for users: if you are a first-party user, use Claude's tools to access the models; otherwise, use the API.
This article is somewhat reassuring to me, someone experimenting with openclaw on a Max subscription. But idk anything about the blog so would love to hear thoughts.
https://thenewstack.io/anthropic-agent-sdk-confusion/
In my opinion (which means nothing): if you are using your own hardware and not profiting directly from Claude's use (as in building a service powered by your subscription), I don't see how this is a problem. I am by no means blowing through my usage (usually <50% weekly with Max 5x).
Product usage subsidized by company, $100. Users inevitably figure out how to steal those subsidies, agents go brrrrr. Users mad that subsidy stealing gets cut off and completely ignore why they need to rely on subsidies in the first place, priceless.
Reading these comments, aren't we missing the obvious?
Claude Code is a lock-in, where Anthropic takes all the value.
If the frontend and API are decoupled, they are one benchmark away from losing half their users.
Some other motivations: they want to capture the value. Even if it's unprofitable now, they can expect it to become vastly profitable as inference costs drop, efficiency improves, competitors die out, etc. Or, worst case, they build the dominant brand and then reduce the quotas.
Then there's brand - when people talk about OpenCode they will occasionally specify "OpenCode (with Claude)" but frequently won't.
Then platform - at any point they can push any other service.
Look at the Apple comparison. Yes, the hardware and software are tuned and tested together. The analogy here is training the specific harness, caching the system prompt, switching models, etc.
But Apple also gets to charge Google $billions for being the default search engine. They get to sell apps. They get to sell cloud storage, and even somehow a TV. That's all super profitable.
At some point Claude Code will become an ecosystem with preferred cloud and database vendors, observability, code review agents, etc.