Anthropic Is Subsidizing Every Claude Code Session. The Math Doesn't Work.
Eight months. That's how long it took Claude Code to overtake GitHub Copilot and Cursor as the most-used AI coding tool on the planet. Anthropic went from zero to dominant in less time than most startups take to find product-market fit.
Then, on March 23, the cracks showed up.
Max plan subscribers paying $200 a month started burning through their five-hour session windows in under 90 minutes. Pro users at $20 a month reported being locked out for days at a time. Single prompts consumed 15% of an entire session quota. Developers who had restructured their entire workflows around Claude Code suddenly couldn't use it.
Anthropic's response was honest and alarming: "People are hitting usage limits in Claude Code way faster than expected. We're actively investigating. It's the top priority for the team."
The top priority. Not a feature. Not a model improvement. Keeping the lights on.
Two bugs and a capacity problem
The investigation surfaced two caching bugs that explain part of the chaos.
The first was a cache invalidation issue. Conversation history that should have persisted across turns was getting wiped mid-session, so every follow-up prompt reprocessed the entire context from scratch. Users paid full token cost for work the system had already done. One developer documented cache reads consuming 99.93% of their quota, at a ratio of roughly 1,310 cache-read tokens for every token of actual input and output.
The second bug hit session resumption. The --resume flag, designed to pick up where you left off, was reprocessing entire prior sessions instead of retrieving cached context. A feature built to save compute was instead multiplying it by an order of magnitude.
Together, these bugs inflated costs by 10 to 20x for affected users. Silently. No error message. No warning. Just a quota bar that drained like a bathtub with the plug pulled.
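To see why a broken cache inflates costs so dramatically, here's a toy cost model. The token counts and rates are illustrative assumptions (loosely shaped like public API pricing, where cached reads bill at a small fraction of fresh input), not Anthropic's actual numbers:

```python
# Toy model: relative token cost of an N-turn session with and without
# prompt caching. Rates and sizes are illustrative assumptions only.
FRESH_INPUT_RATE = 1.0   # relative cost per fresh input token
CACHE_READ_RATE = 0.1    # relative cost per cached token re-read

def session_cost(turns, context_tokens, new_tokens_per_turn, cache_works):
    cost = 0.0
    context = context_tokens
    for _ in range(turns):
        if cache_works:
            # Prior context is served from cache at the discounted rate.
            cost += context * CACHE_READ_RATE + new_tokens_per_turn * FRESH_INPUT_RATE
        else:
            # Cache invalidated: the full context is reprocessed as fresh input.
            cost += (context + new_tokens_per_turn) * FRESH_INPUT_RATE
        context += new_tokens_per_turn  # the conversation grows each turn
    return cost

healthy = session_cost(turns=20, context_tokens=50_000,
                       new_tokens_per_turn=2_000, cache_works=True)
broken = session_cost(turns=20, context_tokens=50_000,
                      new_tokens_per_turn=2_000, cache_works=False)
print(f"cost multiplier with broken cache: {broken / healthy:.1f}x")  # ~8.0x
```

With these assumed numbers, the broken-cache session costs about 8x more than the healthy one, the same ballpark as the 10 to 20x inflation affected users reported. Longer sessions and bigger codebases push the multiplier higher, because the wasted reprocessing grows with context size.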
But bugs alone don't explain what happened. On March 26, Anthropic confirmed they were "intentionally adjusting five-hour session limits to manage growing demand." Peak hours between 5am and 11am Pacific now burn through quotas faster. A usage promotion that had doubled limits expired on March 28. The squeeze came from both directions at once.
The $5,000 question
Here's the number that reframed the entire conversation. According to Cursor's internal analysis, cited by Forbes, a Claude Code Max subscriber can consume up to $5,000 in compute per month at API pricing. They pay $200.
That's a 25x subsidy.
Anthropic isn't running a software business at those margins. They're running a customer acquisition campaign disguised as a subscription service. Every power user costs them thousands. Every new subscriber who actually uses the product makes the math worse.
The counterargument exists and it's worth considering. Martin Alderson published an analysis showing that Anthropic's retail API rates don't reflect actual infrastructure costs. Using OpenRouter as a benchmark, comparable inference runs at roughly 10% of Anthropic's published prices. By that math, the heaviest 5% of users cost about $500 a month in real compute, meaning a $300 loss per user. For the average subscriber spending about $6 a day in API-equivalent costs, the $20 Pro plan is actually profitable.
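The two framings differ only in what you assume a token costs to serve. A quick sketch of the unit economics, using the figures cited above (all inputs are the article's reported numbers, not Anthropic internals):

```python
# Unit economics under the two cost lenses described above.
# All inputs are the article's reported figures, not Anthropic internals.
SUBSCRIPTION = 200          # Max plan price, $/month
API_EQUIV_USAGE = 5_000     # heavy user's monthly usage valued at retail API prices, $
INFRA_FRACTION = 0.10       # Alderson's estimate: real compute ~10% of retail

# Lens 1: value usage at retail API prices -> the headline 25x subsidy.
subsidy_multiple = API_EQUIV_USAGE / SUBSCRIPTION

# Lens 2: value usage at estimated infrastructure cost -> a smaller, real loss.
real_cost = API_EQUIV_USAGE * INFRA_FRACTION       # ~$500/month
loss_per_power_user = real_cost - SUBSCRIPTION     # ~$300/month

# Average Pro user: ~$6/day in API-equivalent usage against a $20 plan.
pro_real_cost = 6 * 30 * INFRA_FRACTION            # ~$18/month
pro_margin = 20 - pro_real_cost                    # thin but positive

print(subsidy_multiple, loss_per_power_user, pro_margin)
```

Both lenses agree on direction for power users (a loss) and disagree only on magnitude. Note how thin the Pro margin is under the optimistic lens: a casual user only has to roughly double their usage before they flip into the red.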
Both numbers can be true at once. Anthropic profits on casual users and hemorrhages money on power users. The problem is that Claude Code turns casual users into power users. That's the whole point of the product.
Growth that creates its own crisis
The adoption numbers are staggering. Claude reached 18.9 million monthly active users in early 2026. Mobile daily actives tripled since January, hitting 11.3 million in March. Paid subscribers doubled. Revenue hit a $2.5 billion annualized run rate by February. Over 70% of Fortune 100 companies use Claude in some capacity.
Every one of those numbers is a victory. Every one of them makes the resource problem harder.
This is the trap that subsidy-driven growth sets. You price below cost to acquire users. It works. Users love the product. They tell other users. Growth compounds. And then you're staring at a compute bill that scales linearly with adoption while revenue scales at a fraction of that rate.
Anthropic isn't alone here. This is the playbook every AI company is running. OpenAI reportedly loses money on ChatGPT Plus subscribers who use it heavily. Google subsidizes Gemini access to keep developers in their ecosystem. The entire industry is in a land grab where the currency is compute and nobody is charging what it actually costs.
The difference is that Claude Code's power users are unusually expensive. A chatbot conversation costs fractions of a cent per turn. A coding agent that reads entire codebases, runs tools, retries on failures, and maintains long context windows across sessions burns through tokens at a completely different rate. The product that makes Claude Code valuable is the same product that makes it expensive to operate.
What this means for developers
NBC News reported that Claude's "most devoted users" are getting fed up. The community response has been a mix of frustration and pragmatism. Developers shared workarounds: shift to off-peak hours, break projects into smaller sessions, avoid --resume, monitor usage in the first few turns, downgrade to older versions that handle caching better.
These are smart tactics. They're also a warning sign. When your most engaged users start optimizing around your limitations instead of their own work, you have a retention problem waiting to happen.
The deeper concern is dependency. In one survey, 56% of engineers said they now do 70% or more of their work with AI tools. If you've rebuilt your workflow around Claude Code and the limits tighten further, you don't have a backup plan. You have a productivity cliff.
Anthropic will fix the caching bugs. They'll optimize infrastructure. They'll probably raise prices or restructure tiers. The $200 Max plan consuming $5,000 in compute is not a sustainable equilibrium. Something has to give.
The real question
Every developer tool that achieves dominance faces this moment. The period where growth outpaces the business model. Slack hit it. Figma hit it. AWS hit it in reverse, starting expensive and racing prices down.
Anthropic's version is uniquely challenging because the underlying cost isn't just servers. It's intelligence. Every improvement to the model makes it more capable, which makes users do more with it, which costs more to serve. The better the product gets, the more expensive each user becomes.
The company commands an enormous valuation and deep-pocketed backers. They can absorb losses for a while. But "for a while" has a limit, and the growth trajectory is compressing the timeline.
Claude Code earned its position. Eight months to market leadership is a testament to the product. The question now is whether Anthropic can build a business model that survives the success. Because right now, every new user who falls in love with the tool makes the problem a little bit worse.