The Shrinkflation of AI
How Anthropic used a 2x usage promotion to reset your expectations — and then quietly reduced what you were getting all along.
On March 13, 2026, Anthropic gave every Claude user a gift: double usage outside peak hours. For two weeks, the limits felt generous. Work flowed. You adapted your schedule, shifted heavy tasks to off-peak, and your brain quietly recalibrated what “normal” looked like.
On March 28, the promotion ended.
And everything broke.
The numbers don’t lie
Within days, Reddit, X, and GitHub were flooded with complaints. Max 5x subscribers — paying $100/month — reported burning through their entire session in one hour. One Max 20x user watched their usage jump from 21% to 100% on a single prompt. A non-technical user asked Claude one question about used cars and immediately hit her session limit.
Anthropic’s official response? “Your weekly limits haven’t changed. We just adjusted how they’re distributed during peak hours.”
Translation: the same amount of food, served on a smaller plate, during the hours you’re hungriest.
The invisible metric
Here’s the part that makes this possible: Anthropic doesn’t tell you how many tokens your plan includes. You get a percentage bar. 55% used. But 55% of what? That number can change any day, and you’d never know — because you were never told the baseline.
It’s shrinkflation, software-style. The bag of chips costs the same, but weighs 20 grams less. Except in software, there’s no nutrition label on the back to check.
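To make the problem concrete, here’s a toy sketch in Python. Every number in it is invented; Anthropic publishes neither the token quota nor the formula behind the bar. The point is purely structural: a percentage with a hidden denominator tells you nothing about the denominator.

```python
# Toy model of an opaque usage meter. All quotas here are made up;
# the real quota is undisclosed, which is exactly the problem.

def usage_bar(tokens_used: int, quota: int) -> str:
    """Render usage the way the product does: a percentage, no baseline."""
    return f"{100 * tokens_used / quota:.0f}% used"

same_work = 550_000  # identical workload on two different days

print(usage_bar(same_work, quota=1_000_000))  # 55% used
print(usage_bar(same_work, quota=700_000))    # 79% used
```

Shrink the hidden quota and the interface stays perfectly plausible: the bar just fills faster, and the user has no baseline against which to notice.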
The 2x served its purpose
The promotion wasn’t charity. It was a behavioral reset.
Before the 2x, you had an intuitive sense of how much you could do in a session. After two weeks of double usage, that internal calibration was gone. Whatever the new 1x actually is, it feels worse than the old 1x, even if the two are identical. Your reference point shifted from 1x to 2x, and everything below 2x now registers as a downgrade.
This isn’t speculation. It’s textbook anchoring, one of the best-documented cognitive biases in behavioral economics.
The $100 anesthesia
Days after the complaints peaked, an email arrived from Anthropic offering every subscriber a one-time credit of $100 in “extra usage.” The timing was impeccable.
The credit serves multiple purposes. It softens the blow. It introduces the concept of paying beyond your subscription. It normalizes a usage-based billing layer that didn’t exist before. And if you use it and like the flexibility, you might just upgrade to the $200 plan instead of dealing with recharges.
The first dose is always free.
What this means
Anthropic isn’t evil. They’re a company burning billions on inference with an IPO target of October 2026. They need to move from subsidized flat-rate pricing to something that reflects actual compute costs. That’s rational.
But the way they’re doing it — through opaque limits, behavioral manipulation, and carefully timed promotions — treats users as variables to optimize rather than customers to serve.
The industry term for what Anthropic is building is “frontier AI.” Ironic, then, that the frontier of their business model looks a lot like every cable company, airline, and streaming platform that came before them: start generous, build dependency, then slowly squeeze.
The difference is that when Netflix raised prices, you couldn’t switch to a Chinese alternative that costs one-seventh as much and delivers 95% of the quality.
But with AI, you can. And that’s a story for another post.