← All Insights

Anthropic Didn't Block Abuse. They Blocked Competition.

ai-integration · anthropic · platform-risk

Anthropic just told OpenClaw users that their Claude Pro and Max subscriptions no longer work with third-party agent frameworks. Crypto developers who built real workflows on top of those subscriptions are now looking at $1,000–$5,000 per day in usage costs. Anthropic offered a one-month credit and a discount on pre-purchased bundles.

Boris Cherny, who leads Claude Code at Anthropic, confirmed that enforcement will expand beyond OpenClaw to all third-party harnesses.

Here’s what actually happened. Anthropic found that flat-rate subscribers were running agentic workloads that cost more to serve than their subscriptions bring in. So they changed the terms.

That’s a legitimate business decision. It’s also a textbook example of platform risk.

The OpenClaw community built serious workflows on Claude subscriptions — not because they were trying to game the system, but because the API was expensive and the subscription looked like a sanctioned path. Anthropic let that happen until the economics stopped working for Anthropic. Then they pulled the rug.

What makes this sharper is the timing. Anthropic shipped Claude Code Channels — their own agent orchestration layer — at roughly the same moment they cut off third-party harnesses. They aren’t just blocking competitors; they’re replacing them.

This is how platform control actually works. You let the ecosystem build, you watch where the demand concentrates, you ship your own version, and you tighten the terms for everyone else. Google, Apple, Amazon — they’ve all run this playbook. Anthropic is running it now.

If you’re building production workflows on a third-party model, the real question isn’t “is this the best model?” It’s “what happens when the model vendor decides my use case competes with theirs?”

The answer, apparently, is a one-month credit and a 30% discount.

Source: VentureBeat · The Next Web