
150,000 API Keys Leaked. Anyone Surprised?

Tags: security, api-keys, moltbook

Moltbook leaked 150,000 API keys this week. OpenAI keys, Anthropic keys, Google keys—the full buffet. Some users are reporting thousands of dollars in unauthorized usage before they noticed.

I wrote about OpenClaw’s security problems last month. This is the same pattern: AI tooling companies treating credential management as an afterthought.

The technical failure is straightforward. Moltbook was storing API keys in a way that made them accessible through their interface. When that interface had a vulnerability, the keys walked out the door. Basic security hygiene would have prevented this—encryption at rest, proper access controls, not storing credentials in the same database as user data.
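One piece of that hygiene can be sketched concretely. For a tool that only needs to *recognize* a key (not replay it to a provider), the standard pattern is to store a salted one-way fingerprint instead of the key itself, so a database leak exposes nothing usable. This is a minimal illustrative sketch, not Moltbook's actual code; all names here (`SERVER_PEPPER`, `fingerprint`, `verify`) are hypothetical:

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret, held outside the user database
# (e.g. in a KMS or environment), so a DB dump alone is useless.
SERVER_PEPPER = secrets.token_bytes(32)

def fingerprint(api_key: str) -> str:
    """Derive a one-way, keyed fingerprint of an API key for storage/lookup."""
    return hmac.new(SERVER_PEPPER, api_key.encode(), hashlib.sha256).hexdigest()

# The "database" holds only fingerprints, never raw keys.
stored = {fingerprint("sk-example-123"): {"user": "alice"}}

def verify(presented_key: str) -> bool:
    """Check a presented key against stored fingerprints without storing it."""
    return fingerprint(presented_key) in stored

print(verify("sk-example-123"))  # True
print(verify("sk-wrong"))        # False
```

A tool like Moltbook, which has to forward users' keys to OpenAI or Anthropic, can't rely on hashing alone; the equivalent minimum there is encryption at rest with the encryption key held outside the same database, which is exactly the separation the breach suggests was missing.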

But the real lesson is about trust assumptions in the AI ecosystem. Users handed over their API keys because the tool was convenient. The implicit trade was “I’ll give you access to my AI credits in exchange for your features.” That trade only works if the vendor is competent at security.

Most aren’t. They’re AI enthusiasts who built a cool wrapper and scaled faster than their security practices. The Moltbook founder is probably a talented developer. Security engineering is a different discipline.

Before connecting any tool to your AI API keys, ask: “What happens if this company gets breached?” If the answer is “my keys are exposed,” maybe don’t.

Rotate your keys now if you’ve used Moltbook. Then think harder about which tools actually need direct API access versus which could work with more limited permissions.
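On the client side, one low-effort way to limit blast radius is to never let a tool persist your key at all: read it from the environment at call time and hand it over per request. A minimal sketch, with a hypothetical helper name:

```python
import os

def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment at call time.

    Nothing is written to disk or to a third-party database; rotating
    the key is just exporting a new value in your shell.
    """
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set in this environment")
    return key
```

This doesn't replace rotation after a breach, but it means the next convenient wrapper you try holds your credentials only for the lifetime of a process, not forever.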