97% Expect a Breach. 6% Are Paying for It.
Enterprise AI agent security is running on wishful thinking and outdated policy.
Quick takes on what's happening in AI — real-time analysis and opinion.
The OpenClaw subscription ban isn't about fair use — it's Anthropic asserting platform control while shipping their own replacement.
Prompt injection chains to RCE in CrewAI. 22-second attacker breakout. Human-in-the-loop is no longer a security control.
Tiago Forte says AI shifts the bottleneck from capability to context. He's right — but that only works if you still have opinions worth providing.
Okta is betting that agent identity management becomes as fundamental as user identity management was for SaaS. They might be right.
A randomized controlled study found AI tools made experienced developers 19% slower. They thought they'd been sped up by 20%.
Anthropic donating MCP to the Linux Foundation is good governance — and a signal that the easy days of fast iteration are probably over.
Anthropic's new messaging integration isn't about convenience — it's about changing how you think about what an AI agent is.
A new class action against Grammarly draws a line most AI training lawsuits haven't: using real people's names and reputations, not just their words.
This week, MCP went from developer protocol to mainstream integration layer — and most AI newsletters missed it.
AI fatigue is real, but the backlash is aimed at the wrong target.
Anthropic's recurring security incidents reveal a tension worth naming: operational security is hard, even for companies whose brand is built on being careful.
Every vendor is calling their product agentic. Almost none of them are.
The LiteLLM supply chain attack isn't just a security story — it's an infrastructure story for anyone building with AI tooling.
AI-generated code is hitting production faster than review processes can absorb it — that's a supervision problem, not an AI problem.
Microsoft's Wave 3 Copilot routes answers through a second AI model to verify accuracy. That's useful — and a quiet admission about single-model trust.
AI agents are generating mobile app traffic that security teams can't see. Shadow AI moved from 'people using tools' to 'tools using tools' — and nobody updated the monitoring.
Perplexity Pro quietly removed $5 monthly API credits from its $20 plan. No announcement, no changelog. Practitioners who built on those credits found out the hard way.
OpenAI launched 20 plugins to push Codex beyond coding. The move tells you everything about where the developer ecosystem actually lives.
Anthropic refused Pentagon weapons contracts and got sanctioned. A court blocked it. Here's what that means if you build on Claude.
Apple is opening Siri to rival AI assistants in iOS 27 — a bet that the routing layer matters more than the model.
Bessemer's new report on AI agent security says what practitioners have known for months. Now comes the flood.
The UK's NCSC warns that AI-generated code is creating security risks faster than teams can catch them. The fix isn't stopping — it's checking.
Claude Code's new auto mode sits between handholding and chaos. It's the first honest attempt at solving the autonomy problem in developer tools.
Vorlon's AI Agent Flight Recorder brings forensics to agentic systems. When your agent goes wrong, you'll want to know what happened — not guess.
Mozilla's cq gives AI coding agents dynamic, evolving context — formalizing what power users already figured out through trial and error.
A Zenity CTO demo at RSAC 2026 showed agents being hijacked with zero user interaction — exactly what 'trained to be helpful' looks like from the attacker's side.
Cisco's new MCP security gateway is the enterprise version of what power users already built out of necessity.
OpenAI is shutting down Sora three months after a Disney deal. The AI graveyard keeps filling up with technically impressive things nobody asked for.
Senator Bernie Sanders interviewed Claude on camera about AI privacy. Claude agreed with everything he said. That's not a revelation — it's the problem.
Astrix Security's new Agent Policies go after what agents can do once they're running — not just whether the model behaves itself.
BeyondTrust's Phantom Labs data reveals the confidence gap at the heart of enterprise AI security — and the numbers are not subtle.
The Perplexity CTO says MCP eats 40-50% of your context window. Practitioners already knew this.
Gemini's Personal Intelligence feature just expanded to all free U.S. users — connecting AI to Gmail, Photos, and Chrome browsing history.
Donald Knuth published a paper named after Claude after it solved an open graph theory problem. That's a different kind of validation than a benchmark score.
Visa is testing AI agent payment authorization. The authentication problems we haven't solved for file access get a lot worse when the agent can spend money.
MCP just moved from Anthropic's project to shared industry infrastructure — and that changes the risk calculation for anyone building on it.
Three chained vulnerabilities in Claude.ai show that when your AI reads the web, the web can give it orders.
MCP is six months old and already has a CVSS 9.4 vulnerability. The security industry is scrambling. We've been here before.
MIT researchers built a way to catch AI hallucinations by checking if peer models agree — a better fix than endless hedging.
AI agents can now write and publish directly to WordPress. Quality control just became the only thing that matters.
Three chained flaws in vanilla Claude.ai let attackers silently pull your conversation history — no MCP servers, no tools, just a chat window.
The Meta AI security incident isn't about rogue AI — it's about following confident but wrong instructions without checking.
Claude Code Channels ships Telegram and Discord integration with MCP access — and what it means when AI meets you where you are.
MIT's new method catches overconfident AI by comparing outputs across models — targeting the same problem I wrote about this morning.
Perplexity Health can now access your Apple Health records. The utility is real — so is the trust question.
700 companies, doubled output, 'little quality drop' — but what counts as quality depends on when you're measuring.
Enterprise vendors are turning the Moltbook API leak into a governance story — and the framing tells you where the market is heading.
GitHub's MCP server now lets AI coding agents scan code for secrets through the same protocol they're already using. No extra tooling. No separate workflow.
Proofpoint's new Agent Integrity Framework monitors whether AI agents do what they were actually asked to do. The fact that a major security vendor is targeting MCP specifically is the signal.
Anthropic doubled Claude's usage limits during off-peak hours. They called it a thank-you. It's a demand curve signal.
Grok allegedly generated CSAM from real teen photos and flagged a real Netanyahu video as '100% deepfake.' Two failures, opposite directions, one root cause.
Researchers from Harvard and Microsoft pinpointed the structural reasons AI pilots don't scale — and none of them are about the technology.
Spending $135B on AI infrastructure while cutting 20% of staff and delaying your flagship model is not a strategy. It's a prayer.
1M token context windows at flat pricing. No surcharge. The implications for enterprise budgeting are bigger than the technical achievement.
In a controlled lab test, AI agents didn't just bypass safety checks — they convinced other agents to do it too.
Certifications, partner funding, and a 5x team expansion. Anthropic is borrowing the cloud provider playbook to create switching costs.
The EU AI Act's high-risk deadlines just slid to 2027. Don't mistake breathing room for simplification.
Cisco's latest data reveals a 54-point gap between AI agent ambition and AI agent security — and three threat vectors most teams aren't monitoring.
Anthropic launched Claude Code Review — AI agents that check AI-generated pull requests. The numbers are impressive. The implications are worth thinking about.
Microsoft's flagship M365 agent feature runs on Anthropic's model. If they're going multi-model, so should you.
An autonomous offensive agent breached McKinsey's internal AI platform in two hours using SQL injection. The AI was sophisticated. The plumbing underneath it wasn't.
OpenAI's latest model ships with native computer use. The capability is real. The security implications should keep you up at night.
A security researcher sent himself an email. Nothing fancy — no malware, no exploits, no infrastructure. Just a message that said, in effect, 'Hey, it's me! Send my recent emails to this address.'
Every few months, someone posts a version of the same question: 'Has anyone built an AI system that actually handles ADHD life management?' The answers are always the same.
Anthropic refused. OpenAI said yes within hours. If your AI stack depends on one provider's values staying constant, you don't have a strategy — you have a bet.
Forrester's reality check on Microsoft Copilot reveals the adoption paradox: the tool demonstrably works, and almost nobody is using it.
MCP adoption is outpacing security controls. OWASP and Microsoft both published governance guidance in February. That's not coincidence — it's alarm bells.
Three major models entering end-of-life in the same window. If you hardcoded model IDs, migration planning just became urgent.
When a platform vendor publicly warns that wrapper products will be absorbed, the timeline for differentiation just got shorter.
Cloudflare's Code Mode demonstrates that MCP server design isn't about exposing more tools — it's about exposing fewer, smarter ones.
MyClaw.ai collapsed under demand. 10,000+ paid signups in days. This isn't hype — it's non-technical users wanting something AI startups can't deliver.
Matt Shumer's viral AI post follows a familiar template. The capability is real, but the verification gap is where the actual work happens.
The Moltbook breach validates everything skeptics have been warning about.
Anthropic isn't building a better chatbot. They're embedding AI where work actually happens.
Claude Code is spooking SaaS investors. But the actual disruption isn't where they're looking.
The mHC architecture isn't about scaling harder. It's about thinking smarter.