Your Code Review Process Isn't Built for This Volume
Georgia Tech researchers have documented a surge in CVEs tied to AI-generated code. That finding lands at a moment when Claude Code accounts for 8% of worldwide GitHub commits and the vibe coding market is projected at $8.5 billion. Adoption is moving fast, and the security research is starting to show what that pace looks like downstream.
The problem isn’t that AI writes insecure code by default — it’s that the output volume has outrun the oversight model.
When a coding agent produces in ten minutes what used to take a day, your existing code review process — designed around human output rates — is suddenly a bottleneck. Developers are shipping agent-generated code faster than it’s being meaningfully reviewed, and that’s where vulnerabilities slip through. Not because the AI is uniquely bad at security, but because the quality gate wasn’t built for this throughput.
The NCSC had already flagged the risks of vibe coding without guardrails; the Georgia Tech data is the first empirical confirmation.
The practical question isn’t whether to use AI coding tools — most of us already do. It’s whether your review workflow has actually adapted to the new reality. What does your process look like when half your codebase came from an agent? If the answer is “roughly the same as before,” that’s the gap worth closing.
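One low-tech way to start closing that gap is to route agent-authored or oversized changes into a stricter review lane instead of the default one. The sketch below is a minimal illustration, not a prescription: it assumes agent-authored commits carry a `Co-Authored-By` trailer naming the tool (Claude Code adds one by default; the other tool names and the 400-line threshold are placeholder assumptions you would tune for your team).

```python
import re

# Assumed trailer convention: agent tools sign commits as a co-author.
# The tool list here is illustrative, not exhaustive.
AI_TRAILER = re.compile(
    r"^Co-Authored-By:.*(claude|copilot|cursor)",
    re.IGNORECASE | re.MULTILINE,
)

def needs_security_review(commit_message: str, diff_lines: int,
                          threshold: int = 400) -> bool:
    """Flag a commit for the stricter review lane when it is
    agent-authored or too large for a meaningful human pass.

    threshold is a placeholder; calibrate it against what your
    reviewers can actually read with attention.
    """
    agent_authored = bool(AI_TRAILER.search(commit_message))
    return agent_authored or diff_lines > threshold
```

Wired into CI, a check like this does not make review better by itself, but it makes the throughput problem visible: every change that outruns human-scale review gets labeled instead of silently merged with the rest.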
Speed is the feature — unchecked speed is the risk.