
Everyone's Sharing 'Something Big Is Happening.' Here's What They Leave Out.

ai-adoption · verification · critical-thinking

Matt Shumer’s viral post about AI this week follows a familiar template: COVID comparison, personal testimony from an insider, escalating urgency, and a call to action that amounts to “pay $20/month and start using it.”

He’s not wrong about the pace of AI. For me, the shift happened back in November with Claude Opus 4.5. I use these tools all day, every day—not as demos, but as core infrastructure for my consulting work, my newsletter, my projects. I’m probably closer to his level of daily usage than most people reading his post.

But here’s what posts like this always leave out.

The Verification Gap

Shumer says he describes what he wants, walks away for four hours, and comes back to finished work. I don’t doubt that’s his experience with certain kinds of coding tasks. What he doesn’t mention is the verification layer—the part where someone checks whether the AI’s confident output is actually correct. For code, you have compilers and tests. For everything else? You need a human who knows what “correct” looks like.
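To make the "compilers and tests" point concrete, here's a toy sketch (the `median` function and its bug are invented for illustration, not taken from any real AI output): code that reads correctly at a glance, where a two-case check is the verification layer that exposes the flaw.

```python
def median(values):
    """Plausible-looking helper: correct for odd-length input only.

    For even-length lists it should average the two middle elements,
    but this version just picks the upper-middle one.
    """
    return sorted(values)[len(values) // 2]

# The verification layer: spot-checks against known-correct answers.
checks = {
    "odd length": median([1, 3, 5]) == 3,
    "even length": median([1, 2, 3, 4]) == 2.5,  # actually returns 3
}
failures = [name for name, ok in checks.items() if not ok]
```

Without the second check, this function ships. That's the gap in miniature: the output isn't garbage, it's almost right.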

And that’s the thing about AI right now: it doesn’t fail by producing garbage. It fails by producing something that looks exactly right. Plausible product features that don’t exist. Research summaries that mix real facts with confident fabrication. Professional-sounding analysis built on misread data. The output is polished enough that skipping verification feels reasonable—and that’s when it gets expensive.

This is what the “Something Big Is Happening” posts consistently miss. The capability is real, but the judgment isn’t there yet—and the gap between “impressive demo” and “reliable daily tool” is where the actual work happens.

Beyond Trust or Distrust

Most people land in one of two camps: trust blindly (dangerous) or distrust completely (wasteful). Both miss the same thing—you can actually have a conversation with these tools about where they’ll fail you. Ask them what they’re bad at. Write down what goes wrong. Turn those into constraints the tool follows next time. Over time, that constraint document becomes more valuable than your instructions—because it shapes what kind of collaborator the AI actually is.
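The constraint-document loop described above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the filename, wording, and helper names are all hypothetical, and the actual "send to the AI" step is left out.

```python
from pathlib import Path

# Hypothetical running log of observed failure modes.
CONSTRAINTS_FILE = Path("ai-constraints.md")

def log_failure(description: str) -> None:
    """When the tool gets something wrong, record it as a standing rule."""
    with CONSTRAINTS_FILE.open("a") as f:
        f.write(f"- {description}\n")

def build_prompt(task: str) -> str:
    """Prepend every accumulated constraint to each new request."""
    constraints = ""
    if CONSTRAINTS_FILE.exists():
        constraints = CONSTRAINTS_FILE.read_text()
    return (
        "Follow these standing constraints, learned from past failures:\n"
        f"{constraints}\n---\n{task}"
    )

log_failure("Never cite a product feature without a source URL.")
prompt = build_prompt("Summarize our Q3 churn analysis.")
```

The design point is that the constraints travel with every request automatically, so yesterday's verified failure becomes tomorrow's guardrail without relying on memory.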

The advice in Shumer’s post—“spend one hour a day experimenting”—isn’t bad. It’s just incomplete in a way that matters. An hour of uncritical AI usage might make you faster at producing things that look right. An hour of learning where AI fails and how to catch it makes you genuinely more capable.

Three Things Missing from Every Hype Post

1. The verification habit is the actual skill. Using AI is easy. Knowing when to distrust its output is what separates useful adoption from expensive mistakes. I keep a running document of every way my AI tools have failed me, and the tools read and follow those rules at the start of every session. That document is worth more than any prompt template.

2. “Integration over capability” still holds. The latest model being impressive doesn’t matter if you haven’t figured out where it fits into how you actually work. A thoughtfully integrated system using last month’s model beats a shiny new one you prompt once and forget.

3. The people who’ll struggle aren’t the ones who “refuse to engage.” They’re the ones who engage uncritically—who trust AI output because it sounds authoritative, who skip verification because the result looks polished, who confuse speed with accuracy.

The Wrench Can Talk Now

The urgency is real. But urgency without discernment is just faster mistakes.

The wrench can talk now. Most people are still just swinging it.