
How to Audit Your AI Usage Patterns


After a few weeks with an AI coding assistant, you develop habits. Some are good — you learn which prompts get better results, which tasks to delegate, when to step in. Others are invisible. You repeat the same instructions every session, reach for tools that waste context, and let the AI make decisions you should be making yourself.

The problem is you can’t see any of this while you’re in the flow. You’d need someone watching over your shoulder across dozens of sessions, spotting the patterns you can’t.

Claude Code has a built-in command that does exactly that. By the end of this guide, you’ll have a personalised report showing how you actually use AI — not how you think you use it — and specific changes to make your workflow better.

What you’ll need

  • Claude Code — installed and working. If you haven’t set it up yet, follow Anthropic’s installation guide.
  • At least 2 weeks of session history — the command analyses your past 30 days. More sessions mean better insights. If you’ve only run Claude Code a handful of times, give it another week before running this.

Run the command

Open Claude Code in any project directory and type:

/insights

That’s it. No flags, no configuration.

The command reads your session transcripts from the past 30 days — up to 50 sessions — and generates an interactive HTML report. It opens automatically in your browser when it’s done.

The analysis takes a minute or two. Longer if you’ve been busy. It’s processing every conversation you’ve had with Claude Code: the tools you called, the instructions you gave, the friction points, the wins.

Your report lands in ~/.claude/usage-data/report.html. You can reopen it anytime.
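If you close the tab, you can reopen the saved report from a terminal without rerunning the analysis. A minimal sketch, assuming a macOS or Linux desktop where `open` or `xdg-open` is available:

```shell
# Reopen the most recent insights report.
REPORT="$HOME/.claude/usage-data/report.html"
if [ -f "$REPORT" ]; then
  # macOS uses `open`; most Linux desktops use `xdg-open`.
  case "$(uname)" in
    Darwin) open "$REPORT" ;;
    *)      xdg-open "$REPORT" ;;
  esac
else
  echo "No report found at $REPORT - run /insights first."
fi
```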

What the report tells you

The report has four sections, each built from your actual session data. No generic advice — everything it surfaces comes from what you specifically did.

Project summary

Cards for each project you’ve worked on, showing session counts, tools used, and languages touched. This is the overview: where you’re spending your time and what kind of work you’re asking Claude to do.

If you see a project with 15 sessions that are all short, that’s a signal. Either the work is genuinely quick, or you’re context-switching too often and burning tokens on ramp-up.

What’s working

The report identifies your strongest patterns — workflows where you and the AI click. These might be:

  • A prompting style that consistently produces working code on the first pass
  • A habit of providing context upfront that reduces back-and-forth
  • Effective use of specific tools (Bash, Edit, Grep) for the right tasks

This section matters because it tells you what to keep doing. When something works, it’s worth understanding why it works so you can apply the same approach elsewhere.

Where things go wrong

This is the section worth reading twice. It categorises your friction points by frequency:

  • Ignored instructions — times Claude didn’t follow what you asked. Often a sign that instructions were ambiguous, not that the model failed.
  • Buggy output — code that needed immediate correction. Patterns here reveal which types of tasks need more upfront specification.
  • Wrong approach — times the AI went down a path you had to redirect. This usually means missing context about your project’s conventions.
  • Wasted work — effort that didn’t contribute to the outcome. Repeated searches, unnecessary file reads, circular conversations.

Each friction point includes real examples pulled from your sessions. You’ll recognise the moments.

What to change

This is where you stop reading and start copying. Based on your specific patterns, the report generates:

  • CLAUDE.md additions — copy-paste-ready lines to add to your project’s configuration. These encode the lessons from your friction points so you don’t repeat them.
  • Skill suggestions — repeated multi-step workflows that could be automated as custom slash commands.
  • Hook recommendations — lifecycle events where automated checks would catch problems earlier.
  • Tool usage tips — features you’re underusing or using inefficiently.
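As a concrete illustration, CLAUDE.md additions tend to look like short, checkable rules. The lines below are hypothetical examples, not output from an actual report:

```markdown
# CLAUDE.md

- Always run `npm test` before suggesting a commit.
- Use the existing `logger` module instead of `console.log`.
- Database migrations live in `db/migrations/`; never edit an applied migration.
```

Rules this specific are easy to verify in your next session; vague ones give the model nothing to act on.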

How to read the results

The temptation is to focus on the problems. Resist it for a moment.

Start with what’s working. Your effective patterns are probably invisible to you because they feel natural. Seeing them explicitly helps you understand your own workflow — and makes it easier to teach those patterns to others on your team.

Then look at the friction points. The report ranks them by frequency, which is the right priority order. A friction point that happens once is an anecdote. One that shows up across 8 sessions is a workflow defect.

For each recurring friction point, ask: is this a prompting problem, a context problem, or a tooling problem?

  • Prompting problems mean your instructions are ambiguous or incomplete. The fix is usually a CLAUDE.md rule that makes the implicit explicit.
  • Context problems mean the AI doesn’t know something it needs to know. The fix is adding project context — architecture docs, coding conventions, key decisions — where Claude can find it.
  • Tooling problems mean you’re doing manually what could be automated. The fix is a skill, a hook, or an MCP server connection.
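For the tooling category, a hook is often the lightest fix. Here is a sketch of a project-level `.claude/settings.json` that runs a check after file edits. The event and matcher names follow Claude Code’s hooks schema at the time of writing, and `npm run lint` stands in for whatever automated check your project actually needs:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run lint"
          }
        ]
      }
    ]
  }
}
```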

Apply what you find

The report gives you suggestions. Here’s how to act on them without overcommitting.

Pick the top three friction points. Not all of them — three. The ones that show up most often or cost the most time when they happen.

For each one, make exactly one change:

  1. Add a CLAUDE.md rule if the fix is “Claude needs to know this about my project.” One line, specific, testable. “Always run tests before committing” is good. “Write better code” is useless.
  2. Create a custom command if you’re typing the same multi-step instruction repeatedly. Claude Code’s slash commands let you encode a workflow once and reuse it. The report often suggests these directly.
  3. Set up a hook if the fix is “this should be checked automatically.” Pre-commit hooks, post-edit checks, format validation — anything that catches the problem before you notice it.
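Custom commands are markdown files under `.claude/commands/`; the filename becomes the command name. A hypothetical example encoding a test-then-commit workflow (the name and steps are illustrative, not from a real report):

```markdown
<!-- .claude/commands/ship.md — invoked as /ship -->
Run the test suite. If it passes, review the staged diff for
debug statements and stray TODO comments, then write a
conventional commit message and commit.
```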

Then work normally for a week. Run /insights again. Compare. The friction points you addressed should drop in frequency. New ones may appear — that’s normal. Your workflow is a system, and improving one part often reveals the next bottleneck.

Limitations worth knowing

The command analyses your last 30 days, up to 50 sessions. If you’re a heavy user, the oldest sessions may not be included. Running it periodically — monthly is reasonable — gives you a rolling picture.

The command chunks and summarises large sessions (over 30,000 characters of transcript) before analysis. The insights are still useful, but some nuance from marathon sessions may be compressed.

The report is generated locally. Your session data doesn’t leave your machine for this analysis.

The bigger picture

Most people interact with AI tools the way they learned to in their first week. The prompting habits, the tool choices, the workflow patterns — they calcify fast. And because AI assistants don’t complain about inefficiency, nothing forces you to examine what you’re doing.

Running /insights breaks that loop. It’s a few minutes of your time, and the return is a clearer understanding of a tool you’re already using every day.

The difference between someone who uses AI effectively and someone who just uses it often is usually awareness of their own patterns. Now you have a way to see yours.