Frameworks

Two open-source frameworks for AI adoption — one for strategy, one for execution. Together they cover the full path from "should we use AI?" to "it's running in production."


How they fit together

PAST answers what and why — strategic clarity about what you're trying to achieve. SHAPE answers how and when — systematic execution from assessment through evaluation. Use PAST to define the initiative, then SHAPE to implement it.

Strategy

The PAST Framework

From Random AI Experiments to Strategic Clarity

Most AI implementations fail before they start — not because of bad technology, but because of unclear strategy. PAST asks four questions that determine success or failure, and works at every level from organizational strategy down to individual prompt engineering.

  • P — Purpose: What specific outcome are you trying to achieve? Prevents vague goals and technology-first thinking.
  • A — Audience: Who does AI serve? Prevents designing for buyers instead of users.
  • S — Scope: What are the realistic boundaries? Prevents scope creep and trying to solve everything at once.
  • T — Tone: How should AI align with your culture and voice? Prevents generic AI slop and adoption failure.

Quick Start: The Purpose Clarity Exercise

"Success means [specific measurable outcome] which will [business impact] by [timeline] as measured by [metric]."

If any part is blank, you're not ready to evaluate tools. Return to purpose definition.
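The readiness check above can be sketched in code. This is an illustrative aid, not part of the framework itself; the class and field names are my own.

```python
from dataclasses import dataclass, fields

@dataclass
class PurposeStatement:
    """Fields mirror the fill-in-the-blank template above."""
    outcome: str   # specific measurable outcome
    impact: str    # business impact
    timeline: str  # e.g. a quarter or a date
    metric: str    # how success will be measured

    def is_ready(self) -> bool:
        # "If any part is blank, you're not ready to evaluate tools."
        return all(getattr(self, f.name).strip() for f in fields(self))

draft = PurposeStatement(
    outcome="cut ticket response time by 30%",
    impact="free two support staff for escalations",
    timeline="",  # still undefined
    metric="median first-response time",
)
print(draft.is_ready())  # False: return to purpose definition
```

Filling every field before tool evaluation is the whole exercise; the code just makes the "any part blank" rule mechanical.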

Works at Every Level

  • Organizational Strategy — Enterprise AI implementation and governance
  • Team Workflows — Department-specific process optimization
  • Individual Productivity — Personal workflow enhancement
  • Prompt Engineering — Creating effective AI interactions

Execution

The SHAPE Methodology

From Pilots That Stall to Systematic Implementation

Most AI pilots succeed. Most scaled implementations fail. 95% of AI pilots never achieve enterprise-wide deployment, and organizations lose an average of $1.9 million per failed AI initiative. The problem is execution methodology, not technology.

  • S — Situation: Assess current state honestly before changing anything. Prevents bad assumptions and underestimated complexity.
  • H — Hypothesis: Define measurable success criteria upfront. Prevents unmeasurable goals and drifting pilots.
  • A — Action: Execute systematic pilots with clear decision frameworks. Prevents analysis paralysis and poor tool selection.
  • P — Process: Scale what works through systematic phases. Prevents scaling disasters and complexity creep.
  • E — Evaluation: Measure continuously and iterate based on evidence. Prevents stagnation and sunk cost bias.
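Because SHAPE phases are sequential, each phase gates the next. A minimal sketch of that ordering (the gate wordings are paraphrased from the phase descriptions, not canonical):

```python
# SHAPE as an ordered pipeline: each phase must complete before the next starts.
SHAPE_PHASES = [
    ("Situation",  "current state assessed honestly"),
    ("Hypothesis", "success criteria measurable and agreed upfront"),
    ("Action",     "pilot run with a clear decision framework"),
    ("Process",    "scaling phased on what demonstrably works"),
    ("Evaluation", "metrics reviewed continuously, iterated on evidence"),
]

def next_phase(completed: set) -> str:
    """Return the first SHAPE phase not yet completed, or '' when all are done."""
    for name, _gate in SHAPE_PHASES:
        if name not in completed:
            return name
    return ""

print(next_phase({"Situation", "Hypothesis"}))  # Action
```

The point of the ordering: you cannot meaningfully scale (Process) a pilot whose success criteria (Hypothesis) were never defined.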

Key Decision Framework: Takers vs. Shapers vs. Makers

Approach | Success Rate | Time to Value | When to Use
Takers   | 67%          | 4–8 weeks     | Off-the-shelf solutions (default choice)
Shapers  | 45%          | 8–16 weeks    | Customized vendor solutions
Makers   | 33%          | 16+ weeks     | Custom-built for competitive differentiation

Default to Takers unless you have compelling, documented reasons for alternatives. Simple tools that work reliably outperform complex customizations requiring constant maintenance.
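The default rule can be expressed as a small decision helper. A sketch only: the function and parameter names are illustrative, and the figures simply restate the table above.

```python
# Restates the table: success rate and time-to-value (weeks) per approach.
PROFILES = {
    "Takers":  {"success_rate": 0.67, "weeks": (4, 8)},
    "Shapers": {"success_rate": 0.45, "weeks": (8, 16)},
    "Makers":  {"success_rate": 0.33, "weeks": (16, None)},
}

def choose_approach(documented_reasons: list, needs_differentiation: bool = False) -> str:
    """Default to Takers unless there are compelling, documented reasons otherwise."""
    if not documented_reasons:
        return "Takers"   # off-the-shelf, the default choice
    if needs_differentiation:
        return "Makers"   # custom build, justified only by competitive differentiation
    return "Shapers"      # customized vendor solution

print(choose_approach([]))  # Takers
```

Note the asymmetry the rule encodes: moving off the default requires written justification, while staying on it requires none.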

Both frameworks are open-source under CC BY-SA 4.0. Use them, adapt them, teach with them — just give attribution and share alike.