Your board approved the AI budget. Your team is excited. The temptation is to launch five initiatives at once. Three months later, two are abandoned, one is over budget, and your team is burned out on AI. You have seen this playbook fail at other companies. You need a sequenced plan where each phase generates the knowledge the next phase requires.
This lesson gives you a four-phase roadmap template. By the end, you will have your own adoption timeline with specific initiatives slotted into the right phase -- based on the opportunities you scored in Lesson 1 and the ROI you calculated in Lesson 3.
Your deliverable: a completed Phased Adoption Roadmap mapping your specific AI opportunities to four phases, with timelines, success metrics, and go/no-go criteria for advancing to the next phase.
Phase 1: Quick Wins. Goal: individual productivity gains. Prove that AI works in your context, with your data.
| Element | Details |
|---------|---------|
| What you do | Give every knowledge worker access to ChatGPT or Claude ($20/person/month). Run a 90-minute training session. Assign each person one task to try with AI this week. Share results at end of week 2. |
| Which opportunities | Anything that scored 12+ on your Opportunity Scorecard AND landed on "Prompt" in your Build/Buy/Prompt matrix. These are high-readiness, low-cost experiments. |
| Cost | $20/person/month + a few hours of training time |
| Target outcome | Each person saves 2-5 hours/week on drafts, research, and admin |
| Success metric | 80% of participants report measurable time savings in week 2 survey |
| Timeline | 2 weeks |
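As a sanity check on the Phase 1 economics, here is a rough payback sketch. The $20/person/month tool cost comes from the table; the hourly rate and the hours-saved figure are illustrative assumptions you should replace with your own numbers.

```python
# Rough Phase 1 payback check.
TOOL_COST_PER_PERSON = 20.0    # $/person/month, from the Phase 1 table
HOURLY_RATE = 50.0             # ASSUMPTION: fully loaded cost of one hour
HOURS_SAVED_PER_WEEK = 2.0     # low end of the table's 2-5 hrs/week target

# Value of recovered time per person per month (using ~4 weeks/month).
monthly_value = HOURS_SAVED_PER_WEEK * 4 * HOURLY_RATE
payback_ratio = monthly_value / TOOL_COST_PER_PERSON

print(f"Value of time saved: ${monthly_value:.0f}/person/month")
print(f"Return per subscription dollar: {payback_ratio:.0f}x")
```

Even at the low end of the target range, the subscription cost is a rounding error next to the recovered time, which is why Phase 1 can focus on trust-building rather than ROI.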
Why this matters: Phase 1 is not about ROI. It is about building familiarity and trust. A team that has used AI for two weeks and seen real results will be enthusiastic about Phase 2. A team that gets handed an enterprise platform on day one will resist it.
Phase 2: Team Workflows. Goal: move from individual experiments to standardized team practices.
| Element | Details |
|---------|---------|
| What you do | Build a shared prompt library from Phase 1 winners. Define which tasks get AI-assisted and which stay manual (write this down). Establish a quality review process. Assign an AI Champion to collect feedback and maintain momentum. |
| Which opportunities | Opportunities scoring 8-11 on the Scorecard that need team coordination, plus "Buy" recommendations where a commercial tool exists for your use case. |
| Cost | Tool costs from Phase 1 + 5-10 hrs/week of AI Champion time + potential tool subscriptions ($50-500/month) |
| Target outcome | Measurable output increase: more content, faster response times, fewer hours on repetitive work |
| Success metric | Team output volume increases 30%+ compared to pre-AI baseline. Track before starting. |
| Timeline | Months 1-3 |
The AI Champion role: One person (not necessarily technical) who collects feedback, improves prompts, keeps the momentum going when novelty wears off (it will, around week three), and reports results to leadership with real numbers. This is your most important early role.
Phase 3: Automation. Goal: connect AI to your systems so work happens without manual copy-pasting.
| Element | Details |
|---------|---------|
| What you do | Take the highest-volume validated tasks from Phase 2 and connect them to existing tools using Zapier, Make, or custom API integrations. Set up monitoring for error rates, time savings, and cost. |
| Which opportunities | Phase 2 tasks where quality is consistently good enough to reduce manual review, plus any "Build" recommendations from Lesson 2 that have clear ROI from Lesson 3. |
| Cost | $200-2,000/month for automation tools and APIs + 10-20 hours setup per workflow + integration developer (contract or internal) |
| Target outcome | Specific workflows fully automated, freeing team for higher-value work |
| Success metric | Time tracking shows 50%+ reduction in manual hours for automated workflows |
| Timeline | Months 3-6 |
The judgment call: Not everything from Phase 2 should be automated. Only automate tasks where quality is consistently reliable, or where review can be batched (check 50 outputs once a day instead of one at a time).
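The judgment call above can be sketched as a simple triage rule. The 15% threshold echoes the error-rate no-go signal used elsewhere in this lesson; the 5% threshold and the function itself are illustrative assumptions, not fixed numbers from the roadmap.

```python
def automation_tier(error_rate: float, review_batchable: bool) -> str:
    """Hypothetical triage rule for Phase 3 candidates.

    error_rate: fraction of sampled outputs needing correction (0.0-1.0).
    review_batchable: True if outputs can be checked in daily batches.
    The 5% threshold is an assumption; 15% mirrors the lesson's
    error-rate no-go signal.
    """
    if error_rate <= 0.05:
        return "automate"                       # quality consistently reliable
    if error_rate <= 0.15 and review_batchable:
        return "automate with batched review"   # e.g. check 50 outputs daily
    return "keep manual"                        # fix quality before automating

print(automation_tier(0.02, False))   # automate
print(automation_tier(0.10, True))    # automate with batched review
print(automation_tier(0.20, True))    # keep manual
```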
Phase 4: Strategic AI. Goal: AI becomes a core part of your product, service, or competitive positioning.
| Element | Details |
|---------|---------|
| What you do | Build AI features into customer-facing products. Develop proprietary AI workflows that depend on your unique data or domain expertise. Hire or contract specialized talent for fine-tuning, RAG systems, or custom model development. |
| Which opportunities | Customer-facing features, proprietary data advantages, workflows no existing product handles. These should have strong ROI projections from Lesson 3 and clear risk mitigation from Lesson 5. |
| Cost | $10,000-100,000+ depending on scope |
| Target outcome | New revenue streams or fundamentally new capabilities that were not possible before AI |
| Success metric | Revenue attribution or feature usage metrics |
| Timeline | Month 6 onward |
Reality check: Most businesses will be well-served by Phases 1-3. Phase 4 is for companies where AI is a strategic differentiator, not just a tool. If you are a 20-person services company, Phase 3 might be your ceiling -- and that is a great outcome worth significant ROI.
Teams that jump straight to Phase 3 or 4 without running Phase 1 first routinely waste months of work and tens of thousands of dollars. Here is why:
The phases exist because each one generates the knowledge the next one needs. Skipping is not saving time. It is creating expensive rework.
Before advancing to the next phase, check these gates:
| Transition | Go Criteria | No-Go Signal |
|------------|-------------|--------------|
| Phase 1 to Phase 2 | 80%+ of team reports time savings; at least 3 validated use cases identified | Team is not using tools after week 2; fewer than 2 tasks show clear value |
| Phase 2 to Phase 3 | 30%+ output increase measured; prompt library has 10+ validated templates; AI Champion is actively maintaining | Output has not measurably changed; quality issues persist; no clear high-volume candidates for automation |
| Phase 3 to Phase 4 | At least one workflow fully automated with monitored quality; ROI matches or exceeds Lesson 3 projections | Automations require constant manual intervention; error rates above 15%; ROI below projections |
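One way to make these gates concrete is to encode them as checks over the metrics you are already tracking. This is a minimal sketch: the thresholds come from the table, but the dictionary key names are invented and should match however you record your metrics.

```python
# Go criteria from the transition table, expressed as checks over a
# metrics dict. Key names are illustrative.
GATES = {
    "1->2": lambda m: (m["pct_reporting_savings"] >= 0.80
                       and m["validated_use_cases"] >= 3),
    "2->3": lambda m: (m["output_increase"] >= 0.30
                       and m["prompt_templates"] >= 10),
    "3->4": lambda m: (m["workflows_automated"] >= 1
                       and m["error_rate"] <= 0.15
                       and m["roi_vs_projection"] >= 1.0),
}

metrics = {"pct_reporting_savings": 0.85, "validated_use_cases": 4}
print("advance to Phase 2" if GATES["1->2"](metrics) else "stay in Phase 1")
```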
If you hit a no-go signal, stay in the current phase and fix the issue. Moving forward on a shaky foundation guarantees expensive failure.
Map your AI opportunities from Lessons 1-3 onto this roadmap template:
| Phase | Timeline | Opportunity (from Lesson 1) | Path (from Lesson 2) | Projected Monthly ROI (from Lesson 3) | Success Metric | Owner |
|-------|----------|-----------------------------|----------------------|---------------------------------------|----------------|-------|
| 1: Quick Wins | Weeks 1-2 | | Prompt | | | |
| 1: Quick Wins | Weeks 1-2 | | Prompt | | | |
| 2: Team Workflows | Months 1-3 | | Prompt/Buy | | | |
| 2: Team Workflows | Months 1-3 | | Prompt/Buy | | | |
| 3: Automation | Months 3-6 | | Buy/Build | | | |
| 4: Strategic | Month 6+ | | Build | | | |
Your output should have: At least 2 opportunities in Phase 1, specific success metrics for each phase, and a named owner for each initiative. If you cannot fill in the Phase 3-4 rows yet, that is fine -- leave them as "To be determined after Phase 2 data."
The sorting rule: Opportunities with payback periods under 1 month go in Phase 1. Payback periods of 1-3 months go in Phase 2. Anything requiring integration goes in Phase 3 at the earliest. Anything requiring custom development goes in Phase 4.
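The sorting rule can be written as a small function. The inputs map to your Lesson 2 path and Lesson 3 payback period; the rule does not say where payback periods over 3 months go, so the fallback here is an assumption.

```python
def assign_phase(payback_months: float,
                 needs_integration: bool,
                 needs_custom_build: bool) -> int:
    """Apply the sorting rule: structural requirements first,
    then payback period."""
    if needs_custom_build:
        return 4                 # custom development -> Phase 4
    if needs_integration:
        return 3                 # integration -> Phase 3 at the earliest
    if payback_months < 1:
        return 1                 # payback under 1 month -> Phase 1
    if payback_months <= 3:
        return 2                 # payback of 1-3 months -> Phase 2
    # ASSUMPTION: the rule leaves payback > 3 months unspecified;
    # deferring such items to Phase 3 is this sketch's choice.
    return 3

print(assign_phase(0.5, False, False))   # 1
print(assign_phase(2.0, False, False))   # 2
print(assign_phase(0.5, True, False))    # 3
print(assign_phase(2.0, False, True))    # 4
```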
You have a sequenced plan. But before you start executing, you need to understand what can go wrong -- and have guardrails in place before the first AI output reaches a customer. Lesson 5 gives you a risk register template covering hallucinations, data privacy, bias, and compliance. You will score each risk for your specific deployment plan and build mitigation strategies.