The AI Team Training Framework I Use at Every Engagement
The biggest mistake companies make when upskilling their teams on AI: they start with tools instead of mental models.
I've sat through corporate AI training sessions that opened with "here's how to use ChatGPT" and spent three hours on prompt tricks. The teams left knowing a handful of techniques and nothing about when to use them, why they work, or how to evaluate if they're working.
That's not training. That's a demo.
The Three-Tier Model
I train teams across three tiers of AI fluency. The goal is to make sure each person is operating at the right tier for their role — not everyone needs to be at tier three.
Tier 1: AI Literacy Can describe what LLMs are, what they're good at, what they're not. Can use AI tools productively for personal work. Understands why hallucination happens and how to spot it. This is the baseline for everyone — engineering, product, leadership, operations.
Tier 2: AI Application Can evaluate whether AI is the right tool for a given problem. Can write effective prompts for complex tasks. Understands retrieval, context windows, and basic agent patterns. Can review AI-generated code or content critically. This is the target for engineers, PMs, and data teams.
Tier 3: AI Architecture Can design AI systems — choosing models, structuring RAG pipelines, scoping agents, building evaluation frameworks. Can reason about latency/cost/quality tradeoffs. Knows when to fine-tune vs. prompt. This is for AI leads and senior engineers on AI product teams.
Why Most AI Training Fails
Corporate AI training fails for three reasons:
It's tool-first, not problem-first. Teams learn to use a specific tool and then look for problems it can solve. The pattern should be reversed: start with the business problem, then evaluate whether AI is appropriate, then choose tooling.
It ignores failure modes. Training that doesn't cover hallucination, retrieval failures, prompt injection, and model drift is training that produces teams overconfident in AI outputs. Understanding failure modes is as important as understanding capabilities.
There's no follow-through. A one-day workshop produces an initial spike in AI adoption that decays within weeks without follow-up. Real capability building requires repeated application, feedback loops, and someone to ask questions to.
What I Actually Do
My engagements follow a consistent structure:
Week 1: Diagnosis. I interview 5-10 people across roles to understand current AI usage, pain points, and the specific workflows that might benefit most. This shapes everything.
Week 2-3: Foundation training. Tier 1 for everyone, Tier 2 for technical and product teams. Mix of workshop and structured exercises with real company data.
Week 4-6: Applied projects. Small teams (2-3 people) tackle a real internal problem using AI. I'm available for office hours but not in the room. This is where actual learning happens.
Week 8: Retrospective. What worked, what didn't, what to build on. Teams present their projects. This creates peer learning and surfacing of patterns.
Ongoing: Office hours. Monthly or bi-weekly sessions where teams can bring specific questions, review implementations, and calibrate their approach.
The outcome isn't a team that knows how to use AI tools. It's a team that can evaluate when AI is appropriate, build AI-powered workflows with appropriate skepticism, and iterate on them effectively.
That's a meaningfully different thing.