What Happens in an Apparatus 101 Workshop
A transparency piece: what the session looks like, what participants do, and what they leave with.

Apparatus 101 is a half-day hands-on session. Not a lecture, not a slide deck about AI's potential. Everyone writes prompts, gets output, evaluates it, and improves it - using their own actual work as the material.
This article describes what happens, in the order it happens. If you're evaluating whether this session is the right fit for your firm, this is the clearest way to answer that.
Before the session
The prep work that makes the session specific
Before the session, we do a 30-minute intake call with whoever is organizing it - usually a managing partner or operations lead. The purpose is to understand your firm's most common deliverable types: the memos, the research summaries, the client updates, the drafts that your team produces repeatedly.
Those deliverables become the material for the session. Every example, every practice prompt, every exercise is built around your actual work - not generic AI examples from outside your context. That's what makes the skill transfer instead of staying abstract.
The session
What happens in the room
Prompt structure - the framework everyone uses
The session opens with the five components of a well-formed prompt: role, context, task, format, constraints. Participants see how each component shapes the output, and what degrades when one is left out. This takes about 45 minutes and covers the framework before anyone writes anything.
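To make the framework concrete, here is a hypothetical prompt with all five components labeled. The scenario (a consulting interview summary) is an illustration, not one of the session's actual exercises, which are built from your firm's own deliverables:
Role: You are a senior analyst at a management consulting firm.
Context: The client is a mid-sized logistics company evaluating warehouse automation. The attached notes come from interviews with three of their operations managers.
Task: Draft a one-page summary of the operational pain points raised in the interviews.
Format: Three short sections with headers, each closing with one sentence on business impact.
Constraints: Use only the attached notes. If a point isn't supported by the notes, flag the gap instead of filling it.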
First prompts - using your deliverables
Each participant writes their first structured prompt for a deliverable they personally work on regularly. They run it, read the output, and then do something most people have never done: they diagnose what went wrong. Which component was missing? Was the context too thin? Was the task too vague? They fix the specific thing and run it again. This iteration pattern is what separates useful AI use from the vague-prompt loop.
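As a hypothetical illustration of that loop: a first attempt like "Summarize these interview notes" tends to come back generic. Checked against the framework, it is usually missing context and format; adding them ("for a partner reviewing the engagement, as three short sections, no more than a page") produces output noticeably closer to usable. The diagnosis step is what makes the second run better than the first.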
The hallucination segment
About an hour into the session, we address the concern that holds most cautious practitioners back: what if the output is wrong? We cover when hallucination happens, why it's predictable, and the two constraint instructions that catch most of it before it matters. Participants add those constraints to their prompts and see the difference. The concern becomes manageable rather than paralyzing. This topic is covered in more depth in a separate article for anyone who wants to read ahead.
Building the prompt library
The last hour of the session is devoted to the prompt library. Participants save their best prompts from the session into a shared system - a Notion database, a shared folder, wherever your firm already stores things people need to find. We set up the structure, agree on the contribution norm (if you use a prompt three times and it works, it goes in), and seed it with the prompts from the session. The library exists before anyone goes home.
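For a sense of what a library entry might look like (the fields below are illustrative, not a prescribed schema), a minimal entry captures the prompt, the workflow it belongs to, and enough context for a colleague to reuse it:
Name: Client status update, first draft
Workflow: Weekly client updates
Prompt: The full prompt text, with placeholders for client name and reporting period
Contributed by: Whoever used it three times and found it worked
Notes: Works best when last week's update is pasted in as context
The structure matters less than the norm around it; an entry a colleague can find and adapt in under a minute is the bar.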
What participants leave with
The concrete outputs
Every participant leaves with 2-3 working prompts for their specific workflows, saved and accessible; a shared mental model for how a prompt should be structured; a working sense of where AI adds leverage in their role and where it doesn't; and the constraint habits that make client-facing AI use defensible.
The firm leaves with a seeded prompt library organized by workflow, a shared standard that makes prompts shareable across the team, and a contribution norm that keeps the library growing over time.
What the session doesn't produce: a slide deck about AI potential, a certificate of completion, or tool-specific knowledge that becomes outdated in six months.
Who it's for
The right fit
Apparatus 101 works best for professional services firms - consulting, law, accounting, research - with 5 to 50 people who are already using AI but aren't getting consistent, shareable results. The session assumes participants have access to at least one AI tool, but doesn't assume any particular level of prior usage.
It's a poor fit for firms where leadership hasn't actually decided to change how they work (the session requires people to write real prompts and evaluate real output; passive attendance doesn't produce results), and for organizations looking for enterprise software implementation rather than workflow training.
