What Happens in an Apparatus 303 Engagement
Discovery, build, deploy. What the firm's involvement looks like at each stage and what you own at the end.

Most firms that reach out about custom AI development have the same set of questions underneath the specific ones. What will this actually cost? How long will it take? What does my team need to do? What do we have at the end? This article answers those questions directly by walking through what the engagement looks like from start to finish.
A 303 engagement has three distinct phases. Each has a clear purpose, a defined set of outputs, and a predictable level of involvement from the firm. None of them are black boxes.
Phase one: discovery
Discovery typically runs one to two weeks, depending on how many candidate workflows are on the table. The goal is specific: identify the one workflow worth building first, and produce a specification detailed enough to build from.
This is not a brief. It is a working session. We map the workflow together - tracing the actual steps, identifying the decision points, cataloging the inputs and outputs, and surfacing the edge cases that do not show up in the clean version of the process. The people who do the work are in the room, not just the people who manage the people who do the work.
By the end of discovery, you have a written specification that answers: what exactly are we building, what does good output look like, what are the edge cases we need to handle, and which workflow among the candidates has the clearest return on the build investment. That specification is yours whether we go forward with the build or not.
Firms that arrive with well-documented workflows - something the 202 work addresses - move through discovery faster. The mapping work has already been done. Discovery becomes validation and specification rather than starting from scratch. For more on what makes discovery go quickly, see what to have in place before you build.
Phase two: build
Build timelines vary with scope. A small-scope tool - one workflow, clear inputs and outputs, limited edge cases - typically takes four to six weeks. Mid-scope work with multiple workflows and data integrations is more often eight to twelve weeks. Larger systems with multi-agent components or custom deployment requirements take longer and are scoped specifically.
The firm is involved throughout. Not constantly - this is not a project that requires your team to show up for daily standups. But there are structured review points, typically every one to two weeks, where you see the tool working against real examples from your practice. Not constructed demos. Actual documents, actual matter types, actual output formats.
That distinction matters. A tool that works on clean constructed examples and a tool that works on the messy, inconsistent real material your team actually handles are two different things. The only way to find the difference before deployment is to test against real work during the build.
Edge cases found during the build cost a fraction of what they cost to fix after deployment. The structured review process exists to surface them while the cost of addressing them is still low - not after the tool is live and people have built their workflows around it.
Feedback from review sessions shapes the build directly. If the output format does not match what your team actually uses, we change it. If the logic handles a clause type differently than your methodology requires, we adjust it. The tool is built to match how the firm actually works, not to a generic specification.
Phase three: deployment and handoff
Deployment happens in your environment. Not ours. The tool is set up on infrastructure you control, and the deployment process is documented so your team understands how it works and how to maintain it.
The handoff includes four things. First: working software, deployed and tested in your environment. Second: documentation written for the people who will maintain it - specific enough to follow, not so high-level that it is useless. Third: a walkthrough session where we go through the system, explain the logic, and answer questions from whoever will own it on the firm's side. Fourth: a short post-deployment period where we are available for questions as your team begins using it in real work.
After that period, the tool is yours to operate. There is no ongoing dependency on us unless you want it. If you want continued support, a maintenance arrangement, or future development work, that conversation is available - but it is your choice, not a structural requirement of keeping the tool running.
What you leave with
A working tool
Deployed in your environment, tested against real work from your practice, handling the edge cases specific to your workflows.
The code
In a repository you control. You can modify it, extend it, hand it to another developer, or use it as a foundation for future builds. No vendor lock-in.
Documentation
Specific enough that a competent developer can maintain the tool without calling us. What it does, how it does it, where it can fail, and how to handle common issues.
Understanding
Your team knows what the tool is doing and why. The logic is not opaque. You can evaluate its output, identify when it is wrong, and make decisions about when to extend it.
The goal is a firm that is more capable at the end of the engagement than it was at the beginning - not a firm that has acquired a new dependency on an outside vendor.
Is 303 the right place to start?
Not always. Firms without documented workflows or connected data often find that the preparation work is what comes first. If your team is still doing the foundational AI work - building skills, connecting data sources, documenting processes - that work reduces the cost and improves the outcome of any custom development that follows.
The right sequence is infrastructure first, custom development second. If you are not sure where you are in that sequence, how to figure out what to build first is a good place to work through it.
If you have a workflow in mind and want to talk through what the build would look like - scope, timeline, and whether it is the right starting point - see the full engagement overview and get in touch. The first conversation is about fit, not about closing anything.
