How to Turn a Good Prompt Into a Firm Asset
The anatomy of a Skill. When something is worth packaging — and when a one-off prompt is good enough.

There is a moment in every firm that takes AI seriously when someone writes a prompt that really works. It produces reliable output. It handles context well. Other people want it. And then it disappears — shared in a Slack message, saved in someone's browser history, known only to the person who wrote it.
The gap between a good prompt and a Skill is not technical. It is organizational. A Skill is a prompt — or a set of prompts — packaged so that anyone on the team can run it on the same kind of work and get the same quality of output.
What a Skill actually is
A prompt you use yourself is a starting point. You write it, adjust it, run it, tweak it based on the result. You know the edge cases. You know when it needs extra context and when it does not.
A Skill is built for someone who is not you. It has to be self-explanatory enough that a colleague can run it on their own work without asking questions. That means it needs three things a one-off prompt often lacks:
Clear input requirements
What does someone need to provide for this to work? A document, a client name, a context summary? A Skill specifies this upfront so the person running it knows exactly what to gather before they start.
Defined output format
What does the result look like? A Skill produces something predictable - a structured memo, a bullet-point summary, a draft email in a specified format. Predictability is what makes output reviewable, not just readable.
A known use case
When should someone reach for this? "Use this when summarizing a client intake call" is more useful than a great prompt with no label. The use case is what makes a Skill findable and usable by someone who did not write it.
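The three requirements above can be sketched as a minimal metadata record attached to each Skill. This is an illustrative sketch only - the field names and the example values are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class SkillCard:
    """Minimal metadata that lets someone who did not write a Skill use it."""
    name: str
    use_case: str              # when to reach for this Skill
    required_inputs: list      # what to gather before running it
    output_format: str         # what the result should look like

# Hypothetical example, following the intake-call case above:
intake_summary = SkillCard(
    name="Client Intake Call Summary",
    use_case="Use this when summarizing a client intake call",
    required_inputs=["call transcript or notes", "client name", "matter type"],
    output_format="Bullet-point summary: goals, constraints, open questions, next steps",
)

# A colleague can check what to gather before starting:
print("Gather first:", ", ".join(intake_summary.required_inputs))
```

The point of the record is discoverability: the use case makes the Skill findable, and the inputs and output format make it runnable without a conversation with its author.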
When a prompt is worth turning into a Skill
Not every good prompt is worth packaging. The test is whether the same kind of task comes up again - not only for the person who wrote the prompt, but also for other people on the team. If the answer is yes on both counts, you have a Skill candidate.
The types of prompts that become the most valuable Skills tend to share a few characteristics. They work on a recurring task type, not a one-off request. They require firm-specific context - your tone, your client relationship standards, your document formats - that a generic prompt would not have. And they are the kind of thing where quality variance is costly: a mediocre first draft of a client-facing memo is worse than no draft at all.
Research synthesis prompts are a good example. Every firm does some version of this: gather sources, pull the relevant findings, organize them into something a person can act on. Done well, this is a repeatable task with a consistent structure. Done badly, it produces a wall of text that someone has to re-read and reorganize anyway. A Skill for research synthesis tells the AI what the output should look like, what to prioritize, and what level of detail is appropriate for your firm's work. The person who built that Skill did the thinking once. Everyone else benefits from it every time.
The anatomy of a Skill
Structurally, a Skill is a prompt with four components, each doing a specific job:
Role & context
Who is the AI being asked to be, and what does it know about your firm? This is where you embed your institutional context - the kind of work you do, the standards you hold, the voice you write in.
Task definition
What should the AI do with the input it receives? Be specific about the action - summarize, extract, draft, compare - and what the purpose of the output is.
Output format
What does the result look like? Headers, bullets, plain paragraphs, a table? The more specific this is, the more consistent the output will be across different users.
Constraints & guardrails
What should the AI not do? Length limits, tone restrictions, what to flag for human review rather than synthesize, when to ask for clarification rather than make an assumption.
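Assembled, the four components plus the user's input become a single prompt. The sketch below shows one way to do that assembly; the section labels and sample content are illustrative assumptions, not a required format:

```python
def build_skill_prompt(role_context, task, output_format, constraints, user_input):
    """Assemble the four Skill components and the user's input into one prompt.

    Each section does one job: role/context embeds firm knowledge, the task
    names the action, the output format pins down what comes back, and the
    constraints say what the AI must not do.
    """
    return "\n\n".join([
        f"# Role & context\n{role_context}",
        f"# Task\n{task}",
        f"# Output format\n{output_format}",
        f"# Constraints\n{constraints}",
        f"# Input\n{user_input}",
    ])

# Hypothetical research-synthesis Skill:
prompt = build_skill_prompt(
    role_context="You are a research assistant at a mid-size advisory firm.",
    task="Synthesize the attached sources into findings the team can act on.",
    output_format="A memo: one-line summary, then findings as bullets with sources.",
    constraints="Max 400 words. Flag conflicting sources for human review; do not resolve them.",
    user_input="[paste sources here]",
)
```

Keeping the components as separate arguments, rather than one pasted blob, makes the gaps visible: a Skill with an empty constraints section is easy to spot and fix.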
Building your first Skill
The easiest way to build a Skill is to start with a prompt you already use. Open it. Run it on a few recent examples. Notice where it produces output you are happy with and where you have to do cleanup. The cleanup is the gap - the places where your current prompt is underspecified and your judgment fills in what the prompt leaves out.
Write down what you actually do in those cleanup moments. Then add those steps to the prompt. That is most of what makes a good prompt into a Skill: externalizing the judgment that used to live only in your head.
Test it with someone else before you add it to the library. Give it to a colleague without explaining what to do - just the Skill and an appropriate input. See what they produce. The places where they get stuck or ask questions are the places where the Skill needs more definition.
Why this compounds
A single Skill is a useful tool. A library of Skills is something different: it is the firm's collective knowledge about how to do work well, in a form that anyone can use.
When a new hire joins, they do not have to spend six months figuring out how your firm writes client updates, or how it structures research memos, or what level of detail goes into a due diligence summary. They open the library, find the relevant Skill, and produce output that reflects your firm's standards from day one.
When a partner goes on leave, the way they run client calls does not disappear with them. It is in the library.
This is the shift from individual AI use to firm-level AI use. Not everyone needs to be good at prompting - they need to be good at using the Skills their colleagues have built. That is a much lower bar, and it is what makes compounding possible.
Building a Skills library is the core of Apparatus 202. We cover how to write Skills, how to organize them, and how to build a library your team actually uses. If your firm has completed 101 and is ready for the next step, that is where to go next.
