The Anatomy of a Structured Prompt (With Examples for Professional Services)
The five components every high-quality prompt needs — and how to use them to get consistent, useful output from any AI tool.

Most professionals who use AI for the first time write prompts the way they'd send a text message. Short. Vague. Low-context. Then they're surprised when the output is generic, off-tone, or needs three rounds of back-and-forth before it's useful.
The problem isn't the AI. It's the prompt.
A structured prompt has five components, each doing a specific job. Once you know what each part does and why it's there, your output gets better, not because you got lucky but because you gave the model what it needed to do the work.
Before we start
What a bad prompt looks like
Here's the kind of prompt a lot of consultants write in their first week using AI:
Prompt
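Summarize this contract and tell me what's important.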
The AI will produce something. It might even be decent. But it will almost certainly be:
- Generic in tone — no sense of your firm's voice or the client's context
- Wrong length — usually too long, sometimes too short
- Unfocused — the model has to guess what "summary" means to you
- Unreliable — run it again and you'll get something different
The fix isn't to write a longer prompt. It's to write a structured one. There's a difference.
The framework
The five components of a structured prompt
1. Role
Tell the model what it is. Not what you want it to do yet, but who it's being. This sets the frame for everything that follows: vocabulary, tone, level of expertise, and the kind of judgment calls it makes when something is ambiguous.
For professional services, useful roles include: senior analyst at a management consulting firm, corporate attorney specializing in M&A, financial advisor writing for a CFO audience. The more specific the role, the more calibrated the output.
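For contract work, a role line might read: "You are a corporate attorney who reviews commercial contracts for mid-market clients. Write for a reader who is smart but not a lawyer."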
2. Context
Background the model needs to do the task well. This is where you provide the information it can't know on its own: who the client is, what industry they're in, what's already been decided, what the stakes are.
A common mistake is to leave context out because it feels obvious. It's not obvious to the model. If you're drafting a memo for a client who just went through a restructuring, say so. That single sentence quietly changes how the model handles every ambiguous decision.
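Two sentences are often enough. For the restructuring example, context might read: "The client is a family-owned manufacturer that completed a restructuring last quarter. The board is risk-averse and will read this memo before its next meeting."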
3. Task
The specific thing you need done. Be precise and use verbs. Not "help me with this report" but "identify the three highest-risk clauses" or "draft an executive summary that leads with the financial exposure."
If you want the model to do multiple things, sequence them explicitly. "First... then... finally..." is more reliable than a single compound request. The model takes instructions literally; your job is to be literal on purpose.
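Applied to the contract summary, a sequenced task might read: "First, identify the three highest-risk clauses in the attached contract. Then, for each one, explain the risk in plain language. Finally, recommend a next step."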
4. Format
How the output should be structured. Bullet points or prose? How long? Should it use headers? Does it go directly into a document, or are you reading it first? Should it use your firm's terminology for this type of deliverable?
This is the one people skip most often. The model defaults to something reasonable, but "reasonable" and "what you actually need" are usually different. The output looks fine, but it's not paste-ready, and now you're reformatting instead of working.
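A format instruction for the same task might read: "Format the output as a bulleted list, one bullet per clause: clause name, the risk, and the recommended next step. Keep the whole thing under 300 words, ready to paste into an email."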
5. Constraints
What to avoid, what to flag, what not to assume. This is where you prevent the failure modes you've learned to expect. Don't draw legal conclusions. Don't use informal language. If something is ambiguous, say so rather than guessing. Cite the source document for any claim.
Constraints are also where you handle accuracy. For professional services output that will be client-facing, you almost always want: flag any claims you are not certain about. That one instruction does more to reduce hallucinations than almost anything else.
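A constraints block for client-facing contract work might read: "Do not draw legal conclusions. Cite the specific section of the document for every claim. If something is ambiguous, say so rather than guessing, and flag any claim you are not certain about."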
Putting it together
The same task, done right
Back to the contract summary. Here's one version of that prompt with all five components in place; the client details are illustrative:
Prompt
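You are a corporate attorney who reviews commercial contracts for mid-market clients. Write for a reader who is smart but not a lawyer.
The client is a family-owned manufacturer that completed a restructuring last quarter. This supplier agreement is part of the post-restructuring cleanup, and the board is risk-averse.
First, identify the three highest-risk clauses in the attached contract. Then, for each one, explain the risk in plain language. Finally, recommend a next step.
Format the output as a bulleted list, one bullet per clause: clause name, the risk, and the recommended next step. Keep the whole thing under 300 words, ready to paste into an email.
Do not draw legal conclusions. Cite the specific section of the contract for every claim. If something is ambiguous, say so rather than guessing, and flag any claim you are not certain about.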
That prompt will produce consistent, usable output. Not because it's long; it isn't, particularly. It's because every part is earning its place. The role tells the model who it is. The context fills in what it can't know on its own. The task and format together mean the output lands ready to use. The constraints catch the failure modes before they happen.
More importantly: that prompt is now a template. Swap in a new contract and new client context, and you have a repeatable workflow.
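A skeleton version makes the reuse concrete. The bracketed slots are placeholders (the names are mine, not a required syntax); everything else stays fixed:
You are a [role matched to the deliverable].
[Two or three sentences of client and engagement context.]
First, [step one]. Then, [step two]. Finally, [step three].
Format the output as [structure], no longer than [length], ready to [destination].
Do not [known failure modes]. Cite [the source] for every claim, and flag any claim you are not certain about.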
More examples
Professional services prompts in practice
Research brief — consulting
The engagement details here are invented for illustration; the five-part structure is what to copy.
Prompt
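You are a senior analyst at a management consulting firm, preparing background research for an engagement team.
The client is a regional grocery chain deciding whether to launch a private-label line. The team has already ruled out growth by acquisition, so don't revisit that option.
First, summarize the three market trends most relevant to that decision. Then, list the main risks for a late entrant. Finally, suggest five questions the team should pressure-test in client interviews.
Use a header for each of the three parts with bullet points underneath, and keep the whole brief under one page.
Separate established facts from your own inference, and flag any claim you are not certain about.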
Client memo — advisory
Again, the client specifics are placeholders; swap in your own.
Prompt
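You are a financial advisor writing for a CFO audience. Keep the register formal and direct.
The client is a logistics company that just went through a restructuring. The CFO is new to the role, and the board dislikes jargon.
Draft a one-page memo on what the restructuring means for next quarter's reporting. Lead with the financial exposure, then walk through the two decisions the CFO needs to make this quarter.
Write in plain prose with short paragraphs and no headers; this goes directly into a document the client will read.
Do not use informal language. If a figure is ambiguous in the source material, say so rather than guessing, and flag any claim you are not certain about.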
When it doesn't work
Iteration as a skill
Even a well-structured prompt won't always produce what you need on the first try. The right response is to diagnose, not to start over.
Most bad output comes from one of three problems: the context was insufficient, the task was ambiguous, or the format instruction wasn't specific enough. When you get something wrong, read the output and ask: which component did the model misread?
- Output is too generic: Add more specific context about the client, industry, or situation.
- Output is the wrong length or structure: Tighten the format instruction; be more literal about what you want.
- Output makes claims you can't verify: Strengthen the constraints: "flag any claim you are uncertain about" or "cite the specific section of the document."
- Output misses the point of the task: Rewrite the task component with more precise verbs and explicit sequencing.
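In practice, the fix is often one added sentence. If the contract summary comes back generic, for example, the revision might be: "This reads like it could be about any company. The client just went through a restructuring; treat that as the central fact and rewrite."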
Iteration isn't a sign that the prompt failed. It's how you calibrate. The goal is to reach a version of the prompt you can save, reuse, and hand to a colleague: one that produces the same quality of output whether you run it or they do.
The bigger picture
The prompt as a firm-level asset
One good prompt helps one person, on one task. A library of them is something else: it's how individual insight becomes firm-level infrastructure.
When a senior associate figures out the right way to prompt for contract risk flags, that knowledge shouldn't live in their browser history. It should be somewhere the whole team can find it, build on it, and keep improving. That's harder than writing a good prompt. It's also where the real leverage is.
