
Glossary: AI Terms Every Professional Services Firm Should Know

Plain-language definitions of the terms that matter — without the hype or the PhD-level jargon.

February 2026 · 8 min read

AI discussions in professional services tend to oscillate between marketing hyperbole and technical jargon that assumes a background nobody actually has. Neither is useful for practitioners who need to make good decisions about how to use these tools.

This glossary covers the terms that actually matter for professional services work - the ones you'll encounter in vendor conversations, in AI tool documentation, and in discussions about your firm's AI approach. Definitions are practical, not academic.

The terms

Plain-language definitions

Large language model (LLM)

The type of AI behind tools like ChatGPT, Claude, and Gemini. LLMs are trained on large amounts of text and learn to predict what words should follow other words - which, at scale, produces remarkably capable language generation. The term refers to the underlying technology, not any specific product.

Prompt

The input you give an AI model - the text that tells it what to do. A prompt can be a single sentence or several paragraphs. Quality matters significantly: a well-structured prompt produces much better output than a vague one, regardless of how capable the underlying model is.

Prompt engineering

The practice of writing effective prompts. The term sounds more technical than it is - it just means learning how to communicate clearly with an AI model. For professional services work, it's better understood as briefing: giving the model the context, role, task, format, and constraints it needs to do the work well.

Context window

The amount of text a model can process in a single conversation - its working memory. Modern models have large context windows (often 100,000 tokens or more, roughly 75,000 words), which means you can paste full contracts, research reports, or long documents directly into the prompt and have the model work with the full content.

Hallucination

When an AI model generates confident-sounding information that is wrong or fabricated. Hallucination tends to happen in predictable conditions - particularly when the model is asked to recall specific facts (citations, statistics, case names, regulatory provisions) from training data rather than from documents you provide. It's not random, and it's largely preventable with the right prompting practices.

RAG (Retrieval-Augmented Generation)

A technique that connects an AI model to a knowledge base of documents, so it can retrieve relevant content before generating a response. In practice, this is what happens when you load documents into a Claude Project or a custom GPT - the model has access to that content and can ground its output in it rather than relying on training data alone.
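To make the retrieve-then-generate shape concrete, here is a toy sketch of the retrieval step. It uses simple keyword overlap to pick the most relevant document (production RAG systems use vector embeddings instead, but the overall flow is the same); the document snippets and prompt wording are invented for illustration.

```python
def retrieve(query: str, documents: list[str]) -> str:
    """Return the document that shares the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved document so the model answers from it, not from training data."""
    source = retrieve(query, documents)
    return (
        "Using only the source below, answer the question.\n\n"
        f"Source:\n{source}\n\n"
        f"Question: {query}"
    )

docs = [
    "The engagement letter caps liability at fees paid in the prior 12 months.",
    "The NDA survives termination for a period of three years.",
]
prompt = build_grounded_prompt("What is the liability cap?", docs)
```

The point of the sketch: the model never sees the whole knowledge base, only the retrieved excerpt stitched into the prompt, which is why its answer stays grounded in your documents.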

System prompt

Instructions given to an AI model before the conversation starts - invisible to the user but active throughout. System prompts are how custom tools (like a firm-specific AI assistant) establish the model's role, constraints, and context before any specific task is given. Many enterprise AI tools let organizations configure system prompts to shape the model's behavior for their use case.
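Most chat-style APIs express this as a message with a "system" role that precedes the user's messages. A minimal sketch of that structure (the role names follow the convention used by the major chat APIs; the instruction text itself is invented for illustration):

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble a chat request: the system message sets standing rules,
    the user message carries the specific task."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    system_prompt=(
        "You are a drafting assistant for a UK law firm. "
        "Use British English and cite only from provided documents."
    ),
    user_prompt="Summarise the attached engagement letter.",
)
```

The system message persists across every turn of the conversation, which is what makes it the right place for role, constraints, and house style rather than repeating them in each request.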

Temperature

A setting that controls how creative or varied an AI model's output is. Higher temperature produces more varied, sometimes more creative responses; lower temperature produces more consistent, predictable output. For professional services work - where consistency matters more than novelty - lower temperature settings generally produce better results.

Token

The unit AI models use to process text. Roughly speaking, one token equals about three to four characters of text. Models are priced per token and have context windows measured in tokens. For practical purposes: a typical legal memo might be 2,000 to 5,000 tokens; a full contract might be 10,000 to 50,000 tokens.
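The character-to-token ratio makes back-of-the-envelope sizing easy. A rough estimator, assuming the ~4 characters per token end of the rule of thumb above (actual counts vary by model and tokenizer):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token count from character length.

    English prose averages roughly 4 characters per token;
    exact counts depend on the model's tokenizer.
    """
    return round(len(text) / chars_per_token)

# A 20,000-character document works out to roughly 5,000 tokens -
# the upper end of a typical legal memo.
memo_tokens = estimate_tokens("x" * 20_000)
```

This is useful for sanity-checking whether a document will fit a context window or what a batch of work might cost, without needing a tokenizer on hand.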

Fine-tuning

Training an AI model further on a specific dataset to make it more capable or specialized for a particular domain. Rarely relevant for most professional services firms - the major models are already highly capable for the work, and fine-tuning requires significant data and cost. Usually mentioned as a future option that most firms never actually need.

Prompt library

A maintained collection of prompts that have been tested, refined, and organized for reuse. The mechanism by which individual AI learning becomes institutional knowledge. A good prompt library is organized by workflow, maintained by someone who owns it, and grows through a contribution norm that encourages team members to add prompts that work.

Agentic AI

AI that can take actions - browsing the web, executing code, sending emails, updating records - rather than just generating text. Increasingly relevant as firms move beyond content generation toward workflow automation. Most current professional services AI use doesn't require agentic capability, but it's the direction the technology is moving.

Further reading

From terms to practice

Understanding the vocabulary is a starting point. The bigger leverage is in the practice. The most actionable next step for most professional services practitioners is learning how to structure a prompt that actually produces what you need - not just the terminology around it.

For firm leaders thinking about AI adoption more broadly, why firm AI use isn't compounding covers the structural problem that keeps most firms stuck at individual use rather than building anything lasting.

Next step

Ready to give your team a shared standard?

Apparatus 101 gives your team structured prompting, a seeded prompt library, and the workflows to keep it growing. One session — no ongoing subscription.