Case Study: How a Consulting Firm Turned a Senior Partner's Research Process Into a Subscription Product

One partner had a research methodology that drove client renewals. Nobody else at the firm knew exactly how it worked. Here is how it became a system - and then a service.

February 2026

Note: this case study is a composite of patterns across real engagements. Firm details have been generalized.

The firm had a problem that looked like an opportunity. One of its senior partners produced a weekly market intelligence brief that clients consistently cited as a reason they renewed. It combined public data, industry filings, and a particular interpretive framework the partner had developed over fifteen years of doing the work. Clients did not just read it - they forwarded it internally. A few had made procurement decisions based on it.

When clients asked whether others at the firm could produce the same brief, the honest answer was no. The partner ran the process from memory. She knew which sources to check, how to weight certain signals, when an emerging trend was worth noting and when it was noise. None of that was written down. It could not be delegated. It could not scale.

This is a common situation in professional services. The most valuable thing a firm produces often lives inside a single person who has never had to explain it.

What discovery actually involved

Discovery took three weeks. That is longer than typical, and the reason was the nature of the problem: the partner had never articulated her research process as a process. She had to externalize something she had always done intuitively.

The first week was mostly listening. What sources did she check, and in what order? Which sources were reliable for which kinds of signals? What was the first thing she looked for in a filing, and what did she do if she did not find it? How did she decide whether something was worth including? The questions were granular, and the answers sometimes surprised her. She discovered patterns she did not know she had.

The second week moved into structure. How did she organize what she found before writing? What made the brief readable - what was its underlying architecture? What kinds of interpretive moves was she making - comparing current data to historical baselines, flagging anomalies against sector norms, identifying second-order implications of primary findings?

By week three, there was a draft specification: a document describing the sources, the monitoring logic, the interpretive framework, and the output format in enough detail to build against.

The discovery document was not a prompt. It was closer to a methodology manual - the kind of thing a senior person might write for a very capable junior who needed to understand not just what to do, but why, and how to handle the cases where the usual approach did not apply.

The build

Build took eight weeks. The core system had three components: a monitoring layer that tracked the relevant sources on a weekly schedule, a synthesis layer that applied the partner's analytical framework to what the monitoring layer surfaced, and a formatting layer that produced output in the brief's established structure.
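The three-layer shape described above can be sketched in a few lines. Everything here is illustrative: the names (Signal, monitor, synthesize, format_brief), the source weights, and the threshold are hypothetical stand-ins, not the firm's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str    # where the data point came from
    content: str   # the raw finding
    weight: float  # reliability weighting assigned to this source

def monitor(sources: dict[str, float]) -> list[Signal]:
    """Monitoring layer: poll each tracked source on the weekly schedule.
    A real system would fetch filings and data feeds; stubbed here."""
    return [Signal(source=name, content=f"finding from {name}", weight=weight)
            for name, weight in sources.items()]

def synthesize(signals: list[Signal], threshold: float = 0.5) -> list[Signal]:
    """Synthesis layer: apply the analytical framework. Reduced here to a
    single weight threshold for illustration; the real framework encoded
    the partner's interpretive rules."""
    return [s for s in signals if s.weight >= threshold]

def format_brief(findings: list[Signal]) -> str:
    """Formatting layer: render output in the brief's established structure."""
    lines = ["WEEKLY MARKET INTELLIGENCE BRIEF", ""]
    for f in findings:
        lines.append(f"- [{f.source}] {f.content}")
    return "\n".join(lines)

brief = format_brief(synthesize(monitor({"industry_filings": 0.9,
                                         "trade_press": 0.4})))
print(brief)
```

The point of the separation is that each layer can be calibrated independently: sources can be added to monitoring, the framework can be refined in synthesis, and the output format can change without touching either.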

The calibration process was the most demanding part. Output from the system was compared against briefs the partner had actually written. Where the system's interpretation diverged from hers, the reasons were examined. Sometimes the prompt logic was wrong. Sometimes the framework specification needed to be more precise. A few times, the partner realized she had articulated her process one way during discovery and actually did it differently in practice - those gaps required another conversation.
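A minimal sketch of that calibration loop, assuming the comparison was done section by section: score each section of the system's output against the corresponding section of a brief the partner actually wrote, and flag divergences for examination. The section names, sample text, and the 0.8 similarity cutoff are all hypothetical.

```python
import difflib

def divergence_report(system_sections: dict[str, str],
                      partner_sections: dict[str, str],
                      cutoff: float = 0.8) -> list[str]:
    """Return the section names where the system's text diverges from the
    partner's reference brief beyond the similarity cutoff."""
    flagged = []
    for name, reference in partner_sections.items():
        candidate = system_sections.get(name, "")
        similarity = difflib.SequenceMatcher(None, candidate, reference).ratio()
        if similarity < cutoff:
            flagged.append(name)  # examine why the interpretation diverged
    return flagged

# Illustrative reference brief vs. system output for one week.
partner = {"anomalies": "Filing volume fell 30% against the sector baseline.",
           "outlook": "Second-order effects on procurement likely by Q3."}
system = {"anomalies": "Filing volume fell 30% against the sector baseline.",
          "outlook": "No notable changes this week."}

flagged = divergence_report(system, partner)
print(flagged)
```

Each flagged section then becomes a conversation: was the prompt logic wrong, was the framework specification imprecise, or did the partner actually work differently than she had described?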

Before the build

  • One partner produced the brief from scratch each week
  • Process lived entirely in her head
  • Could not be delegated or replicated
  • Revenue from one-off research engagements only

After the build

  • Any analyst at the firm can run the process
  • Methodology documented and encoded in the system
  • Partner reviews output rather than producing it
  • Three clients on monthly subscription model

By week six, the system was producing output the partner rated at roughly 80% of what she would have written herself. Weeks seven and eight were refinement - closing that gap, testing edge cases, handling the weeks where the source data was thin and the system had to make interpretive decisions about what was worth including at all.

What the firm has now

The brief is now a product the firm sells as a monthly subscription. Three clients who previously paid for one-off research engagements converted to recurring fees. The firm has had two new client conversations specifically about the subscription offering - something that did not exist six months ago.

The partner still reviews every output before it goes to a client. That is deliberate. Her judgment is still part of the product. What changed is that she is reviewing and sometimes lightly editing, rather than producing from scratch. That frees several hours a week and means the brief can go out even when she is traveling.

The analysts who run the process now understand the methodology better than they did before. Externalizing it - putting it into a specification and then into a system - produced a document that doubles as a training artifact. New analysts read it as part of their orientation.

What made this work

The partner was willing to spend real time in discovery. Externalizing an intuitive process is uncomfortable and takes patience. Firms that rush this phase get a system built on an incomplete specification, which means calibration takes longer and the gap between system output and senior judgment stays wide.

The build was tested against actual outputs, not just described requirements. This matters because the specification always has gaps. Testing against real briefs surfaces those gaps before the system is in production use.

The firm thought about the product question from the start. The brief already had clients who valued it. The subscription model was not invented after the build - it was a hypothesis going in, which shaped decisions about output format and client-facing presentation.

This pattern - one expert's methodology becoming a firm-level system and eventually a product - appears across professional services engagements. The law firm contract risk scoring case study shows a similar dynamic: a senior attorney's review criteria encoded into a tool that junior associates can now run consistently.

If you have a workflow that a specific person at your firm does particularly well - and you have wondered what it would take to make it replicable - the starting point is usually a conversation about what discovery would need to surface. The Apparatus custom development practice is built around exactly this kind of engagement.

Next step

Ready to build something your firm owns?

We scope each engagement around your highest-value workflow. The conversation starts with what you want to build and whether we are the right people to build it.