
5 Signs Your Team Is Using AI the Wrong Way

A checklist for firm leaders who know their team is using AI but aren't sure how well.

February 2026 · 5 min read

Most firm leaders know their team is using AI. What they don't always know is whether they're using it well. The output looks fine. People seem productive. But certain patterns in how a team uses AI are signs of an engagement much shallower than it appears.

None of these signs means the team is doing something wrong on purpose. They usually point to a skills gap - the difference between using AI and knowing how to use it. Both look like "using AI" from the outside.

The checklist

Five patterns worth looking for

01

People restart instead of iterate

When a prompt produces mediocre output, the typical response is to delete it and start over - usually with a slightly reworded version of the same vague request. That's not iteration. Iteration means reading the bad output and asking: which component was missing? Was the context too thin? Was the task ambiguous? Was the format not specified? People who haven't learned to diagnose output treat every bad result as a fresh start instead of a signal.

02

Nobody can tell you what their best prompt is

Ask anyone on your team: "What's the prompt you've gotten the most value from?" If the answer is a shrug, or "I don't really save them," the firm's AI capability is starting from zero every day. Good prompts are assets. The people who get the most out of AI treat them that way - saved, named, reused, improved. If prompts live only in browser history, they might as well not exist.

03

Your junior staff use it more than your senior staff

This sounds counterintuitive, but it's a warning sign. AI fluency doesn't correlate with being young or tech-comfortable. It correlates with knowing your domain well enough to brief the model properly and evaluate the output critically. If your senior people aren't using AI, it usually means they haven't seen it work for the kinds of tasks they own - often because nobody has helped them write the right prompt for their actual work.

04

"We tried AI for X and it didn't work"

One bad attempt becomes a permanent verdict. "We tried AI for contract review and the output was unusable" - so they stopped. The conclusion should have been "our prompt for contract review was unusable." That's a solvable problem. But without a mental model of what makes a prompt good or bad, there's no way to tell the difference between a tool limitation and a fixable technique problem. The model gets blamed for a prompt quality issue.

05

The AI use looks the same as it did six months ago

If nobody is learning from what works, if no prompts are being shared, if the median AI output quality at your firm today is about the same as it was six months ago - nothing is compounding. Individual use is happening, but it isn't building anything. The firms that are genuinely ahead on AI right now aren't the ones with the most users. They're the ones whose use gets better every month because someone is paying attention to what works.

What to do about it

These are skills gaps, not attitude problems

Every one of these signs is fixable. They're not evidence that AI isn't right for your firm, or that your team lacks the aptitude. They're evidence that the team hasn't had the specific training that would change the behavior.

The behaviors that correlate with good AI output - giving context before giving a task, diagnosing bad output instead of restarting, saving prompts that work - are learnable. They're covered in detail in what it actually means to be good at AI.

The structural piece - getting your team to a shared standard, building a prompt library, creating norms around sharing - is a separate problem from the individual skill. Why your firm's AI use isn't compounding covers that part.

Next step

Ready to give your team a shared standard?

Apparatus 101 gives your team structured prompting, a seeded prompt library, and the workflows to keep it growing. One session — no ongoing subscription.