You've tried ChatGPT. Possibly Claude. Maybe you even paid for the premium version, typed in your request with genuine hope, and received... something that could have been written by anyone, about anything, for no one in particular.
Generic. Vague. The verbal equivalent of beige.
So you concluded that AI isn't quite ready yet. Or that it's useful for other people's businesses, just not yours. Or that you'll get round to figuring it out properly when things are less busy (they won't be).
This might sting a bit: if AI doesn't deliver a useful response, 100% of the time it's because you weren't clear what you wanted.
Not 90%. Not "usually." Every single time.
That's not my observation. It comes from David Boyle, who's spent decades in audience research and now teaches organisations how to actually get value from these tools. And after sitting with dozens of small business owners who've had the same disappointing experience, I think he's onto something.
The problem is almost never the model's intelligence. It's almost always the context you didn't think to provide.
The Consultant You'd Actually Hire
Dharmesh Shah, co-founder of HubSpot, has a thought experiment I keep coming back to.
You're hiring a consultant. Two candidates:
Consultant A: 200 IQ. Genuine polymath. Could probably explain string theory while solving a Rubik's cube blindfolded. Knows absolutely nothing about your business, your customers, your constraints, or what you actually need.
Consultant B: 150 IQ. Still sharp. Knows your business inside out. Understands why that one supplier is a nightmare, why your best customers come from referrals not ads, and why the obvious solution won't work in your situation. Consultant B also knows the tribal knowledge: the workarounds that aren't documented anywhere, the real reasons deals fall through (not the ones in the CRM), the inside baseball that only comes from being embedded in the business.
You pick Consultant B. Obviously. Every time.
Because raw intelligence without relevant context is just confident guessing. Consultant A might give you a technically perfect answer that's completely wrong for your circumstances. They'd probably present it beautifully, too, with slides.
The same applies to AI. GPT-5, Claude Opus 4.5, Gemini 3.0: these models have capabilities that would have seemed like science fiction three years ago. But capability without context produces the same generic output that made you quietly cancel your subscription.
Why Zero Times Anything Is Still Zero
Shah frames this mathematically, and it's worth sitting with:
Success = IQ × CQ
IQ is the model's intelligence. CQ is the Context Quotient: how much the AI knows about you, your business, your goals, your constraints, your preferences, and what you actually mean when you say "make it punchier."
That's multiplication, not addition. If your CQ is zero, the whole equation equals zero. Doesn't matter if you're using the most sophisticated model on the planet. Zero times genius is still zero.
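If it helps to see the arithmetic, here's a toy illustration in Python (the numbers are invented; only the shape of the relationship matters):

```python
def success(iq: float, cq: float) -> float:
    """Shah's framing: outcome quality is multiplicative, not additive."""
    return iq * cq

print(success(iq=200, cq=0))    # 0 -- genius model, zero context
print(success(iq=150, cq=0.9))  # 135.0 -- good model, rich context wins
```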
Lance Martin puts it another way: the LLM is like a CPU, and the context window is like RAM. Raw processing power means nothing if the working memory is empty.
And here's the counterintuitive bit: as models get smarter, they become more sensitive to poor prompts, not less. GPT-5 follows instructions with surgical precision, which means vague instructions produce vague results more reliably than ever.
The IQ gap between AI models is narrowing. Every major company is building astonishingly capable systems. The CQ gap, meanwhile, is wide open. That's where the advantage lies. Not in picking the "best" model, but in getting better at telling it what you actually need.
In real estate, the old saying was "location, location, location." In AI, it's "context, context, context." And if anything, context matters more. You can renovate a house in a bad location. You can't make AI useful without telling it what you actually need.
You're Not Bad at This (Everyone Struggles)
If you're finding context transfer difficult, you're experiencing a universal cognitive challenge, not a personal failing.
There's a classic psychology experiment called Duncker's radiation problem. Participants learn to solve a medical puzzle: how do you destroy a tumour with radiation without killing the healthy tissue around it? The solution is to use multiple weak beams that converge at the tumour.
Then researchers present an analogous military problem: how does an army capture a fortress when the main road is too narrow for the full force? Same principle. Split into smaller groups, converge from different directions.
Success rate? About 10-30%.
But when researchers simply ask "does the radiation problem have any bearing here?", success jumps to 90%.
The knowledge was already there. The connection just needed to be made explicit.
AI needs the same kind of nudge. It's not that you're particularly bad at this. It's that humans, in general, are surprisingly poor at transferring context between situations. We assume too much is obvious. It rarely is.
The Four P's (Prompting Is Only One of Them)
David Boyle argues that focusing on "prompting" misses three-quarters of the picture. There are four P's, and context runs through all of them.
Prep
What you bring before you type a single word.
The AI doesn't know if you're a dentist or a dog groomer. It doesn't know if you've got a marketing budget that could buy a small car or if you need solutions that cost less than a coffee. It doesn't know if you want edgy and provocative or safe and corporate.
You take these things for granted. They're so obvious to you that you forget to mention them. But without them, you're asking the AI for directions without saying where you're starting from.
Before you start, gather:
- What does "good" look like for this specific task?
- What constraints exist (budget, brand guidelines, compliance, your boss's inexplicable hatred of bullet points)?
- What materials can you paste in (examples of tone, past work that landed well, competitor examples to avoid)?
One more thing: where you put context matters. Models don't attend to every part of a long prompt equally, so don't bury the background. Put your context at the top, before the task instruction.
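As a minimal sketch of that ordering (the helper function is my own invention, not any particular tool's API):

```python
def build_prompt(context: str, task: str) -> str:
    """Put the background first, the ask last."""
    return f"CONTEXT:\n{context.strip()}\n\nTASK:\n{task.strip()}"

print(build_prompt(
    context=(
        "Small accounting firm in Manchester. Audience: existing clients "
        "who already trust us. Tone: warm, not salesy. Under 200 words."
    ),
    task="Draft an email introducing our new R&D tax credit service.",
))
```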
Prompt
The instruction itself. Most people think this is the whole game. It's about 25% of it.
The prompt is only as good as the prep that informs it. "Write me a marketing email" is about as useful as asking a stranger for directions by saying "which way?"
Process
Breaking tasks into steps instead of asking for the final answer in one go.
Don't say "write me a marketing strategy." Say "first, help me understand my current position. What questions should I be asking about my target audience?"
Use the AI's memory features when they help. Reset when you need fresh thinking. Build on previous conversations when continuity matters.
This is a workflow, not a magic button.
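If you ever call a model programmatically, the same workflow looks something like this sketch. `ask()` is a stand-in for whatever chat tool or API you use, and the step wording is illustrative:

```python
def ask(prompt: str) -> str:
    """Stand-in for your AI tool; swap in a real chat or API call."""
    print(f"--- prompt ---\n{prompt}\n")
    return "(the model's answer goes here)"

context = "Small accounting firm in Manchester, serving owner-managed businesses."
steps = [
    "Here's my business: {context}. What questions should I answer about my target audience?",
    "My answers so far: {previous}. Summarise my current market position in five bullets.",
    "Given that position: {previous}. Suggest three marketing priorities, with trade-offs.",
]

# Each step feeds the previous answer into the next prompt.
previous = ""
for step in steps:
    previous = ask(step.format(context=context, previous=previous))
```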
Proficiency
Your judgment on the output. The AI gives you a draft, not gospel.
Boyle uses the acronym CARE:
- Check: Is this accurate? Does it match what you know to be true?
- Add: What's missing? What context did it lack that you can now provide?
- Remove: What doesn't belong? What's off-brand, irrelevant, or just wrong?
- Edit: Make it yours. Adjust the language, add your perspective, fix the bits that sound like a robot wrote them (because one did).
You wouldn't expect a new hire to produce perfect work on day one without any feedback. Same principle.
What This Actually Looks Like
Let's make this concrete.
The vague prompt (what most people try):
"Help me write a marketing email for my business."
What the AI doesn't know: what your business does, who this email is for, what action you want, your tone of voice, what's worked before, any constraints on length or timing or compliance. Basically everything.
The context-rich prompt (what actually produces something useful):
I run a small accounting firm in Manchester serving owner-managed businesses (£500k-£5m turnover). I need to write an email to existing clients about our new R&D tax credit service.
Context:
- Audience: Business owners who already trust us for their accounts
- Goal: Book a 15-minute call to assess eligibility
- Tone: Professional but warm, not salesy (we're their trusted adviser)
- Length: Under 200 words (they're busy people)
- Key objection to address: "I don't do R&D" (most people underestimate what qualifies)
The first prompt gets you something you'd be embarrassed to send. The second gets you something you might actually use.
Before your next AI task, try these four questions:
- What am I actually trying to achieve? (Not "what do I want it to write" but "what outcome do I need?")
- Who is this for, and what do they care about?
- What does "good" look like in my specific situation?
- What constraints or preferences should it know about?
The Four Layers Worth Providing
Not all context is equal. Here's a quick framework:
| Layer | What it includes | Example |
|---|---|---|
| Personal | Your role, expertise, preferences | "I'm a non-technical founder who doesn't want jargon" |
| Business | Your company, industry, position | "B2B SaaS, 20 employees, bootstrapped, no marketing team" |
| Task | What you're trying to accomplish | "Write onboarding emails for new users who signed up but haven't logged in" |
| Constraints | Limits, rules, requirements | "Must comply with GDPR, under 100 words, can't mention competitors by name" |
Most people provide one layer. Maybe two if they're being thorough. Effective AI use requires all four.
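If you want to make "all four, every time" a habit, keep the layers as a small reusable structure you fill in before each task. A sketch, with field names of my own choosing:

```python
from dataclasses import dataclass

@dataclass
class Context:
    personal: str     # your role, expertise, preferences
    business: str     # company, industry, position
    task: str         # what you're trying to accomplish
    constraints: str  # limits, rules, requirements

    def as_preamble(self) -> str:
        """Render all four layers as the top of a prompt."""
        return (
            f"About me: {self.personal}\n"
            f"About the business: {self.business}\n"
            f"The task: {self.task}\n"
            f"Constraints: {self.constraints}"
        )

ctx = Context(
    personal="Non-technical founder; no jargon, please.",
    business="B2B SaaS, 20 employees, bootstrapped, no marketing team.",
    task="Write onboarding emails for signed-up users who haven't logged in.",
    constraints="GDPR-compliant, under 100 words, no competitor names.",
)
print(ctx.as_preamble())
```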
The Actual Cost of Getting This Wrong
When context is missing, something predictable happens. AI produces generic output. You conclude that AI isn't that useful. You stop trying. The productivity gains everyone else talks about never materialise. And you're left wondering what all the fuss was about.
Shah puts it well: "Without shared, structured context, AI is like a very smart intern on their first day. It can answer questions impressively and write eloquently. But it doesn't know what's actually happening in your business."
The tragedy isn't that AI failed. It's that you walked away from something that could genuinely help, because nobody explained that the tool was waiting for you to tell it what you actually needed.
The solution isn't a smarter model. It's context.
What You Can Do About This
Working out what context to provide isn't always easy. You know your business so well that you've forgotten what's not obvious to an outsider. That's the curse of expertise: the things you take for granted are precisely the things the AI needs to know.
Sometimes you need someone to ask the right questions.
If you're finding that AI tools give you generic output no matter what you try, a Power Hour might help. In 60 minutes, we can map out the context your AI tools are missing and build a simple system for providing it consistently.
Not a training session on prompting tricks. A working session on understanding what your specific business needs to communicate to get useful results.
Context isn't a feature. It's the whole game.
