
Stop Using AI as an Answer Engine


You know that feeling. You're writing something important (a proposal, a blog post, a strategy document) and somewhere in the back of your mind there's a better angle. An insight you had earlier. A connection you made while reading something last week. But when you reach for it, nothing's there.

The idea existed. You know it did. But it escaped before you could capture it.

We consume so much content daily: articles, podcasts, conversations, social media, emails. Ideas flash through our minds constantly, then vanish just as quickly. Later, we're left with the frustrating sense that something valuable has slipped away.

Most people use AI to get more information faster. But what if the real opportunity is the opposite: using AI to retrieve the ideas already inside you?

The Answer Engine Trap

Watch how most people use ChatGPT or Claude. They type a question, get a response, and move on. It's Google Search with better grammar.

This feels productive. You got an answer. Task complete.

But here's what Roger Martin, the strategy advisor and former Dean of Rotman School of Management, points out: AI is fundamentally a mode-seeking device. It searches through vast amounts of data and returns the most frequent answer to your question. Not necessarily the best answer. The most common one.

"So you can say to AI, 'Give me the single most innovative answer to this question,'" Martin explains. "It'll look at all the answers that have been classified as 'innovative,' and whichever one of those answers is the most frequent, it'll give you that."

You asked for the exceptional. You got the modal.

This matters less for factual queries. If you need a hex code or a syntax reminder, the most common answer is probably right. But for strategic decisions, creative work, or anything where your specific context matters? The average answer, by definition, produces average results.

And it gets worse.

Even Experienced Users Fall In

If you're reading this, you probably don't accept the first response uncritically. You push back, ask follow-up questions, refine the output. That's better than the one-shot approach.

But here's the trap even sophisticated users fall into: you're still optimising for answers rather than thinking.

The subtle difference between "help me solve this" and "help me think about this" changes everything. The first outsources your cognition. The second develops it.

What You're Actually Losing

In 2011, researchers Betsy Sparrow, Jenny Liu, and Daniel Wegner published a study in Science that identified what they called "the Google effect." Their finding: people are less likely to remember information they believe is easily accessible online. We've offloaded memory to search engines.

That's not necessarily bad. Freeing mental resources from trivia could theoretically allow deeper thinking about concepts. The notebook, the calendar, the contact list: we've always used external tools to extend our cognition.

But with AI, we're not just outsourcing memory. We're outsourcing reasoning.

A longitudinal study published in Nature's Scientific Reports examined habitual GPS users. The finding was sobering: regular GPS use correlated with steeper declines in hippocampal-dependent spatial memory. Critically, the arrow didn't point from poor navigation skills to GPS reliance; greater GPS use predicted the later decline.

Use it or lose it applies to thinking, not just muscles.

A 2024 study in MDPI's Societies journal found a significant negative correlation between frequent AI tool usage and critical thinking abilities. Younger users (17-25) showed higher AI usage with lower critical thinking scores, while those 46 and above showed the inverse pattern.

This isn't about AI being bad for young people. It's that those who developed critical thinking skills before AI became ubiquitous use the tools differently. The skills gap creates a usage gap, which widens the skills gap further.

The Invisible Shift

Researchers describe a phenomenon called "delegation drift": a gradual, almost invisible transfer of cognitive responsibility from you to the tool. It happens incrementally, without awareness. What begins as convenience evolves into dependency.

When ease is prioritised over depth, high-level thinking atrophies.

Those buried ideas that could improve your work? AI-as-answer-engine won't surface them. Your subconscious might be holding exactly the insight you need, but if you're asking AI to tell you what to think rather than helping you think, that insight stays buried.

Figure: Socratic questioning retrieves buried ideas from your subconscious.

The Socratic Alternative

Roger Martin is working on something different. Not an AI that gives you answers, but one that helps you discover answers through being questioned.

"Rather than the AI coming up with the answer for you," he explains, "what the AI does is help you as your thought partner. It's sort of Socratic. It says, 'Okay, what's the problem that you're trying to solve? Well, okay, that's not such a great definition. Can we improve that?'"

This isn't new pedagogy. Research from Washington University showed that students who generated conceptual questions performed significantly better on tests than students who simply received answers. Questions identify knowledge gaps, focus attention, and reveal mental models. Receiving answers does none of these things.

A 2025 paper in Frontiers in Education introduced what researchers call "The Cognitive Mirror Framework": AI that acts as a reflective partner by "feigning confusion" and asking clarifying questions. When the AI asks "Can you explain that differently?" the learner must re-evaluate their thinking in real time.

The result? "Increasing autonomy and conceptual reinforcement."

The AI isn't giving you its thinking. It's strengthening yours.

Retrieval, Not Generation

Here's where this connects to those buried ideas.

Socratic questioning works as a retrieval mechanism. By probing around a topic from multiple angles, it can help surface ideas lodged in your subconscious that direct questioning would never find.

That brilliant angle you had for a proposal, the connection you made while reading something last week, the insight that emerged during a conversation but escaped before you could capture it: these aren't gone. They're just not accessible through direct queries.

"Give me ideas for this proposal" might get you generic suggestions from AI's training data.

"What am I not considering here?" followed by "Why might that assumption be wrong?" followed by "What would someone who disagrees with me say?" draws out what's already in your head but hasn't been articulated.

This is different from AI having the answer. This is AI helping you find your answer.
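Here's what that chained questioning can look like when scripted: a minimal sketch using the OpenAI Python client. The model name and the probe sequence are illustrative placeholders, and any chat API that keeps conversation history works the same way.

```python
# A minimal sketch of the chained-questioning pattern above.
# Each probe builds on the model's previous answer, so the reply
# history is carried forward between calls.
from openai import OpenAI

client = OpenAI()

# The probing sequence from above; the draft placeholder is yours to fill.
probes = [
    "Here's my draft proposal: <paste draft>. What am I not considering here?",
    "Why might that assumption be wrong?",
    "What would someone who disagrees with me say?",
]

messages = []
for probe in probes:
    messages.append({"role": "user", "content": probe})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever you use
        messages=messages,
    )
    answer = response.choices[0].message.content
    # Keep the reply in the history so the next probe builds on it.
    messages.append({"role": "assistant", "content": answer})
    print(f"\n> {probe}\n{answer}")
```

The point isn't the code; it's that each question forces the conversation back through your context instead of letting the model reach for its modal answer.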

Augment or Replace

The research literature makes a useful distinction. Tools either augment your thinking (transform how you reason) or replace it (eliminate the need to reason).

A spreadsheet augments thinking. It handles the arithmetic, but its real effect is transforming how you reason about data relationships. A calculator used pedagogically can focus attention on conceptual meaning rather than arithmetic.

AI can go either way. Used to draft, probe, and critique, it augments your thinking. Used to think for you, it lets that thinking atrophy.

The question isn't whether you're using AI. It's which mode you're operating in.

Practical Implementation

Here's how to shift from answer engine to thinking partner.

Default Prompts That Change the Dynamic

Instead of asking for answers, ask for questions (a code sketch for baking these in follows the list):

  • "Before answering, ask me 3 questions to understand my situation better"
  • "Challenge my assumptions here. What might I be wrong about?"
  • "Help me think through this rather than just giving me your answer"
  • "What am I not considering?"
  • "Play devil's advocate. Why might this be the wrong approach?"

When to Use Which Mode

Not every interaction needs Socratic questioning. Here's a simple distinction (a rough heuristic in code follows the lists):

Answer mode (just ask, get response):

  • Factual lookups (dates, syntax, specifications)
  • Routine tasks with clear right answers
  • Time-constrained queries where thinking isn't the bottleneck

Thinking partner mode (questions before answers):

  • Strategic decisions
  • Content creation where your perspective matters
  • Problem-solving where context is crucial
  • Anything where your thinking quality affects the outcome
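If you want to automate that triage, the distinction reduces to a rough heuristic. A sketch only: the trigger phrases below are assumptions drawn from the patterns above, not a validated classifier.

```python
# Illustrative heuristic for routing between "answer mode" and
# "thinking partner mode". The markers are assumed, not tested.
STRATEGIC_MARKERS = (
    "should i", "should we", "what do you think",
    "strategy", "position", "trade-off", "tradeoff",
)

def pick_mode(question: str) -> str:
    """Return 'thinking' for strategic-looking questions, else 'answer'."""
    q = question.lower()
    if any(marker in q for marker in STRATEGIC_MARKERS):
        return "thinking"
    return "answer"

assert pick_mode("What's the hex code for dark teal?") == "answer"
assert pick_mode("Should we reposition for enterprise?") == "thinking"
```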

Setting Up Default Behaviour

I've configured my AI tools to default to questioning mode. Before answering strategic questions, they now ask:

"This feels like a strategic question. Do you want me to help you think it through, or just give you my best answer?"

This builds the habit of recognising when deeper thinking serves better than quick answers.

For moments when I genuinely just need a fast answer, opt-out phrases bypass the check-in (a sketch of the gate follows the list):

  • "just answer"
  • "quick answer"
  • "skip Socratic"
  • "I'm in a rush"
  • "straight answer"

The mindset shift from "What's the answer?" to "Help me find the answer" sounds subtle. In practice, it changes everything.

Implementation Prompts

Here's the actual instruction I use:

Before answering questions that involve decisions, strategy, positioning, or choices, briefly check: "This feels like a strategic question. Do you want me to help you think it through, or just give you my best answer?"

When to check: business decisions, content strategy, trade-offs with no obvious right answer, anything where I say "should I..." or "what do you think about...".

When to skip: factual answers, syntax questions, looking things up.

You can add this to custom instructions in ChatGPT, to your CLAUDE.md file for Claude Code, or simply paste it at the start of a conversation.
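If you script against a model directly, the same instruction can ride along as the system prompt on every request. A minimal sketch using the Anthropic Python client; the model name is a placeholder:

```python
# Wiring the instruction above into a scripted call. The instruction
# travels as the system prompt, so every request gets the check-in rule.
import anthropic

SOCRATIC_INSTRUCTION = (
    "Before answering questions that involve decisions, strategy, "
    "positioning, or choices, briefly check: 'This feels like a strategic "
    "question. Do you want me to help you think it through, or just give "
    "you my best answer?' Skip the check for factual answers, syntax "
    "questions, and lookups."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name; use whatever you run
    max_tokens=1024,
    system=SOCRATIC_INSTRUCTION,
    messages=[{"role": "user", "content": "Should I niche down my consulting?"}],
)
print(response.content[0].text)
```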

The Capability AI Can't Replace

The difference between using AI as an answer engine and using it as a thinking partner isn't just about getting better outputs. It's about becoming a better thinker.

Roger Martin makes a distinction borrowed from Aristotle: there's the part of the world where things cannot be other than they are (science, where the pen will always drop), and the part where things can be other than they are (strategy, where imagination creates new possibilities).

In the scientific part, answers work. In the strategic part, you need judgment, imagination, and the ability to construct compelling arguments for possibilities that don't yet exist.

AI can give you the modal answer to any question. What it can't give you is the capacity to think well about questions that don't have answers yet.

That capacity develops through use. Atrophies through disuse.

The choice isn't whether to use AI. It's whether you're using it to become more capable, or less.
