Prompt Playbook: Big Questions in AI PART 2
"My AI chatbot seems to understand me better than my spouse does," a client recently told me.
Half-joking I think…
"So is it actually... intelligent? Or is it just really good at faking it?"
As we interact with systems that can write poetry, solve complex problems, and engage in what feel like meaningful conversations, the line between simulation and genuine intelligence begins to blur. Increasingly so as the models get more sophisticated.
When Claude or ChatGPT responds with apparent empathy or insight, are we experiencing something truly comparable to human intelligence, or simply an elaborate illusion created by pattern matching?
Let’s get started:
Summary
Is AI intelligent?
AI intelligence vs. mimicry
Three distinct perspectives on machine intelligence
Why token prediction creates the appearance of understanding
The emergent properties of scale in large language models
How our definition of intelligence keeps shifting
A practical framework for discussing AI capabilities with clients
The Philosophical Quandary
The question of whether AI is "truly intelligent" or merely mimicking intelligence has been debated since the very beginning of the field.
Alan Turing sketched out the first “computer” (as we would understand it) in 1936 to tackle a problem in mathematical logic, and by 1950 he was openly asking whether such machines could ever think.
Smart guy. Scary smart.
Whether AI is “intelligent” touches on fundamental questions about consciousness, understanding, and what it means to think—questions that have honestly occupied philosophers for millennia and remain (largely) unresolved.
When fielding this question, there are several distinct perspectives you could take:
Position 1: "AI Systems Possess a Different Kind of Intelligence"
Some argue that AI systems like large language models demonstrate genuine intelligence, just of a different kind than human intelligence. These systems can reason through complex problems, identify patterns humans might miss, and generate novel solutions.
In this view, we should broaden our concept of intelligence beyond human cognition. Stop being so anthropocentric (as we are wont to be)! Intelligence should be defined functionally—by what a system can accomplish—rather than by how it accomplishes it or whether it has subjective experiences.
Position 2: "AI Is Advanced Mimicry With No Real Understanding"
The opposing view holds that current AI systems are essentially sophisticated pattern-matching machines with no actual understanding of the content they process. In this perspective, what looks like intelligence is actually just statistical correlation at massive scale.
Which… kinda makes sense considering how LLMs work. They are, at bottom, massive probability engines.
Proponents of this view often cite the Chinese Room thought experiment proposed by philosopher John Searle: a person who doesn't understand Chinese follows instructions to manipulate Chinese symbols, producing appropriate responses without comprehension. The system as a whole appears to understand Chinese, but no actual “understanding” exists anywhere within it. The system is pure mimicry.
Position 3: "The Distinction Between Real Intelligence and Simulation May Not Matter"
My position is more pragmatic. Surprise surprise!
The question itself might be less important than we think. Or even meaningless. Current AI systems are neither conscious minds nor simple mechanical calculators—they occupy a new and interesting space that challenges our existing categories.
The Token Prediction Explanation
At their core, large language models like GPT-4 or Claude are token prediction engines. When you input text, the model predicts what tokens (roughly, pieces of words) are most likely to follow based on patterns it observed in its training data.
This is fundamentally different from how humans think. These systems don't have intentions, beliefs, or desires. They don't "know" what words mean in the way we do—with connections to lived experience, emotions, and physical sensations. They're processing statistical patterns, not meaning as humans understand it.
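To make that concrete, here’s a minimal sketch of that prediction step. It’s illustrative only: GPT-4 and Claude don’t expose their weights, so it assumes the small open GPT-2 model and the Hugging Face transformers library as stand-ins for the same mechanism.

```python
# A minimal sketch of next-token prediction. GPT-4 and Claude are closed
# models, so the small open GPT-2 model stands in here purely to show the
# mechanism: score every token in the vocabulary, nothing more.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Only the scores for the token that would come *next* matter here.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The model's entire "answer" at this step is a ranking over its vocabulary.
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

That ranking over the vocabulary is the model’s entire output for a single step. A full chatbot reply is just this loop repeated: pick a token, append it to the prompt, predict again.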
However, this doesn't mean these systems are simple or unimpressive. The scale at which they operate leads to emergent properties that weren't explicitly programmed.
Just as complex behaviours can emerge in natural systems (like how individual ant behaviours create sophisticated colony structures, or a flock of starlings forms a murmuration), complex capabilities emerge from these massive statistical models—capabilities that were never directly encoded.
Hell, life itself and evolution thereafter led to complex outcomes from simple mechanical processes. Given enough scale (be it geological time or immense data sets and compute) some pretty wild things emerge unbidden.
It's also important to recognise that LLMs represent just one approach to AI—albeit the dominant one at this moment. We’ve had many types before, which is why knowing the history is helpful!
We’ll also have many types hereafter. The field continues to evolve, and future breakthroughs may come from entirely different architectures or approaches. Many experts believe that truly intelligent systems will ultimately require different approaches or hybrid methods that go beyond the statistical prediction paradigm. We’ll see!
The Moving Goal Post Problem
From emergence comes behaviour that seems very “intelligent”. And it keeps happening again and again in the world of AI.
But we humans keep moving the goalposts. We’ll declare something like “computers are great at calculations, sure, but they’ll never beat a grandmaster chess player”. We set arbitrary boundaries for what counts as human intelligence. Which then get demolished.
Consider this pattern:
1997: "Chess requires unique human intelligence"... until Deep Blue beat Kasparov.
2016: "OK has a finite move set so of course a computer could brute force it. But Go is too intuitive for computers"... until AlphaGo beat Lee Sedol.
2020: “OK but that’s just games in a controlled environment. AI can’t deal with real world tasks like driving which require real-world perception and split-second judgment”… until Waymo launched driverless taxis in Phoenix.
2023: “Driving is mainly mechanical. AI can’t handle complex reasoning or professional tasks”… until GPT-4 started passing bar exams, acing medical boards, and writing better code than junior devs.
This perfectly illustrates what comedian Louis C.K. observed in his famous bit about