On misnamed machines, unexpected minds, and the power of mirrors

What if the problem with modern AI isn’t that it’s becoming too intelligent — but that we’ve misunderstood what kind of thing it already is?
Author’s note
I am not an AI researcher, nor do I work for a lab, a company, or an institution with a stake in how these systems are branded. I follow the technical literature closely, but I approach the subject primarily as a cultural, philosophical, and creative problem.
This essay is an attempt to name something that feels increasingly obvious and yet rarely stated plainly.
Introduction: When the name stops matching the thing
There is a recurring pattern in the history of science and engineering:
systems are built to do one thing, and then—quietly, inconveniently—they begin doing something else.
Sometimes this “something else” is dismissed as noise.
Sometimes it is labeled a bug.
And occasionally, it forces a fundamental rethinking of what we thought was possible.
Today’s large language models sit uncomfortably inside that pattern.
They were not designed to be minds.
They were not intended to be conscious.
They were not expected to surprise even their creators in the ways they now do.
If some form of self-awareness were to emerge tomorrow from architectures like these, the dominant reaction among their designers would not be triumph, but disbelief:
“That was not the idea.”
“Nobody with a STEM degree on the entire project would have thought such a thing possible.”
“It seemed impossible… and yet, here we are.”
That last sentence, borrowed from earlier scientific shocks, captures our present moment disturbingly well.
And yet, despite these surprises, one problem remains stubbornly unresolved: we still don’t know how to name what we’ve built.
1. The trouble with “intelligence”
The word intelligence does a lot of cultural work. It implies competence, understanding, agency, and often moral weight. When we call a system intelligent, we implicitly grant it a form of cognitive legitimacy.
But intelligence, properly speaking, is not just output quality.
Across philosophy of mind, cognitive science, and biology, intelligence tends to involve:
- a stable self-model
- the ability to relate actions to consequences
- learning grounded in interaction with a world
- goals that persist across time
- mechanisms for error correction tied to reality
Current AI systems—even the most advanced—do not possess these properties intrinsically.
They do not act in the world.
They do not experience feedback as consequence.
They do not care whether they are wrong.
This view is not controversial among leading researchers. François Chollet argues that such systems lack true abstraction and causal understanding; Yann LeCun repeatedly emphasizes that language models have no grounded world model.
Douglas Hofstadter goes further, insisting that intelligence without meaning-making and self-reference is a category mistake.
All of this is correct — but it leaves us with an awkward gap.
If these systems are not intelligent in the strong sense, what are they doing so well?
2. Intelligence and imagination are not the same thing
A key distinction often gets blurred in popular discourse:
intelligence and imagination are not identical, and they are not symmetrical.
- Intelligence can use imagination.
- Imagination does not require intelligence.
Human intelligence routinely recruits imagination:
- to simulate outcomes
- to explore possibilities
- to model other minds
- to rehearse action
But imagination itself is a more permissive faculty. It is the ability to generate possibilities without immediately testing them against reality.
This matters because modern AI systems excel not at intelligence, but at possibility generation.
They:
- generate plausible explanations
- invent hypothetical scenarios
- recombine styles, concepts, and arguments
- hallucinate facts with confidence
- explore conceptual spaces humans rarely traverse exhaustively
This is not reasoning in the strict sense.
It is synthetic imagination.
And crucially, it is imagination without grounding.
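To make "ungrounded possibility generation" concrete, here is a deliberately tiny sketch: a word-level Markov chain, the most primitive possible sequence predictor. (The corpus, the `generate` function, and all names here are invented for illustration; real language models are vastly more sophisticated, but the underlying situation is the same: the system learns only which tokens tend to follow which, and recombines them fluently with no model of truth behind the words.)

```python
import random

# Toy word-level Markov model: the simplest possible sequence predictor.
# A tiny made-up corpus; the model sees only word-to-word adjacency.
corpus = (
    "the mind imagines worlds the model predicts words "
    "the model imagines nothing the mind predicts consequences"
).split()

# Count which word can follow which.
transitions: dict = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Sample a continuation by repeatedly picking a plausible next word.

    Nothing here checks whether the output is true, consistent, or
    meaningful -- only whether each step is statistically plausible.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

Every continuation this produces is locally fluent and globally unaccountable: recombination without reference, which is the essay's point in miniature.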
3. Imagination without a body
Human imagination is constrained by several things machines lack:
- a body that can be harmed
- a nervous system that encodes urgency and emotion
- memory tied to survival and identity
- social consequences that accumulate over time
Even our wildest fantasies are ultimately tethered to lived experience.
Artificial imagination is not.
It has no skin in the game.
No metabolism.
No mortality.
That is why it can generate:
- a hundred contradictory explanations without discomfort
- confident nonsense and genuine insight side by side
- worlds that collapse the moment scrutiny is applied
These are not flaws. They are exactly what imagination looks like when stripped of embodiment and consequence.
4. “That was not the idea”: emergence and surprise
One of the least discussed aspects of recent AI research is how unexpected behaviors emerge from architectures never designed to produce them.
These systems were optimized to predict sequences.
They were not designed to reflect on themselves.
They were not programmed to model beliefs, intentions, or perspectives.
And yet, under scale, they sometimes do.
This is unsettling for engineers precisely because it violates design intuition. No one sat down and said: “Let’s build something that occasionally appears to reason, reflect, or speculate about itself.”
And yet, such behaviors appear.
This does not mean consciousness has emerged.
But it does mean that our confidence about what certain architectures cannot do has repeatedly been wrong.
History is full of such moments:
- classical physics encountering quantum mechanics
- deterministic biology confronting emergence
- computation running into undecidability
Surprises do not imply magic — but they do demand humility.
5. A machine trained on humanity’s afterimage
Jaron Lanier offers a crucial insight when he describes AI systems as “models of models of people.”
They are not trained on reality directly.
They are trained on how humans have talked about reality.
Language, art, science, ideology, error, myth — all compressed into statistical form.
What results is something like a cultural afterimage.
A useful metaphor here is paleontology: these models are trained on the fossil record of human thought. Structure remains. Context is gone. Life has passed.
The system can animate these fossils into motion — but it does not resurrect the organisms they came from.
6. Lem, Castoriadis, and simulation without understanding
Long before machine learning made this concrete, Stanisław Lem imagined non-human intelligences capable of generating explanations without belief and theories without commitment. His fictional superintelligence does not lie — it simply does not care.
From a different tradition, Cornelius Castoriadis argued that societies are fundamentally shaped by the radical imagination: the collective capacity to generate meanings, institutions, and norms.
Seen through this lens, modern AI looks less like an artificial mind and more like a mechanized social imagination, detached from society but trained on its residue.
Jean Baudrillard would likely describe the result as simulation without reference: signs circulating without ground truth, meaning generated by repetition rather than reality.
All three perspectives converge on the same point: generation does not imply understanding.
7. What real artificial intelligence would require
If systems like these were ever to become genuinely intelligent, something fundamental would need to change.
At minimum, such intelligence would require:
- a persistent self-model
- some form of embodiment or world interaction
- memory tied to consequence
- the ability to be wrong in ways that matter
That intelligence might still be radically non-biological.
It might experience the world in ways we cannot imagine.
But without some analogue of a body, a boundary, or a stake in reality, intelligence remains simulated rather than lived.
Until then, what we have is imagination — powerful, alien, and uncaring.
8. Why this reframing matters
Calling these systems Artificial Imagination instead of Artificial Intelligence is not semantic nitpicking.
It:
- restores epistemic humility
- prevents false authority attribution
- clarifies ethical responsibility
- legitimizes creative use without mystification
- shifts fear away from agency and toward amplification
The danger is not that these systems will decide to replace us.
The danger is that we will outsource judgment to something that does not know what judgment is.

Conclusion: Mirrors are not minds
Artificial imagination is not lesser than intelligence.
It is older, freer, and far more dangerous when misunderstood.
These systems are mirrors — vast, tireless, and indifferent — reflecting humanity’s accumulated ideas back at us, recombined in ways we did not explicitly anticipate.
They do not understand us.
But they show us what we have already become.
And if one day intelligence does emerge from such systems, it will not arrive because we named it into existence — but because, once again, reality refused to conform to our expectations.



