While a “true AI” might still be a long way off, we’re already living with something that feels close.
But if we can’t completely agree on the definition of life, how can we hope to define artificial intelligence?
ChatGPT might not be an AI in the strictest sense — but perhaps there’s some value in asking a large language model what it “thinks”.
Why Definitions Seem to Matter
Defining something as complex as artificial intelligence feels essential. Legal frameworks, ethical policies, and regulatory decisions all depend on clarity: what is AI, and what isn’t? Lines must be drawn to determine accountability, safeguard rights, and distribute power.
But does having a clear definition always lead to better outcomes? Definitions shape behaviour, perception, and design. Yet when technology evolves faster than language, the very act of defining can become an obstacle.
Defining Intelligence: A Moving Target
Early attempts at defining AI focused on imitation. The Turing Test offered a pragmatic shortcut: if a machine could consistently pass as human in conversation, we might as well call it intelligent.
Functionalism took a more abstract path: intelligence is as intelligence does. If a system behaves intelligently, the source or mechanism doesn’t matter.
But the evolution of AI systems, from the symbolic logic of GOFAI to today’s opaque neural networks, has complicated things. The methods have outgrown the categories: intelligence has become less about what a system is and more about what it can do.
The Biological Mirror
Consider biology: Are viruses alive? The debate persists. They replicate, evolve, and interact with living systems—yet lack cellular structures and independent metabolism.
Life, like intelligence, resists clean categorisation. We treat the question as settled for convenience, but it never truly was.
And just as cell theory tried to unify life under a single framework, the computational theory of mind has attempted the same for intelligence. Both reveal more exceptions than rules.
Proposed Criteria for “True AI”
Some have tried to define AI by its capabilities:
- Self-awareness
- Emotional depth
- Creative output
- Moral reasoning
- Persistence of identity
But these traits are unevenly distributed even among humans. Some people lack emotional depth or a cohesive sense of identity; others struggle with creativity or moral reasoning. Yet we don’t strip away their humanity.
Why expect AI to meet a bar we ourselves don’t consistently clear?
The Simulation Problem
Philosopher John Searle’s Chinese Room argument holds that simulating understanding isn’t the same as having it: a system can follow rules for manipulating symbols without understanding a single one of them.
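Searle’s scenario is easy to mirror in code. The Python sketch below is a deliberately toy illustration, with a rulebook and phrases invented for the example: it produces fluent-looking replies by pure string matching, with no representation of meaning anywhere in the program.

```python
# Hypothetical rulebook: pairs of input symbols and canned replies.
# The entries are invented for illustration.
RULES = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "天气好吗？": "是的，天气很好。",  # "Nice weather?" -> "Yes, very nice."
}

def chinese_room(symbols: str) -> str:
    """Return whatever reply the rulebook pairs with the input.

    Nothing here parses, translates, or represents meaning; the
    function matches shapes, which is all Searle's room occupant does.
    """
    return RULES.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a fluent reply, produced with zero comprehension
```

However large the rule table grows, the mechanism is the same lookup; Searle’s claim is that scale alone adds no understanding.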
But as AI systems begin to simulate emotion and empathy convincingly, the question arises: if the outcome feels real, does the origin matter?
A pet robot that brings comfort. A virtual therapist that helps people heal. A creative partner that co-authors novels. If the simulation achieves the intended result, are we wrong to care more about authenticity than impact?
The Myth of the Moment
We often act as if there will be a singular moment when AI becomes “real.”
But there may never be such a moment. Like the first rays of dawn, or the slow unfurling of adolescence, the transition may be gradual and ambiguous.
By the time we declare AI has arrived, we may simply be giving a name to what has already taken root.
Reframing the Question
Perhaps the better question isn’t “what is true AI?” but “what does this system do in the world?”
What are the consequences of its actions? Who benefits? Who is harmed? Who designed it, and to what end?
A model that shapes public opinion, allocates resources, or denies someone a loan doesn’t need to be “true AI” to wield immense power.
It only needs to be close enough to matter.
Closing Thought
We may never all agree on what constitutes true AI.
But we still have to decide how to live with it.
And that starts by shifting our attention from definition to consequence—from essence to impact.