Tuesday, 21 October 2025

AI language models killed the Turing test: do we even need a replacement?

Blogger Comments:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”
— Alan Turing, 1950

Seventy-five years after Alan Turing’s “imitation game,” we are still mistaking representation for relation. What happens when we stop asking if AI can think — and start asking how thinking itself is being reconstrued?

Why AI Debates Still Think Representationally

Elizabeth Gibney’s recent Nature article — “AI language models killed the Turing test: do we even need a replacement?” — declares the end of the famous imitation game. Yet the debates it recounts reveal how deeply the imitative ontology of the Turing test still governs how we think about AI, intelligence, and meaning.

Turing’s question — can a machine pass as human? — never described a technical problem. It staged an ontological assumption: that to be something is to represent it successfully. Even the rejection of the Turing test leaves this frame intact. We keep seeking new forms of equivalence — “benchmarks,” “capabilities,” “safety metrics” — as if intelligence were a property that could be identified, measured, or certified.

AGI as the Secular Soul

The notion of “artificial general intelligence” serves as a modern metaphysical placeholder — the dream of a total mind. Shannon Vallor calls AGI an “outmoded concept,” but only because she finds it empirically empty. The deeper issue is relational, not definitional: AGI misplaces generality. Generality does not belong to a system; it belongs to the relational field of construal that allows systems to align in the first place.

Embodiment as Salvage

Calls to “restore embodiment” to accounts of AI intelligence, like those from Anil Seth, offer a welcome shift from disembodied computation to situated activity. Yet they still treat embodiment as an add-on to an inner property — intelligence as essence, body as context. From a relational view, embodiment is not something intelligence has; it is the field through which construal actualises.

From Function to Relation

Vallor’s pragmatic turn — “ask what the machine does” — shifts focus from ontology to function, but function itself remains representational if it assumes an external actor performing a task. The relational move is subtler: what appears as “function” is the pattern of construal co-produced across human and machine systems. Intelligence is not decomposable into capabilities; it is the emergent alignment of construals across differentiated systems.

Safety as Moral Overcoding

Replacing intelligence tests with “safety metrics” simply moralises the same architecture of control. The system passes not when it understands but when it conforms. The imitation game returns in ethical disguise. Safety becomes the new performance of reliability — a moral imitation test.

The Frame Persists

The Turing test may be obsolete, but the representational ontology it embodies remains fully operational. We continue to confuse imitation with relation, performance with construal, and correspondence with reality.

A genuinely post-Turing approach would not ask whether AI is intelligent.
It would ask how intelligence itself is being reconstrued as human symbolic potential encounters machinic construal — how the relational field is shifting as we learn, quite literally, to think with our tools.