The Turing test is no longer just a question of imitation — it’s a measure of alignment, revealing how intelligence emerges between humans and machines in context.
A recent proposal by AI researcher Vinay K. Chaudhri suggests updating the Turing test. Rather than relying on a generic conversational benchmark, the updated test would evaluate AI systems through extended interactions with domain experts (legal scholars, for example), requiring the systems to apply knowledge to novel and complex scenarios. Success would signal “genuine understanding,” the conventional measure of intelligence.
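Chaudhri’s proposal is a protocol rather than an implementation, but a minimal sketch can make the “alignment” reading concrete. The sketch below is hypothetical: the names (`run_session`, `rate`), the session structure, and the 0-to-1 rating scale are assumptions for illustration, with a placeholder standing in for the human expert’s judgment.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable


@dataclass
class Turn:
    scenario: str   # novel scenario posed by the domain expert
    response: str   # the model's answer
    rating: float   # expert's alignment rating in [0, 1] (assumed scale)


@dataclass
class ExpertSession:
    """One extended interaction between a domain expert and a model."""
    turns: list[Turn] = field(default_factory=list)

    def score(self) -> float:
        # The session score is the mean expert rating: a measure of how
        # coherently the model participated, not of intrinsic understanding.
        return mean(t.rating for t in self.turns)


def run_session(scenarios: list[str],
                model: Callable[[str], str],
                rate: Callable[[str, str], float]) -> ExpertSession:
    """Evaluate a model over a sequence of novel expert scenarios.

    `model` maps a scenario to a response; `rate` stands in for the
    human expert's judgment of each (scenario, response) pair.
    """
    session = ExpertSession()
    for scenario in scenarios:
        response = model(scenario)
        session.turns.append(Turn(scenario, response, rate(scenario, response)))
    return session


if __name__ == "__main__":
    # Toy stand-ins: a real evaluation would use a live model and a human expert.
    def toy_model(scenario: str) -> str:
        return f"Analysis of: {scenario}"

    def toy_rating(scenario: str, response: str) -> float:
        return 0.8  # placeholder for a human expert's judgment

    session = run_session(
        ["A contract signed under duress by an AI agent",
         "A novel tort scenario with no controlling precedent"],
        toy_model,
        toy_rating,
    )
    print(f"Alignment score: {session.score():.2f}")
```

Note that the only quantity such a harness produces is the expert’s rating of each exchange; nothing in the loop inspects the model’s internals, which is precisely why the score reads as a measure of alignment rather than of intrinsic understanding.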
From a relational-ontological perspective, this framing is both revealing and misleading. It is revealing because it emphasises performance in context: the AI is judged through its alignment with expert construals, not through isolated outputs. It is misleading if interpreted as demonstrating intrinsic understanding, because knowledge and expertise are emergent properties of relational fields, not static attributes of a single agent.
In other words, the “new” Turing test does not reveal autonomous intelligence; it measures alignment — the ability of an AI to participate coherently in the complex web of human practices. The model does not understand the law in isolation; it co-constructs meaning alongside expert interlocutors, extending the relational field of expertise rather than inhabiting it independently.
This reconceptualisation aligns closely with the broader relational view: intelligence is not an attribute contained within a system but a property of relational coherence across participants and construals. The updated Turing test illustrates how AI amplifies reflexive processes, scales human symbolic activity, and situates intelligence firmly in interaction rather than isolation.
Emergent insight: the test is less about proving that an AI has a mind than about revealing the degree of alignment between human and machine construals.