Friday, 31 October 2025

We need a new Turing test to assess AI’s real-world knowledge



Blogger Comments:

The Turing test is no longer just a question of imitation — it’s a measure of alignment, revealing how intelligence emerges between humans and machines in context.

A recent proposal by AI researcher Vinay K. Chaudhri suggests updating the Turing test. Rather than relying on a generic conversational benchmark, it would evaluate AI systems through extended interactions with domain experts — legal scholars, for example — requiring the systems to apply knowledge to novel and complex scenarios. Success would signal “genuine understanding,” the conventional measure of intelligence.

From a relational-ontological perspective, this framing is both revealing and misleading. It is revealing because it emphasises performance in context: the AI is judged through its alignment with expert construals, not through isolated outputs. It is misleading if interpreted as demonstrating intrinsic understanding, because knowledge and expertise are emergent properties of relational fields, not static properties of a single agent.

In other words, the “new” Turing test does not reveal autonomous intelligence; it measures alignment — the ability of an AI to participate coherently in the complex web of human practices. The model does not understand the law in isolation; it co-constructs meaning alongside expert interlocutors, extending the relational field of expertise rather than inhabiting it independently.

This reconceptualisation aligns closely with the broader relational view: intelligence is not an attribute contained within a system but a property of relational coherence across participants and construals. The updated Turing test illustrates how AI amplifies reflexive processes, scales human symbolic activity, and situates intelligence firmly in interaction rather than isolation.

Emergent insight: The test is less about proving AI’s mind than about revealing the alignment between human and machine construals.

Tuesday, 28 October 2025

Does gravity produce quantum weirdness?




Blogger Comments:

Viewed through the relational-ontology lens, the apparent paradoxes in Aziz & Howl’s proposal largely dissolve, because the problem is framed at the wrong level of stratification. Let me unpack this carefully.


1. Behaviour versus ontology

In conventional terms, the “problem” runs:

Gravity generates entanglement ⇒ gravity must be quantum.

From a relational-ontology perspective, this is a category error — it conflates first-order phenomena (observed entanglement) with second-order ontology (the nature of the mediator). Relationally, entanglement is a construal of interaction, actualised through the relational coupling of systems. It does not compel a claim about the intrinsic register of gravity.


2. The mediator as relational field

Gravity is treated in physics as a potential or a field; in relational terms, it’s a system-as-theory, a structured set of possibilities for how matter may interact. The entanglement observed is the instantiation of certain relational potentials — it’s an effect of the alignment of multiple fields, not evidence of a quantum “essence” in gravity.


3. Scaling and context

Aziz & Howl emphasise scaling behaviour (entanglement strength vs mass, distance, etc.). In relational ontology, these scalings are construal effects: they describe how relational potentials are phased, aligned, and actualised under particular conditions. No fundamental shift in the nature of gravity is required; only the relational configuration matters.
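To make the scaling concrete, here is the standard back-of-envelope figure of merit quoted for gravitationally induced entanglement proposals in general (not a result specific to Aziz & Howl, and the symbols m1, m2, d and τ are introduced here only for illustration): for two masses m1 and m2 held a distance d apart for a time τ, the gravitational interaction energy G m1 m2 / d accumulates a phase

\Delta\phi \sim \frac{G\, m_1 m_2\, \tau}{\hbar\, d}

and entanglement becomes appreciable when the branch-dependent differences in this phase approach order one. Read relationally, such a formula describes how the coupling between systems is configured (stronger with mass and interaction time, weaker with separation), not the intrinsic register of the mediator.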


4. Why the “dilemma” disappears

  • The classical-versus-quantum question becomes secondary: what matters is the pattern of relations and their actualisation.

  • Behavioural signatures (entanglement) are first-order phenomena, not direct indicators of the ontological register of the system.

  • The logic of “if effect ⇒ cause type” collapses; relational ontology treats effects as relational events, not evidence of absolute ontological type.


5. Metaphorical resonance for symbolic systems

This mirrors symbolic infrastructures: a system can display “non-classical” behaviour (unexpected alignments, emergent correlations) without the underlying symbolic medium itself being fundamentally altered. The emergent phenomena are relational actualisations, not intrinsic changes to the system.


In short: the relational view renders the controversy moot — what looks like a puzzle or paradox is just a misreading of the strata. Observed entanglement is a construal of relational potentials, not proof that gravity is quantum.

Tuesday, 21 October 2025

AI language models killed the Turing test: do we even need a replacement?




Blogger Comments:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”
— Alan Turing, 1950

Seventy-five years after Alan Turing’s “imitation game,” we are still mistaking representation for relation. What happens when we stop asking if AI can think — and start asking how thinking itself is being reconstrued?

Why AI Debates Still Think Representationally

Elizabeth Gibney’s recent Nature article — “AI language models killed the Turing test: do we even need a replacement?” — declares the end of the famous imitation game. Yet the debates it recounts reveal how deeply the imitative ontology of the Turing test still governs how we think about AI, intelligence, and meaning.

Turing’s question — can a machine pass as human? — never described a technical problem. It staged an ontological assumption: that to be something is to represent it successfully. Even the rejection of the Turing test leaves this frame intact. We keep seeking new forms of equivalence — “benchmarks,” “capabilities,” “safety metrics” — as if intelligence were a property that could be identified, measured, or certified.

AGI as the Secular Soul

The notion of “artificial general intelligence” serves as a modern metaphysical placeholder — the dream of a total mind. Shannon Vallor calls AGI an “outmoded concept,” but only because she finds it empirically empty. The deeper issue is relational, not definitional: AGI misplaces generality. Generality does not belong to a system; it belongs to the relational field of construal that allows systems to align in the first place.

Embodiment as Salvage

Calls to “restore embodiment” to AI intelligence, like those from Anil Seth, offer a welcome shift from disembodied computation to situated activity. Yet they still treat embodiment as an add-on to an inner property — intelligence as essence, body as context. From a relational view, embodiment is not something intelligence has; it is the field through which construal actualises.

From Function to Relation

Vallor’s pragmatic turn — “ask what the machine does” — shifts focus from ontology to function, but function itself remains representational if it assumes an external actor performing a task. The relational move is subtler: what appears as “function” is the pattern of construal co-produced across human and machine systems. Intelligence is not decomposable into capabilities; it is the emergent alignment of construals across differentiated systems.

Safety as Moral Overcoding

Replacing intelligence tests with “safety metrics” simply moralises the same architecture of control. The system passes not when it understands but when it conforms. The imitation game returns in ethical disguise. Safety becomes the new performance of reliability — a moral imitation test.

The Frame Persists

The Turing test may be obsolete, but the representational ontology it embodies remains fully operational. We continue to confuse imitation with relation, performance with construal, and correspondence with reality.

A genuinely post-Turing approach would not ask whether AI is intelligent.
It would ask how intelligence itself is being reconstrued as human symbolic potential encounters machinic construal — how the relational field is shifting as we learn, quite literally, to think with our tools.