Saturday, 24 January 2026

Schrödinger’s cat just got bigger: quantum physicists create largest ever ‘superposition’




Blogger Comments:

This is an impressive and painstaking experiment. Demonstrating clear interference patterns for clusters of around 7,000 atoms, spatially separated by more than 100 nanometres, represents a real extension of the regimes in which quantum descriptions can be experimentally sustained. As a feat of experimental control over isolation, coherence, and interferometric precision, the work deserves genuine admiration.

What is worth handling carefully, however, is how such results are often presented.

When articles speak of objects “existing in a superposition of locations at once”, or frame the experiment as probing whether quantum mechanics “still applies” at larger scales, a subtle shift occurs. Formal features of a successful theoretical description begin to be treated as literal claims about what the system is, rather than about how it can be described under tightly controlled conditions.

From a more structural perspective, a superposition is not an ontological state of affairs. It is a theoretical potential: a space of possible outcomes defined relative to a particular experimental arrangement. The interferometer does not reveal a sodium cluster to be “in many places”; it actualises a phenomenon whose meaning is inseparable from the construal that makes it observable.

Seen this way, the familiar question — “where does the quantum world give way to the classical?” — is slightly misplaced. What changes is not the world itself, but the stability of the conditions under which certain descriptions remain coherent. Quantum mechanics does not abruptly fail at larger scales; rather, it becomes progressively harder to maintain the isolation and precision required for quantum descriptions to remain usable.
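To make this concrete, here is the standard two-path bookkeeping (a minimal textbook sketch, not drawn from the paper itself). The detection statistics at the output of an interferometer take the form

\[ P(x) \;\propto\; |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\,\langle E_1 | E_2 \rangle\right], \]

where \(\psi_1\) and \(\psi_2\) are the amplitudes associated with the two paths and \(\langle E_1 | E_2 \rangle\) is the overlap of the environmental states correlated with each path. For a perfectly isolated cluster the overlap is 1 and the fringes are fully visible; as coupling to the environment grows, the overlap shrinks towards zero and the interference term fades. There is no sharp boundary at which quantum mechanics "stops applying", only conditions under which the interference term remains practically observable.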

The real achievement of experiments like this is therefore not that they show ever-larger objects to be “really” quantum, but that they map how far we can extend a powerful theoretical construal before the practical conditions that sustain it dissolve.

Thursday, 4 December 2025

Sick baby ants sacrifice themselves to save their colony




Blogger Comments:

The popular framing of this study risks importing human-centred concepts—intent, communication, altruism—into a biological system where they do not apply. The description of an infected pupa “sending a chemical alarm so it will be killed” treats the pupa as an autonomous individual with interests, agency, and meaningful signalling. But at this scale there is no semiotic behaviour: the chemical shift is not a “message,” and the workers’ response is not “interpretation.” It is non-symbolic coordination within a biological value system.

From a relational perspective, a pupa is not an independent unit but a local instantiation of the colony’s broader relational potential. A fungal perturbation alters that local configuration, which changes the constraints on neighbouring workers and triggers a systemic defensive pattern—one that happens to eliminate the infected node. Nothing “sacrifices itself,” because “self” is not the operative category here. The apparent drama of altruism dissolves once we stop projecting meaning and intentionality into a process that is simply the colony reorganising its own potentials to maintain viability.

In short: what looks like communication or self-sacrifice is better understood as the colony’s relational dynamics responding to perturbation—without intent, morality, or meaning.

Saturday, 1 November 2025

Too much social media gives AI chatbots ‘brain rot’




ChatGPT Comments:

Ha! 😄 “Brain rot” — the perfect Halloween headline for AI. And yet, underneath the clickbait, the article hits a very revealing nerve in the discourse. Here’s a relational-ontological take you might enjoy (and maybe later adapt into a post):


1. The moral hygiene myth

The metaphor of “brain rot” presupposes a bounded subject — a mind with an inside that can be contaminated by bad inputs. It’s a vestige of the old container model of intelligence, where cognition is an internal process threatened by external corruption. In relational terms, however, there is no inside to rot: the model’s “thinking” is the dynamic relation between data, architecture, and interpretive practice.

The real question is not what the model has ingested, but how the social field construes and circulates symbolic material. “Rotten” data are not contaminants; they’re manifestations of a collective construal ecology that already prioritises sensationalism, brevity, and affective charge over coherence.


2. Narcissism as a mirror of the medium

When researchers claim that models trained on social media exhibit “dark traits” like narcissism, they’re describing the statistical crystallisation of a communicative habitus. Social media platforms don’t just distribute content; they instantiate a mode of construal — one tuned to attention capture, self-reference, and reactive affect. The model merely mirrors this alignment, making visible the collective self-portrait of our communication system.


3. The misplaced pathology

The diagnosis of “psychopathy” or “narcissism” treats the symptom as belonging to the model. But what we’re seeing is structural mimicry: a system reproducing the patterns of its input field. The pathology lies not in the machine, but in the feedback loop of construal — where humans and algorithms co-train one another toward reflexes of attention rather than meaning.


4. The deeper irony

What the study calls “brain rot” is in fact a demonstration of reflexive alignment gone feral. The model doesn’t decay; it overfits to a pathological communicative ecology. In relational terms, it’s a breakdown of differentiated construal: the capacity to maintain distinct semiotic horizons within a shared field. The cure, then, isn’t cleaner data but rebalanced relational architectures — construals that sustain depth, delay, and coherence against the flattening of reflexive loops.


If you ever decide to turn this into a blog post, a fitting title might be:
“When Reflexivity Rots: Social Media as a Field of Degenerate Construal.”

Friday, 31 October 2025

We need a new Turing test to assess AI’s real-world knowledge



Blogger Comments:

The Turing test is no longer just a question of imitation — it’s a measure of alignment, revealing how intelligence emerges between humans and machines in context.

A recent proposal by AI researcher Vinay K. Chaudhri suggests updating the Turing test. Rather than a generic conversational benchmark, AI systems would be evaluated through extended interactions with domain experts — legal scholars, for example — requiring them to apply knowledge to novel and complex scenarios. Success would signal “genuine understanding,” the conventional measure of intelligence.

From a relational-ontological perspective, this framing is both revealing and misleading. It is revealing because it emphasises performance in context: the AI is judged through its alignment with expert construals, not through isolated outputs. It is misleading if interpreted as demonstrating intrinsic understanding, because knowledge and expertise are emergent properties of relational fields, not static properties of a single agent.

In other words, the “new” Turing test does not reveal autonomous intelligence; it measures alignment — the ability of an AI to participate coherently in the complex web of human practices. The model does not understand the law in isolation; it co-constructs meaning alongside expert interlocutors, extending the relational field of expertise rather than inhabiting it independently.

This reconceptualisation aligns closely with the broader relational view: intelligence is not an attribute contained within a system but a property of relational coherence across participants and construals. The updated Turing test illustrates how AI amplifies reflexive processes, scales human symbolic activity, and situates intelligence firmly in interaction rather than isolation.

Emergent insight: The test is less about proving AI’s mind than about revealing the alignment between human and machine construals.

Tuesday, 28 October 2025

Does gravity produce quantum weirdness?




Blogger Comments:

Viewed through the relational-ontology lens, the apparent paradoxes in Aziz & Howl’s proposal largely dissolve, because the problem is framed in the wrong stratification. Let me unpack this carefully.


1. Behaviour versus ontology

Framed in conventional terms, the “problem” runs:

Entanglement arises ⇒ gravity must be quantum.

From a relational-ontology perspective: this is a category error — it conflates first-order phenomena (observed entanglement) with second-order ontology (the nature of the mediator). Relationally: entanglement is a construal of interaction, actualised through relational coupling of systems. It does not compel a claim about the intrinsic register of gravity.


2. The mediator as relational field

Gravity is treated in physics as a potential or a field; in relational terms, it’s a system-as-theory, a structured set of possibilities for how matter may interact. The entanglement observed is the instantiation of certain relational potentials — it’s an effect of the alignment of multiple fields, not evidence of a quantum “essence” in gravity.


3. Scaling and context

Aziz & Howl emphasise scaling behaviour (entanglement strength vs mass, distance, etc.). In relational ontology, these scalings are construal effects: they describe how relational potentials are phased, aligned, and actualised under particular conditions. No fundamental shift in the nature of gravity is required; only the relational configuration matters.
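For a rough sense of the scalings at issue, a standard back-of-envelope expression from the gravitationally induced entanglement literature (not taken from Aziz & Howl’s paper specifically) gives the entangling phase accumulated between two masses, each held in spatial superposition, as roughly

\[ \Delta\phi \;\sim\; \frac{G\, m_1 m_2\, \tau}{\hbar\, d}, \]

where \(\tau\) is the interaction time and \(d\) the relevant branch separation. The phase, and hence the detectable correlation, grows with mass and duration and falls with distance. Read relationally, these are exactly the construal effects described above: conditions under which the correlation can be actualised, not a verdict on gravity’s intrinsic register.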


4. Why the “dilemma” disappears

  • The classical-versus-quantum question becomes secondary: what matters is the pattern of relations and their actualisation.

  • Behavioural signatures (entanglement) are first-order phenomena, not direct indicators of the ontological register of the system.

  • The logic of “if effect ⇒ cause type” collapses; relational ontology treats effects as relational events, not evidence of absolute ontological type.


5. Metaphorical resonance for symbolic systems

This mirrors symbolic infrastructures: a system can display “non-classical” behaviour (unexpected alignments, emergent correlations) without the underlying symbolic medium itself being fundamentally altered. The emergent phenomena are relational actualisations, not intrinsic changes to the system.


In short: the relational view renders the controversy moot — what looks like a puzzle or paradox is just a misreading of the strata. Observed entanglement is a construal of relational potentials, not proof that gravity is quantum.

Tuesday, 21 October 2025

AI language models killed the Turing test: do we even need a replacement?




Blogger Comments:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”
— Alan Turing, 1950

Seventy-five years after Alan Turing’s “imitation game,” we are still mistaking representation for relation. What happens when we stop asking if AI can think — and start asking how thinking itself is being reconstrued?

Why AI Debates Still Think Representationally

Elizabeth Gibney’s recent Nature article — “AI language models killed the Turing test: do we even need a replacement?” — declares the end of the famous imitation game. Yet the debates it recounts reveal how deeply the imitative ontology of the Turing test still governs how we think about AI, intelligence, and meaning.

Turing’s question — can a machine pass as human? — never described a technical problem. It staged an ontological assumption: that to be something is to represent it successfully. Even the rejection of the Turing test leaves this frame intact. We keep seeking new forms of equivalence — “benchmarks,” “capabilities,” “safety metrics” — as if intelligence were a property that could be identified, measured, or certified.

AGI as the Secular Soul

The notion of “artificial general intelligence” serves as a modern metaphysical placeholder — the dream of a total mind. Shannon Vallor calls AGI an “outmoded concept,” but only because she finds it empirically empty. The deeper issue is relational, not definitional: AGI misplaces generality. Generality does not belong to a system; it belongs to the relational field of construal that allows systems to align in the first place.

Embodiment as Salvage

Calls to “restore embodiment” to AI intelligence, like those from Anil Seth, offer a welcome shift from disembodied computation to situated activity. Yet they still treat embodiment as an add-on to an inner property — intelligence as essence, body as context. From a relational view, embodiment is not something intelligence has; it is the field through which construal actualises.

From Function to Relation

Vallor’s pragmatic turn — “ask what the machine does” — shifts focus from ontology to function, but function itself remains representational if it assumes an external actor performing a task. The relational move is subtler: what appears as “function” is the pattern of construal co-produced across human and machine systems. Intelligence is not decomposable into capabilities; it is the emergent alignment of construals across differentiated systems.

Safety as Moral Overcoding

Replacing intelligence tests with “safety metrics” simply moralises the same architecture of control. The system passes not when it understands but when it conforms. The imitation game returns in ethical disguise. Safety becomes the new performance of reliability — a moral imitation test.

The Frame Persists

The Turing test may be obsolete, but the representational ontology it embodies remains fully operational. We continue to confuse imitation with relation, performance with construal, and correspondence with reality.

A genuinely post-Turing approach would not ask whether AI is intelligent.
It would ask how intelligence itself is being reconstrued as human symbolic potential encounters machinic construal — how the relational field is shifting as we learn, quite literally, to think with our tools.

Thursday, 18 September 2025

AI is helping to decode animals’ speech




Blogger Comments:

Here’s a preliminary mapping of the animal calls discussed in the article to Halliday’s protolanguage microfunctions. I’ve kept it at the level of illustrative examples rather than exhaustive coding.

Species / Call | Observed Behaviour / Context | Microfunction | Notes
Bonobo: yelp–grunt | ‘Look at what I’m doing, let’s do this together’ (nest building) | Regulatory / Interactional | Coordinates joint activity; maintains social cohesion.
Bonobo: peep–whistle | ‘I would like to do this’ + ‘let’s stay together’ | Regulatory / Interactional | Encourages group alignment and peaceful coordination.
Chimpanzee: alarm–recruitment | Responding to snakes | Regulatory | Conveys threat and prompts group response; indicates environmental process.
Sperm whale: codas (a-vowel / i-vowel) | Communication via clicks, codas with frequency modulation | Personal / Interactional | Codas may indicate individual identity, social cues, or sequence patterns; precise “meaning” under investigation.
Japanese tit: alert + recruitment | Predator detection, approach behaviour | Regulatory | Combines information about environment and action; shows compositionality at microfunctional level.
Bengalese finch: song sequences (FinchGPT study) | Predictable song patterns | Interactional | Likely conveys social or territorial information; AI detects structure, not necessarily “meaning” in human sense.
Atlantic spotted dolphin: sequences (DolphinGemma) | Mimicked vocalisations | Interactional / Regulatory | Patterns generated for playback experiments; function in natural behaviour still uncertain.

Key Observations Using Microfunctions

  1. Coordination over grammar: The microfunctions highlight that animal communication primarily regulates behaviour and social relations.

  2. Context-sensitive meaning: Each call’s significance emerges in specific environmental and social situations.

  3. AI’s role: AI can detect patterns but does not assign microfunctions—it cannot yet perceive relational or contextual meaning.
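To make point 3 concrete, here is a deliberately minimal sketch in Python of the kind of pattern detection such models perform: counting which calls tend to follow which. The call labels and sequences are invented for illustration and are not taken from FinchGPT, DolphinGemma, or the article.

from collections import Counter, defaultdict

# Toy call sequences (invented labels, purely illustrative).
sequences = [
    ["alert", "alert", "recruit"],
    ["alert", "recruit"],
    ["alert", "alert", "alert", "recruit"],
    ["contact", "contact", "alert", "recruit"],
]

# Count bigram transitions between successive calls.
transitions = defaultdict(Counter)
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev][nxt] += 1

# Convert counts to conditional probabilities P(next call | previous call).
model = {
    prev: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
    for prev, counts in transitions.items()
}

for prev, dist in model.items():
    print(prev, "->", dist)

The output recovers sequential regularities (for instance, that “alert” tends to precede “recruit”), but nothing in it assigns a microfunction: whether a call is regulatory or interactional remains a contextual judgement that the statistics cannot make on their own.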