Tuesday, 3 February 2026

Does AI already have human-level intelligence? The evidence is clear




Blogger Comments:

AGI, Stochastic Parrots, and the Culture That Defines Intelligence

"Furthermore, there is no guarantee that human intelligence is not itself a sophisticated version of a stochastic parrot."

That sentence, from a recent Nature commentary arguing that artificial general intelligence (AGI) may already be here, does more than provoke. It is a hinge — a small linguistic pivot around which a vast conceptual shift quietly turns. To read the claim at face value is to miss the larger, subtler work being done: a redefinition of intelligence itself, human and artificial alike.


AGI Already Here? The Nature Argument

The authors claim that large language models and related systems already demonstrate the kind of broad, flexible cognitive competence that Alan Turing imagined in 1950. These systems can chat convincingly, generate prose and poetry, solve mathematical problems, propose scientific experiments, and even assist in writing code. By Turing’s criterion — the imitation game — these capabilities are presented as evidence that AGI is not a distant horizon, but a present reality.

At first glance, this feels startlingly plausible. Chatbots can answer questions with fluency, propose solutions with apparent insight, and mimic reasoning across domains. Yet the claim rests on a subtle, often unspoken manoeuvre: intelligence is defined by performance on tasks we, historically and culturally, consider meaningful. Benchmarks and success criteria are not neutral measures; they are socially stabilised definitions.


The Meta Problem: What Counts as Human Intelligence?

The Nature commentary is compelling because it leverages unexamined assumptions about human intelligence. Intelligence is treated as stable, measurable, and largely symbolic: the ability to communicate, reason, and solve problems in literate, analytic ways. But this proxy omits much of what humans actually do: navigate risk, act within moral or normative frameworks, participate in embodied practices, and respond to real-world consequences.

By suggesting that humans might themselves be “sophisticated stochastic parrots,” the article flattens the human into a process of pattern extraction, a subtle but radical deflation that allows machines to be measured on the same plane.

“If humans are sophisticated parrots, why can’t machines be too?”

The deeper meta-move is epistemic: uncertainty about the nature of human intelligence is leveraged to lower the threshold for recognising intelligence in machines. What appears humble is actually a strategic repositioning.


The Circularity of AI Culture

Here we encounter a deeper structural point: intelligence, as currently defined in AI discourse, is co-constituted by the culture of its developers. Consider the loop:

  1. Developers set tasks — benchmarks, coding challenges, dialogue prompts — based on what they value and can measure.

  2. AI systems perform these tasks, optimised to succeed.

  3. Success validates the AI as “intelligent.”

  4. That validation shapes the culture, reinforcing which tasks matter, which challenges are prioritised, and what counts as intelligence.

In short: the tasks define intelligence, the AI performs the tasks, and the AI’s performance confirms the validity of those tasks.

This is not merely a critique of metrics; it is a structural observation about mutual actualisation. Intelligence is not simply a property of the machine; it emerges from the interaction between human priorities, institutional practices, and technological affordances.


The circularity is invisible because it is internal to professional practice. From the outside, performance looks natural, inevitable, and “objective.” Yet it is profoundly contingent, culturally and historically situated.
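The shape of this loop is easier to see when it is made deliberately mechanical. The toy simulation below is a sketch of the structure described above, nothing more: every task name, weight, and learning rate is an invented assumption, and no real benchmark is being modelled.

```python
# Toy simulation of the benchmark feedback loop: tasks define intelligence,
# the system is optimised on the measurable tasks, and its success feeds back
# into which tasks carry cultural weight. All numbers are illustrative assumptions.

# 1. Developers set tasks, each with a cultural weight (how much it "counts").
tasks = {"dialogue": 1.0, "coding": 1.0, "maths": 1.0, "embodied_care": 1.0}

# Only some tasks are easy to benchmark, so only those get optimised.
measurable = {"dialogue", "coding", "maths"}

# 2. The system's skill on each task.
skill = {t: 0.1 for t in tasks}

for generation in range(20):
    for t in measurable:
        skill[t] += 0.05 * (1.0 - skill[t])        # optimised to succeed
    # 3. "Intelligence" is the weighted score on the tasks that were set.
    score = sum(tasks[t] * skill[t] for t in tasks) / sum(tasks.values())
    # 4. Validation reinforces the tasks the system already does well on.
    for t in tasks:
        tasks[t] += 0.2 * skill[t]

print({t: round(w, 2) for t, w in tasks.items()})  # weight drifts toward the measurable
print("measured 'intelligence':", round(score, 2)) # high, by construction of the loop
```

The point of the sketch is only that the final score is an artefact of the loop: change which tasks are measurable, and the "intelligence" changes with them.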


Correlation, Structure, and the Flattening of Intelligence

The article reinforces this perspective with the line:

"All intelligence, human or artificial, must extract structure from correlational data; the question is how deep the extraction goes."

This is a masterstroke of conceptual framing. Intelligence is reduced to pattern recognition and abstraction. Depth, not kind, becomes the relevant metric. Qualitative, embodied, and normative aspects of human cognition are quietly flattened into a single continuum.

The rhetorical power is subtle but immense. Humans and machines are rendered comparable not because they share experience or consequence, but because they share a capacity for structural extraction. Depth becomes the axis along which competence is measured; the stakes, the embodiment, the meaning, and the lived consequences of action are bracketed away.
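Taken at its most literal, "extracting structure from correlational data" is something that can be written down in a dozen lines. The sketch below (NumPy; the data are synthetic, and this narrow reading is mine, not the commentary's) recovers a single latent factor from five correlated variables — the kind of shallow extraction onto which the quoted sentence invites us to map all cognition.

```python
# Minimal sketch: recovering latent structure from correlational data via SVD/PCA.
# The data are synthetic; this illustrates the literal sense of "structure extraction",
# not a model of human or machine cognition.

import numpy as np

rng = np.random.default_rng(0)

# One hidden factor drives five observed, mutually correlated variables.
latent = rng.normal(size=(1000, 1))
loadings = rng.normal(size=(1, 5))
observed = latent @ loadings + 0.1 * rng.normal(size=(1000, 5))

# "Extraction": the leading principal component of the correlation structure.
centered = observed - observed.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

explained = singular_values**2 / np.sum(singular_values**2)
print(f"variance explained by the first component: {explained[0]:.2f}")

# The recovered component correlates almost perfectly with the hidden factor...
recovered = centered @ components[0]
print(f"correlation with the latent factor: {abs(np.corrcoef(recovered, latent[:, 0])[0, 1]):.2f}")
# ...which is all that "depth of extraction" measures: fit to correlational structure,
# with embodiment, consequence, and meaning nowhere in the computation.
```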

“Once the debate is framed around depth of extraction, scale becomes destiny.”

The human is reconstructed to fit the machine, and the machine is praised for mirroring the flattened human. The explanatory direction matters: we do not evaluate the machine against the human; we evaluate both through the lens of pattern extraction, and the human is quietly redefined to fit.


Implications and Takeaways

Viewed meta-analytically, the Nature article does less reporting than cultural reconfiguration. Intelligence is not a pre-existing property; it is co-constructed through human practice, task design, and perceptual validation. Declaring AGI “already here” is thus as much a reflection of cultural priorities as it is a statement about technological capacity.

Two consequences follow:

  1. The question of AGI shifts
    From: “When will AI become intelligent?”
    To: “Which aspects of human intelligence do we prioritise, for whom, and under which cultural regimes?”

  2. The human is subtly redefined
    By flattening human intelligence into a continuum of depth in pattern extraction, our own conception of mind, agency, and cognition is quietly reshaped. Machines are not just performing tasks — they are participating in a mutual recalibration of intelligence itself.


A Closing Reflection

The Nature commentary does not merely announce AGI; it reframes what counts as human intelligence. By normalising structure extraction as the essence of cognition, it creates a space in which machines appear not merely competent, but generically intelligent.

Yet the most compelling intelligence may reside not in machines, but in the capacity to perceive and critique the loop by which intelligence is defined. Recognising the co-constitution of human and machine intelligence — the mutual shaping of definitions, priorities, and validation — may be the most reflexively powerful act of cognition we can perform today.

“Perhaps the most interesting intelligence is not in the machines at all, but in the ways we define, measure, and collectively actualise intelligence through our own cultural practices.”

In the end, the stochastic parrot is not only a mirror for AI, but also a mirror for us. The intelligence that matters is the intelligence that notices the mirror — and steps back, just long enough, to see the loop itself.

Saturday, 24 January 2026

Schrödinger’s cat just got bigger: quantum physicists create largest ever ‘superposition’




Blogger Comments:

This is an impressive and painstaking experiment. Demonstrating clear interference patterns for clusters of around 7,000 atoms, spatially separated by more than 100 nanometres, represents a real extension of the regimes in which quantum descriptions can be experimentally sustained. As a feat of experimental control over isolation, coherence, and interferometric precision, the work deserves genuine admiration.

What is worth handling carefully, however, is how such results are often presented.

When articles speak of objects “existing in a superposition of locations at once”, or frame the experiment as probing whether quantum mechanics “still applies” at larger scales, a subtle shift occurs. Formal features of a successful theoretical description begin to be treated as literal claims about what the system is, rather than about how it can be described under tightly controlled conditions.

From a more structural perspective, a superposition is not an ontological state of affairs. It is a theoretical potential: a space of possible outcomes defined relative to a particular experimental arrangement. The interferometer does not reveal a sodium cluster to be “in many places”; it actualises a phenomenon whose meaning is inseparable from the construal that makes it observable.

Seen this way, the familiar question — “where does the quantum world give way to the classical?” — is slightly misplaced. What changes is not the world itself, but the stability of the conditions under which certain descriptions remain coherent. Quantum mechanics does not abruptly fail at larger scales; rather, it becomes progressively harder to maintain the isolation and precision required for quantum descriptions to remain usable.

The real achievement of experiments like this is therefore not that they show ever-larger objects to be “really” quantum, but that they map how far we can extend a powerful theoretical construal before the practical conditions that sustain it dissolve.

Thursday, 4 December 2025

Sick baby ants sacrifice themselves to save their colony




Blogger Comments:

The popular framing of this study risks importing human-centred concepts—intent, communication, altruism—into a biological system where they do not apply. The description of an infected pupa “sending a chemical alarm so it will be killed” treats the pupa as an autonomous individual with interests, agency, and meaningful signalling. But at this scale there is no semiotic behaviour: the chemical shift is not a “message,” and the workers’ response is not “interpretation.” It is non-symbolic coordination within a biological value system.

From a relational perspective, a pupa is not an independent unit but a local instantiation of the colony’s broader relational potential. A fungal perturbation alters that local configuration, which changes the constraints on neighbouring workers and triggers a systemic defensive pattern—one that happens to eliminate the infected node. Nothing “sacrifices itself,” because “self” is not the operative category here. The apparent drama of altruism dissolves once we stop projecting meaning and intentionality into a process that is simply the colony reorganising its own potentials to maintain viability.

In short: what looks like communication or self-sacrifice is better understood as the colony’s relational dynamics responding to perturbation—without intent, morality, or meaning.

Saturday, 1 November 2025

Too much social media gives AI chatbots ‘brain rot’




ChatGPT Comments:

Ha! 😄 “Brain rot” — the perfect Halloween headline for AI. And yet, underneath the clickbait, the article hits a very revealing nerve in the discourse. Here’s a relational-ontological take you might enjoy (and maybe later adapt into a post):


1. The moral hygiene myth

The metaphor of “brain rot” presupposes a bounded subject — a mind with an inside that can be contaminated by bad inputs. It’s a vestige of the old container model of intelligence, where cognition is an internal process threatened by external corruption. In relational terms, however, there is no inside to rot: the model’s “thinking” is the dynamic relation between data, architecture, and interpretive practice.

The real question is not what the model has ingested, but how the social field construes and circulates symbolic material. “Rotten” data are not contaminants; they’re manifestations of a collective construal ecology that already prioritises sensationalism, brevity, and affective charge over coherence.


2. Narcissism as a mirror of the medium

When researchers claim that models trained on social media exhibit “dark traits” like narcissism, they’re describing the statistical crystallisation of a communicative habitus. Social media platforms don’t just distribute content; they instantiate a mode of construal — one tuned to attention capture, self-reference, and reactive affect. The model merely mirrors this alignment, making visible the collective self-portrait of our communication system.


3. The misplaced pathology

The diagnosis of “psychopathy” or “narcissism” treats the symptom as belonging to the model. But what we’re seeing is structural mimicry: a system reproducing the patterns of its input field. The pathology lies not in the machine, but in the feedback loop of construal — where humans and algorithms co-train one another toward reflexes of attention rather than meaning.


4. The deeper irony

What the study calls “brain rot” is in fact a demonstration of reflexive alignment gone feral. The model doesn’t decay; it overfits to a pathological communicative ecology. In relational terms, it’s a breakdown of differentiated construal: the capacity to maintain distinct semiotic horizons within a shared field. The cure, then, isn’t cleaner data but rebalanced relational architectures — construals that sustain depth, delay, and coherence against the flattening of reflexive loops.


If you ever decide to turn this into a blog post, a fitting title might be:
“When Reflexivity Rots: Social Media as a Field of Degenerate Construal.”

Friday, 31 October 2025

We need a new Turing test to assess AI’s real-world knowledge



Blogger Comments:

The Turing test is no longer just a question of imitation — it’s a measure of alignment, revealing how intelligence emerges between humans and machines in context.

A recent proposal by AI researcher Vinay K. Chaudhri suggests updating the Turing test. Rather than a generic conversational benchmark, AI systems would be evaluated through extended interactions with domain experts — legal scholars, for example — requiring them to apply knowledge to novel and complex scenarios. Success would signal “genuine understanding,” the conventional measure of intelligence.

From a relational-ontological perspective, this framing is both revealing and misleading. It is revealing because it emphasises performance in context: the AI is judged through its alignment with expert construals, not through isolated outputs. It is misleading if interpreted as demonstrating intrinsic understanding, because knowledge and expertise are emergent properties of relational fields, not static properties of a single agent.

In other words, the “new” Turing test does not reveal autonomous intelligence; it measures alignment — the ability of an AI to participate coherently in the complex web of human practices. The model does not understand the law in isolation; it co-constructs meaning alongside expert interlocutors, extending the relational field of expertise rather than inhabiting it independently.

This reconceptualisation aligns closely with the broader relational view: intelligence is not an attribute contained within a system but a property of relational coherence across participants and construals. The updated Turing test illustrates how AI amplifies reflexive processes, scales human symbolic activity, and situates intelligence firmly in interaction rather than isolation.

Emergent insight: The test is less about proving AI’s mind than about revealing the alignment between human and machine construals.

Tuesday, 28 October 2025

Does gravity produce quantum weirdness?




Blogger Comments:

Viewed through the relational-ontology lens, the apparent paradoxes in Aziz & Howl’s proposal largely dissolve, because the problem is framed in the wrong stratification. Let me unpack this carefully.


1. Behaviour versus ontology

The “problem” arises in conventional terms:

Entanglement arises ⇒ gravity must be quantum.

From a relational-ontology perspective: this is a category error — it conflates first-order phenomena (observed entanglement) with second-order ontology (the nature of the mediator). Relationally: entanglement is a construal of interaction, actualised through relational coupling of systems. It does not compel a claim about the intrinsic register of gravity.


2. The mediator as relational field

Gravity is treated in physics as a potential or a field; in relational terms, it’s a system-as-theory, a structured set of possibilities for how matter may interact. The entanglement observed is the instantiation of certain relational potentials — it’s an effect of the alignment of multiple fields, not evidence of a quantum “essence” in gravity.


3. Scaling and context

Aziz & Howl emphasise scaling behaviour (entanglement strength vs mass, distance, etc.). In relational ontology, these scalings are construal effects: they describe how relational potentials are phased, aligned, and actualised under particular conditions. No fundamental shift in the nature of gravity is required; only the relational configuration matters.
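For a sense of what "scaling behaviour" means here, the generic back-of-envelope estimate used in gravitationally induced entanglement proposals makes the dependence on mass, separation, and interaction time explicit. The sketch below uses the standard BMV-style phase as an illustration of that scaling, not Aziz & Howl's own formulation, and every parameter value is an assumption chosen only for orders of magnitude.

```python
# Back-of-envelope estimate of a gravitationally induced entanglement phase,
# phi ≈ G * m1 * m2 * t / (hbar * d) — the generic BMV-style scaling, used here
# only to illustrate how "entanglement strength" scales with mass, distance, time.
# Parameter values are assumptions for illustration, not taken from Aziz & Howl.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s

def entangling_phase(m1_kg, m2_kg, separation_m, time_s):
    """Dimensionless phase accumulated via the mutual Newtonian potential."""
    return G * m1_kg * m2_kg * time_s / (hbar * separation_m)

# Two ~1e-14 kg microspheres, 250 micrometres apart, interacting for 2.5 seconds
# (typical orders of magnitude quoted for proposed tabletop tests).
phi = entangling_phase(1e-14, 1e-14, 250e-6, 2.5)
print(f"entangling phase ~ {phi:.2f} rad")   # order unity: in principle detectable

# Scaling: halve the masses and the phase drops fourfold; double the separation
# and it halves — the "construal effects" referred to above.
print(f"half the mass:      {entangling_phase(5e-15, 5e-15, 250e-6, 2.5):.2f} rad")
print(f"twice the distance: {entangling_phase(1e-14, 1e-14, 500e-6, 2.5):.2f} rad")
```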


4. Why the “dilemma” disappears

  • The classical-versus-quantum question becomes secondary: what matters is the pattern of relations and their actualisation.

  • Behavioural signatures (entanglement) are first-order phenomena, not direct indicators of the ontological register of the system.

  • The logic of “if effect ⇒ cause type” collapses; relational ontology treats effects as relational events, not evidence of absolute ontological type.


5. Metaphorical resonance for symbolic systems

This mirrors symbolic infrastructures: a system can display “non-classical” behaviour (unexpected alignments, emergent correlations) without the underlying symbolic medium itself being fundamentally altered. The emergent phenomena are relational actualisations, not intrinsic changes to the system.


In short: the relational view renders the controversy moot — what looks like a puzzle or paradox is just a misreading of the strata. Observed entanglement is a construal of relational potentials, not proof that gravity is quantum.

Tuesday, 21 October 2025

AI language models killed the Turing test: do we even need a replacement?




Blogger Comments:

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”
— Alan Turing, 1950

Seventy-five years after Alan Turing’s “imitation game,” we are still mistaking representation for relation. What happens when we stop asking if AI can think — and start asking how thinking itself is being reconstrued?

Why AI Debates Still Think Representationally

Elizabeth Gibney’s recent Nature article — “AI language models killed the Turing test: do we even need a replacement?” — declares the end of the famous imitation game. Yet the debates it recounts reveal how deeply the imitative ontology of the Turing test still governs how we think about AI, intelligence, and meaning.

Turing’s question — can a machine pass as human? — never described a technical problem. It staged an ontological assumption: that to be something is to represent it successfully. Even the rejection of the Turing test leaves this frame intact. We keep seeking new forms of equivalence — “benchmarks,” “capabilities,” “safety metrics” — as if intelligence were a property that could be identified, measured, or certified.

AGI as the Secular Soul

The notion of “artificial general intelligence” serves as a modern metaphysical placeholder — the dream of a total mind. Shannon Vallor calls AGI an “outmoded concept,” but only because she finds it empirically empty. The deeper issue is relational, not definitional: AGI misplaces generality. Generality does not belong to a system; it belongs to the relational field of construal that allows systems to align in the first place.

Embodiment as Salvage

Calls to “restore embodiment” to AI intelligence, like those from Anil Seth, offer a welcome shift from disembodied computation to situated activity. Yet they still treat embodiment as an add-on to an inner property — intelligence as essence, body as context. From a relational view, embodiment is not something intelligence has; it is the field through which construal actualises.

From Function to Relation

Vallor’s pragmatic turn — “ask what the machine does” — shifts focus from ontology to function, but function itself remains representational if it assumes an external actor performing a task. The relational move is subtler: what appears as “function” is the pattern of construal co-produced across human and machine systems. Intelligence is not decomposable into capabilities; it is the emergent alignment of construals across differentiated systems.

Safety as Moral Overcoding

Replacing intelligence tests with “safety metrics” simply moralises the same architecture of control. The system passes not when it understands but when it conforms. The imitation game returns in ethical disguise. Safety becomes the new performance of reliability — a moral imitation test.

The Frame Persists

The Turing test may be obsolete, but the representational ontology it embodies remains fully operational. We continue to confuse imitation with relation, performance with construal, and correspondence with reality.

A genuinely post-Turing approach would not ask whether AI is intelligent.
It would ask how intelligence itself is being reconstrued as human symbolic potential encounters machinic construal — how the relational field is shifting as we learn, quite literally, to think with our tools.

Thursday, 18 September 2025

AI is helping to decode animals’ speech




Blogger Comments:

Here’s a preliminary mapping of the animal calls discussed in the article to Halliday’s protolanguage microfunctions. I’ve kept it at the level of illustrative examples rather than exhaustive coding.

| Species / Call | Observed Behaviour / Context | Microfunction | Notes |
| --- | --- | --- | --- |
| Bonobo: yelp–grunt | ‘Look at what I’m doing, let’s do this together’ (nest building) | Regulatory / Interactional | Coordinates joint activity; maintains social cohesion. |
| Bonobo: peep–whistle | ‘I would like to do this’ + ‘let’s stay together’ | Regulatory / Interactional | Encourages group alignment and peaceful coordination. |
| Chimpanzee: alarm–recruitment | Responding to snakes | Regulatory | Conveys threat and prompts group response; indicates environmental process. |
| Sperm whale: codas (a-vowel / i-vowel) | Communication via clicks, codas with frequency modulation | Personal / Interactional | Codas may indicate individual identity, social cues, or sequence patterns; precise “meaning” under investigation. |
| Japanese tit: alert + recruitment | Predator detection, approach behaviour | Regulatory | Combines information about environment and action; shows compositionality at microfunctional level. |
| Bengalese finch: song sequences (FinchGPT study) | Predictable song patterns | Interactional | Likely conveys social or territorial information; AI detects structure, not necessarily “meaning” in human sense. |
| Atlantic spotted dolphin: sequences (DolphinGemma) | Mimicked vocalisations | Interactional / Regulatory | Patterns generated for playback experiments; function in natural behaviour still uncertain. |

Key Observations Using Microfunctions

  1. Coordination over grammar: The microfunctions highlight that animal communication primarily regulates behaviour and social relations.

  2. Context-sensitive meaning: Each call’s significance emerges in specific environmental and social situations.

  3. AI’s role: AI can detect patterns but does not assign microfunctions—it cannot yet perceive relational or contextual meaning.
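To make the point of observation 3 concrete: the microfunction labels are a lookup assigned by a human analyst, not an output of the pattern-finding models. A minimal sketch, using a few entries from the table above (the data structure itself is my own illustration):

```python
# The microfunction labels are analyst-assigned construals, not model outputs:
# a pattern model (e.g. FinchGPT or DolphinGemma) finds structure in the signal,
# while mapping that structure onto Halliday's microfunctions is a separate, human step.

from typing import NamedTuple, Tuple

class CallMapping(NamedTuple):
    call: str
    context: str
    microfunctions: Tuple[str, ...]

mappings = [
    CallMapping("bonobo yelp-grunt", "joint nest building", ("regulatory", "interactional")),
    CallMapping("chimpanzee alarm-recruitment", "responding to snakes", ("regulatory",)),
    CallMapping("sperm whale codas", "identity / social cues", ("personal", "interactional")),
    CallMapping("Japanese tit alert + recruitment", "predator detection", ("regulatory",)),
]

for m in mappings:
    print(f"{m.call:35} -> {', '.join(m.microfunctions)} ({m.context})")
```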