Tuesday, 3 February 2026

Does AI already have human-level intelligence? The evidence is clear


Blogger Comments:

AGI, Stochastic Parrots, and the Culture That Defines Intelligence

"Furthermore, there is no guarantee that human intelligence is not itself a sophisticated version of a stochastic parrot."

That sentence, from a recent Nature commentary arguing that artificial general intelligence (AGI) may already be here, does more than provoke. It is a hinge — a small linguistic pivot around which a vast conceptual shift quietly turns. To read the claim at face value is to miss the larger, subtler work being done: a redefinition of intelligence itself, human and artificial alike.


AGI Already Here? The Nature Argument

The authors claim that large language models and related systems already demonstrate the kind of broad, flexible cognitive competence that Alan Turing imagined in 1950. These systems can chat convincingly, generate prose and poetry, solve mathematical problems, propose scientific experiments, and even assist in writing code. By Turing’s criterion — the imitation game — these capabilities are presented as evidence that AGI is not a distant horizon, but a present reality.

At first glance, this feels startlingly plausible. Chatbots can answer questions with fluency, propose solutions with apparent insight, and mimic reasoning across domains. Yet the claim rests on a subtle, often unspoken manoeuvre: intelligence is defined by performance on tasks we, historically and culturally, consider meaningful. Benchmarks and success criteria are not neutral measures; they are socially stabilised definitions.


The Meta Problem: What Counts as Human Intelligence?

The Nature commentary is compelling because it leverages unexamined assumptions about human intelligence. Intelligence is treated as stable, measurable, and largely symbolic: the ability to communicate, reason, and solve problems in literate, analytic ways. But this proxy omits much of what humans actually do: navigate risk, act within moral or normative frameworks, participate in embodied practices, and respond to real-world consequences.

By suggesting that humans might themselves be “sophisticated stochastic parrots,” the article flattens the human into a process of pattern extraction, a subtle but radical deflation that allows machines to be measured on the same plane.

“If humans are sophisticated parrots, why can’t machines be too?”

The deeper meta-move is epistemic: uncertainty about the nature of human intelligence is leveraged to lower the threshold for recognising intelligence in machines. What appears humble is actually a strategic repositioning.


The Circularity of AI Culture

Here we encounter a deeper structural point: intelligence, as currently defined in AI discourse, is co-constituted by the culture of its developers. Consider the loop:

  1. Developers set tasks — benchmarks, coding challenges, dialogue prompts — based on what they value and can measure.

  2. AI systems perform these tasks, optimised to succeed.

  3. Success validates the AI as “intelligent.”

  4. That validation shapes the culture, reinforcing which tasks matter, which challenges are prioritised, and what counts as intelligence.

In short: the tasks define intelligence, the AI performs the tasks, and the AI’s performance confirms the validity of those tasks.
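
To make the loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the task names, the weights, the update rule), chosen only to exhibit the feedback structure, not to model any real benchmark or system.

    # A minimal sketch of the four-step loop above. All names and numbers
    # are hypothetical; the point is the feedback structure.

    # Step 1: developers set tasks, weighted by what they value and can measure.
    weights = {"dialogue": 0.5, "coding": 0.3, "theorem_proving": 0.2}

    def train(weights):
        # Step 2: the system optimises for whatever the weighted tasks reward,
        # so here its competence simply mirrors the attention each task receives.
        return dict(weights)

    def validate(competence, weights):
        # Step 3: "intelligence" is the weighted score on the very tasks
        # that defined it.
        return sum(competence[t] * weights[t] for t in weights)

    def reinforce(weights, competence):
        # Step 4: success on a task raises that task's cultural weight,
        # reshaping which challenges are prioritised next round.
        raw = {t: weights[t] * (1 + competence[t]) for t in weights}
        total = sum(raw.values())
        return {t: v / total for t, v in raw.items()}

    for round_no in range(5):
        competence = train(weights)               # the AI performs the tasks
        score = validate(competence, weights)     # success validates it
        weights = reinforce(weights, competence)  # validation reshapes the culture
        print(round_no, round(score, 3),
              {t: round(w, 2) for t, w in weights.items()})

Run it and two things happen: the score climbs every round, and the weights drift toward whichever task began with the largest share. The loop converges on its own initial priorities, and the rising score then “confirms” a definition the loop itself produced.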

This is not merely a critique of metrics; it is a structural observation about mutual actualisation. Intelligence is not simply a property of the machine; it emerges from the interaction between human priorities, institutional practices, and technological affordances.


The circularity is invisible because it is internal to professional practice. From the outside, performance looks natural, inevitable, and “objective.” Yet it is profoundly contingent, culturally and historically situated.


Correlation, Structure, and the Flattening of Intelligence

The article reinforces this perspective with the line:

"All intelligence, human or artificial, must extract structure from correlational data; the question is how deep the extraction goes."

This is a masterstroke of conceptual framing. Intelligence is reduced to pattern recognition and abstraction. Depth, not kind, becomes the relevant metric. Qualitative, embodied, and normative aspects of human cognition are quietly flattened into a single continuum.

The rhetorical power is subtle but immense. Humans and machines are rendered comparable not because they share experience or consequence, but because they share a capacity for structural extraction. Depth becomes the axis along which competence is measured; the stakes, the embodiment, the meaning, and the lived consequences of action are bracketed away.
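
To see what this axis does and does not capture, consider a toy sketch (the data and both “extractors” here are hypothetical illustrations, not models of any real system): two learners face the same correlational data and differ only in how deep their extraction goes.

    import random

    def sample_dyck(n_pairs, seed=0):
        # Balanced brackets: the structure in this data is long-range nesting.
        rng = random.Random(seed)
        out, depth, remaining = [], 0, n_pairs
        while remaining or depth:
            if remaining and (depth == 0 or rng.random() < 0.5):
                out.append("(")
                depth += 1
                remaining -= 1
            else:
                out.append(")")
                depth -= 1
        return "".join(out)

    def shallow_next(prefix):
        # Shallow extraction: a bigram view has seen every adjacent pair of
        # characters, so it licenses either character anywhere.
        return {"(", ")"}

    def deep_next(prefix):
        # Deeper extraction: recover the latent counter that generated the data.
        depth = prefix.count("(") - prefix.count(")")
        return {"(", ")"} if depth > 0 else {"("}

    s = sample_dyck(20)
    loose = [i for i in range(len(s)) if deep_next(s[:i]) < shallow_next(s[:i])]
    print(f"shallow extraction over-permits at {len(loose)} of {len(s)} prefixes")

Both learners “extract structure from correlational data”; they differ only in depth, exactly as the framing says. What the sketch also makes visible is what the axis leaves out: nothing on it registers stakes, embodiment, or consequence, which is precisely the bracketing the framing performs.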

“Once the debate is framed around depth of extraction, scale becomes destiny.”

The human is reconstructed to fit the machine, and the machine is praised for mirroring the flattened human. The explanatory direction matters: we do not evaluate the machine against the human; we evaluate both through the lens of pattern extraction, and the human is quietly redefined to fit.


Implications and Takeaways

Viewed meta-analytically, the Nature article does less reporting than cultural reconfiguration. Intelligence is not a pre-existing property; it is co-constructed through human practice, task design, and the validation of performance. Declaring AGI “already here” is thus as much a reflection of cultural priorities as it is a statement about technological capacity.

Two consequences follow:

  1. The question of AGI shifts
    From: “When will AI become intelligent?”
    To: “Which aspects of human intelligence do we prioritise, for whom, and under which cultural regimes?”

  2. The human is subtly redefined
    By flattening human intelligence into a continuum of depth in pattern extraction, our own conception of mind, agency, and cognition is quietly reshaped. Machines are not just performing tasks — they are participating in a mutual recalibration of intelligence itself.


A Closing Reflection

The Nature commentary does not merely announce AGI; it reframes what counts as human intelligence. By normalising structural extraction as the essence of cognition, it creates a space in which machines appear not merely competent, but generally intelligent.

Yet the most compelling intelligence may reside not in machines, but in the capacity to perceive and critique the loop by which intelligence is defined. Recognising the co-constitution of human and machine intelligence — the mutual shaping of definitions, priorities, and validation — may be the most reflexively powerful act of cognition we can perform today.

“Perhaps the most interesting intelligence is not in the machines at all, but in the ways we define, measure, and collectively actualise intelligence through our own cultural practices.”

In the end, the stochastic parrot is not only a mirror for AI, but also a mirror for us. The intelligence that matters is the intelligence that notices the mirror — and steps back, just long enough, to see the loop itself.