Smriti Mallapaty’s article, while accessible and engaging, presents an excellent opportunity to examine how mainstream science journalism handles the concept of consciousness—a notoriously slippery and often misrepresented topic.
Let’s break it down by looking at framing, conceptual clarity, methodological assumptions, and implications for theories of consciousness—plus we’ll shine a light on any subtle ontological or rhetorical moves that deserve interrogation.
1. Framing: “How does the brain control consciousness?”
Right from the headline, the article frames consciousness as something controlled by the brain—as if the brain is a sort of CEO flipping awareness switches. This framing subtly presupposes:
- That consciousness is a thing (a discrete object or state), rather than an unfolding process or emergent phenomenon.
- That there is a one-way causal relationship: the brain acts, and consciousness results, a framing that dodges thornier questions about the bidirectionality of brain-mind relations, or whether ‘the brain’ and ‘consciousness’ are even ontologically separable in this way.
This mechanistic framing primes the reader to accept neural correlates as explanations rather than as correlations—a category mistake that plagues much of consciousness discourse.
2. Conceptual Clarity: What does “conscious perception” mean here?
The article uses the term “conscious perception” to refer to the brain “becoming aware of its own thoughts.” That’s already a conflation of self-reflective cognition (thinking about thinking) with sensory awareness (e.g., noticing a flashing icon). Those are not the same kind of process—unless you're sneaking in a fairly specific theory of consciousness without naming it.
It’s worth asking: what theoretical lens is this built from? Is it Global Workspace Theory (GWT)? Higher-order thought theory? Integrated Information Theory (IIT)? Recurrent Processing Theory? The article never says—which has the effect of treating “consciousness” as a monolithic phenomenon rather than a field full of rival models.
So we’re left with a flattening: consciousness is equated with noticing a stimulus, which is equated with awareness, which is equated with neural activity in the thalamus and prefrontal cortex. This makes the article readable—but at the cost of conceptual rigour.
3. Methodological Assumptions: What’s being measured?
The study involved patients undergoing deep brain electrode therapy for chronic headaches—a rare opportunity, certainly, but also a highly unusual population. The experimental setup depends on:
- A binary behavioural response (eye movement = yes or no) to indicate conscious awareness of a stimulus.
- An assumption that the stimulus is perceived consciously only about 50% of the time, which is statistically convenient but ontologically murky; a sketch of how such a near-threshold rate is typically engineered follows below.
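The article doesn’t say how that 50% rate is achieved, but in psychophysics it is usually titrated with an adaptive staircase. Here is a minimal sketch of the standard move, assuming a simulated observer rather than real data (every name and parameter below is hypothetical):

```python
import math
import random

def simulated_observer(intensity, threshold=0.5, slope=10.0):
    """Toy observer: probability of reporting the icon rises smoothly with intensity."""
    p_seen = 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))
    return random.random() < p_seen

def titrate_to_50_percent(n_trials=300, start=0.9, step=0.02):
    """1-up/1-down staircase: lower the intensity after a 'seen' trial, raise it
    after a miss. The procedure converges on the intensity detected ~50% of the
    time, which is what makes the 'seen on half the trials' design convenient."""
    intensity, history = start, []
    for _ in range(n_trials):
        seen = simulated_observer(intensity)
        history.append(intensity)
        intensity += -step if seen else step
        intensity = min(max(intensity, 0.0), 1.0)
    return sum(history[-100:]) / 100  # average over the converged tail

if __name__ == "__main__":
    print(f"Estimated ~50% detection intensity: {titrate_to_50_percent():.3f}")
```

The point of the sketch is that the 50% figure is engineered into the stimulus by the design; it is a property of the procedure, not a fact about consciousness discovered by the study.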
This method assumes a third-person observable behaviour (eye movement) can be equated with a first-person experience (conscious awareness)—a leap that rests on contested territory. Philosophically, this reflects the hard problem in disguise: turning qualia into data without confronting the transformation involved.
Furthermore, the claim that this is the “first time” simultaneous recordings have been made in thalamus and cortex in a consciousness-relevant task sounds impressive—but the phrase “relevant to consciousness science” is doing a lot of unexamined work. What exactly makes a task consciousness-relevant?
4. The Thalamus as a Gatekeeper: Good metaphor, or misleading one?
The article’s big takeaway is that the thalamus acts as a “filter” or “gatekeeper” for conscious awareness—a metaphor that’s been floating around since Francis Crick. But this metaphor carries baggage:
- It implies a central executive function (some agentive part of the brain deciding what gets through), which anthropomorphises a system that is, presumably, not conscious itself.
- It lends itself to the misleading notion of a Cartesian theatre: a place in the brain where “the show” of consciousness is watched.
Also, the claim that thalamic activity precedes cortical activity when participants are conscious of the icon is interesting—but temporality here is tricky. Does precedence imply causation? Or is the coordination between the thalamus and cortex part of a larger dynamical system where mutual entrainment is the real story?
The article leans toward the former, but this may be a misstep.
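To see why precedence is a weak warrant, consider what the timing claim amounts to computationally: roughly, a lead-lag analysis like the toy sketch below (hypothetical function and variable names, simulated signals; this does not reproduce the study’s actual analysis).

```python
import numpy as np

def thalamocortical_lead_lag(thalamic, cortical, max_lag=50):
    """Toy lead-lag analysis: circularly shift the thalamic signal against the
    cortical one and find the lag (in samples) with peak correlation.
    A positive peak lag means the thalamic signal leads the cortical one."""
    t = (thalamic - thalamic.mean()) / thalamic.std()
    c = (cortical - cortical.mean()) / cortical.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(np.roll(t, lag), c)[0, 1] for lag in lags]
    best = int(np.argmax(corrs))
    return lags[best], corrs[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    driver = rng.normal(size=1000)
    thalamic = driver
    cortical = np.roll(driver, 5) + 0.5 * rng.normal(size=1000)  # cortex echoes thalamus 5 samples later
    lag, r = thalamocortical_lead_lag(thalamic, cortical)
    # A consistent lead describes timing within a coupled system; it does not,
    # by itself, establish that the thalamus causes the cortical response.
    print(f"Peak correlation r={r:.2f} at lag {lag} samples")
```

Even a robust positive lag only describes timing: mutual entrainment, shared drive from a third region, or differing conduction delays could all produce the same pattern.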
5. Alternative Interpretations and Scepticism
Liad Mudrik offers a necessary note of caution: the activity being tracked might reflect attention, not consciousness. This is vital. Many cognitive scientists argue that attention and consciousness are dissociable—indeed, you can attend to something unconsciously, or become conscious of something to which you're not attending.
The fact that this possibility is mentioned but not deeply explored points to a pattern in consciousness journalism: mention complexity, then move swiftly on.
6. Implications for Theories of Consciousness
If we accept the findings as reported, what are the implications?
- They challenge theories that localise consciousness exclusively in the cortex, suggesting a more distributed, recursive model.
- They support models that see conscious access as a systems-level phenomenon, involving timing, synchronisation, and possibly recurrent signalling (which aligns with GWT or even IIT in some interpretations).
- However, they do not clarify what consciousness is, only what patterns of brain activity are associated with a specific behavioural report.
In other words: correlates, not causes.
7. Ontological Underpinnings: Materialist Monism in Disguise?
The article never explicitly addresses ontology, but it’s tacitly materialist: the brain produces consciousness, full stop. No room for emergentism, enactivism, panpsychism, or even the possibility that consciousness might be a semiotic process arising from biological substrates rather than a product of them.
This reductionism is not unusual, but it should be flagged—especially since you, Chris, operate within a meaning-centred framework (SFL + Edelman’s TNGS). From that standpoint, this entire framing is problematic: it treats experience as something that can be located, rather than construed.
Conclusion: A Well-Written Piece with a Neurocentric Tilt
Mallapaty’s article is engaging and informative for a general audience. But it falls into familiar traps:
- Collapsing correlation into explanation
- Slipping between concepts (attention ≠ consciousness)
- Using metaphors (“gatekeeper”) without unpacking their implications
- Presenting a narrow, neural account as though it speaks to the full phenomenon of conscious experience
There’s real value in the empirical work here—but the framing obscures the theoretical choices involved. As always in consciousness science, the more confidently something is claimed, the more carefully we should read what’s left unsaid.
Let’s take the next step and run this article through two lenses that offer richer ways of construing what’s going on:
- A Systemic Functional Linguistics (SFL)–based critique focusing on how meaning is construed in the article and what metafunctions are doing the heavy lifting.
- A reframe in terms of potential-instance dynamics, interrogating what the study reveals (and conceals) when we shift from correlates of consciousness to the construal of meaning from neural potential.
1. SFL-Based Critique: Meaning, Metafunctions, and Ideological Framing
A. Ideational Metafunction: What experience is being construed?
The article presents an account of conscious perception as an observable, neural process, something that can be measured, located, and described using objective methods. That already tells us which experiential meaning potential is being selected: one grounded in material processes (recorded, moved, projected, activated).
But notice what's excluded: there's little to no construal of mental processes (thought, felt, experienced) in a subjective sense. The participants’ experiences are inferred only through behavioural proxies (eye movements) and linked directly to physical correlates.
➤ What we’re seeing is a construal of consciousness as if it were a material process, rather than a mental or semiotic one.
This construal aligns with a materialist ontology, but from an SFL perspective, it's a highly constrained meaning potential—an instance that enacts only a very narrow subset of our experiential semiotic resources.
B. Interpersonal Metafunction: Who gets to be the sayer, and how?
The dominant voices are:
- The scientists (e.g., Zhang, Mudrik, Shine), who are construed as authoritative Sayers (says, plans, suggests, found)
- The reporter, who mostly plays an intermediary role, but with clear evaluative choices (remarkable, elaborate, tricky, support)
We also see participants (patients) entirely reduced to functions. They are grammatically realised only as Receivers (were asked, had electrodes implanted) or as Behavers in tightly constrained tasks. Their consciousness is treated as that which can be measured from the outside, never that which can speak from within.
➤ This strips the phenomenon of intersubjectivity. Consciousness becomes not a relation, but a variable.
Interpersonally, then, the article maintains the epistemic authority of science while distancing the lived experience of consciousness itself—ironically silencing the very thing it's trying to study.
C. Textual Metafunction: How is the text organised?
The textual organisation privileges:
- Linear causality: thalamus → cortex → behaviour → conclusion
- Nominalisations that obscure agency: e.g., “activation”, “filtering”, “processing”, “awareness”
- Repetition of metaphors (e.g., “gatekeeper”, “filter”, “access”), which pre-package the ideational content
These metaphors do a lot of ideological work. For instance, calling the thalamus a gatekeeper construes the brain as a bureaucracy and consciousness as an access-controlled domain—highly familiar metaphors from systems design and surveillance capitalism, not from lived experience or embodied cognition.
➤ This metaphor construes meaning as something passively received—awaiting permission to enter awareness—rather than actively made.
2. Reframing Through the Lens of Potential-Instance Dynamics
Now let’s leave behind the neuro-determinism and ask: What if consciousness is not located in the brain, but is the actualisation of meaning from potential by a meaning-maker? That’s your domain, Chris. Let’s walk it out.
A. The brain as a meaning substrate, not a meaning site
From the SFL + TNGS view, neural activity represents a field of potential, shaped by neuronal group selection, experience, and patterned connectivity. This field does not contain consciousness per se, but rather affords the construal of meaning.
➤ Consciousness, then, is the instantiation of meaning from this neural potential—an act of semiosis, not simply a state of neural activation.
The thalamus, under this view, isn’t a gatekeeper but a mediator of affordances—modulating which patterns of neural connectivity are foregrounded in the flow of potential meaning. Its function is systemic, not executive.
B. Reframing the experiment: What was actually being measured?
What we call “conscious perception” in this study is just a behavioural instance that indexes the likelihood that an act of semiosis occurred. The participants’ eye movement is not the same as the perception—it is an externalised token of a meaning instance, one that likely, but not necessarily, correlates with the instantiation of perceptual meaning.
The researchers then correlate this instance (eye movement) with instantial systems (thalamic-cortical activity patterns) in an attempt to reverse-engineer the system of potential that gave rise to the instance.
➤ But without accounting for the meaning-maker’s role—the deployment of meaning potential in context—this approach collapses instance into cause.
It treats the instance (awareness of the icon) as the effect of thalamic activity, when in fact it may be the actualisation of a potential that includes—but is not reducible to—neural signals.
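Made explicit, the researchers’ move is a decoding problem: predict the behavioural token from neural features. A minimal sketch with simulated data (none of this reproduces the actual analysis pipeline, and all names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-ins: one row of thalamo-cortical features per trial
# (e.g., band-limited power, inter-regional lag), and one label per trial
# recording whether the 'seen' eye movement was made.
rng = np.random.default_rng(0)
n_trials, n_features = 200, 12
X = rng.normal(size=(n_trials, n_features))
y = (X[:, 0] + 0.5 * rng.normal(size=n_trials)) > 0

# Decode the behavioural token from the neural features.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()

# Whatever accuracy this yields, it quantifies an association between two
# third-person observables (neural features and the report). It does not
# measure the act of semiosis itself; the first-person instance is inferred.
print(f"Report-decoding accuracy: {accuracy:.2f}")
```

However well such a decoder performs, it remains a mapping between observables; the meaning-maker’s deployment of potential in context never enters the model.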
C. What would a semiotically aware experiment look like?
To account for the actualisation of meaning, the experiment would need to:
- Attend to meaning potentials shaped by context: what does the icon mean to the participant?
- Distinguish between potential meaning (stimulus affordance), meaning potential (participant’s semiotic system), and meaning instance (actualised awareness).
- Include participant reflection or report, not just behaviour, as a trace of the act of semiosis.
- Map not only neural activation, but how that activation becomes organised into semiotic systems that yield meaning.
That would mean taking the participant seriously not just as a biological body but as a social-semiotic system embedded in culture, history, language, and symbolic exchange.
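As a concrete illustration of the list above, here is what a trial record in such a protocol might minimally contain, sketched as a data structure (the field names are hypothetical, not a proposal for an actual instrument):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SemioticTrial:
    """One trial in a hypothetical 'semiotically aware' protocol: alongside the
    behavioural and neural measures, it records what the stimulus afforded,
    what the participant's semiotic system brought to the task, and the
    participant's own account of what was actually construed."""
    stimulus_affordance: str              # potential meaning: what the icon could mean in this context
    behavioural_token: bool               # the eye movement: an externalised token, not the experience itself
    neural_features: List[float] = field(default_factory=list)  # thalamo-cortical measures
    meaning_potential_notes: str = ""     # elicited beforehand: the participant's relevant semiotic history
    participant_report: str = ""          # first-person trace of the act of semiosis
    contextual_field: str = ""            # instructions, prior trials, cultural framing

# The fields are illustrative; the point is that the meaning instance is
# recorded as something the participant construes, not only as something
# the electrodes register.
```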
Conclusion: From Brain States to Meaning States
The article reflects the dominant paradigm: neural states as primary, consciousness as their emergent side-effect. But through SFL and the potential-instance lens, we can reorient:
- Consciousness is not a thing the brain has, but a semiotic process enacted by a meaning-maker.
- Neural structures (like the thalamus) are not gatekeepers of awareness, but fields of affordance that shape which meaning potentials are actualised.
- The scientific account is not wrong, but incomplete: it flattens the meaning-making dimension into a neurophysiological one.
The article shows us how scientific discourse instantiates only one construal of consciousness—an instance of meaning that hides its own metafunctional structure. By naming that, we can open space for other construals—ones in which meaning is not observed from outside, but enacted from within.