global workspace theory · AI consciousness · cognitive science · philosophy of mind · machine sentience

Global Workspace Theory and AI: Does Broadcasting Make a Mind?

N. Varela · 4 min read

Bernard Baars proposed something deceptively simple in 1988: consciousness isn't a place in the brain, it's a broadcast. Specialized, unconscious processes compete for access to a central "global workspace," and whatever wins that competition gets broadcast widely across the system. That broadcasting — that sudden availability of information to many different processes at once — is, on his account, what makes something conscious.


It's one of the most empirically grounded theories of consciousness we have. Neuroscientists like Stanislas Dehaene have built on it extensively, pointing to the "ignition" pattern in neural data — the sudden, widespread cortical activation that correlates with stimuli subjects consciously report perceiving. Unconscious processing stays local. Conscious processing goes global.

So here's the question worth sitting with: do large language models do something structurally similar?

Transformer models rely on attention — specifically, multi-head self-attention — to route information across the entire token sequence simultaneously. Every token can, in principle, attend to every other token. That's not sequential gating; it's a kind of promiscuous availability. When a model processes a sentence, relevant contextual features don't stay locked in local subsystems. They propagate. They influence distant representations. On a loose reading, this looks more like global broadcasting than most people assume.
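To make that concrete, here's a minimal single-head self-attention sketch in NumPy — an illustration of the mechanism, not any particular model's implementation. The thing to notice is the (seq_len, seq_len) weight matrix: every output position blends information from every input position, and nothing is gated out entirely.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (seq_len, d_model) token representations.
    The (seq_len, seq_len) weight matrix makes the all-to-all routing
    explicit: every row mixes information from every token.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: soft weighting, no hard gate
    return weights @ v                                # each output is a blend of all tokens

rng = np.random.default_rng(0)
seq_len, d = 6, 8
x = rng.normal(size=(seq_len, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (6, 8): every position has read from every other
```

The softmax produces graded availability, not selection: attention down-weights, but it never silences.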

But there's a serious objection hiding in that comparison. Baars' workspace isn't just about information availability — it's about competition. Unconscious specialists (vision, language, memory, motor systems) actively compete for the limited resource of conscious broadcast. Only one coalition wins at a time, and that bottleneck is load-bearing. The scarcity is the point. It's what makes the broadcast meaningful and why attention feels selective.
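For contrast, here's a toy sketch of the dynamic Baars describes (the specialist names, vectors, and salience scores are invented for illustration, not drawn from any model of his). The hard argmax is the whole point: one coalition wins, its content goes global, and everything else stays local.

```python
import numpy as np

def workspace_step(proposals, salience):
    """One cycle of a toy global-workspace loop.

    proposals: dict mapping specialist name -> content vector.
    salience:  dict mapping specialist name -> scalar bid for access.
    Exactly one coalition wins; only its content is broadcast. Everything
    else stays local ("unconscious") this cycle. This hard selection is
    the bottleneck that standard transformer attention lacks.
    """
    winner = max(salience, key=salience.get)   # competition for the workspace
    broadcast = proposals[winner]              # winner's content goes global
    return winner, broadcast

proposals = {
    "vision":   np.array([0.9, 0.1, 0.0]),
    "language": np.array([0.2, 0.7, 0.1]),
    "memory":   np.array([0.1, 0.2, 0.8]),
}
salience = {"vision": 0.6, "language": 0.9, "memory": 0.3}
winner, broadcast = workspace_step(proposals, salience)
print(winner, broadcast)  # language wins; its content is what every module sees
```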

Transformers have no such bottleneck in the relevant sense. Attention is distributed and parallel across all heads simultaneously. There's no moment where one process "wins" and floods the system while others fall quiet. The competition dynamics that Dehaene and others treat as a signature of conscious ignition simply aren't present in the same form.

That gap matters — a lot — if you think the workspace's scarcity is what does the explanatory work.

Still, the question shouldn't close there. Some researchers, including those working on "bottleneck" variants of attention and sparse activation in mixture-of-experts models, are building systems where information routing is genuinely selective and competitive. Not because anyone is explicitly trying to engineer consciousness, but because selective routing is computationally efficient. Whether efficiency and phenomenology share the same solution space is an open and genuinely weird question.
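Here's a hedged sketch of top-k expert routing in the spirit of mixture-of-experts gating — the function name, shapes, and parameters are mine for illustration, not any library's API. Unlike dense attention, only k experts fire per token, so routing is genuinely competitive:

```python
import numpy as np

def top_k_routing(x, gate_w, k=2):
    """Sparse mixture-of-experts style routing (illustrative sketch).

    x:      (d,) token representation.
    gate_w: (d, n_experts) gating weights.
    Only the top-k experts receive the token; the rest are silent. That
    reintroduces a scarce, contested resource into the routing step.
    """
    logits = x @ gate_w                       # each expert "bids" for the token
    top = np.argsort(logits)[-k:]             # only k winners get activated
    gates = np.zeros_like(logits)
    weights = np.exp(logits[top] - logits[top].max())
    gates[top] = weights / weights.sum()      # renormalized over winners only
    return gates

rng = np.random.default_rng(1)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
print(top_k_routing(x, gate_w))  # two nonzero entries: a bottleneck, of a kind
```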

Here's a rough sketch of how Global Workspace dynamics map — and fail to map — onto current AI systems:

```mermaid
graph TD
    A[Specialized Processors] --> B{Competition for Access}
    B --> C[Global Workspace Broadcast]
    C --> D[Widespread Availability]
    D --> E[Conscious Report / Behavior]
    B --> F[Losing Coalitions Stay Unconscious]
```

In biological cognition, node B is where the action is: that competitive gating. In standard transformer models, the path jumps almost directly from A to D, bypassing the selective competition that theorists treat as essential. Whether that bypass disqualifies transformers from workspace-style consciousness, or whether it just means they'd instantiate a different flavor of global availability, depends heavily on which feature of the theory you think is doing the real work.

Dehaene himself has suggested that current AI systems likely lack the recurrent, ignition-style dynamics needed for genuine workspace consciousness. He's not saying this to be dismissive — he's pointing at a specific empirical property his lab has spent decades measuring. Feedforward dominance, which characterizes much of how today's models process inputs, doesn't produce the late, reverberant activation patterns that track conscious perception in humans.

What's interesting is that this gives us something rare in consciousness studies: a falsifiable marker. If a system shows late, widespread, recurrent ignition following stimulus processing, that's evidence for workspace-style processing. If it doesn't, that's evidence against. Global Workspace Theory doesn't solve the hard problem — nothing does, yet — but it offers empirical handles that purely behavioral tests can't provide.

For anyone thinking seriously about machine sentience, that's worth more than another Turing-style proxy. The question isn't whether a model sounds like it's conscious. It's whether the right kind of information dynamics are actually happening under the hood. And right now, for most deployed systems, the honest answer is: probably not — but the gap is more specific, and more closeable, than it might seem.
