It’s a tempting idea, isn’t it? Consciousness is just what happens when things get complicated enough.
In both neuroscience and pop science, we often hear that if a system — like the human brain — reaches a certain level of complexity, something clicks. The lights turn on. It’s how people sometimes explain the magic of our minds — and increasingly, how they speculate about artificial intelligence. If we keep building bigger AI models, the thinking goes, eventually they’ll get so complex that consciousness will just come along for the ride.
It sounds intuitive. After all, we’re complex, and we’re conscious.
But there is a problem with this idea: the scientific evidence to support it isn’t nearly that clean.1
So what’s really going on here?
To find out, let’s follow the science and ask three questions:
1. What does the neuroscience say about the link between consciousness and complexity?
2. Where does the data start to break down?
3. Does consciousness come along for the ride when systems become appropriately complex?
Q1: What does the neuroscience say?
When it comes to consciousness, there’s no shortage of scientific theories. The last time I checked, there were 22 of them. And, as you might expect, the scientists behind these theories don’t tend to agree on much. In fact, they disagree about a lot — what consciousness is, how we should measure it, and where in the brain we might find it.
But in the middle of all this disagreement, there’s one idea almost all of them seem to agree on: Consciousness has something to do with complexity.
There’s probably good reason for the consensus. It’s backed by decades of research showing a strong relationship between brain complexity and conscious awareness. The findings come from all kinds of brain imaging tools, too. And all kinds of definitions of both consciousness and complexity.
And yet, the pattern is remarkably consistent: when the brain’s activity becomes more complex, consciousness tends to be present. When complexity drops, awareness often fades.
I know what you’re thinking… Those are some slippery words you’re using there, Suzi!
What exactly do you mean by consciousness? And how are neuroscientists measuring complexity?
Great questions. I’m glad you asked.
In neuroscience, a common way to understand consciousness is to split it into three categories:
States of consciousness — like being awake, asleep, in a coma, under anaesthesia, or tripping on psychedelics.
Contents of consciousness — the specific things you’re aware of in any given moment, like a face, a sound, a memory, or the colour red.
Self-consciousness — the sense that you are the one having those experiences.
In this essay, let’s just focus on the first two.
To test for states of consciousness, scientists might ask whether a person can follow simple commands, like “squeeze my hand”. For contents of consciousness, they might ask someone to respond to a specific stimulus — for example, “press a button if you see a house”.
We use tools like EEG, MEG, and fMRI to record patterns of brain activity — and sometimes TMS to perturb the brain and watch how it responds — while asking these kinds of questions. That way, we can compare what the brain looks like when someone is consciously responding — and when they’re not. These patterns are known as neural correlates of consciousness, or NCCs. (And yes, they’re controversial. But let’s leave that can of worms for another day.)
Complexity, meanwhile, is also measured in a bunch of ways. But almost all of them come from information theory. Some use Shannon entropy. Others look at how easily brain signals can be compressed — something similar to algorithmic complexity. The general idea is that more complex signals are less predictable. They’re harder to summarise, and harder to compress.2
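If you like to see the idea in code, here’s a rough sketch of my own (a toy illustration, not taken from any of the studies mentioned). It estimates Shannon entropy over short binary “words” of a signal, so a slow, repetitive signal scores lower than a varied, unpredictable one. The signals, the word length, and the median-split binarisation are all illustrative assumptions.

```python
# Toy sketch: Shannon entropy over short binary "words" as a rough
# predictability score. Signals and parameters are illustrative only.
import numpy as np

def word_entropy(signal, word_len=3):
    """Binarise a signal around its median, then return the Shannon entropy
    (in bits) of the distribution of consecutive binary words."""
    bits = (signal > np.median(signal)).astype(int)
    words = [tuple(bits[i:i + word_len]) for i in range(len(bits) - word_len + 1)]
    _, counts = np.unique(words, axis=0, return_counts=True)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

rng = np.random.default_rng(0)
slow_wave = np.sin(np.linspace(0, 40 * np.pi, 4000))   # slow, repetitive oscillation
noisy_wave = rng.normal(size=4000)                      # varied, unpredictable activity

print(word_entropy(slow_wave))   # low: a few word patterns dominate
print(word_entropy(noisy_wave))  # near 3 bits: all eight patterns occur about equally
```

Nothing hangs on the exact numbers; the point is only that predictability shows up as lower entropy.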
Take sleep, for example.
When we are in deep non-REM sleep — when dreaming is rare — EEG signals tend to be slow, repetitive, and highly predictable. These are the big, rolling brain waves known as delta waves. But when we are awake, the brain’s electrical activity speeds up. It gets messier, more varied, and harder to predict.
Now imagine running both sets of EEG data — sleep and awake — through a compression algorithm. The sleep data, with its repetitive patterns, is easier to compress. The awake data, full of variation, takes up more space. In other words, the file size for the awake brain is bigger than the file size for the sleeping brain.
So, we say the brain signals we find during sleep are less complex than the ones we find when we’re awake.
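Here’s a minimal sketch of that thought experiment, using simulated stand-ins rather than real EEG. A slow “deep-sleep-like” wave and a noisier “awake-like” signal are quantised and passed through zlib, with the compressed byte counts playing the role of the file sizes. The signal shapes, sampling rate, and noise level are assumptions made purely for illustration.

```python
# Toy sketch: compress a "deep-sleep-like" signal and an "awake-like" signal
# and compare the resulting byte counts. Not real EEG; all parameters assumed.
import zlib
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 2500)                               # 10 seconds at 250 samples per second

sleep_like = np.sin(2 * np.pi * 2 * t)                     # slow ~2 Hz "delta-like" waves
awake_like = 0.2 * sleep_like + rng.normal(size=t.size)    # faster, messier activity

def compressed_size(signal):
    """Quantise a signal to 8-bit values and return its zlib-compressed byte count."""
    scaled = (signal - signal.min()) / (signal.max() - signal.min())
    quantised = np.round(255 * scaled).astype(np.uint8)
    return len(zlib.compress(quantised.tobytes()))

print(compressed_size(sleep_like))   # smaller "file": the repetitive pattern compresses well
print(compressed_size(awake_like))   # larger "file": the varied signal resists compression
```

The absolute numbers will shift with the parameters; what matters is that the repetitive signal compresses far more readily than the varied one.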
Most of the research linking complexity to consciousness has focused on states of consciousness — those broad conditions like being awake, asleep, anaesthetised, or in a coma.
The findings have been so consistent that complexity measures are starting to show up in clinical settings, too. Some doctors have used them to help distinguish between patients who are in a vegetative state and those in a minimally conscious state — conditions where a person might appear unresponsive, but still retain some inner awareness.
But the link between complexity and consciousness doesn’t just show up in states of consciousness. There’s also evidence that complexity tracks the contents of consciousness, too.
In one clever study, participants watched a short film — played forward and backward. The visual input was identical in both cases: same scenes, same colours, same lighting. But only the forward version made narrative sense. The researchers found that brain activity was significantly more complex during the forward viewing than the backward one.
Using dozens of different complexity measures and a wide variety of brain imaging techniques, the same pattern keeps showing up: Complex brain activity tends to go hand-in-hand with conscious experience.
Q2: Where does the data start to break down?
If this story sounds a little too neat for neuroscience, you’re right to be suspicious.
It is true that complexity measures usually rise with consciousness and drop when it disappears. That is a robust finding — there have been hundreds of studies.
But it’s not the whole story. Things get complicated in a few ways — but let’s focus on one: there are exceptions to the usual pattern.
Sometimes complexity measures go up when consciousness goes down.
For example, with seizures. During a seizure, the brain doesn’t go quiet — it floods with activity. Neural signals become overactive, disorganised, and unstructured. It’s kind of like the brain goes into chaos. Most complexity measures score that activity as high complexity. But consciousness often disappears.
And then there are psychedelics and certain dissociative anaesthetics, like ketamine. These are drugs that could be described as warping consciousness. People are conscious, yes, but they often report vivid hallucinations, distorted time, and out-of-body experiences. Complexity measures tend to be high, but conscious experiences are clearly altered.
So what’s going on?
It might be helpful to distinguish between two things: complexity as whatever our complexity measures are measuring, and complexity as the thing we actually think of when we talk about complexity. Let’s call the first kind measured complexity, and the second meaningful complexity. Ideally, our measured complexity would capture meaningful complexity. But that’s not always the case.
As I mentioned above, most complexity measures in neuroscience come from information theory — things like Shannon entropy or compression algorithms. These tools are great at picking up on structured variation — when structure is there to be found. But they run into trouble when there’s no structure — when the signal is disordered or random.
That’s because both randomness and rich, meaningful detail are hard to compress. They both show up as high complexity on these measures — even though one is chaotic noise and the other is a meaningful pattern.
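Here’s a toy demonstration of that blind spot (again my own sketch, with arbitrary parameters): start with a perfectly ordered bit pattern, flip bits at random with increasing probability, and track a compression-based score. The score climbs all the way up to pure noise; nothing in the measure marks the point where structure gives way to chaos.

```python
# Toy demo: a compression-based "measured complexity" score rises
# monotonically as an ordered bit pattern is corrupted by random flips.
# All parameters here are arbitrary choices for illustration.
import zlib
import numpy as np

rng = np.random.default_rng(1)

def measured_complexity(bits):
    """Compressed size (bytes) of a binary sequence: a crude stand-in for measured complexity."""
    return len(zlib.compress(np.packbits(bits).tobytes()))

n = 20000
structured = np.tile([0, 0, 1, 1], n // 4)      # perfectly ordered pattern

for flip_prob in [0.0, 0.1, 0.3, 0.5]:          # 0.5 means fully random bits
    flips = rng.random(n) < flip_prob
    noisy = np.where(flips, 1 - structured, structured)
    print(flip_prob, measured_complexity(noisy))
```

The score keeps rising right up to pure randomness, even though by that point there is nothing meaningful left in the signal.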
So what about meaningful complexity?
There’s an interesting relationship between disorder and meaningful complexity: as disorder increases, complexity tends to rise — up to a point. It hits a peak, and then it flips. As disorder keeps rising, complexity starts to fall. That tipping point — where complexity is at its maximum — is often called the edge of chaos.3
(See last week’s essay for a deeper dive into that idea.)
You can think of this complexity curve as having two sides.
On one side, increasing disorder leads to more complexity. This might be where most brain states sit. When we’re awake and aware, we might be closer to the top of the curve. But during deep sleep, coma, or anaesthesia, we might slide down that slope. On this side of the curve, disorder and complexity rise and fall together, and the relationship holds.
But on the other side of the curve — past the edge of chaos — disorder keeps increasing, while complexity begins to drop.
States like seizures or psychedelics might be examples of the brain operating on the far side of the complexity curve, where disorder keeps rising, but complexity starts to fall.
It might be that straying too far in either direction — too ordered or too chaotic — is what causes consciousness to fade. The brain may need to live in that narrow band near the edge of chaos to sustain awareness.
We don’t know for sure. But that explanation would fit the findings.
From this it might be tempting to conclude that the edge of chaos is the key — that if a system gets itself to the edge of chaos, consciousness will just show up.
Q3: Does consciousness come along for the ride when systems become appropriately complex?
Well… not quite. And for a few reasons.
First: Not all highly complex things are conscious.
Take the cerebellum — it’s that small, wrinkly structure tucked under the back of the brain. It’s one of the most neuron-dense, intricately wired parts of the brain. By almost any measure, it would be considered highly complex.
So if consciousness simply comes along for the ride when complexity gets high enough, you’d expect the cerebellum to be one of the brain regions that contributes most to conscious experience.
But it’s not.
People with damage to the cerebellum — or those who are missing large parts of it — can still have completely intact conscious experiences. The cerebellum is complex, yes. But apparently not in the way that matters for consciousness.4
Second: Complexity and consciousness may be correlated — but that doesn’t mean one causes the other.
Let’s start with a simple example.
Say every time we saw someone opening an umbrella, we also saw wet sidewalks. If we didn’t understand weather, we might think: umbrellas cause the sidewalks to get wet.
If so, we would be mistaken. Really, there’s a third thing happening: rain. Rain causes both the umbrellas to go up and the sidewalks to get wet. The two are correlated — but one doesn’t cause the other. Rain is the hidden third variable that explains both.
But our error might not stop there. If we only conducted our research on the relationship between umbrella use and wet sidewalks — when it was raining — we might think the correlation between umbrella use and wet sidewalks was rock solid — unbreakable even.
But we’d be very wrong. Because in many parts of the world people use umbrellas on sunny days too. For shade, not rain. Umbrellas go up, but sidewalks stay dry.
So what does this have to do with the brain?
Well, almost everything we understand about consciousness comes from studying humans — and to a lesser extent, other animals. We’ve mostly looked at complexity and consciousness in systems where we already assume consciousness is present. So it’s no surprise we keep finding a correlation between complexity and consciousness.
But that’s a bit like studying umbrella use only when it rains.
If we’re only looking at human brains — one specific kind of system — we might be overestimating how universal that complexity–consciousness link really is.
Just like the relationship between umbrellas and wet sidewalks, there could be a third thing happening: a particular kind of architecture or dynamic mechanism or function that is behind both consciousness and complexity.
If that’s the case, complexity and consciousness might just be co-travellers — like umbrellas and wet sidewalks.
So what could this hidden variable be?
Most theories of consciousness seem to agree that there’s a sweet spot for the brain. They don’t usually call it the edge of chaos, but the idea is similar — a balance between too much rigidity and too much randomness. Consciousness, it seems, needs just the right amount of tension in the system.
Where the theories seem to differ is in how they think the brain keeps that balance.
Integrated Information Theory (IIT) calls it integration and differentiation. Global Workspace Theory (GWT) talks about global broadcasting. Predictive processing theory describes the constant dance between expectation and surprise. And Recurrent Processing Theory (RPT) focuses on the difference between feedforward and feedback loops.5
Even the more speculative ideas — like Electromagnetic Field Theory (EMT) and quantum theories — think the system needs to stay in dynamic balance. Not too simple, not too chaotic.6
On almost every one of these mechanisms, today’s AI falls short.
Let’s take large language models, like ChatGPT.
They can simulate language, reason through problems, even hold a conversation. But they don’t have any of the balancing mechanisms that these theories of consciousness see as essential. They don’t self-regulate. They don’t respond to surprise in any adaptive way. They don’t coordinate across modules, or self-stabilise their own perceptions with feedback.
By the standards of any of these theories, they’re missing the thing that counts.7
They may be complex — but not in the way that seems to be required for consciousness.
At least according to these theories.
So where does that leave conscious AI?
According to most consciousness theories, just adding more complexity probably isn’t enough. Consciousness might need — or simply might be — specific kinds of structures, architectures, dynamics, and functions.8 So, if we build an AI like that, would it have conscious experiences?
Maybe.
Of course, it’s possible — perhaps even likely — that all of those theories are wrong. Or, at least incomplete.
Maybe consciousness isn’t simply feedback loops, or integration, or predictive models.
Maybe.
Next Week…
How much information do you think it takes to build a building? A lot, right?
We need blueprints. Plans. Sequencing. The entire structure has to be laid out in advance before anything can take shape.
In that sense, a blueprint holds a lot of information — everything needed to make a building.
But what about life? How much information does it take to build that?
Well… there’s no master diagram for an oak tree. And no detailed layout for a brain. All we seem to have is DNA. And yet somehow, from this simple code, we get the most complex thing in the known universe.
What kind of information is that?
Notes

1. There’s also the problem of confusing intelligence with consciousness — a system can be brilliant at solving problems without being aware that it exists. But that’s a topic for another essay.

2. Complexity measures are also not direct measurements of consciousness. They’re proxies — tools that infer the likelihood of awareness based on statistical patterns. Even leading techniques, like the Perturbational Complexity Index (PCI), only probabilistically estimate the presence of consciousness.

3. While popular in complex systems theory, the edge of chaos remains more of a theoretical metaphor than an empirically grounded principle in neuroscience. It helps frame discussions but lacks precise operational definitions in consciousness research.

4. Whether the cerebellum has absolutely no role in consciousness is still under discussion. Some recent studies suggest it might play a modulatory role in emotion or cognitive function. But the general point remains — it’s not a core substrate of consciousness.

5. It’s important to note that many of these theories are still under development and sometimes propose mutually exclusive mechanisms. There is no consensus on which — if any — provides a definitive account of consciousness.

6. Electromagnetic field and quantum theories of consciousness are speculative and not widely accepted within mainstream neuroscience. They are often criticised for lacking testable predictions and empirical support. By contrast, while theories like IIT are also speculative and debated, they are taken more seriously in the field. They have some testable models — such as the Perturbational Complexity Index (PCI) — that are actively used in consciousness research.

7. While large language models (LLMs) are predictive systems — trained to anticipate the next word based on prior context — their form of prediction differs fundamentally from the hierarchical, embodied, and dynamically updated predictive architectures theorised in cognitive neuroscience. Similarly, although LLMs integrate vast amounts of information during training, this integration is static and offline, not the active, recurrent, and self-organising integration posited by theories like IIT or GWT. Most LLMs operate with feedforward architectures and lack real-time feedback loops, self-generated goals, or intrinsic world models — mechanisms many theories consider essential to conscious processing. Recent AI research is exploring architectures that introduce recurrence, internal modelling, and perception-action loops, but these remain early-stage and do not yet replicate the kind of dynamic, integrated causality seen in biological cognition.

8. It’s worth noting that many leading theories of consciousness assume a computational or information-processing framework — that consciousness arises from particular kinds of processing structures, dynamics, or architectures. This assumption is rarely questioned directly but is foundational to theories like IIT, GWT, and some predictive processing theories. Yet it remains an open philosophical and scientific question whether consciousness is computable at all.