Functionalism: Why the Most Popular Consciousness Theory Might Be Wrong
Consciousness Theories. Physicalism #5
Functionalism is currently the most popular theory of consciousness. It's the idea that consciousness — all our thoughts, feelings, and beliefs — is defined not by what it is made of but by what it does. This theory is also behind the idea that AI might be conscious (now or sometime in the future) because, given the right type of machine, it could replicate the functional states of a human mind. But despite its popularity, not everyone is convinced by functionalism's claims.
So, what's the issue with functionalism?
This is Part 5 of our series on physicalism, the theory that the physical world can fully explain consciousness. This week we’re continuing our exploration of functionalism. Last week we set up three questions and addressed the first two:
What is functionalism? and,
Why might someone believe in functionalism?
You can find last week’s article here:
This week we’re asking the third question.
What are the Main Arguments Against Functionalism?
There are several arguments against functionalism. Let’s review five popular ones.
It's important to note that there are many different types of functionalism, and many of these variants evolved precisely in response to the arguments below. So not all of these arguments will apply equally to every type.
1. The Inverted Qualia Problem
Qualia refers to the qualitative and intrinsic aspects of our mental states. It's the what-it-is-like to see the redness of a ripe tomato, feel the sharpness of a bee sting on the tender side of your foot, taste the rich, nutty sweetness of pistachio gelato, or smell the crisp, salty ocean air after a storm.
Critics claim that functionalism simply cannot account for these subjective experiences. Even if we could fully describe all the functional and causal roles of a mental state, that type of explanation would still be missing something essential: the subjective feeling of the experience itself. Qualia, the critic claims, cannot be reduced to mere functional roles or computational states.
Philosophers have developed various thought experiments to illustrate this problem. The inverted spectrum argument is one such thought experiment.
The Inverted Spectrum
I have outlined the inverted spectrum thought experiment in my article Is Consciousness Computational?
Here’s what I wrote there:
Imagine two people. Let's call them Alice and Bob. Externally, they both seem to react to colours in the same way. They stop at red lights, go at green lights, and so on. However, internally, their experiences of colours are different. What Alice experiences as red, Bob experiences as green, and vice versa. Their colour experiences are inverted relative to each other.
According to functionalism, if two beings (like Alice and Bob) have the same functional responses to stimuli, they should have the same internal experiences. But the inverted spectrum hypothesis suggests that it's possible for two people to have the same functional responses (e.g., both stopping at red lights) while having different internal experiences (seeing different colours).
This scenario with Alice and Bob suggests that functional roles alone cannot capture the full nature of conscious experience. Critics suggest that the inverted spectrum thought experiment poses a significant challenge to functionalism.
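To see the structure of the argument, here's a minimal sketch in Python. Everything in it is a hypothetical toy of mine (the Agent class, the quale labels), not a model anyone has proposed: two agents that are functionally indistinguishable from the outside, yet whose private internal labels are inverted.

```python
# A minimal sketch, assuming a crude toy model of "agents".
# The class and quale labels are hypothetical, purely for illustration.

class Agent:
    def __init__(self, internal_code):
        # Maps a stimulus to a private internal label (a stand-in "quale").
        self.internal_code = internal_code

    def react(self, light):
        # Outward behaviour depends only on the stimulus, never on the
        # private label: both agents stop at red and go at green.
        return "stop" if light == "red" else "go"

    def experience(self, light):
        # The private internal state, invisible in behaviour.
        return self.internal_code[light]

alice = Agent({"red": "R-quale", "green": "G-quale"})
bob = Agent({"red": "G-quale", "green": "R-quale"})  # inverted relative to Alice

for light in ("red", "green"):
    assert alice.react(light) == bob.react(light)            # same function...
    assert alice.experience(light) != bob.experience(light)  # ...different "feel"
```

Nothing in the input-output profile distinguishes Alice from Bob, which is exactly the gap the critic is pointing at.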
In a future article, I’ll discuss the functionalist’s counter-arguments to these claims. In the meantime, you might find the article Is Consciousness Computational? interesting. There, I outline one counter-argument to the inverted spectrum thought experiment.
2. The Absent Qualia Problem
Philosophers love thought experiments. A thought experiment asks us to imagine a situation that lies beyond our ordinary experience. Two of the best-known thought experiments against functionalism both involve China: The Chinese Room by John Searle and The China Brain (also known as the Chinese Nation) by Ned Block. Both make similar arguments.
Searle’s Chinese Room thought experiment is probably the most popular. I wrote a little about it in my article — If AI Were Conscious, How Would We Know? I have more to say about this thought experiment, and I’ll do that in a future post, but I want to focus on Block’s argument in this article.
The China Brain
In 1978, Ned Block proposed his thought experiment as part of his famous paper, Troubles with Functionalism.
Block writes:
Imagine a body externally like a human body, say yours, but internally quite different. The neurons from sensory organs are connected to a bank of lights in a hollow cavity in the head…. Inside the cavity resides a group of little men. Each has a very simple task...
The idea of little men inside the head refers to homunculi, a term derived from Latin meaning little men or miniature humans.
Block continues:
How many homunculi are required? Perhaps a billion are enough; after all, there are only about a billion neurons in the brain.
China's population was around 1 billion people in 1978, so Block chose China for his thought experiment. However, Block significantly underestimated the number of neurons in the brain. In 1978, the estimated number of neurons in the brain was actually 100 billion (we now estimate it to be about 86 billion for adults).
Block continues:
Suppose we convert the government of China to functionalism… We provide each of the billion people in China… with a specially designed two-way radio that connects them in the appropriate way to other persons and to the artificial body mentioned…
Block continues by imagining that the entire population of China is organised to simulate the functional roles of neurons in a brain — let’s say your brain.
Each person, equipped with a special radio, acts as a single neuron, receiving and transmitting signals according to simple rules. This massive system is connected to the artificial body.
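To give a feel for what those "simple rules" could amount to, here's a rough sketch of my own (nothing like this appears in Block's paper): model each person as a threshold unit, and the whole arrangement becomes plain message passing.

```python
# An illustrative toy, not Block's formalism: each citizen's entire job
# is one simple rule: count incoming radio signals and broadcast 1 if
# they cross a threshold, otherwise broadcast 0.

def person_as_neuron(incoming_signals, threshold=2):
    """One citizen: receive 0/1 radio signals, transmit 0 or 1 onward."""
    return 1 if sum(incoming_signals) >= threshold else 0

# A three-person "layer" feeding one further person who drives the body.
layer_inputs = [[1, 1, 0], [0, 1, 1], [1, 0, 0]]
layer_output = [person_as_neuron(signals) for signals in layer_inputs]
body_command = person_as_neuron(layer_output)
print(layer_output, body_command)  # [1, 1, 0] 1
```

Scale this up to a billion people and you have the functional organisation Block describes; the philosophical question is whether anything in that organisation feels like anything.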
The crucial question Block poses is whether this system, which perfectly replicates the functional organisation of your brain, would possess consciousness or subjective experiences as you do.
Block proposes this thought experiment because he is convinced most people will say — obviously, it’s not conscious. And if we believe that this "China Brain" lacks genuine consciousness despite being functionally identical to your brain, it suggests there's something more to consciousness than the functionalist account allows.
Functionalist counterarguments to this absent-qualia problem typically follow two paths: either they claim that this "China Brain" would indeed be conscious, or they challenge the conceivability of the thought experiment itself. The second approach argues that, as described, it is simply not physically possible for such a system to replicate the functional organisation of a human brain, and that it would therefore not function like one.
3. The Too Liberal Problem
You may remember from a few weeks ago that the mind-brain identity theory faces criticism for being too conservative or carbon chauvinistic — it doesn't allow for the possibility of consciousness in non-biological systems.
Last week, we discussed multiple realisability, an aspect of functionalism often seen as an advantage over the identity theory. This feature allows consciousness to exist in various physical substrates, not just biological brains.
However, while multiple realisability is generally considered a strength of functionalism, some argue it goes too far in the opposite direction. Critics claim that functionalism is too permissive, potentially attributing consciousness to systems we wouldn't want to consider conscious. In other words, if identity theory was too conservative, functionalism might be too liberal.
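To see why critics say "too liberal", consider a deliberately silly sketch (my own hypothetical, with made-up names): a system that satisfies a crude functional specification of pain while being nothing but a lookup table.

```python
# A hedged sketch: a trivially simple system that plays a crude causal
# role for "pain". All names here are illustrative.

class LookupTablePain:
    # The entire "mind" is one lookup table: damage in, avoidance out.
    RESPONSES = {"tissue_damage": ("wince", "avoid_source")}

    def respond(self, stimulus):
        return self.RESPONSES.get(stimulus, ())

trivial_system = LookupTablePain()
print(trivial_system.respond("tissue_damage"))  # ('wince', 'avoid_source')
```

If occupying the right causal role were all there is to being in pain, it would be hard to explain why this table isn't in pain. That is the too-liberal worry in miniature.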
4. The Homunculus Fallacy Problem
In Part 1 of the series on The Five Most Controversial Ideas in the Study of Consciousness, we discussed the homunculus fallacy.
Here’s a recap and how this fallacy might relate to functionalism:
Because functionalism explains mental states in terms of their functional roles, some critics have argued that this explanation implicitly assumes some sort of inner "processor", "overseer", or "interpreter" of these functional states. They think that for functional states to have meaning, some internal mechanism must read or interpret them.
This leads some to ask: how do we explain the conscious experiences of this inner interpreter?
If we explain the inner interpreter's consciousness with another set of functional relationships interacting with other mental states, we'd need another interpreter to interpret those interactions. This process could continue indefinitely, creating an infinite series of "homunculi" or inner interpreters.
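The shape of that regress is easy to render as code. This is my own illustration, not any philosopher's model: if functional states only acquire meaning once something reads them, and the reader is itself just more functional states, interpretation never bottoms out.

```python
# A toy rendering of the regress (illustrative only).

def interpret(functional_states):
    # To give these states meaning, posit an inner reader...
    inner_interpreter = {"reads": functional_states}
    # ...but the reader is just more functional states, which in turn
    # need their own reader, and so on without end.
    return interpret(inner_interpreter)

# interpret({"pain": "active"})  # would recurse until Python gives up
```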
Ned Block's China Brain thought experiment illustrates this problem. In his scenario, the functional organisation of a brain is simulated by the entire population of China. The question arises: where does consciousness emerge in this system? Some might suggest that consciousness only appears when the system is connected to the artificial body, implying the need for a central "interpreter" — or a theatre of sorts — where the input from all 1 billion people comes together as one experience for the interpreter to experience. But this interpretation is the Homunculus Fallacy in fine form.
It's worth noting that not all forms of functionalism fall prey to this critique. For instance, Daniel Dennett offers a version of functionalism in his book Consciousness Explained that attempts to address these concerns. Dennett's approach avoids positing a central interpreter, instead describing the mind as a hierarchy of progressively simpler functional units, until the lowest-level tasks are simple enough to be carried out mechanically at the level of the neuron (or groups of neurons).
5. The Function Needs Form Problem
Functionalism often compares brains to computers. The common claim is that the mind is software running on the hardware of the brain.
Not all versions of functionalism strictly adhere to the computer analogy, but the recent popularity of AI has many leaning towards this view. The claim is that just as artificial neural networks are a complex computational process, so are real neural networks. In recent years, computational functionalism has become one of the more prominent theories of consciousness.
But not everyone is convinced by this analogy — indeed, many disdain such a view.
Neuroscientist Anil Seth questions the assumptions behind the brain-is-a-computer analogy in his recent article Conscious Artificial Intelligence and Biological Naturalism.
Seth explains that in traditional computing, software is designed to work the same way regardless of what specific computer it's running on. We can take software, copy it, and execute it on different hardware. This is true even for adaptive software, like machine learning algorithms. The entire system relies on the separation between software and hardware.
According to Alexander Ororbia and Karl Friston, software like this is immortal — it can theoretically exist forever, independent of any particular hardware.
Seth argues that the brain's very structure challenges the computer analogy. Unlike computers, which clearly distinguish hardware and software, brains show no such separation between wetware and mindware. In the brain, the neural connections (wetware) are constantly changing. And these physical changes are thought to be responsible for our experiences (mindware). When it comes to brains, there may be no way to separate their form from their function.
Based on work by Alexander Ororbia and Karl Friston, Seth suggests that if the brain is doing computations, they must be mortal — once the wetware (brain) changes or fails, the mindware (specific functions) will cease to exist.
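One way to picture the contrast, as a toy of my own rather than anything from Seth's paper or Ororbia and Friston's formalism: an "immortal" program is a fixed, copyable description, while a "mortal" computation has no fixed description separate from its changing substrate.

```python
# "Immortal" software: a fixed description, copyable across substrates.
program = [("add", 2), ("mul", 3)]

def run(program, x):
    # Any hardware that executes these steps gives the same answer.
    for op, arg in program:
        x = x + arg if op == "add" else x * arg
    return x

assert run(program, 1) == 9  # identical on a laptop, a phone, a server

# A "mortal" computation: the rules rewrite themselves as the system
# runs, so there is no fixed program to lift off the substrate and copy.
state = {"weight": 1.0}

def mortal_step(x):
    state["weight"] += 0.1 * x  # the "wetware" changes with every use
    return x * state["weight"]

print(mortal_step(1.0), mortal_step(1.0))  # outputs drift: 1.1, then ~1.2
```

Copy the first program to new hardware and nothing is lost; try to copy the second and you must also capture a substrate state that is different every time you look.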
This inseparability of form and function in the brain challenges a key tenet of functionalism: the multiple realisability thesis. If Seth is correct, and the brain's wetware is inextricably linked to its mindware, it suggests that consciousness might be more tightly bound to its biological substrate than functionalism assumes. This would imply that creating conscious AI using non-biological hardware and immortal software could be more challenging or even impossible if consciousness depends on the specific properties of biological neural networks.
The Sum Up
Functionalism, despite its popularity in explaining consciousness, faces several challenges.
The inverted qualia problem questions whether functional roles can truly capture subjective experiences, while the absent qualia argument, exemplified by the China Brain thought experiment, suggests that functional equivalence might not guarantee consciousness.
Critics also worry that functionalism might be too permissive, potentially attributing consciousness to unlikely systems.
The homunculus fallacy problem concerns an implicit need for an inner interpreter, leading to a potentially infinite regress.
Finally, the function-needs-form argument challenges the brain-computer analogy, suggesting that consciousness might be inseparable from the brain's physical structure.
These critiques don't invalidate functionalism entirely, but they do highlight where the theory is most vulnerable. The debate over whether functionalism can stand as a solid theory of consciousness remains far from settled.
We will leave functionalism for now and continue exploring other physicalist views. Next in the series is Eliminative Materialism.
Thank you.
I want to take a small moment to thank the lovely folks who have reached out to say hello and joined the conversation here on Substack.
If you'd like to do that, too, you can leave a comment, email me, or send me a direct message. I’d love to hear from you. If reaching out is not your thing, I completely understand. Of course, liking the article and subscribing to the newsletter also help the newsletter grow.
If you would like to support my work in more tangible ways, you can do that in two ways:
You can become a paid subscriber
or you can support my coffee addiction through the “buy me a coffee” platform.
I want to personally thank those of you who have decided to financially support my work. Your support means the world to me. It's supporters like you who make my work possible. So thank you.