Once upon a time, there was no meaning.
And now, it seems, meaning saturates our lives.
You get a text that says, “On my way!”
That string of letters isn’t just a string of letters. It means something to you, and to the person who sent it. There’s a kind of this-stands-for-that quality to it.
But how did that happen? How did letters become not just letters? How did symbols become meaningful?1
Over the past six essays, we’ve been circling those sorts of questions. Can symbols alone be enough for meaning? Do symbols need to be grounded in bodies, our history, or the world? And what does it mean to say that brain activity has meaning?
And yet, every time we tried to pin meaning down, it seemed to slip through our fingers.
We ran into strange puzzles — circular explanations, metaphorical shortcuts, and concepts that turned out to be more confused than we first thought.
The closer we looked, the blurrier it seemed to get.
Meaning, it seems, is troublesome.
So, what is the trouble with meaning?
The trouble has a lot to do with representations. So, to wrap up this series, let’s focus on representations.
We’ll ask three questions:
What is a representation?
What do representations mean in neuroscience?
What is the trouble with representations in the brain?
The Trouble with Meaning Series
This is essay 7, the last in the series on the trouble with meaning.
Here are the other essays in the series:
Q1: What is a representation?
I’ve mentioned a few times in this series that a representation typically involves three parts: a vehicle, a target, and a consumer.2
Let’s take a simple everyday example.
Think about the small red battery symbol that sometimes appears at the top of your phone.
The vehicle is the red symbol itself — the battery icon.
The target is what that symbol points to — something like: “your battery charge is low.”
The consumer is you — the person using the phone, who sees the icon and goes to find a charger.
So we could say, a symbol (the vehicle) stands in for something (the target) for someone or something that uses it (the consumer).
Once you’ve got those three pieces — a vehicle, a target, and a consumer — and a mapping between them, you’ve got a representation.
And we can say that the red symbol at the top of your phone is about your phone’s battery level, because it means something to you.
This is how aboutness and meaning come along with representations.
A vehicle is about a target because it means something to the consumer.
That’s the everyday idea of a representation.
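If it helps to see the three parts laid out side by side, here is a minimal sketch in Python. It is purely illustrative; the class name and fields are just my own way of spelling out the triad, not anything standard from the philosophical literature.

```python
from dataclasses import dataclass

@dataclass
class Representation:
    vehicle: str   # the symbol doing the standing-in
    target: str    # what the symbol is about
    consumer: str  # who or what uses the symbol

# The battery-icon example from above, spelled out as a triad.
battery_icon = Representation(
    vehicle="red battery icon at the top of the screen",
    target="the phone's battery charge is low",
    consumer="the phone's owner, who goes looking for a charger",
)
print(battery_icon)
```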
But what about brain representations?
Let’s see how vehicle, target, and consumer fit with neuroscience talk. Let’s say you see a dog.
The vehicle would be some kind of brain activity — a pattern of neural firing that happens when you see a dog.
The target is the thing that the activity points to — like something in the world, or an idea, a thought, a belief, or a decision. In this case, it’s the dog itself.
And the consumer is… who, exactly? You? The person with the brain? The brain itself?
There’s something off here.
A vehicle (the brain activity) is about a target (a real dog in the world) because it means something to the consumer. But who is the consumer in this situation?3
Calling the entire brain the consumer feels odd. Most neuroscientists would point to other parts or systems of the brain, maybe downstream circuits — motor, memory, or decision systems. But, as we’ll see, that move can land us in trouble too.
Q2: What do representations mean in neuroscience?
A few years ago, two philosophers set out to find out what neuroscientists actually mean when they use that word, representation.4 They surveyed researchers across the cognitive sciences and concluded that…
well…
That neuroscientists use the word representation in a “confused and unclear” way.
Apparently, we’ve all been using the word representation to mean different things.
How surprising!?
The most common usage is to mean a link between brain activity and sensory inputs — like the inputs that come from our eyes, ears, and body.5
In other words, the neuroscientist looks for a causal or statistical connection between the brain activity and the sensory input. And we expect the brain responses to vary systematically with the sensory inputs in a way that’s reliable, repeatable, and ideally causal.
So if you’re looking at a dog, the brain responds one way. If you’re looking at a person, it responds another. And the activity pattern is consistent enough that, at least in theory, you could look at the brain activity alone and make a pretty good guess about what the person was seeing.
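To make that “good guess” idea concrete, here is a toy decoding sketch in Python. The activity is simulated, and the two stimulus categories, the noise level, and the linear classifier are all my own illustrative assumptions, not a reconstruction of any particular study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Two made-up stimulus categories (0 = dog, 1 = person), each pulling the
# simulated population activity toward a different mean pattern.
labels = rng.integers(0, 2, n_trials)
patterns = rng.normal(0, 1, (2, n_neurons))                            # category-specific signal
activity = patterns[labels] + rng.normal(0, 2, (n_trials, n_neurons))  # signal plus noise

# Fit a decoder on half the trials, then guess the stimulus on the rest
# from the activity alone.
decoder = LogisticRegression(max_iter=1000).fit(activity[:100], labels[:100])
print("decoding accuracy:", decoder.score(activity[100:], labels[100:]))
```

When accuracy like this comes out well above chance, the neuroscientist will often say the activity “represents” the stimulus category — and that usage is exactly what the next few paragraphs put under pressure.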
So, we read sentences like, activity in part of the inferior temporal cortex represents face perception. Or activity in the lateral occipital complex represents object shape.
These kinds of statements are common in the neuroscience literature.
But is this really a representation?
There seem to be a few things not quite right with this picture. Some of the features of representations seem to be missing.
First
It is not clear who or what the consumer is in this situation. The neuroscientist is comfortable talking about the vehicle (the neural activity) and its link to a target (some input). But who or what is making the neural activity about the input? I’ll come back to this point in a bit.
Second
Something else seems to be missing, too. You may recall that one key feature of a representation is that it can be wrong. A symbol on a map can misrepresent the real world. A battery icon can light up when the battery’s actually fine.
But in neuroscience, there’s a noticeable reluctance to say that brain activity misrepresents anything.6
If some sensory input doesn’t line up with the expected brain activity, we don’t say the brain activity misrepresented it. We look for an explanation. We say there is noise in the signal, or that some compensatory mechanism was involved. But we don’t usually say the brain activity was wrong.
Third
Some philosophers suggest that the so-called representations neuroscientists talk about are not representations at all. They are natural signs.7
So what is a natural sign?
Good question, I’m glad you asked!
A natural sign is a physical or observable feature that correlates with or is caused by some state of affairs — without needing an interpreter, or a convention, or a user to give it meaning.
These philosophers want to point out a difference between something standing in for something else — like a battery icon standing in for the actual battery level — and a link between two things.
A natural sign is not a symbol that stands in for something else; it’s a link.
The classic example is smoke. Smoke is a natural sign of fire. We can say that the presence of smoke correlates with fire or even that smoke is caused by fire, but we wouldn’t say that smoke stands in for fire.
Smoke is just a natural sign of fire.
We can distinguish natural signs from representations in three ways:
1. Natural signs don’t require a consumer. Smoke still exists even if no one is around to see it.
2. Natural signs can’t misrepresent. Natural signs depend on causal or statistical correlations between two things. So when the link misfires — say, there is smoke but no fire — it’s treated as an exception, not a mistake. Representations work differently: a representation involves a symbol (the vehicle) that stands in for something else, and that symbol can be wrong. A symbol on a map can mislead you. A red battery icon can flash even when the battery’s full.
3. Natural signs don’t rely on conventions. Language, traffic signs, battery icons, and emojis only work as representations because we agree on what those symbols mean. This is not true for natural signs.
So what does this mean for representations in the brain?
When we see a dog and there is some corresponding activity in our brain, we have found a link between the sensory input and brain activity. Does that mean we have a natural sign and not a representation?
If this is true, and brain activity is simply a natural sign that is caused by or correlated with inputs, then how does meaning ever enter the picture? Do we need an interpreter?
Q3: What is the Trouble with Representations in the Brain?
You may have seen or heard some version of the standard textbook explanation of how the brain works: Input comes in through the senses → the brain processes that information internally → then sends an output.
This same process of input → internal process → output is used to describe lots of things.
Your desktop computer takes input from a keyboard, processes it, and shows you a result on screen.
Your digestive tract takes in food, breaks it down, and produces… well, results.
Even your toaster follows the same general flow.
This process is commonly used to explain many functions in the sciences.
Let’s take nutritional science as an example. To understand digestion, a nutritional scientist might link what people eat (input) with activity in the gut (internal processes) and with the output, and from these links draw conclusions about how digestion works.
Neuroscientists employ the same process. To understand what the brain does, a neuroscientist might link what hits our retinas (input) with activity in the brain (internal processes) and with output (behavioural responses), and from these links draw conclusions about how the brain works.
But something interesting happens in neuroscience that doesn’t happen in digestive science. In neuroscience we treat that correlation or causal link as if it explains what the brain activity is about. We say the activity in your brain means what hit your retina because it’s correlated with or caused by what hit your retina.
We don’t make this move in nutritional science. We don’t say the activity in your digestive tract means a pink frosted donut with sprinkles because that activity is correlated with or caused by eating a pink frosted donut with sprinkles.
In digestion, activity in the gut is a marker of what you just ate — not a message about donuts. But in visual neuroscience, spikes in the visual cortex are routinely called a representation of what you see.
It’s the same functional logic (input → process → output), but our interpretations carry wildly different metaphysical loads.
We want to say that the brain is not like digestion (or your toaster). We want to say the brain does something more. We want to say that what happens in the middle — those internal processes — is about things. We want to say the brain has content. That its internal states mean something. That they are about something. That they represent things.
When we label brain activity as a representation simply because it links (statistically or causally) with input, are we skating on thin conceptual ice? Is what we call an explanation of meaning, simply an added label of meaning? Have we explained meaning at all? Or have we simply snuck in meaning through the backdoor?
For example, Christof Koch and Francis Crick famously suggested that:
“A good way to begin to consider the overall behavior of the cerebral cortex is to imagine that the front of the brain is ‘looking at’ the sensory systems, most of which are at the back of the brain.”8
Taken as strictly neuro-anatomical shorthand, statements like this might seem like a harmless metaphor.
But others suggest there is sneaky trouble here. This sort of language, where we suggest that one part of the cortex “looks at” another part, invites an easy — almost automatic — interpretation:
The back-of-brain activity is treated as the vehicle that stands in for the target (the world).
And the front-of-brain becomes the consumer who finds meaning in that back-of-brain activity.
This is what philosophers call teleological thinking.9 It treats one brain region as if its activity is for the sake of something else — as if it’s “looking at” or “interpreting” another region with a goal in mind. The back of the brain becomes the map, the front becomes the map-reader.
It turns causal interactions between neurons into a kind of purpose-driven story. The front processes the back in order to make sense of the world.
That’s how meaning sneaks in. Not by explanation, but by implication. We apply the vehicle, target, and consumer framework to the brain as we would for a person reading a map, or a user reacting to a battery icon, as if there’s someone in there to do the reading.
This kind of language risks bringing back what we’re supposed to have thrown out long ago: the homunculus.10 The brain activity is for the little inner observer who interprets, understands, and acts. We’re explaining brain activity by what it’s for rather than how it came to be. This is the hallmark of teleological thinking. We may not call it a little person anymore. We’ve swapped out the little man in the control room for one brain region reading another. But the logic is the same.
Just like the original little person homunculus, this frontal-lobe reader needs explaining. After all, if the front of the brain is interpreting the back, then who — or what — is interpreting the front?
If we are not careful, we find ourselves caught in the same old loop: an infinite regress of map-readers, interpreters, and inner consumers, each needing another to make sense of the last.
The Sum Up
We’ve covered a lot of ground in this series on the trouble with meaning. And we’ve been chasing one slippery question: where does meaning come from?
Perhaps one of the biggest troubles with explaining meaning in the brain is that we keep trying to explain it backwards.
We start with the end — what something is for — then reason backwards.11 It is natural to do this for built things. A plane wing is for flying. But we use this talk for living things too. A heart is for pumping blood. A brain is for thinking. And activity in the back of the brain is for the front of the brain. Before we know it, we’ve smuggled meaning into places it doesn’t belong.
Science has ways of dealing with that kind of problem. One of them is the theory of evolution by natural selection.12 It explains function without foresight. It explains how things come to look to have design and purpose, without needing a designer. It dissolves the idea that everything has to be for something.
Another classic move is to break a system into smaller and smaller parts. If the brain seems too mysterious, just break it down into smaller parts. Instead of looking for a little thinker inside the brain, break that thinker into simpler subroutines, and then break those down again — until you’re left with tiny, mechanical operations.13 No homunculus. No ghost in the machine. Just little mindless steps that add up to something smarter.
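As a toy illustration of that decomposition move, here is what “mindless steps that add up to something smarter” can look like in Python, using the battery icon from earlier rather than a brain. Every number, threshold, and function name is invented for the sake of the sketch.

```python
# No inner observer decides the battery is low; the "decision" just falls
# out of chaining a few trivial, mechanical steps.

def measure_voltage() -> float:
    return 3.55  # pretend sensor reading, in volts

def voltage_to_percent(volts: float) -> float:
    # crude linear map from 3.3 V (empty) to 4.2 V (full)
    return max(0.0, min(1.0, (volts - 3.3) / (4.2 - 3.3))) * 100

def is_low(percent: float, threshold: float = 20.0) -> bool:
    return percent < threshold

def choose_icon(low: bool) -> str:
    return "red battery icon" if low else "normal battery icon"

print(choose_icon(is_low(voltage_to_percent(measure_voltage()))))
```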
But some see trouble with this idea too.
Some thinkers want us to notice what’s underneath the need for this sort of strategy. The strategy is especially needed when we assume that minds work like typical machines that take inputs, process them, and spit out outputs. This standard model, they say, is the problem. It invites the homunculus in and then requires us to explain why there is actually no homunculus.14
For this reason, many of our modern theories of how the brain works blur the lines between input and output.15 There’s no clear beginning or end — just a continuous dynamical loop of sensing, predicting, acting, and updating.
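Here is the barest toy version of such a loop in Python. The thermostat-style setup, the gains, and the noise levels are my own illustrative choices, not a model from the predictive-processing literature. Notice that there is no tidy input stage followed by an output stage; each pass around the loop feeds the next.

```python
import random

temperature = 15.0   # the bit of the world the loop keeps in touch with
estimate = 15.0      # the agent's running guess about that temperature
target = 20.0        # what the agent is trying to bring about

for step in range(8):
    sensed = temperature + random.gauss(0, 0.2)    # sensing (noisy)
    estimate += 0.5 * (sensed - estimate)          # updating the guess
    action = 0.4 * (target - estimate)             # acting to close the gap
    temperature += action + random.gauss(0, 0.1)   # the world responds
    print(f"step {step}: sensed={sensed:.2f}  estimate={estimate:.2f}  action={action:+.2f}")
```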
So, what happens when we stop treating the brain as a message-passing machine and start seeing it as a dynamical system?
Does the regress finally break? Does it explain meaning? Or have we merely handed the homunculus a new kind of hamster wheel?
That, I’m afraid, is a puzzle for another day.16
Next Week…
No essay next week — I’m heading to Sydney for the Asia Pacific Conference on Vision. I’ll be back on July 1st with some new puzzles to ponder.
Let’s be careful here. We shouldn’t presuppose a binary situation — meaning vs. no meaning. Plus, let’s not forget that lots of processing is non-conscious and still might count as ‘meaningful’ in a deflationary sense.
Cognitive scientists, following Peirce, Dretske, and Millikan, often describe a representation as a triad.
Neuroscientists disagree on who or what consumes a neural pattern: downstream circuits? The whole organism? This is a live debate.
The study is Investigating the Concept of Representation in the Neural and Psychological Sciences by Luis H. Favela & Edouard Machery (Frontiers in Psychology, 2023). They surveyed 736 psychologists, neuroscientists, and philosophers and found widespread uncertainty about when the label representation applies. [Open Access]
Baker, Lansdell & Kording’s review “Three Aspects of Representation in Neuroscience” lists correlation/encoding as the first and most widely used sense.
In practice, papers usually invoke “noise,” “adaptation,” or “decoding error” rather than calling the code itself false. See Investigating the Concept of Representation in the Neural and Psychological Sciences by Luis H. Favela & Edouard Machery (Frontiers in Psychology, 2023) for more on this point.
Charles S. Peirce on index, Dretske’s Knowledge and the Flow of Information (1981), and Millikan’s bio-semantic work all distinguish natural signs from conventional representations.
Crick, F., & Koch, C. (2003). A framework for consciousness. Nature Neuroscience, 6(2), 119–126. https://doi.org/10.1038/nn0203-119. [PDF]
Philosophers use teleological explanation for “process X exists for the sake of Y” (see Millikan 1989, Papineau 2021).
This is the classic objection (Dennett 1991). Note that Crick & Koch themselves say “This division of labor does not lead to an infinite regress.”
Dennett, D. C. (1995). Darwin’s dangerous idea: Evolution and the meanings of life. Simon & Schuster.
Godfrey-Smith, P. (2001). Three kinds of adaptationism. In S. H. Orzack & E. Sober (Eds.), Adaptationism and optimality (pp. 335–357). Cambridge University Press.
Darwin, C. (1859). On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. John Murray.
Dawkins, R. (1986). The blind watchmaker: Why the evidence of evolution reveals a universe without design. W. W. Norton & Company.
Dennett (1995) calls this ‘homuncular decomposition’: keep shrinking the inner agent until there’s no agent left.
This is the critique of the enactivists (e.g., Varela, Thompson & Rosch 1991) and ecological psychologists.
Predictive-processing, active-inference, and sensorimotor-contingency theories all stress continuous perception–action loops (e.g., Friston 2010; Clark 2013; O’Regan & Noë 2001). For example, on active-inference views, perception itself is a covert form of motor control, which blurs input/output.
If you found this series interesting, you might find this conversation interesting too! It covers a lot of the same ideas, especially those discussed in this essay.
This has long been my concern with the word "representation": it seems to imply something being presented to an inner observer (re-presentation). If we use words like "schema", "model", or even "reaction cluster" or "early dispositional pattern", it seems more evident that this is actually part of the processing of a system, something we can imagine happening in a computer or dynamical system.
It doesn't surprise me that everyone is using "representation" to mean different things, since everyone is using words like "consciousness", "mind", or "emotion" to mean different things as well, often even the same person in the same conversation. This language ambiguity, I think, offers the impression of deep mysteries. When we use more precise language, mysteries remain, but they seem a lot less intractable, more conducive to scientific investigation.
Interesting post, as always Suzi!
One challenge is that "meaning" is a high-level concept. Restricted, I think, to humans, an abstraction. We create meaning, it's not something out there we find. Meaning to you may not be meaning to me. It's a rabbit hole concept like "consciousness" or "representation" or "real". I'm not sure it's possible to define such nebulous concepts effectively. (Endlessly palatable for philosophers, though.)
Maybe a problem with representation is the pigeonholes "vehicle", "target", and "consumer". That works okay for external symbols, but as you say, "And the consumer is… who, exactly? [...] There’s something off here." The notion of natural signs seems more on target.
I thought about the way we train LLMs. The encoding that results from their training seems more aligned with natural signs — THIS experience causes THAT encoding — than with representations — symbols standing for experiences. In part because it's impossible to say exactly *where* facts are stored in an LLM. There are no concrete symbols, just a unified set of parameters. Like a unified set of trained neurons in a brain.
In software, an old decomposition approach is IPO — Input-Process-Output. As you point out, it's a general framing that applies to many processes, including many aspects of humans. I do think it applies to brains although, as with software, it's recursive. Each Input, Process, and Output is itself made of IPOs, which also decompose to IPOs, and so on until you get to the most basic functionality. In brains, even neurons can be decomposed — synapses are their own IPOs (composed of biochemical IPOs). FWIW, I see the brain as more like an old analogue radio, a signal processor, than as a numerical information processor.
I wonder if the question of the homunculus is another version of the Hard Problem. How can clay have opinions? Why is this IPO system self-aware? I think to the extent a "homunculus" exists, it's the whole brain having that self-awareness.
Very interesting series. Looking forward to whatever is next. Have fun in Sydney!