What Gives Brain Activity Meaning?
Can teleosemantics explain how mental representations get their meaning?
We’re heading into the colder months here in Australia, which means the snakes are getting busy — feeding up before brumation, the reptile version of hibernation.
Just recently, not far from where I live on the east coast, a carpet python was found with an unusual meal in its belly: two golf balls.
Thanks to some quick-thinking wildlife volunteers, the snake survived. But the story made the rounds in local news outlets, mostly because of the obvious question: did the snake mistake golf balls for eggs?1
You might think there’s not much going on in a snake’s head. And fair enough — snakes aren’t known for their philosophical depth. But still — it seems even a snake has to tell the difference between food and not-food. Presumably, something in its nervous system helps it do that.
This is what some call a representation. And some might think this representation in the snake’s brain is about eggs.
At least, that’s how we often talk about what happens in our own brains — when we think about eggs, or dogs, or why the sky is blue.
But this raises two puzzles. First, how can any brain representation be about something? And second, if it is about something — how can it misrepresent it?
If the snake’s brain is representing egg, then what’s it doing being active for golf balls?
How do we explain that without invoking mystery or magic?
There are two approaches. One says: let’s keep the idea of representation, but try to naturalise it — explain it using nothing more than physical stuff and its relations.2 The other is more radical. It says: we should ditch the idea entirely. Because the whole concept of representation is the problem.
This week, we’ll explore the first path — keeping representations, but naturalising them. Next week, we’ll turn to the alternative view.
So here’s our question for this week: Can we find a naturalistic theory of representation that explains how representations can be about things?
To find out, we’ll ask three questions:
Can we naturalise representations?
What’s the leading proposal for how to do it?
And what do the critics say?
First, a quick note…
This is Essay 5 in The Trouble with Meaning series. You don’t need to read the earlier essays to follow this one, but if anything here feels a bit murky, Essay 4 will probably be the most helpful.
Here’s what we’ve covered so far:
Essay 1: Searle’s Chinese Room — can a system understand meaning just by following rules?
Essay 2: Symbolic Communication — how do symbols work, and what would it take to share them with aliens?
Essay 3: The Grounding Problem — how do words (or thoughts) get their meaning in the first place?
Essay 4: The Rise (and Trouble) of Representationalism — why many scientists think the brain represents the world, and why that view runs into trouble.
If we want a naturalistic theory of the mind, there are a few things it needs to do. Back in Essays 3 and 4, I laid out four key criteria.
We want a theory that:
Is naturalistic — no magic, no mysteries.
Fits with an objective, mind-independent reality.
Accounts for mistakes — not just when things go right.
Avoids circularity — no infinite regresses, no hidden homunculi doing the real work.
These four won’t satisfy every philosopher’s wish-list, but they set the bar we’ll use for this week’s essay. So, let’s see if we can find a theory that clears it.
Q1: Can We Naturalise Representations?
Let’s start with the basics. A representation is commonly thought of as something that stands in for something else. So, some might say that certain brain activity represents things — dogs, eggs, the blue sky.
The brain activity is the vehicle.3 The thing the brain activity stands in for is the target. Targets might be physical objects, like a snake, or abstract ideas, like justice.
So how do vehicles come to be about their targets?
The classic views proposed three main answers:
Co-location: the vehicle and target are in the same place at the same time.
Resemblance: the vehicle looks or sounds like the target.
Simple causation: the target causes the vehicle.
Let’s take them one at a time.
Co-location
This one fails quickly. We can think about (or represent) the Eiffel Tower while we’re in Sydney. We can remember someone who’s no longer with us. And we can imagine things that don’t exist, like unicorns. Co-location can’t explain those cases — and it can’t explain misrepresentation either. So we can set it aside.
Resemblance
Back in 2015, someone opened a tub of butter and saw the face of Donald Trump.4 The shape resembled him — but does that mean the butter was about Trump? Probably not.
We might say the shape of the butter is well suited to represent Trump — but only if someone decides to treat it that way. The resemblance alone can’t be the whole story. The butter can only be about Trump because someone already has Trump in mind.
And that puts us in a loop. We’re using a representation to explain a representation. It fails our circularity criterion.
Simple Causation
Maybe a representation is about what caused it.
Take a photograph of Trump. It’s about Trump because Trump caused it — through light bouncing off his face onto the camera sensor. The photo has a causal link to its target.
But that’s not always enough. Wet grass is caused by rain, but we wouldn’t say the wet grass is about rain.
And what about misrepresentation?
Say you see a coiled garden hose and mistake it for a snake. We might want to say that your brain representation was caused by the hose.5 So should we say whatever is going on in your brain is about the hose? That feels wrong. If it is about anything, isn’t it about a snake, even though there’s no snake there?6
Causal theories need something more.
One attempt comes from philosopher Jerry Fodor. His idea is called asymmetric dependency.7 Here’s the basic version:
Your brain is active in some way for snakes. But sometimes your brain is active in a similar way for garden hoses that look like snakes. The activity is about snakes, Fodor says, because the garden hose only causes it because snakes do. If snakes didn’t cause that brain activity, hoses wouldn’t either. But snakes would still cause it, even if hoses never did. That’s the asymmetry.
This lets us keep a causal theory and explain misrepresentation. So far, so good.
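To make the asymmetry concrete, here’s a tiny sketch (purely illustrative, with the counterfactual facts simply stipulated) that checks Fodor’s two conditions for our snake-and-hose case:

```python
# A toy rendering of Fodor's asymmetric dependency, not a serious model.
# We stipulate one counterfactual fact per cause: would it still trigger
# the brain state if the other cause never did?
counterfactuals = {
    "snake": True,   # snakes would trigger the state even if hoses never did
    "hose": False,   # hoses trigger it only because snakes do
}

def asymmetrically_depends_on(parasitic: str, primary: str) -> bool:
    """True if `parasitic` causes the state only because `primary` does."""
    return counterfactuals[primary] and not counterfactuals[parasitic]

if asymmetrically_depends_on("hose", "snake"):
    print("The state is about snakes; hose-triggered activity counts as an error.")
```

Notice, though, that the two counterfactual facts are written in by hand. That stipulation is doing all the work, which previews the worry below.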
But there’s a catch.
First, this asymmetry sneaks in an assumption that the brain activity is supposed to be active for snakes, not hoses. We’ve quietly assumed what counts as successful brain activity. This is called the normative-assumption problem.
Second, it assumes we already know what the brain state is about. It fails our circularity test.8
So our search continues.
Q2: What Is the Leading Proposal?
Nowadays, one of the most influential proposals for naturalising the aboutness of representations is called teleosemantics. It’s not a single theory, but a family of closely related views built around one core idea:9
The idea is that our brains represent things not because the brain activity resembles the things it represents, or is directly caused by them, but because responding to those things is the function that was passed on.10
Take that garden hose in your backyard. You see it, and for a split second, you think snake. Something goes on in your brain. So we might say this brain activity functions as a detector for snakes — it’s an evolved response that reliably picks out snake-like features.11
But why does that activity go on in your brain?
There are two ways to interpret that why question.12
One says when we ask why does that activity go on in your brain? what we are really asking is, what is the purpose or goal of this activity? This interprets the why question as a what for question. It’s a question about ends.
The other interpretation is that the question why does that activity go on in your brain? is asking, which historical process shaped this activity? This interprets the why question as a how come question. It’s a question about means, not ends.
The first — the what for question — risks circularity. If we say the brain activity is for representing snakes, we’ve already assumed what we were trying to explain: that the brain activity functions as a snake-detector. It’s end-point thinking — explaining the activity by its end goal or purpose, rather than by the process that produced it.
That’s the circularity problem again.
The second — the how come question — might give us a way out.
The philosopher Fred Dretske came to think that a physical theory of meaning needs two ingredients: information and function.
Information helps explain how the brain’s activity might correlate with something in the world. But correlation alone doesn’t explain why that activity is about the thing it correlates with — or why we can say it was wrong when the correlation fails.
We need function — and not just any function, but one grounded in history.
This is where the how come question becomes crucial.
According to teleosemantics, a representation’s function (what for) is fixed by its selected effect (how come). That is:
Its function is grounded in its evolutionary history.
So why does a certain brain activity represent snakes?
Because, over evolutionary time, animals with brains that responded in certain ways in the presence of snakes were more likely to survive. Their brains developed a detection system that was active when snakes were around — and that detection system got passed on.
It seems like a clever move. It replaces purpose-based explanations (what for) — which often sneak in circularity — with a naturalistic account of how representational functions could arise (how come). And, some argue, it does this without smuggling in the very meaning it’s supposed to explain.
Let’s bring back our golf-ball-eating snake.
Why say the snake’s brain represented the golf balls as eggs?
Because, according to teleosemantics, snakes with brains that responded to pale, egg-shaped objects were more likely to find food. Over time, because having a system like that had advantages, it was passed on, and we can now say its function is to detect eggs.
So when the brain responds in a similar way for a golf ball, it’s performing the function that was passed on — just in the wrong context. The system misrepresents, just like when we mistake a hose for a snake.
That’s the appeal of teleosemantics. It seems to explain misrepresentations.
Even when the brain misrepresents, the function stays the same. The brain response was passed on because it played an evolutionary function. And that evolutionary function determines what it’s about.
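If it helps to see the how come story in miniature, here’s a deliberately crude toy simulation (my own illustration, not any published model). Detectors that respond to pale, rounded objects leave more copies of themselves, and the detector that survives this history then fires on a golf ball:

```python
import random

random.seed(1)

# Toy world: an object is a bundle of cues plus whether it's actually edible.
EGG = {"pale": 1, "rounded": 1, "edible": True}
LEAF_LITTER = {"pale": 0, "rounded": 0, "edible": False}
GOLF_BALL = {"pale": 1, "rounded": 1, "edible": False}   # looks like an egg

def fires(detector, obj):
    # A detector is just two weights on the visible cues.
    return detector["pale"] * obj["pale"] + detector["rounded"] * obj["rounded"] > 0.5

def fitness(detector, trials=100):
    # Fitness rewards striking at edible things and ignoring inedible ones.
    score = 0
    for _ in range(trials):
        obj = random.choice([EGG, LEAF_LITTER])   # golf balls never appear here
        if fires(detector, obj) == obj["edible"]:
            score += 1
    return score

# Selection: keep the fittest detectors, copy them with small mutations.
population = [{"pale": random.random(), "rounded": random.random()} for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [
        {cue: min(1.0, max(0.0, w + random.gauss(0, 0.1))) for cue, w in parent.items()}
        for parent in parents for _ in range(3)
    ]

best = max(population, key=fitness)
print("Selected detector:", {cue: round(w, 2) for cue, w in best.items()})
print("Fires on a golf ball?", fires(best, GOLF_BALL))   # almost certainly True
```

The golf ball never appears during selection; it only shows up afterwards. That’s the teleosemantic move in a nutshell: the detector’s function was fixed by the history that shaped it, so when it fires on a golf ball we can call that a mistake rather than just a different kind of success.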
So how does teleosemantics stack up against our criteria?
At first glance, it looks promising:
It’s naturalistic — meaning comes from evolutionary history.
It preserves a mind-independent reality — our representations succeed because they track real features of the world.
It accounts for misrepresentation — mistakes are part of how biological systems work.
And it avoids circularity — functions are defined by history, not assumed meaning.
At least, that’s how it seems. But not everyone is convinced.
Q3: What Do the Critics Say?
Now that we’ve seen the appeal of teleosemantics, let’s walk through the criteria again — but this time, with a skeptical lens.
One thing to keep in mind: teleosemantics isn’t a single unified theory. It’s a family of views. So not every criticism applies equally to all versions.
Does teleosemantics fit with a mind-independent world?
At first glance, yes. Teleosemantics grounds meaning in real-world history — in the evolutionary success of tracking features out there in the environment.
But some critics argue it leans too heavily on the world. If meaning depends on past success tracking real things, how do we explain thoughts about things that don’t exist?
Take novel concepts.13 Imagine you’re Freeman Dyson, the first to conceive of a shell around a star to harness its energy — a Dyson Sphere. That idea had no precedent. So how could your brain activity for Dyson Sphere have been selected for in evolutionary history?
Some versions of teleosemantics expand the idea of selection to include learning through operant conditioning, neural plasticity, and culture. Others suggest novel thoughts piggyback on older systems — for instance, the Dyson Sphere might reuse neural activity that was selected for spatial reasoning or tool use.14 Plus, Dyson-Sphere thoughts will most likely be a recombination across many circuits, not a single new state.15
But what about purely imaginary things?
You can picture unicorns, Atlantis, or an “octophant” — half octopus, half elephant. If they don’t exist, then how could any system have evolved to track them?16
Some defenders say these thoughts are compositional — they borrow from real-world concepts (like “octopus” and “elephant”) and combine them. But that raises a question: how far can this piggybacking go? At what point does recombination lose contact with biological function?
Does teleosemantics explain misrepresentations?
This is one of teleosemantics’ big selling points. If a system evolved to detect eggs, and it fires for a golf ball, that’s a misrepresentation — not because of a broken rule, but because it failed at its selected function.
But critics aren’t convinced it’s that simple.
This is the normativity or functional-indeterminacy problem.17 Our brains don’t really have distinct patterns for specific targets. What looks like one brain response often serves many roles — what we might call attention, movement, memory, threat detection. So if meaning depends on function, and function is ambiguous, how do we know when the system misrepresented?
Take the garden hose mistaken for a snake. Was that a misrepresentation of a snake? Or was it a successful case of erring on the side of caution?
Supporters of teleosemantics have suggested solutions — like looking at statistical patterns over time, or assigning the function to whichever effect was most evolutionarily valuable.18 But can this give us a clear, determinate content — the kind that tells us what brain activity is about, and when it’s misrepresenting? Or is it a convenient after-the-fact story that doesn’t actually avoid circularity?
Does teleosemantics avoid circularity?
Even if we view representations through a functional lens, to say that some brain activity was selected for detecting snakes, we already have to know what counts as a snake encounter. But that assumes the very content — snake — we were trying to explain.19
Likewise, calling the garden hose mistake a misrepresentation only makes sense if we’ve already decided that the brain activity was supposed to represent a snake, not a hose. But where does that judgment come from?
We’re back at the circularity problem — again.
Arguably, this is the problem that many of the other critiques collapse into.
Some defenders respond that we’re asking the wrong kind of question. Rather than fixating on narrow content like snake or hose, they suggest we take a broader view: what matters is the role the representation plays in survival.20
From this perspective, the brain activity wasn’t selected to represent snakes per se, but to help the organism avoid danger. So when the brain responds to a hose as if it were a snake, it’s still performing its general function.
But critics push back: if we blur the line between what some brain activity is about and what it merely does, we risk losing what makes representation distinct from simple reactivity. It might explain why the activity exists — but not what it is about.21
Some philosophers go further. They argue that aboutness is an explanatory dead-weight — a relic from an outdated way of thinking.22 On this view, the question of how representations come to be about things is simply the wrong question.
Some thinkers suggest we abandon the idea of aboutness altogether — and with it, the entire teleosemantic project.
Before we wrap up, there are two more critiques that come up often enough that it would feel like a glaring omission not to mention them.
Swampman
One of the most famous challenges to teleosemantics comes from Donald Davidson’s Swampman thought experiment.23
It goes like this: a lightning bolt strikes Davidson near a swamp, vaporising him — and, by sheer coincidence, assembling an atom-for-atom duplicate from the surrounding matter. This perfect copy, Swampman, gets up and walks away, indistinguishable from the original.
Since Swampman is physically identical to Davidson, it seems reasonable to think he has the same representations — that he thinks, perceives, and remembers just like Davidson did.
But here’s the problem: Swampman has no history. He wasn’t shaped by evolution or learning. So if teleosemantics is right — and aboutness comes from history — then Swampman shouldn’t have any aboutness to his representations at all.
Some defenders bite the bullet and say, if Swampman has no history, he must have no aboutness.24
But others argue the whole thought experiment should be thrown out like so many other bad thought experiments.
Why? Well, there are a number of reasons.
But one is that this thought experiment, like many others, makes claims about our world. And if a thought experiment wants to make claims about our world, it needs to pass two tests.
It needs to be:
Logically possible: the scenario must not contain outright logical contradictions.
Physically possible in our world: it must be able to occur without violating the physical laws that govern our universe.
Swampman might pass the first test. But, the argument goes, he flunks the second.
In our world, we can’t assemble anything without paying a thermodynamic price. Which means we can’t assemble a perfect atom-for-atom copy of Davidson without an enormous amount of time, energy, and causal interaction. And that — right there — is a history.
The moment we account for how Swampman could come to exist, given our physical laws, we realise that his existence is subject to the same evolutionary or developmental processes that teleosemantics depends on.
What About Artificial Intelligence?
Modern AI systems — especially large language models — also raise questions for teleosemantics.25
These models seem to have representations that are about things. But unlike us, they weren’t shaped by biological evolution. So where does their aboutness come from?
Some theorists think the idea of teleosemantics can be extended to machines. They argue that training plays a role similar to biological evolution. A neural network trained to recognise cats might develop representations of cats because those patterns helped reduce error — a kind of computational “survival.”
The idea is that the training objective — like next-token prediction or image classification — functions like an artificial selective pressure. And if that objective was chosen by designers for a specific purpose, perhaps it can ground representational function in a way that parallels biological evolution.
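As a rough illustration of what that analogy amounts to (a minimal sketch with made-up data, not how any real language model is built), here’s a tiny classifier whose weights persist only insofar as they reduce the training objective’s error:

```python
import math
import random

random.seed(0)

# Made-up training data: (has_whiskers, has_fur, barks) -> is it a cat?
data = [((1, 1, 0), 1), ((1, 1, 1), 0), ((0, 1, 1), 0), ((1, 0, 0), 1)] * 25

weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0
learning_rate = 0.1

def predict(features):
    s = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-s))             # squash to a probability

# The training objective plays the role of a selective pressure: at each step,
# whatever weight setting reduces the prediction error is the one that persists.
for epoch in range(200):
    for features, label in data:
        error = predict(features) - label      # error signal from the objective
        for i, f in enumerate(features):
            weights[i] -= learning_rate * error * f
        bias -= learning_rate * error

print("learned weights:", [round(w, 2) for w in weights])
# The weights now reliably track cat-like inputs. The open question is whether
# this error-driven history is enough to make them *about* cats in the
# teleosemantic sense.
```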
But not everyone agrees.
Teleosemantic theories are based on the idea that a representation gets its aboutness from the function it was shaped to perform — a function it has because it helped organisms survive. So even if a representation gets reused in different ways, its core function stays the same.
Critics ask us to compare that to large language models. These systems aren’t shaped by evolution, but by training on massive amounts of (mostly) text, using a broad and open-ended goal: predict the next word. That might sound like a function, but, the argument goes, it’s not tied to any specific biological need.
For critics, that makes it difficult to say what, if anything, the model’s representations are about.
Next Week…
Some philosophers respond to these problems by saying that no theory of representation will ever fully succeed. They argue we need to get rid of the whole idea of representations. This is the anti-representational view. That’s the direction we’ll head next week.
Carpet pythons have been documented swallowing golf balls from chicken coops, where the balls are usually placed to encourage chickens to lay eggs. Snakes hunt mostly by scent and heat, not by sharp vision, so a golf ball on its own wouldn’t usually fool them. But after days beneath broody hens, the balls might pick up enough egg and chicken odour to trigger the snake’s feeding response.
That usually means trying to explain how representations get their meaning in physical terms, rather than assuming it. But another approach, made famous by Daniel Dennett, takes a different route: it treats systems as if they have beliefs and desires if that stance helps us predict their behaviour. That’s the intentional stance — it sidesteps the metaphysics of meaning. This essay focuses on theories that try to give a literal account of representational content — especially teleosemantics.
When thinking about brain activity as vehicles, we need to be careful not to treat vehicles as a single pattern. In this essay, I use shorthand phrases like “brain activity”. Just remember that real brain activity is encoded in population codes spread across circuits. This will be an important point to keep in mind when we get to talking about causes in the brain.
Here I’m talking loosely about ‘a snake-representation,’ but in real nervous systems vision, smell, and audition are handled by partly separate circuits — and each has its own error profile. So the teleosemantic story is probably best told at those modality-specific levels, rather than at some abstract, one-size-fits-all representation level. More on that another time.
This is called the disjunction problem.
Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. MIT Press. [Open Access]
Bielecka, K. (2025). Externalist Conceptions of Representation. In: What is Misrepresentation?. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 71. Springer, Cham. https://doi.org/10.1007/978-3-031-84375-4_5
The family counts Ruth Millikan, David Papineau, Fred Dretske (later work), Karen Neander, Nick Shea, and others among its major architects, and it remains the dominant naturalistic program in the literature.
Common terms used are etiological or selected-effect function. Larry Wright (1973) introduced the term and Millikan generalised it to semantics.
I’m not talking here about a full-blown symbolic concept like the word “snake” — with learned meaning and shared conventions. The brain’s activity in this example is more like an index: a low-level activation when something has the right shape, motion, or sound signature — not a concept in the complex human sense. Teleosemantics offers a way to say what that detector is about, by linking its function to evolutionary history.
That distinction between what for and how come questions can be traced to evolutionary biologist Ernst Mayr’s proximate (mechanism) vs. ultimate (function/history) causes, which the following paper does a great job of explaining:
Haig, D. Proximate and ultimate causes: how come? and what for?. Biol Philos 28, 781–786 (2013). https://doi.org/10.1007/s10539-013-9369-z [PDF]
Garson, J., Papineau, D. Teleosemantics, selection and novel contents. Biol Philos 34, 36 (2019). https://doi.org/10.1007/s10539-019-9689-8 [PDF]
Artiga, M. Teleosemantic modeling of cognitive representations. Biol Philos 31, 483–505 (2016). https://doi.org/10.1007/s10539-016-9525-3 [PDF]
I discuss the question of where new ideas come from in the essay Can AI Generate New Ideas?
Mendelovici, A. (2016). Why Tracking Theories Should Allow for Clean Cases of Reliable Misrepresentation. Disputatio, 8(42), 57–92. https://doi.org/10.2478/disp-2016-0003
Bergman, K. (2024). Living with semantic indeterminacy: The teleosemanticist’s guide. Mind & Language, 40(1), 53–73. https://doi.org/10.1111/mila.12514
Millikan, R. G. (1989). In Defense of Proper Functions. Philosophy of Science, 56(2), 288–302. https://doi.org/10.1086/289488 [PDF]
There are two kinds of circularity to watch for here. One is about evolution: can we say a brain state was selected for tracking snakes without already knowing it’s about snakes? The other is about us: do we, the theorists, assign content in a way that simply assumes the answer? Any good theory needs to avoid both.
e.g., Shea, N. (2018). Representation in Cognitive Science. https://philarchive.org/rec/SHERIC
e.g., Mendelovici, A. (2016). Why Tracking Theories Should Allow for Clean Cases of Reliable Misrepresentation. Disputatio, 8(42), 57–92. https://doi.org/10.2478/disp-2016-0003
e.g., Ramsey, W. M. (2007). Representation Reconsidered. Cambridge University Press.
Davidson, D. (1987). Knowing One’s Own Mind. Proceedings and Addresses of the American Philosophical Association, 60(3), 441–458. https://doi.org/10.2307/3131782
Millikan (1989) and Papineau (2001) explicitly accept that verdict.
Grindrod, J. Large language models and linguistic intentionality. Synthese 204, 71 (2024). https://doi.org/10.1007/s11229-024-04723-8
Mann, S. F., & Pain, R. (2022). Teleosemantics and the hard problem of content. Philosophical Psychology, 35(1), 22–46. https://doi.org/10.1080/09515089.2021.1942814