Do you remember learning to ride a bike?
Chances are, no one gave you a manual. More likely, someone just plopped you on a bike and gave you a push. You wobbled. You overcorrected. And maybe you hit the front brakes a little too hard and flew over the handlebars.
But pretty soon, your body figured it out. You found your balance. You synced up with the bike. And now — if you’ve done it enough — you don’t have to think about how to ride a bike. You just ride it.
It’s tempting to think some things we do — like riding a bike — require having a body. While other things — like doing math in your head — don’t. Maybe those could happen even if you were just a brain in a vat.
But some philosophers and scientists say — hold on. Not so fast.
They argue that every mental act, from balancing on a bike to solving a math problem, happens in a loop: brain, body, and world all working together. This view is called enactivism. It doesn’t claim the mind lives entirely “out there” in the world — or entirely inside your skull. It says: the brain doesn’t do it alone.1
That might sound strange. It’s a shift from the more familiar view of the brain as a kind of information processor — like the brain-is-a-computer metaphor we are all too used to hearing.
So: is this a genuinely new way to understand the mind? Or is it just reminding us of something we already knew — that the brain’s activity is always tangled up with the body, and the world around it?
To find out, let’s ask four questions:
What is enactivism?
Why might someone agree with it?
What do the critics say?
Does artificial intelligence need a body?
Where are we in the series…
This is Essay 6 in The Trouble with Meaning series. You don’t need to read the earlier essays to follow this one, but here’s what we have covered so far:
Essay 1: Searle’s Chinese Room — can a system understand meaning just by following rules?
Essay 2: Symbolic Communication — how do symbols work, and what would it take to share them with aliens?
Essay 3: The Grounding Problem — how do words (or thoughts) get their meaning in the first place?
Essay 4: The Rise (and Trouble) of Representationalism — why many scientists think the brain represents the world, and why that view runs into trouble.
Essay 5: Teleosemantics — one of the major theories that tries to explain where meaning comes from.
This week, it’s enactivism — one of the other major theories that tries to explain where meaning comes from.
A quick note on words
Talk long enough about brains and sooner or later someone will drop mind, cognition, or consciousness — often as if they were perfect synonyms.
Philosophers, meanwhile, build entire theories on the hair-splitting differences between these words.
So, these words come with philosophical baggage.
In this essay, in an attempt to avoid some of this baggage, I’ll use mental activity as a broad label for things like perceiving, remembering, imagining, or planning.
No term is baggage-free, of course. But by flagging this early I hope it might keep us from unpacking dictionaries in every paragraph.
I’ll revisit whether this label really earns its keep at the end of the essay.
Q1. What is Enactivism?
Enactivism is a relatively new word in the philosophy of mind. It started to gain traction in the early 1990s, with the publication of a book called The Embodied Mind.2
In that book, the authors made a bold claim: minds aren’t things we have — they’re things we do. To have a mind, they said, is to engage in sense-making. Not by sitting back and receiving information, but by moving through the world — by acting, feeling, and responding.
Since then, the idea has caught on. And while different versions of enactivism have taken shape over the years, most of them come back to four central claims.3
1. The Mind is Embodied
For enactivism, the body isn’t just along for the ride. It matters — deeply.
Mental activity — whether we’re remembering something, imagining a scene, or thinking through a problem — isn’t just happening in the brain. It’s shaped by our body’s movements, its sensations, and its rhythms.
Our posture, our heart rate, the tension in our muscles, even signals from our gut — all of it can influence how we think.4
The body, in other words, isn’t just a delivery system for the brain. It’s part of the process.
2. The Mind is Embedded
We don’t think in a vacuum. We think in kitchens and parks and waiting rooms.
Our thoughts are shaped by what’s around us — the objects we use, the routines we follow, the people we talk to. Even the feel of a place can change the way we think.5
You’ve probably noticed this. Some tasks just feel easier in certain settings. If you’re trying to solve a problem, you might go for a walk, take a drive, or hop in the shower — because you know that’s where your best ideas tend to show up.
3. Perception is Active
If you’ve ever seen a textbook diagram of how the mind works, it probably looked like this: input goes in through the senses, gets processed in the brain, and then an output comes out — like pressing buttons on a machine.
But enactivism rejects that picture.
For enactivists, there is no middle stage. Perception isn’t something that happens after input and before action. It is action. It’s skilled, moment-to-moment engagement with the world.6
4. Enactivism is Skeptical of Mental Representations
Enactivists don’t deny that the brain does complex things. But they do question whether those things are best described as internal representations — the kind you find in the brain-as-computer versions of cognitive science.
Radical enactivists argue that we don’t need representations at all.
Others take a softer view. They’re open to the idea that some representations might exist. Perhaps lean, action-oriented ones.7
Most camps agree on this point: if representations are part of the story, they show up later in both evolution and development. They emerge through culture, through language, and through shared practices. And they’re built on top of something more fundamental — our embodied, active engagement with the world.8
Q2. Why Might Someone Agree with Enactivism?
Let’s look at four reasons why someone might find enactivism appealing.
1. The Grounding Problem
We talked about this back in Essay 3. The grounding problem asks the question: how does mental activity get its meaning?
Radical enactivists respond by rejecting the idea that meaning has to be explained in terms of internal representations in the first place. For them, meaning isn’t something added on top of experience — it’s already there, in our ongoing engagement with the world.
Moderate enactivists take a similar line, but leave some room for representations. They argue that if representations show up, they only make sense because they’re grounded in embodied action. Our actions are already meaningfully tied to the world, they say — so any representation that develops out of those actions inherits that connection.9
2. The Behavioural Evidence
There’s a well-known experiment you may have seen. A group of people are passing a basketball. You’re asked to count the number of passes between players wearing white t-shirts.
Halfway through the clip, a person in a gorilla suit walks right into the scene — beats their chest — and strolls off.
And yet… most people don’t notice the gorilla at all.
It’s one of the most famous examples of inattentional blindness — the idea that we can miss something obvious, simply because we weren’t looking for it.
Change blindness tells a similar story. In one ad by Skoda, tiny details in a scene are constantly changing. A wall swaps colour. A teapot appears. And most viewers don’t catch a thing.
Experiments like these suggest that perception isn’t a passive recording of the world. You don’t take in everything and then process it. You won’t notice the gorilla unless you’re actively looking for it. And in change blindness you won’t see a change unless you are actively looking at the right spot.
One of the most striking demonstrations of action’s role in perception comes from a now-famous experiment with kittens.10
It was the 1960s. Psychologists set up a kind of carousel for cats. Each pair of kittens was harnessed into the same rotating rig — one kitten as the walker, the other as the passenger.
The walker kitten could walk with the carousel. The passenger kitten, though, just went along for the ride — carried in a little basket, seeing everything but not moving its legs.
Importantly, both kittens received identical visual input. But only one could move.
So what happened to the kittens?
The walker kittens developed normal vision. But the passenger kittens — the ones who only watched — didn’t. They walked straight toward what looked like a cliff, showing no hesitation. They had failed to develop depth perception.
Enactivists take this as powerful evidence that perception requires active interaction with the world.11
3. The Neuroscience Evidence
The brain doesn’t come with fully wired circuits. In infancy, those circuits have to be built. And, it turns out, movement plays a key role in how that happens.
In newborn rats — and in premature human babies — early brain development involves something called spindle bursts. These are brief bursts of brain activity that show up right around the time of spontaneous body twitches.12
If you’ve ever felt your body twitch just as you were falling asleep, you’ve felt a version of this. In adults, spindle bursts mostly show up during non-REM sleep. But in babies, they happen all the time.
The baby kicks. Or flinches. And at the same time, or just before, a spindle burst is seen in the brain. These little jolts help wire up circuits, linking neurons that fire together.
Some parts of the body work in opposition — like biceps and triceps, where one muscle relaxes when the other contracts. So the brain learns not just what fires together, but what doesn’t: inhibitory patterns form too, which helps shape coordination.
We see spindle bursts in the visual system, too. Babies’ eyes twitch, even before they can focus. And those twitches help organise the visual cortex.
In other words, the brain doesn’t just wait for the world to feed it information. From the beginning, it’s building itself — through action.
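If it helps to see the wiring principle spelled out, here is a toy sketch in Python. It is not a model from the studies cited above; it is a hypothetical illustration of the covariance-style Hebbian idea, where correlated firing strengthens a connection and anti-correlated firing (like antagonist muscles) makes it inhibitory.

```python
import numpy as np

# Toy Hebbian wiring demo (illustrative only, not from the cited studies).
# Four "neurons": a biceps sensor, a triceps sensor, and two unrelated ones.
rng = np.random.default_rng(0)
W = np.zeros((4, 4))   # connection strengths between the four neurons
lr = 0.1               # learning rate

for _ in range(1000):
    twitch = rng.random() < 0.5                    # spontaneous twitch?
    biceps = 1.0 if twitch else 0.0
    triceps = 1.0 - biceps                         # antagonist of biceps
    noise = (rng.random(2) < 0.5).astype(float)    # unrelated activity
    x = np.array([biceps, triceps, noise[0], noise[1]])

    # Covariance-style Hebbian rule: activity above baseline that occurs
    # together pushes a weight positive; opposed activity pushes it negative.
    centred = x - 0.5
    W += lr * np.outer(centred, centred)
    np.fill_diagonal(W, 0.0)   # no self-connections

print(np.round(W, 1))
# Expect the biceps-triceps weight to be strongly negative (inhibitory),
# while weights involving the unrelated neurons stay near zero.
```

No teacher, no labels: the statistics of self-generated movement alone are enough to carve out both the excitatory and the inhibitory structure.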
4. Evidence From Robotics
Back in the 1990s, roboticist Rodney Brooks had an idea that went against the mainstream view at the time.13
Instead of programming his robots with detailed maps of their world — carefully constructed layouts and pre-defined plans — he gave them simple rules. Just enough to sense the world and react to it.
These robots didn’t have anything you’d normally call a representation. No internal model of their environment.
But they could navigate. They could avoid obstacles. They could complete tasks. And — surprisingly — they often outperformed the robots with maps.
Brooks called them robots “without representation.” He claimed intelligent behaviour doesn’t require internal representations at all. It can emerge from tight, real-time loops between sensing and acting.
Which is exactly what enactivists were saying about living systems.
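To make that concrete, here is a toy sketch of what a Brooks-style controller looks like in spirit. This is my own illustration, not Brooks’s actual subsumption code: a grid-world wanderer with two prioritized rules, no map, no plan, no internal model, just a sense-act loop re-run on every tick.

```python
import random

# A tiny grid world: "." is open floor, "#" is an obstacle.
GRID = [
    "..........",
    "..####....",
    "..........",
    "....##....",
    "..........",
]
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def blocked(r, c):
    """A crude bump sensor: is this cell off-grid or an obstacle?"""
    off_grid = not (0 <= r < len(GRID) and 0 <= c < len(GRID[0]))
    return off_grid or GRID[r][c] == "#"

def tick(r, c, heading):
    """One sense-act cycle, higher-priority rule first."""
    dr, dc = MOVES[heading]
    if blocked(r + dr, c + dc):                  # Rule 1: avoid obstacles
        heading = random.choice([h for h in MOVES if h != heading])
        return r, c, heading
    return r + dr, c + dc, heading               # Rule 2: wander forward

r, c, heading = 0, 0, "E"
for _ in range(40):
    r, c, heading = tick(r, c, heading)
print("ended at", (r, c))   # it navigated, yet no map was ever built
```

Whatever competence the robot shows lives in the loop between its rules and the world, not in any stored description of the world.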
Q3. What do the critics say?
Of course, not everyone is on board with enactivism. Let’s review three of the most common criticisms.
1. Enactivism is Vague About Representations
One of the most common critiques of enactivism is that it is fuzzy on where it stands with mental representations.
Even the more radical versions are hard to pin down.
Take the book Radicalizing Enactivism.14 Early on, the authors come out swinging — they argue that mental activity doesn’t involve internal contentful representations at all. No representations, full stop. But just a few pages later, they write that even radical enactivists “need not, and should not, deny the existence and importance of contentful and representationally based modes of thinking.”
Wait — what?
To many readers, that sounds contradictory.
The authors try to clear it up by drawing a distinction. They say there are two kinds of mental activity. Basic cognition — like perception, attention, and motor control — doesn’t require representations; it’s grounded in our ongoing bodily engagement with the world. Content-involving cognition — like imagining a future vacation or solving a math problem — does involve representations. The important point, they say, is that content-involving representations are only possible because they’re built on a foundation of non-representational activity — capacities built through brain-body-world interaction.
It might sound like a clever move. But not everyone’s convinced.15
Critics point out that this hybrid model may inherit the worst of both worlds. It doesn’t fully reject representations, so it doesn’t escape the problems enactivism set out to solve. But it also adds a new problem: how does the system bootstrap itself from non-representational activity to content-involving representations? And right now, the explanations on offer sound a little hand-wavy. Like saying “...and then a miracle occurs.”
In trying to keep a foot in both camps, the hybrid view risks pleasing no one: too representational for radical enactivists, and too vague for representationalists.
2. Representationalism Works in Neuroscience
Some argue — why do we need a different theory? Representation works great in neuroscience. It explains the data, makes predictions, and helps us build models of how the brain works. Isn’t that what a good scientific theory is supposed to do?
Modern neuroscience often describes the brain as an information processor. Neurons are said to encode things — like edges, faces, or places. Brain regions get mapped according to what they represent. These ideas are grounded in decades of experimental work.
So when radical enactivists say we should throw out the idea of mental representations, many neuroscientists raise an eyebrow. Are we really supposed to ignore all that data?
Enactivists say: not exactly. They argue they’re not denying that brain activity correlates with experience. What they’re questioning is how we interpret those correlations. Instead of treating brain activity as an internal model — a map stored in the head — they see it as part of a living, dynamic system. A system that only makes sense when you look at the whole loop: brain, body, and world, all acting together.
To some, that can sound a little vague — like replacing one metaphor with another.
But there is a growing pile of papers trying to make enactivism mathematically precise.16 The worry is: the more detailed these models become, the more they start to sound like the very representational theories they claim to oppose — just with new jargon.17
3. Enactivism is Just Behaviourism
Enactivism wants to downplay internal representations. It says mental activity is rooted in action — in how organisms engage with the world.
That focus on action sounds familiar.
In fact, it sounds a lot like behaviourism, the dominant view of the mid-20th century. And that raises a question: if we strip mental activity down to nothing but action, aren’t we just back where we started?
Enactivists say no. They’ve tried hard to draw a clear line between their view and behaviourism. Behaviourism was about inputs and outputs — what goes in and what comes out. It didn’t care what was happening in between. Enactivism, by contrast, is all about the ongoing relationship — the loop — between an organism and its environment.
For sense-making, they claim, a system has to be autonomous, it has to regulate and maintain itself, and it has to be adaptive — not just reacting, but adjusting its responses in ways that serve its continued existence. These features, they argue, are absent in purely reactive systems like thermostats or bacteria that drift toward sugar.
Q4. What About Artificial Intelligence?
If a large language model can answer your questions, write a poem, and debug your code — isn’t that mental activity?
And yet, LLMs don’t move. They don’t have bodies.
Doesn’t that suggest it’s internal representations — not embodiment — doing the real work?
Some, like neuroscientist Antonio Damasio, argue there’s a crucial difference we shouldn’t overlook.18
Biological creatures have to meet their own thermodynamic requirements. To survive, they don’t just have to perform useful behaviours — they also have to keep themselves alive. That means finding energy, repairing damage, maintaining internal order. If they fail at either — acting adaptively or maintaining themselves — they cease to exist.
But LLMs are different. An LLM does not have to secure its own energy, repair itself, or maintain its own internal order.19 We do those things for it. The two roles that are together in a biological creature — performing useful behaviours and keeping itself alive — are separated in an LLM. And, some think, that separation might not be a trivial detail. It could turn out to be a very important difference.
So, what if we gave an LLM a body?20 One that could interact with the world. One that could monitor itself, regulate its internal state, manage its own energy use, and adjust its behaviour accordingly. Would that kind of interacting, self-maintaining, body-in-the-world system make a difference to the LLM? Would we expect to see something very different from what we typically see from an LLM today?21
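One way to picture the difference is in code. The sketch below is entirely hypothetical: every name and threshold is made up, and nothing calls a real LLM or robotics API. It just shows the shape of an agent whose useful behaviour and self-maintenance are coupled in a single loop, rather than handled by someone else.

```python
# Hypothetical sketch: task behaviour constrained by the agent's own
# viability. All names and numbers here are invented for illustration.

def run_agent(steps: int = 100) -> None:
    energy = 1.0                                 # internal state to protect
    for t in range(steps):
        # Self-monitoring comes first: viability constrains behaviour.
        if energy < 0.3:
            energy = min(1.0, energy + 0.5)      # "recharge" behaviour
            continue
        _answer = f"response to query {t}"       # stand-in for useful work
        energy -= 0.1                            # work has a metabolic cost
        if energy <= 0.0:
            raise RuntimeError("agent failed to self-maintain")

run_agent()
```

In a biological creature, those two concerns (doing the task and staying alive) can’t be pulled apart. In today’s LLMs, the second loop simply isn’t there.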
The trouble with questions like these — about whether AI needs a body to have mental activity — is that the answer might depend on what we mean by mental activity. Embodiment might really matter for our kind of mental activity. But is it a universal ingredient required for any system to have mental activity?
LLMs show that representation-heavy algorithms can, at least, mimic some high-level behaviours — but enactivists reply that until a system regulates its own survival in a messy world, calling it a full-fledged mind might be premature.
The Sum Up & Next Week…
Last essay we covered teleosemantics. Both teleosemantics and enactivism want to avoid magical or mysterious accounts of meaning. But they do it differently.
Enactivism says: Meaning doesn’t come from internal representations in the brain — it comes from active, embodied engagement with the world.
Teleosemantics says: Mental representations do exist — and they get their meaning from evolutionary history.
In some corners of philosophy and cognitive science, this difference gets framed as a showdown: representations versus enactivism. Pick a side.
But many researchers think that’s the wrong way to see it. The split isn’t as clean — or as necessary — as it’s sometimes made out to be.
Next week, I’ll wrap up this series on The Trouble with Meaning by tying up a few loose ends — including a look at where some thinkers believe the real trouble with meaning might lie.
This one is a good starting point for those interested in reading more: Gallagher S. Embodied and Enactive Approaches to Cognition. Cambridge University Press; 2023. [Open Access]
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. The MIT Press.
The term enaction arrived in 1991, but kindred ideas trace to Merleau-Ponty (1945) Phenomenology of Perception, Gibson (1979), and Maturana & Varela (1972/80) Autopoiesis and Cognition.
Three of these map onto the common “4E” family (Embodied, Embedded, Enactive, Extended). Note, I have not included “Extended” (Clark & Chalmers) because it isn’t the same as enactivism — even though it is often allied with it.
See e.g., Critchley et al. (2004) on cardiac interoception; Damasio’s somatic-marker hypothesis; Barsalou (2010) on grounded cognition.
Critchley et al. (2004) Nat. Neurosci.; Barsalou (2010) Annu. Rev. Psychol. [ResearchGate]
In cognitive science this claim is largely uncontroversial; the debate is over how much work the environment does.
This is the sensorimotor-contingency view. See:
O’Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939–973. doi:10.1017/S0140525X01000115 [PDF]
This view is not restricted to enactivism; modern predictive-processing views also treat perception as a perception–action loop.
Radicalizing enactivism: Hutto, D. D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without content. MIT Press. [Link]
Sensorimotor Enactivism: O’Regan & Noë, “A sensorimotor account of vision and visual consciousness” (BBS 2001); see synthesis in Degenaar & O’Regan 2017
Autopoietic / Adaptive: Di Paolo, “Autopoiesis, adaptivity, and enactivism” (2019) plus historical tie-in to Maturana & Varela’s Autopoiesis and Cognition (1980)
Critics would reply that if a pattern in the head tracks distal facts and supports successful prediction, we have all the grounds we need to call it a representation — embodied or not.
But critics push back. They argue that just acting in the world doesn’t explain why this pattern of movement or neural activity means that thing. Action alone, they say, doesn’t magically give rise to meaning.
For example:
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3), 335–346. [PDF]
Rupert, R. D. (2020). Embodiment, consciousness, and the massively representational mind. Philosophical Topics, 48(2), 99–120. [PDF]
Held, R., & Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56 (5), 872. [PDF]
But critics aren’t entirely convinced by this interpretation. Yes, they say, these studies show that perception isn’t passive. But that doesn’t necessarily mean the brain isn’t using internal representations. It might just mean the representations are built through movement.
An, S., Kilb, W., & Luhmann, H. J. (2014). Sensory-evoked and spontaneous gamma and spindle bursts in neonatal rat motor cortex. Journal of Neuroscience, 34(33), 10870–10883. https://doi.org/10.1523/JNEUROSCI.4539-13.2014
Blumberg, M. S., Coleman, C. M., Sokoloff, G., Weiner, J. A., Fritzsch, B., & McMurray, B. (2015). Development of twitching in sleeping infant mice depends on sensory experience. Current Biology, 25(5), 656–662. https://doi.org/10.1016/j.cub.2015.01.022
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47(1–3), 139–159. https://doi.org/10.1016/0004-3702(91)90053-M
Later critics showed the subsumption layers still encode implicit representations; Brooks’s point was that explicit symbolic models weren’t needed.
Hutto, D. D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without content. MIT Press. [Link]
Critics include Adams & Aizawa (2010) and Rupert (2020), who argue this view can’t bridge the gap from basic cognition to content without smuggling in representations.
See the dynamical-systems literature (Kelso 1995; Beer 2000) and mathematically precise, information-theoretic formulations of enactivism (Baltieri & Buckley 2019).
Some neuroscientists (e.g., Friston, Clark) now blend embodiment with representational generative models — so the debate is shifting.
Some robots already monitor battery and temperature, but this is still a far cry from biological homeostasis. See the recent Wired article: Google’s Gemini Robotics AI Model Reaches Into the Physical World.
Embodied-AI programmes (e.g., Google DeepMind’s Gemini Robotics-ER, Vision-Language-Action models) are actively testing this. E.g., https://arxiv.org/abs/2405.14093