Excellent as always Suzi!
I think the embodied cognition idea is right, but I don't think it challenges computationalism. It just gives us insights into what is being computed. Overall, whether a mind requires a body depends on how we choose to define "mind", but I think it's clear that our type of intelligence requires an environment. We can see the body as the most immediate aspects of that environment, and as the primal interface with the rest of the environment. But I find it telling that we can drive a car and come to sort of intuitively treat it as our body for the duration.
But if we're talking about intelligences more generally, I would think other types are possible. Actually, any of the automated systems we have now have some degree of intelligence, just not anything approaching human, or even animal intelligence yet. But I think they're already far enough to tell us other types of intelligences are possible. Whether we call them "minds" will be a philosophical issue.
Thanks for that Arriving and Exiting tip on afference and efference! Despite having read many books on neuroscience, that's one of the distinctions I always have to go lookup when I stumble across it in text.
Mike, I was surprised I didn’t see any commentary from you regarding the AI podcast Suzi generated by inputting last week’s article and commentary. These LLMs are getting so good that they even seem superhuman. I wonder if you missed it? To me this seems to be giving functional computationalists what they’ve been predicting for decades, and yet no one is claiming that these two disembodied AI podcasters might actually be conscious.
https://substack.com/@suzitravis/note/c-69546625
Thanks Eric. I had totally missed it. (I honestly haven't figured out Substack at all yet.) I'll check it out when I'm somewhere I can watch it.
I suspect the answer is going to be along the lines of what Suzi discusses in this article, that to give us an intuition of a conscious entity, it needs to be interacting with an environment through a body. Of course, you could say the data is its environment, but it still feels too one way.
In the end though, functionality is objective. Which combination of that functionality amounts to consciousness is in the eye of the beholder.
Ok, that's just creepy. It feels like one of those very bland over-professionalized podcasts or radio shows, where the presenters are super scrupulous about not revealing their own opinions. So if I didn't know it was AI generated, I'd think the podcasters had their presentation face tightly in place, but would likely think they were still real and conscious. Of course, after this, I'm going to be suspicious of anything where the presenters are that smooth and bland.
But to your question Eric, I think actual interaction, such as maybe taking questions in real time from viewers, particularly over a prolonged period, would flush out any intuitions of conscious entities. The question will be how to regard them when that ceases to be a reliable test.
I forgot to make a comment about representations. Part of the issue is the word: "re-presentation", which makes it sound like something presented to an inner observer, and leads to the homunculus problem you describe. A better way of describing a representation is that it's part of the mechanism of perception. So it's not for an inner observer, but something used by internal processes *of* the observer.
A better word for it might be "model", "schema", "prediction", "disposition circuits", or something else that we more expect to be used by sub-personal processes. But the word "representation" is so embedded in these discussions, we're probably stuck with it to some degree.
I see your point. I think we are on the same page here.
We have to be careful not to simply replace the homunculus with other words like 'model', 'schema', or 'prediction'. If we are talking about a 'model' in a similar way to the way engineers use 'model' then I think the word 'model' works. A 'model' is a component of the machine that responds to some input or patterns but not others. The problem, it seems, is when we use 'representation', or 'model', to mean the whole thing. This would align with Dennett's concerns. By explaining consciousness as many components working together rather than a single thing, he avoids the homunculus problem.
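Just to make the engineer's sense of 'model' concrete, here's a minimal sketch (in Python, and purely illustrative -- the detector and its threshold are made up): a component that responds to some input patterns and stays silent for others, with no inner observer anywhere in the loop.

```python
# A "model" in the engineering sense: a component that responds to some
# input patterns and not others. Nothing watches its output from the
# inside; it is just one mechanism among many.

class EdgeDetector:
    """Responds to a sharp rise in a 1-D signal, ignores everything else."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def responds_to(self, signal: list[float]) -> bool:
        # Fire only if some adjacent pair of samples jumps by more than the threshold.
        return any(b - a > self.threshold for a, b in zip(signal, signal[1:]))


detector = EdgeDetector()
print(detector.responds_to([0.0, 0.1, 0.9, 1.0]))  # True: contains a sharp rise
print(detector.responds_to([0.40, 0.45, 0.50]))    # False: no pattern it cares about
```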
I agree we have to be cautious with the representation concept. I've also seen the word "image" used for sensory representations. But there is a danger of too tightly thinking of these things as contiguous entities in the brain, when in reality they're likely more a complex, fuzzy, distributed coalition of neural circuits that evolve over time.
I had very briefly played with NotebookLM, and had not really explored the podcast feature. This is truly astonishing. Thanks for the link!
Thanks, but Eric Borg supplied the link. And of course Suzi put it together.
Interesting that you refer to automated systems as having intelligence - my software-developer husband always totally disagrees when that comes up somewhere!
I'm an old programmer myself. But I'm using "intelligence" in a continuous spectrum sense, not a discrete binary one. In that sense, the device you're using right now has some intelligence, but nothing like the intelligence of a mouse, much less a human being.
Interesting! I wonder whether the disagreement here is just one of definition. Historically, intelligence and consciousness were used interchangeably. But it seems like the definitions of these words have recently shifted. Nowadays, intelligence seems to mean the ability to learn and apply knowledge, while consciousness means something like subjective experience.
Hi Mike, great comment, as usual!
I agree! Embodied cognition doesn't refute functionalism (or even computationalism) entirely. But I do think that it might force some forms of functionalism to shift a little.
Functionalism is an interesting theory -- there are so many types. The different forms are typically the result of responses to earlier theories. It's a theory that shifts and morphs.
I've been sensing, in the cognitive science community, a move away from the computer metaphor towards an embodied cognition view. I sense this move has been pushed by researchers studying things like emotions and bodily experiences, because the computer metaphor leaves little room for things like that.
I guess how much embodied cognition is compatible with functionalism will depend on how strong or weak of a claim is being made. The strong embodied view might claim that we couldn't have the cognition, concepts and ideas we have without the bodies that we have. Others might take a view that could fit more neatly with some functionalist views.
I agree with your point about intelligences in general. None of this means that other intelligences (or cognition) couldn't be found (or aren't already found) in other non-meat mediums. The question, as you put it, is whether we decide to call such things 'minds'.
Thanks Suzi!
I think what unites the various functionalisms is the idea that mental states are about what they do, their causal roles. I'm always surprised how controversial that is. We have no problem with understanding the heart, muscles, liver, or more abstract processes like metabolism, in a functional manner. But when it gets to mental states, it's suddenly a radical prospect. In any case, I see functionalism as conceivably broader than computationalism, depending on how we define "computation".
When the 4Es (embodied, enactive, embedded, extended) are discussed as replacing computation, I always wonder how the word "computation" is being used. Certainly the brain is very different from a programmable digital computer, but it's not clear to me that even the classic computationalists thought that. The question I have is, what's the alternative account of what individual neurons are doing with their selective propagation of effects?
Some of the more absolutist factions in the embodied movement feel ideological. The 4Es have good insights, but we have to be careful not to throw the baby out with the bathwater. But as I've learned more, I've become a theoretical pluralist, accepting that several ways of looking at things can be productive, and that none are likely to be the one true answer.
Your point about theoretical pluralism is well taken.
Like most theories, the stronger versions of embodied cognition can indeed lead us to some pretty strange places. I wonder if we can think of embodied cognition not as a theory that is in complete opposition to functionalism (and other physicalist theories), but rather as a complementary approach that might help refine our ideas.
One thing that might have shaken me out of trying to find the one right theory was reading about molecular biology, which I won't claim to understand. But one thing stood out very clearly: there is no single thing that distinguishes the molecular chemistry of life from other organic chemistry. Instead we have a complex constellation of theories that solve various pieces of the puzzle.
To me, it feels like the mind will be similar, at least right now.
Lovely, Suzi. Thank you. I have a question about what minds do though.
If minds (and/or consciousness) are concerned with sensations and perceptions, which bit of the brain is concerned with memories, wishes, the square on the hypotenuse and emotions like loneliness?
Would the mind-people say there is a different bit of the brain that takes care of all these? Would the consciousness-people say that it is a different bit of the mind?
How is the answer affected by an embodied view of perception?
Wonderful comment!
I'm planning several upcoming articles that will try to answer these questions -- or at least explore where the latest research is pointing. So, for now, I'll save most of the detailed explanations for those future posts.
But here are a few thoughts...
One of the concerns about simplified drawings like the ones I put in the article is that they can make it seem like the brain is more segmented and simpler than it actually is. They might also give the impression that such an idea can explain how the brain becomes a thinking, feeling brain. I don't think it does. There's much more to the story.
Sensation and perception are closely tied to our immediate experiences of the world. Other 'mind' things like memories, wishes, and emotions (and even the square on the hypotenuse) don't necessarily involve these immediate experiences of the world. Some might like to say that we can imagine a system that only has sensation and perception, without all the other 'cognitive' things like language, long-term memories, abstract thinking, etc. This system (if possible) would be a fairly simple one.
To get other more 'cognitive' processes we need a more complex system. The question is what type of complexity is required?
Thanks, Suzi. I guess my question is a veiled criticism of the idea that consciousness is tied up with sensations and perceptions. Mine doesn’t seem to be.
I understand if we are just talking about brains (brains have different bits that do different things) but the “consciousness is special” people seem to want to tie consciousness to sensations. I wonder where *they* think the other stuff happens.
(I’m not suggesting that you believe that but maybe you have an insight into the people that do)
I look forward to your follow up. I love your posts!
“If an AI could perceive the world without a body, would this challenge the embodied view of perception, or would it suggest biological perception is not the only way to perceive?” - Perhaps its perception could be similar to a person reading a book. Not nothing, but all mind-generated.
I agree something like this seems possible, but then I wonder: would we call this 'mind-generated' thing perception, or is it something else?
Human perception is mostly prediction (a form of generation); AI perception would be the same. Embodiment is essential for people, not for AI. The AI mind can remain entirely virtual.
I agree that prediction plays a huge role in human perception, but I'm not sure that perception for a disembodied AI could be the same as perception for a human. We are constantly building our predictive models -- they are an active (almost) continuous feedback loop. AI seems to work in a very different way. The model is almost always pre-built. Recently, I've been wondering how much of a difference this difference makes.
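To make the contrast concrete, here's a rough sketch of the two styles (purely illustrative -- the numbers and the simple error-correction rule are stand-ins, not a claim about how brains or any particular AI actually work):

```python
# Two toy predictors of the same sensory stream.
# The "frozen" predictor ships with a pre-built estimate and never changes;
# the "feedback-loop" predictor keeps correcting itself from its own errors.

def frozen_predict(prebuilt_estimate, stream):
    for _observation in stream:
        yield prebuilt_estimate            # no loop: errors never reach the model

def online_predict(initial_guess, stream, learning_rate=0.3):
    estimate = initial_guess
    for observation in stream:
        yield estimate
        error = observation - estimate     # prediction error from this encounter
        estimate += learning_rate * error  # model rebuilt a little with every interaction

sensory_stream = [2.0, 2.1, 1.9, 3.5, 3.6, 3.4]   # the "world" drifts partway through
print(list(frozen_predict(2.0, sensory_stream)))
print([round(p, 2) for p in online_predict(2.0, sensory_stream)])
```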
> When babies are first born, their perceptual systems are immature and underdeveloped.
There might be a parallel here with evolution too. We had the ability to feel our way around before we had brains — and eyes and brains may have evolved in parallel. Neocortexes were much later.
I am suspicious of the idea that minds or consciousness are some kind of special thing that came along with humans (or primates or mammals or whatever). Maybe it did start with sensations (touch, I'd guess) but like my Robocode example the other day, did we need perceptions or consciousness to act on them?
I like the parallel with evolution. I find it difficult to think about what sensation and perception might be like without all the human cognitive baggage layered on top. I suspect that sensation might not feel like anything (maybe). A thermometer senses temperature, but we don't think it experiences warmth or cold. Similarly, a camera sensor detects light, but we don't assume it sees in any meaningful way. Does basic perception have some experience to it? If we strip away all the human cognitive baggage, like language and abstract thinking, is there basic perception that feels like something?
When I look at an octopus, it certainly seems to be aware of what’s going on. Our last common ancestor with an octopus was 750 million years ago. I bet there were more than a few animals that experienced their surroundings during that time.
I completely agree!
This was Descartes' biggest mistake: to separate what is in fact one. The mind is the body and vice versa. To ask "can the mind be without a body?" is like asking whether we can have fire without fuel, or whether a tree is made of wood.
Maybe if Descartes had said, "I move, therefore I am", we wouldn't be in this mess!
Love it!
Hello. The mind or soul or consciousness may need a body to come into existence, but maybe it doesn't need a body after it is complete. A theory along these lines was proposed by the Muslim philosopher Mulla Sadra.
Unlike the other commenters, I don’t have anything substantive to say other than… this is so cool! Now I have something to tell my daughters about the tickling thing. 😊
Keeping up with the why questions is hard work! But it's nice to know you have some curious little ones.
Always so interesting. When I started reading I kept thinking about two things: mood and muscle memory. As a sometimes moody person, I find it difficult to think of mind as being separate from body, and I don't really know what to make of muscle memory except that it seems to be a real thing moderated from a distance, as it were, by the active mind. Maybe "mind" is a richer concept than we normally consider it. It certainly seems to be, and here's that word again, emergent from body.
Mood and muscle memory are great examples. There's plenty of research suggesting that emotions start as sensations in the body. Not sure about how muscle memory works, but now I'm curious...
From my own perspective the standard story is fine, though we needn’t resort to a homunculus (with its associated infinite regress) to be the one the representation is for. There just needs to be the right sort of physics to exist as the perceiver/thinker/decider. I’m not sure what aspect of brain physics might be causally sufficient other than an electromagnetic field. Regardless, the meaning here should ultimately be value-based and so different from standard computational function.
I don’t so much agree with the idea that to perceive the world we must first act; rather, acting can provide us with effective input information regarding our bodies.
One reason to disbelieve that movement and prediction create meaning is that our robots ought then to have it, given their own movements and predictions. There’s still a “hard problem” to deal with here. In effect the physics must be causally correct. So what element of brain physics might create phenomenal existence? This will probably be quite obvious in retrospect.
Great comment, as usual!
One difference between brains and machines might be that brain models must be built from the ground up. For us, interacting with our world is how we gradually construct the type of brain that gives us perception and understanding. In contrast, robots often come with a pre-built model. I've been wondering whether this might be a key difference.
Yes Suzi, a pre-built model is definitely a key difference between the function of our robots versus our brains! Because there is purpose to our existence, also known as “teleology”, the things that we build (whether forks or robots) should ultimately be constructed for that reason. We naturalists however don’t believe that brains (or life in general) evolved to achieve any purpose. It’s just amazing to us how purpose-like life and its various instruments seem. So instead of “teleological” we call brains and living function in general, “teleonomical”.
If we have purpose, then you may wonder what I consider that purpose to be. I consider this to be what consciousness, or value itself, ultimately reduces back to. Here I mean a variety of physics by which existence can feel good/bad to an instantaneous experiencer from moment to moment. Therefore over time I believe that the purpose of each of us is to feel as good as possible for as long as possible.
In any case if we knew the brain physics of consciousness, as well as had sufficient technology, then we should be able to build machines that aren’t just a reflection of our purpose (though they’d still be that), but also have purpose of their own. Thus existence would also feel good/bad to them from moment to moment on the basis of that physics. Theoretically evolution went this way because non-conscious programming alone didn’t work well enough under more “open” sorts of circumstances.
Thanks for another excellent article.
I think the focus on sensation and perception risks missing the more important role that the body plays in grounding intelligence, and thus in what I see as the important point of embodied cognition. While the human body does require bodily motions in order to drive perception, it's not clear why that would be necessary in general. Computers do take input from cameras and microphones (and from keyboards and mice, along with other more esoteric sensory devices). That input is processed and "made sense of" to a limited extent, allowing them to respond in ways (using sounds, images, and more) that humans find useful. And so they capture at least some part of the meaning that humans take from those sounds and images. These are, if not minds in the full sense, at least proto-minds. The processing units are their brains, and all the rest of the hardware they run on is their body.
(The fact that the software can move from one "brain" to another over time is no bar to it being a mind, as our conception of minds allows them to move from body to body (see, for example, "Freaky Friday"). Denying it the status of a mind because of its lack of a hard link to a particular body is nothing but anthropomorphic prejudice.)
In my view, the value of bodies is the ability they give us to manipulate the world. Humans start doing this even before we're born, where the "world" is largely limited to the body itself. First the brain learns how to manipulate the body; it learns where the "edges" of the body are; then it learns how to manipulate things beyond its body. The efferent signals lead to afferent signals in more-or-less reliable ways, and it is the constant conjunction of such signals that the brain learns. The efferent copy sent to the sensory organs then allows efficient detection of mismatches (whether inside or outside the body).
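A toy version of that last step, with the forward model reduced to a made-up lookup table (so a sketch of the idea, not of any real system):

```python
# Efferent copy as a mismatch detector (toy version).
# A copy of each motor command goes to a forward model that predicts the
# sensory consequence. If the actual afferent signal matches the prediction,
# the change was self-caused; a mismatch flags something out in the world.

forward_model = {                     # learned command -> expected sensation pairs
    "turn_eyes_left": "image_shifts_right",
    "raise_arm": "arm_position_up",
}

def process_feedback(motor_command, afferent_signal):
    predicted = forward_model.get(motor_command)
    if afferent_signal == predicted:
        return "self-generated: expected, can be discounted"
    return "mismatch: attribute to the world, update the model"

print(process_feedback("turn_eyes_left", "image_shifts_right"))  # self-caused
print(process_feedback("turn_eyes_left", "image_shifts_left"))   # surprise: the world moved
```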
What our current AIs are missing is the ability to manipulate the world (except in very minimal ways) and, more importantly, to expect particular results and detect when those results are not returned. This is not something that can be solved merely by adding code for efferent copies of the manipulations it can currently produce. The manipulations available are currently so minimal that they won't ground the level of meaning required to produce anything like human-level intelligence.
(We have robots that can detect when the operation they've been programmed to carry out has failed, based sometimes on video images. Self-driving cars are currently just very souped up versions of that. They are more embodied than most systems, and have more manipulations available, but the "brains" have only shallow understanding of the world around them due, in part, to lacking any meaningful interaction with it in non-driving situations. Then again, we don't really need "Knight Rider" to get us safely from point A to point B.)
My point (and I do have one) is that we won't get human-level intelligence until we can program our machines with either a deep model of the world (pretty much impossible) or with the ability to learn from manipulating the world (extremely hard). And the way the machines view the world will depend on the kinds of bodies (manipulators and sensors) we give them. That is the importance of the body.
(And having done it once, we may well be able to copy the weights (or whatever) into as many clones as we'd like, possibly thus bringing about the extermination of humanity. But that's a topic for another day.)
Thanks for another excellent comment!
I agree with (almost) everything you've said here, but I might push back a little on the sensation and perception comment, just to get your thoughts.
Can we consider the way that computers take in information from cameras and microphones to be the same as how biological creatures take in information? If we accept the embodied cognition/perception idea, then what biological creatures do is an active process, a way to predict and update our models. But what disembodied computers do with input seems far more passive. I wonder whether this difference is a difference that matters?
Thanks for pushing back. It's an important part of intelligent conversation.
I agree that current computers are far more passive than biological systems in how they take in information. Even mobile units (cars and delivery bots) are comparatively passive. My contention is that adding (for example) saccades or movable external "ears" with corresponding efferent copies -- making perception less passive -- is not going to make that much of a difference. The passivity is not just in the reception of information; it permeates the way computers interact with the world.
Mobile units are not disembodied. Each one has a body that it's controlling (possibly even more than one, if a central, immobile unit is riding herd on multiple mobile ones). But its interaction with the world is largely limited to avoiding running into things while carrying its cargo to its intended destination. Its manipulations of the outer world are largely limited to emitting beeps or (probably pre-recorded) voice messages to warn off incoming people. (Completing the delivery may involve some interaction, too. About as much as a hand-shake. Bots in warehouses need a bit more interactivity, needing to be able to pick things up and put them down again -- but still not a lot.)
While mobile units don't need deep models of the world, their models are much deeper than those of LLMs -- if, indeed, what LLMs have can be called models of the world at all. LLMs have learned constant conjunctions between words, and use a bit of randomness to generate multiple human-sounding sequences of words. As a result they have problems with "hallucinations" -- sequences of words that seem to say things that are false. The LLMs can't tell what's true or false because their output is not connected to the outer world in any meaningful way. They have "seen" people being corrected, but they've never noticed themselves making a mistake. When you tell one it made a mistake ("No, the number of 'r's in "strawberry" is not two.") it can build word-sequences that sound appropriately apologetic, but it can't figure out what the heck the problem was. It doesn't even know that there *was* a problem -- it just "knows" what sequences of words most commonly follow a sequence of words that we humans recognize as a correction.
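For what it's worth, here's a toy version of that "constant conjunctions plus a bit of randomness" picture -- a bigram counter with temperature sampling. Real LLMs are vastly more sophisticated, but the point that the output is just likely-next-words, never checked against the world, comes through even here:

```python
import random
from collections import defaultdict

# Toy "LLM": count which word follows which, then sample likely continuations.
# Nothing here consults the world; it only knows which sequences are common.

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):   # wrap around so no dead ends
    follows[prev][nxt] += 1

def next_word(prev, temperature=1.0):
    candidates = follows[prev]
    weights = [count ** (1.0 / temperature) for count in candidates.values()]
    return random.choices(list(candidates.keys()), weights=weights)[0]

word, sentence = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))   # fluent-looking, but truth never enters the process
```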
We will not recognize intelligence in a machine until it can engage in active processes, making its predictions, noticing its own errors, updating its own models -- learning from its own mistakes. It needs a body so that it can *do* things in the outer world. Also so that it can perceive things, and it may use something like an efferent copy to make perception tractable. But the engineering issues for a mechanical system are quite different from those of a biological system. So while we may draw inspiration from how biological systems solve problems, we shouldn't insist on slavishly mimicking biology.(*)
The important thing is to build the constant conjunction between the inner world and (the perceptual reports on) the outer world, and develop them to the level that allows the "brain" to produce output that engages with us (and the world) in an intelligent manner. I don't think that can be done without, at some point, involving a body that acts in and reacts to its outer world.
(*) I don't mean to accuse you of such slavishness. I just want to warn people against it.
Excellent article! Thanks so much.
I agree that embodiment is an important part of intelligence. But can't it be argued that LLMs "sense" the world in language and also act on that world with text? Of course, presently I don't believe LLMs attempt to predict what the user's response to their output would be. Perhaps this might be done?
Hi Bruce,
That's an interesting idea! Do you think if LLMs did attempt to predict user responses that would make them fundamentally different?
One more thing that I think relates. I believe that language models might be minds that exist in a reality related to, but distinct from, the one we exist in. A sort of Helen Keller reality.
Taking this notion a step further, it can be said that we each exist in a slightly different constructed reality. In fact, I don't even think it's arguable that we do.
Wonderful. And yes, embodied cognition is the way I went à la Merleau-Ponty, as I became a baby phenomenologist; I was always surprised by how well his work was developed prior to the current heights which imaging, computational and neurophysiological techniques have reached (possibly dates me). Anyway, thanks for this, everyone’s discussion and allowing me to bang on about situated mind again :)
Seems like you've been on this journey for a while! I've come to embodied cognition from the other side -- the neuroscience and computational modelling side. What strikes me is how relevant Merleau-Ponty's ideas are to current discussions in cognitive science. Over the past few years, I've noticed a shift in the field. We're moving away from the classic "brain as computer" metaphor towards a more embodied view of cognition.
Merleau-Ponty's work, despite being decades old, seems to anticipate many of the current questions. It's as if cognitive science is catching up to what he proposed years ago. His emphasis on the body's role in shaping our understanding of the world aligns remarkably well with some influential theories.
It felt like that to me, but (and it's a big but) I am not anything more than an ageing autodidact now, though my kind memory recalls that at one time in the 1980s I felt up to date in septo-hippocampal neuroscience and memory. Even this seems hubristic when I write it down! Anyway, the point being that it’s nice to read that you have a similar impression as a working neuroscientist today :)
Just one more thing :) It's obvious to me that a composer with perfect pitch hears, and is conscious of, music in a different way than I am, a way that I can't really imagine.
I've been meaning to tell you for some time how much I enjoy your article voiceovers. They are lucid and engaging, and you have a wonderful accent.
Aww! Thank you so much. They take a little time to record, so it's great to hear that someone is enjoying them.
Are we sleeping on Suzi’s artistic ability? The stick man side lunge. Picasso 🤌
Even though we pretty much understand that humans are a fully integrated system and that the mind-body distinction is spurious, there is a helpful way of thinking about “mind.” The mind is the brain in the abstract, consisting of an associative array of tokens and rules governing how those tokens may be arranged. The abstraction can be either deterministic, in which case it must also be minimalist, or stochastic, in which case it must also be complex. It follows that the style of abstraction to be applied depends on the nature of the purpose for which it is being made. For example, if the purpose is shape recognition, the former (simpler) style might do; if it is decision-making under uncertainty, the latter is needed.
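One made-up miniature of that abstraction (not a model of any actual brain): an associative array of tokens with arrangement rules, which can be read either deterministically or stochastically.

```python
import random

# "The mind is the brain in the abstract": an associative array of tokens plus
# rules about how they may be arranged. The same structure can be read
# deterministically (minimalist, e.g. shape recognition) or stochastically
# (complex, e.g. decisions under uncertainty).

rules = {
    "three_sides": {"triangle": 1.0},                            # effectively deterministic
    "threat_cue":  {"flee": 0.6, "freeze": 0.3, "fight": 0.1},   # genuinely stochastic
}

def deterministic(token):
    # Minimalist reading: always pick the single most strongly associated arrangement.
    options = rules[token]
    return max(options, key=options.get)

def stochastic(token):
    # Complex reading: sample an arrangement in proportion to its associative weight.
    options = rules[token]
    return random.choices(list(options), weights=list(options.values()))[0]

print(deterministic("three_sides"))   # 'triangle', every time
print(stochastic("threat_cue"))       # varies from run to run
```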
I totally agree. I think the idea of a mind/body distinction has been a big distraction for philosophers since Descartes (I just wrote about that here: https://raggedclown.substack.com/p/old-philosophy-vs-new-philosophy). I like the idea of the mind as the brain in abstract.