32 Comments

Excellent as always Suzi!

I think the embodied cognition idea is right, but I don't think it challenges computationalism. It just gives us insights into what is being computed. Overall, whether a mind requires a body depends on how we choose to define "mind", but I think it's clear that our type of intelligence requires an environment. We can see the body as the most immediate aspect of that environment, and as the primal interface with the rest of the environment. But I find it telling that we can drive a car and come to sort of intuitively treat it as our body for the duration.

But if we're talking about intelligences more generally, I would think other types are possible. Actually, any of the automated systems we have now has some degree of intelligence, just not anything approaching human, or even animal, intelligence yet. But I think they're already far enough along to tell us other types of intelligence are possible. Whether we call them "minds" will be a philosophical issue.

Thanks for that Arriving and Exiting tip on afference and efference! Despite having read many books on neuroscience, that's one of the distinctions I always have to go look up when I stumble across it in a text.

founding
24 hrs ago · Liked by Suzi Travis

Mike, I was surprised I didn’t see any commentary from you regarding the AI podcast Suzi generated by inputting last week’s article and commentary. These LLMs are getting so good that they even seem superhuman. I wonder if you missed it? To me this seems to be giving functional computationalists what they’ve been predicting for decades, and yet no one is claiming that these two disembodied AI podcasters might actually be conscious.

https://substack.com/@suzitravis/note/c-69546625


Thanks Eric. I had totally missed it. (I honestly haven't figured out Substack at all yet.) I'll check it out when I'm somewhere I can watch it.

I suspect the answer is going to be along the lines of what Suzi discusses in this article: that to give us an intuition of a conscious entity, it needs to be interacting with an environment through a body. Of course, you could say the data is its environment, but it still feels too one-way.

In the end though, functionality is objective. Which combination of that functionality amounts to consciousness is in the eye of the beholder.


Ok, that's just creepy. It feels like one of those very bland over-professionalized podcasts or radio shows, where the presenters are super scrupulous about not revealing their own opinions. So if I didn't know it was AI generated, I'd think the podcasters had their presentation face tightly in place, but would likely think they were still real and conscious. Of course, after this, I'm going to be suspicious of anything where the presenters are that smooth and bland.

But to your question Eric, I think actual interaction, such as maybe taking questions in real time from viewers, particularly over a prolonged period, would flush out any intuitions of conscious entities. The question will be how to regard them when that ceases to be a reliable test.


I forgot to make a comment about representations. Part of the issue is the word itself: "re-presentation" makes it sound like something presented to an inner observer, and leads to the homunculus problem you describe. A better way of describing a representation is that it's part of the mechanism of perception. So it's not for an inner observer, but something used by internal processes *of* the observer.

A better word for it might be "model", "schema", "prediction", "disposition circuits", or something else that we more expect to be used by sub-personal processes. But the word "representation" is so embedded in these discussions, we're probably stuck with it to some degree.

author

I see your point. I think we are on the same page here.

We have to be careful not to simply replace the homunculus with other words like 'model', 'schema', or 'prediction'. If we are talking about a 'model' in the way engineers use the word, then I think 'model' works. A 'model' is a component of the machine that responds to some inputs or patterns but not others. The problem, it seems, is when we use 'representation', or 'model', to mean the whole thing. This would align with Dennett's concerns. By explaining consciousness as many components working together rather than a single thing, he avoids the homunculus problem.
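
To make that engineer's sense of 'model' concrete, here is a minimal sketch in Python. The component, its patterns, and the threshold are all invented purely for illustration:

```python
# A minimal sketch of a 'model' in the engineer's sense: a component that
# responds to some input patterns but not others. All names and values here
# are hypothetical. Nothing "looks at" its output; it simply feeds whatever
# downstream components come next.

class EdgeDetector:
    """Responds to a sharp transition in a 1D signal; ignores everything else."""

    def respond(self, signal: list[float]) -> bool:
        # Fires only when some adjacent pair of values differs sharply.
        return any(abs(a - b) > 0.5 for a, b in zip(signal, signal[1:]))

detector = EdgeDetector()
print(detector.respond([0.1, 0.1, 0.9, 0.9]))  # True: the pattern is present
print(detector.respond([0.4, 0.5, 0.5, 0.4]))  # False: no sharp transition
```

The point of the sketch is that the component is *used by* other parts of the system, not *viewed by* an inner observer.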


I agree we have to be cautious with the representation concept. I've also seen the word "image" used for sensory representations. But there is a danger in thinking of these things too rigidly as contiguous entities in the brain, when in reality they're likely more a complex, fuzzy, distributed coalition of neural circuits that evolves over time.


I had very briefly played with NotebookLM, and had not really explored the podcast feature. This is truly astonishing. Thanks for the link!


Thanks, but Eric Borg supplied the link. And of course Suzi put it together.


Interesting that you refer to automated systems as having intelligence - my software-developer husband always totally disagrees when that comes up somewhere!


I'm an old programmer myself. But I'm using "intelligence" in a continuous spectrum sense, not a discrete binary one. In that sense, the device you're using right now has some intelligence, but nothing like the intelligence of a mouse, much less a human being.

author

Interesting! I wonder whether the disagreement here is just one of definition. Historically, intelligence and consciousness were used interchangeably. But it seems like the definitions of these words have recently shifted. Nowadays, intelligence seems to mean the ability to learn and apply knowledge, while consciousness means something like subjective experience.

author

Hi Mike, great comment, as usual!

I agree! Embodied cognition doesn't refute functionalism (or even computationalism) entirely. But I do think that it might force some forms of functionalism to shift a little.

Functionalism is an interesting theory -- there are so many types. The different forms are typically the result of responses to earlier theories. It's a theory that shifts and morphs.

I've been sensing, in the cognitive science community, a move away from the computer metaphor towards an embodied cognition view. I sense this move has been pushed by researchers studying things like emotions and bodily experiences, because the computer metaphor leaves little room for things like that.

I guess how much embodied cognition is compatible with functionalism will depend on how strong or weak a claim is being made. The strong embodied view might claim that we couldn't have the cognition, concepts and ideas we have without the bodies that we have. Others might take a view that could fit more neatly with some functionalist views.

I agree with your point about intelligences in general. None of this means that other intelligences (or cognition) can't be found (or aren't found) in other non-meat mediums. The question, as you put it, is whether we decide to call such things 'minds'.


Thanks Suzi!

I think what unites the various functionalisms is the idea that mental states are about what they do, their causal roles. I'm always surprised how controversial that is. We have no problem with understanding the heart, muscles, liver, or more abstract processes like metabolism, in a functional manner. But when it gets to mental states, it's suddenly a radical prospect. In any case, I see functionalism as conceivably broader than computationalism, depending on how we define "computation".

When the 4Es (embodied, enactive, embedded, extended) are discussed as replacing computation, I always wonder how the word "computation" is being used. Certainly the brain is very different from a programmable digital computer, but it's not clear to me that even the classic computationalists thought it was one. The question I have is, what's the alternative account of what individual neurons are doing with their selective propagation of effects?
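
To make concrete the minimal sense of "computation" I have in mind, here is a toy threshold unit in Python. The weights and threshold are arbitrary stand-ins, not claims about real neurons:

```python
# A toy McCulloch-Pitts-style unit: it propagates an effect for some input
# patterns and stays silent for others -- selective propagation in the most
# minimal sense. Weights and threshold are arbitrary illustrations.

def neuron(inputs: list[float], weights: list[float], threshold: float) -> int:
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # fire, or stay silent

# Same unit, different inputs, different downstream effects.
print(neuron([1.0, 0.0, 1.0], [0.6, 0.2, 0.5], threshold=1.0))  # 1 (fires)
print(neuron([0.0, 1.0, 0.0], [0.6, 0.2, 0.5], threshold=1.0))  # 0 (silent)
```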

Some of the more absolutist factions in the embodied movement feel ideological. 4Es has good insights, but we have to be careful not to throw the baby out with the bathwater. But as I've learned more, I've become a theoretical pluralist, accepting that several ways of looking at things can be productive, and none are likely to be the one true answer.

Sep 24 · edited 14 hrs ago · Liked by Suzi Travis

Lovely, Suzi. Thank you. I have a question about what minds do though.

If minds (and/or consciousness) are concerned with sensations and perceptions, which bit of the brain is concerned with memories, wishes, the square on the hypotenuse and emotions like loneliness?

Would the mind-people say there is a different bit of the brain that takes care of all these? Would the consciousness-people say that it is a different bit of the mind?

How is the answer affected with an embodied view of perception?

author

Wonderful comment!

I'm planning several upcoming articles that will try to answer these questions -- or at least explore where the latest research is pointing. So, for now, I'll save most of the detailed explanations for those future posts.

But here are a few thoughts...

One of the concerns about simplified drawings like the ones I put in the article is that they can make it seem like the brain is more segmented and simple than it actually is. They might also give the impression that such an idea can explain how the brain becomes a thinking, feeling brain. I don't think it does. There's much more to the story.

Sensation and perception are closely tied to our immediate experiences of the world. Other 'mind' things like memories, wishes, and emotions (and even the square on the hypotenuse) don't necessarily involve this immediate experience of the world. Some might like to say that we can imagine a system that only has sensation and perception, without all the other 'cognitive' things like language, long-term memories, abstract thinking, etc. This system (if possible) would be a fairly simple system.

To get other more 'cognitive' processes we need a more complex system. The question is what type of complexity is required?

4 hrs ago · edited 4 hrs ago · Liked by Suzi Travis

Thanks, Suzi. I guess my question is a veiled criticism of the idea that consciousness is tied up with sensations and perceptions. Mine doesn’t seem to be.

I understand if we are just talking about brains (brains have different bits that do different things) but the “consciousness is special” people seem to want to tie consciousness to sensations. I wonder where *they* think the other stuff happens.

(I’m not suggesting that you believe that but maybe you have an insight into the people that do)

I look forward to your follow up. I love your posts!


“If an AI could perceive the world without a body, would this challenge the embodied view of perception, or would it suggest biological perception is not the only way to perceive?” - Perhaps its perception could be similar to a person reading a book. Not nothing, but all mind-generated.


> When babies are first born, their perceptual systems are immature and underdeveloped.

There might be a parallel here with evolution too. We had the ability to feel our way around before we had brains — and eyes and brains may have evolved in parallel. Neocortexes were much later.

I am suspicious of the idea that minds or consciousness are some kind of special thing that came along with humans (or primates or mammals or whatever). Maybe it did start with sensations (touch, I'd guess) but like my Robocode example the other day, did we need perceptions or consciousness to act on them?


This was Descartes' biggest mistake: to separate what is in fact one. The mind is the body and vice versa. To ask whether the mind can exist without a body is like asking whether we can have fire without fuel, or whether a tree is made of wood.


Unlike the other commenters, I don’t have anything substantive to say other than… this is so cool! Now I have something to tell my daughters about the tickling thing. 😊


Always so interesting. When I started reading I kept thinking about two things: mood and muscle memory. As a sometimes moody person, I find it difficult to think of mind as being separate from body, and I don't really know what to make of muscle memory except that it seems to be a real thing moderated from a distance, as it were, by the active mind. Maybe "mind" is a richer concept than we normally consider it. It certainly seems to be, and here's that word again, emergent from body.

founding
22 hrs ago · Liked by Suzi Travis

From my own perspective the standard story is fine, though we needn’t resort to a homunculus as the one the representation is for (with its associated infinite regress). There just needs to be the right sort of physics to exist as the perceiver/thinker/decider. I’m not sure what aspect of brain physics might be causally sufficient other than an electromagnetic field. Regardless, the meaning here should ultimately be value-based and so different from standard computational function.

I don’t so much agree with the idea that to perceive the world we must first act, but rather that acting can provide us with effective input information regarding our bodies.

One reason to disbelieve that movement and prediction create meaning is that our robots ought then to have it, given their own movements and predictions. There’s still a “hard problem” to deal with here. In effect, the physics must be causally correct. So what element of brain physics might create phenomenal existence? This will probably be quite obvious in retrospect.

22 hrs ago · Liked by Suzi Travis

Thanks for another excellent article.

I think the focus on sensation and perception risks missing the more important role that the body plays in grounding intelligence, and thus what I see as the important point of embodied cognition. While the human body does require bodily motions in order to drive perception, it's not clear why that would be necessary in general. Computers do take input from cameras and microphones (and from keyboards and mice, along with other more esoteric sensory devices). That input is processed and "made sense of" to a limited extent, allowing them to respond in ways (using sounds, images, and more) that humans find useful. And so they capture at least some part of the meaning that humans take from those sounds and images. These are, if not minds in the full sense, at least proto-minds. The processing units are their brains, and all the rest of the hardware they run on is their body.

(The fact that the software can move from one "brain" to another over time is no bar to it being a mind, as our conception of minds allows them to move from body to body (see, for example, "Freaky Friday"). Denying it the status of a mind because of its lack of a hard link to a particular body is nothing but anthropomorphic prejudice.)

In my view, the value of bodies is the ability they give us to manipulate the world. Humans start doing this even before we're born, when the "world" is largely limited to the body itself. First the brain learns how to manipulate the body, then it learns where the "edges" of the body are, and then it learns how to manipulate things beyond its body. The efferent signals lead to afferent signals in more-or-less reliable ways, and it is the constant conjunction of such signals that the brain learns. The efferent copy sent to the sensory organs then allows efficient detection of mismatches (whether inside or outside the body).
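
As a toy illustration of that constant conjunction, here is a sketch in Python. The forward model and all the numbers are invented; the real learning problem is of course vastly harder:

```python
# Toy efferent-copy loop (all values invented for illustration): an efferent
# command comes with a prediction of the afferent signal it should produce;
# the system flags a mismatch when the world returns something else.

def predicted_afference(command: float) -> float:
    return 2.0 * command  # a learned "constant conjunction" (made up)

def actual_afference(command: float, perturbation: float = 0.0) -> float:
    return 2.0 * command + perturbation  # what the world actually returns

def mismatch(command: float, perturbation: float = 0.0) -> float:
    return actual_afference(command, perturbation) - predicted_afference(command)

print(mismatch(0.5))                    # 0.0: expected, safe to attenuate
print(mismatch(0.5, perturbation=0.3))  # 0.3: surprise, worth attending to
```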

What our current AIs are missing is the ability to manipulate the world (except in very minimal ways) and, more importantly, to expect particular results and detect when those results are not returned. This is not something that can be solved merely by adding code for efferent copies of the manipulations it can currently produce. The manipulations available are currently so minimal that they won't ground the level of meaning required to produce anything like human-level intelligence.

(We have robots that can detect when the operation they've been programmed to carry out has failed, based sometimes on video images. Self-driving cars are currently just very souped-up versions of that. They are more embodied than most systems, and have more manipulations available, but their "brains" have only a shallow understanding of the world around them, due in part to lacking any meaningful interaction with it in non-driving situations. Then again, we don't really need "Knight Rider" to get us safely from point A to point B.)

My point (and I do have one) is that we won't get human-level intelligence until we can program our machines with either a deep model of the world (pretty much impossible) or with the ability to learn from manipulating the world (extremely hard). And the way the machines view the world will depend on the kinds of bodies (manipulators and sensors) we give them. That is the importance of the body.

(And having done it once, we may well be able to copy the weights (or whatever) into as many clones as we'd like, possibly thus bringing about the extermination of humanity. But that's a topic for another day.)


Excellent article! Thanks so much.

I agree that embodiment is an important part of intelligence. But can't it be argued that LLMs "sense" the world in language and also act on that world with text? Of course, presently I don't believe LLMs attempt to predict what the user's response to their output would be. Perhaps this might be done?


One more thing that I think relates. I believe that language models might be minds that exist in a reality related to, but distinct from, the one we exist in. A sort of Helen Keller reality.

Taking this notion a step further, it can be said that we each exist in a slightly different constructed reality. In fact, I don't even think it's arguable that we do.

21 hrs ago · Liked by Suzi Travis

Wonderful. And yes, embodied cognition is the way I went, à la Merleau-Ponty, as I became a baby phenomenologist; I was always surprised by how well developed his work was prior to the current heights which imaging, computational and neurophysiological techniques have reached (possibly dates me). Anyway, thanks for this, for everyone’s discussion, and for allowing me to bang on about situated mind again :)


Just one more thing :) It's obvious to me that a composer with perfect pitch hears, and is conscious of, music in a different way than I am, a way that I can't really imagine.


I've been meaning to tell you for some time how much I enjoy your article voiceovers. They are lucid and engaging, and you have a wonderful accent.

15 hrs ago · Liked by Suzi Travis

Are we sleeping on Suzi’s artistic ability? The stick man side lunge. Picasso 🤌


Even though we pretty much understand that humans are a fully integrated system and the mind-body distinction is spurious, there is a helpful way of thinking about “mind.” The mind is the brain in the abstract, consisting of an associative array of tokens and rules governing how those tokens may be arranged. The abstraction can be either deterministic, in which case it must also be minimalist, or stochastic, in which case it must also be complex. It follows that the style of abstraction to be applied depends on the purpose for which it is being made. For example, shape recognition might call for the former, and decision-making under uncertainty the latter.
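
As a rough picture of the two styles of abstraction, here is a sketch in Python; the tokens and rules below are invented purely for illustration:

```python
# Two hypothetical styles of the token-and-rule abstraction described above.
import random

# Deterministic and minimalist: each token has exactly one successor.
deterministic_rules = {"edge": "corner", "corner": "shape", "shape": "object"}

# Stochastic and complex: each token has weighted alternative successors.
stochastic_rules = {"option": [("act", 0.7), ("wait", 0.2), ("abort", 0.1)]}

def next_deterministic(token: str) -> str:
    return deterministic_rules[token]

def next_stochastic(token: str) -> str:
    successors, weights = zip(*stochastic_rules[token])
    return random.choices(successors, weights=weights)[0]

print(next_deterministic("edge"))  # always "corner" (e.g., shape recognition)
print(next_stochastic("option"))   # varies (e.g., decision under uncertainty)
```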
