A few actors in this I wasn’t familiar with, but I will now remedy that. Thank you for this essay, Suzi.
Thanks, John! I'm glad you found it interesting!
An excellent overview of the history!
I sometimes wonder if the behaviorists don't get a bad rap these days. A lot of their approach could be seen as a response to the limitations in measurement they had to work with. Of course, behaviorism wasn't a monolithic movement. For many it was just a methodology, but others did turn it into an ontology.
Although it's worth remembering that the information processing paradigm was always there. It's often forgotten today that modern computing is based on Boolean logic, which began with George Boole's 1854 book: The Laws of Thought. From the beginning the idea seems to have been to capture how thinking works. So maybe we shouldn't be too surprised that information processing paradigms eventually fed back into the actual cognitive sciences.
Hey Mike -- yes, thanks for pointing this out. It's easy to dismiss behaviourism today, but as you say, many behaviourists were simply working within the empirical constraints of their time. And you make a good point -- some took it as a methodology, while others took it as a more extreme ontological claim, which is where much of the criticism is directed.
On Boolean logic — that’s an interesting idea. I’ve always seen the major influences on the cognitive revolution as Shannon’s information theory, cybernetics, and Chomsky’s critique of behaviourism. I haven’t read much of Boole’s work, but from what I understand, it was more aligned with Aristotelian logic than with the kind of step-by-step symbolic processing later used in cognitive psychology. That said, I see what you’re getting at — Boole’s work did predate computing and laid essential groundwork for modern logic, which eventually fed into computational approaches to the mind.
Now you've got me wanting to read about Boole!
Hey Suzi! I have to admit I've never read Boole directly, just brief articles about how Boolean logic developed. And much of that material notes that the version later used in technology took time to develop, with later thinkers adding to his concepts considerably.
I worked with a “direct descendant” of Skinner in grad school and came to understand where they came from. It began as a strategy: let's see how much we can explain without inventing hypothetical intervening variables. My mentor loved telling the story of how Skinner trained a pigeon so that it was ultimately working on a variable ratio 2000 schedule. (For the uninitiated, that's one reward for every 2000 responses, on average!) Of course, when the reinforcement was withdrawn, it kept on pecking forever. “Now that's persistence, that's character!” he quoted Skinner as laughingly saying. (God help his daughter!) The problem, as you say, is that most behaviorists never did see the need to broaden the model to include intervening variables, or biological constraints, and thus it did become an ontology. I found many of the ones I met to be rather cerebral and untroubled by emotions. Rather convenient when you're working with a model that ignores them.
Wow! That pigeon story is both hilarious and fascinating. It’s wild to me that there wasn’t more curiosity about what the bird was thinking. My first thought was exactly that — I’d love to know what that poor bird thought!
"If the mind is an information processor… why couldn’t a machine that processes information have a mind?"
I think a "mind" is about how the information is processed not just that it is processed.
YES! I agree. Just because a system processes information doesn’t necessarily mean it has a mind — what matters is *HOW* it does so.
So the question then becomes what makes certain kinds of information processing ‘mind-like’? Is it a question of structure (e.g., neural networks vs. classical computation?), function (e.g., higher-order thoughts, self-awareness, prediction?), or something else entirely?
I think it is non-classical. It may be based on dynamic geometric shapes with higher-dimensional properties.
https://www.scientificamerican.com/article/how-squishy-math-is-revealing-doughnuts-in-the-brain/
That’s an interesting direction! Non-classical computation can mean a lot of things — are you thinking about something like continuous dynamics?
And if so, do you think biological systems use continuous dynamics in a way that's unique to life, or could artificial systems replicate it?
Thinking along the lines of this paper:
Arsiwalla, X.D., Signorelli, C.M., Puigbo, JY., Freire, I.T., Verschure, P.F.M.J. (2018). Are Brains Computers, Emulators or Simulators?
"Machines implementing non-classical logic might be better suited for simulation rather than computation (a la Turing). It is thus reasonable to pit simulation as an alternative to computation and ask whether the brain, rather than computing, is simulating a model of the world in order to make predictions and guide behavior. If so, this suggests a hardware supporting dynamics more akin to a quantum many-body field theory."
Something like quasicrystals, but dynamic.
"Quasicrystals are intriguing states of matter that occupy a fascinating middle ground between periodic crystals and amorphous unordered glasses—they are long-range ordered without being periodic. This gives rise to peculiar transport properties and has been used to study Anderson and Many-body localization.
Quasicrystals are self-similar and their fractal structure can give rise to the most complex quantum states. Mathematically, they can be described as a projection from a (in our case four-dimensional) periodic parent lattice. In particular, they can inherit topological features, such as protected edge states, from their higher-dimensional parents."
https://www.manybody.phy.cam.ac.uk/Research/quasicrystal#:~:text=Quasicrystals%20are%20intriguing%20states%20of,Anderson%20and%20Many%2Dbody%20localisation.
The perfect place for consciousness to hide in the physical brain would be in a space almost impossible to visualize or measure from our perceived three-dimensional space. You project a three-dimensional space from an extra dimension.
Thanks James! That's a really fascinating perspective! The idea of consciousness hiding in an almost unmeasurable space is intriguing. Definitely something to think about!
I must admit the idea is pretty crazy and even I have my own doubts. But the idea isn't new.
Smythies proposed a phenomenal space in the '50s, I think. Peter Sjöstedt-H. has written more recently on the idea. Here's a more recent article by Smythies:
http://www.neurohumanitiestudies.eu/archivio/smythies.pdf
This could be a spatial dimension. Kaluza-Klein theory proposed a 5th spatial dimension to unify the EM field with gravity. That's interesting from the perspective of EM field theories of consciousness.
But it might not be spatial. We accept time as a dimension, but it isn't spatial at all and doesn't have the same properties as the other dimensions. The extra dimension(s) could be informational. Do we even understand what dimensions are? Are the spatial dimensions "real" dimensions or constructs of our brains?
At any rate, if the idea is correct and we gained an understanding of how the extra dimension worked, an artificial consciousness could likely be created. But the implementation probably wouldn't be anything like a digital computer, and it might behave more like something living than something dead.
The word "mind" is often used as if the mind were located in the brain. Wouldn't it be more appropriate to treat the mind as a function of the brain? That would eliminate the need for a specific location.
You raise an interesting point! Thinking of the mind as a function helps move us away from the idea that it's an object sitting somewhere -- possibly in the brain.
But does this mean it has no location at all? Most of the time we think about functions happening somewhere. Digestion is a function, but we don't say we want to eliminate the need for a specific location where digestion happens. We are happy to say it takes place in the stomach (and other parts of the digestive tract).
If the mind is a function, its functional processes still might depend on physical structure. If so, the question becomes what physical structure is required? Could the function be performed in a non-biological system, like a computer? Or is there something about the physical structure of the brain that matters?
I think current language usage often separates the brain from the body's function. This was the reasoning in my thoughts about the word “mind”. We associate it with thoughts and memories, but it is the brain that is processing the experiences the body is having. Treating the mind as a function makes it more of a whole-body experience and removes the distraction of associating the mind with only the brain. The brain is also processing information both internally and externally, which makes it more difficult to identify where a mind is.
I see what you mean about how language separates the brain from the body, and I really like the idea that thinking of the mind as a function shifts us toward a whole-body perspective. But I’m curious, when you say that this removes the distraction of associating the mind only with the brain, do you mean that the mind is something that 'emerges' from the interaction between brain and body? That the mind is what the brain does? Or are you suggesting that cognition itself isn’t centered in the brain at all?
You went a different way than I thought you were going to go, Suzi. And you really fooled me when you mentioned “code breaking”, since Alan Turing was instrumental in breaking the German Enigma coding machine to help the Allies prevail in World War II. Instead you went with how the work of Claude Shannon was interpreted. And indeed, you reached the same theme that I do with Turing. Shannon never meant for bits to be interpreted with meaning. Regardless, I'll provide a quick account of the Turing situation.
In a sense, Turing attempted what the behaviorists attempted — he simplified consciousness for human convenience. How might we ever know that a computer was conscious, given that we don't yet even grasp what we're talking about in the human case? So he set up an “imitation game” to effectively take consciousness out of the equation. If we can't tell whether we're talking with a computer or a human, then that computer must have a consciousness. This mandates that there isn't anything interesting about consciousness, because here all computers must at least have the computational mechanisms for it — they all take input information and algorithmically convert it to appropriate other information. This is what computational functionalists presume the brain does for our consciousness, though far more extensively than even modern chatbots do. In 1980 John Searle tried to use his Chinese room thought experiment to inject some reason into the debate, but to no avail.
So that's essentially the situation we have today, and why I think I was right as a college kid in the late 80s to consider it ridiculous that the field of psychology didn't yet have a value-driven model of consciousness, or didn't even consider such a thing needed. Thus instead of just talking about my “dual computers” model of brain function in blog commentary (as I've done since 2014), I'm finally trying to present my models on their own. The goal would be to help our troubled mental and behavioral sciences finally gain a solid foundation from which to build.
It sounds like next time you’re going to get into the physics of information. This will permit me to submit my own such position. Here I mean that information should only be said to exist as such, to the extent that something causally appropriate becomes informed by it. Observe that all known elements of computation follow this rule. Functional computationalists however, conveniently posit consciousness as the unique exception to this rule.
I see what you mean about Turing’s approach having a kind of behaviourist flavour. He really did sidestep the question of consciousness in a way that's pretty similar to what the behaviourists did with mental states. Instead of trying to define what consciousness is, he just set up a test for whether something acts conscious. Computational functionalism seems to have picked up that thread, treating intelligence as an input-output system, just on a more sophisticated level.
We use the word information in so many different ways. Shannon's version is all about reducing uncertainty, but in cognitive science, we often mean something more like structured representation or meaning. The question is, how much do these different concepts relate to each other? Is the confusion in these debates just that we're using the same word but talking about entirely different things? Or is it that there's actually some connection between them that we can pin down?
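To make the Shannon side concrete, here's a minimal Python sketch (illustrative only). Entropy measures nothing but uncertainty; the symbols could stand for anything, which is exactly why meaning has to come from somewhere else:

```python
import math

def shannon_entropy(probs):
    """Average uncertainty (in bits) of a source with the given
    symbol probabilities. Nothing here refers to meaning; only
    to how surprising each symbol is."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased coin far less.
print(shannon_entropy([0.5, 0.5]))    # 1.0
print(shannon_entropy([0.99, 0.01]))  # ~0.08
```

The same numbers come out whether the symbols encode poetry or static, which is the gap between Shannon's notion and the cognitive-science one.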
I think there's a great connection between all the different ways that we use the term “information”, and it's the very root of the word itself. They all “inform” something that's causally appropriate. A DVD can inform a DVD player because it's causally appropriate to do so. A Betamax tape is not. Or a DVD could inform a table as a shim under one of its legs because it's causally appropriate to do so. Speaking Chinese to me does not inform me because it's not causally appropriate information to me — I don't speak the language. The only problem with functional computationalists is that they haven't quite learned this lesson, and so presume that processed brain information itself exists as consciousness, rather than that it goes on to inform something causally appropriate that exists as a given consciousness. I'll probably go deeper into this with your coming “information” post. And now that it's the weekend I hope to finally begin writing my post #3, which will concern the way that evolution must have straightened this particular engineering matter out millions of years ago. I haven't decided if I'll go purely theoretical on this, or pair it with the fossil record and thus the emergence of the brain, and then brains armed with consciousness. The “Cambrian explosion” of life should be a central theme if I do map this theory back to the fossil record.
Sounds like a fascinating direction for your next post! Curious to see where you take it, especially if you bring in the Cambrian explosion and the emergence of consciousness!
Suzi, I just want to thank you, again, for all that you bring. I just googled "spider brain," and I must say I seriously doubt I would have done that, among many other things I've recently done in response to your work, without your essays. Always thought-provoking, and always inspirational.
The reason I went to spider brains was the thought that we define (human) brains too narrowly. We think of them as located within the skull - but maybe we should think of them as the entire neural system, as much the sense in a fingertip as the cells of the frontal lobe. And maybe "mind" is emergent from the entire system and not cognition. You know people sometimes say guys think with their... certain body parts. Perhaps that's more literally true than I ever considered.
Which took me to spider brains. You know, some spiders think with their legs. https://spideranatomy.com/do-spiders-have-brains/. Maybe we do too and theories of the mind should take that into consideration.
Wow, that's such a kind thing to say, Jack — thank you! It really means a lot.
I love it! Spider brains! I also love that the website 'spideranatomy.com' exists. AND they have a special section on spider parts! You know that I'm going to spend hours on this site now.
Some might say human guys think with their... certain body parts, but spiders take this to a whole new level. I just found out that male spiders don’t actually have that certain body part — instead, they use a modified limb. And don’t some of them become lunch after using their... certain modified limb? Which gives a whole different meaning to 'some spiders think with their legs'.
Haha, you got a little more enjoyment out of that spider business than I expected! I'd heard that black widows, for example, were famous for mate cannibalism, but it turns out that might be overrated: https://www.burkemuseum.org/collections-and-research/biology/arachnology-and-entomology/spider-myths/myth-black-widows-eat. Note that female black widows are 10-160 times the size of the males, who must not even be a satisfying bite, much less a meal, and if they're thinking fast enough (with their... certain legs, no doubt), they often get away to mate another day.
I distracted you, though. What do you think of my idea that our concept of "brain" is too limited? That might kick the can of sentience down the road a bit, although robotics is moving quickly to step into the breach.
Haha, I did get a bit carried away with the spider business! And that's interesting — black widow cannibalism might be more of a myth than a rule.
But back to your original point — I do think it's important to consider more than just the brain when thinking about what the brain does. As we've discussed before, I think embodied cognition makes some compelling arguments. But I'm also wary of taking it too far — at its extreme, embodied cognition can lead to some very strange places. If we go broad enough, cognition starts to seem like something that has nothing to do with the brain at all, which, to me, is a problem.
Ha! We've been talking about humans thinking with ... certain parts and spiders mating with... various limbs and you're suddenly objecting to strange places? How now, brown cow? (are you familiar with that nonsense phrase which functions sort of like the flavoring particle "doch" in German?) Anyway, I wouldn't want to take the brain out of the equation, just to consider the neural connections more generally as an essential part of the brain and not just a link between mind and body. Is that a bridge too far?
Haha, fair point — I definitely opened the door to some strange places! And yes, I know how now, brown cow? — though I love the idea of it as an English doch. I hadn’t thought of it that way before.
I see what you’re getting at, and no, I don’t think it’s a bridge too far. There’s definitely value in viewing cognition/mind as a whole system rather than isolating the brain in our study of it.
A great post, as always.
You suggest that behaviorism went away after the cognitive revolution. Behaviorism treated the brain as a black box: input -> black box -> output. This can be described as a reflexive arc. And it is how AI and computers work today: once programmed, influence from the environment is irrelevant. To me, that harkens back to behaviorism.
Ethology (Karl von Frisch, my patron saint) took animals out of the behaviorist's laboratory (where environmental context was irrelevant) and studied them in their own natural surroundings. This break with behaviorism is reflected in the idea of embodied cognition where the brain is in continuous touch with its environment and is constantly learning from or adapting to the environment. That process is described by a continuous loop, not an arc.
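In code terms, the contrast might look something like this toy sketch (not any particular system):

```python
# A toy contrast, not any particular system. The behaviorist
# "reflexive arc" is a one-way mapping; the ethological "loop"
# lets each action change the world the agent senses next.

def reflexive_arc(stimulus):
    # input -> black box -> output, and that's the end of the story
    return 1 if stimulus > 0.5 else 0

def continuous_loop(steps=5):
    world = 0.7    # a one-number stand-in for the environment
    memory = 0.0   # the agent's internal state keeps adapting
    for _ in range(steps):
        stimulus = world                          # world affects agent
        action = 1 if stimulus > memory else 0    # decision uses learned state
        world = 0.9 * world - 0.2 * action        # acting changes the world...
        memory = 0.9 * memory + 0.1 * stimulus    # ...and the agent adapts
        print(f"stimulus={stimulus:.2f} action={action} world={world:.2f}")

continuous_loop()
```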
Oh! I like this. The shift from a reflexive arc to a continuous loop is such a great way to frame the difference between behaviourism and embodied cognition.
Good point — AI does often resemble (and is even described as) behaviourism's black box. This is especially true for systems that apply learned patterns (training) without further adaptation (updating the model).
As we’ve discussed before, I think there’s some merit to embodied cognition.
But what do you think about other types of AI models, like reinforcement learning? These models do adjust based on environmental feedback. Does that bring them closer to an ethological approach — where the system continuously interacts with its surroundings, learning and adapting the way animals do? Or do you think embodiment itself is the key factor?
I think reinforcement learning is a great first step, but it does not go far enough to cut the strings of the puppetmaster/homunculus. RL presumes that feedback comes FROM the environment in the form of positive or negative feedback -- the result of a reward function defined by an external intelligent agent. First of all, the environment does not communicate anything TO an intelligent organism. The organism observes the consequences of its own actions in the environment. Secondly, the reward functions defined by human developers are simply indirect "strings" of a homunculus puppetmaster. I want to cut those strings entirely.
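To make that concrete, here's a toy bandit-style sketch (hypothetical, not anyone's actual system). Notice who writes the reward function:

```python
import random

# The "puppetmaster's string": a reward function written by a human
# developer, external to both the agent and the environment.
def developer_reward(state, action):
    return 1.0 if action == state % 2 else 0.0  # an arbitrary goal we chose

q = {}          # tabular action-value estimates
alpha = 0.1     # learning rate

for _ in range(10_000):
    state = random.randrange(10)
    if random.random() < 0.1:                    # epsilon-greedy exploration
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
    reward = developer_reward(state, action)     # injected from outside; the
                                                 # environment never "sends" it
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))
```

The agent never discovers what is rewarding; we decreed it in advance.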
This is not to say that all feedback knowledge is acquired after birth. We are born with instincts and emotional biases (sweet berries are good and pain is bad). I will be writing on how learning is bootstrapped from a very small number of innate behaviors.
Most of what is written about embodied cognition is word salad. I suspect that will continue until someone actually builds something that works and can be validated. But before that can happen, we need a concrete implementation of non-representationalism and a demonstration of how representations (I am not a radical non-representationalist) emerge from non-representations. That will be revealed later this year.
Fascinating—looking forward to seeing where you take this. A system that bootstraps itself—where representations emerge from non-representational dynamics, without anything handed down from above. That's an exciting direction. I'm also curious how you define representations in this context.
The precursors of representations are non-representational indices... pointers to percepts. I am close to finishing the first computational validation of Zenon Pylyshyn's controversial Theory of Visual Indexing.
His theory claims that our visual system tracks objects (up to five) pre-attentively and without recognition. Each visual track is an indirect reference, or index, to a projected object according to its location only, without consideration of any other object features such as color, shape, texture, or lines. Only after an object is indexed will our brain try to recognize it. And only then, whether it is recognized or not, does the perceived object enter our consciousness.
Sounds like a magic trick, doesn't it?
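If it helps make the idea concrete, here's a toy sketch (emphatically not my actual implementation) of the location-only matching step. Indices re-attach to whatever is nearest; features never enter into it:

```python
import math

MAX_INDICES = 5  # Pylyshyn's limit: roughly four or five indices

def assign_indices(tracks, detections):
    """Greedily re-attach indices to new detections using location
    alone -- no color, shape, texture, or lines.

    tracks:     {index_id: (x, y)} from the previous moment
    detections: [(x, y), ...] at the current moment
    """
    assignments = {}
    free = list(detections)
    for idx, (tx, ty) in list(tracks.items())[:MAX_INDICES]:
        if not free:
            break
        # nearest-location match only; recognition comes later, if at all
        nearest = min(free, key=lambda d: math.hypot(d[0] - tx, d[1] - ty))
        assignments[idx] = nearest
        free.remove(nearest)
    return assignments

tracks = {0: (1.0, 1.0), 1: (4.0, 4.0)}
print(assign_indices(tracks, [(4.2, 3.9), (0.9, 1.2), (9.0, 9.0)]))
# {0: (0.9, 1.2), 1: (4.2, 3.9)} -- indexing first, recognition afterwards
```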
Yes, it does! Can't wait to read more.
Suzi, others more qualified than me compliment your expertise and the quality of the insights you display in your articles. But I just want to thank you for your masterful storytelling. This essay risked being dry, but somehow you made every single point heighten curiosity for the next point in a way that made me feel like I was on a magic carpet ride, a ride that ended (again) with me learning something new and eagerly awaiting your next essay. Someone mentioned in the comments of a previous article that you write like a poet. I remember nodding as I read it. I don't know whether you've formally studied poetry, but ... wow, and thank you!
Wow, thank you, Steve! This is such a generous comment. Hearing that my essays feel like a journey rather than just an explanation is music to my ears. Truly, I appreciate you taking the time to write this lovely message.
I haven’t formally studied poetry, but I do love poetry and short stories.
Thanks again — you’ve absolutely made my day with this!
No, thank you!
“We're thinking about information as something that means something to the system”. Although it never had a broad impact, I've always admired (and found very useful) Gregory Bateson's definition of information as “a difference that makes a difference”. It's elegant and rather profound.
I love this -- a difference that makes a difference. I'm surprised I hadn’t explicitly linked Bateson’s definition with Giulio Tononi’s Integrated Information Theory until now. But IIT's definition of intrinsic information must have drawn on Bateson.
Tononi defines intrinsic information as 'differences that make a difference' to the system itself (rather than to an external observer). And in IIT, intrinsic information — when it is highly integrated — is consciousness.
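As a toy illustration of the intuition (nothing like IIT's actual phi calculus, just the Bateson phrase in code): take a tiny made-up deterministic network, flip one element, and ask whether the system's own next state changes. A flip that changes nothing downstream makes no difference to the system itself:

```python
from itertools import product

# A toy three-element network: each element's next state is a fixed
# function of the current global state (arbitrary illustrative wiring).
def step(state):
    a, b, c = state
    return (b and c, a or c, a != b)

def makes_a_difference(state, element):
    """Does flipping one element change the system's own next state?
    If not, that flip is a difference that makes no difference
    to the system itself -- intrinsically, no information."""
    flipped = list(state)
    flipped[element] = not flipped[element]
    return step(state) != step(tuple(flipped))

for s in product([False, True], repeat=3):
    print(s, [makes_a_difference(s, i) for i in range(3)])
```

IIT then asks how much of this difference-making is integrated across the system as a whole, but that's the part this sketch leaves out.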
There's a damning phrase in the law (legal analysis) that mirrors this, "a distinction without a difference."
I didn't know that. Is it considered a 'damning phrase' because it essentially dismisses an argument as empty or misleading?
Sort of. When the other side is arguing heatedly that we’re paying too much for half a dozen eggs, but that the price might be fair for six of them, you’d say that’s a distinction without a difference. Of course it’s rarely that obvious in actual arguments.
Ah! Interesting -- I learned something new! Thanks, Jack.
Glad to be able to make a very small repayment. Looking forward to your next, but hope you get a little rest now and then, too.
Wow, Suzi. Synchronicity is afoot! The night you posted this, I had jotted down in the notes I'm making for a post on consciousness that I'm working on, “when you reach a certain level of information processing, you have to have a system/mechanism for representing complexity. This is a strong argument for why consciousness is necessary”. And then a day later I read your comment in which you introduce me to Giulio Tononi and IIT, and my reaction is omg, I need to read this! (I have begun -- really interesting!) So, I mention Bateson's dictum/definition and it leads you to make a connection with Tononi, and your summary of his key idea clicks with something I've been pondering re consciousness and which will hopefully help me expand and refine my ideas! I love this! I was so hoping that Substack might be an intellectual meeting place, and that's what it's turning out to be. If I haven't said it before, so glad to have discovered your 'stack. You have great ideas, explain things so well, and are very generous with your readers. Again, good luck with the clean-up from the typhoon!
Thank you so much for your kind words, Frank! Substack is amazing — the community here is incredible. There are so many great writers and thinkers that it’s hard to keep up with all the fascinating ideas and conversations.
Yes, IIT is an interesting theory! I wrote a bit about it last year: https://suzitravis.substack.com/p/what-is-information-the-ins-and-the
You’ve probably come across @erikhoel and his Substack, The Intrinsic Perspective. Giulio Tononi was Erik’s doctoral supervisor, and he occasionally writes about these ideas in his newsletter.
For the record, “I think, therefore I am” is also empiricist. “I am observing my own sense of thinking” is an axiomatic proposition from a subjective point of view ;)
I like the idea of treating subjective awareness as an ‘observation’ of sorts! 😉 Not sure strict empiricists would 100% agree, though.
Never liked strict empiricists anyway ;)
Excellent, very informative.
But how is information represented in our minds? The words matter. Words are concepts.
https://open.substack.com/pub/luciferv/p/in-the-beginning-was-the-word?utm_source=share&utm_medium=android&r=5e4lda
On the other hand, current AI uses the same concepts, whether the language is English, Chinese, Russian, or Armenian.
Also, what matters alongside word concepts is the speed of information processing.
https://open.substack.com/pub/luciferv/p/the-speed-of-information-processing?utm_source=share&utm_medium=android&r=5e4lda
But the next thing is an Artificial Language. Not like computer programming languages, but one with new concepts for processing all information, from sensing and feelings to the sciences of physics, medicine, etc.