Hi Walter!
Thanks for sharing your thoughts! You raise some interesting points about some of the more difficult aspects of studying consciousness.
I understand your scepticism towards functionalism. (My latest article explores this idea https://suzitravis.substack.com/p/will-ai-ever-be-conscious -- I'd love to know what you think).
But at the same time, I'm hesitant to disregard functionalism entirely. It has come a long way, morphing and changing as new evidence emerges and criticisms are raised. I wonder whether there is a form of functionalism that fits with our understanding of how the brain works.
The debate between reductionist views of consciousness, like functionalism, and those that think there is something more (like a soul) is really at the heart of the mystery. And I suspect it will continue to be for some time.
It is funny how many different labels and interpretations we have among people who agree on strong materialism. For me, epiphenomenalism fits the 1-to-1 mapping between conscious states and brain configurations. Still, there are people who consider this too soft, and they need to say that a stream of conscious experience is the same as a network of neurons (!)…
It is interesting! Traditionally, epiphenomenalism is considered a dualist claim -- the mind is caused by the brain, but the mind has no influence on the brain. It seems the physicalist might argue that a 1-to-1 mapping doesn't work if the brain has certain properties -- that is, it has causal effects in the world -- but the mind does not have those same properties.
But I agree, the spectrum of interpretations within materialist views is wide. I have a good friend who says he aligns with epiphenomenalism, but denies strongly that his views are dualist. Even among those who agree on a materialist foundation, there's a wide range of views on how consciousness relates to physical brain states. It makes the topic complex (and highly debatable).
Amazing! How can I learn more about your project?
He is right in one sense: for me, epiphenomenalism is the most materialist theory that makes sense. Beyond that, what? Deny your own self?
Another excellent article—thank you. Do you have a link for the Adam Bradley article? This functionalist view seems deterministic. The claim that function is the definition of consciousness (not just part of it) seems very problematic—we have an old, antiquated toaster at home that doesn’t toast; is it not a toaster? Is it valueless because it no longer performs its function as a toaster?

To be more pointed, at the end of her life my mother didn’t know what day it was, had no idea whether she had taken her medication, and faced many other mental challenges. She did still recognize her children—though it took a moment—and we’d have fun together making jokes about this when I’d say I was one of my brothers. She’d pause, look and say “stop that! I know who you are” and correct me, then laugh—“I’m not that bad . . . at least not yet!” she’d say. “You’re doing fine, Mom,” I’d tell her, and we’d laugh together.

With such diminished function, was she a diminished person? Was she a diminished, less valuable consciousness, a less valuable mom? Like the antique toaster that doesn’t toast, there is an intrinsic value in human beings, in human consciousness. And it exists outside of function—no matter how inclusive the definition of functional thought, of functional mind, is. The very concept is singularly deterministic.
Hi Dean,
Thank you for another very insightful comment. Your mother sounds like she was a wonderful person. I can imagine losing her to dementia was difficult. It sounds like you and your family were amazing.
Functionalism does seem to fit well with a deterministic approach to human behaviour. I think most functionalists, like most physicalists, would tend towards a deterministic approach.
But I think many physicalists distinguish between theories of consciousness and free will, on the one hand, and normative theories of morality and dignity, on the other hand.
Undoubtedly, the way we approach moral questions is informed by our theories of how the world "is". The question of whether something has conscious experiences might inform moral judgments, but I wouldn't see it as necessarily the sole or primary factor in determining moral worth, value, or responsibility.
As for moral judgments about how to treat family members with declining cognitive function, some determinists might focus on the outcomes of actions and how they affect overall well-being, while others might emphasise the inherent rightness or wrongness of actions based on universal principles, duties or community norms.
For example, a determinist who focuses on outcomes might argue that treating fellow humans (no matter the quality or degree of their experience or function) with the utmost respect and moral value, especially family members, is a moral obligation because it leads to better societal or long-term outcomes, regardless of the metaphysical truth of free will. Who would want a society, they'd ask, where you know that once you lose cognitive function you'll be discarded? Other determinists might contend that it is intrinsically "good" to treat all people with dignity and respect, based on their preferred bedrock for making those judgments. For example, Calvinist determinists will have recourse to the New Testament when grounding the moral obligation to care for the sick or aged.
BTW - I don't have a link to Adam Bradley's article -- and I couldn't find one. I was lucky enough to find a copy in a course collection at my university's library. But Bradley's primer is a summary of Hilary Putnam's article, The Nature of Mental States. Here's a link to the Putnam paper: https://home.csulb.edu/~cwallis/382/readings/482/putnam.nature.mental.states.pdf
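For anyone who wants a concrete feel for Putnam's machine-table picture, here is a toy Python sketch (my own illustration, not from Bradley or Putnam; the states, stimuli and behaviours are invented). The idea is that each state is identified entirely by its role in a transition table: what behaviour it produces and which state it moves to for each input, regardless of what physically implements that table.

```python
# Toy machine-table "functionalism": a state is picked out only by its role --
# the behaviour it produces and the state it transitions to for each input.
# The states, stimuli and behaviours below are invented for illustration.

MACHINE_TABLE = {
    # (current_state, stimulus): (behaviour, next_state)
    ("content", "tissue damage"): ("wince", "in_pain"),
    ("content", "food"):          ("eat", "content"),
    ("in_pain", "painkiller"):    ("relax", "content"),
    ("in_pain", "tissue damage"): ("cry out", "in_pain"),
}

def step(state: str, stimulus: str) -> tuple[str, str]:
    """Return (behaviour, next_state) as given by the machine table."""
    return MACHINE_TABLE[(state, stimulus)]

if __name__ == "__main__":
    state = "content"
    for stimulus in ("tissue damage", "tissue damage", "painkiller"):
        behaviour, state = step(state, stimulus)
        print(f"{stimulus!r} -> {behaviour!r}, now in {state!r}")
```

On this picture, anything that implements the same table, whether neurons, silicon, or a Python dictionary, counts as being in the same functional state.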
Love this! Functionalism feels like the "the purpose of a system is what it does" heuristic applied to the brain. Super interesting stuff as always Suzi, you've given me a lot to think about :)
Thanks, Siddharth!
I guess my problem with functionalism boils down to not liking black boxes. It matters to me *how* a function is accomplished, and that seems to make the *what* (it is) important. Going back to the laser light analogy, many diverse physical systems can produce laser light, but they all share important common properties. So, I would expect that consciousness being (potentially!) realized in multiple ways still implies constraints on how and what.
In particular, I resist the notion often expressed by functionalists that a software simulation of a physical process has a real identity with that physical process.
Now, on to part two! 😃
Wonderful comment, as usual!
Not everything can be a toaster! It doesn't matter how much I shout "Toast!" at my microwave, it won't give me crispy bread. The blender, coffee maker, and electric can opener also stubbornly refuse to transform my bread into toast.
I'm working on an article with the working title "Is the brain a computer?", which I think you may enjoy.
Oh, *that's* what I've been doing wrong, then. So, you're saying that yelling at my kitchen appliances does no good. 🤔
Looking forward to that article! 😁
😗
Very good overview. Truth in advertising before I comment: I'm a card-carrying functionalist. I actually have sympathies with what many consider the most radical version, analytic functionalism, although I think the psycho-functionalists make important points, and I'm overall less confident in the specific variety than in the overall functionalist view.
One thing I'm glad you covered is that multi-realizability isn't just between biology and silicon, the usual examples given, but happens *within* biology, in the various ways evolution finds to solve functional problems. Eyes, for example, exist in a variety of configurations throughout the animal kingdom. The radical differences between our nervous systems and those of octopuses, despite the functional similarities we can recognize, are the best example.
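To make the multiple-realizability point concrete, here is a toy Python sketch (my own illustration, not from the article or the comments; the class names, thresholds and vote counts are invented). Two very different "designs" satisfy the same functional specification, so anything that interacts with them only through that role cannot tell them apart.

```python
import random
from typing import Protocol

class LightDetector(Protocol):
    """The functional role: report whether light is present."""
    def detects_light(self, brightness: float) -> bool: ...

class CameraEye:
    """One realization: a single lens-and-retina style detector."""
    def detects_light(self, brightness: float) -> bool:
        return brightness > 0.1

class CompoundEye:
    """A very different realization: many noisy units that vote."""
    def __init__(self, n_units: int = 100):
        self.n_units = n_units

    def detects_light(self, brightness: float) -> bool:
        votes = sum(brightness + random.gauss(0, 0.05) > 0.1
                    for _ in range(self.n_units))
        return votes > self.n_units // 2

def sees_anything(eye: LightDetector, brightness: float) -> bool:
    # Only the functional role matters here, not the internal wiring.
    return eye.detects_light(brightness)

print(sees_anything(CameraEye(), 0.5))    # True
print(sees_anything(CompoundEye(), 0.5))  # True (with very high probability)
```

Whether that role-level sameness is enough for consciousness is, of course, exactly what the rest of this thread is debating.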
Thanks Mike! Functionalism is also where I tend to default back to. But I also have sympathy for the idea that consciousness -- at least in the common everyday way of thinking about it, with all its human-likeness -- probably has a lot to do with being a biological creature that has been shaped by the pressures of evolution. So multiply realisable -- sure. But what we label as consciousness in a machine will likely be very different from what we label as consciousness in a human.
I agree that a machine consciousness would likely be very different, so different that most of us will resist the word "conscious" for it. I do think in principle we could make a machine arbitrarily like a living system, and I don't doubt there will be experiments along those lines. But it's hard to see machines built for practical purposes being like that. Even companion robots wouldn't be the same, designed to be more consistent and reliable in their companionship and care than the most dedicated human.