29 Comments
Jul 9 · Liked by Suzi Travis

Must try to say this without sounding too sycophantic, but I wish you had given some introductory philosophy lectures when I was first at university (although I’m guessing you would not have been born until a few decades later). Anyway. Very clear (which is often a problem in this context). Thank you.

author

John, thank you so much. It truly means a lot to me. Making complex ideas accessible is one of my core goals with this newsletter. I may have missed your university days, but I'm glad that our paths have crossed now.


I'm very much enjoying this series and looking forward to more!

Objection #3 puts nicely what I tried to say in my comment on the previous post. Yes. I generally find functionalism too liberal in what it suggests might be capable of consciousness. I'm especially skeptical of computational functionalism. I quite agree with Anil Seth on this point.

In the context of functionalism, computationalism seems to require that the brain perform computation in the Church-Turing sense. Therefore, the brain must somehow be isomorphic with a Turing machine. Some assert the brain IS a computer, often comparing neurons to logic gates. But I find that the closer I look, the less the brain seems like a computer.
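To make that comparison concrete: the "neurons as logic gates" idea usually traces back to McCulloch-Pitts-style threshold units. Here is a minimal sketch (Python, purely illustrative; not a claim that real neurons actually work this way) of why the comparison gets made:

```python
# A McCulloch-Pitts-style threshold unit: the toy model behind the
# "neurons as logic gates" comparison. Purely illustrative.
def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable weights and thresholds, the same unit behaves like different gates.
def AND(a, b):
    return threshold_unit([a, b], [1, 1], 2)

def OR(a, b):
    return threshold_unit([a, b], [1, 1], 1)

def NAND(a, b):
    return threshold_unit([a, b], [-1, -1], -1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} -> AND={AND(a, b)} OR={OR(a, b)} NAND={NAND(a, b)}")
```

The toy result is that a unit which fires when its weighted inputs cross a threshold can be wired as AND, OR, or NAND, and NAND alone suffices to build any Boolean circuit. Whether real neurons are well described by anything like this is exactly what's in dispute.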

I agree with what Penrose asserted in "The Emperor's New Mind" -- that consciousness, whatever it is, transcends computation. A big part of his thesis there is pointing to things we know to be uncomputable and asking whether consciousness isn't at least as complicated as these, if not more so.

For me, a kind of synthesis of identity theory and functionalism, a middle ground, is structuralism -- the notion that the structure matters (IIT gets at that but misses the point, I think). While I'm very skeptical of a software simulation, I don't see why a hardware emulation wouldn't work (unless the biological chauvinists are right). In Isaac Asimov's robot novels, the positronic brain was a physical analogue of a human brain. Multiple realizability seems more probable if structure is preserved. Then the only issue is the materials used, and the functionalism of the pieces of the structure matters -- does an emulated "synapse" provide the exact functionality a real one does? On that level, I buy functionalism.

author

This is a wonderful summary of a debate currently happening in the cognitive neurosciences. Is the brain-is-computer analogy fundamentally flawed? Computational functionalism has been (and remains) a prominent theory. Much of what we do in cognitive neuroscience is based on the idea that the functions of the brain are computational. But some researchers are questioning this assumption and asking whether the brain-is-computer analogy is doing more harm than good (Anil Seth being one of them).

I'll be writing much more on this topic, and as always, I'm looking forward to reading your comments.


I'm looking forward to your posts on computationalism! For years over on WordPress another blogger and I debated it. He was a staunch functionalist who hoped for brain uploading someday. Over many years, I wrote dozens of posts exploring my skepticism. That said, some of the latest AI advances do have me wondering.

I've noticed some pushback on the brain=computer notion, too. A while back I posted about an article in The Guardian by Matthew Cobb, "Why Your Brain Is Not Like a Computer." The article mentions how the metaphor can be misleading. It also points out we've been comparing the brain to deterministic mechanisms for a long time (Descartes's hydraulic model, for instance, and didn't someone refer to it as an "enchanted loom"? I always liked the beauty of that one.)

author

Ah! Yes. The enchanted loom, which was a nice surprise when featured in the Harry Potter series.


Nice write-up yet again.

I think the strongest objection is that functionalism eliminates qualia rather than explaining them. For example, a functionalist explanation of pain would appeal to things like the functional roles of avoiding danger, seeking medication, groaning, and resting the affected area.

But where in any of those functions is the nasty inner feel that is what we all mean by pain? We could perform all of those functions without the inner experience. In fact, it happens all the time when someone wants to avoid work; it's called “faking” it.


Good point. Animals seem to be on a different level with pain. They don't attach the emotional baggage we do, all the linkages to what the pain *means* to us.

author

Wonderful comment! You've nicely captured the most common objection to functionalism -- our old friend sensory qualia.

Functionalism does manage to dodge one of the major problems that tripped up behaviorism -- the issue of internal states. Behaviorism's claim was that mental states could be fully explained by outwardly observable stimulus-response patterns. This approach completely ignored what was going on inside a person's mind.

Functionalism does improve on behaviorism by acknowledging that what's happening inside our heads matters. But, as you explained, many believe that it misses the point. When it comes to 'inner' mental states, their qualitative nature is really the core of what they are. Functionalism has some 'splaining to do!


I've come into Philosophy of Mind from the Dualist camp because it just strikes me as really obvious. I need to learn more about Physicalism to be confident though, and these articles have been really handy! You explained functionalism really well, thanks!

author
Jul 11 · edited Jul 11 · Author

Hi Connor! Thanks so much, I'm happy you are finding them helpful.

Sounds like we came to the Philosophy of Mind from the same camp. Dualism has the advantage of fitting nicely with how we commonly think about consciousness. We talk about the mind being separate from the body a lot in our everyday language. For example, we might say things like 'mind over matter', suggesting that the mind can overcome our physical limitations, or we might say 'she's absent-minded', implying that her mind is somewhere her body is not.

Throughout history, we see the concept of a mind-body separation quite a bit. Even the New Testament contains language suggesting a distinction between the spirit being willing but the flesh being weak. But we haven't always conceptualised the mind and body as separate and different substances. The ancient Greek philosopher Aristotle did talk about the soul and the body, but the soul was seen as the form of the body, not as a separate entity. The Stoics had a materialistic view of the soul, seeing it as a kind of fine matter dispersed throughout the body. The Epicureans also had a materialistic view, considering the soul to be made of atoms.

The thing about Descartes is that he formed his ideas about the mind and brain around the same time that science was having its revolution. So, Descartes' ideas about how the mind relates to the brain had a strong influence not just on the philosophical theories that followed -- like behaviourism and functionalism, which were reactions against dualism -- but also on the scientific fields that study the mind, like psychology and cognitive neuroscience.

founding
Jul 10 · Liked by Suzi Travis

Hi Suzi,

I’m new on Substack, though when I became a paid subscriber to Tina Lee Foresee’s site, yours and Tommy Blanchard’s fortunately came up as well. So I agreed to subscribe sight unseen.

It’s quite clear to me that brains function as computers, though of the non-conscious sort. They accept input information and neurally process it for output function. It also seems quite clear to me that consciousness functions as a value-based form of computer, and that our brains are responsible for creating it. Wyrd Smythe once even informed me that the word “computer” originally meant a person who was paid to run algorithms on paper. To me the question is: how do our non-conscious brains create the conscious form of computer?

I consider functional computationalists to get this wrong. They claim that information processing in itself is sufficient, even though that’s not how our computers work. For example, the computer you’re using doesn’t create the images on its screen by means of information processing alone. Instead, processed information is sent to the right sort of output mechanism.
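A quick sketch of that point (Python; the Display object is hypothetical and only there to mark where information processing ends and an output mechanism would begin):

```python
# Illustrative only: computing pixel values is pure information processing;
# nothing appears on a screen until the result reaches an output device.
def render_gradient(width, height):
    """Return a 2D list of brightness values (0-255), a left-to-right gradient."""
    return [[int(255 * x / (width - 1)) for x in range(width)] for _ in range(height)]

pixels = render_gradient(8, 2)   # just numbers in memory; no image exists yet

# display = Display()            # hypothetical output mechanism
# display.show(pixels)           # only a step like this drives the physics of a screen
```

On the analogy I'm drawing, the brain's processed information would likewise need to reach the right physical medium.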

I’ve attempted to help functional computationalists grasp this error in their reasoning through my thumb pain thought experiment, though they always seem too invested to question their belief. Thus they concede to me that if paper with the right marks on it were fed into a computer that processes it to print new paper with the right other marks on it, then something here would experience what they do when their thumbs get whacked. They’re not sure what, but something. I instead tell them that the marked output paper would need to animate the right sort of physics by then being fed into another computer that’s armed with this as-yet-unknown physics.

I used to be confounded by this physics. What do our brains animate to exist as consciousness? What’s it made of? Then I came across the ideas of Johnjoe McFadden and realized that the only thing which makes sense is the electromagnetic field associated with the right sort of synchronous neuron firing. Consider the thought that your brain is a non-conscious computer, though everything that you see, hear, think, and so on exists under the proper parameters of a neurally produced electromagnetic field. I’m told that the only reasonable neural correlate for consciousness found so far lies in a synchrony of neuron firing that supports his theory. My opinion of academia is pretty low, however, so I don’t think anyone’s mind will be changed until dedicated experiments make this correlation indisputable.

author

Hi Eric! Welcome to Substack. I'm glad you're here.

Great comment, thank you.

Your analogy of brains as non-conscious computers creating a conscious "value-based form of computer" is intriguing. The distinction you draw between information processing alone and the need for appropriate output mechanisms is also an interesting point. I'll get to this point when I cover some of the scientific theories, but there is an idea in the neurosciences that an action-perception loop is crucial for consciousness.

Your thumb pain thought experiment is a clever way to illustrate the limitations of purely information-based approaches to consciousness. It highlights the (possible) importance of considering the physical implementation of conscious experiences.

The electromagnetic field theory of consciousness you mentioned, based on Johnjoe McFadden's work, is indeed an interesting approach. I must admit, as with the quantum theories of consciousness, I have not spent enough time exploring these ideas to know them deeply. What I do like about them is that they are not epiphenomenal. Instead, they provide an alternative, but still physical, basis for conscious states.

founding
Jul 12 · Liked by Suzi Travis

Thanks Suzi,

I still need to read what you’ve said before in this series and can’t wait to see where it ends up going. My experience suggests that you’ll settle on panpsychism. Should that be the case, I hope you won’t be too disappointed, given my own argument that this position isn’t really a useful solution.

At Tina’s site I recently met an EMF consciousness theorist who’s both a panpsychist and a theist. I suggested that if McFadden’s theory becomes highly verified experimentally, then he should be set up well for theistic interpretations. https://open.substack.com/pub/philosophyandfiction/p/a-testable-solution-to-the-mind-body?r=2xwlat&utm_medium=ios

author

We're in agreement there -- I don't find panpsychism's claims satisfying. I think it raises far more questions than it answers, and the assumptions it requires are difficult to justify.

Having said that, some of my closest friends, whom I respect immensely, see some merit in panpsychism. As all good friends do, they challenge my thinking and remind me to keep an open mind -- which is probably especially good advice when exploring consciousness.

founding
Jul 17 · Liked by Suzi Travis

Consciousness science is probably the softest science there is, so I think we should expect disagreement among intelligent people. But that doesn’t mean that consciousness science will never become a reasonably hard form of science and so eject a vast load of nonsense.

I like to ask panpsychists if they believe in anesthesia. So far they all say they do. And though here they grant that consciousness can be so eliminated, some have told me that brain consciousness always remains as a sort of unfocused “white noise”. So then I ask them how their theory that everything is conscious helps explain the “focused” consciousness that we have. No sensible answers received yet. It’s good to hear that you’ve avoided this road, Suzi, since I consider it a dead end.

author

Yes, indeed, keeping an open mind shouldn't mean that we lose our heads completely.

Unfocused white noise! That's an interesting way to describe it. Anaesthesia does seem to be a tricky one for panpsychists to explain. Normally, I try to steel-man others' points of view, but I'm not sure how constitutive panpsychism would address this. If consciousness is a fundamental feature of all physical entities down to the smallest particles, then at some level, even an anaesthetised person would be conscious, by definition.

The perspective from Integrated Information Theory (IIT) panpsychism is easier to grasp. This type of panpsychist might argue that anaesthesia disrupts the normal patterns of brain activity, thereby preventing the integration of information necessary for consciousness.

I'm not convinced this solves all the problems with panpsychism, though.

I do like your question about anaesthesia. Too many theories of consciousness only address questions about what we like to call 'contents of consciousness' -- the things we see. But we use the word consciousness to also mean states of consciousness (like sleep, coma, wakefulness, and anaesthesia) as well as the conscious sense of self. Any theory of consciousness will need to at least address these distinctions.


Really enjoying this series, Suzi!

I think the inverted qualia problem is exactly what makes us human. The stimuli and responses might be identical for two people, but the internal feelings are quite different. The feelings and internal functions of each person are what make AI, robots, and neurologically implanted devices so difficult, I believe. Recreating feelings and empathy seems, at this stage, almost impossible.

LLMs are surely databases that contain more information than we can ever grasp, but their ability to truly dissect and utilize that information to paint a story that makes readers feel connected seems very human. And it's the biggest leap AI will need to make, if it ever can.

author

Thanks, Jacob. I'm so glad you're enjoying it.

That's an interesting idea. I like the idea that the inverted qualia problem might be what makes us human. It makes me wonder, what causes the differences in qualia? Can we account for differences in our conscious experience with differences in how our brains are wired up? Is that enough to explain our different responses to stimuli? If we make computers that are more and more like human brains, will they be more and more human, or just more and more 'like' human? These are the sorts of questions that keep me up at night!


Those sound like questions that would keep you up at night! Differences in brain wiring creating different experiences and responses to stimuli seems to make sense. But could it be the years of experience that create more learned behavior than actual brain wiring?

Your question about making computers more and more human-like, or more and more human is really interesting also…

Great questions and I know nothing about them. That's why I read your stuff!!

author

I suspect that’s true — years of experience create more learned behaviour. But I also think that years of experience change the brain wiring. That’s what learning is — learning is changing the wiring of your brain.
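If it helps, here's a toy sketch of that idea, assuming a simple Hebbian-style rule (purely illustrative, not a model of real synapses): connections between units that are repeatedly active together get stronger, so experience literally changes the weights.

```python
# Toy Hebbian-style update, illustrative only: repeated co-activation
# strengthens a connection weight.
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Strengthen the connection when the two units are active together."""
    return weight + learning_rate * pre_activity * post_activity

weight = 0.0
for _ in range(10):  # ten experiences in which both units fire together
    weight = hebbian_update(weight, pre_activity=1.0, post_activity=1.0)

print(round(weight, 2))  # ~1.0: the "wiring" is now different from where it started
```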


Great point. Super interesting stuff and I’m excited to follow along more closely!

Jul 31 · Liked by Suzi Travis

That was very interesting and educational. Kudos ❤️

author

Thanks Lisa!


Another sycophant here!

I am doing a philosophy degree late in life specifically because I wanted to learn more about philosophy of mind, but our Mind textbook was simply awful. I wish I had had your Substack at the time!

author

Oh no! Why are assigned philosophy of mind textbooks so terrible? It's like there's some unwritten law that philosophy of mind textbooks must be as confounding as possible.

But, on a happier note, what a great thing to study when you're a little wiser. Apart from the philosophy of mind textbook, how are you enjoying the degree?


Loving it! Coming into my final year now.


Why are they terrible? I think our one tried a bit too hard to sensationalise the odd conclusions. It was also just badly written. It’s weird. Our textbooks have been universally excellent so far — except just that one.


A good overview of the criticisms against functionalism!

The inverted / absent qualia one, I think, hinges on how much we're prepared to accept that experience is unrelated to behavior, or makes no difference at all in the world. I've always thought this kind of absolute epiphenomenalism was a strange proposition, since we're currently engaging in the behavior of discussing experience.

The too liberal argument doesn't particularly concern me. The hypothesized examples always seem contrived. But I'm in the camp that accepts the China Brain, Chinese Room, and other similar constructions as conscious.

The homunculus fallacy seems like a concern for all theories of consciousness. One thing I definitely agree with Dennett on is that if our theory of consciousness has a component labeled "consciousness", we don't have a theory of consciousness yet. We're not done until we've reduced the mental to the non-mental (or discovered irrefutable evidence that this is impossible).

On the needs form one, I think it's clear that the brain is very different from the architecture of the devices we're using right now for this conversation. It's a judgment call whether to call the biological one "computational". I think it makes sense, but I'll save my powder for your computational post.
