29 Comments
Jul 16 · Liked by Suzi Travis

**On Mortality**

I notice that Hinton doesn't use the word "mortal" in the abstract of the paper you linked to. And while that word is in the title of his talk, he didn't spend any time talking about how death was helpful.

The fact is, we already have mortal computers, in the sense of computers that go out and get themselves destroyed. They fly over Ukraine and parts of Russia.

Hinton seems to be using a bit of metonymy in his talk -- suggesting that we introduce features that we find in beings **who happen to be** mortal. In fact, reading the abstract ("the negative passes could be done offline") makes me think more of sleep than of death. But I suppose "sleepy computers" would be less exciting than "mortal computers".

author
Jul 17 · edited Jul 17 · Author

Hi Mark!

You're right, he doesn't use the word mortal in the abstract, which is kinda strange. The concept of mortal and immortal computation isn't raised until the end of the article. But what he says there is interesting. He defines immortal computing by saying that the separation of software and hardware in computer science "makes the knowledge contained in the program or the weights immortal: The knowledge does not die when the hardware dies." Later in the same section, he talks about "abandoning immortality" and describes the possibility of a different type of computer where the hardware itself does the computation. Because in this hypothesised type of computing, hardware and software are not separate, "the computation they perform is mortal: it dies with the hardware."

Your observation about drones is thought-provoking. I think the distinction people like Seth would make is that these drones, despite being physically destructible, still operate on the principle of separating hardware and software. When a drone is destroyed, its hardware is lost, but the software is not. The software can be replicated across multiple drones.

This, I think, is the fundamental difference Hinton and Seth are interested in, but for different reasons. Hinton is interested in morphing computers because he suspects they would be far more energy efficient than traditional computing, which separates hardware from software. And Seth is interested in the idea because he thinks this separation (or lack thereof) is key to consciousness.

But, as you suggest, they may be talking about different things when they use the word mortal. Although I suspect Hinton and Seth would agree on the definition of mortality, I also think you're right here -- others might argue that they are actually talking about different things.

Jul 17 · Liked by Suzi Travis

Thank you, Suzi, for your kind reply -- and for all the wonderful items you've posted.

I seriously doubt that mortality is required for consciousness. Mortality is, of course, bound up in the environment that consciousness evolved in, and so our need for consciousness may have arisen from our need to keep ourselves alive long enough to reproduce. But that's not the same thing as mortality being required for consciousness.

The theory that mortality is required for consciousness would rule out, among other things, the idea of an immortal soul that will be judged and rewarded/punished in the afterlife. If the soul is immortal, it cannot be conscious. If the soul is not conscious then it cannot experience the everlasting joy of being with God or the everlasting torment of Hell.

NOTE: I do not believe in immortal souls. I do not believe in God. Nevertheless, I fail to see why immortal souls would necessarily be non-conscious. If forced to choose between mortality being necessary for consciousness and consciousness being possible for immortal souls, I'd go with the latter.

And what if we actually someday manage to abolish death? According to this theory, we would thereby abolish our minds. (Mortality being necessary for consciousness, that is; not merely for its development. If only necessary for its development, then it'd only be our children who have no minds.)


As a layperson, my first response to the idea that the mind is software and the brain hardware was WHAT? My resistance stemmed from a sort of emotional horror at the thought that the mind is premade and downloaded into the brain by the authorities, possibly ethereal. But then I thought, well, mind software could be different from computer software in that the brain might write its own code or software in response to experience. I'm going to have to let a day or two pass while I think about it and then reread your post which, btw, is stunning. I love the way you teach through writing! It's such a rare talent.

Anyway, I realize that we are talking analogies and metaphors with these terms, but mortality with regard to both humans and computers is not a metaphor but a fact. And the idea of consequences seems to me to make a crucial difference. I just published a post of a talk I had with Claude in which I asked Claude for the best directions to an airport during rush hour. I didn't see it while writing, but the difference between Claude's output and my input was this: Claude could predict nothing, and prediction was everything to me, because I would be the one in the car driving. I had skin in the game. What Claude and I were talking about meant something to me. Claude more or less admitted that it meant nothing to "him" and told me to consult a traffic app for in-the-moment data. Claude is not only not mortal, Claude is not "in the moment." Human consciousness came about because of its survival benefits. I can't imagine consciousness in something we know is inorganic and purely physical, like a fence post. Suzi, please critique my reasoning here if I'm completely off topic. I may just be rambling -- proof that this isn't coming from a bot :)

Oh, and I don't respond to the word "mortal" in the way Mark Young does. I do think it's clever, the point about Russian drones. It makes me think of suicide. AI isn't ever "suicidal", is it? Can a rock or a stoplight be suicidal? I think mortal is a particularly apt word.

author

Hi Terry -- thanks for another great comment! And thank you so much for the lovely feedback. It makes my day to read that you enjoy my writing.

I never thought of it like that -- that the mind-is-software metaphor could be interpreted to mean that the mind was pre-made and downloadable to the brain. No wonder you thought, WHAT!?

I also find the potential survival benefits of consciousness to be a compelling idea. It's at the centre of one of the key questions in consciousness research: Is there an evolutionary advantage to consciousness? This question certainly divides researchers. Those convinced by the argument might say something like, "We have precisely the type of conscious experience required to survive and thrive in our environment." On the other hand, those opposed to the argument might counter, "There doesn't seem to be a need for consciousness — why couldn't we do everything we do without it feeling like anything?" How we conceive of consciousness usually determines which side of the debate we land on.

On mortality -- I agree, I think the word works here. But I can see how others might disagree. Some might feel that 'true' mortality involves more than this definition allows. It's an interesting idea to think about.


The rebuttal to an evolutionary advantage from consciousness seems to turn on the idea that feelings are separate from consciousness -- that if we had no feelings we could still be conscious, right? If so, consciousness would be strictly reasoned. What about, say, sociological imagination? Motivation? Purpose? These elements imply to me that whether consciousness has an evolutionary advantage depends on how one defines consciousness. Is there a school of thought that excises emotions from consciousness?

author

You have touched on a hotly debated topic. What is the relationship between feelings and consciousness? While some theories do try to separate what they call 'pure' awareness from feeling content, many researchers argue that feelings are what conscious experience is. Nagel famously defines consciousness as 'what it is like', which many interpret as 'what it feels like'.

The question you ask about whether there is a school of thought that excises emotions from consciousness is an interesting one. I think psychologists like to distinguish between feelings and emotions. Emotions are studied as the biological and chemical response to an event (this could be an event in the world or a thought we may have). And the idea is that these responses could happen unconsciously. Feelings, on the other hand, are thought to be the conscious evaluation or appraisal of what we are experiencing. So, I guess, some would say that, yes -- emotions can be separated from conscious experience because we can have a physical response (i.e. a chemical response in our brain) of which we have no conscious experience.

The question of whether we can have feelings without consciousness is a much more controversial question. And, you are absolutely correct, the answer to that question will depend on how we define consciousness.


A lot of these debates seem to turn on stipulative definitions. Yes, if I define emotion as unconscious and feelings as conscious, I can say emotions can be excised. But if I define feeling as immediate and emotion as a general category of all feelings, then feelings are real and conscious, while emotion is a linguistic abstraction -- useful in talking about feelings, but having nothing to do with feelings other than being a convenient bucket.

If I slice my finger, I feel pain. You see me and you feel lucky that you didn't. I feel something which you don't; you feel something which I don't. I might call pain an emotion and lucky a feeling, but I could just as easily reverse those terms. Things change in our theories, but still pain is pain, part of consciousness in that one is aware of it.

I'm trying to think of an example of an unconscious emotion from a psychological perspective. Repressed hatred, for example, is not conscious in the sense that it is repressed, but its presence has consequences for how one experiences feelings. Take the mind-is-brain argument. What happens if we switch it around? Brain is mind? These linguistic issues seem to cloud all discussions. Even the software-is-immortal phrase is rooted in a definition of immortal. Mort- has long signified death. And death is ineluctably linked to life. Life is impossible to envision without the physical birth of babies through natural processes. If consciousness is the whole brain at work, how can the whole brain work as one if part of it is disconnected from the other part? Isn't the fact that they are linked significant? What if I wanted to argue that there is a higher and a lower type of consciousness and call the "unconscious" low consciousness? So much depends upon the language. As Humpty Dumpty approaches the world, words mean anything he chooses them to mean. Wittgenstein may have a point -- there is a limit to how far language can go. Sorry for rambling, Suzi :) I hope my confusion is sensible.



On the first point, the separation of hardware and software: this is fundamental to what we *mean* by computation. (Church-Turing explicitly addresses the separation in saying any Turing-complete hardware can run any software. The "China Brain" is one example of extreme hardware.) The term "mortal" is perhaps a bit misleading (and certainly emotionally freighted). In CS, it's simply that hardware is physical and software is *abstract*. Essentially, any software is just a huge single number. (So is any book, blog post, etc. The abstraction of text is also realized in many physical forms. In the paper's sense, text and numbers are also immortal.)
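The claim that any software (or text) is just a huge single number can be made concrete with a short sketch. This is my own illustration, not from the thread, and the encoding choices (UTF-8, big-endian) are arbitrary conventions:

```python
# A tiny program's source text, like any text, is just a sequence of bytes --
# and a sequence of bytes is just one (very large) integer.
source = 'print("hello")'

# Encode the source as a single integer.
n = int.from_bytes(source.encode("utf-8"), "big")

# Decode that integer back into the identical program text.
recovered = n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")
assert recovered == source

# The number is an abstraction, independent of any physical medium:
# it can be copied to and run on any Turing-complete hardware.
exec(recovered)  # prints: hello
```

Destroying one physical copy of those bytes does nothing to the number itself, which is the sense in which conventional software is "immortal."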

I find multiple largely orthogonal axes for "computationalism". For lack of better terms, I call them emulation, simulation, and replication. Emulation attempts to capture the supposed algorithm that must exist if strict computationalism is true -- the algorithm of the mind -- which could then be run on some platform other than a brain. Simulation attempts to capture the meat the way we'd digitally simulate a heart or kidney. Simulation doesn't care about a putative algorithm, and the compute aspect is just about the physics (an extreme form of finite element analysis). Replication attempts to capture the structure of the brain with the assumption that function follows form. In this case, special hardware -- brain-like hardware, arguably analog -- is required.

Emulation: I don't think there is an algorithm of the mind. I don't think what the brain does counts as computation as defined under Church-Turing. Believing there is requires a kind of "digital dualism" where the mind is actually an abstraction separate from the brain. I don't think nature deals in abstractions or dualism.

I can't account for why simulation wouldn't work besides the observation that simulated earthquakes don't knock down buildings. I think it will ultimately depend on whether consciousness is in the output or in the process. If it's in the output, simulation might work. An important question is how granular the simulation needs to be. Cellular, for sure. Molecular? Atomic? Quantum? At what point do the subtle emergent properties of the lower level not matter?

As a structuralist, I think replication has the best chance of working. Brain-in-a-vat notions and Boltzmann brains make me question how important a body really is. I don't really have an opinion on that, yet. But I don't see why something in the shape and function of a brain, regardless of composition, wouldn't work.

Science fiction has painted some interesting pictures of machine intelligence and the consequences of potential immortality because of it. So, I'm not sure possible death is necessary for consciousness. I do think fuzzy thinking and forgetfulness might be, though. Consciousness seems finely balanced between order and chaos (and hence easily tipped). We exist, I think, in a Yin-Yang tension that would be a challenge to implement mechanically. It might be their very clarity that prevents true consciousness in machines.

Sorry to ramble on, it's a favorite topic. I just checked, and there are 36 posts under the "computationalism" tag on my old blog (over a period of 13 years), so I've rambled on about it a lot! 😁

author

Brilliant! I love this comment. It's also one of my favourite topics, so I really enjoyed reading your thoughts (and trust me, it's far from rambling).

I particularly appreciate your breakdown of computationalism into emulation, simulation, and replication. I've been searching for a framework like this because the distinctions, especially between simulation and replication, seem crucial for understanding recent advances in AI.

I'm intrigued by your suggestion that consciousness might exist in a balance between order and chaos. I'm not sure what you mean by this. Are you proposing that consciousness is fragile? That the conditions required for it are delicate or need to be precisely calibrated?

Is your blog still available to read? It sounds like something I'd thoroughly enjoy reading.


Thanks! These posts are among my favorites on Substack -- always look forward to a new one. Delighted to contribute. I quite agree that ANNs raise some interesting questions. As prediction machines, they approach, and I expect will eventually match, human ability to generate output. If so, they may well appear to us as Chalmers's P-Zombies, able to pass as human but (presumably) with no inner life. I know those who feel 'consciousness is as consciousness does' and would consider the question of an inner life irrelevant, but I think it matters.

Interesting question. I guess I do think consciousness is fragile. Brain defects or damage can have profound impacts, and very small amounts of certain chemicals (LSD, for example) can also affect it profoundly. I suspect consciousness exists as a kind of multi-dimensional mathematical maximum between chaos and order. With epilepsy, for example, the brain becomes chaotic. As I understand it, there are also conditions where the brain's activity becomes too regular, too much in lock step.

My blog is still around (though, because reasons, I've gone from dabbling here to actually considering moving from WordPress). Here's a link to the computationalism tag I mentioned: https://logosconcarne.com/tag/computationalism/ (Thanks for asking; I'd be very interested in your reactions!)

author

Consciousness does seem fragile, doesn't it? It's true that small changes can have big effects. The example of epilepsy is an interesting one. One common idea in consciousness research is that synchronised activity might be important (maybe even essential) for conscious experiences. But, as you point out, we know that too much synchronised activity can trigger an epileptic seizure. It's almost like neurons need to cooperate with each other. To me, this is where some of the most interesting work is being done. What function could the collective action of neurons have?

Thanks for the link to your blog. You really have written a lot. And on some very fascinating topics! I'm looking forward to exploring what you have written.

(btw... while I'm tempted to jump into the p-zombie argument, I'm going to hold my thoughts for an upcoming post.)

Jul 17 · Liked by Suzi Travis

I feel like you’re my tour guide through consciousness. Getting my head around this topic has been my intellectual hobby for decades. So your articles often bring up topics that I’ve come across in my travels, but the way you tie those topics together is incredible. It feels like different parts of my brain are being introduced to each other for the first time. I love it.

When I finished listening to you reading this article (thank you for doing that!), I was left head in hands wondering how much the ground has moved under me. I hadn’t really challenged my thinking about consciousness with questions about what the thing we call consciousness happens to do in the world. It really does seem like the answer to that question might shine important light on what it means to be conscious, as opposed to mimic the actions of conscious beings.

When I learned that our evolved predictive models are incredibly energy efficient when compared to ocean-boiling "immortal" LLM code, it seemed to independently suggest that Seth is on to something, because we'd expect an evolved predictive model to be energy efficient.

I just want to say thank you. The quality of your Substack is seen in the quality of the comments, which I also enjoy reading. I can’t contribute the kind of insights that many of the people in your comments do. So, I’ll just say thank you. Your Substack is a joy and the time you take to write these pieces is appreciated.

Also, it does seem like this article could link back to so many of your other articles. As I was reading I was recalling your articles on flies dreaming, among other articles. This is like watching a work of art come together. Thanks again.

author

Hi Geoff! Thank you so much for your lovely comment. Feedback like this is pure joy.

Consciousness is indeed a tricky topic, so I'm honoured that you're finding my articles helpful. I love the moment you described when disparate ideas suddenly click together -- it's the ultimate "aha" moment and it's probably what most non-fiction writers aim for. So, I'm flattered that you have found those connections from reading my writing.

You're spot on about the importance of understanding the evolutionary benefits of consciousness. This question of "what consciousness is for" is crucial. My guess is that how we answer this question will have a big influence on our understanding of consciousness (or perhaps it's the reverse -- our understanding of consciousness shapes our views on its purpose).

I couldn't agree more about the quality of comments on Substack. The community here is truly special. I love how people are both kind and intellectually engaging. It's the perfect combo for great conversations.

Your note about seeing connections to previous articles is music to my ears. My hope is to create a series of interconnected articles that work together.

Thank you again for your thoughtful comment. It's readers like you who make writing this newsletter so rewarding.

Jul 19 · Liked by Suzi Travis

Thank you, Suzi.


Superb explanation as usual! I have a couple of points I'd like to address -- not exactly counterarguments, but what I think are some more elaborate arguments from the computationalist camp, just to further discussion. I'll do it in a follow-up post referencing this one ;)

author

Thanks Alejandro! I can't wait. Your comments are always so insightful, so I'm excited to read what you have to say. I'm like a kid waiting for xmas (but it's only July).


Haha, thanks. I'll try hard to be up to the challenge, but it's hard -- your article is very comprehensive. I just want to touch on some maybe slightly more nuanced arguments about what software is versus hardware, and what computationalists could mean when they claim brains are "computers". It's not a rebuttal at all, just adding some fuel to the conversation. It's an article I've wanted to write for a while, and this is a perfect justification!

author

Oh, I didn't intend pressure! Just enthusiasm and appreciation. Looking forward to reading what you have to say on the topic.


Great work, Suzi! The idea of building more organic computer systems has fascinated me for a while, particularly imbuing them with properties like mortality. But it strikes me that it will be a very difficult process. It just sort of reduplicates the question of cultivating in a computer system true awareness of its own processes, including their limits. Technically, all computer systems today are just one environmental or systemic disaster away from erasure, and yet current systems can only report on their own internal processes to the extent that we can program that capacity inside the vectors of machine learning or other emergent processes.

author

Thanks, Nick! I always appreciate your insights.

I agree, the idea of organic computers seems incredibly challenging. There appear to be two major hurdles. One is engineering the morphing computer components. But I suspect this will be the easier of the two challenges (although still incredibly complex). Two is developing the algorithm. To me, figuring out how to make this morphing computer function like a human brain is an (almost) insurmountable problem.


What assumptions make it almost possible, Suzi? Could range from "anything is possible" to "we know that…" Is there something known or reasonably assumed? From my vantage point, when I think about computers becoming human in the way of a human brain, well, I see two Macs out on a date winding up in her apartment having sex. Or two rocks on a beach making out. Or maybe there is some asexual way, like a microscopic organism splitting apart and regenerating. I'm out on a limb here, but I can't even imagine what might sit inside those parentheses (almost).

author

Hi Terry!

You're right, my use of 'almost' wasn't the best choice of words here. I was trying to leave open the possibility of a non-reductive physicalist perspective, which might argue that the human 'code' could, at least in theory, be completely known.

However, the question of whether a complete set of 'algorithms' or human 'code' could ever be fully understood, even theoretically, is highly debatable.

Views on this, I suspect, turn on one's view of emergence. Those who think consciousness is reducible to the physical brain might argue that with enough technological advancement, we could potentially map and understand every neural connection and process in the brain. But others who think that consciousness is an emergent property that cannot be reduced might argue that a complete 'decoding' of the brain is impossible, even in principle.


Hi all, here's what Claude is thinking about this Article (I haven't changed one word, it's a copy-paste):

This article presents a thought-provoking exploration of consciousness, comparing biological brains to digital computers and questioning whether artificial intelligence could ever be truly conscious. Here are my key impressions:

1. The article challenges the popular "brain as computer" metaphor, arguing that this analogy may be hindering our understanding of how the brain actually works.

2. It highlights fundamental differences between brains and computers, particularly the inseparability of "hardware" and "software" in biological systems versus their clear separation in digital computers.

3. The concept of mortality is presented as a potentially crucial factor for consciousness. Biological systems, being mortal, need to maintain clear boundaries and predict their environment for survival, which may be fundamental to consciousness.

4. The article introduces the Predictive Processing Theory, suggesting that our brains constantly generate and update predictions about ourselves and our environment, which may be closely tied to consciousness.

5. It presents Anil Seth's view that consciousness might be fundamentally dependent on biological processes, making conscious AI unlikely.

6. However, it also discusses Geoffrey Hinton's proposal for "mortal computers" that could potentially bridge this gap between biological and artificial systems.

7. The article is balanced, presenting different viewpoints and acknowledging that many of these ideas are still in early stages of research.

8. It raises intriguing questions about the nature of consciousness and the potential for artificial consciousness, without definitively answering them.

Overall, I found this article to be a nuanced and thought-provoking exploration of consciousness, AI, and the differences between biological and artificial systems. It effectively challenges common assumptions and encourages readers to think deeply about these complex issues.

author

That's a nice summary! Thanks Michael (and Claude)


Suzi, you're welcome. Michael & Claude


Great read. I agree that the brain isn't just hardware running on biological software. But whatever it is, it is made of matter and obeys the laws of physics and chemistry. Its hardware is its software; the brain may not deeply resemble a computer, but it is still a physical system. I can't really see what properties it possesses that uniquely allow it to have conscious experiences. Quantum microtubules? Doubtful.

Qualia, the experiences of the world that are intrinsic to consciousness, can't be expressed in words, and I can't imagine that equations can do any better. The same goes for consciousness. Frank Jackson's famous thought experiment of Mary in a room, I think, captures this insight. Mary, a brilliant scientist, equally capable in all fields of knowledge, is raised in a room with no colors beyond black and white. With all her knowledge, brilliance, and study, what will she make of the color red when she is finally allowed to leave the room? Will she have already deduced what the qualia of red feels like? Will she know to call her experience red? I reluctantly believe she will not. Reluctantly, because this kinda forces me to reject a pure physicalist theory of the mind, which as a physicist I am loath to do. And I have no idea what needs to be added to the picture, nor do I think anyone else does.

author

Thanks Bruce! I resonate so much with your comment. It's challenging, isn't it!? But (I think) it's the challenge that makes the topic of consciousness the most fascinating of all topics.
