Suzi Travis:

Hey Jack!

Yes, good point. Almost all of the research we’ve done on consciousness uses report as a proxy. If someone can tell us they’re conscious—or describe what they’re conscious of—we count that as evidence. But as you say, there’s a real argument that consciousness might exist without the ability to report it.

There’s been some really interesting work lately on non-report paradigms to get around that problem. But once you start going down that road, you realise just how tricky it is—what we call conscious might not be one single thing that can be measured the same way.

Suzi Travis:

Please never ask me about politics, current events, celebrity gossip, wine pairings, sports rankings, or how to change a tyre. I will absolutely fold under pressure.

But if it’s weird brain stuff? I’m all in.

Seriously though — thank you, Jack. That really means a lot.

Johnnie Burger:

“Whether the cerebellum has absolutely no role in consciousness is still under discussion. Some recent studies suggest it might play a modulatory role in emotion or cognitive function”

It could have self awareness of its own and simply not tell the narrative self about it, other than in crude flags like “I don’t feel so well”, or “I feel hungry”.

To me, consciousness is like self-awareness, which presupposes attention. In flow states, all attention is placed on an absorbing task and the awareness of self reduces. The brain’s unconscious tasks of attention may be like a first stomach, predigesting information so the aware self can ‘ruminate’.

Suzi Travis:

I 100% agree! The cerebellum is almost certainly influencing feelings and bodily states. This highlights an issue I have with the 'where in the brain' question that captures so much attention. I think it's the wrong question. There's no theatre screen in there.

Yes, and I agree attention plays a crucial role! In fact, my PhD thesis explored precisely this idea. Your analogy of unconscious attention as a "first stomach" nicely aligns with theories like Global Workspace, Higher-Order Thought, and Recurrent Processing, which all see consciousness as emerging after an initial unconscious selection or preparation stage. Do you have a favourite among these theories, or perhaps a different view entirely?

Mike Smith:

I like the distinction between measured and meaningful complexity. It addresses a worry I've had about the putative non-complexity of high-entropy states, although it feels like the "meaningful" part may need some scrutiny. What do we mean by it? It could be functional (as in causal), understandable, or something else.

Overall, I think it's right that complexity is necessary for what most of us mean by "consciousness", but it isn't sufficient. My attitude about the various scientific theories is that many of them capture aspects of the problem, but none can justify a claim to being the one and only solution. For those of us who accept that "consciousness" is a hazily defined collection of capabilities, that shouldn't be surprising.

Interesting discussion Suzi, as always!

Suzi Travis:

YES!!! The meaningful part absolutely needs unpacking. Sometimes I wonder whether what we call complex is really just a reflection of us — our sense of what seems tangled or hard to grasp. If we were wired differently, maybe we'd find entirely different things complex. There’s definitely some truth to the idea that we tend to label what we don’t understand as complex.

But then there's the energy angle. Complex systems don’t just look complicated — they also tend to convert low-entropy energy into high-entropy energy in structured, sustained ways. So while meaning is a slippery concept, it does seem that there is something more than just our perception going on. Or at least a correlation.

And yes, I loved your phrasing — 'a hazily defined collection of capabilities'. It echoes Dennett’s framing, doesn't it? I think there’s real value in seeing consciousness that way.

For me, I keep circling back to the question: What is consciousness for? It’s funny — when I was an undergrad, I gave my first-ever research talk, and someone asked me that question during the Q&A. At the time, I thought, 'What a crazy question'. But over the years, I’ve come to see it as one of the key questions. I think the person who asked it was hoping I’d say, 'It’s not for anything'. And I think I disappointed him, because that wasn’t my answer.

Anyway… if we’re talking about a bundle of capabilities, I think it’s worth asking: what are those capabilities for? What do they let us do that we couldn’t do otherwise?

Thanks again for a great comment!

Mike Smith:

Thanks Suzi! Maybe a good way to think about meaningful complexity is in terms of dynamism, a system that manages to maintain its transformability. Usually when we think about systems that have crossed into chaos, their dynamics have become too fragmented for us to think of them as dynamic.

Definitely my phrasing echoes Dennett's in a lot of ways. I'm not always wild about his framing, but I think he mostly gets the ontology right.

There's definitely a strong sentiment among a lot of people interested in consciousness who want to see it as something irretrievably mysterious. But I agree. Asking what it's for, what its adaptive roles are, is exactly the right question. It seems like the only way for science to make progress.

My shot at a summary answer, probably oversimplified, is it enables a system's actions (through distance senses, memory, learning, etc) to be affected by a much wider causal cone in space and time. But that's a view which sees consciousness as a type of intelligence, which also goes against a widespread sentiment.

Malcolm Storey:

"deep non-REM sleep ... EEG signals tend to be slow, repetitive, and highly predictable."

I'm just reading Pinker's Blank Slate and I've just read (p91) about Shatz's work: how the visual cortex organises itself by firing regular waves in different directions across the retina.

This suggests that non-REM sleep is the brain's way of reminding neurons where they are and (by implication) what they should be doing.

Suzi Travis:

Ooh, that’s fascinating -- thank you!

There is the theory that REM sleep (dreaming) helps keep the visual cortex active while we sleep, so it doesn’t do its neuroplasticity thing and start responding to all kinds of input. So dreaming might be a kind of screensaver to prevent disuse — I love that image. But I hadn’t come across something similar for non-REM. Now you’ve got me wondering. Why we sleep is still an open question — it doesn’t seem to be just about rest. There’s such a cost to being detached from the world for so many hours that the benefits must be huge.

Thanks for the great thought — now I want to go back and re-read Blank Slate!

Malcolm Storey:

Sorry, should have said: Shatz's work was on embryos - it's how the original connections are forged.

Malcolm Storey:

"All we seem to have is DNA." and the chemical environment that we spent the first half of our evolutionary history developing, without which it's just a ropey polymer. ;)

Similarly the building blueprint assumes the building blocks of bricks, steel girders, cement and concrete; far from a complete description - but it would be silly to go down to quarks.

The problem is that any spec presupposes the language it's written in and the language may (usually does?) conceal application-specific information.

Suzi Travis:

Haha, yes! I was being a bit provocative with that “Next week” section, wasn’t I!?

And yes — DNA really is just a ropey polymer without the right chemical soup. Love that. I’m totally with you on blueprints too: so much depends on the materials, the builders, and all the hidden context we tend to take for granted.

But… I think there might be an important difference here that’s worth exploring.

Thanks for playing along — I promise I’ll say more next week! 😉

Malcolm Storey:

Douglas Hofstadter, in his book "Gödel, Escher, Bach: An Eternal Golden Braid", envisages a jukebox with only one record but multiple record players, each containing a decoding mechanism to extract a different song from that record.

Joseph Rahi:

Re complexity and entropy, I think we might instead correlate, or perhaps even equate, "meaningful" complexity with the *rate of change* of entropy. The curve of entropy over time is S-shaped rather than linear, with a slow start, gaining speed in the middle, then settling into equilibrium (at least for closed systems). In that case, the n-shaped complexity curve may be the derivative of the entropy curve.

Putting it another way, we might measure complexity as the rate at which free energy is bound, perhaps linking us back to the free energy principle.

It also reflects the idea of complexity as nature forming energy/information channels that allow it to follow its thermodynamic drive to disperse energy more efficiently. It's this concentrated channeling of energy, I think, that makes this kind of complexity more "meaningful".

Re consciousness, I think the idea that a certain level of complexity suddenly "switches on the lights" feels too much like magical thinking to me. It's just... why? How?

It's very cool to see all the theories of complexity and consciousness, because they're all a bit mad! It's like reading about various sciences before they gained a dominant paradigm, like when people saw electricity as an actual fluid or multiple fluids, or magnetism as the work of invisible hooks.

Suzi Travis:

I love this! Meaningful complexity as the rate of entropy change feels like it captures the dynamic and temporal aspect of complexity — how systems move through states that vary in complexity. The nice thing about it is that it's measurable (at least in theory—we’d just need to clearly define the system boundaries, entropy model, and measurement framework).

Complexity = 𝑑𝑆/𝑑𝑡. Hmmm... that's something to think about 🤔
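Just to play with the idea, here's a toy numerical sketch (purely illustrative, using an assumed logistic entropy curve rather than real data). The derivative of an S-shaped entropy curve does come out n-shaped, peaking mid-way:

```python
import numpy as np

t = np.linspace(0, 10, 1001)          # time
S = 1 / (1 + np.exp(-(t - 5)))        # S-shaped entropy curve (logistic)
C = np.gradient(S, t)                 # complexity candidate: C = dS/dt

print(f"entropy climbs from {S[0]:.3f} to {S[-1]:.3f}")
print(f"complexity peaks at t = {t[np.argmax(C)]:.2f}, mid-curve as expected")
```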

Totally agree on the “lights turn on” idea feeling a bit like magic. It sounds intuitive — or at least neutral — but it actually smuggles in a ton of assumptions. And the fact that it feels innocent makes it even trickier. It’s a loaded idea pretending to be a default starting point.

And yes, the Free Energy Principle! I think Friston’s work is really helpful here — especially around active inference. I might be a little biased since this framework is core to a lot of my research, but I do think it ties together a wide range of data in a way that’s conceptually elegant. I know some folks criticise the Free Energy Principle for being too broad or hard to test — but I think where it really shines is when it’s used to build specific models (like active inference), which can be tested. That’s where it starts to feel more scientifically grounded to me.

Thanks for such a thought-provoking comment — I might go sketch myself an n-curve or two now. 😄

Joseph Rahi:

You're welcome!

I love the free energy principle. Very jealous of you doing research with it. I guess it's maybe hard to test in a similar way to evolution - they're just such huge ideas with broad application, but not incredibly specific predictions. But I expect the evidence will continue to mount in its favour over time like it has for evolution.

James Cross:

There is an often-quoted number of 80-85 billion neurons in the brain, but it is less recognized that about three-fourths of those neurons are in the cerebellum. I've also read some research suggesting oscillations in the 200-400 hertz range, which is much faster than most of what goes on in the cortex. The structure of neurons in the cerebellum is fan-like, unlike the layered cortex. I would be interested in seeing if there is a Phi estimate for the cerebellum, since it is probably little involved in consciousness.

Consciousness seems to go on more in the 6-40 hertz range, which suggests the brain needs to operate more slowly when it "thinks" about things rather than reacting automatically.

Suzi Travis:

Wow, yes — this is such an interesting observation. I’m really glad you brought it up!

I think Tononi and colleagues have pointed out that despite the cerebellum’s complexity, its functional architecture may not support the kind of integration and differentiation that IIT associates with consciousness (i.e., a high Phi value). So even with all those neurons, it might not generate much intrinsic cause-effect power. I was trying to find the paper, but couldn’t track it down — though from memory, I believe they ran a computer simulation that suggested as much.

The oscillation research is so fascinating! There’s been a lot of recent work exploring things like coupling between frequencies, cross-frequency integration, and even the role of phase (whether a wave is at its peak or trough) in perception. I find that especially compelling.

I’ve been digging into some of the newer science lately, and it’s painting such a nuanced picture. It’s been really cool to see how our understanding of oscillations has evolved. I might try to pull some of that together in a post or two soon — there’s just so much to explore, and it’s exciting to see how far the field has come.

Thanks, James—you’ve got me thinking about some really interesting things.

James Cross:

FYI here's Scott Aaronson's view on IIT:

"In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are."

https://scottaaronson.blog/?p=1799

Suzi Travis:

I’m not convinced by IIT either—but for different reasons.

Michael Pingleton:

You bring up some interesting points about how consciousness might work. I agree that complexity alone, or having just the right amount of complexity, isn't enough to explain consciousness. It doesn't seem any of the theories that exist today can really explain it, hence all the disagreement.

In Myelin, the neural networking engine that I'm working on, complexity grows as the network learns. That being said, even if a Myelin network were to grow at least as complex as an actual human brain, it wouldn't become conscious at all. Any illusion that it had consciousness would only be a perception, not reality. Despite this, it can still be useful in its own right.

Suzi Travis:

That gap between simulation and reality feels like a big question, doesn’t it!? I think you might enjoy next week’s essay — I’m circling back to some of the ideas you’re exploring with your Myelin network.

Mark Slight:

Cool, really looking forward to that.

Michael Pingleton:

"Are we in a simulation?" That's the question I find myself asking sometimes lol. Looking forward to next week's essay!

Suzi Travis:

Oh yeah, that one crosses my mind occasionally too. It’s hard not to go there once you start poking at what’s real and what’s generated.

Men's Media Network:

Electronic Communications 101: Chapter 1 - “Statistical Processing.” Think of a fax (remember those?) transmission consisting of a single little dot. Then think of a high-speed fiber-optic trunk line carrying the Internet data across an ocean for an entire country. Without the genius of one simple statistical algorithm, in one line of code, neither would be possible. The algorithm simply assumes the next bit of information, a one or a zero, will be the same as the preceding bit. Until it isn’t. Turns out the algorithm is right more than half the time. Without this time-saving algorithm, the fax would take 100X longer to receive. The fiber internet cable would be a practical impossibility. Because of one line of code.

I believe people (including me) use a similar algorithm in our moment-to-moment consciousness. When the algorithm flips, we call that “surprise,” then we move on. The secret to overcoming complexity is simplicity.

“I Am” isn’t simply the name God goes by. It’s the bootstrap code for consciousness. One line of code. We have no instruments (yet) in the lab to replicate that. God help us if or when we do.

Suzi Travis:

That idea of prediction and surprise being core to both information compression and subjective awareness is something I am very interested in. It’s wild how often the simplest algorithms — just one line of code, like you said — end up reflecting something much deeper than we expected.
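In case anyone wants to see that one-line idea in miniature, here's a toy sketch (purely illustrative; real fax coding is fancier, layering Huffman codes over the run lengths). Predict that each bit repeats the previous one, and transmit only the lengths of the runs between "surprises":

```python
def compress(bits: str) -> list[int]:
    """Run-length encode a bit string; each run ends at a 'surprise' (a flip)."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1          # prediction correct: extend the current run
        else:
            runs.append(count)  # surprise: emit the run, start a new one
            count = 1
    runs.append(count)
    return runs

print(compress("0000000000111111000000"))  # [10, 6, 6]: 22 bits become 3 counts
```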

Men's Media Network:

I’m very impressed you understand the statistical algorithm as data compression. Then again, I’d forgotten about the biomedical majors in all my hardest EE classes; they had to go way deeper into EM and signal processing theory than the rest of us.

Suzi Travis:

Haha yeah, not too surprising I guess — EEG signal processing is a big part of my research, so I’ve had to get pretty comfortable with EM theory and the deeper sides of signal processing and analysis. Comes with the territory!

Mark Slight:

Thank you Suzi! Well written, as always! I must say though, for what it's worth, that this is the first essay by you (that I've read) that I really disagree a lot with. Even though you're clearly expressing neutrality and openness, to my ears you are talking about consciousness in a way that seems to exclude (computational) functionalist/virtualist/illusionist views. I take no issue with that per se - I say this only because I have the sense you don't intend to do so! That's how I see it, anyway.

I wrote a much too long response attempting to elaborate on this. Among other things, in my opinion your distinction between consciousness and the content of consciousness is misguided, and LLMs really are recursive (while being active!), but this seems like the wrong time and place to post all of that. Instead, if you have the time, I'd be really curious about your response to the following thought experiment:

Imagine that you have access to a neuroimaging device that can track the biological and biochemical processes in a brain in extremely high temporospatial resolution. It tracks every molecule of glucose, systemic hormones, neurotransmitters, oxygen, ATP levels, and every ion flux and depolarisation with incredible (or complete) temporospatial resolution. It can be worn on one's head and it can record for an arbitrarily long time.

To the best of my understanding, to date there are zero neuroscientific findings to suggest that neurons do not act like any other cell - in accordance with known physics, biochemistry, cell biology and so forth - please correct me if I'm wrong! Let us assume this holds true in the future: even though we most certainly will have to revise and expand our cell biological models, let's assume every part of the whole will keep behaving in accordance with the laws of physics.

Now, assume that you are the one who has been wearing this device for a long time, including all the time that you spent conceiving of, developing and writing this very essay. You're still wearing it. Starting now, you're getting access to a computer that not only confirms that every neuron behaved as expected, without any signs of "strong emergence", but it can also analyse and answer any question you ask it.

You can trace every word in your essay, and every word in your response to this comment (if you do respond), down to all the molecular biology and neuroscience that you know and love, without any surprises at the low level. Furthermore, this supercomputer is very intelligent, and can provide visual animations on prompts such as: "show me an animation of the most active and relevant pathways in my brain for how I read and responded to Mark Slight's functionalist views, at 1/10th normal speed, and simultaneously show me the letter output that I produced as a result of that activity" (let's just presume that you were filmed while reading and responding, with accurate time-stamps). In other words, whatever you respond to me, you have access to a complete mechanical description of how that response came to be exactly as it is, at any resolution or detail that you want.

How do you imagine you would react to this? Would it challenge any of your views?

To me, this illustrates what I view as the core insight of functionalism. What we need to explain is how my brain models myself and my environment, so as to lead to doing the things I do. We need to explain what internal models lead people, philosophers and neuroscientists included, to say the things that they say, on either side of the debate. It is the models that we need to explain - not that which is modelled! We need to explain the model of a subject experiencing redness, but we don't need to find redness itself! Looking for an explanation of redness itself is, as intuitive as it may seem, simply a category mistake.

Importantly, this is not to deny the realness of consciousness. That is certainly not an illusion. I'm simply denying that personal introspection provides any data on what exactly it is that we need to explain. That intuition is illusory. It's the same mistake as looking inside GPUs to try to understand the emergence of intelligence (or mimicked intelligence if you prefer) of LLMs. Even if you have a complete map of the GPU's structure, you just won't find it there.

If you read this, I hope I'm not wasting your time. Keep it going, looking forward to the next one!

*hard-nosed functionalist rant over*

Suzi Travis:

Thank you so much, Mark! This is amazing — truly. My essays are never written with the intent to convince anyone of a particular position; they’re intended to start a conversation, so this kind of exchange is exactly why I love writing them.

That said… in this case, I don’t actually disagree with anything you said. I found your thought experiment compelling (and beautifully put!), and the conclusion you draw from it — that what we need to explain is the brain’s model of consciousness, not some metaphysical "redness" itself — is something I wholeheartedly agree with.

But reading your comment made me realise that something in my essay may have been unclear — or maybe just sloppily phrased — because I got the sense you were responding to an argument I wasn’t trying to make.

So now I’m wondering: what did I write that gave you the impression I might disagree with what you're saying here?

You mentioned the distinction between consciousness and the content of consciousness. Just to clarify — that distinction isn’t something I’m inventing. It’s standard in neuroscience research on consciousness. And it’s not meant to imply anything spooky — just that some researchers study states (like sleep, coma, anaesthesia, psychedelics), and others study perception (like whether someone sees, hears, or feels a stimulus). It’s a practical distinction for structuring research questions, not a metaphysical claim.

I’m also wondering if the issue came from me saying complexity isn’t enough. If so, I want to be super clear: I’m not rejecting the idea that consciousness could be computational or functional. What I was trying to point out is that not everything we call "complex" is created equal — and if there’s a link between complexity and consciousness (which I think there is), it probably depends on specific kinds of functional organisation.

So when people say things like “ChatGPT is getting so complex, maybe it’s conscious”, I think we’re sometimes throwing the blanket of "complexity" over the conversation without asking what kind of complexity matters. I assume the internal models that guide behaviour — in humans or machines — are highly complex. I’m just asking: would any model do? Or does it need to be a particular type, doing a particular kind of thing, performing a particular function?

So I guess I’m genuinely curious: is this what you thought I meant? Or was there something in my phrasing that felt like I was stepping outside a physicalist frame? If there’s some language that read that way, I’d love to know — both because I want to be clearer, and because I think we actually agree on more than it might have seemed!

Thanks again for such a generous and thought-provoking reply. I’m really grateful for it.

Mark Slight:

Thank you Suzi!

Clearly, you have caught my ADHD brain jumping to conclusions about what you're saying a bit too fast. I'm sorry about that. I am very grateful for the time and effort you took to respond! I do think it is worth getting into a bit more, will get back to this as soon as I can!

Suzi Travis:

Honestly, I really appreciated your comment. It was thoughtful and generous and got me thinking in new directions (which is always the best kind of comment). I’d love to hear more when you have the time — but you’ve already been so generous, so please don't feel obligated.

Mark Slight:

Thank you again, Suzi! Likewise, your posts make me think in new directions – the best kinds of posts! I certainly don’t feel obligated.

I jumped to conclusions too quickly, and it’s not the first time I’ve done that. There’s certain language that I associate with an unwarranted reifying of consciousness as some clearly identifiable phenomenon or property. I realise one probably cannot avoid that kind of language completely - especially not if one wants to remain accessible to a variety of readers. Also, I’ve now read your essays on functionalism and on the homunculus fallacy, which, taken together with other essays and you being a long-time neuroscientist, make me realise you know all of this much better than me. I do think a lot of neuroscientists are still entrenched in a kind of Cartesian model of the mind (even physicalists, in the form of Cartesian Materialists), so please forgive me for assuming you were one of them.

I now see that you say “consciousness might need — or simply might be — specific kinds of structures, architectures, dynamics, and functions”. I missed this (or perhaps you clarified). With “simply be” you certainly are leaving room for functionalism.

Yes, I think misunderstood you on the complexity part the way you suspected. I agree that we probably agree on more than I thought. I can’t stop myself going into rant mode again (in part to practice putting words to my thoughts). If you read it - please don’t take it as a response to your post – since I think there is significant overlap in our views.

I’m aware that the distinction between consciousness and its content is well established, I didn’t mean to suggest you made it up. But I do think it’s a confusing and misleading distinction the way I take it to usually be made. I think it’s a similar mistake to the distinction between “me” and the “objects” of my experience. These distinctions are necessary constructs for the purpose of our functionality – for example our self-narration and communication with others. But when it comes to philosophy of mind and neuroscience, the consciousness <-> content distinction and the mental subject <-> objects distinction often confuses matters more than it helps. It misguides us in what exactly we think we need to explain.

I think the distinction between consciousness and content supports the notion that consciousness “comes online” as a kind of medium that hosts the content. The distinction is implicitly dualistic. To me, and other functionalists I presume, consciousness is simply the totality of the content. The distinction is no more valid or useful than the distinction between an operating system and its components, between an LLM’s ‘intelligence’ and its capabilities, or between a movie and the movie’s content. Furthermore, the distinction kind of suggests that it’s possible to be conscious without any content in that consciousness (please note that a sense of no content is a perfectly good example of a kind of content!). I made a 2-minute video pushing this a while back, if anyone’s interested: https://youtu.be/3QRei0upNeA?si=iifVmhbCz6wDylpm. That said, I do recognise the usefulness of talking about different brain states such as awake, sleep etc, but I take them to be differences in the quantity and type of content the brain & body processes rather than a particular phenomenon being definitely on or off.

I feel like connecting consciousness and complexity risks confusing things more than being helpful. I know this is a big deal in philosophy of mind and that you’re not making this up either. I also think the field of complexity science/philosophy, and the relation to entropy, is incredibly fascinating and important in its own right. But this is my problem with what I take to be the common approach:

I think that asking about the relationship between complexity and the human mind (consciousness) is much like asking about the relationship between complexity and the emergence of the first replicating cells, the first eukaryotes, bee hives or mammals. Yes, of course, nothing like that will happen without complexity, but complexity itself lacks any explanatory power for the phenomenon we want to explain. What kind of complexity is required for a bee hive to emerge as opposed to a mammal? Or for a human to emerge as opposed to a dolphin? Again – I think that the LLM analogy is useful. What is the connection between complexity and the capabilities of LLMs? It’s the particular arrangement of things that make things into what they are. Not what kind of complexity. Complexity is not a mechanism.

I agree with Dennett that LLMs are not conscious “in any interesting way” (but not 100% unconscious either!). I also agree with Hofstadter that LLMs display “strands of consciousness”. The question of to what degree AI possesses human-like consciousness (during inference) is a question of how much its functionality resembles ours. It’s not a question about ‘states’ per se or of levels of complexity.

AGI / ASI will not be events at particular points in time – and neither will AI consciousness. Not any more than there was an event of a first mammal or a first human being (or animal sentience!).

*know-it-all attitude random adhd rant over*

Again, thank you for your excellent posts.

Suzi Travis:

Hey Mark! This is such a great follow-up. Thank you!

I completely understand how certain phrases (like the content/states of consciousness distinction) can raise red flags depending on how they’re usually used — especially in this very slippery space where language so often carries philosophical baggage. And yes, sometimes it’s better to let go of a little precision in favour of clarity — especially when the goal is to start a conversation.

And don’t worry about the rants—I’ve been the instigator of my fair share too. Often on the very same issue.

I find it hard to make sense of the idea of contentless consciousness. Can we really have a form of consciousness that persists even when all thoughts, perceptions, and emotions are absent? I don’t think so. But I have friends and colleagues who do. I wrote a little about this here, if you're curious: https://suzitravis.substack.com/i/146703315/does-consciousness-require-content

I totally agree with you — there’s a real risk of smuggling in the idea of a kind of Cartesian “theatre” that exists separately from the stuff that fills it. That framing can lean dualistic, even when that’s not the intent. This was actually one of my favourite rants during my PhD years.

On the complexity front — I’m with you. The danger in treating complexity as the mechanism is exactly the issue I was trying to raise in this essay. I like the way you put it. I hear people using the word complexity as if it is the explanation, but I think that misses what’s actually going on. The specific kinds of structures, architectures, dynamics, and functions are probably complex—but that doesn’t mean complexity is the thing doing the explanatory work. That type of talk can distract us from the real story (which I think is far more interesting).

That said, my goal here isn’t to write opinion pieces or take firm positions. These essays are meant to be conversation starters. I’m a computational neuroscientist with a strong respect for philosophy, so naturally the topics tend to orbit biology, psychology, information theory, and philosophy. Sometimes they reflect the way I see things — it’s hard for them not to. But not always. The real aim is to explore, ask better questions, and see what kind of thinking it sparks — not to have the final word.

Mark Slight:

Thank you so much for this! I think you are doing a fantastic job at achieving those goals. Also, reading your post again, it's obvious that you're bringing assumptions about complexity into question.

"I totally agree with you — there’s a real risk of smuggling in the idea of a kind of Cartesian “theatre” that exists separately from the stuff that fills it. That framing can lean dualistic, even when that’s not the intent. This was actually one of my favourite rants during my PhD years."

-yes, exactly. I think it is not only that the language leans dualistic. I think it often reflects a kind of implicit dualism even among those who claim to be physicalists/functionalists and who explicitly reject all forms of dualism. It takes many shapes. Sometimes it's a medium-content dualism, sometimes a mental subject-object duality. So strong is the mental construction of the "theatre" that one often claims to reject it while still stuck in its framework - what Dennett called "Cartesian Materialism". Also, to be fair to those whom I view as "stuck": as a mental construct the Cartesian Theatre is arguably a real thing. The key is that it is a construct, not fundamental. (I'm just riffing on what you said - I think this is in line with what you were saying, if I'm not mistaken.)

I will read the post you linked, really looking forward to it. Thank you.

First Cause:

I hope you don't mind if I jump in here Suzi, but I would like to address this comment.

“I’m not rejecting the idea that consciousness could be computational or functional.”

It is my contention that this idea that consciousness could be computational or functional should indeed be rejected and relegated to the trash heap of absurd assumptions.

See the additional comment I posted on your substack for a more thorough explanation.

thanks

Wyrd Smythe:

From your post's title, I thought this one might be about epiphenomenalism, but you surprised me with something much more interesting. (And it occurred to me as I started reading that, with your posts, I should just click the ❤ at the top because I always enjoy them.)

"Meaningful complexity" seems a good way to put it. I've never thought that mere complexity was sufficient (though it seems necessary). Indeed, as you say at the end, "specific kinds of structures, architectures, dynamics, and functions" are what results in consciousness emerging. (But I don't care for the notion that it "comes along for the ride" because it implies it's secondary and perhaps not the point.)

> "The researchers found that brain activity was significantly more complex during the forward viewing than the backward one."

Fascinating! That is the opposite of how I expected that paragraph to end. My thought was that viewing something incoherent would stimulate the brain more than watching something that makes sense. Apparently not. I suppose that the mind disengages to some extent when confronted with nonsensical input.

Which almost seems contrary to what's found in seizures and psychedelics. But it makes sense that mind-altering drugs raise the mind's complexity (certainly my experience of them — the ride can be mentally exhausting). In those who over-indulge or can't handle them, the result often is seizure-like, which suggests that maybe psychedelics do edge the mind more towards the right side of that curve.

I have long suspected that if we were to build a hardware brain with the same structure and function (Isaac Asimov's robot "positronic" brains), it would almost necessarily be conscious. (But I remain skeptical about software versions.)

Suzi Travis:

Ahh, thank you!

The “comes along for the ride” phrase does have an epiphenomenal flavour, doesn’t it? In some sense, I think that’s what people do mean when they say consciousness might emerge once things get sufficiently complex — like it gets “switched on” by the brain, but doesn’t do anything in return. It’s produced, but doesn't play an active role.

On the movie study — I wonder whether it’s an attentional thing. When the movie is running backwards, it’s less interesting and coherent, so maybe the brain just checks out a bit and defaults to its default mode network (internally-oriented cognition which tends to be less complex than when we pay attention to what's going on in the external world).

And I love the Asimov reference. I share your instinct: if we really could build something with the same structure and function, I think it would be conscious. I don’t think there’s any spooky stuff going on. But I do think we grossly underestimate what it really means to “build something with the same structure and function.”

Wyrd Smythe:

Oh, very true. Long ago I wrote some posts about what would be involved in a software-based full brain simulation. The static data representing the model was in the tens of petabytes just to represent all the neurons and synapses and connections. A physical version would encompass as much data and would have to capture all the subtleties of brain operation.
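The back-of-envelope arithmetic is simple enough (all numbers here are assumed, order-of-magnitude values, not figures from those old posts):

```python
neurons = 86e9               # ~86 billion neurons
synapses_per_neuron = 7e3    # ~7,000 synapses each (rough estimate)
bytes_per_synapse = 32       # assumed: target id + weight + a little state

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes / 1e15:.0f} PB")  # ~19 PB, i.e. tens of petabytes
```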

I've long pondered a thought experiment (I believe due to Chalmers) that involves replacing neurons one-by-one with artificial neurons. He asks, at what point would consciousness fail (on the premise that computationalism is false). It's a challenging question for someone like me who suspects that premise is correct. But what he's proposing isn't really a software simulation but a hardware construction, so I think maybe it "just might work."

[I just finished rereading for the umpteenth time Terry Pratchett's "Guards! Guards!" in which million-to-one chances always work if someone says, "It's a million to one chance, … but it just might work."]

I think you're likely right about the backwards movie. Makes sense to me, anyway.

The Plucky Welshman:

I'm a mystic so needless to say I think the brain is important but not fundamental in an experience of consciousness. Perhaps it was there all along, perhaps the fabric of spacetime is conscious, perhaps, as touched upon in this essay, correlation is not the same as causation. Perhaps consciousness comes from outside of ourselves, making the brain a bit like a two-way radio that relays signals between the body and one's personal awareness, wherever that happens to be. All the same, it's a very interesting read and thank you for posting.

Suzi Travis:

What a lovely, gentle, and thoughtful way to express a very different perspective.

Yes, the correlation vs. causation point is such an important one—especially when the thing we’re studying is also the thing doing the studying!

Thanks so much for reading, and for your kind words.

First Cause:

The top 22 theories of consciousness and counting read more like the stand-up comedy routine of Abbott and Costello’s “Who’s on First” than science. As far as scientific progress goes, consciousness research is still in the Stone Age. It hasn’t even elevated itself to the metaphorical Bronze Age. Even Penrose & Hameroff’s quantum intuitions, as insightful as they may be, are grounded in false assumptions. Science has to get its original assumptions, i.e. its hypotheses, right before any progress can be made. But how can science accomplish this objective; what method would it use?

I agree wholeheartedly with Philosopher Eric’s assessment that the scientific community needs guidance and direction in order to make progress. Where I differ from Eric is that this guidance will not come from academic philosophers. It has to come from within the scientific community itself. A new branch of science has to be established. I would call this branch the synthetic sciences, and their mandate would be grounded in the “synthetic scientific method”.

But wait; what? I’ve never heard of the synthetic scientific method before. Sure you have; Immanuel Kant, the German dude who wrote the book “Critique of Pure Reason”? Even though Kant never used the phrase “synthetic scientific method” he explicitly referred to it as synthetic a priori judgments followed by rigorous synthetic a priori analysis.

Here is a brief but crude example of how synthetic a priori analysis works. Is it possible; wait, let me rephrase that. Does it make any sense to use an empirical model of something that we understand very very well, say an information processing machine to explain something that we do not understand and are absolutely clueless about, say consciousness for example?

Suzi Travis:

I appreciate the passion here. And I agree—consciousness science is far from perfect. But the people working in the field are asking seriously difficult questions, and they often do so with impressive methodological care.

It’s easy to dismiss a field—any field—as chaotic when it’s still in its early days. And we really are in the early days. Many people mark the beginning of the neuroscience of consciousness with Crick and Koch’s 1990 paper, “Towards a Neurobiological Theory of Consciousness.” That was only 35 years ago. Not a lot of time when you're trying to reverse-engineer the most complex system we know of.

Studying consciousness isn’t like studying anything else. Studying anything is hard—studying trees is hard! But at least with trees, we all agree on what a tree is. With consciousness, we’re still debating the definition. So the concepts, the methods, and the terms of the discussion are in flux. That means progress is going to look messy.

By all means, let’s critique the field (there’s plenty to critique!)—but I think it’s worth recognising that real, meaningful progress has been made, even if we haven’t yet settled the big questions.

First Cause:

"With consciousness, we’re still debating the definition."

Ah yes, the definition; the scapegoat for any armchair dialectician who gets backed into an intellectual corner. Believe it or not, we do have a definition, and its origin can be traced to Latin: con means "together", scious means "to know" and ness means "us".

So the definition of consciousness is succinct and to the point. Consciousness literally means: "together to know us". So the problem does not reside with the definition as you and others assert, it lies with the "knowing us" part.

If one is convinced by the dogma of the religious community, Homo Sapiens are all children of God. On the other hand, if one is convinced by the alternate dogma of the church of science then Homo Sapiens are nothing more than mind-less calculating/prediction "machines". Pick your poison folks.

The late Richard Rorty once stated that without a vocabulary that captures the way the world really is or a core human nature, there isn't even a possibility of locating a metaphysical foundation for truth.

Party on........

Suzi Travis:

Ha! I do appreciate the energy here.

But I think we might be using different definitions of definition. Without going too far down the definition rabbit hole, I think there’s an important distinction to be made between etymology and a scientific definition.

The Latin roots are interesting — poetic, even. But I’d say etymology gives us historical texture, not a working definition we can use in empirical research. In consciousness science, the challenge isn’t that we lack words — we have plenty of those. Scientific definitions aren’t about tracing word origins — they’re about operationalising concepts so we can study them. The challenge is that we lack consensus on how best to do that. That’s where the real debate lies.

One scientist might take a computational functionalist view (which, I know, you believe should “be rejected and relegated to the trash heap of absurd assumptions”) and say: we’re on the right track — we just have some details to work out. Another might agree with you, arguing that’s absurd. Why? Because they define consciousness — that is, they operationalise it — differently. Instead of pointing to a function, they might point to sensorimotor engagement with the world. In that sense, they’re defining consciousness as something entirely different.

But I suspect a lot of our disagreement here isn’t really about scientific definitions. I think it might be about how we approach truth in the first place. What counts as knowing? Where does understanding come from?

And those are interesting debates. So — party on indeed. 😉

First Cause:

"But I suspect a lot of our disagreement here isn’t really about scientific definitions. I think it might be about how we approach truth in the first place."

You've got that one right Suzi. Empiricism is not the modality by which truth is discovered; its reach is limited to instrumentalism. A posteriori analysis is an "after the fact" method and this technique works very well with data that can be observed, weighed, measured and tested. But what do we do with all of this observed data that cannot and will never conform to being weighed, measured or tested? Does all of this data still fall within the jurisdiction of the physical sciences? I'll let you answer those questions, party girl.

As far as what counts as knowing and the origin of understanding: the short answer is experience. The only way you or I can know anything is through direct personal experience. I've experienced the explanatory power of synthetic a priori judgements followed by rigorous synthetic a priori analysis; you have not.

You're a trained physical scientist Suzi and I am a freelance synthetic scientist. Therefore, our interests are not merely different but irreconcilable. I've enjoyed our brief conversations so I will sign off and wish you the best of luck with your substack blog.

First Cause:

One final anecdote: The two examples of function and sensorimotor engagement you cited are based on the false and erroneous assumption that "mind" is what the brain does. Science makes the same categorical mistake by assuming that space is a fabric and the quantum world is a wave function.

My synthesis does not support the conclusion that "mind" is what the brain does hence my assertion that consciousness science is still in the stone-age.

Eric Borg:

Last week I implied that the notion of things becoming more conscious when they become more complex was somewhat like “using the magic to explain the magic”. I’m happy that you veered away from supporting such an unfalsifiable position Suzi. Instead you concluded that consciousness should depend upon the right kind of complexity. Indeed this should be a kind that evolved to become human grade in us. Though you mentioned in footnote 6 that mainstream neuroscience doesn’t have much respect for electromagnetic consciousness proposals, I highly suspect that a theory in this vein will become empirically validated some day. Here’s how I think this question will become settled one way or the other:

Theoretically all that we see, hear, smell, think, and so on, exists by means of an electromagnetic field that’s associated with the right sort of synchronous neuron firing. Therefore researchers put instruments in a human test subject’s head that ought to produce electromagnetic energies that are similar to the kind that’s produced by standard synchronously fired neurons from around the brain. If certain typical energies in certain parts of the brain are not reported by the test subject to affect them phenomenally (vision, hearing, thought, and so on), then EMF consciousness should ultimately be dismissed (since fields of a certain kind constructively and destructively interfere with others of that kind). If test subjects do report distortions however, then with enough verification it should become proven that consciousness exists as a neurally produced electromagnetic field.

Imagine how transformative such a discovery would be. Just as the rise of chemistry was only able to occur by means of the empirical validation of the atomic model, something similar should happen in neuroscience if it’s discovered that consciousness resides by means of a neurally produced electromagnetic field.

Suzi Travis:

Thanks, Eric!

I agree with you on several points: the brain certainly generates EM fields, and we can measure them using EEG and MEG. I’ve actually conducted quite a few of these types of experiments myself. I also agree that synchronous neural firing is strongly correlated with conscious states—that’s something I’ve explored as well. And yes, those fields can interact constructively and destructively with each other.

That said, I remain skeptical that manipulating only the EM fields—without directly affecting neural activity—would significantly influence conscious experience. The EM fields naturally produced by the brain are incredibly weak, several orders of magnitude lower than the fields we typically use in techniques like TMS.

Still, your experimental idea is certainly creative. One thing to consider, though, is that the brain is a nonlinear, noisy, and dynamic system. So we can’t just add an artificial EM field and expect it to "stack" cleanly onto what's already there. If your manipulation did produce any effect, it likely wouldn’t be clean or predictable. Things would get even more complicated, since any externally introduced electrical activity would interact with ongoing neural processes in complex, possibly unintended ways.

Eric Borg:

Thanks for considering my testing proposal Suzi! I’ve found that others rarely offer their two cents on the matter, let alone actual neuroscientists like yourself. My plan might conceptually be even more straightforward than you’re currently imagining.

First observe that the test is specifically to create the incredibly weak energies that are typical of endogenous synchronous neural firing, centered at various places in the brain, and hopefully without also altering the firing of any neurons by means of those weak energies in themselves. So you can be skeptical that such a thing would alter someone’s consciousness, but let’s actually do the testing to see if participants notice anything phenomenally unexpected by this means.

Also the point would not be for an artificial EM field to stack cleanly with what’s already there. The point would merely be to constructively and destructively interfere with the endogenous field that’s currently thought of mainly as a waste product of neural firing, and specifically to see if it not only isn’t a waste product, but resides as an experiencer of what’s seen and all other elements of consciousness.

For certain such parameters, if participants tell us things like their vision blurs, colors change, strange sounds are heard, or whatever, then what are the conditions which cause such reproducible alterations? Here researchers would try to refine given parameters to see if they could create more extreme or novel alterations in future attempts. The more that they could narrow down what it takes to alter someone’s consciousness by this means, in ways that shouldn’t be affecting the brain in other ways, the more solid the case should be that consciousness exists in the form of these incredibly weak EMF disturbances that neuron firing creates. But if researchers don’t receive reports of altered consciousness from test subjects, even after a vast assortment of neurally typical energies were tried in various locations of subjects’ brains, then ultimately scientists should need to conclude that this theory must not be true.

Unless you can think of any conceptual problems with this proposal, the main question should be how such testing might practically occur? When neurons fire in synchrony, how many of them tend to do so in a given part of the brain? That would tell us some things about the sorts of EMF energies that would be appropriate to try when the source is at those locations. Given leads to those areas, could researchers induce appropriate energies, or even adjust them on the fly to simulate the synchronous firing of different numbers of neurons? I’d think that the best people to explore this would be the researchers who are currently working on brain-computer interface. Some of them should already have techniques and skills to design and implement this sort of testing.

Since we’re talking about this I’ll also mention a brain-computer interface study that was published a couple years ago that I think already provides some reasonable evidence of support for the notion of electromagnetic consciousness. There was a woman who couldn’t effectively speak given recently atrophied speech muscles. Researchers implanted two EMF detection arrays at two different areas of her brain that are known for speech production, totaling four arrays. Then after healing she spent a total of 100 hours trying to read aloud from planned text while her brain waves were being monitored by these arrays. Would some of this data correlate with what she was trying to say, meaning that it could be used as an artificial means from which she could interface with a computer and so effectively “speak”? Yep! Or at least to a workable degree when combined with predictive text software.

The question that I ask is: why was this possible at all? I get the sense that the researchers left this with the theme of “Since the brain is a mysterious thing that we don’t understand much about, we attempt things like this in the hope that they’ll work”. My own interpretation however is that they probably got at least one of their arrays close enough to a place in her brain where the EM energies that exist as her intentions to vocally say specific things, ephaptically couple with speech neurons that used to cause her to clearly say what she wanted to say. Should EMF consciousness be true, this is because there should be elements of those energies specifically (or the ones that tell her speech muscles what to do), which correlate well with her intended speech.

https://med.stanford.edu/news/all-news/2023/08/brain-implant-speech-als.html

Suzi Travis:

Hey Eric!

On your point about generating EM fields at endogenous levels (i.e., as weak as those produced by synchronised neural firing) and how they might affect consciousness without altering neural activity—I’ve actually worked on something closely related.

I’ve conducted multiple studies using TMS (transcranial magnetic stimulation) and tDCS (transcranial direct current stimulation) to stimulate the brain — including over the visual, motor, and sensory cortices. When I stimulate over visual cortex, I can produce phosphenes; over motor cortex, involuntary movement (typically in a limb or finger); and over sensory cortex, participants report a feeling somewhere in the body.

But here’s the issue: when we try to use these techniques to produce perceptual or behavioural effects, we usually have to apply stimulation levels far above those generated naturally by the brain. And even then, we often don’t see any subjective effects until those levels are quite high — in some cases, too high to pass ethical review protocols.

If we dial TMS or tDCS down to intensities equivalent to endogenous EM field strength, we see nothing — no perceptual changes, no behavioural shift, nothing participants report. I’ve tested this directly. A TMS pulse matched to the brain’s natural EM field intensity has zero measurable effect on sensation or movement.
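To put very rough numbers on that gap (order-of-magnitude figures as I recall them from the stimulation literature, not data from my own studies): endogenous cortical fields are usually estimated at around 1 V/m, while the induced field at which TMS reliably evokes movements or phosphenes is on the order of 100 V/m. A trivial calculation makes the scale of the mismatch plain:

```python
# Back-of-envelope field-strength comparison. Both values are rough
# order-of-magnitude literature estimates, used here only to illustrate
# the gap between endogenous fields and effective TMS stimulation.
endogenous_v_per_m = 1.0        # ~ field from synchronous cortical firing
tms_threshold_v_per_m = 100.0   # ~ induced field needed for overt effects

ratio = tms_threshold_v_per_m / endogenous_v_per_m
print(f"TMS at threshold is ~{ratio:.0f}x stronger than endogenous fields")
```

So stimulation has to be roughly a hundred times stronger than the brain’s own fields before anyone reports anything at all.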

Thanks for sharing that BCI study — I know the one you mean. It’s cool — I like what they did there. Just to clarify, though: the arrays in that study didn’t measure EM fields — they recorded local field potentials and spiking activity.

The success of that BCI system came from decoding neuron firing patterns, not from field-level interactions or ephaptic coupling. So while it’s a great example of how closely intention and neural activity are linked, it doesn’t support the idea that consciousness resides in, or as, the EM field. It just reinforces that we can map certain neural correlates of intention at high resolution.
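To make that concrete, here is a toy sketch of the kind of decoding such systems do. It is not the study’s actual pipeline (the real one used a recurrent network plus a language model); it just shows the general shape: binned spike counts in, intended phoneme out. All numbers here are invented for illustration.

```python
# Toy sketch: decoding an intended phoneme from binned spike counts.
# Synthetic data and invented numbers; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels = 128    # electrode channels across the arrays
n_phonemes = 5      # pretend set of intended speech sounds
trials = 200        # attempts per phoneme

X, y = [], []
for p in range(n_phonemes):
    # Each phoneme gets its own mean firing-rate pattern across channels.
    rates = np.clip(rng.uniform(1, 10, n_channels)
                    + rng.normal(0, 2, n_channels), 0.1, None)
    X.append(rng.poisson(rates, size=(trials, n_channels)))
    y.append(np.full(trials, p))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# A plain linear readout of spike counts is enough to separate the classes.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"decoding accuracy: {clf.score(X_te, y_te):.2f}")
```

Notice that no field-level quantity appears anywhere in that pipeline; the features are simply per-channel spike counts.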

Ephaptic coupling — a non-synaptic communication via electric fields — is a legitimate phenomenon in neuroscience. But this study didn’t aim to measure or use that mechanism.

Expand full comment
Eric Borg's avatar

Thank you Suzi! It’s wonderful to be able to discuss these matters in general, but especially with a person who has actually done this sort of work herself.

Some have asked me why non-invasive instruments such as TMS and tDCS can’t be used to assess the truth or falsity of EMF consciousness. This is essentially because such machines don’t create anything similar to the EMF associated with neurons firing synchronously inside the brain. As I understand it, they essentially shoot outside energies through the skull into the brain to cause neurons to fire in specific areas, often with potential therapeutic uses in mind. So yes, visual cortex stimulation might incite phosphenes, motor cortex stimulation might incite movement, and sensory cortex stimulation might incite bodily sensations, specifically because each directly alters neuron firing in the appropriate places. If you turn the energy down on such a machine, however, what’s produced shouldn’t replicate the field of synchronous neuron firing inside the brain, since it will instead be transmitted from outside the brain. Thus my testing proposal is to put leads into the brain, hooked up to some sort of machine able to create energies similar to those of endogenous synchronous firing, sourced to interesting places inside the brain.

Some have also asked why synchronous firing is important, rather than the field created by other firing. Theoretically, synchrony constructively amplifies such energies, lifting them above the noise of unrelated firing so that the field remains relatively pristine. Indeed, it’s this pristine field that I seek to disturb for testing purposes. Theoretically, in nature there is a causally mandated zone of consciousness that lies under certain parameters of EM field, and evolution serendipitously tapped into it to create the conscious form of function.
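The amplification part, at least, is easy to demonstrate numerically. Here is a minimal sketch, assuming (unrealistically) that each neuron contributes an equal-amplitude sinusoid: N in-phase sources sum to an amplitude proportional to N, while N random-phase sources only reach roughly the square root of N.

```python
# Minimal sketch: synchronous vs asynchronous summation of many tiny
# sources. Assumes each neuron contributes an equal-amplitude 40 Hz
# sinusoid (a simplification, not a biophysical model).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                        # number of contributing neurons
t = np.linspace(0, 0.1, 1000)     # 100 ms window
f = 40.0                          # gamma-band frequency, Hz

# Synchronous: identical phases simply scale one sinusoid by n.
sync = n * np.sin(2 * np.pi * f * t)

# Asynchronous: random phases, so contributions largely cancel.
phases = rng.uniform(0, 2 * np.pi, n)
asynchronous = np.sin(2 * np.pi * f * t[None, :] + phases[:, None]).sum(axis=0)

print(f"synchronous peak  : {np.abs(sync).max():.0f}")          # ~ n
print(f"asynchronous peak : {np.abs(asynchronous).max():.0f}")  # ~ sqrt(n)
```

With ten thousand sources, the synchronous field peaks near 10,000 units while the asynchronous one sits near 100, a hundredfold advantage. That is the sense in which synchrony keeps the field above the background noise; whether that field then does anything causally is exactly what the implant proposal would test.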

Ah, so those BCI arrays from the experiment I mentioned weren’t specifically set up to detect the EM field itself, but rather local field potentials and spiking activity. Well, okay. But aren’t local field potentials and spiking activity things that make up an EM field anyway? If that’s the case, then I’d think my account of why this experiment worked at all should remain on the table. It’s not as though any good explanations exist yet among the professionals. I realize that they didn’t set out to test EMF consciousness, but they may have done so anyway. That doesn’t happen with standard consciousness proposals, specifically because they never propose anything tangible, and thus falsifiable, to exist as consciousness.

Expand full comment
Dave Slate's avatar

Another good article, Suzi.

Although we can study correlations between measurements of human brain activity and perceptions of our own consciousness, this is harder to do with non-human animals, who don't convey their thoughts and feelings in language easy for humans to understand. Still, we can plausibly infer similar associations in other mammals, since their brains are structurally much like ours. But what about an apparently intelligent invertebrate like an octopus, whose brain is organized in a quite different manner? Is an octopus conscious in the way we understand the concept, and, if so, how does its consciousness relate to its brain activity?

Also, is it proper to speak of consciousness as a binary attribute that a creature either has or lacks, or is consciousness a matter of both type and degree?

Another puzzle: to reduce seizure activity in cases of severe epilepsy, doctors have experimented with cutting the corpus callosum that joins the two brain hemispheres. Although this treatment has been effective in many patients, I've read of some strange post-operative side effects, such as a patient's hands moving at cross purposes and even fighting each other. Could this mean that such patients might have developed two partly separate conscious entities fighting for control of the same body?

Expand full comment
Suzi Travis's avatar

Hi Dave! So many good questions!

Non-human consciousness is such a fascinating and thorny issue. With mammals, we lean heavily on structural similarity and shared behaviour. But with something like an octopus, which has a radically different neural architecture—and most of its neurons distributed through its arms!—we’re forced to ask some much tougher questions. I actually wrote a fun piece on this a while back: https://suzitravis.substack.com/p/the-mind-of-an-octopus

I’m not a fan of thinking about consciousness as binary. Questions like “are the lights on or off?” feel a bit misleading to me. I suspect different creatures (or systems) might have different kinds of experience—not just more or less of the same one.

And yes -- the split-brain studies are some of the weirdest and most mind-bending in all of neuroscience. The idea that severing the corpus callosum might result in two semi-independent conscious agents inside one skull is both fascinating and unsettling. But I think these findings suggest a couple of interesting things about the self and perception.

It seems that the self -- or at least our perception of self -- is probably not as unified as we like to think. But perception is the interesting part: at least as these patients report it, they don’t seem to experience two different perceptions at once.

Expand full comment
Dave Slate's avatar

Good octopus article, Suzi, although now every time I eat some coconut I'm going to think about octopuses!

"... because at least as reported by these patients, they don't seem to experience two different perceptions at once." But could that be because only one of the patient's multiple personalities is in control of the reporting apparatus (mouth, vocal cords, etc.)?

Expand full comment
Suzi Travis's avatar

Yes, exactly! The standard explanation is that language is strongly lateralised to one hemisphere (usually the left), so even if there were two separate streams of consciousness, we’d likely only hear from one of them.

But people have questioned whether what’s going on in the other hemisphere really counts as conscious — and that’s what makes it so interesting. It’s hard to pin down. And it raises questions about what we mean by consciousness.

Expand full comment
Dave Slate's avatar

The amusing science fiction short story and play "They're Made Out of Meat" (1991), by Terry Bisson, may be a bit off-topic for this particular article, but I think it fits in well with the general subject matter of When Life Gives You a Brain:

https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html

https://web.archive.org/web/20080702175538/http://www.terrybisson.com/meatplay.html

Expand full comment