43 Comments

Suzi, you make it more and more difficult to take sides with the spiritualists and the ghost busters. Your explanation of how scientists agreed to stipulate that consciousness is observable by using devices to “look” at the brain in action clarifies the issue. Once this stipulation is accepted, consciousness becomes a predictable phenomenon. But the existential question of what it “feels like” to be the bat in a cave, the hard problem, the sense of self, while clearly in some way caused by the brain, remains open. I have the feeling that your take on free will is going to be most helpful in bringing together the brain as the physical basis of consciousness and our existential experience of it. Or maybe not :) thank you 🙏

author

Hey Terry! You raise an excellent point. Defining consciousness as say 'a verbal report' and then looking in the brain at what correlates with that 'verbal report' is problematic. But it's not a problem that is lost on researchers. The 'hard problem' is still open, as you say. While we've learned a lot about the physical basis of consciousness using NCCs, explaining subjective experience still seems elusive.

I will eventually get to free will. It's certainly an interconnected issue with consciousness.

Aug 6 · Liked by Suzi Travis

Very nice breakdown. I also have just noticed the subtle (because I have just noticed it!) rebranding. Great work.

author

Thanks John!

Aug 6 · edited Aug 6 · Liked by Suzi Travis

As I started reading this post, it occurred to me that I could just go ahead and click the Like button right now, because your posts bat 1.000 with me. And ee-yup, your batting average remains unsullied!

In answer to your questions, my speculation is that neither self nor content is absolutely necessary for consciousness. The brain-in-a-jar notion seems viable, at least in principle, though I do wonder how much simulated input would be necessary for the mind to remain stable. Some SF authors have imagined scenarios of disembodied consciousness, and some of those have seemed plausible to me.

As to content, there was an analogy I read once that I wish I could remember better. It was along the lines of comparing consciousness to a projected movie. Not referencing the notion of an inner homunculus watching, just that the movie represented our consciousness (perhaps along the lines of GWT). The analogy also invoked the idea of the projector being on with no film (content) running through it. "Pure consciousness" then being the white light of the projector. So, maybe consciousness is possible without content?

As an off topic aside, you mentioned "that little voice in your head, always chattering away", and it made me wonder what you thought of those who claim to have no inner voice. (Actually, maybe not so off topic if it somehow connects a little with the idea of contentless consciousness.) It seems that most, me included, find the claim of a lack of inner voice questionable.

Aug 6 · Liked by Suzi Travis

It's funny, I had the same projector analogy here: https://kwiri.substack.com/p/is-there-a-neuron-of-consciousness - with the exception that I thought of the projector screen as an analogy for raw consciousness without content.

I tend to think consciousness is separate from its contents. One reason is that all conscious experience appears so similar yet so distinct. For example, seeing and hearing feel very different, yet they share the commonality of being a subjective feeling of being something.

Of course this doesn't prove anything, but it feels like if seeing and hearing created consciousness (rather than "colored" it), we wouldn't have a subjective sense of them sharing anything in their nature. They would feel like totally separate phenomena, although I can't imagine what that would feel like.

It's all uneducated guesses here, but the question is so interesting I can't stop myself from making those guesses :)


I wish I could remember where I saw it, because I remember it being better than how I expressed it. The light of the projector was raw consciousness, and the projected images were mental content. If the film runs out, it’s just the raw light of consciousness.

Of course, it’s a metaphor that may not apply to the real thing.

author

Consciousness definitely seems like a unified phenomenon, doesn't it? So that unity needs to be explained -- or at least why it seems unified needs to be explained. You're so right! These questions are endlessly interesting.

author

Ah! Thanks so much!

Hmmm... I like the projector analogy, but as you allude, we need to be wary of the potential infinite regress with such an analogy. If there is a projection, we might want to ask who the projection is for?

I suspect the little voice in our head is related to language. It almost certainly adds to our sense of self. And because it is so often dominant, I suspect, it is also tied closely to our conscious experience.

Because what we are conscious of is limited (we can't take in all the information bombarding us), there is an element of 'selection'. So, I suspect people who report not hearing that inner voice might have something else that is capturing their attention.

If that inner voice is related to language, my guess is that people who have language comprehension deficits (because of damage to Wernicke's area) might not have the same inner voice that most of us do. I also suspect that animals don't have a little inner voice chattering away -- well, not in any way that is similar to humans with normal language abilities.


Oh, now that's intriguing. Not being aware of one's inner voice because some other aspect of consciousness captures all the attention. That might explain it. Whatever the cause, it has to be constant, as I understand people reporting no inner voice say they *never* have one. I've wondered if it's possible one has to learn to hear it or recognize it as such.

I watched a video of an interview with a college student who claimed she had no inner voice, and one of the questions involved how she was able to write. She ascribed it to using the rules of grammar to construct sentences. She spoke and understood perfectly normally, and it really made me wonder about a kind of blindness to one's own experience. I'm one of those who can't fathom lacking an inner voice, so the topic fascinates me.

(I'm also intrigued by Jaynes's bicameral mind theory.)

author

Endlessly intriguing!

It seems like factors such as personality, cognitive style, and even cultural background can influence the strength of inner speech.

Like inner speech, there are large individual differences in visualisation skill. Some people can create vivid, detailed mental images, while others struggle to visualise at all.

Jaynes's bicameral mind theory is fascinating -- although highly controversial. But it does make me think about whether our conscious experience has changed -- and if so, how much. I can't help but think that language is integrated into our experiences, and because it is, humans likely experience the world differently now than they did 3000 years ago.


Yeah, good points, we’re a spectrum in everything, aren’t we?

It would be very interesting to see how someone from 3000 YA would respond to today. Impossible experiment, but I’d be curious about: (1) an adult from that time brought forward; (2) a child likewise; (3) a newborn. A nature/nurture experiment.

Put another way, how much have our brains changed in 3000 years? Genetically, I’d guess not much, but how they get wired in our environment must be vastly different.

author

I agree -- the basic neural hardware (if we can call it that) probably hasn't changed much in 3000 years. It's likely our environment, especially our social structures, that's had the biggest impact on how our brains get wired up.

Aug 6 · Liked by Suzi Travis

Very useful and thought provoking as always!

One question that might be relevant: if it's true that some blind people use their visual cortex during auditory tasks, is that an indication that the visual cortex is not the source of conscious vision (as otherwise we'd expect those people to "see" when hearing sounds)?

author

What a great question!

Yep, it's totally true: the visual cortex of people who are blind (especially those blind from birth or early childhood) can be repurposed to process other types of sensory information, including auditory information.

What's interesting though is that some research suggests that the visual cortex in blind individuals can develop a kind of "soundscape" mapping. Normally the visual cortex maps to our retinas -- different locations on the retina map to corresponding areas in the visual cortex. Well, it seems like that mapping can happen for sound too. For some blind people, different areas in the visual cortex respond to sounds from different locations in space, in a similar way to how the visual cortex normally maps visual space.

There is some fascinating research looking at blind individuals who use echolocation (navigating by producing sounds and listening to the echoes). It turns out that parts of the visual cortex activate in response to echoes in a way that might preserve spatial mapping.

So, in some sense, yeah! Some people might 'see' with sound.

Just a technical note -- while the visual cortex is primarily involved in processing visual information, it might not be the source of conscious vision. The prevailing idea is that conscious experience occurs through the interaction of multiple brain areas.


Whatever consciousness is, the brain is underappreciated by most of us. Soundscapes and echolocation? Amazing!

author

Indeed!


Great breakdown of the scientific view of consciousness and the intricacies of what that word even means. I especially like the inclusion of the phrase, "neural correlates of consciousness", since so many articles out there seem to just leave that off, which can give the impression to those who aren't aware of these issues that science is somehow tapping directly into phenomenal consciousness. So many obnoxious quibbles could be prevented with that simple upfront admission!

The question of contentless consciousness is an interesting one. I think you may have provided a hint in saying the overlap between states of consciousness and contents of consciousness is inevitable.

From a first-person phenomenal perspective, it seems truly contentless consciousness is when we're unconscious, as in a dreamless sleep. Another way of putting it is that we can know about our own state of unconsciousness only indirectly and in retrospect. It can't be remembered as an experience...there's nothing to conjure up. Being "unconscious" from the experiencer's point of view is almost theoretical; you go to bed, you wake up. When you went to bed, it was dark. When you woke up, the sun was shining. Clearly time has passed, but you weren't there to experience time's passage; you can only infer it. It's somewhat disconcerting when you think about it!

Which makes me wonder whether the meditator who speaks of contentless consciousness means something not quite empty of content. It's hard to imagine having experience at all, even if that's the bare condition of being conscious, without some sense of the passage of time, however strange or distorted that may be as compared to a normal experience of time. If you picture the world disappearing with nothing whatsoever to experience in it, there would still be your own experience of your own thoughts in motion...passing through internal time (as opposed to clock time). If you can make these thoughts stop...or even make them disappear, what is it that notices the disappearance or total absence of thought? It seems there must be at least the sense that the thoughts could be there, but they're not...right now. Which is itself a thought moving, so to speak, through time. Then again, I could be wrong. I'm certainly no meditator!

Extrapolating from the above about dreamless sleep, it seems that if you truly experienced timelessness, you wouldn't be able to remember it. At least not as something you directly experienced...unless there's some reason to exclude time as a kind of content?


Interesting. If a tree falls in the forest and I am experiencing a period of contentless consciousness, was there ever a tree? Suzi has a way of provoking free flowing thoughts. Very cool, Tina.


Haha thanks!

author

I'm constantly amazed by the comments here! I love your twist on the classic thought experiment.

author

Hi Tina! Wow, what a fantastic comment!

I especially love this part:

"If you can make these thoughts stop...or even make them disappear, what is it that notices the disappearance or total absence of thought? It seems there must be at least the sense that the thoughts could be there, but they're not...right now. Which is itself a thought moving, so to speak, through time."

Brilliant! You've really captured the paradox beautifully.


Haha...thanks! I spend way too much time staring off into space. Glad to see there's some use for it!

Aug 8 · Liked by Suzi Travis

This comment is art.


Thanks!

Aug 10 · Liked by Suzi Travis

Tina, I think you have this right. I’m a lightweight in meditation but can enter a state of ‘no thought’—which is a kind of suspension of time (no awareness of time) and a complete intentional suspension/disconnect from physical/sensory perception—while at the same moment the awareness of self (a beingness) disconnected from stimulus (content) is immensely enhanced. This awareness of self could be a kind of content in itself—though that seems experientially very inverted. The focus is a deeper ‘concentration’ on the absence of content, on the void, diving into the gap.

When I’ve done this at night—let myself go into this ‘state’—at times I’ve become aware at some point in the night that I’m asleep. It was very disconcerting the first time it happened—aware that I existed in my body but my body was completely disconnected from me—I couldn’t hear, see, move, etc. In fact it was a bit terrifying. Now—as strange as it might sound—I look forward to these experiences and intentionally set them up sometimes (they may or may not occur). As you say—the world disappears, and my body connectedness disappears, but I’m aware of me—I’m awake—though there is no sense of the moment in time or of measured time passing, there is a sense of existence in time, in contrast to a complete meditative state where even time as a concept floats away. (In meditation there is only present existence. I exist—minutes, hours, days, pfft! Who knows?) Here, my body is there, asleep, and I can’t move it even if I want to. I have to go back to sleep in order to wake up if I want to move.

Suzi—is that content, or a kind of hyper self-consciousness with everything else shuttered for the moment?

author

Wow! I want to experience that!

I'm not sure I can answer your question, but it does remind me of descriptions of out-of-body experiences. The science behind these experiences is really fascinating (and something I'm planning on writing about).

Aug 21 · Liked by Suzi Travis

Perhaps consciousness arises from a sort of "event horizon" of time, on the other side of which intentionality happens, but leaves no trace.


Such a clear and compelling description of our current understanding. I’m always most fascinated by that tricky concept of self-consciousness, which is so pervasive and familiar — largely because the thought of a consciousness that didn’t have a sense of being me couldn’t truly be me. When a person has advanced Alzheimer’s that detaches them from their sense of who they are, we describe them as “not being themselves.” A colloquial way of saying that an individual’s self-consciousness is a core component of their being, of their day-to-day consciousness.

Endlessly fascinating stuff. 🧠

author

Thanks Rose!

I'm particularly struck by your observation that "the thought of a consciousness that didn't have a sense of being me couldn't truly be me." How do we define "me"? I'll be thinking about that for a while.

And, I 100% share your fascination with these topics.

Aug 7 · Liked by Suzi Travis

A lot of attempts to describe Dao converge on a similar formula: Dao (playing the role of consciousness in this argument) is a vessel that can contain something, like Us, or Time, or The World, or anything else that can be conceptualized. Content then exists objectively, and Dao exists containing these concepts, yet Dao itself cannot be observed.

Panpsychism adherents sometimes bring this up as a way to wrap our minds around the topic.

In computer science, virtualization is also a useful reduction: a computer can host a virtual machine that has no knowledge of its host, yet it “knows” that something hosts it. E.g. resource bottlenecks of the host affect the pathway of the logic in the VM, which will slow down if the host does. A VM can, in turn, host other VMs, and so on.
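To make that host-inference idea concrete, here's a minimal sketch (my illustration, not anything from the comment; the workload size and trial count are arbitrary assumptions). A guest process cannot observe its host directly, but it can time a fixed chunk of its own work; slower or noisier timings are indirect evidence that "something" outside is competing for resources:

```python
import time

def busy_work(n=200_000):
    # A fixed amount of CPU work; only its wall-clock duration varies.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timing_samples(trials=5):
    # The guest never sees the host, but if the host is busy, the same
    # fixed workload takes longer -- an indirect trace of being hosted.
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        busy_work()
        samples.append(time.perf_counter() - start)
    return samples

print(timing_samples())
```

Under contention, the spread between the fastest and slowest sample grows, which is exactly the kind of "knowledge of the host" the analogy describes.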

In the brain, something similar has recently been observed in cortical columns - segments of our cortex that seem to pick tasks and work on them almost in isolation, and then “vote” with the wider strata of the cortex for the winning response.

A vessel-based description of consciousness is, to me, the most promising path to understanding it.

author

Thank you for sharing these intriguing ideas! Your analogy between Dao, virtualisation, and consciousness is fascinating.

Regarding cortical columns, while there are indeed cortical columns in the brain, I'm not sure I'd say they are "picking tasks" and "working on them in isolation". The brain is highly interconnected. Current neuroscience understands these columns as part of interconnected networks, working together rather than in isolation. By 'voting' are you referring to an attention or selection type system in the brain?

The idea of consciousness as a 'vessel' is an interesting framework for discussion. It seems to be a common view in Eastern philosophy. Although not entirely alien to Western philosophies, the idea probably feels like a unique perspective to many Western thinkers.

Aug 8 · Liked by Suzi Travis

Thank you.

On cortical columns and their features I’d recommend the book “A Thousand Brains: A New Theory of Intelligence” by Jeff Hawkins.

It’s a mind-blowing theory and is supported by a lot of new evidence. Cortical columns, as micro-scale analytical modules, seem to hold the answer to how we think. And yes, they seem to actually “vote” - multiple microdecisions compete and seek to form a consensus of sorts that gives us a consistent model of the world.

https://a.co/d/2PzEPKm

author

Thanks for the recommendation!

founding
Aug 8 · Liked by Suzi Travis

I’ve developed strong beliefs regarding each of the topics discussed here. In recent weeks I’ve also been trying to put these basic elements of my position together so that others might understand as well. So far I haven’t been pleased with my efforts however. Hopefully as a blog comment I’ll be able to get the theme right.

Instead of taking the highly evolved human form of consciousness as a starting point (as most seem to try), I think we should go back to fundamental concepts and then work up to human consciousness from an evolutionary perspective.

Consider what evolution needed to do before it had consciousness at its disposal. Here it essentially needed to institute involved code that causes organisms to function reasonably effectively, just as we must do for our robots. In less open environments (like the game of chess) this should have worked reasonably well. In more open (or advanced) circumstances, however, it may be that another tool progressively became more effective than standard coding alone.

Imagine evolution additionally creating something, by means of brain-based physics, which feels good to bad on the basis of that physics. For example, perhaps certain parameters of a neurally produced electromagnetic field exist as such an experiencer. Observe that this in itself would constitute phenomenal experience, or consciousness, though initially in an entirely non-functional or epiphenomenal way. But given the coding challenges of more open environments, perhaps with enough iterations these epiphenomenal experiencers would be given a chance to affect muscle operation, steering things toward their own interest in feeling good rather than bad. Then, if somewhat successful, the experiencer ought to be given more resources in the form of senses from which to assess what was going on.

What I’m suggesting is that evolution couldn’t always find effective code for life forms that reside under more open environments, though by chance it was also able to use phenomenal physics to essentially subcontract certain decisions over to an experiencer of existence. For example, our robots never feel pain. But instead of coding for everything, imagine being able to add pain to various conditions for a robot to experience. Here you’d effectively say to it, “Now beyond just standard coding that you can’t control, you also have incentive to figure out how not to be in pain, and at least some means of implementing your decisions”. Thus your robot effectively becomes purpose driven in a way that simplifies your coding responsibilities (and yes this is a sadistic example).
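A minimal sketch of that incentive idea (my illustration, not Eric's model; the `pain` function and the one-dimensional world are invented assumptions). Instead of hard-coding a "move away from hazards" rule, we give the agent a pain signal and let it pick whichever available action it expects to hurt least:

```python
# Hypothetical pain signal: the closer the agent is to position 0
# (the "hazard"), the more it hurts; beyond position 10 there is no pain.
def pain(position):
    return max(0.0, 10.0 - position)

def choose_action(position, actions=(-1, 0, 1)):
    # No rule names the hazard; the agent simply evaluates each action
    # by the pain it would lead to and picks the least painful one.
    return min(actions, key=lambda a: pain(position + a))

pos = 3
for _ in range(10):
    pos += choose_action(pos)
print(pos, pain(pos))  # the agent drifts away until the pain reaches zero
```

The behaviour "flee the hazard" emerges from the incentive alone, which is the sense in which the incentive simplifies the coding responsibilities.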

I call this perspective a “dual computers” form of function. Here we have a brain which functions as a standard computer, and a consciousness which functions as a phenomenal computer. The standard computer is conventionally fueled. The phenomenal computer however is fueled by the desire to feel good rather than bad. From this model I believe that I can effectively account for virtually all that we are.


Devil's advocate: why do we keep doing things that make us feel bad? Hume located the source of all ethics in emotion. No doubt feeling good or feeling bad has something to do with it. Is the phenomenal computer physical? If not, are we talking about spirit? Or the experience of spirit generated by the brain as an illusion, which can degrade into hallucination?

founding
Aug 9 · Liked by Suzi Travis

Thanks for your interest Terry!

Why do we keep doing things that make us feel bad? Well, in the end I don’t think it’s because we want to feel bad. I think we all try to make ourselves feel better from moment to moment, though there are various psychological details of our nature that sometimes make it seem otherwise. I could get into some of my models here if you have any specific scenarios where a person seems to desire unhappiness. But yes, we do commonly end up doing things that make ourselves feel bad. In the end I tend to classify this under ignorance about how things work, to the point that bad choices get made.

I have liked what I’ve seen from David Hume, though I’ve only considered his ideas casually when they’ve come up in blogs. I like his assertion for example that statements about “what is” can’t be used to derive “moral oughts”. I actually go further than he did because I don’t believe there is anything concrete to our various moral notions. So my assertion is “Is is all there is”. This is to say that there is only the goodness to badness which is experienced, or value itself rather than any rightness to wrongness regarding behavior.

On whether the theorized phenomenal computer is physical, yes, my own metaphysical position demands this. Which is not to say that I could tell you anything about the physics behind it when I first theorized the model over a decade ago. It was only when I learned that consciousness might reside under certain parameters of neurally produced electromagnetic field (around 2020) that I finally grasped a likely causal substrate. (Certain friends would merely snicker when I told them that while I couldn’t tell them what this supposed second computer was made of, I could be more certain it existed than my body itself.) Anyway, though lots of people with consciousness theories call themselves “physicalists”, I don’t know of any other theories that are highly testable in the sense that consciousness is proposed to exist in the form of a well-known aspect of physics. Even for supposed physicalists, unfalsifiability seems to be a near given.

By the way, do you know how to get alerts about new Substack comments in general for a given post? I only seem to be notified for direct replies and likes.


I don’t, Eric. It’s a jungle out there. Substack is getting better but has a way to go. Thanks for replying. You used the word “happiness” as a synonym for “goodness” in your comment. What is the relationship between happiness and goodness? I’m not a Hume scholar either, but somewhere along the line I got the message from him that “outrage” is the source of judgement about immorality—outrage is neither happy nor good as I construe the terms, but Hume might argue outrage (a bad feeling) may actually be “good.” When I try to understand the ideas Suzi brings up, I find myself burrowing more deeply into words. Perhaps the best anyone can do is make peace with themselves. Suzi’s writing has helped me think through things, and you’re helping too. Thanks!

founding
Aug 10 · Liked by Suzi Travis

That’s right Terry, I consider the word “happiness” to be a synonym for “goodness”. But to expand a bit, I’m making a distinction between existence which harbors no personal value, or goodness/badness, and existence which does (such as yours). My position conflicts with standard utilitarianism in the sense that I don’t consider “the greatest happiness for the greatest number” to technically be what’s good. That should instead be what’s good for the noted subject — “the greatest number”. Various subjects from within needn’t have the same interests. Instead I consider good to be composed of the greatest happiness for any defined subject. I also consider this to break down beyond the unitary existence of an individual organism. Here each moment of sensation exists as an individual self. “You” right now aren’t technically the same self as the “you” years or seconds ago, and also years or seconds from now. What binds each momentary past self with your present one, I think, is your memory of them. Then what binds your current self with potential future selves is your current positive sensation of hope and your current negative sensation of worry. This would be the classic “carrot and whip” that motivates us along. In order for fields like psychology to become “harder” forms of science, they’ll need to work out such fundamentals so that larger ideas can be supported or dismissed on that basis. Unfortunately today psychologists seem to build models without much concern about what might lie below. I consider this quite troubling and so would like to help such fields gain solid foundations from which to work.

I did a brief internet search on this “newsletter” thing. Apparently there are some technical differences with blogs. Still to me they sure seem to walk and quack like blogs! By not letting its citizens subscribe to general commentary notifications however, I’d say that Substack has been missing something quite basic. I’ve subscribed to notifications for hundreds of posts over the years. How else might commenters automatically be alerted if older post commentary sparks something interesting once again?

I see that your newsletter is quite active on the topic of education. Some day we should discuss the issue of funding. Unlike for defense and such, I’m not convinced that a government run business monopoly should offer the best path for the education of citizens.

On Suzi’s project, it’s actually the best work I’ve seen. And she doesn’t just need people like us for direct funding, but to build a thriving community. With podcasts and videos you don’t really get a community of discussion where people get to know each other. Furthermore since she provides a recording of her newsletter, it’s already somewhat like a podcast. And it seems to me that the format should let her expand to recorded interviews or even video segments if she likes. I’m most interested, however, in the potential to have quality discussions with reasonable people. Whether warranted or not, this is the sort of thing that brings me hope.

author

Thanks Eric! You've articulated my hopes for this little newsletter perfectly. Intelligent conversation with a wonderful community sounds perfect. I'm glad you and Terry (and everyone else) are here!

author

Thanks Terry! You're helping me think too :)

author

Hi Eric!

I like the idea of starting from an evolutionary perspective! Some of my favourite work is done in simple systems like fruit flies.

Does the brain-based physics need to be electromagnetic fields? Could the 'feels good to bad' be produced with varying amounts of a neurotransmitter? Could it be chemical rather than electromagnetic? This would align with our current neuroscientific understanding of feelings, moods, and reward-seeking behaviour.

Your "dual computers" model presents an interesting idea. The idea of evolution creating a phenomenal experience alongside standard coding reminds me of the perception action cycle in neuroscience. I'm particularly interested in your suggestion that this might have come about as a solution to challenges faced in more open environments. By open environments, I take this to mean environments with more complexity too?

It's an interesting idea you have! I'm curious to know your thoughts on how this model might account for things like self-awareness and abstract thoughts?

founding

Hi Suzi!

I’m not exactly claiming that an electromagnetic field is the only element of brain function that could exist as consciousness. That’s just the only element that currently makes causal sense to me. Note that I was armed with my dual computers model of brain function even before I began blogging in 2014. It’s a psychology-based model with boxes for phenomenal input, phenomenal processing, and phenomenal output, including non-conscious brain function as well. But it doesn’t get into any biological brain mechanisms whatsoever. My functional computationalist friends, however, would then ask me what this supposed second computer was made of. All I could tell them was that I knew it existed more certainly than my body itself, but no, I couldn’t put my finger on what specifically this computer might be made of. Given my naturalism I merely presumed something associated with brain function. They were less than impressed.

Months later in discussions about Searle’s Chinese room, it dawned on me that they believed something which seemed seriously funky. Thus I developed my thumb pain thought experiment to show that their position mandates that an experiencer of thumb pain must exist if the right marks on paper happen to be used to create the right other marks on paper. I told them this had to be wrong since in a causal world information should only exist with respect to what’s informed by that information. But here I was still left with a major question. If the brain algorithmically processes whacked thumb information into new information, then what does this new information inform to exist as the experiencer of a whacked thumb? Though I still didn’t know, at least I was now thinking about potential possibilities in an effective way. What element of the brain might be dynamic enough to actually use such information and so create all that I see, hear, think, and so on?

Could neurotransmitters themselves be what’s informed to exist as that consciousness? To me, neurotransmitters seem more like what should do the informing rather than exist as what’s informed. The binding problem that you’ve recently written about argues this, for example. How might separate neural function create a unified consciousness given such disparate dynamics? Or let’s say that consciousness instead exists as some sort of brain-produced chemical structure. Sure, in a sense substances might be considered “bound”. But how might any chemical substance accept massive visual information and thus exist as something that sees what I see from moment to moment? Wouldn’t such a substance need to change continually in appropriate ways as new information informs it? And shouldn’t it need to do so far more quickly than DNA is informed by what enters its domain? The “substance” possibility also didn’t make much sense to me.

In December 2019, however, some months after I’d developed my thought experiment, a friend posted about an EMF consciousness theory from Susan Pockett. This idea blew me away! Couldn’t a neurally produced electromagnetic field potentially be dynamic enough to exist as the element that neurons inform to exist as my consciousness? The Wikipedia article on the matter then let me know about Johnjoe McFadden’s more complete (and more natural) version.

In your post you mentioned how neuroscientists sidestep all sorts of speculative consciousness questions by trying to limit things to neural correlates. And indeed, that does sound like a great idea. As I understand it however, NCCs have been extremely difficult for neuroscientists to actually find. Correct me if I’m wrong, but my understanding is that the only reasonable one found so far is the synchronous firing of neurons? Do you know of any other NCCs?

Regardless, McFadden’s model depends upon such synchronous firing in order for the combined electromagnetic energies of firing to reach a level that physics (presumably) mandates to exist as consciousness itself. A higher energy level also means that the firing of individual neurons which isn’t associated shouldn’t substantially alter consciousness in distorting ways, given its low energy. Then consider “representational drift” — apparently it’s been found that neurons may fire for one phenomenon while different neurons fire in subsequent instances of that same essential phenomenon. Perhaps this is because it’s actually the EM field created by neural firing that exists as consciousness, rather than the specific neurons which fire?
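(An illustrative aside, not McFadden's own math: the energy argument behind synchrony can be sketched numerically. Oscillators firing in phase sum coherently to an amplitude proportional to their number N, while oscillators with random phases largely cancel, summing to roughly √N. A toy Python sketch, with all names hypothetical:)

```python
import math
import random

def field_amplitude(n_neurons, synchronous, steps=1000, seed=0):
    """Peak amplitude of the summed 'field' of n unit oscillators.

    Synchronous neurons all share one phase; asynchronous neurons
    each get a random phase, so their contributions mostly cancel.
    """
    rng = random.Random(seed)
    if synchronous:
        phases = [0.0] * n_neurons
    else:
        phases = [rng.uniform(0, 2 * math.pi) for _ in range(n_neurons)]
    peak = 0.0
    for t in range(steps):
        x = 2 * math.pi * t / steps
        total = sum(math.sin(x + p) for p in phases)
        peak = max(peak, abs(total))
    return peak

print(field_amplitude(100, synchronous=True))   # coherent sum: peak of 100
print(field_amplitude(100, synchronous=False))  # random phases: roughly sqrt(100)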

The coolest thing about his model, I think, is where it goes full circle so that EMF consciousness can have an effect on the world. Here McFadden discusses ephaptic coupling — the conscious decision to act (and mind you, I’m still referring to an energy field here) effectively feeds back to incite neural function for corresponding muscle operation.

Regarding the “openness” of a given environment, I like to use the game of chess as a quite limited example of the idea — in this game there aren’t all that many options for what a player can do each turn. The opposite would be a more open environment. And yes, under more open environments I’d think exponentially greater programming complexity would be required. Thus my point is that instead of programming alone, under more open forms of function evolution may have found it more effective to implement purpose-based function (also known as “teleology”). Here it should have been able to effectively punish and reward life on the basis of various appropriate conditions (such as injury), and then let the subjects themselves try to figure out what to do, given that they don’t want to be punished and do want to be rewarded.
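(The contrast between enumerated rules and punish/reward teleology can be sketched as a minimal value-learning loop — a toy illustration of the idea, not anyone's actual model; all names are hypothetical:)

```python
import random

def teleological_agent(env_reward, actions, trials=500, seed=1):
    """Toy 'purpose-based' agent: no pre-programmed policy, just a drive
    to repeat whatever felt good and avoid whatever felt bad."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}   # learned feel-good estimate per action
    for _ in range(trials):
        # Mostly repeat the best-feeling action, occasionally explore.
        if rng.random() > 0.1:
            a = max(value, key=value.get)
        else:
            a = rng.choice(actions)
        r = env_reward(a)               # the environment punishes or rewards
        value[a] += 0.1 * (r - value[a])
    return max(value, key=value.get)

# An 'open' situation the programmer never wrote explicit rules for:
best = teleological_agent(lambda a: 1.0 if a == "forage" else -0.5,
                          ["forage", "freeze", "flee"])
print(best)  # the agent converges on "forage" through reward alone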

On “self”, I probably use this word differently than most. In my version, self simply exists as feeling good/bad. So one will inherently be aware of their self to the extent that they feel good/bad, and specifically to the magnitude of that feeling. (Here self can increase and decrease in the “state” sense you’ve mentioned, and with all sorts of associated distortions regarding perception and thought.) Thus the elimination of feeling good/bad should essentially turn consciousness off, somewhat like halting the voltage differentials that drive the function of a standard computer. As I mentioned to Terry, in this model the instantaneous self is joined to past selves through memory of the past and to future selves through anticipation. So when you don’t remember your past, you become disconnected from those former selves, as is standard with Alzheimer’s disease. And if you have no hopes or worries about the future (apathy), you also become disconnected from potential future selves and so tend not to work for the interests of those selves.

Actually, after going through your essay again, maybe my conception of self isn’t that far from what you were referring to? Perhaps it’s just an extremely “content-based” or “intentional” perspective, with that content or intention being to feel good rather than bad? This would be the teleology or purpose that I mentioned. I don’t consider this feature to be a general blessing, however. Yes, some of us have wonderful lives, though it seems to me that far too many sentient beings instead have horrible lives.

Does my model have anything interesting to say about abstract thought? We humans have language at our disposal and this seems to permit us to think about things abstractly. So maybe this model is too basic to address abstract thinking?
