74 Comments
Comment deleted (Jun 17, edited)
Suzi Travis

Interesting idea — what exactly is the life form inside the host here?

Is it living things inside living things all the way down?

Comment deleted (Jun 18)
Mark Slight

You asked what I thought about this!

I think you know what sort of narrow-minded Dennett-brainwashed response I am going to give!

I think Suzi will agree that her series does not kill off functionalism. I think she's sympathetic to functionalism.

I do not think we need to break any habits or think outside any boxes to solve the hard problem, because I think there is no hard problem, other than as a psychological problem. We don't need metaphysics to solve psychological problems.

As before, I don't see the relevance of quantum effects at all. What does it matter what quantum effects take place in microtubules, or some organelle, or wherever? Whatever science may uncover there, it is going to be a story of quantum EFFECTS. Deterministic, or probabilistic via the Born rule, it doesn't matter one bit. In any case, it is MECHANISM. This is the core claim of functionalism.

If you have a system with causes and effects--QM, Newtonian, any mix thereof, or any other kind of physics--then you can describe it mathematically (whether the description has been discovered or not, it is in principle possible). If you can't describe it mathematically, then it's not a physical system.

Put as much QM in the brain as you want! It has no bearing whatsoever on the validity of functionalism.

Whatever you might discover about microtubules, when you have the complete math figured out, do this: put the math in a computer in a robot. Have the computer calculate, and thereby instantiate, the same MECHANISMS as the microtubules instantiate. You will end up with a robot that behaves exactly like a human being. It will laugh and cry and write anti-materialistic posts on Substack. Just as a QM universe can be simulated--so can a QM brain.

You're perhaps going to claim the simulation is not conscious. I have never seen a good argument for that. The robot is going to defend its consciousness as well as you can. If it doesn't know it's a robot, it may even claim that robots cannot be conscious. If there were such a "copy" of me, I'd immediately grant it consciousness. If you wouldn't, I'd like to know why.

Furthermore, if you don't think the robot is conscious, you must also maintain that we cannot live in a simulation. (I don't believe that we are, but not for that reason).

First Cause

Thanks for your input. Can you point me to a mathematical model that can predict what new life forms will evolve in the biosphere? I mean, after all, that's what my comment is all about: think evolution.

Mark Slight

Here is a mathematical model that can predict the weather exactly one year from now: the core theory. If the core theory is wrong, which it might be, then the claim is that some other theory, which can be mathematically described, is right. It is not the case that NO theory truly describes the physical world.

Can I predict the weather next year? hell no. Can the total compute of all electronics, and all our brains, predict the weather next year? hell no. Will it ever happen? hell no.

It doesn't matter if the core theory is correct, almost correct, or needs to be replaced by some other math. What matters is that matter exists, whether simulated or not (doesn't matter), and that it obeys the laws of physics, whatever they may be.

The claim is this: whatever the weather exactly one year from now may be, it can, IN PRINCIPLE (italics would be politer but I can't do them here, sorry), be explained by mathematical models. The process that leads from the present state of the universe to that future state of the universe can be mathematically described. WE cannot describe it, but there are no particles doing funny business.

If the world is not deterministic, then the math can give us probability distributions. As such, we cannot predict exactly what will happen, but we can assign probabilities. And then we can predict this: events will be distributed according to the probabilities that are mathematically describable. No funny business entering the picture.

Weather or life, same thing. If we knew the right math and the complete physical state of the universe, and it's deterministic, then in principle we can predict new species. If it's probabilistic, we can predict which species are probable, and we can predict all possible outcomes, and whatever new species may emerge, they will be among the ones we predicted as possible. And whatever you respond to this comment would also have been predicted!

Unfortunately, we'll never have that compute, or that information, and it is in principle impossible for it to be contained within the universe that it is supposed to predict. The computation changes the outcome.

First Cause

You can buy into the dogma of a mechanical world view if you want; it's certainly your choice. This is where I usually quote my favorite response to your kind of fanaticism:

Whenever one is convinced by a rational argument one does not know more, but one knows less. This is because the door to any other possible explanation is slammed shut. Keep the door open Mark, even if it's just a crack....

Mike Smith

This has long been my concern with the word "representation": it seems to imply something being presented to an inner observer (re-presentation). If we use words like "schema", "model", or even "reaction cluster" or "early dispositional pattern", it seems more evident that this is actually part of the processing of a system, something we can imagine happening in a computer or dynamical system.

It doesn't surprise me that everyone is using "representation" to mean different things, since everyone is using words like "consciousness", "mind", or "emotion" to mean different things as well, often even the same person in the same conversation. This language ambiguity, I think, offers the impression of deep mysteries. When we use more precise language, mysteries remain, but they seem a lot less intractable, more conducive to scientific investigation.

Interesting post, as always Suzi!

Suzi Travis

Thanks, Mike!

Eric Borg

This essay may seem extra complicated because Suzi is presenting a problem without presenting a solution. If broken down with a credible solution however, the question of meaning can become far more simple.

Notice that there is no problem of meaning for the computers that we build. This is because we presume that nothing means anything to them — they just accept input information, process it algorithmically, and the processed information tends to inform various causally appropriate instruments. Processed keyboard information may inform a computer screen for example. If the data doesn’t get to the screen however then there won’t be anything for it to inform and so it won’t be “information” but rather just “stuff”.

Our brains function like computers as well, and yet we know that meaning also results. So how might this occur without us resorting to a homunculus and magical infinite regress? This mandates that there must be something that processed brain information informs to exist as such, and one that ultimately feeds back to affect the brain for a fully causal loop. So what might processed brain information be informing to exist as meaning, or effectively what ultimately sees, hears, feels, thinks, and so on? The only instrument I know of that might have enough informational bandwidth to exist as all that, is the electromagnetic field created by the firing of neurons. Thus a potential solution for scientists to empirically test rather than perpetually ignore.

Suzi Travis

Hey Eric,

I'm still having a hard time seeing how this move avoids the problem. How does it not just shift the homunculus one level over? If the EM field is where meaning “lives,” wouldn't we still have to explain how meaning arises there? How do EM fields avoid becoming just another kind of “vehicle” that needs a “consumer”? Does it help explain meaning, or just relocate the mystery?

Eric Borg

Well as you know Suzi, the homunculus fallacy is about infinite regress — a little man in the head with meaning that has a little man in its head with meaning… and so on forever. What I’m talking about however is a potential final solution. It’s not like electromagnetic fields have heads with electromagnetic fields in them that have heads and so on forever. If this theory becomes empirically validated then we’d have a physics based ending to all such nonsense. In that case we’d be talking about a final vehicle that’s also a final consumer. But you’re right that there’d be more to discover. Which parameters constitute pain, vision, hearing, joy, fear, and so on? That would be something for scientists to experimentally figure out in the lab.

In some sense I’d expect this to give us the same situation we have in quantum mechanics. We don’t know why it seems impossible for us to measure a particle or wave with perfect certainty, beyond just that it’s both. Indeed, we find this magical. Similarly for the mind there’s a famous “hard problem” that should remain regardless of how much we learn about the electromagnetically bound nature of consciousness. My hope however is that this would provide a catalyst from which to reinvent the poorest areas of academia, like psychology, psychiatry, sociology, and yes in some sense even philosophy. And what refrain would be provided to those scientists who still dream of the days of Dennett when it was thought that our minds would some day be uploaded to standard computers? “SHUT UP AND CALCULATE!”

Suzi Travis

Hmmm.. interesting.

I'd push back on the quantum mechanics analogy. The Heisenberg uncertainty principle is not a mysterious both-and we throw up our hands at; it’s a mathematically precise trade-off baked into the way conjugate variables (position–momentum, time–energy) are defined. Once you write down the wave-function formalism, the limits fall out of the algebra. In other words, physicists can calculate the limits because the theory tells us exactly where they come from.
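
For reference, the standard textbook form of that trade-off (the general Robertson relation specialised to position and momentum; nothing here beyond the usual statement):

```latex
% Robertson relation: for any two observables A and B,
%   \sigma_A \, \sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{A},\hat{B}] \rangle\bigr|
% With position and momentum, [\hat{x},\hat{p}] = i\hbar, which gives the familiar
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```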

By contrast, the homunculus regress is an explanatory gap, not a measurement one: each time we assign meaning to a lower-level part, we’ve simply hidden a smaller interpreter in the model. There’s no theorem that tells us where that chain must bottom out -- only the uneasy feeling that we haven’t really explained anything yet.

I think there is something interesting here though. We do like to explain things by going smaller. But that strategy only works if the pieces still work on their own. Many phenomena we are interested in (meaning, agency, value) are not decomposable in that way. The parts don’t keep the properties we see in the whole.

So if we propose electromagnetic fields are the final vehicle and consumer, we assume the whole causal story is already in the part. I worry that what appears to be ‘proper parts’ from the point of view of description may not have properties that can be described without reference to other features of the whole they compose. I'm not convinced yet that the final solution is “finished” at the EM-field level. I don't think this slays the homunculus beast.

I still think our best move is to stop thinking about meaning as a property of stuff. To me, this is where the confusion (and spookiness) enters the picture.

Eric Borg

Well analogies are inherently only themes so we needn’t follow them all that far, though should certainly object when they seem unhelpful or even misleading. And should EMF consciousness be empirically validated well enough, I did want to get in that “shut up and calculate!” dig at the modern day leader that would thus be exposed as magical. I don’t recall using this particular analogy before, though to me it does at least still seem reasonable. This is all tangential to the topic itself, but I don’t mind testing my grasp of physics against yours Suzi. Perhaps I’ve got some things wrong that you can help me with.

The QM explanatory gap as I see it is that the more precisely we measure a particle in a given respect, the more it confounds us by functioning like a wave. Vice versa too. I don’t consider this to be a measurement gap because I suspect that such measurements have been top notch. Instead there should be an explanatory gap in the sense that we don’t quite grasp what’s happening. Either reality is ultimately non-deterministic and thus QM reflects ontological magic, or reality is ultimately deterministic though we don’t grasp how quantum superposition, tunneling, and entanglement permit perfectly determined outcomes. I’m metaphysically bound to the natural option though I’m not sure if science will ever find a good way to fill this gap.

On the empirical validation and scientific acceptance of EMF consciousness, I’m plainly stating that this should not “fill the gap”. We’d still wonder why certain parameters of EMF, and even well documented ones that are shown to reside as the vehicle/consumer of pain, vision, joy, or whatever, would do so. In the end we might have to just “shut up and calculate”.

I do consider this different from the homunculus fallacy however. Beyond all the popular but unfalsifiable consciousness proposals that would be left behind, that business should end as well. Observe that if we could give someone an involved conscious experience of vision, hearing, taste, and so on by inducing novel exogenous energies that mimic the ones produced by the brains of people who are actually having such experiences, then it’s not like we’d need to look for a deeper “EMF inside the EMF” solution. Furthermore I’m only talking about inducing such a field in someone’s head because then the person could report having such an experience. Theoretically the field itself would be the whole of the consciousness and so the experiencer would exist that way even in the air without a human host.

We shouldn’t say that meaning is a property of stuff? I can go along with that since I consider there to be something more basic that is a property of stuff. Consider the very first thing that could feel good/bad on this planet. Value would have been a property of it no less than it’s a property of me, and even if primal and epiphenomenal. Indeed I consider value to ultimately be what each of us are. That’s also a great segue to my second post!

https://eborg760.substack.com/p/i-value-therefore-i-am

Suzi Travis

That’s really interesting. I still wonder whether before we “shut up and calculate,” we need to ask a different sort of question: what keeps the field going? If an electromagnetic field is going to count as conscious -- not just as a clever correlate -- we’d need to show that its pattern isn’t just floating there, but is shaped by the system’s own history and loops back to help keep that system intact. Otherwise, we haven’t solved the mystery -- we’ve just moved it from neurons to fields. I still can't see how this gets rid of the homunculus problem. Unless the field is doing the work of constraining the system in a way that matters to its own survival, we’re still left looking for the part that makes the pattern mean something.

Eric Borg

Yes, before people “shut up and calculate” they’d want strong empirical evidence that any consciousness proposal does facilitate brain feedback for a full causal loop of function. But that’s indeed what McFadden proposes. It’s well known that electromagnetic fields aren’t just caused by neural firing, but can also alter such firing in novel ways. So here the brain would create EMF consciousness and EMF consciousness would affect the brain for a full feedback loop. There’s plenty that such an answer would not yet answer, though if empirically validated well enough there’d be no homunculus fallacy given both input and output rather than a need to resort to any more levels, let alone infinite regress. Furthermore instead of just standard word play there’d be evidence for scientists to interpret. So this would essentially be like the early days of relativity, quantum mechanics, and so on. For fields that have not yet even come across “Newtonian” understandings, such a thing would clearly be monumental.

I’ve now gone through that video you suggested in your appendix 16. Very enjoyable! John Krakauer impressed me most given how adamant he was that it should be detrimental at this early stage for academia to “dumb down” representational language with casual talk of neuron function. It didn’t surprise me that he’d challenge Dennett, a person who had no problem mixing consciousness talk with unsupported brain talk. No one on this panel seemed to entertain such notions, though Krakauer was most adamant by far. Unlike myself however he didn’t seem to have a potential solution. I’ll now send him an email to ask if he’s thought about McFadden’s consciousness proposal, or perhaps would be interested in talking about it? I’ll also link to this conversation in case he wants to see what we’ve been up to over here.

Mark Slight

As always Eric, your test shouldn't work because any EMF IS in every moment computable merely from the positions of all ions in the brain. The EMF is not an additional entity. The EMF is simply directly correlated to whatever computations the neurons are doing.
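
Just to be concrete about "computable from the positions", here's a toy sketch (electrostatics only, with made-up ion positions and charges; a real brain field also has magnetic and time-varying parts, but those too are fixed by the charges and currents):

```python
import numpy as np

K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def e_field(point, charges, positions):
    """Electric field at `point`: plain Coulomb-law superposition over the charges."""
    field = np.zeros(3)
    for q, pos in zip(charges, positions):
        r = point - pos
        field += K * q * r / np.linalg.norm(r) ** 3
    return field

# Three made-up ions (charges in coulombs, positions in metres):
charges = [1.6e-19, 1.6e-19, -1.6e-19]
positions = [np.array([0.0, 0.0, 0.0]),
             np.array([1e-9, 0.0, 0.0]),
             np.array([0.0, 2e-9, 0.0])]

print(e_field(np.array([1e-9, 1e-9, 1e-9]), charges, positions))
```

Give me the particles and I give you the field. There is nothing extra to add.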

Think about it like a magnetic field and magnets. You cannot have magnets without magnetic fields. You cannot change the magnetic field without some change to the position of, or forces on, the magnet. They're all interconnected. The magnetic field is part of what a magnet is. They are not two different entities.

Likewise, ions have EMFs, or else they are not ions. You cannot change the EMF without affecting ion movement and neuron depolarisations. If you want a decent test, you need to update it!

Ignoring that your conceptions of the EMF and its relation to neurons are deeply flawed, and granting that they make sense: however your EMF works, we can just apply the same tests to simulated EMFs. Then we produce the same behaviour, perfectly, in a robot. We can also apply the same EMF tests in the simulated environment, and they would produce the same results.

As you know, you have "bitten the bullet" and said that robots that behave exactly like humans are conscious. So it's in principle (not in reality) simple to simulate how a brain computer would interact with an EMF and thus build a robot that by virtue of behaving exactly like a human is conscious.

Magical conceptions of meaning require magical computers or magical EMFs. Non-magical meaning requires non-magical computation or non-magical EMFs (that are simply part of the computation).

You gotta stop accusing us of believing in magic while you're painting a fantasy magical EMF picture, Eric!

James Cross

"any EMF IS in every moment computable merely from the positions of all ions in the brain"

It's a chaotic system built on top of a constantly changing, self-modifying initial state.

The ions are in motion. You're not dealing with a single position in a single moment. You are dealing with motion, not only with ions through membranes, but also with firings in the neural circuit paths. Timings appear to be important. Speed of firings can represent intensity of stimuli. Direction of firings through traveling waves can be predictive for motion in the visual cortex.

The biggest problem I have with McFadden's EM field theory is he hasn't fleshed out enough how this fits into his theory. I don't think consciousness is like frames of a movie built from instantaneous flashes of EM fields.

Let me quote from something I've been writing:

"The philosopher Robert Sokolowski draws on Husserl’s idea that perception is an active constitution of meaning and argues that the mind synthesizes diverse sensory inputs into one object — not because it’s all happening simultaneously at a neuron level, but because consciousness has the power to gather and unify experiences across time. In other words, we do not experience fragmented blips, but flowing continuity — because consciousness actively holds past and anticipates future moments. Consciousness may not be a single experience but many experiences that happen around the same time become integrated by consciousness itself. "

First Cause

I'm looking forward to your next essay Jim. Don't know if you follow any of my comments, but my own model is still a work in progress.....

James Cross

Somehow I missed your comment, but the next one is still to come.

Mark Slight

"The ions are in motion. You're not dealing with a single position in a single moment. You are dealing with motion, not only with ions through membranes, but also with firings in the neural circuit paths. Timings appear to be important. Speed of firings can represent intensity of stimuli. Direction of firings through traveling waves can be predictive for motion in the visual cortex."

Oh, I agree 100% that neither the mind nor brain physiology can be meaningfully analysed by looking at a time slice. I also agree that consciousness does not occur at any "frames per second", although certain aspects of it might.

My point is that to the extent that there is such a thing as a brain EMF, the state of that EMF is 100% determined by the positions (and perhaps spin and other stuff) of all the charged particles in the brain. As such, it is not an "extra thing" or "medium" that could host anything at all. No information could be presented "to" it.

James Cross

I think the argument is that the EM Field is the medium. That it might be determined or computable in theory from ion positions and states is beside the point.

My problem is I don't think we find coherent EM fields over large parts of the brain. Rather we find pockets of fields at any point in time, but the pockets seem organized across brain regions. So if the EM Field Theory is right, it seems something more is required.

Mark Slight

Yeah, I totally get that the EM field is supposed to be the medium. My point is that if the EM field is the medium, then the charged particles are the medium, because that is what it is to be a charged or polar particle (to electromagnetically influence the surroundings). That is what an EMF is. It doesn't make sense to say that the EMF is the medium but the particles are not, because the EMF IS the particles. Particles ARE as particles DO. The EMF is what the particles DO (part of it).

Computability cannot be beside the point, unless you want to get into epiphenomenalism (which Eric denies). If consciousness resides in the EMF (I don't think that's even a viable hypothesis, for the reasons above, but I grant that for the present purposes), then a robot brain could simply compute how neurons and EMFs interact, and as such behave exactly as if it were conscious. As such, it IS conscious, because epiphenomenalism is a bad, bad, idea (which Eric thankfully agrees to). Eric agrees that this makes it conscious.

The only other way out, besides epiphenomenalism, which is a terrible way out, is to claim that the EMF values and interactions are not computable. This is an equally terrible way out--this poses that the EMF is magic, not mathematically describable.

James Cross

The particle of an EM field is a virtual photon. It isn't free electrons, calcium, sodium, or potassium ions. Those are not the EM field. The movement of those things produces the EM field.

First Cause

How about you Mark, what do you think of my ideas on evolution that I presented to Suzi?

Drew Raybold

A full response would be many times longer than your article, but here are a few observations on some of the points being raised as I go through the article with the provisional working assumption that, to a first approximation, the consumer of mental representations is the mind.

Yes, that is a very-far-from-complete, and therefore unsatisfactory answer, but that, I suggest, is not surprising, given our current state of knowledge about minds in general, which is also very-far-from-complete, and therefore unsatisfactory.

I am surprised to read that there’s a noticeable reluctance in neuroscience to say that brain activity misrepresents anything. Any neuroscientist in that position should contemplate whether optical and aural illusions, or phantom limb pain, are accurate representations.

From this, it would follow that mental representations fail to satisfy the requirements to be natural signs - if this is not a false dichotomy in the first place.

The objection via analogy to digestion falls flat. The brain is an information-processing organ, where representations are likely to feature prominently in any understanding of them, while the gut is not (or only in a minimal and accidental way, should anyone want to get pedantic about it.) It should surprise no-one that the brain is not like the gut.

I can readily imagine Dennett thundering "Cartesian theater!" in response to the Koch/Crick quote - does anyone know if he did? Taken out of context, it appears to be a manifestation of just that, but I suspect that, in context, this is merely an empirical observation about information flow and levels of abstraction within brains.

There is no dichotomy between message-passing machines and dynamical systems - in many information processing systems, these are just alternative abstract views.

For what it is worth, the birds are loudly chirping outside my window as I write. It is not only well established that these sounds mean something to the species that make them, but also that, in some cases, individuals of other non-human species also grasp (in the appropriate sense) their 'intended' meaning, and even fake them for their own ends. From this, it seems likely to me that the puzzle of representation in brains and minds will be explained without any specific reference to human consciousness.

Suzi Travis

Hey Drew!

On the point about misrepresentation: good point, illusions and phantom limbs are great examples. I've been thinking a bit about this reluctance in neuroscience to say brain activity misrepresents. This was a finding and claim made in the survey I mentioned, but I'm not sure it's quite true. I think there are some situations where we'd shy away from that sort of framing, but in other situations, saying the brain misrepresents is exactly the framing we'd use. I'll run an informal poll of friends and see if they agree.

Yes exactly, the gut analogy is odd for that reason. But if we say the brain is different, then the question becomes how is it different. Is it fundamentally not an input-output machine in the usual sense?

Crick and Koch do anticipate the homunculus objection and quote Attneave’s "In Defense of Homunculi" in an effort to block the regress. Whether it works is, well, debated.

On the dynamical systems point, yes good point. My wording wasn't the best here. By message-passing machine I was thinking of a passive, feedforward system.

Drew Raybold

Hi Suzi, and thanks for your reply. I will say a few words about the differences between the brain and the gut, particularly as it is also relevant to a common anti-computationalism argument.

Personally, I'm strongly inclined towards computationalism insofar as I suppose that a sufficiently fast digital computer running a sufficiently-accurate simulation of a human neural system would have a mind (at least if connected (physically or virtually) to appropriate peripherals.) This possibility is one of the objections raised against Searle's Chinese Room, and one of his responses has been to point out that a computer simulation of a rainstorm will not get you wet (Bernardo Kastrup makes the same point by saying that a computer simulation of a kidney will not pee on your desk.)

These analogies are, of course, correct, but here's another, equally correct one: a computer simulation of an Enigma machine really will encrypt and decrypt messages indistinguishably from an actual machine.
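
To make that concrete, here is a drastically simplified single-rotor sketch (one historical rotor and reflector wiring, none of the rest of the machine; a toy, not an Enigma simulator). The point is only that the simulated cipher really does encrypt, and the same settings really do decrypt:

```python
import string

ALPHABET = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"      # rotor I wiring
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # reflector B wiring (an involution)

def cipher(text, start=0):
    out, pos = [], start
    for ch in text.upper():
        if ch not in ALPHABET:
            out.append(ch)                  # pass spaces/punctuation through
            continue
        pos = (pos + 1) % 26                # rotor steps before each letter
        i = (ALPHABET.index(ch) + pos) % 26
        j = ALPHABET.index(ROTOR[i])        # forward through the rotor
        j = ALPHABET.index(REFLECTOR[j])    # bounce off the reflector
        j = ROTOR.index(ALPHABET[j])        # back through the rotor
        out.append(ALPHABET[(j - pos) % 26])
    return "".join(out)

msg = "ATTACK AT DAWN"
enc = cipher(msg)
assert cipher(enc) == msg   # the same settings decrypt what they encrypted
```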

Which (if any) of these analogies are relevant? Clearly (I would say) it's the Enigma machine, on the grounds that the utility of an organism's neural system is a consequence of it being an information processing organ as opposed to one which performs material transformations, even though the neural system does, in the course of its operation, perform some material transformations and consumes energy. This is not, as far as I am aware, a controversial view of what the neural system does.

Suzi Travis

I see! This was helpful thank you.

For fun, I will present an opposing view.

What makes the code running on an Enigma count as encryption is an interpretive mapping somebody set up in advance. The very same hardware could -- tomorrow -- be mapped onto chess or weather. Computation is therefore an interpretive gloss on a mechanism. Computation in this sense can be seen as borrowed meaning.

What happens in the brain matters to the organism. It helps stave off entropy -- repairing tissue, steering limbs, finding calories. That nested, self-maintaining organisation is what constrains our system. A digital copy that never has to burn glucose, pump ions, or rebuild its own channels lacks the causal structure that gives brain activity its "purpose".

A rain-storm simulation won’t wet your shoes because the program leaves out the very molecular constraints that make water do work. We must be careful, the argument goes, not to confuse the map for the territory. Consciousness sits on the “wet side” of the map/territory divide. Logic can mirror causal sequences, but it cannot generate all of them in silico.

Both the gut and the brain are autogenic systems: they harness energy to maintain the very boundary conditions that keep them intact. Strip away that noisy, leaky, real-time thermodynamic struggle and you also strip away the higher-order reciprocal relationship between self-organising processes that lets their signals carry meaning.

If we can build a machine whose own survival hinges on sustaining a hierarchy of self-generated limitations or restrictions -- where information processing is in the service of keeping the system far from equilibrium -- then, the argument goes, maybe we could call it minded. Until then, simulating a brain on a computer is not actually simulating what a brain does.

Drew Raybold

Hi Suzi, thanks for playing devil's advocate in this matter. You have accurately portrayed the nature of much of the discussion of this matter.

Let's start with the claim that, as a digital computer can be programmed to perform a host of different computations, computation is therefore an interpretive gloss on a mechanism. Given the semantic sloppiness of language, I have little doubt that some justification could be made for this inference, but it is hardly the only thing that could be said about computation. In particular, it is completely uninformative about both what computation is and what computers do (I have no doubt that a great many pre-19th-century people could understand digital computation given adequate explanation, but calling it an interpretive gloss on a mechanism would not even be a useful starting point.)

This is, of course, more or less the set-up for the appeal to intuition in what's left of Searle's anti-computationalism after you strip away the non-sequiturs, but as nothing else here depends on it, I will, for brevity's sake, avoid going down that rabbit hole.

Returning to the argument set out here, we move on to the observation that an organism's brain helps (most of the time!) in preserving its physical integrity and functioning. Again, this is correct, but only minimally informative about what brains do.

When we investigate how a brain (or, more precisely, a nervous system) participates in these activities, we find we can draw a well-defined boundary around it, delineated not only materially, but also functionally in terms of information flow. Again, information flow is not the only thing one can say about what happens at that boundary, but in this case, I would argue that characterizing it as such is highly informative, at least (but not only) in this sense: any attempt to explain how a brain participates in preserving its host organism's physical integrity and functioning without acknowledging the significance of this information flow would be as making-it-difficult-for-no-reason as trying to understand and explain biological reproduction while insisting genes play no part.

Again, noting that both the gut and the brain are autogenic systems is at the same time correct and unhelpful: in this case, we are simply abstracting away everything that makes a difference. It would be a strategy suited for someone who wants an excuse to avoid looking through the telescope.

If computationalism were the thesis that a digital computer running a sufficiently-accurate simulation of an organism would be alive, then these objections could well be valid, but it is not that (see my previous post.) (To be clear, my enigma-machine analogy is not intended to justify computationalism; it is intended to illustrate how unsatisfactory Searle's and Kastrup's analogies are.)

The final paragraph hangs on the question of whether we can usefully contemplate minds independently of the bodies which instantiate them. Personally, I think we can - it is a useful abstraction. Note that if we cannot, then most of the philosophy of mind may well have been ineffective, given the widespread indifference in that field towards contemplating the biology of mind-possessing organisms.

Suzi Travis

Hey Drew! Thanks for unpacking the “interpretive gloss” point -- framing information flow in a bounded nervous system as something that does real explanatory work, not just rhetorical lifting, is a helpful way to ground the discussion.

I’m still left wondering, though: once we lift the brain out of the body, don’t we lose sight of the fact that brains evolved in the context of bodies --through selection for traits that helped whole organisms persist? Strip away that context, and it’s harder to say what those signals are “for,” or why they should matter at all.

Drew Raybold

Hi Suzi, I completely agree that a nervous system can only be fully understood in the context of, and with all due reference to, the body of which it is part and how they evolved in concert. (Furthermore, given the importance of language to humans, that scope has to be extended to society.)

At the same time, the combination of abstraction and the separation of concerns has proven to be a very general and effective approach to understanding complex systems, and essentially all of our biological knowledge takes this form - for example, I am fairly sure one can draw as many parallels between the circulatory system and the gut as one can between the gut and the brain, and neither can function for long without the other, yet it is not problematical to explain either one largely independently of the other (it is actually very helpful.) This does not, of course, guarantee that the same can be said of the nervous system, but there is as yet no good evidence that it cannot be done.

John

It’s been an enjoyable jaunt :)

Thanks for this series, Suzi.

Enjoy your conference. All the best, John.

Suzi Travis

Thanks, John!

Wyrd Smythe

One challenge is that "meaning" is a high-level concept. Restricted, I think, to humans, an abstraction. We create meaning, it's not something out there we find. Meaning to you may not be meaning to me. It's a rabbit hole concept like "consciousness" or "representation" or "real". I'm not sure it's possible to define such nebulous concepts effectively. (Endlessly palatable for philosophers, though.)

Maybe a problem with representation is the pigeonholes "vehicle", "target", and "consumer". That works okay for external symbols, but as you say, "And the consumer is… who, exactly? [...] There’s something off here." The notion of natural signs seems more on target.

I thought about the way we train LLMs. The encoding that results from their training seems more aligned with natural signs — THIS experience causes THAT encoding — than with representations — symbols standing for experiences. In part because it's impossible to say exactly *where* facts are stored in an LLM. There are no concrete symbols, just a unified set of parameters. Like a unified set of trained neurons in a brain.

In software, an old decomposition approach is IPO — Input-Process-Output. As you point out, it's a general framing that applies to many processes, including many aspects of humans. I do think it applies to brains although, as with software, it's recursive. Each Input, Process, and Output is itself made of IPOs, which also decompose to IPOs, and so on until you get to the most basic functionality. In brains, even neurons can be decomposed — synapses are their own IPOs (composed of biochemical IPOs). FWIW, I see the brain as more like an old analogue radio, a signal processor, than as a numerical information processor.
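
Here's the idea in toy code form (invented stages, just to show the recursion -- bigger IPOs are compositions of smaller ones):

```python
from typing import Callable

IPO = Callable[[float], float]   # an IPO is just input -> process -> output

def compose(*stages: IPO) -> IPO:
    """Chain IPO stages: each stage's output becomes the next stage's input."""
    def pipeline(x: float) -> float:
        for stage in stages:
            x = stage(x)
        return x
    return pipeline

# Primitive "synapse-level" IPOs:
scale = lambda x: 2.0 * x
clip = lambda x: max(0.0, x)

# A "neuron-level" IPO built from smaller IPOs...
neuron = compose(scale, clip)
# ...and a "circuit-level" IPO built from neuron-level ones.
circuit = compose(neuron, neuron)

print(circuit(-1.5), circuit(0.25))   # 0.0 1.0
```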

I wonder if the question of the homunculus is another version of the Hard Problem. How can clay have opinions? Why is this IPO system self-aware? I think to the extent a "homunculus" exists, it's the whole brain having that self-awareness.

Very interesting series. Looking forward to whatever is next. Have fun in Sydney!

Suzi Travis

Great points!

I agree, the recursive element is important. But I'd probably go one step further and say that a certain type of recursion is important.

Yes, I think the smaller IPO boxes can be helpful to answer some questions. But I think we need to be aware of the possibility of erasing the thing we are actually interested in. Many of the things we are interested in (meaning, thoughts, etc.) are not properties to be found by breaking the system apart. They are not decomposable. They are found in the limitations or restrictions of the larger system.

Thanks Wyrd! Sydney was great. I got to catch up on some of the latest in vision research. There are some really interesting things happening.

I'm about to start planning the next set of essays. I think it's going to be a fun one!

Wyrd Smythe

Oh, IPO is fine for breaking down physical processes, but with consciousness I quite agree it’s not the right tool. That consciousness is irreducible and incorrigible has been a part of my way of looking at it for a long time.

James Cross

Who is the consumer? That could be a Zen koan.

"Calling the entire brain the consumer feels odd. "

Maybe not the entire brain, but just the parts of the brain that represent the brain to itself.

Suzi Travis

Haha, yes exactly! There’s probably a witty t-shirt in there somewhere!

James Cross

How about a large brain with "What is the consumer that consumes itself?"

Suzi Travis

Love it!

James Cross

More seriously. The brain needs to create a consumer or self as a reference point for the body and senses. The body cannot turn left and right at the same time. You can't raise your head up and turn it down at the same time. There needs to be unitary action originating from a reference point which appears to us as a self and gives rise to the consumer illusion. As your quote shows, it is hard even for neuroscientists to think in other terms.

Suzi Travis

Great point!

It's the question of all questions, isn't it!? How does a bag of billions of neurons produce one coherent “steering wheel” so the body doesn’t try to look left and right at the same time?

I don't think any of us would accept that the answer to this question is that the brain creates a little consumer to sit in the driver’s seat. But, it seems, the brain needs some kind of reference point to "steer" from. I wonder whether this reference point comes about from the very act of keeping the system going.

I wonder whether what we experience as a “self” isn’t something the brain builds and then hands the keys to. It’s what happens when a system needs to stay alive -- trying to keep its own patterns intact in a world that’s constantly trying to pull them apart (entropy!).

That, I wonder, might be what gives us this sense of “me”.

And, I wonder, whether the sense of unity comes from competition. The brain is full of overlapping processes, but when it’s time to act -- as you point out -- we can’t go in two directions at once. All those overlapping processes must collapse into a single, coordinated output. So, the constraints -- the things the system can and cannot do -- turn out to be very important for this sense of self.

James Cross

You've expressed my view well.

This could have origins with the motor systems of the early bilaterians. Having sides increases the requirements for coordination across the organism and also creates a sharper 3D reference point - front-back, up-down, left-right.

Jim Owens

I wonder whether the trouble with meaning has a lot to do with representations after all. We seem to keep coming back to that question, but as you suggest, the model brings a lot of baggage. Much of it is outlined in Richard Rorty's _Philosophy and the Mirror of Nature_, a study of the Enlightenment imagery of a reflecting and possibly distorting mirror that stands between the observer and the observed.

If, as we search for meaning, we adopt the model of vehicle, target, and consumer, we may find ourselves distracted by the vehicle -- that is to say, the mirror. This is an easy mistake to make, especially if we are focussed on symbols. We ask how the symbols acquire meaning; for example, how a battery indicator comes to mean something about a battery level. But a closer look reveals that the problem of meaning originates with the very concept of the target. A more fruitful question is how a target becomes a "target", instead of an indifferent occurrence or state of affairs. What is it about a battery level that matters? Why would anyone care?

This question brings us closer to an enactivist view of meaning, in which meaning is located, not in a secondary representation, but in a direct functional significance. How that significance becomes translated into proxies -- how the significance of a low battery, for example, is mapped onto a mirroring symbol of filled and empty bars -- engages questions of higher cognition, which are worth asking. But the meaning of a low battery is functionally analogous to the meaning of hunger. It engages that feeling of want or sense of lack that is experienced by anything that cares about itself and its environment.

And here we introduce the idea of caring or concern, of an engaged agency, which is so crucial to the concept of meaning, and yet so puzzling to the Enlightenment model of a clockwork universe of indifferent matter winding down indifferently. We are in the territory of Spinoza's conatus, and the entire neglected tradition of "being-in-the-world" for which the analytic tradition of philosophy has no patience. Thus it is that Rorty observes of Wittgenstein, Dewey, and Heidegger, who tease an enactivist way of thinking, that “their attitude toward the traditional problematic is like the attitude of seventeenth-century philosophers toward the Scholastic problematic.”

The problem of "the mirror of nature" engages a paradigm that has reached its limit: a paradigm in which agency and meaning are puzzles for the adopted model, rather than an integral part of our way of thinking. An alternative paradigm need not detract from the neuroscientific project of understanding how the brain works. But it may cast a different light on the questions asked and the answers revealed.

First Cause

"...a paradigm in which agency and meaning are puzzles for the adopted model, rather than an integral part of our way of thinking."

Puzzles for the adopted model instead of an integral part of our way of thinking. WOW! How and why did we lose our way?

Jim Owens

I took this to be a rhetorical question, but in case it's not, Charles Taylor examines some relevant historical developments in great depth in _A Secular Age_.

Bill Taylor

I am really enjoying these articles and learning a ton. Great stuff!

With that compliment, I pose a challenge or thought exercise. The specific realm of inquiry is that of computer science and deep-down microchip stuff. In this realm we deal with things which contain zero intuited or inherent characteristics. We have literally zero sense perception for them. They can't be seen, can't be touched, can't be experienced, have no shape, have no smell nor taste nor color. They are nearly-pure abstractions.... all we know of them is a written description and some figures created by those who made them. Here's one of many examples; this one was chosen at random by just forcing some terms into a wikipedia search:

https://en.wikipedia.org/wiki/XDR_DRAM#Masked_write_command.

The thought exercise is this: in terms of meaning and representation, how is this thing the same as a dog (or an apple or table or chair); and how is it different? In terms of meaning and representation, are dogs and XDR-RAMs all of one category? Or are they fundamentally different?

And, is it fair to say a framework for meaning and representation must work for both XDR-RAM and for a dog? Or is that not fair?

I am out on a limb here; interested in any future follow-up of any kind.

Mark Slight

Great post. Can the case be made that representations are a sub-class of natural signs?

In my opinion it is often useful to try to drop the first-person anthropocentric perspective and agent perspective, which both are emergent rather than fundamental.

So: a developing Polaroid photo is a natural sign that there is a cat nearby. If a human encounters such a photo, she can draw reasonably certain conclusions, not only about the cat but also about the photography event. The representation of a cat is a subset of these natural signs. They're not proof, but evolution has done a pretty good job. Crucially, the conclusions do not happen at a temporospatially focused point. They are distributed.

By extension, from a third person view, the pattern of a cat photo on a retina is a natural sign that there is a cat photo nearby. The activation of a cat photo pattern in V1 is a natural sign that there is corresponding activation on the retina, and a corresponding photograph. And if the person says there must be a cat nearby because of this fresh photo of a cat, and the person has a good track record of drawing correct conclusions, then this is a natural sign that there is a cat nearby, and a photo of a cat nearby.

What counts as a representation here is whatever cognitive mechanism that makes humans say that something is a representation, or whatever collective mechanisms make us agree on what counts as representations. I'm inclined to say that the photo, the retinal and V1 activation are representational, while the photons travelling to the retina, and the activity of the optic nerves are not representational. But I think this comes down to my psychology and is not defensible in the end.

Not sure what I'm trying to say really lol. This comment is a natural sign of unfocused thinking.

Suzi Travis

I don’t think you need to sell yourself short. I think you're making a key point here!

Yes, natural signs are everywhere. A fading Polaroid and a pattern of light on a retina all point to something else. And yes, from a third-person perspective, they form a kind of chain: the cat, the photo, the eye, the brain. Each one carries forward a trace of what came before.

But I think we can take this further. Representation, at least the way you and I think about it, isn’t just about pointing. It’s about mattering for the system’s own persistence. And, I think, this means it's about whether that trace gets pulled into a specific type of loop — a loop where it shapes the future of the system it’s part of. A Polaroid becomes a representation not just when it reflects light just right, but when someone sees it, makes a judgment, takes action, and maybe even survives because of it. In that case, the sign doesn’t just signal. It does work. It is a constraint that keeps the whole "show" running.

Tom Rearick

A great post and a perfect end to a wonderful series.

I have trouble with many words used in philosophy and neuroscience. Representation is one. Mind is another. My favorite baddie is "Executive function". That is where the homunculus lives. All these teleological terms presume a purpose and an intention. I have a hard time finding intention in evolutionary processes. One might argue that survival is the purpose of evolution but, once again, that is assigning a teleological explanation to a random process.

Suzi Travis

Exactly!

Charlie Stephens

I love this! I believe meaning is how our consciousness guides itself in knowing something

James of Seattle

[Just so ya know, I’m an amateur philosopher with a theory of consciousness. Lucky you. As such, I’m familiar with the major works of some folks you mention, like Dennett, and less familiar with others, like Dretske.]

Given the same trepidations as Mike Smith concerning the different understandings of “representation”, I was a little skeptical starting into your essay, but I was happy to find that you give a very clear 3-part explanation of your definition. It turns out that your idea of representation maps quite nicely to what I refer to as “pattern recognition”, and I posit that this (pattern recognition/representation) is the fundamental basis of consciousness. Having thought about this a LOT, I have very specific answers to many of your questions. So …

What is a representation? I like that your three items map straight on to Peirce’s “Sign”, because I am familiar with Peirce’s Object/Sign Vehicle/Interpretant, which seems to match your target/vehicle/consumer. Unfortunately, restricting requirements to these 3 features is not sufficient to get where you need to be. For example, there are (at least) two physical processes that have to be considered: 1. The process from the target to produce the vehicle, and 2. The process from the vehicle to generate a response (Peirce’s interpretant). So here is how I diagram a representation:

Inputs (A, …) —> [mechanism] —> vehicle —> [mechanism] —> Outputs

What is a natural sign? Here I need to bring in the natural history of information. Every physical interaction generates information. The outputs are correlated with the inputs, and so carry mutual information. Thus the smoke has mut. info. with respect to the fire. This mut. info. is an affordance for any system with a goal. If something wants to get to the fire, they can go towards the smoke. This is one kind of “aboutness” which does not rely on an observer. Note that in my diagram, the vehicle necessarily carries this mutual information/“aboutness”.
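
If it helps, here is the standard calculation behind "carries mutual information", with a made-up joint distribution for fire and smoke (the numbers are invented; only the formula is standard):

```python
import math

# p(fire, smoke), made-up numbers:
joint = {("no", "no"): 0.70, ("no", "yes"): 0.05,
         ("yes", "no"): 0.05, ("yes", "yes"): 0.20}

p_fire = {f: sum(p for (f2, _), p in joint.items() if f2 == f) for f in ("no", "yes")}
p_smoke = {s: sum(p for (_, s2), p in joint.items() if s2 == s) for s in ("no", "yes")}

mi = sum(p * math.log2(p / (p_fire[f] * p_smoke[s]))
         for (f, s), p in joint.items() if p > 0)

print(f"I(fire; smoke) = {mi:.3f} bits")   # > 0: smoke is informative about fire
```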

Where does meaning come from? Notice that, given this explanation of mutual information, every physical thing has mutual information with everything in its causal past. Smoke is correlated with fire, but also with dried plant matter, and also with whatever ignited the fire, and so on. But I said before that mutual information is an affordance for a system with a goal. Such a system is only interested in a subset of the mutual information. For a wild animal (or Frankenstein), that smoke may mean “fire … Bad!!!”, whereas for a lost hiker (or Tarzan) that smoke may mean “People, Good!!”. So meaning is determined by the response, assuming that response was “selected” to achieve a goal.

So what about the brain? First we have to introduce the “symbolic sign”. This essentially is a sign/representation where the first mechanism is generated for the purpose of generating a sign vehicle, called a symbolic sign vehicle. The second mechanism (interpreter/observer) is also generated for that purpose and must be coordinated with the first mechanism. And this is how we get the neuron, a cell whose main function is to participate in representations, producing neurotransmitters as symbolic sign vehicles. It’s important to note that representations can be combined, such that a number of neurons can act in concert to produce a single vehicle, and also that a single neuron can participate in multiple representations in that copies of the same vehicle can be sent to multiple targets, each of which may have its own purpose, and so, meaning.

Finally, what about observers (homunculi) in the brain? Here I can speculate, and I need to introduce two theoretical structures: Ruth Millikan’s unitrackers and Chris Eliasmith’s semantic pointers. A unitracker is essentially a pattern recognition unit which tracks a single pattern. The unitracker may be a group of cells, and can generate multiple actions in response to that pattern, including memory, action, feedforward, and feedback (predictive active inference). As such, the unitracker can act as both the first mechanism (pattern recognizer) and second mechanism (observer). A semantic pointer is essentially a convolutional neural network which can be stimulated to “display” an output which is unique to the set of inputs, but such that those inputs can be operated on and read out, “pointing” to the relevant input(s). The canonical example is to input “king”, then operate (“remove ‘man’, add ‘woman’”), and read out (point to) “Queen”.
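
In toy vector form (made-up three-dimensional embeddings; real semantic pointers are high-dimensional and learned, this is only to show the operate-and-read-out step):

```python
import numpy as np

emb = {
    "king":  np.array([0.9, 0.9, 0.1]),   # [royalty, male, female] -- invented
    "queen": np.array([0.9, 0.1, 0.9]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def read_out(vec, vocab):
    """Point to the vocabulary item whose vector is closest (cosine) to `vec`."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(vec, vocab[w]))

pointer = emb["king"] - emb["man"] + emb["woman"]
print(read_out(pointer, emb))   # -> queen
```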

My (obviously simplified) proposal is that the cortical minicolumn (specifically L4, and L2-3 neurons) constitutes a unitracker. The whole cortex is just (essentially) a huge collection of unitrackers. The thalamus constitutes a number of separate semantic pointers each of which constitutes a separate stage in the Cartesian theatre. So, for example, inputs from the retina generate a pattern in an early stage in the thalamus. This stage is observed by a set of unitrackers in the visual cortex, and those unitrackers that get sufficiently activated send feedforward to the next stage in the thalamus. This second stage is observed by the next set of unitrackers, some of which recognize patterns within those patterns, and continue the process.

So there is not a single homunculus looking at the stage, but instead an audience of unitrackers. Ultimately there will be unitrackers at the highest level (tracking high-level patterns like structural goals, high level abstractions like morality, etc.) which do not generate another stage, but note they may influence what goes onto the final stage via feedback.

Explains everything, yes?

Whew.

*

[now to copy this and post it on my site]

Suzi Travis

Hi James -- Whew is right! This is great! I love how you stitch Peirce, Millikan, and Eliasmith into one sweeping picture.

I'll push back on two points.

First, on mutual information.

I agree that any physical process -- smoke, neurons, pixels -- carries mutual information about its causes. But I'm not sure we should say that kind of correlation is aboutness (yet). It’s just structure. What turns a pattern into a representation is whether it plays a role in keeping a system intact -- whether the system depends on that link to survive. In other words, meaning isn’t just the pattern itself. It’s in the work that the pattern does.

Second, on the unitrackers.

Your model avoids the homunculus by spreading the job of interpretation across lots of pattern recognisers. This is great.

But I think we should still ask: what gives any one of those patterns its weight? Why does one win out over the others? I wonder whether the answer has to lie in the body. It’s the organism’s fragile, energy-hungry state that sets the stakes. The patterns matter because they keep the system viable.

So does your model explain everything? I'd say it gets impressively far. But I might say: don’t forget the messy, metabolic, constraints that drive the loop between signals and their targets. What gives all those signals their point?

James of Seattle

Hey Suzi, thanks for getting around to this and engaging. I was getting worried. :)

As for your comment on mutual information, (assuming I understand how you’re using “structure”) I would say that structure is just a type of aboutness. For me, aboutness is a relation to something which cannot be determined by physically measuring the thing in hand, so to speak. I’m assuming the structure in question is the structure of the causal relationship.

I agree with your comment that what “turns a pattern into a representation is whether it plays a role in keeping a system intact -- whether the system depends on that link to survive. In other words, meaning isn’t just the pattern itself. It’s in the work that the pattern does.” Except that you have specified a particular goal (keeping the system intact) whereas I propose that other goals can be equivalent in determining the representation. There necessarily has to be some goal because this goal is determining the relevant (mut.) information being acted on, which is the essence of the representation. My go-to attempt to explain this is a hand-written sign in a remote Japanese village that reads “C’mon in for the best breakfast this side of Mt. Fuji!”. If you’re hungry, that sign represents a place to get food (goal: satisfy hunger) whereas if you’re a lost American that sign represents a person who speaks colloquial English.

As for unitrackers and what gives one of those patterns its weight, I think we’re getting to Dennett’s competition for “fame in the brain”, and the answer is going to depend on lots of factors, such as “predictions” (feedback) from unitrackers higher in the hierarchy as well as the mutual inhibition among unitrackers at the same level in the hierarchy. I kinda think this is standard stuff for neural net pattern recognition. But you also mention susceptibility to other systemic factors (dopamine/serotonin?). “What gives those signals their point?” My answer is that these signals come from systems which have their own goals (like learning?), and they probably don’t have the “unitracker” structure found in the cortex.

Let me know if this helps (or hurts).

*

Michael Pingleton

What is the meaning of meaning? Such an ironic question lol! It does make me wonder if certain neurons within the brain themselves are mapped to certain meanings, similarly to how tokens in an LLM represent words.

Perhaps it's the misrepresentation problem that explains why cats are so afraid of cucumbers; the cucumber's shape can be said to resemble the appearance of a snake.

Ultimately, I'd be curious to see just how differently the brain and an LLM each process meaning. I think it would be interesting to have an activity diagram for an artificial neural network compared side-by-side with something like an MRI.

Very nice work on the article and the narration, Suzi!

Suzi Travis

Hahaha! Yes, the irony of asking “what’s the meaning of meaning?”!

The idea of neurons storing meanings like LLM tokens store words -- this is a tempting way to think about it. But there are some important differences. In an LLM, each token (like “cat”) is associated with a fixed vector in the embedding space. This vector represents its position in a high-dimensional semantic map, learned during training. So, “cat” corresponds to a fixed location in this learned space -- after training.
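
In code, that lookup really is just a table (toy vocabulary, random weights; the point is only that the mapping is fixed once training is done):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2}
embedding_table = rng.normal(size=(len(vocab), 4))   # vocab_size x embedding_dim

def embed(token):
    return embedding_table[vocab[token]]             # same token, same vector

# "cat" maps to the same vector wherever it appears in a prompt;
# context only enters in the later layers that mix these vectors.
assert np.array_equal(embed("cat"), embed("cat"))
print(embed("cat"))
```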

But in a brain, the meaning of a neuron’s activity depends on context -- what other neurons are firing, what the body’s doing, what the stakes are in that moment. In vision research, for instance, the very same V1 cell can signal a crisp edge in bright light but fade into the background when contrast drops or attention shifts. So, rather than holding stable symbols, brains build flexible patterns that shift as the organism and the world shifts.

The cucumber example is a fun one. Cats probably aren’t afraid of cucumbers per se, but the shape triggers a quick-and-dirty guess: “snake!” That’s a great example of misrepresentation. This is a feature, not a bug. Fast, sloppy guesses can be lifesaving, even if they’re wrong now and then. When it comes to survival, a false alarm is better than a miss.
