If the brain is like a computer, and intelligence and consciousness are properties arising from the program it's running, then studying electronics wouldn't help you understand even the program, let alone the emergent properties.
Similarly, the developmental aspects wouldn't per se inform, although variability of development (and damage at any stage - eg TBINDSC) can help to tie down which bits do what.
Then again, every science is based on the one below, so chemists study physics and biologists study chemistry (so you can end up spending more time studying the foundation science than the one you signed up for!).
"f the brain is like a computer, and intelligence and consciousness are properties arising from the program it's running.."
I'm not at all sure that the brain really is like that. Further, I'm not at all sure whether it would be a good thing or a bad thing if it were.
For example: I work on large, complex, computer systems. They are designed and engineered to be understandable and maintainable through having defined interfaces between lower-level base routines called by higher (and yet higher) levels of code, which then depend on them.
By contrast, biological systems (like brains) are neither designed nor engineered: they "just grew there".
I've worked on computer systems like that!
They are awful!!
I once (for my sins) was asked to analyse a payroll system that was Just One Big Program, which (in computer terms) had indeed "just grown there". Everything depended on everything else. Parts of it couldn't even be called anymore! Some bits with bugs in them had been re-written to avoid buggy behaviour - but the old code was left in there "just in case" something else (who knows?) still called it and depended on it - which sometimes was true. Everybody who worked on it was (wisely) very afraid of even touching it.
The whole thing (in IT terms) should have been treated as a total loss; trashed, then re-done from scratch: properly designed and engineered. Of course, it wasn't (at least not that I know).
I often use that payroll system as a mental-model stand-in for what biological systems are like. They "just grew there", they're fiendishly complex and interconnected, they weren't designed, and they can't be trashed - except by extinction (which seems rather drastic).
Spaghetti programs. Everybody hates everybody else's programs!
I use that analogy for the genome. But it's still a computer program so you've rather proved my point!
Perhaps the brain is more like the early wire-wrap computers which were reprogrammed by changing the wire connections. https://www.youtube.com/watch?v=IXvEDM-m9CE (that's a tiny one - I remember one 4ft long)
We make the connections by growing neurons.
But in the genome, at least, there is some good modular coding - the segment is the classic example: you build a worm of lots of segments, each with 2 legs, then you modify the legs on segments at the head end into mouthparts, eyes and antennae, then fuse those segments into the head. This is why we have a spine and ribs, and the various parts of the brain, and why the nose goes to the front of the brain and the eyes to the back.
(I rather like Dawkins's recent observation that the genome is a palimpsest.)
I use that payroll program not so much as a useful analogy but more as a personal reminder of how horridly complicated undesigned systems can be. I'm sure that biological systems are many orders of magnitude 'worse' (as in more complicated).
To use your worm example: while that might *seem* (from a high-level view) to be an exercise in modular 'design', I'll bet that in real-world worms there are mysterious and subtle differences between even identical-seeming segments, and that the mechanisms by which segments vary are also different segment-by-segment (maybe some using similar mechanisms, while others use quite different mechanisms for their differentiation).
In a designed system, things probably wouldn't be done that way. But biological systems aren't designed - they just grow there.
While we're on the subject: there's loads of terminology for naming good programming paradigms but apparently none for describing bad ones beyond "bug" and "spaghetti". We could use "fossil" to describe methods that are no longer called. There must be lots of gotchas in bad programs for which we have no ontology. Considering the amount of time programmers spend patching legacy systems, I'm surprised no language has emerged.
What a fascinating discussion! It actually gets into next week's question quite a bit.
The computer program analogy can be helpful to a point, but I think it raises an important question about levels of analysis. As Mike suggests, I do wonder if the brain may be fundamentally different from engineered computer systems in crucial ways.
I'll talk more about this next week, but I think Mike makes an excellent point about the key distinction: biological systems like brains arise through growth and evolutionary programming rather than top-down engineering design.
I wonder whether the wire-wrap computer analogy is still a little too neat -- biological systems don't just involve rewiring connections; we get complex developmental processes where the rules themselves can change at each step. Even seemingly modular biological systems often have hidden complexities and interdependencies.
Rather than seeing the brain as either a clean engineered system or pure spaghetti code, perhaps we need a new framework that acknowledges both the remarkable robustness of evolved biological systems and their fundamentally different organisation from designed systems. The brain's complexity may not be 'bad design' but rather a different kind of organisation altogether -- one that we get through growth and evolutionary programming.
This connects back to my main question here -- what details can we safely ignore? The messy interconnected nature of biological systems suggests we may need to be careful about assuming we can cleanly separate levels of analysis or ignore 'implementation details'.
"The computer program analogy can be helpful to a point, but I think it raises an important question about levels of analysis."
I can recall reading (once, somewhere?) that we in "the computer age" are rather prone to drawing computer-based analogies, because that's one of the newer and more dominant technologies of our time. Back in the late-Victorian era, apparently, people were rather too liable to draw "steam" and "hydraulic" analogies. In the early days of the 20th century, people were inclined to draw "electricity-based" analogies. I'm sure there's an 'etcetera' there...
I think we should all be on our guard against too-glib analogies with dominant and recent things: I fear they might mislead as much as they might help unless great care is exercised.
You're right, and there's also a tendency to think "I don't understand this modern stuff and I don't understand the brain, so they must be the same."
But your examples show increasing complexity so they're getting more brain-like, at least on one metric.
Yes, I agree with Mike on this. I see a lot of psychology blogs which are prone to making comparisons & analogies between the human brain and a computer. These analogies can be helpful for explaining the highest-level of abstraction (whether in the brain or in a computer), but quickly fall apart once you dig any deeper than surface-level. Some people seem to think we'll be able to emulate consciousness on a computer one day, store it on a hard drive, e-mail yourself to your doctor, etc. I'm not convinced.
The human brain would be more similar to an FPGA, but even that is a VERY vague and highly abstract comparison. You'll see computer people make analogies to the human brain too, but this is done with a very different intention.
Thanks, Mike and Michael; I agree. While computational analogies might offer some insight, the point at which the analogy is helpful doesn't get us too far. One difference that I'm starting to think is critical is that the brain isn't built first and trained later -- it grows. The building stage and the function stage are not easily separated. I'm not sure the brain can be easily reduced to computer-like architectures. More on that idea next week.
The point of the comparison is that we understand computers but we don't understand brains. Comparing the two highlights the differences and these are the bits we need to understand.
The main thing they have in common is that they both have hardware (wetware) and software (connectivity).
Now we have LLMs, which adds another category,
so hardware=brain, "wires"=neurons, software=connectivity (including LLM=speech centre).
Re building and segmentation: the brain develops very early as presumably two (four?) strands running down the primitively segmented embryo. The rest of the body develops, maintaining the connections, and the brain regions (ganglia) in the posterior segments get wrapped around the 'headphone band' (the arc over the top of the head where the homunculus sits).
This was already there in the first fish.
More recently, our ancestors had multi-segmented tails, so the homunculus would have been extended further downwards.
We lost the tail as apes split from monkeys (https://www.sciencenews.org/article/genetic-parasite-humans-apes-tail-loss-evolution) when Alu jumped into Tbxt. That our homunculi don't have tails suggests it acted on the initial segmentation rather than later development (but maybe it's still there and got repurposed?).
The differences between normal and Manx cats' brains would also be interesting. (It seems there's some innate vulnerability in the tail vertebrae.)
That's a good point; it's important to understand that a biological system is inherently different from a piece of technology. A computer has a processor, which has registers, a decoder, an ALU, etc. Then there are opcodes and operands which form instructions. A group of these instructions form software.
This is nothing like how the brain works. Even AI is no different in this respect: the so-called "neurons" in an ANN are just a metaphor for biological neurons, not a genuine replication.
The nice thing about wire-wrap is that it encourages you to think about how the brain makes the right connections. If it has a problem to solve how does it know that connecting node X to node Y will help? Obviously you can't build a brain like that. I presume what you do is connect everything to everything else and then use the SatNav algorithm to send out messages until you find a route that works, then you reinforce that connection.
So a useful metric will be average path length (like the game Six Degrees of Kevin Bacon).
Copilot offers this list:
Degree Centrality: Measures the number of direct connections a node has. In social networks, this could represent the number of friends or contacts a person has.
Betweenness Centrality: Reflects the number of times a node acts as a bridge along the shortest path between two other nodes. It helps identify nodes that control the flow of information in the network.
Closeness Centrality: Indicates how close a node is to all other nodes in the network. It measures the average shortest path from a node to all other nodes.
Eigenvector Centrality: Similar to degree centrality but also considers the centrality of a node's neighbors. It helps identify influential nodes within a network.
Clustering Coefficient: Measures the degree to which nodes in a graph tend to cluster together. It reflects the likelihood that two nodes connected to a common node are also connected to each other.
Path Length: The number of edges in the shortest path between two nodes. Average path length provides an idea of how well information or resources can be transmitted through the network.
Diameter: The longest shortest path between any two nodes in the network. It represents the maximum distance between any pair of nodes.
Density: The ratio of the number of edges in the network to the maximum number of possible edges. It indicates how interconnected the network is.
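(For anyone who wants to poke at these metrics, here's a minimal sketch using Python's networkx library on a toy random graph -- the library choice and the made-up 'connectome' are assumptions for illustration, not anything brain-specific.)

```python
# Computing the metrics listed above on a stand-in "connectome".
import networkx as nx

# A small random graph -- purely illustrative, not real brain data.
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=42)

print("Density:", nx.density(G))
print("Average clustering coefficient:", nx.average_clustering(G))
print("Degree centrality of node 0:", nx.degree_centrality(G)[0])
print("Betweenness centrality of node 0:", nx.betweenness_centrality(G)[0])
print("Closeness centrality of node 0:", nx.closeness_centrality(G)[0])
print("Eigenvector centrality of node 0:", nx.eigenvector_centrality(G, max_iter=1000)[0])

# Path-based metrics are only defined when the graph is connected.
if nx.is_connected(G):
    print("Average path length:", nx.average_shortest_path_length(G))
    print("Diameter:", nx.diameter(G))
```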
I also wonder if it uses the Skyscraper Elevator optimisation, where you have a mix of long and short links to reduce travel times.
Rather than solving connection problems directly, like wire-wrap does, the brain develops its connections through growth: connections form through developmental programs that unfold over time, rather than starting fully connected or searching for optimal paths.
To find the right connections, growing neurons use:
-- Chemical signals, which point growing branches in the right direction.
-- Exploratory structures called filopodia -- tiny finger-like feelers that reach out and explore the surroundings.
-- Self-organising mechanisms -- built-in rules that keep branches from tangling up with the wrong partners, similar to how magnets can repel each other.
-- And, perhaps most importantly, the 'use it or lose it' system, where connections that get used become stronger while unused ones fade away.
This biological approach is quite different from engineered solutions like elevator systems or network routing algorithms -- instead of optimising connections or finding paths, it relies on developmental programs that naturally unfold over time.
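(As a toy illustration of that 'use it or lose it' point -- the constants, the random 'usage', and the whole setup below are invented for the sketch, not a model of real development -- you can over-connect first and let disuse prune:)

```python
# 'Use it or lose it' in miniature: start over-connected, strengthen
# whatever gets used, decay the rest, and prune what fades too far.
import random

random.seed(1)

nodes = range(8)
# Start with every possible directed connection at a middling weight.
weights = {(a, b): 0.5 for a in nodes for b in nodes if a != b}

STRENGTHEN, DECAY, PRUNE_AT = 0.1, 0.02, 0.1

for step in range(200):
    # Pretend a random ~10% of surviving connections get 'used' this step.
    used = set(random.sample(list(weights), k=max(1, len(weights) // 10)))
    for edge in list(weights):
        if edge in used:
            weights[edge] = min(1.0, weights[edge] + STRENGTHEN)
        else:
            weights[edge] -= DECAY
        if weights[edge] < PRUNE_AT:
            del weights[edge]  # unused connections fade away

print(f"Surviving connections: {len(weights)} of {8 * 7}")
```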
Re unexpected connections: Google "quantum robin" if you aren't familiar with how European Robins navigate. A classic example of evolution throwing a curve ball.
Our sense of smell also uses QM to sense bond vibrations (which is why cyanide and benzaldehyde smell the same), but neither of these affects our discussions on brain or consciousness.
"There must be lots of gotcha's in bad programs for which we have no ontology. Considering the amount of time programmers spend patching legacy systems I'm surprised no language has emerged."
Oh, it most certainly has.
It's just that the words used are a bit too 'robust' for genteel conversation on an engaging and informative Substack!
Robust is a great word!
But are they used consistently? Or do they all just mean "I hate this program"?
For certain types of often-repeated errors or stupidities they are used consistently (if often with added adjectives).
[No, your stinking bubble-sort routine is not a good idea - there’s a reason why teams of highly-paid Hungarians have been perfecting the system’s sort routine for decades. Use it. (Just dumb. And if you ever have a genuine reason to use your own sort, do not use a bubble-sort.)
Yes - every GETMAIN really should be paired with a FREEMAIN - unless you know exactly what you’re doing, and why, and for what type of storage - and how it gets cleaned up otherwise. (Storage leaks; see also leaks of file handles etc. for equivalent errors.)
Yes, resources should be enqueued and dequeued in the same order in every related program in the system. (Deadly embrace.)
Hung child processes need to be detected and killed. (Zombie processes.)
Etc.]
Yes, I was forgetting leaks, embraces etc.
I always used wrapper methods that counted them in and counted them out. Then, if it's a memory (sub)allocator, you can insert a name tag at the start of each (suitably larger) memory block and advance the counter before returning the pointer, or append a few known values to check for overruns.
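(Something like this, perhaps -- a rough Python re-creation of that counting-wrapper idea; the getmem/freemem names and the guard values are inventions for illustration, not anyone's actual API:)

```python
# Wrapper that counts allocations in and out, tags each block with a
# name, and appends known guard bytes to catch overruns on release.
GUARD = b"\xde\xad\xbe\xef"

class CountingAllocator:
    def __init__(self):
        self.live = 0  # counted in minus counted out

    def getmem(self, tag: bytes, size: int) -> bytearray:
        self.live += 1
        # Suitably larger block: 16-byte name tag up front, guard at the end.
        return bytearray(tag.ljust(16, b"\0") + bytes(size) + GUARD)

    def freemem(self, block: bytearray) -> None:
        if not block.endswith(GUARD):
            raise RuntimeError(f"overrun past end of block tagged {bytes(block[:16])!r}")
        self.live -= 1

alloc = CountingAllocator()
block = alloc.getmem(b"PAYROLL", 64)
alloc.freemem(block)
assert alloc.live == 0, "a getmem was never paired with a freemem"
```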
I've programmed a few bubble sorts too - not a problem if it's a short list and rarely used - anyway, your Hungarians were no more than twinkles in their fathers' eyes back then!
System methods are fine until you're running on 4 different platforms.
I agree that studying the electronics doesn't give you a complete or adequate picture of the system. Studying the behavior of the organism is possibly the best means of understanding it.
My take is that until we can build a system that is able to reliably convince us it's conscious, or has human level intelligence, or even common animal level intelligence, we should explore at all levels of abstraction. Let a thousand flowers bloom.
There is a danger of people looking at one level not talking with those working on other levels, or insisting that all the answers are at their particular layer. I'm not sure what needs to happen in academia to protect against that. But it's probably worth remembering that the answer to an intractable problem might be in another layer.
Interesting, as always Suzi!
I love your "thousand flowers bloom" metaphor! At this stage, I think we can't afford to ignore any level of analysis. Or at least be in discussions with others. The silos problem, as you mention, can be a big problem. I'd love to see more interdisciplinary work where researchers from different levels collaborate and share their insights — not just within the sciences but across disciplines, too.
"Let a thousand flowers bloom" does not have a good history. It's a misquote of Chairman Mao's trick to get dissenters to identify themselves.
Interesting. I didn't know the phrase's lineage, although it does seem to predate Mao. Thanks! But I wouldn't think his misuse of it invalidates the effectiveness of the strategy.
It is important to distinguish between studying the function of a device and studying its elements. Studying the design of an internal combustion engine is not studying the functions of a car - the engine can be electric. Studying a transistor is not studying how a computer works - instead of a transistor, you can use a relay, a vacuum tube, and even purely mechanical elements. Studying neurons is not studying intelligence - it can be implemented on a computer.
And you're absolutely right -- just as understanding transistors isn't necessary to understand how a computer processes information, we don't need to understand neurons to understand certain aspects of intelligence.
I'll get into this more next week, but biological systems might be different. Unlike engineered systems, where we can cleanly separate development from function (like building a car versus driving it), biological neural networks seem to blur this line. The processes that build the brain during development continue to reshape it during learning and memory formation. In biology, the hardware-software divide (or the function elements divide) is blurry.
There are general principles that help separate functions from their implementation: first, examine the functions by observing their functioning, and then examine the object by finding the components responsible for those functions. For example, when examining the owner of a brain, we can discover the presence of a function of remembering what is happening/observed, a function of making decisions about what to do in the absence of external stimuli, and a function of curiosity. The set of functions, in turn, indicates the presence of certain internal states as part of the contents of memory, etc.
You make a great point about separating functions from implementation -- it's such a useful way to understand most systems! But I think things get messy with biological brains. It's a wonderful type of messy. The physical "machinery" that creates functions is constantly reshaped by those same functions, making them more deeply intertwined than in human-engineered systems.
Because I spend hours a day interacting with ChatGPT and other AI resources, writing books and articles, I have come to truly appreciate the ability of ChatGPT to develop First Principles on any topic. As you may know, this was how Musk developed SpaceX from the ground up, saving untold millions of dollars. It's a great way to simplify any complex topic and make sure you are logically grounded. Sometimes I challenge it further to make sure that the output is totally logical, content is mutually exclusive, and if appropriate, Principles proceed in a logical manner such as time or importance sequence. Give it a try!
Thanks, Buck!
I seem aligned with Mykola here. If we didn’t need to understand, just as evolution doesn’t need to understand, then fine, we could put as much neuroscience or anything else we like in the pot. Because we actually do need to understand however, dinking around at the wrong levels of abstraction should have costs.
Last time I mentioned a belief that our mental and behavioral sciences need a founding premise from which to build — that is except for economics which does have such a foundation in the form of “utility maximization”. If it were suggested that economists need to understand neuroscience in order to better understand economics, I think they’d rightly object!
Economics supervenes on psychology and psychology supervenes on neuroscience. Thus psychology should at least be closer to neuroscience than economics happens to be. Furthermore I suspect it will be found that consciousness exists in the form of a neurally produced electromagnetic field, as well as hope that this understanding will help improve psychology in certain ways. For the most part however I doubt psychology is in need of much neuroscience. Most important should be for it to gain a solid foundation from which to build.
I did end up starting a new substack with this theme, and published my first post moments ago. I’ll of course get into neuroscience at some point given my EMF consciousness suspicion, though that’s actually just a minor element of the whole!
Great! I'll check it out
Wouldn't an EMF consciousness be wiped out (at least temporarily) by high magnetic fields like fMRI?
Do you mean the EMF is conscious (seems unlikely)
Or is this a shared back-channel between adjacent neurons?
Hey Malcolm, thanks for thinking about the possibility of EMF consciousness! Apparently there are dozens of theorists proposing different variants today, though I think the best comes from a Surrey University biologist by the name of Johnjoe McFadden. Back in 1999 he put together a book about how quantum mechanics might explain certain biological mysteries, such as how the photons that hit a leaf could possibly navigate a sea of chlorophyll to provide virtually 100% of their energy to a plant. Perhaps quantum superposition? Anyway, he figured he’d conclude that book with a chapter on the Penrose and Hameroff quantum consciousness proposal. Upon reading their book, however, he realized that they were proposing things that didn’t jibe with evidence. But that got him thinking that a classical electromagnetic field might do the trick, and of course neurons do produce them. So he somehow hammered out a coherent theory to serve as the final chapter for that book. He’s made his career in quantum biology, where they’re now graduating doctorates in this specialty field, but he still publishes papers on electromagnetic consciousness too.
Your question on fMRI is a quite technical one that he addressed under “prediction 6” in the following paper (though apparently back then they called it NMR, which was changed because normal people didn’t want to be exposed to anything with a “nuclear” reference): https://philpapers.org/archive/MCFSFA.pdf
It’s just one paragraph so I would have pasted it here, but that paper doesn’t let me copy. Apparently his point is that fMRI uses a static rather than alternating field, so the minuscule brain field ought to ride along under that static field without being affected. I know that you’re more technical than I am though, so I’d love your thoughts on that.
Actually if you do give this some thought, there’s a proposal of my own that I’d love you to consider. Firstly yes, the theory is that the field itself exists as all that you see, hear, think, feel, and so on. Theoretically synchronously fired neurons produce an energy that gets above the noise of other neural firing to reach a zone of physics which constitutes vision, pain, itchiness, and all that is phenomenal. So without the field that our neurons produce, we’d be vegetables. Or if a machine were to produce the same field that your brain does, that artificial field would have the exact same consciousness that you do. But here’s my question:
McFadden has said before that he doesn’t know how to empirically test his theory. It seems to me, however, that this should be quite possible. My plan would be for scientists to implant transmitters in various parts of the brain that produce energies similar to the energies that typical synchronously fired neurons tend to produce. Shouldn’t a person who’s wired up this way then be able to notice if their consciousness gets screwy when, by chance, a given exogenous energy constructively and destructively interacts with an endogenous energy that exists as sight, sound, or any other element of consciousness? Thus shouldn’t that person be able to report this strangeness? I’d think that, given sufficient testing but no verifiable reports, this theory could then be dismissed. But with such reports, maybe the energies that incited them could be explored further to see if similar energies could be produced that create quite novel phenomenal dynamics related to what was reported? Maybe the parameters of consciousness could be mapped under an EMF spectrum to thus highly validate this theory? Does that make sense?
Thanks for clarifying the link between NMR and MRI - I always confused the two - now I know why!
I've got Penrose's book (one of) in my reading stack, but I've got my current project to finish before I dive into this properly.
Finally, I think you might be over-estimating my knowledge. I've given this sort of thing a lot of thought over the years, but read very little of others' thoughts. I need to correct that balance.
No worries Malcolm! Some of your comments seem pretty technical to me on things that I know little about, but of course we all specialize. Also, I did have to fix an error in my comment above because I accidentally said that McFadden was proposing “quantum” rather than just “classical” electromagnetic consciousness. He’d actually say don’t waste your time with the Penrose and Hameroff book. Regardless, I’ll keep looking for people to consider my proposed test of McFadden’s EMF consciousness proposal. I’ve gotten responses from him for other things, though never when I’ve brought up such testing. If it makes no sense, then why not take a moment to say a thing or two about what’s wrong with it? Maybe he can’t think of a true problem, though I’m the one who thought this up rather than him? I don’t know.
I fear awareness is an emergent property and so beyond understanding. (OK, there's a semantic gotcha here - if we understood an emergent property it would become predictable behaviour - like UFOs: if you know what they are, they're no longer unidentified!)
But take Rule 110 (as Suzi showed us recently). That's really simple but the emergent behaviour is very complex (see Wikipedia). If we have trouble with something as simple as that, what chance have we of understanding awareness?
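(If anyone wants to see that for themselves, Rule 110 really does fit in a few lines -- a minimal sketch, with the simplifying assumption that the row wraps around at the edges:)

```python
# Rule 110: each cell's next state is one bit of the number 110
# (binary 01101110), indexed by its (left, centre, right) neighbourhood.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch the complexity emerge.
row = [0] * 63 + [1]
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
```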
We only understand things by comparing them to familiar things (billiard balls and snowballs seem to feature a lot!). Nouns are easy - nobody ever talked about emergent structure (beauty?) [OK, I see "emergent structure" is used, but in a non-concrete sense]. Adjectives are also easy. But properties are verbs (under certain circumstances it "DOES").
As Douglas Adams might have put it: understanding awareness may take an artificial mind which it will be our destiny to design.
But perhaps there's a neurodivergent mind out there that could understand it?
Experts usually reply when you request clarification, but usually not if you question their ideas.
For me awareness is a quale, perhaps the ultimate quale (religious people would no doubt put experience of god higher) or the root quale.
As such it's a thinking thing, so will arise out of thinking processes not out of physics.
Maybe it's just another sense organ that senses your thoughts? (Apparently it's distributed, but maybe it's the thoughts it senses that are distributed.)
Maybe it happens when you nest neural networks 4 deep. (So what happens if you go 1000 deep?) And is it even meaningful to think of nested neural nets?
Hi Malcolm. I’ve switched up my account here so I didn’t notice your replies.
On awareness being “emergent”, you seem to be using the term in an epistemic or natural way rather than an ontological or supernatural way. Otherwise I’d say “Perhaps, but I don’t believe in magic”. Since we’re square on this however, and since you seem to be grasping at straws regarding awareness right now, see if the following makes sense.
I like to think of the brain as a computer. If so this ought to tell us some things about awareness. So how does a computer work? It takes input information, it processes that information, and the processed information goes on to operate various things in appropriate ways. That’s certainly how our robots work. So maybe consciousness is an added output feature that our robots don’t have but evolved in certain kinds of brains as another thing that they can do?
From this model when light hits your eyes it sends input information to your brain. Then that information should get processed in a way that probably concerns the function of neurons and synapses. Then the processed information should go on to animate some sort of causality that exists as you the experiencer of the input light information. So what sort of causality might be appropriate for vision? Or pain? Or hope? Or itchiness? Or any other element of your awareness? The only element of brain function that I can think of which might have enough fidelity to utilize such an amazing amount of processed information (sight, sound, smell, taste, touch, thought, and so on), is an electromagnetic field. Furthermore we know that neurons produce them when they fire. That’s why I’d like the testing that I propose to be attempted. Beyond an electromagnetic field, can you think of another potential consciousness medium of the brain which might have sufficient bandwidth to utilize all that processed information?
I think studies at the various levels can complement each other and may lead to unexpected insights. A striking example for me was the work of Bud Craig. I became aware of his work when putting together a lecture on the insula for my affective neuroscience course. I was very surprised that his 2015 paper (“Significance of the insula for the evolution of human awareness of feelings from the body”) had been so widely cited. As I later learned, Craig had spent most of his career mapping interoceptive pathways - pretty hardcore stuff! Yet by 2015 he was beginning to ponder what his findings might mean for understanding the origins of self-awareness and “subjective thoughts and feelings”. And of course we have Damasio’s work, directed more at understanding psychological processes such as decision-making from the start, now focusing increasingly on how consciousness may come about. And I’m excited by the prospect that the nascent field of “LLMology” may yield some insights about human cognition and information processing. As you say, divisions into different levels and fields may be necessary, but cross-fertilization is essential.
Enjoying your Substack!
Hi Frank! Thanks so much for the kind words and for introducing me to Bud Craig — I’m excited to get into some of his work.
On LLMology, I agree — it’s fascinating to think about the kinds of insights we might gain from LLMs and other artificial systems. As we continue developing them, I imagine they’ll give us fresh perspectives on human cognition — even if those new insights are simply an understanding of how fundamentally different humans and artificial systems really are.
I think it boils down to, if you don't study things at every level, how can you find out what isn't relevant?
The notion of missed details looms large in my view about computer simulations of brains. How detailed do they need to be to have a hope of working? (Those pesky unknown unknowns.) How can we know that without knowing all the details?
Yep! That's where I'm at, too.
A late comment but you never know when a thought (hopefully useful and/or interesting) will pop into your head.
Anyway, I would say the level of analysis (which determines what can be ignored) depends on what you are trying to achieve. If you want to bake a cake, then knowing the ingredients and method is sufficient without getting into the chemistry.
Essentially, this is true of everything that isn't quantum physics (assuming that is the base layer of everything we could hope to understand). So actually we are always ignoring 99.9% of the detail but still seem to be able to do useful things.
Love this. It reminds me how science is basically the art of knowing what to ignore. Even quantum physics gets sliced into effective theories depending on the energy scale.
We bake cakes with recipes, not wavefunctions — and somehow, it works.