79 Comments
Suzi Travis's avatar

Hi Jack. Yes, that's correct. With cellular automata, the rule acts in only one way all of the time.

A simple rule, like Rule 110, applies in the same way each time to a local area, regardless of the larger pattern. It doesn't need to "look" at the bigger picture or change based on larger patterns like triangles.

For example, in Rule 110, each cell only "looks" at itself and its immediate neighbours to decide what to do next. It doesn't matter what patterns exist elsewhere or what happened many steps ago -- the rule always works the same way based just on those three cells.
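The three-cell mechanism described above can be sketched in a few lines of code (Python is my choice here; the discussion itself contains no code, and the zero-padding at the edges is one common convention among several):

```python
# A minimal sketch of Rule 110: each cell's next state depends only on
# the cell and its two immediate neighbours -- nothing else.

# The eight possible neighbourhoods and their outcomes under Rule 110.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """Apply Rule 110 once, treating cells beyond the edges as 0 (white)."""
    padded = [0] + list(row) + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# Start with a single black cell and watch structure grow leftward.
row = [0] * 15 + [1]
for _ in range(5):
    print("".join("█" if c else "·" for c in row))
    row = step(row)
```

Note that nothing in `step` ever inspects the whole row; the rule is purely local, yet the printed history already shows the familiar triangular structures.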

So, you're spot on — what you're describing sounds more like a complex rule, because it adds extra conditions and changes behaviour based on the state of the system (like the triangle with five black squares). Rule 110 is consistent in its action and doesn't depend on complex conditions or on different outcomes based on previous states.

But your idea is a really interesting thought — modifying simple rules like that definitely opens up new layers of complexity!

I guess whether we decide to label it a "complex rule" or not is up for discussion.

rookledookle's avatar

amazing article thank you x

Suzi Travis's avatar

Thank you so much! I'm glad you enjoyed it.

Mike Smith's avatar

This is one of the reasons I struggle with the logic that we need an expanded ontology to explain some phenomena, like the mind. We have a tendency to underestimate just how much the rules we already know about can produce.

And of course there's also the issue of the limitations of our minds, current information systems, and measuring devices. It seems like these issues loom much larger than quantum indeterminacy, an explanation many of us are often too eager to reach for.

I'm enjoying these mini-essays Suzi! It's a good format!

Suzi Travis's avatar

Thank you so much, Mike — I’m glad you’re enjoying the format!

I completely agree: we often underestimate the power of simple rules. In that light, our current fascination with algorithms is particularly striking. We marvel at what LLMs can do while struggling to predict their outputs. Even when we write the rules ourselves, the outcomes can still surprise us!

Your point about our limitations as observers is spot on. Progress often isn’t about discovering new rules or expanding our ontology, but about grasping the deeper implications of the rules we already know. It’s an easily forgotten lesson, but one that feels more crucial than ever as AI becomes increasingly embedded in our everyday lives.

James Cross's avatar

"tendency to underestimate just how much the rules we already know about can produce"

Isn't the point that the rules don't tell how the result will look? Even knowing the rules doesn't explain the result.

Mike Smith's avatar

It seems like there are two issues here: a) the consequences of the rules, and b) our ability to model those consequences. Since this issue arises even in fully deterministic simulations, at least in those cases, I think we can be confident a) produces the result, and the actual issue is with limitations of b).

Of course, in a natural system, it's always possible we're missing some of the rules. But in the absence of evidence for that, I think it makes sense to assume we're dealing with limitations of b).

Tom Rearick's avatar

You very effectively demonstrate how a single, simple system can generate unexpected complexity. Now imagine stacking multiple complexity-generating systems into layers operating at different scales:

- one angstrom, molecules

- one micrometer, cell biology

- 100 micrometers, neurons and glia

- 1 millimeter, cytoarchitecture (attractor networks, etc)

- 1 centimeter, maps (Brodmann areas, entorhinal cortex, etc.)

- 10 centimeters, systems (cerebral cortex, hippocampus, etc.)

- one meter, central nervous system

- one meter to many kilometers, society, culture, Internet...

The potential complexity boggles the mind. It is also the reason I have a hard time with reductionism - the idea that you can understand the whole if you first understand the parts. Reductionism has served science well for hundreds of years but it has failed to penetrate a multi-layered system as complex as the human brain.

Suzi Travis's avatar

What a fantastic way to frame this! I like the idea of the nested layers of complexity. It gets at what I want to explore here.

Your point about reductionism is great. Even if we gathered every piece of knowledge about the brain into a massive database or simulation, would we have the whole program? Would we know how the brain wires itself? It seems not, because each layer doesn’t just add complexity linearly—they interact. The process itself matters, and higher levels can’t be predicted solely from the ones below.

This is what makes brain development and function so fascinating. Understanding how molecules like netrin guide axon growth or how individual neurons form synapses doesn’t mean we can predict how neural networks will organise into functional circuits.

And as you point out, this becomes even more mind-boggling when these biological layers interact with higher-order phenomena like society and culture. It’s similar to artificial neural networks—engineered systems that can produce behaviors we can’t predict, even though we designed the rules. We get complexity not just from the components but from the feedback and interactions between layers.

What this means for ideas like emergent properties is fascinating. I’m so excited to dive deeper into this topic!

Saj's avatar
Jan 15 (edited)

The key thing is to figure out the right 'level' to study in order to understand a particular issue. For instance, neurochemistry may not be the optimum level at which to look at depression (there's a lot of debate currently, given recent events: https://www.psychiatrymargins.com/p/dummies-guide-to-the-british-professor?r=2cl55d&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true), but it does appear to be better suited to understanding something like Parkinson's disease. (I've outlined the issue of 'complexity levels' in my article - link in my comment above.)

Suzi Travis's avatar

Yes! Great question. You're a step ahead of me. At what level should we study certain phenomena? This is a fascinating question (and it's also the topic I will discuss in next week's essay).

Tom Rearick's avatar

Eric Jonas and Konrad Kording have challenged the idea that neuroscientists could figure out how a 1970s-era microprocessor works by using the same reductionist techniques used to study the human brain.

Jonas and Kording simulated a 1975-era MOS 6502 microprocessor (in-circuit emulation, or ICE) on a modern computer. The MOS 6502 chip was used in the Apple I, the Commodore 64, and the Atari video game system. To elicit different behaviors, they ran three video games on the simulator: Donkey Kong, Space Invaders, and Pitfall. Then they applied standard neuroscience analysis techniques to the simulated 6502:

• Ablation or lesion studies, in which a neuroscientist disables a neuron to determine how it affects the system’s behavior. Here, Jonas and Kording used the same technique to show that some transistors were necessary to run specific video games. You might call these “Donkey Kong transistors” - like the neuroscientist’s “grandmother cells”. But since we know everything about the 6502 microprocessor, we know that “Donkey Kong transistors” don’t exist.

• Spike train analysis correlates the on-off transitions or spike trains of individual neurons to behavior. In this experiment, Jonas and Kording hoped to correlate spike trains to pixel luminance but they had little success.

• Functional MRI (fMRI). Neuroscientists use fMRI to explore localized activity in the brain. The brain has regions of functional specialization, as do microprocessors. The analysis looks at the rhythms in brain regions and the distribution of power across frequencies as a function of task. But none of these techniques, applied to the simulated 6502, shed any light on its workings. At the scale of individual transistors and neurons, fMRI lacks sufficient resolving power.

• Spike-word analysis, Granger causality, and dimensionality reduction are other analytic tools of neuroscience that were tested on the 6502 connectome. Each contributed little to an understanding of the 6502.

According to Eric Jonas, “Most of my friends assumed that we’d pull out some insights about how the processor works, but what we extracted was so incredibly superficial. We saw that the processor has a clock, and it sometimes reads and writes to memory. Awesome, but in the real world, this would be a millions-of-dollars data set.”

The Jonas and Kording paper wasn’t without its critics. But using the same techniques on real brains is even less likely to be productive, because:

• a single transistor is much simpler than a neuron,

• there are few types of transistors in the 6502 but hundreds of types of neurons in the brain, and

• the 6502 has 3510 transistors, whereas the human brain has 86,000,000,000 neurons.

Their paper is a great read: Jonas E, Kording KP (2017) Could a Neuroscientist Understand a Microprocessor? PLoS Comput Biol 13(1): e1005268. https://doi.org/10.1371/journal.pcbi.1005268

Suzi Travis's avatar

Brilliant. Thanks, Tom!

Could a Neuroscientist Understand a Microprocessor? is a fantastic paper! When it was published, I presented it in a lab meeting, and it sparked a very lively discussion. It might be a good one to discuss here. I'll add it to my ideas list...

Eric Borg's avatar

We conscious beings function teleologically. This is to say that we’re purpose based and so must figure things out. Evolution doesn’t function this way however. We merely perceive purpose given our own purposeful nature. So this “blind watchmaker” is instead referred to as “teleonomy”. The point of Suzi’s article seems to be that our teleology puts us at a fundamental disadvantage when compared against the wonders of evolution that needn’t figure anything out. I quite agree.

One thing more, since I can’t resist. If we do function with purpose, then what might that purpose ultimately be? Economists reduce this back to “utility”, a fancy way of saying that our purpose is to feel good rather than bad from moment to moment. Furthermore, with this foundation they’ve developed a vast collection of effective models regarding our nature. There is no replication crisis in this field. Psychologists, however, seem prevented from reducing our purpose back to such a blatantly hedonistic premise, possibly because of our moral inclinations. Could this help explain the field’s inability to develop effective basic models regarding our nature? That’s what I suspect.

Suzi Travis's avatar

Hi Eric! You’re ten steps ahead, as usual! In this essay, I wanted to explore how simple deterministic rules can give rise to complex, unpredictable patterns and behaviors—like in Rule 110 or the development of a tree from its DNA. My goal was to raise some questions about the nature of information, complexity, determinism, and predictability. The relationship between purposeful and non-purposeful processes (teleology vs. teleonomy) is a fascinating one, and you’re right that it’s related. But it deserves a deeper dive. I plan to unpack it in future essays.

On your point about economics and psychology, I feel like I need to defend psychology a little!

Yes, psychology has faced methodological challenges, and some results haven’t been consistently replicated. But this isn’t unique to psychology—behavioral economics has encountered its own replication challenges. Some key findings in that field have struggled to replicate reliably, highlighting similar issues.

That said, I’d argue that both fields contribute unique insights. Economics’ utility models provide powerful tools for modeling certain aspects of human behavior, but they don’t fully account for the complexity of our decision-making and lived experiences—something psychology is uniquely positioned to study. Both disciplines have their strengths and limitations in understanding human nature, and I think they complement each other more than they conflict.

But this is definitely a discussion for another time—I’ll have to write an essay just for it!

Eric Borg's avatar

Sounds wonderful Suzi! Take a deep dive on “purpose” as well as the relationship between psychology and economics whenever they work with the rest of your topics. I like listening to your posts when I wake up early Tuesday mornings. My comment above was simply what came to mind before I needed to leave for work.

I should also say that I don’t consider psychologists to be bad scientists. It’s more a belief that the deck has been stacked against mental and behavioral forms of science in general, with the most central taking the brunt. Apparently economics was non-central enough to become hedonistically founded. Conversely the evolved social tool of morality may not have permitted psychology to also go this way. I think they all need to become so founded. Thus I consider the replication crisis to be a mere symptom of a much deeper (rather than just methodological) problem. If I’m right however, fortunately this should also be quite fixable.

Lately I’ve been toying with the idea of writing a Substack called “Founding Psychology”. Sounds pretty benign, doesn’t it? Well not when it becomes clear that I mean this quite literally! Haha! We’ll see…

Suzi Travis's avatar

Hahaha, Founding Psychology! Wilhelm Wundt and William James might be stirring in their graves... though I'm not sure if they'd be more intrigued or have their noses out of joint!

Eric Borg's avatar

My ideas shouldn’t even slightly jeopardize the status of “original gangstas” like Wundt and James. So I doubt their noses would be out of joint today by a radical like me. But it’s true that we take pride in the various teams we join. So when solid arguments are made suggesting that these teams aren’t quite as good as we like to hope they are, this can get the best of us. You probably know a good psychological term for this sort of bias? Since 2014 when I began blogging I know that I’ve upset lots of people this way. I hope not to upset you as well Suzi!

Upon reflection however, “Founding Psychology” may be a bit modest. A bigger question opens up when I get into why economics was socially permitted to become hedonistically founded, while more central fields like psychology were not? Perhaps leaving metaphysics, epistemology, and (most importantly here) axiology in the hands of people who feel no obligation to reach agreed upon solutions, has generally put science on rocky ground? So I’ll probably go with a title more like “Founding Psychology (and science in general)”. 🙂

Suzi Travis's avatar

Don't worry, Eric — I'm not easily upset. I highly value perspectives that challenge my assumptions. Kindness and respectful intellectual challenge mean more to me than simple agreement. So unless you resort to ad hominem attacks (which I can't imagine you doing), you don't need to worry about upsetting me.

On the biases in how we identify with academic 'teams' -- cognitive dissonance and confirmation bias are probably the most relevant psychological terms here. We all fall prey to them sometimes! Indeed, I think cognitive dissonance can explain a lot of human behaviour.

Your comment has got me thinking about how certain fields developed and how their philosophical roots still shape them today. The idea of hedonistic vs. non-hedonistic foundations is really interesting. While we might not agree on whether psychology needs a 'founding' in the traditional sense, I think you’ve raised some important questions about the philosophical assumptions behind different scientific fields. I often wonder how much our broader worldview influences our understanding of human behaviour. After all, we are shaped by what we know, and what we know is determined by our experiences. We can't escape our biases, even when we try to be objective.

Eric Borg's avatar

Okay this might be my new Substack. Hard to say until they print it out. Unfortunately they wouldn’t let me use the old one with a new title because I originally did it with their “Quick Start” option. And even though for now there’s no content, it’s probably best for me to start using the new one anyway.

On kindness and intellectual challenge meaning more than simple agreement, I don’t know who would disagree with that in a theoretical sense. “Practicing what we preach” however can still be difficult, or the cognitive dissonance that you mentioned. Why? Because of psychology itself. And what is “psychology”? It’s a field of study that I believe should only provide effective basic answers to such questions, by means of a hedonistic founding premise. The fortunate thing about the approach you mentioned Suzi, is that it can be wickedly effective! I call this sort of rhetoric “Gandhi style”.

Even though I think I have effective answers from which to potentially found psychology upon the same premise that successfully founds economics, I also find it difficult to convey my answers in satisfying ways. Because my ideas in general connect up with a consistent unified whole, every time I open up a given position I feel like its justification mandates that I get into related dynamics. That’s where I tend to lose people. I guess this is why I most enjoy commenting on the work of others. That way I can reduce things back to my own themes without feeling like I need to necessarily connect it all back to an underlying whole. I guess when “singing a cappella” I need to just accept that I should only be able to present my ideas in small digestible bites, and so swallow the many connections that I’d ideally like to make. Because I have the rest of my life to make such connections, I need to be patient. Again, we’ll see!

Terry underwood's avatar

Let me see if I have this straight. First off, because we have Rule 110, I assume we have Rule 109 and Rule 111, right? Something special about Rule 110… Using 110 because that’s what’s in front of us, what would happen if we tried to understand the stable yet random behavior of a reader who alternates between science fiction and science fact? So we have a scientist's reading pattern through five levels using Rule 110. Check my work, Suzi.

Starting with a predominantly science-based pattern with one SF intrusion:

Level 1:

S S S SF S S

(A science researcher who picks up "Contact" by Carl Sagan. I know you enjoy analyzing science fiction e.g. your use of the movie Her and Deus Ex Machina to illuminate actual brain activity)

Level 2:

S S SF SF SF S

(The Sagan text sparks curiosity, leading to more indulgence in imaginative thinking)

Level 3:

S SF SF S SF SF

(Scientific mindset begins merging with imaginative exploration)

Level 4:

SF SF S SF SF S

(Pattern of increasing comfort with fictional thinking)

Level 5:

SF S SF SF S SF

(A new equilibrium emerges - balanced between fact and speculation)

The initial mutation of SF in a science-heavy pattern acts like a seed crystal. Each generation shows increasing integration of imaginary thinking. By level 5, the methodical scientist has evolved into a more hybrid thinker.

I’m not at all confident in my understanding of Rule 110, so I may be totally wrong. Even so, I wonder what this sort of analysis suggests about SF and S preferences in the early elementary grades. It could be interesting to do phenomenological interviews with scientists to gain insight into the role of science fiction in motivating young, up-and-coming scientists in first grade.

Malcolm Storey's avatar

"110" is actually the rule expressed as a binary number, written in decimal (look it up on Wikipedia). So it's the code for the rule, not a "name" as such. And yes, all the other numbered rules exist too, but this is the one that its inventor chose as being the most interesting - so it's fine-tuned!
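Malcolm's point about the number being the rule itself can be made concrete with a small sketch (Python, and the helper name `rule_table` is my own invention, not anything from the article): the eight possible three-cell neighbourhoods, read as binary numbers 7 down to 0, index the bits of the rule number.

```python
# Sketch of Wolfram's numbering scheme: a rule number 0-255 is the
# rule's entire lookup table packed into eight bits.

def rule_table(n):
    """Decode a Wolfram rule number (0-255) into a neighbourhood table."""
    bits = f"{n:08b}"                      # e.g. 110 -> '01101110'
    return {
        # neighbourhood with value 7 (i.e. 1,1,1) gets the most
        # significant bit, neighbourhood 0 (i.e. 0,0,0) the least.
        tuple(int(b) for b in f"{7 - i:03b}"): int(bits[i])
        for i in range(8)
    }

table = rule_table(110)
print(table[(1, 1, 0)])  # 1: under Rule 110 this neighbourhood turns black
```

So "Rule 110" is shorthand for the binary string 01101110, and Rules 0 through 255 simply enumerate every possible table of this kind.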

Malcolm Storey's avatar

Unless you can either come up with a convincing reason why that equilibrium should build like Rule 110, OR demonstrate experimentally that the results correlate, it's just an interesting idea.

However, it turns out that you can build a Turing machine using Rule 110 (see Wikipedia), so any mechanism you might discover for the development of scientists' minds (or anything else) could be modelled using Rule 110 — including Rule 110 itself.

Terry underwood's avatar

I prefer to let Suzi comment, Malcolm. Wikipedia isn’t Suzi:) I hope it’s an interesting idea! I’m just curious to see what Suzi says. She is a fabulous teacher.

Suzi Travis's avatar

Thanks Malcolm! I always thought Rule 30 was also Turing complete, but it turns out that's an open question. Wolfram has suggested it may be too chaotic to be Turing complete, but there is no proof in either direction.

Suzi Travis's avatar

Hi Terry! Wow! That’s a really creative way to think about it!

First, yes! You’re correct that Rule 110 is one of 256 possible elementary cellular automata rules (numbered from 0 to 255). They are numbered according to a formula (or code).

The rules come from the work of Stephen Wolfram. He wrote a hefty book back in 2002 titled A New Kind of Science, where he explains this all in much more detail. It’s a big book, but it’s surprisingly accessible.

What makes Rule 110 special is that it’s proven to be “computationally universal” or “Turing complete.” That’s just a fancy way of saying it can simulate any computation a computer can perform, given the right initial conditions and enough time. (Interestingly, there is currently much debate about whether the algorithms behind LLMs like ChatGPT are Turing complete.)

On your application of Rule 110 to reading patterns… there are a few key differences between how Rule 110 works and how reading patterns progress that might make your analogy a bit of a stretch.

One thing that strikes me is that Rule 110 operates on binary states (usually shown as black/white cells), with each new cell’s state determined by looking at itself and its two neighbors in the previous row, following specific rules. Reading, in contrast, doesn’t seem strictly binary—it feels more continuous and influenced by overlapping factors like emotions, curiosity, or prior knowledge. It might follow more of an evolutionary or diffusion model, where preferences and habits spread in a nonlinear way, with influences coming from unexpected places like peers or cultural trends.

In other words, your application might be skipping over some levels of complexity.
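Out of curiosity, one can literally encode your Level 1 as bits (S = 0, SF = 1) and run it through the actual Rule 110 table. A quick self-contained sketch (Python; treating cells beyond the edges as S, which is one convention among several):

```python
# Terry's Level 1 pattern, run through the real Rule 110 once.
# S = 0, SF = 1; cells beyond the edges are treated as S (0).

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    padded = [0] + list(row) + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

level1 = [0, 0, 0, 1, 0, 0]   # S S S SF S S
print(step(level1))            # [0, 0, 1, 1, 0, 0] -> S S SF SF S S
```

Notice the actual next generation, S S SF SF S S, is close to your Level 2 (S S SF SF SF S) but not identical — which illustrates the point: Rule 110's trajectory is fixed by the table, whereas a reader's trajectory has many more degrees of freedom.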

That being said, your idea is an interesting one. At its core, the notion that small “mutations” in reading patterns could lead to large behavioral shifts in how people engage with science facts and science fiction is fascinating. This ties into an idea I’ve been thinking about for some upcoming essays—the concept of algorithmic growth. This is the idea that simple, rule-based systems can generate complex and unpredictable behavior over time. Your suggestion about science fiction “seeding” curiosity in scientists is a great example of how small interventions can trigger a cascade of intellectual exploration.

Learning leaves its traces. And learning has to happen in time, with each new bit of knowledge absorbed by a system already in a particular state. This idea aligns well with your suggestion about how early exposure to science fiction might shape scientific minds. Every new idea is filtered through the “rules” we’re already operating under, much like how Rule 110 evolves based on its initial conditions.

Science fiction might spark curiosity—it could help children build an early model of “what if?” thinking, encouraging them to consider alternative explanations and possibilities. A phenomenological approach, as you suggest, could give us valuable insight into how those seeds of imagination grow alongside analytical skills.

Also, language itself is a fascinating topic when we think about information! It’s a complex one and deserves some unpacking, but I’ll definitely be exploring it more in future essays.

Terry underwood's avatar

I understand. Though we can agree that there is a sociocultural space in consciousness, the third space, it doesn’t follow the rules of physical ontological space (the second space) or atomic space (the first). You cited a specific theorist who developed this model but the name escapes me. Binary cells that change depending on what is adjacent to them (same or different color) aren’t analogous to reading a science fiction book vs a science fact book — but your discussion of the arousal of scientific curiosity via science fiction confirms my thinking. It likely differs from history vs historical fiction, where reading a small digression in a historical account of a period or event could arouse interest in reading a specific work of historical fiction, but not necessarily become part of a pattern. One can’t compute cells that aren’t rule-based but change for multiple possible reasons. Btw, no way can ChatGPT or any LLM I’ve used compute anything that is computable (which SF vs S book reading is not). They have a hard time alphabetizing, for example.

I just ordered Wolfram’s book. He’s a seminal thinker I knew nothing about. Thanks for the recommendation. As always, thank you for the opportunity to learn from you!

Suzi Travis's avatar

Ah! You ordered Wolfram's Brick! It's a great read, plus you can use it as a doorstop, paperweight, or dumbbell.

Terry underwood's avatar

Arrived today! I had to use a cart to bring it in the house:)

Suzi Travis's avatar

🤣🤣🤣

Saj's avatar
Jan 14 (edited)

Complexity from simplicity is like magic - it happens right under our noses but we still can't see how it works - which is why it's been difficult to translate neuroscience to clinical practice in mental health. I recently wrote a short piece on this (https://sajmalhi.substack.com/p/the-disappointment-of-neuroscience?r=2cl55d).

Really interesting article Suzi and I look forward to reading the follow-up!

Suzi Travis's avatar

Thanks Saj! I'm looking forward to reading yours. I've added it to my to-read list.

Saj's avatar

Great, let me know what you think.

Michael Vigne's avatar

This is very good and I will give it a deeper read because it deserves it. I will restack it too. I saw this example in the appendix of Sapolsky's book 'Determined...'. It is important to recognise that something may be both unpredictable and deterministic. Empirically, the problem is often that we cannot establish sufficient precision in starting conditions to make something repeatable.

Another thing to consider is the role of attractors in chaos theory. I have mentioned Tim Palmer's pendulum on several occasions and it is relevant once again here. Think of a desk-toy decision maker consisting of a ferrous pendulum and four magnetic bases, each corresponding to an 'answer'. In principle there are an infinite number of starting conditions but only four attractor end states; each of the four end states has an unlimited number of starting conditions. We might say that immense complexity at the input can produce a surprisingly limited number of outputs - if those output states are suitably constrained. I won't go into it here, but this is also why my skin crawls when people talk about simplifying engineering systems when they really mean something else. It is a context where I find the concept of 'simplification' to be a myth.
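The desk-toy decision maker can be sketched numerically. This is a rough toy model, not Tim Palmer's actual pendulum: all the parameter values (spring constant, damping, magnet strength, softening term) are invented for illustration, and the integration is a simple semi-implicit Euler scheme.

```python
# Toy magnetic pendulum: a damped bob in a bowl-shaped potential,
# pulled toward four magnets. Infinitely many starting points, but
# only four discrete end states.

MAGNETS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def settle(x, y, vx=0.0, vy=0.0, dt=0.01, steps=8000,
           spring=0.5, damping=0.3, pull=1.0):
    """Integrate the bob's motion and return the index (0-3) of the
    magnet it ends up nearest to."""
    for _ in range(steps):
        # Restoring force toward the centre, plus velocity damping.
        ax = -spring * x - damping * vx
        ay = -spring * y - damping * vy
        # Softened inverse-square attraction toward each magnet.
        for mx, my in MAGNETS:
            dx, dy = mx - x, my - y
            d3 = (dx * dx + dy * dy + 0.05) ** 1.5
            ax += pull * dx / d3
            ay += pull * dy / d3
        vx += ax * dt; vy += ay * dt
        x += vx * dt;  y += vy * dt
    return min(range(4), key=lambda i: (MAGNETS[i][0] - x) ** 2
                                     + (MAGNETS[i][1] - y) ** 2)

print(settle(0.9, 0.8))    # one of 0..3; fully determined by the start
print(settle(-2.0, 1.5))   # a different starting point
```

The outcome is deterministic for any given start, yet near the basin boundaries a tiny change in the starting point can flip the answer, which is exactly the "unpredictable but deterministic" point above.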

Malcolm Storey's avatar

Aren't you guilty of simplification here though? In the real world the magnets aren't points so there are a huge number of possible end states depending on exactly where on the magnet the pendulum ends up.

If you don't simplify your engineering calcs you have to start from Schrodinger every time.

Michael Vigne's avatar

I don't think so. My claim was that each of the four magnetic bases (according to the analogy) represents a possible end-state. I understand what you mean when you say the magnets are not points, so let me modify the example. Imagine instead that each of the four base magnets also functions as a make-to-break proximity switch, each in series with a live lamp circuit. The resting place of the pendulum bob would therefore cause the associated lamp to go out. This means that there are four discrete end-states, each of which is mapped to an infinite number of starting states. The magnets in this analogy are chaotic attractors. Schrödinger doesn't provide a solution to anything at the macro scale, and that is what his cat shows us. Don't be distracted by my 'simplification' comment; I am referring to a construct I developed in 2016 to do with systems engineering. I shouldn't have assumed that my reference to it would make sense on its own.

Malcolm Storey's avatar

I don't want to make a big thing of this but Planck might have something to say about "an infinite number of starting states".

And that's not what I take from S's cat.

Physics (and hence engineering) starts from smooth planes and light strings.

As Einstein famously said, "Everything should be made as simple as possible, but no simpler."

Suzi Travis's avatar

Thanks, Malcolm and Michael! I have a question: if quantum mechanics challenges the concept of infinite starting states, could we think of smooth planes and light strings as idealised systems too? If so, how does our understanding shift when we move from these simplified models to more complex, real-world systems? In other words, how should we define complexity in this context? I’d love to hear your thoughts!

Malcolm Storey's avatar

My experience of "smooth planes" and "light strings" was in school physics where it was a code for "you can ignore friction" and "you can ignore the weight of the string". But yes it is simplification and that's where you'd start.

The next step would be to include the rest of physics at the level then understood and with the then available computation methods. (So it depends when in human history you're doing it)

If your product is to be mass produced that's where you stop, but if you're building a one-off like a bridge you then need to move from the general to the specific and look at eg, the ground structure beneath.

But you're still generalising - you assume the strength of the concrete will be whatever the book says. It's not until a bridge fails that you get to the final stage and you can then analyse where the weak points were in the concrete cos it wasn't laid in a single go (or where a worker took a poo in your box girder bridge)

Any problem using "smooth planes" and "light strings" is obviously a thought experiment, but so is designing anything and, in a sense, the final product is a simulation of your abstract design. When you assemble flat-pack furniture you're building a simulation of the design in the instructions (in case you're not convinced about the relationship, the design came first and you can't simulate something that doesn't exist, at least as a concept).

You asked for "thoughts" but I'm not sure I answered how our understanding shifts: I guess it's just repeatedly drilling down into the detail.

Expand full comment
Michael Vigne's avatar

Some interesting thoughts here Malcolm, but this is not how engineering design works. It is not just a question of what is 'in the book' but the standards that are being applied - these are not the same things. Quite often our processes, be they for welds, materials, composites etc., have to be qualified (to certain standards). That is generally through cyclic testing, reliability engineering and the use of tools like FMEA, which come under the general heading of 'physics of failure' models. We can model environmental stressors and degradation rates - we use Arrhenius, for example, to model the breakdown of materials. There are also safety factors and provisions for assurance, completion and acceptance. I have made several attempts at writing a series (#16) where I expand on all these themes and techniques but have struggled to make it palatable. I am writing longer comments because I may cut my losses on that one.

Expand full comment
Michael Vigne's avatar

Yes that has to be right Suzi and it ties my answer to Malcolm. We have to remember that this is about models and certain assumptions so as you recognise, there is a difference between ideal and actual scenarios. Ideal states and systems give us a way to isolate phenomena and understand them. Any idealised set of conditions are effectively an abstraction of the real-world situation, or a way to understand it, net of inconveniences like turbulence, friction and degradation or the fact that when we think of an imaginary line that can be divided an infinite number of times it does not suffer from the real world limitation of being constructed out of particles. The exact same thing can be said for ‘smooth planes’ - it’s a concept freed from physical science despite being a model for it.

My claim is that true simplification is a myth. The complexity never goes away. You can either make something easier to understand or easier to do... but not both at the same time. If you ‘simplify’ the solving of a problem by using a computer the complexity of the problem has been reduced in your mind, precisely because the complexity is now in the machine. It has not gone away. So as per Malcolm - "Physics (and hence engineering) starts from smooth planes and light strings" is an abstraction. But notice that if you also allow yourself the convenience of thinking in terms of continuous planes you are effectively signing up to the concept of an infinite number of discrete points. You can argue against that but in practical situations we eventually come to realise it is moot.

Expand full comment
Suzi Travis's avatar

Yes, this is exactly what I suspected — complexity never truly goes away, and simplification is indeed a myth. The question I'm really fascinated by is: What do we lose when we simplify a problem? If we simplify in order to understand better, we inevitably ignore details. But if we don't fully understand the system, how can we be sure that the details we choose to ignore aren't crucial?

Expand full comment
Michael Vigne's avatar

Malcolm - taking your points in order.

1

In my first comment I said that 'in principle there are an infinite number of starting conditions'. I could have said instead, 'for all practical purposes there are an unlimited number of starting conditions...', by which I mean we could never exhaust the number of initial conditions at any level of resolution. We don't think about Planck when we are talking about a fractal that goes on forever. Similarly, we can think of any two points on an imaginary line and infer that there is always an intermediate point between them, even though, as we reach down into the scale of particle physics, this stops making sense.

These are theoretical claims; we can never physically demonstrate infinity, but we can mathematically. Take a computer simulation of a Koch snowflake, where we can (in principle) zoom in forever.

There is a sleight of hand here, because for the visualisation to work it needs to 'zoom in', but in fact what it is doing is scaling up the image. Now say the image is scaled until what was once the size of a pinhead, represented by a single pixel, fills the screen - we are looking at the same pattern repeated. Were we able to 'zoom in' a further 12 orders of magnitude, we would still be looking at the same pattern, even though, were it possible to translate that into physical reality, we would be operating at the scale of particle physics. It is a mathematical abstraction that doesn't have a counterpart in the real world - so the simulation can take us to a theoretical space that would fail in reality.
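To make the point concrete, here is a minimal sketch (my own illustration, assuming a depth-n Koch snowflake built from an equilateral triangle of side 1). Each iteration replaces every segment with four segments at one third the length, so the same detail repeats at every scale while the perimeter grows without bound:

```python
# Koch snowflake statistics at recursion depth n.
# Every edge is replaced by 4 edges, each 1/3 as long, so the
# pattern repeats at every scale and the perimeter diverges.

def koch_stats(depth):
    segments = 3 * 4 ** depth          # number of edges
    seg_length = (1 / 3) ** depth      # length of each edge
    return segments, segments * seg_length  # (count, total perimeter)

for d in (0, 1, 2, 10):
    n, perimeter = koch_stats(d)
    print(f"depth {d}: {n} segments, perimeter {perimeter:.3f}")
```

No matter how far you 'zoom' (i.e. scale), the numbers keep growing by the same factor of 4/3 per step - the abstraction never bottoms out the way physical matter does.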

Back to the pendulum: the circle described by the bob for any given angle to the vertical can be thought of as continuous. Any two points have an intermediate point, so we can say there is an indefinite number of points that is, for all practical purposes, infinite. The angle to the vertical is also continuous, so we can make the same argument there for a practically unlimited number of intermediate angles. We have a conical space that includes all the possible starting positions, even though they are in practical terms uncountable. Yet the initial conditions depend on many other confounding factors. Even if we only let go of the bob rather than apply a force, there are frictional effects at the fingers, affected by temperature, moisture etc., that must impact the initial acceleration. Don't misunderstand me, I am happy to assume we have ideal frictionless conditions, but what I am saying is that the assumption is no better than my use of the term 'infinite'. All I was saying is that an unimaginably large set of starting conditions maps to four outcomes.

2

The point Schrödinger was trying to illustrate was how counter-intuitive quantum indeterminism is, particularly if we scale it up to something relatable at the level of our experience. Remember that the indeterminacy resided with the subatomic particle and not the cat. The inference was that if the particle was in a superposition, and that superposition determined whether or not the cat was alive or dead, then the cat's life must also be in a superposition. This was a scenario that Schrödinger himself described as 'ridiculous'. So I stand by: "Schrödinger doesn't provide a solution to anything at the macro scale", because whether or not the cat is alive is just an outcome determined by the particle.

I think what has happened since is that it's invited speculation over whether quantum behaviour should be scalable. So when people talk about wave function collapse, they mean that in order to get an outcome, the entire set of counterfactual outcomes must disappear from existence. This is where the many-worlds interpretation comes from. It supposes that when the wave function collapses to a value in our dimension, it also collapses to all the other values in equally numerous other dimensions or 'worlds'. This would seem to be unfalsifiable, and frankly to me it reeks of being a massive cop-out. Sabine Hossenfelder is very critical of theories that are based solely on the search for symmetry or 'beauty', and I think this probably falls into that category of theoretical physics.

3

Physics started with observation and empiricism. Later we learned to think in terms of ideal conditions so we could isolate phenomena from the noise.

4

Einstein was talking about abstraction.

Expand full comment
Malcolm Storey's avatar

Thanks for your reply. (And thanks for being so gentle!)

My background is IT (as you probably guessed), but I've watched the unravelling of an unreasonably large number of strategic errors in the civil engineering industry (box girders, HAC, cladding, RAAC) here in the UK.

It's disappointing that an industry so regulated and standardised still gets so many things wrong at such scale (but scale is the flipside of standardisation).

re S's cat: either the cat is an observer, or the cat is in a superposition which is resolved when you observe it. Of course, if you're going to allow superposition, then those two interpretations could also be superposed. S's cat is a single-strand causality chain arising from a quantum event in a thought experiment. We have no experimental data to know what is reasonable and what is ridiculous in such a cross-over experiment (but a beryllium atom passes through both slits - and even buckyballs, I just learnt - in size a buckyball is close to halfway between an electron and a cat. If a buckyball can exist as a wave, maybe a cat can too? We're going to need a bigger slit!).

re search for symmetry or ‘beauty’: this is the search for minimum information solutions so ties in with modern interpretations of Occam's razor. OR merely chooses a starting point hypothesis for you to try to break.

3. I was talking about product development, not history.

Expand full comment
Michael Vigne's avatar

Thanks Malcolm. I think we can probably park the quantum mechanics there, because I dare say we are not going to come up with an intuitive explanation for the double slit experiment in this thread. I was going to add in a previous response that I don't know much about civils beyond noticing massive differences in standards between the UK (where I am from) and the USA (where I am). These casual (non-professional) observations apply to foundations, construction methods, electrics, plumbing etc. In general, a good standard is tolerant of error and should be capable of being audited in the end product. I would be interested to know of the failings you have observed in civils. Are we talking about cowboy builders?

Product development is different and I may have missed your earlier point. Again standardisation and qualification of the deltas (where innovation takes us beyond standard practice) are part of that. I think you are making a point about the difference between the bespoke and the mass-produced. Those are different and even the way we prototype is not the same. For mass-production we have to define the tooling because the manufacturing must also be part of the design case.

Expand full comment
Suzi Travis's avatar

I love this back-and-forth!

The engineering side is a little outside my wheelhouse, but I’m curious about how the magnets are treated as point objects versus extended objects and whether we truly have “infinite” starting states, given quantum mechanics. Could this difference arise because we’re discussing the system at different levels of description? This reminds me of what we see in neural systems, where we constantly navigate between levels of abstraction—from molecular mechanisms to cellular behavior to network dynamics—and our descriptions shift accordingly.

If I understand correctly, your modified example with the lamp circuits is really clever because it explicitly defines discrete output states while preserving the core insight about attractor dynamics. This parallels what we observe in brain development: despite seemingly infinite possible developmental trajectories, the system converges on functional patterns of neural organisation—resulting in remarkably similar brains across individuals.

I’d love to hear more about your 2016 work on systems engineering. Could you share a link to that work?

Expand full comment
Michael Vigne's avatar

I think it is actually a very good analogy for neural networks and beyond that (I suggest in Series 15) free-will. Obviously the basic brain structure is determined by genetics and from there it is environmentally conditioned. I notice that Erik Hoel mentioned that as the brain specialises it effectively allows the unused connections to die in a similar way to natural selection. I have a lot of thoughts on this analogy that I hope to get back to at some point - suffice to say that in the general sense I agree. The convergence you mention is due to the fact that we live in similar environments. However, if you grow up in a home where mother is a concert pianist and father plays the cello, the chances are any musical bent you have will thrive.

I have written a number of documents for various clients in aerospace, energy, defence that I cannot share - the concepts are mine but the applications are confidential. I may be able to share a video presentation from 2016 called ‘The Availability Construct’. It is online but hidden on YouTube but it’s not client specific. I will need to review to see if I really want to share it and if so, given it is about an hour long, find you a timestamp.

Here is a link to a comment I made to another one of your threads (which I don't think you saw) that I think may be relevant to this piece.

https://substack.com/@michaelvigne/note/c-77453559

Expand full comment
Suzi Travis's avatar

Thanks, Michael! There are many reasons why I love Substack, but the ability to keep track of threads is not one of them.

Expand full comment
John's avatar

Nice essay. I first met Conway's game properly via William Poundstone's small book on complexity. Then I played around with it on old computers, coding it (very inefficiently) in Algol60, as it was all I knew and had access to a compiler for. Great memories AND complexity theory with no maths! I do enjoy these weekly posts and love the way you hook your audience into areas that I'm pretty sure they wouldn't necessarily connect with recreationally - it has a nice integrated cross-disciplinary vibe. Bit like physiology used to have for me (no expert, just a fan of the discipline). Thank you yet again :)

Expand full comment
Suzi Travis's avatar

Thanks so much for the kind words, John!

That's amazing that you got to play around with Conway’s Game of Life on those old computers, even with Algol60! That would have been a rare thing.

I’m really glad you’re enjoying the posts. I do like the cross-disciplinary topics. It's always fun to bring together different areas and see where they connect and where we might learn from each other.

Expand full comment
Malcolm Storey's avatar

It seems to me there's something special about integers or quantized systems, in that they often have emergent properties in a way that continuous variables/real numbers wouldn't. Makes me wonder if the universe has to be quantized at the lowest scale. But then I find it equally unbelievable that 4-space is quantized or that it is infinitely divisible.

Expand full comment
James Cross's avatar

What's interesting is that for a pattern to become apparent, it requires projection onto a two-spatial-dimensional grid that forms in a sequential (temporal?) manner. The grid (2+1 dimensions) in effect becomes a "hidden" partner in the computation.

Interesting parallel with Northoff and others:

Georg Northoff, Soren Wainio-Theberge, Kathinka Evers, "Is temporo-spatial dynamics the “common currency” of brain and mind? In Quest of “Spatiotemporal Neuroscience”", Physics of Life Reviews, Volume 33, 2020, Pages 34-54, ISSN 1571-0645, https://doi.org/10.1016/j.plrev.2019.05.002

Perhaps even more far out are views that consciousness works in higher dimension(s) than the 3+1D of physics.

Expand full comment
Suzi Travis's avatar

What a fascinating observation! You're absolutely right — it’s remarkable how Rule 110’s patterns only emerge when we consider both space and time. I think time — the way the brain unfolds through it — is a dimension we too often overlook in neuroscience.
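For anyone who wants to see this for themselves, here's a minimal Rule 110 sketch (my own illustration; it assumes a fixed-width row with cells beyond the edges treated as 0). Each printed row is one time step, and the familiar triangles only appear when the rows are stacked - that is, in the space-time picture, never in any single row:

```python
RULE = 110  # the binary digits of 110 give the next state for
            # each 3-cell neighbourhood, from 111 down to 000

def step(row):
    """Apply Rule 110 once, looking only at each cell and its
    immediate neighbours (edges padded with 0)."""
    new = []
    for i in range(len(row)):
        left = row[i - 1] if i > 0 else 0
        right = row[i + 1] if i < len(row) - 1 else 0
        idx = (left << 2) | (row[i] << 1) | right  # neighbourhood as 0..7
        new.append((RULE >> idx) & 1)              # look up the rule bit
    return new

row = [0] * 31 + [1]  # start from a single live cell
for _ in range(16):
    print(''.join('#' if c else '.' for c in row))  # one time step per line
    row = step(row)
```

Each cell "looks" at just three cells, exactly as described above - the triangles are purely a property of the stacked output.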

Expand full comment
dystopianAi's avatar

very fascinating article.

Expand full comment
Suzi Travis's avatar

Thank you!

Expand full comment
Wyrd Smythe's avatar

I'm a bit late to the party! I believe there are those actively studying what I think of as "assembly theory" (except that term means something specific that isn't what I mean) -- this astonishing property of reality to create complex structure from simple building blocks, basic rules, time, and energy. It's a kind of anti-entropy that allows us to exist. IIRC, I think it's Nagel who argues this might be a sign of a teleological universe.

Your mention of John Conway's Life before your discussion of 1D automata perked up my ears, it's a favorite topic. Other commentors have mentioned coding it. Likewise. In just about every major language I've used. It's a good problem for beginners because it's relatively easy but interesting enough to be chewy, and it has a nice educational design gotcha lurking in it (the need to double-buffer).
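For readers who haven't hit that gotcha yet, here's a minimal sketch (my own, assuming a finite grid with dead cells beyond the edges): the next generation must be written to a second buffer, because updating cells in place would let already-updated cells corrupt their neighbours' live counts.

```python
def life_step(grid):
    """One generation of Conway's Life (B3/S23), double-buffered."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]  # the second buffer
    for r in range(rows):
        for c in range(cols):
            live = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            # Birth on exactly 3 neighbours; survival on 2 or 3.
            nxt[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return nxt

# A blinker oscillates between a horizontal and a vertical bar.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
```

Writing into `nxt` rather than `grid` is the whole trick; collapse them into one array and the blinker quietly dies.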

A while back I made a bunch of Life videos because I've always found it soothing to watch, but on many systems it's hard to compute it rapidly in real time. But it's easy to generate frames and stitch them into a movie. FWIW, here's my playlist of them:

https://www.youtube.com/playlist?list=PLeD-oQG--WF6jyCby9B5q85PNbegp_R3d

Expand full comment
Suzi Travis's avatar

I can't tell you how much I love your videos!!! I, too, find them so soothing to watch.

A bit later in the year, I'm going to write more about Conway's Game of Life. Would you mind if I embed some of your videos into my essays (with credit, of course)?

Expand full comment
Wyrd Smythe's avatar

Yeah, it’s nice to have Life play out on a 1920×1080 screen. I could only dream of implementing Life at that scale back in the day.

Help yourself to any video you like! I’ve been thinking I’d like to make some new ones, try some new things, especially 3D versions. The coding isn’t hard; it just takes a lot of time to generate all the frames. A five-minute video requires 7500 frames. 😜

Expand full comment
Suzi Travis's avatar

Thank you!

Expand full comment
Wyrd Smythe's avatar

You’re quite welcome!

Expand full comment