19 Comments
Glen Thomson

Oh, this is so rich!

Life, i.e. growth, is basically a work of art and an experiment from the get-go.

Mike Smith

It seems like, as technology continues to improve, the number of cases where the system self-constructs (grows) will increase. Eventually the boundaries we draw between life and machine will become blurred.

Although the distinction between evolved and engineered systems may be more durable. Living systems are survival machines with their own agendas. We build machines for particular purposes. They are extensions of our interests. I'm not sure how much market there will be for machines that self-actualize.

Of course, looking much further down the road, the boundary between how we reproduce and what we design might itself become blurred.

Interesting topic Suzi!

Suzi Travis

Thanks, Mike! Yes, I agree — the lines are definitely starting to blur.

You’re absolutely right that we’re likely to see more systems that "grow" or self-organise — especially as we push into more open-ended learning, robotics, and adaptive architectures. But whether those systems want anything — whether they develop needs or goals of their own — that’s something I wonder about.

This reminds me of the old "neat vs. scruffy" debate in AI: whether intelligence should come from clean, top-down logic (neat) or messy, evolving systems (scruffy). That tension still feels very alive today.

And then there’s the work Geoffrey Hinton’s been pushing — developing hardware where the learning isn’t separate from the physical chip it runs on, so the information those chips store is lost when the chips 'die'.

We are going to need a whole new vocabulary, aren't we!?

Mike Smith

I hadn't heard of Hinton's work. Sounds like something I need to look up!

Jack Render

Introducing forgetfulness or mortality into AI?? Asking for trouble, imo.

Jack Render

I think you underestimate the market for self-actualizing AI. Consider the rapidly growing market for AI companions. If they have a limitation it might be the lack of whimsy or their intrinsic predictability - add that and you suddenly have a real companion. Alternatively, if you could develop an AI that, upon detecting an unsolvable problem, experimented with coding until it could solve that problem - I'd guess they're already doing some of that, for obvious reasons.

Mike Smith

The thing to ask is, why would we want artificial companions? If all we want is self-actualizing companions, there are plenty of humans and pets around. But being self-actualizing, they often aren't interested in playing the companion role we'd like from them.

I doubt an artificial companion that sells well would be like that. It would have to be more reliable, always be there for us, always find us interesting, maybe always laugh at our jokes, etc. But the very thing that makes that appealing is what would prevent them from being like the natural companions already out there. They can't just be self-interested agents. They have to be agents whose primary impulses are to be our companions.

Jack Render

Yes, but I am imagining that some things could be programmatically hardwired into the AI, e.g., a filial connection that might allow some variance for whimsy but a basic fundamental attachment. I guess that currently the AI being marketed is what you suggest (it’s very big in Japan for both eldercare and youth “dating”), but whimsy is an essential human element, so I think it would find its niche.

John

I’m off to think about pseudo-intrinsic adaptation. This is a fascinating piece, as ever. Since the “Book of Life” was sequenced, something in my brain began to wonder about the ways in which it was a recipe across time rather than a static blueprint. Our understanding of this has progressed fantastically throughout my lifetime, even though it is still so very basic in its current rich complexity. Thanks Suzi.

Suzi Travis

Thanks, John! I am now going to include both “pseudo-intrinsic adaptation” and “a recipe across time” in my everyday vocabulary. Perfect!

And yes, it’s wild how far we’ve come, isn't it!? From thinking of DNA as a static instruction manual to seeing it more as a dynamic recipe. Even now, we’re just beginning to grasp how much of that process depends on interaction, timing, and feedback.

John

It may have been you who pointed me this way through your articles - Philip Ball’s “How Life Works” has been a treat too. Thanks again.

Eric Borg

To me the moral of the story is that life should always remain ridiculously beyond us in an engineering sense. We should never have the tools at our disposal to do the sorts of things that biology is able to do. Fortunately, I do think we should still be able to learn more and more about the wonders of biology. For example, let’s say I’m right that it becomes empirically concluded that consciousness exists by means of a neurally produced electromagnetic field. This discovery would be historically profound. Using it to actually build effective conscious machines, however, should be ridiculously difficult in both technical and ethical ways.

Wyrd Smythe

Wow, another great post, and so much meat on this bone I hardly know where to start.

For one, the difference between building and growing. Perhaps one difference lies in who or what does the building. In contrast to things we build, things that grow do it themselves (sometimes with help from a gardener or parent). A tree builds itself; someone else builds a doghouse from the wood.

The thing about both blueprints and DNA (and rule 110) is that without the larger process for implementing the information, the information does nothing. FWIW, my computer science background calls blueprints, DNA, and the automata rules "data" rather than "code", the latter being the algorithms that operate on the data.

An important aspect of this is that data requires a process to implement it: a builder (and materials) to implement a blueprint, an algorithm to implement automata rules, the biological engines that implement DNA (or RNA).

> "A blueprint tells you what the final structure will be before you begin. Rule 110 doesn’t."

I'm going to push back on that a little. As you say, rule 110 is deterministic and always has the same outcome. So, in a sense, rule 110 (plus the implementation process) does specify the final structure. DNA might be a more interesting case because, as you point out, environmental factors make its "blueprint" nondeterministic.

As an aside, there are blueprints and plans that are (at least somewhat) context dependent. For instance, Telco wiring diagrams with multiple options controlling how the switch behaves (which makes those diagrams hard to read sometimes).

> "Building is plan-driven. You know what you’re aiming for. Growing is process-driven. You have to let it unfold."

Oh, I like that. Good way to put it.

A couple of SF stories I've read feature "blank slate" AI that starts off nearly useless and learns to function over time just as a human does. In one, the humanoid robots had to go through a "teenage" phase that was just as annoying and obnoxious as human teens can be. Nature evolved brains to navigate an ever-changing world, and it seems entirely reasonable that AI would need to do the same.

As one more aside, I grinned about following an IKEA manual to "build" a bookcase. I'd call that "assembling" a bookcase. I've built bookcases and doghouses from my own designs. Which doesn't at all detract from your points but made me smile.

(And now I really want to code up a 1D automaton to see how big the code is.)

Suzi Travis

Thank you — this is such a rich response, I’m not sure where to start either!

Let’s begin with the point you raised about data vs. code. I didn’t know that! That’s an important distinction between computer science and my type of science. I’ll definitely keep that in mind. In my work, I usually use 'data' to refer to raw or processed numbers (like matrices), while code is what I use to operate on that data — to analyse, transform, or simulate. But with DNA, things get a bit trickier — is it data, or code? Or both?

Totally fair pushback on the blueprint/Rule 110 line. What I meant is: if you hand a builder a blueprint, they can usually picture the finished house before laying the first brick. But if you hand someone the rule set for Rule 110, it’s impossible to guess what will emerge without actually running it. So maybe the distinction I was reaching for isn’t about determinism, but about the kind of information we get from a blueprint versus Rule 110. But you’re absolutely right, maybe it doesn’t make sense to think of that information in isolation from the process that implements it.

And on IKEA bookcases — well… maybe what some of us call “assembling,” I call “building,” because let’s face it: that’s as close as I’m ever gonna get 😄

And yes, if you do end up coding up a 1D automaton, please share it. I’d love to see it.

Wyrd Smythe

One thing about the code/data distinction is that it can be context dependent. For example, a C++ program is code when it runs, but to the compiler that transforms the source text into executable code, the C++ program is just data (with the compiler code acting on it). DNA does seem a fuzzy example, but I lean towards calling it data that various cellular machines ("code") use as input. But I can see an argument that converting DNA into a protein — a functional unit — could be seen as similar to a compiler converting text data into an executable unit. Which would make the DNA code, just as C++ source text is seen as code. It really can be a fuzzy boundary.
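To make that concrete, here's a toy Python illustration of the same duality (the string contents and variable names are just made up for the example): the program sits around as plain text until something compiles and runs it.

```python
# The same text is data or code depending on who is looking at it.
source = "print('hello from data')"           # to us, just a string: data

program = compile(source, "<string>", "exec")  # the compiler treats it as input data
exec(program)                                  # now it executes: code
```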

A funny aspect of rule 110 versus a blueprint is that rule 110 always has the same result, but a builder might make acceptable changes to the building based on experience and creativity. I agree a blueprint gives a builder an immediate sense of the result, whereas rule 110, as you say, has to be run. The Mandelbrot is an even stronger example. No way to imagine the complex beauty of the result from the definition: zₙ₊₁ = (zₙ)² + c, iterated for each coordinate c of interest until zₙ either "escapes" or you give up on iterating. Even the iteration process is dead simple. But the result is breathtaking.
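If anyone wants to see that for themselves, a minimal escape-time sketch in Python might look like the following (the grid bounds, resolution, and iteration cap are arbitrary choices of mine):

```python
# Minimal Mandelbrot escape-time sketch: iterate z -> z*z + c for a grid of
# complex coordinates c and mark the points that never "escape".
WIDTH, HEIGHT, MAX_ITER = 80, 40, 50   # arbitrary text-sized grid

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # Map the character grid onto roughly -2..1 (real) by -1..1 (imaginary).
        c = complex(-2 + 3 * col / WIDTH, -1 + 2 * row / HEIGHT)
        z = 0j
        for _ in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2:        # once |z| > 2 the point provably escapes
                line += " "
                break
        else:
            line += "*"           # never escaped: likely in the set
    print(line)
```

A dozen-odd lines, and the famous shape appears in plain text.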

Heh, well, let's face it. The line between "building" and "assembling" is faint to the point of transparency. I was just being cute. People who build models would, I think, be quite adamant that they are builders, damn it, not assemblers. 😄

I did whip up a 1D automaton last night. When I clean up the code and add code to have it make bitmap images (rather than text output), I'll share it. You were quite right when you wrote they were "simple computer programs that follow basic rules." The Python code is maybe just a dozen lines or so.
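In the meantime, a bare-bones text-output version (not my cleaned-up one; the sizes, wraparound edges, and single-cell seed are arbitrary choices) really is only about a dozen lines:

```python
# Minimal 1D cellular automaton (Rule 110) printed as text.
RULE = 110             # the rule's bits encode the output for all 8 neighbourhoods
WIDTH, STEPS = 64, 32  # arbitrary sizes

cells = [0] * WIDTH
cells[-1] = 1          # seed: a single live cell at the right edge

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # Each new cell depends on the 3-cell neighbourhood above it (edges wrap).
    cells = [(RULE >> (cells[(i - 1) % WIDTH] * 4
                       + cells[i] * 2
                       + cells[(i + 1) % WIDTH])) & 1
             for i in range(WIDTH)]
```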

Joseph Rahi

I wonder if we might need to ditch the idea of DNA as being primary altogether. So it may be less like fundamental code telling the cell what to do, including how it switches parts of itself on and off, and more like the cell's "recipe book" - something the cell refers to or ignores as it sees fit. That seems more logical to me, if we consider that there must have been pre-DNA life that somehow evolved DNA.

I think the idea that DNA determines the behaviour/structure of the organism is about as absurd as the idea that it determines the behaviour/structure of our societies. DNA clearly influences the cellular level in a way that affects the organism level, and the organism clearly influences the societal level, but if genetic determinism were correct we'd expect it to fully determine the organism, and then fully determine the societal level.

My suspicion: it's self-organisation (even self-creation) all the way down.

Mark Slight

This is a great article and a great read! This speaks a lot to me.

I think AI in an important sense already is growing like that. Yes, that's perhaps a rather wacky and probably controversial perspective. For anyone interested, and for my own sake, I'll elaborate with a few loosely connected points. My view is an eclectic mix of Darwinism, Dawkins' memetics and the extended phenotype, Buddhist emptiness/form and conventional/ultimate truth concepts, as well as Dennett's intentional stance and Dennett's take on memetics. It tries to widen the perspective from an anthropocentric one to a more non-human, non-subjective perspective. It can perhaps be summarised as "The Selfish Pattern". I don't suggest any of this should replace or challenge a more conventional perspective, but to me, at least, these viewpoints seem relevant.

-DNA uses RNA and proteins to make copies of itself. But it is an equally valid perspective that proteins use DNA and RNA to replicate. The coalition of proteins "harnesses" mutation, where one of them changes, but the coalition as a whole lives on and benefits. The same can be said for higher-complexity structures, all the way up to personality traits (and society traits).

-We have domesticated dogs from wolves. But an equally valid perspective is that some wolves, unlike almost every other animal, were opportunists and found a way to exploit a new environment (humans and their surroundings) to increase their reproductive fitness. Thus, dogs are now much more dominant in "nature" (here, human society is viewed as part of nature) than other wolves and most other animals. But this is not all. Dogs have become more human-like in some ways, sacrificing some of the "original blueprint". But human traits, genetic and memetic, are in an important sense replicating in dogs to some degree (some genes may indeed have changed to be more human-like, but that doesn't really matter - what matters is the higher-level functionality of the genetics/memetics). Bred dogs also evolve by natural selection - only in a very special environment. Important to note is also that humans and dogs have common ancestors, and that the branches are recombining in a way. If dogs were more central to our society, perhaps even necessary, then it would be analogous to the symbiosis seen in lichen, or in the eukaryotic cell (archaea combining with bacteria, presumably having a common ancestor).

-Switching gears slightly: language and language-mediated intelligence are part of our extended phenotype, with their own memetic evolution and replication. Memetic fitness is not always aligned with our genetic "goals" - but the incredible adaptability that knowledge transfer via memetics allows makes our brain plasticity more than worth it - for most of us. Psychiatric illnesses like psychosis, vulnerability to brainwashing, etc., are serious drawbacks, though.

-Written language was a memetic revolution, allowing much more complex knowledge and intelligence not only to replicate between individuals and groups in new ways, but also to aggregate and recombine within individuals like never before (working through ideas with pen and paper, for example). Then the technology meme of printing combined with written-language memes, and both memes gained significantly in their spread. That's why almost every human on earth knows about written language and what printing is (and probably the human population would be smaller without those or equivalent memes).

-Domesticated dogs, wheat, and printed books are all examples of natural phenomena, as part of the human extended phenotype. Our phenotype is the environment, the soil, for things like words, books and dogs to evolve in.

-An AI such as an LLM is no exception. It is growing in the milieu of our extended phenotype. It is the result of the memes of language, computer science and neuroscience combining and finding life outside our bodies (just like books have done). It replicates not through cell division, but memetically through written and spoken "AI research" and the AI industry. The principles are replicated, and parameters can replicate instantly in this environment (just wait for curious humans to come around). Detrimental principles are punished, and bad weights are short-lived. In this way, AI is constantly growing, transforming and learning. (This presumes the perspective that if I am cloned and the original body is destroyed, then the new body is me and its brain has grown based on all my previous experiences.)

-Neuroscience and computer science provided just the right environment for some of the selfish structural patterns that brains exhibit to replicate, for the first time, outside brains. That is partly what AI is: brain patterns going viral in a new environment, breaking free from their hosts. Not totally unlike how a flu virus learns to infect a new species, transforming wildly in the process. Except this seems like a larger jump.

In short, I think AI is intelligence and knowledge replicating and transforming in a new environment.

Thanks again for a well written and stimulating post!

Jack Render

I'm curious about two things. I note that you consistently use the word "brain" rather than "mind." That leads to one question, which is to what extent do you think the internet can be considered a distinct brain or mind? And more generally, to what extent do you think that collective humanity could be considered that? We are blips on the screen, but the collective mind is distinctly growing.

Jem

I love all your articles Suzi, and am especially liking the focus on complexity theory and its potential role in the brain. I actually came across a paper today that you might be interested in: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.111.014410

Sabine Hossenfelder also recently covered this in a YouTube video. Basically, it looks at the relationship between the edge of chaos and consciousness. An interesting space to watch.
