Humans like to build things. We’re pretty good at it, too.
Give us some materials, a bit of time and effort, and the right kind of information — and we can build just about anything.
It’s that last part — the right kind of information — that might be doing more than we often give it credit for.
If we want to build a house, we find what we need in a blueprint. If we’re building a bookcase, we follow the IKEA manual.
With step-by-step instructions and a little patience, we (eventually) end up with a functional-ish place to store all our unread novels.
We build shelves, sheds, bookcases, and bridges using this kind of information — the kind we find in blueprints and manuals.
But what if we want to build a brain?
Not a bookcase — but something that thinks, learns, maybe even understands. Whether we’re building a biological brain or an artificial one, is this the kind of information we need — a blueprint or a manual?
When we think about how life is built, it can seem like a similar process. We need good materials, some time and effort, and the right kind of information.
So what’s the right kind of information for life?1 DNA, right? DNA is often described as life’s instruction manual. And if that’s true, then maybe building a brain isn’t so different from building a building.
But here’s the thing: we don’t usually say that we build life. We say — life grows. There seems to be a line — a difference between what we build and what we grow.
Is that just semantics? Are we just using different words for the animate and inanimate? Or are there real differences between building something and growing something?
And now, us humans are trying to build artificial brains — machines that can think, reason, learn, and maybe even understand the world the way we do.
And it turns out, we’re pretty good at that, too. We’ve built algorithms that can solve problems, mimic conversation, and beat us at our own games. We talk about it like an engineering challenge — with good materials, a bunch of time and effort, and the right kind of information — we can build an artificial brain.
And whatever line there might have been between what we build and what grows — it’s starting to blur.
So we might want to know — what is the difference between building a brain and growing one? Is there a difference? And if there is, does it matter?
This week, we’re asking three questions:
What’s the difference between building and growing?
How does complexity grow?
What does it mean to grow a brain?
Q1: What’s the difference between building and growing?
Let’s start by looking at something much simpler than a brain.
Not a living system. Not an artificial intelligence. Just a tiny computer program made of black and white squares.
If you’ve been around this newsletter for a while, you’ve probably heard me mention cellular automata before — like Rule 110. If you’re new here and want a quick overview, I explain them in more detail in the essay Having All The Information Isn’t The Same As Knowing Everything.

Cellular automata are simple computer programs that follow basic rules. They start with a straightforward set of instructions that determine whether each cell in a grid turns on (black), turns off (white), or stays the same — based on the state of its neighbours. That’s it. The set of instructions is incredibly simple.
But sometimes, those instructions — when left to run long enough — can produce astonishingly complex patterns.
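Just how simple? Here’s a minimal sketch of Rule 110 in Python. The names and the wrap-around edges are my own choices, purely for illustration:

```python
# Rule 110: each new cell depends only on the three cells above it
# (left neighbour, itself, right neighbour). The whole rule is this table.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """Apply Rule 110 to one row (edges wrap around)."""
    n = len(row)
    return [RULE_110[(row[i - 1], row[i], row[(i + 1) % n])]
            for i in range(n)]

# Start with a single black cell and let the pattern unfold.
row = [0] * 40
row[-1] = 1
for _ in range(20):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```

That’s the entire program. Nothing in those few lines hints at the structures that appear when you let it run.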
The rules for Rule 110 contain some kind of information — they tell the system how to behave. But what kind of information is it?
Is it more like a blueprint? Or more like DNA? Do we build the outcome, or do we grow it?
It’s a little blueprint-like. The rules are fixed. The same starting point will always produce the same result.
But there’s a key difference.
A blueprint tells you what the final structure will be before you begin.
Rule 110 doesn’t.
With a blueprint we can read its instructions and know what will be built.
The only way to find out what a cellular automaton, like Rule 110, will produce is to run the program.2
In that way, the rules for Rule 110 are more like DNA: a short sequence of code that doesn’t spell out the final form, but guides a process that unfolds over time.
So what’s the difference between building and growing?
It might come down to this:
Building is plan-driven. You know what you’re aiming for.
Growing is process-driven. You have to let it unfold.
Q2: How does complexity grow?
The complexity that Rule 110 creates can’t be found in the set of instructions. For things that grow, there can be a huge gap between the complexity of the instructions and the complexity of the result those instructions produce.
A few weeks ago, in the essay Why is Complexity So Complex?, I mentioned algorithmic complexity — a way of quantifying complexity by measuring how much information it takes to describe something.3
The set of instructions for Rule 110 has very low algorithmic complexity — you can write those instructions in a few lines of code.
But let the instructions run for a while… and the complexity explodes.
So if the set of instructions doesn’t contain much information, and if we can start from something simple and still get something astonishingly complex —
Where does the rest of the complexity come from?
We don’t get complexity simply because the set of instructions exists. We need to run it. And to run it, two things are essential.
Time — because the pattern evolves one step at a time. Complexity doesn’t appear all at once; it unfolds.
And energy — because each computational step requires it. Without energy, the system sits still.
So the complexity in the outcome of Rule 110 — or the Game of Life — doesn’t come from the code alone. It comes from code + time + energy.
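One rough way to watch that happen is to use compressed size as a stand-in for complexity. This sketch reuses the step function from the Rule 110 code above, and compressed length is only a crude proxy for algorithmic complexity, not the real thing:

```python
import zlib

# Reuses step() from the Rule 110 sketch above. Compressed size is a
# crude proxy for complexity: more structure to describe, more bytes.
row = [0] * 200
row[-1] = 1

history = []
for t in range(1, 401):
    history.append(row)
    row = step(row)
    if t % 100 == 0:
        data = bytes(cell for r in history for cell in r)
        print(f"after {t} steps: {len(zlib.compress(data))} compressed bytes")
```

Each pass through the loop is a step of time, and each step costs computation. The code never changes; the bytes pile up anyway.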
This is how a cellular automaton grows. But is it how a brain grows?
Q3: What does it mean to grow a brain?
We’ve seen that a system like Rule 110 grows complexity through just three things:
Code + Time + Energy.
So what about a brain?
A brain also grows through code (we call that code DNA) and time (we call that development) and energy (in the form of nutrients).
So far, the story sounds similar.
But there’s a key difference. A cellular automaton like Rule 110 always produces the same outcome. Start it from the same initial state and you will always get the same result. Every time.
Life doesn’t work that way.
If life were just code + time + energy, then every organism with the same DNA would be identical. Every identical twin would have the same brain, same personality, same preferences. But that’s not what we see. Identical twins aren’t really identical.
So something else must be going on.
Let’s focus on the code for life — DNA.
All currently known living things have DNA.4
We might imagine that DNA is a kind of program — similar to the rule for a cellular automaton. It’s code. It’s fixed. And it tells cells what to do.
But it turns out DNA is far more dynamic than people usually assume.
It’s true that DNA — the A, T, C, and G bases in your cells — is largely fixed from the moment of conception. It’s also true that most of your body’s cells carry the same core DNA, copied from the original zygote. This stability is essential — it allows cells to reliably build proteins and keep the body running.
But DNA doesn’t behave like the static code we find in cellular automata.
Let’s imagine the cells in the body as the cells in a cellular automaton.
In a cellular automaton, the same rule is applied to every cell, equally — regardless of where the cell is located or when the rule is used.
But in biology, that’s not what happens.
Each cell applies a different rule, depending on where it is, what stage of development it’s in, and what its neighbours are doing. The code doesn’t run uniformly — it adapts, locally and constantly. The rules that are applied are constantly changing.
Imagine a newly divided cell — let’s anthropomorphise it and call her Cellder. Where Cellder ends up in the body will shape her future. If she lands near the developing gut, she’ll encounter different chemical signals than if she ends up near the skin. The cells around her release proteins that influence which genes Cellder turns on or off. (Genes are small stretches of DNA that tell the cell how to make specific proteins — you can think of them as biological instructions.)
Basically, these signals tell her which instructions to follow — which genes to turn on or off. In response, Cellder begins producing her own proteins — which then influence how her neighbours behave (and which rules they apply in turn).
It’s a feedback loop.
This turning on and off of genes is known as gene regulation — and it’s shaped by everything from developmental timing to diet, stress, sleep, and exposure to toxins.5
Cells regulate genes based on local signals, and those signals ripple through the body, triggering new patterns of gene expression as the system grows and reorganises itself.
And it doesn’t stop once we’re fully grown. Gene regulation continues throughout life.
So, like a cellular automaton, DNA doesn’t give us a finished blueprint. It gives us a starting state — and a set of rules for how things interact.
But unlike cellular automata, the way the rules are applied changes.
Which genes get used — which parts of the code are read, and when — depends entirely on a cell’s environment, its location, its neighbours, and its history.
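To make the contrast concrete, here’s a deliberately cartoonish sketch. Nothing in it resembles real gene regulation, and every name is invented. The only point is that, unlike Rule 110, each cell’s rule depends on a signal its neighbours produce, and its response feeds back into that signal:

```python
import random

N = 20  # a toy row of "cells"

def local_rule(state, signal):
    # The rule itself depends on context: a strong neighbour signal
    # flips the cell's "gene", a weak one leaves it alone.
    return 1 - state if signal > 0.5 else state

cells = [random.randint(0, 1) for _ in range(N)]
for _ in range(10):
    # Each cell senses the average state of its two neighbours...
    signals = [(cells[i - 1] + cells[(i + 1) % N]) / 2 for i in range(N)]
    # ...and that signal selects what its own rule does: a feedback loop.
    cells = [local_rule(cells[i], signals[i]) for i in range(N)]
    print("".join(str(c) for c in cells))
```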
DNA is code — but it’s not code in the way most software runs code. It’s not a fixed program. It doesn’t execute sequentially. And it adapts and responds while it grows.
Which makes it extraordinarily difficult to reverse-engineer.
This is how we grow a brain. Not by following a rigid blueprint, but by running a responsive process: one that unfolds over time, changes as it goes, adapts to its context, and learns from the inside out.
So why do we grow brains this way? Why grow them through adaptation and feedback, instead of building them by following a fixed set of instructions?
Because the world is unpredictable. And life has to survive in that unpredictable world.
Building a biological brain — a real, thinking, learning brain — means building one that can deal with that unpredictability.
A rigid, unchanging set of instructions wouldn’t survive long in an environment full of variation, danger, and surprise. Life doesn’t happen in a vacuum. It unfolds in a shifting world of varying light, temperature, nutrients, stress, and noise. A one-size-fits-all solution would break the moment conditions changed.
Instead of hardcoding every outcome — or relying on a fixed set of rules like a cellular automaton — evolution found a better strategy: flexibility.
By giving cells the ability to turn genes on and off — to sense, respond, and adapt — organisms became more resilient. They can not only grow in different environments, they can adapt their behaviour based on experience. A body that can regulate its own genes is more than just a machine following a script — or an algorithm.
It’s a system that changes itself.
Is this what differentiates life from cellular automata?
Life doesn’t just grow — it adapts and changes itself. That might be the real dividing line — not between code and no-code, but between systems that are changed and systems that change themselves.
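If it helps to see that line drawn in code, here’s a toy version of a system that changes itself: a cellular automaton whose rule table is part of its own state, so the pattern it produces can rewrite the rules producing it. The mutation trigger is arbitrary, chosen only for illustration:

```python
import random
from itertools import product

N = 30
# The rule table is part of the state, so the system can rewrite it.
rule = {p: random.randint(0, 1) for p in product((0, 1), repeat=3)}
row = [random.randint(0, 1) for _ in range(N)]

for _ in range(15):
    row = [rule[(row[i - 1], row[i], row[(i + 1) % N])] for i in range(N)]
    if sum(row) > 0.7 * N:                 # the output itself...
        entry = random.choice(list(rule))  # ...mutates the rule table
        rule[entry] = 1 - rule[entry]
    print("".join("#" if cell else "." for cell in row))
```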
As we build more advanced AI — systems that learn, adapt, and reshape themselves in response to the world — that distinction seems to be starting to blur.
If a machine starts to change itself the way life does — adapting, responding, reorganising to survive and thrive — is it still something we built?
That’s an interesting question.
But life adapts because it has to — because survival depends on it. The drive to grow, respond, and reorganise comes from within. It’s not an optional upgrade; it’s a condition for continued existence — not just for the individual, but for the species.
Only a few years ago, we would have claimed that AI doesn’t grow like this. And for the most part that is still true. Most AI still changes because we tell it to — when we supply new data, tweak a loss function, or modify its architecture. The motivation, timing, and direction of change still come largely from the outside.
At least for now.
But some researchers are building systems that might change that — models that can learn continuously, reshape themselves, even set their own goals. And in some experimental cases, AI does begin to unfold in surprising, self-organising ways.
But there’s still a whole lot of scaffolding involved. These systems don’t grow in the wild — they grow in the lab. And even when these systems adapt, it’s not quite what living systems do.6
In biology, growth and learning aren’t separate. A brain doesn’t get built and then start adapting. It adapts as it builds. The system shapes itself while figuring out how to function.
That’s because the instructions for growth — DNA — are embedded inside the very cells doing the growing. There’s no separation between the thing being changed and the thing doing the changing. The code is in the system. And the system rewrites itself.
And these changes aren’t completely random. They’re shaped by interaction — with the environment, the body, and the system’s own history. The system doesn’t just change — it changes in ways that make sense for its context and its goals.
So maybe the more interesting questions are:
Would a neural network ever need to grow like that?
Could it grow like that?
I’ll leave those ones with you. I’d love to know what you think.
In this essay, when I say life, I’m referring to biological life — organisms that grow, develop, and reproduce using DNA (or RNA). This includes all known biological life — from bacteria to humans — and excludes hypothetical non-DNA-based systems, including current AI.
Some cellular automata (e.g., Rule 90) are predictable and analysable without simulation. Rule 110 is Turing complete, so in general there is no shortcut to predicting its behaviour: you have to run it.
Algorithmic complexity is often formalised as Kolmogorov complexity. It’s not just “how long the instructions are,” but the shortest possible program that can output the object.
All known cellular life uses DNA as genetic material. Some viruses use RNA instead, and in retroviruses, RNA is reverse-transcribed into DNA.
Gene regulation operates through mechanisms like transcription factors and epigenetic marks, which affect how and when genes are expressed.
Some recent AI models incorporate mechanisms for intrinsic motivation, curiosity-driven learning, or self-supervised adaptation. These systems attempt to optimise behaviour based on internally generated signals, such as surprise, prediction error, or novelty.
For instance, reinforcement learning agents may use intrinsic rewards to explore unfamiliar environments without explicit goals. However, the underlying architecture and training regime — including what counts as novelty or error — are still crafted by human designers. So, current AI exhibits pseudo-intrinsic adaptation, where learning mechanisms feel internal but remain scaffolded by external choices.
E.g., Pathak, D., Agrawal, P., Efros, A. A., & Darrell, T. (2017). Curiosity-Driven Exploration by Self-Supervised Prediction. arXiv. https://arxiv.org/abs/1705.05363
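In miniature, the prediction-error idea looks something like the sketch below. This is not the architecture from the paper; the linear “forward model” and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1  # toy linear forward model

def intrinsic_reward(state, next_state):
    # Surprise as reward: the worse the model predicted what came
    # next, the larger the internally generated reward signal.
    predicted = W @ state
    return float(np.sum((predicted - next_state) ** 2))

def update_model(state, next_state, lr=0.01):
    # The model learns to shrink its own error, so familiar
    # transitions stop paying off and novelty drives exploration.
    global W
    W -= lr * 2 * np.outer(W @ state - next_state, state)
```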
It seems like as technology continues to improve, the number of cases where the system self-constructs (grows) will increase. Eventually the boundaries we draw between life and machine will become blurred.
Although the distinction between evolved and engineered systems may be more durable. Living systems are survival machines with their own agendas. We build machines for particular purposes. They are extensions of our interests. I'm not sure how much market there will be for machines that self actualize.
Of course, looking much further down the road, the boundary between how we reproduce and what we design might itself become blurred.
Interesting topic Suzi!
Wow, another great post, and so much meat on this bone I hardly know where to start.
For one the difference between building and growing. Perhaps one difference lies in who or what does the building. In contrast to things we build, things that grow do it themselves (sometimes with help from a gardener or parent). A tree builds itself; someone else builds a doghouse from the wood.
The thing about both blueprints and DNA (and rule 110) is that without the larger process for implementing the information, the information does nothing. FWIW, my computer science background calls blueprints, DNA, and the automata rules "data" rather than "code". The latter being the algorithms that operate on the data.
An important aspect of this being that data requires a process to implement it. A builder (and materials) to implement a blueprint, an algorithm to implement automata rules, the biologic engines that implement DNA (or RNA).
> "A blueprint tells you what the final structure will be before you begin. Rule 110 doesn’t."
I'm going to push back on that a little. As you say, rule 110 is deterministic and always has the same outcome. So, in a sense, rule 110 (plus the implementation process) does specify the final structure. DNA might be a more interesting case because, as you point out, environmental factors make its "blueprint" nondeterministic.
As an aside, there are blueprints and plans that are (at least somewhat) context dependent. For instance, Telco wiring diagrams with multiple options controlling how the switch behaves (which makes those diagrams hard to read sometimes).
> "Building is plan-driven. You know what you’re aiming for. Growing is process-driven. You have to let it unfold."
Oh, I like that. Good way to put it.
A couple of SF stories I've read feature "blank slate" Ai that starts off nearly useless and learns to function over time just as a human does. In one, the humanoid robots had to go through a "teenage" phase that was just as annoying and obnoxious as human teens can be. Nature evolved brains to navigate an ever-changing world, and it seems entirely reasonable that Ai would need to do the same.
As one more aside, I grinned about following an IKEA manual to "build" a bookcase. I'd call that "assembling" a bookcase. I've built bookcases and doghouses from my own designs. Which doesn't at all detract from your points but made me smile.
(And now I really want to code up a 1D automata to see how big the code is.)