Can We Build AGI, or Does it Need to Build Itself?
The problem with separating development from function
If you are new here, welcome!
I write on topics in neuroscience, consciousness, and AI.
Usually, I write in-depth essays at the intersection of these topics, but I thought I’d start the year with something a little different — a series of shorter essays.
Six questions have been rattling around in my head. This mini-series of mini-essays will introduce those questions. Throughout 2025, I’ll return to these questions and explore them in more depth.
This is Question 4
When we build something — let’s say a car — there is a time when we build the car and a time when we use the car. This is how engineering traditionally works: first, develop something, then use it. We see this process of building (development) and then use (function) everywhere in the human-made world.
It’s even the process used to build our current artificial intelligence systems like ChatGPT. These models are first built (trained) and then deployed for functional use, with a clear separation between the development and use phases. Like a car rolling off the production line, once training is complete, the model is considered built and ready for use.
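To make that contrast concrete, here is a minimal sketch of the "build, then use" pattern, assuming a PyTorch-style workflow. The network (TinyNet) and the toy data are hypothetical illustrations, not any real production pipeline; the point is simply that every weight update happens in the first phase, and the deployed model's weights never change.

```python
# Minimal sketch of the "build, then use" pattern (hypothetical toy example).
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        return self.layers(x)

# --- Development phase: training changes the weights ---
model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x_train = torch.randn(64, 4)            # toy inputs
y_train = torch.randint(0, 2, (64,))    # toy labels
for _ in range(100):
    optimizer.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    optimizer.step()

# --- Function phase: the weights are frozen; the model only infers ---
model.eval()
with torch.no_grad():                   # no gradients, no further weight updates
    prediction = model(torch.randn(1, 4)).argmax(dim=1)
```

Production systems add many layers on top of this, but the two-phase shape is the same: training ends, the weights freeze, and inference begins.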
For decades, neuroscientists thought this was how the brain worked, too. We viewed brain development and brain function as separate processes. Development was seen as the period when neurons grow, form connections, and establish basic circuits — essentially building the brain’s hardware. What the brain does (its functions), on the other hand, was considered to be what happens after development is complete. You’ve probably heard the idea that what the brain does is like software that runs on the hardware of the brain.
This clean separation between brain development and brain function has been highly influential in how we think biological neural networks work.
But this view is not accurate. In biology, the distinction between development and function is not so tidy.
The same processes that help build our brains during childhood — things like strengthening some connections while pruning away others — keep working throughout our lives. Even as adults, our brains are constantly being remodelled. New connections form while others are eliminated, all based on our experiences.
This blurry line between development and function has got me thinking about the differences between biological intelligence and artificial intelligence.
When we learn something new — like riding a bike or solving math problems — our brains physically change. New connections form between neurons, while others are eliminated or weakened. These physical changes are how our intelligence develops, so the development of intelligence and the changes in our brains are not easily separated. This continuous physical reorganisation allows our brains to flexibly learn and adapt to new situations while maintaining existing capabilities — what we call general intelligence.
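As a loose illustration of development and function happening in the same step, here is a toy Python sketch in which every "use" of a small network also remodels it: a Hebbian-style rule strengthens connections between co-active units and prunes the weakest ones. The numbers and the rule itself are illustrative assumptions, not a model of real synaptic plasticity.

```python
# Toy sketch: responding to an input (function) and rewiring (development)
# happen in the same step. Purely illustrative, not a biological model.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(8, 8))   # toy connection matrix
learning_rate, prune_threshold = 0.05, 0.02

def experience(pattern):
    global weights
    activity = np.tanh(weights @ pattern)                    # function: respond
    weights += learning_rate * np.outer(activity, pattern)   # development: strengthen
    weights[np.abs(weights) < prune_threshold] = 0.0         # development: prune
    return activity

for _ in range(50):
    experience(rng.normal(size=8))   # every experience changes the wiring
```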
So what about artificial general intelligence (AGI)?
Current artificial neural networks, like ChatGPT, excel at specific tasks like image recognition or language processing, but they don’t exhibit the kind of general intelligence we see in biological brains — the ability to flexibly learn and adapt to new situations while maintaining previous capabilities. So, we don’t yet consider our LLMs to have AGI.
Today’s AI systems are built like cars — first, we build them, then we use them. But what if that’s not how general intelligence is built? What if general intelligence requires constant development and change?
Let’s say our goal was to build AGI (which might not be the best idea, but this is just a thought experiment). Would we need to rethink how we build that AI system? Would we need to move away from building static architectures and toward networks that develop, continually growing and self-organising?
Would artificial neural networks achieve higher forms of intelligence if they underwent a developmental growth process similar to biological brains?
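To picture what that might look like, here is a speculative sketch of a network that keeps developing while it is in use: it adjusts its existing weights on every example and grows a new hidden unit whenever recent performance stalls, loosely in the spirit of constructive methods such as cascade-correlation. Everything here (the toy task, the thresholds, the growth rule) is an assumption for illustration, not a proposal for how AGI should be built.

```python
# Speculative sketch: a network that learns and grows while in use.
import numpy as np

rng = np.random.default_rng(1)
hidden = [rng.normal(0, 0.1, size=4)]   # input weights of each hidden unit
readout = [float(rng.normal(0, 0.1))]   # output weight of each hidden unit
lr = 0.05

def predict(x):
    return sum(w_out * np.tanh(w_in @ x) for w_in, w_out in zip(hidden, readout))

errors = []
for step in range(2000):
    x = rng.normal(size=4)
    target = np.sin(x.sum())            # toy regression task
    err = target - predict(x)
    errors.append(err ** 2)

    # Function and development together: gradient-style updates to existing units...
    for i in range(len(hidden)):
        h = np.tanh(hidden[i] @ x)
        hidden[i] = hidden[i] + lr * err * readout[i] * (1 - h ** 2) * x
        readout[i] += lr * err * h

    # ...and new structure grows whenever recent error stays high.
    if step % 200 == 199 and np.mean(errors[-200:]) > 0.1:
        hidden.append(rng.normal(0, 0.1, size=4))
        readout.append(0.0)
```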
So, the fourth question on my list of questions for 2025 is:
Can general intelligence be built, or does it need to build itself?
This question raises many others: What do we mean by intelligence? Are there different types of intelligence, each with unique characteristics and developmental needs? When we talk about general intelligence, are we referring to a single thing, or could there be many forms of general intelligence we’ve yet to imagine?
We often think of AGI as simply human intelligence — but faster. But human intelligence might be just one of many possible forms. Perhaps we will need to broaden our understanding of what intelligence could be.
In a lot of contemporary software development, the agile methodology now mixes use and development. It's a tacit recognition that our ability to develop software using the old waterfall method (documenting requirements, followed by specs, coding, testing, production, etc.) was always far more limited than anyone wanted to acknowledge. Now a minimum viable product is developed and put in front of the users, who provide feedback that is incorporated into further development. At some point it's good enough for real use, but the feedback-development cycle continues (usually for some contractually specified number of cycles or time period).
It's interesting that AI training hews closer to the old waterfall method. There's a lot to be said for being more "organic" about it, although I suspect there are technical challenges still to resolve to make that possible; for example, allowing an AI to update during use could lead to unpredictable results. The systems we have remain a lot more limited than the hype implies. Not that they won't eventually get there.
I've long found our reference to ourselves as a "general intelligence" somewhat dubious. It seems like we're a survival intelligence. Most of the fears about developing AI are that it will somehow turn into a survival intelligence of its own. But a survival intelligence seems like a very specific type of architecture. There are some dangers associated with tool intelligence, but the idea that it will accidentally become a survival intelligence seems similar to worrying that a word processor will accidentally become a travel reservation system.
Excellent post, Suzi!
I proposed a Naked and Afraid test for AI once.
https://broadspeculations.com/2023/12/21/naked-and-afraid-ai-test/
I'm guessing the standard AGI tests are highly oriented to a subset of human capabilities. At any rate, I thought I read that AI is already performing at 80-90% of AGI level. Going past AGI may require AI to feed its own data back to itself, but that presents the challenge of subtle biases and idiosyncrasies becoming reinforced. The end result could be dangerous, useless, or both.