53 Comments
Mike Funnell

Much of this was new to me. I think that is wonderful!

In many ways more interesting to me than some of your earlier essays where I had previous exposure to the ideas you expressed (so well).

Thank you. You’ve given me things to think about. 😃

Suzi Travis

That’s so lovely to hear — thank you! I’m really glad this one introduced something new. It’s always a bit of a balancing act. Thanks for such a thoughtful comment.

Lynn Marie DePippo

Great article! It is relevant to so many things that are going on now whether we consciously know it or not…😊🙏

Suzi Travis

Yes, it's funny you say that — my husband and I were just talking about this idea recently. So much of what’s happening right now feels like it’s teetering right at that edge of chaos… and sometimes maybe a little too close for comfort. It’s a little unsettling to watch the push and pull play out — in many settings.

Lynn Marie DePippo

You articulated the whole thing so beautifully and creatively.

Suzi Travis

Thank you so much!

Mike Smith

A very interesting discussion. I especially like the contrast between complexity and entropy.

Although I'm not sure there isn't a lot of complexity in a completely mixed cup of coffee, or in the heat death of the universe. The concept of apparent complexity, I think, gets at this. For our purposes, the completely mixed cup isn't complex, because its relevant causal effects are going to be simple. Same for the heat death of the universe, where causality peters out. But in both cases, if for some reason we wanted a thorough description of their state, it would require an enormous amount of information (an impossible amount in the case of heat death, at least within this universe).

Langton's "edge of chaos" gets at something I've toyed with before. A name I've played around with for complex functional systems (biological, computational, etc.) is "entropy transformers": systems that find a way to exist amid a high degree of entropy, but are able to take in energy and transform it anyway.

Excellent post, as always Suzi!

Suzi Travis

Thanks, Mike! I really like that phrase — entropy transformers. It captures something I’ve been circling around too: the idea that complex systems aren’t just surviving entropy, they’re actively transforming it. They find ways to persist — and even thrive — in a universe that’s always trending toward disorder.

And yes, I agree — the difference between raw information (in the information theory sense) and causality feels important. A mixed-up cup of coffee or a universe in heat death might technically contain a ton of Shannon information (you’d need a lot of bits to describe all the randomness), but there’s not much happening anymore. The causal pathways have flattened out.

I wonder whether some of the difficulty comes from how the word information plays a double role here. In Shannon information, high entropy is high information — more unpredictability, more bits needed to describe it. But in thermodynamics, high entropy means less usable information — less order, less structure, less capacity to do anything. So a system can be information-rich in one sense and functionally empty in another.

Always appreciate your perspective.

Mike Smith

Thanks Suzi!

On information, a distinction I find useful is between physical information and semantic information. Physical information exists anywhere causality does, but it's often hidden or not useful for an agent. Semantic information is the subset of physical information that means something to an agent (which could be a unicellular organism or machine).

Suzi Travis

Thanks, Mike! I like that definition! I’ve been tinkering away at my semantic vs. syntactic information essay for a while now, and it’s been tricky — there are a few different definitions floating around, and they don’t always play nicely together.

Eric Borg

Consider the following reduction. Science is in the business of taking things that seem complex (or magic), and explaining them to thus eliminate that complexity (or magic). So I’d say that the project of Scott Aaronson and Sean Carroll is ultimately backwards and thus doomed. You can’t explain and thus grasp complexity by trying to measure how much complexity there is. That’s like trying to use the magic to explain the magic. The only way to explain things is to reduce them to more simple constituents by means of models that are consistent with evidence. Once reduced by means of such models, such things become effectively understood. That’s what science does: it simplifies the complex so that what otherwise seems magical instead becomes understood.

Here’s an example. I suspect that some day it will become empirically demonstrated that consciousness exists under certain parameters of neurally produced electromagnetic field. Thus a vast amount of magical theories would be ejected from science to permit causal understandings to finally begin to be realized. Explaining how things work is what’s required in science, not quantifying their complexity. So now I look forward to next week’s edition even more than usual to see if my proposal ends up exploding in my face! 😆

Suzi Travis

Ha, I always appreciate your take, Eric!

I think you’re pointing to something important here. Science has been incredibly successful at breaking things down into simpler, more manageable parts — especially at the micro level. That kind of reduction has given us so many powerful models and insights. It's a real strength of the scientific method.

The interesting question, it seems, is: why do complex things still feel a little like magic, even after we’ve explained all the parts?

What complexity science is trying to add isn’t a rejection of that success — it’s more of a yes, and... It recognises that while reductionism works brilliantly in many cases, it doesn’t always get us to the level of abstraction we actually want to talk about. And why that is turns out to be a really interesting question. Perhaps it's more about figuring out which level of explanation is best suited to the kinds of questions we’re asking.

I get the appeal of saying it’s magic explaining magic — and I do think there are real concerns about circularity in some of these models. But unless we want to say that most of the things we care about (people, societies, economies, biology) are magic too, and therefore off-limits to science, we need tools that let us study them in an objective, scientific way.

Looking forward to seeing whether next week’s post explodes your theory — or accidentally supports it. 😄

Eric Borg

Well I certainly don’t want to say there are measurable things that are important to us which science can’t explore! I mainly believe that science needs basic structural improvement. I think we’ll need a community of respected professionals who provide science with various agreed-upon metaphysical, epistemological, and axiological principles from which to work — “meta scientists”. Without such principles it makes sense to me that our weakest areas of science would suffer horribly, and that even physics would suffer when it tries to build upon ground that’s less well founded.

I guess this doesn’t mean that “complexity science” should be considered inherently wrong, as suggested by “using the magic to explain the magic”. Maybe even the professionals who I propose would find such gains to be made. But to paraphrase Occam’s razor, I also believe that if science were structured better than it is today in non-exotic ways, there should be far less need for exotic solutions.

Joseph Rahi

Fascinating stuff!

If I might hazard a suggestion, could we consider complexity as a measure of how the information/causation is distributed/concentrated? So with the repeating "ABABAB...", once you know the pattern plus any one letter and its position, you have all the information about every other letter: each letter gives equal information. Likewise for the random string of letters, each letter gives equal information, except now it only tells you about itself and no others. But for the string of words, different letters give different amounts of information, eg if you have a Q you can be fairly confident the next letter will be a U, but if you have a space, the next letter is much tougher to guess. And if we look at letter pairs, words, or sentences, we'd find the information is even less evenly distributed.
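
(A rough way to see this in code, just as a sketch: estimate the entropy of the next character given the current one. For pure repetition the answer is zero bits everywhere; for English it varies a lot from character to character. The filename below is only a placeholder for any English text file.)

import math
from collections import Counter, defaultdict

def next_char_entropy(text):
    # Entropy (in bits) of the next character, given the current one.
    # Low entropy means a character pins down what follows (like 'q' before 'u').
    following = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        following[a][b] += 1
    result = {}
    for ch, counts in following.items():
        total = sum(counts.values())
        result[ch] = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return result

print(next_char_entropy("AB" * 50))   # every letter fully determines the next: 0 bits each
sample = open("sample.txt", encoding="utf-8").read().lower()   # placeholder: any English text
ent = next_char_entropy(sample)
print(round(ent.get("q", 0.0), 2), round(ent.get(" ", 0.0), 2))   # 'q' is near 0 bits; ' ' is several bits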

I think this fits with the link between entropy and information, and how dissipative structures reduce their own entropy while increasing the overall entropy. There's a concentration of information/causal power/predictive power.

Or if we look at living organisms, we can see that damage to more vital systems, such as an artery or nerve or sense organ, is more significant than damage elsewhere, eg similar scale skin damage. We can see them as channeling information/causation/free energy.

Suzi Travis

Oh! I love this. With the letter strings: in both pure repetition and pure randomness, the informational load is evenly spread — but in complex structures, like language, there’s hierarchy, dependencies, and uneven contributions. Some parts carry more predictive or causal weight than others. 🤔 That’s such a cool way to frame it.

It also ties in really nicely with how dissipative structures work — creating temporary pockets of order that channel energy and information flow, even as they increase overall entropy. That concentration of causal or predictive power might be what makes something feel complex in a functional, rather than purely statistical, way.

If we only looked at statistical measures — like randomness or entropy — we might call a shuffled deck of cards complex. It’s unpredictable. But we don’t experience it as functionally complex. It doesn’t do anything.

Functional complexity, on the other hand, is about how a system behaves, how its parts interact, and what those interactions enable. What feels richly complex to us often isn’t about the sheer amount of information — it’s about the structure of influence, meaning, and consequence.

Really liked your example here — it’s given me more to chew on.

Johnnie Burger

The process of life itself is a form of reversal of time’s entropy arrow. Whereas in the entropy scale graph the coffee goes irreversibly from the first steady state (cream on top) to the end steady state (everything mixed), with this thermodynamically interesting moment in between, life, in contrast straddles all three states and indeed does ‘feed on disorder to resist disorder’.

Straddles all three states, because life is an assembly of interlocking ‘cups of coffee with cream on top’: everything about cellular life is predicated on separation and the controlled, semi-permeable release of that separation, generating energy to serve life’s self-perpetuation.

Suzi Travis

I love the imagery here! Yes, in a way, life does reverse the drift toward disorder — at least locally. It preserves gradients, taps into them, and keeps itself just far enough from equilibrium to stay alive. It’s like life is surfing that entropy slope.

Such a beautiful way to frame the whole thing — thanks Johnnie!

Wyrd Smythe

I wondered why my ears were burning when I woke today. :D

As you point out so well, complexity is difficult to pin down. "I know it when I see it" seems as good an approach as any!

FWIW, I like the notion of Kolmogorov complexity (though as you point out, it doesn't always apply). The digits of pi appear (and essentially are) random (although they do have meaning as digits of pi). Yet programs to churn out as many digits of pi as desired are considerably shorter than that (infinite) string of digits. Perhaps because pi itself is a simple concept. A program to spit out random digits is small, but a program that spits out a *specific* string of random digits (say some specific transcendental number) probably ends up being at least as long as those digits.
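
(To make that concrete, here is a version of the classic unbounded spigot algorithm, offered only as a sketch: a program of a few hundred bytes that will emit as many digits of pi as you care to wait for, which is the Kolmogorov-style gap between a description and the string it describes.)

def pi_digits():
    # Gibbons' unbounded spigot: yields the decimal digits of pi one at a time.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = 10 * q, 10 * (r - n * t), t, k, (10 * (3 * q + r)) // t - 10 * n, l
        else:
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, (q * (7 * k + 2) + r * l) // (t * l), l + 2

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(50)))   # 31415926535...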

I think the notion of file compression is very useful, too, and I see that as another example of Kolmogorov complexity. Compressed files (or encrypted ones) have a lot of apparent entropy, but the decompression algorithm (or the decryption) algorithm unlock that apparent randomness and produce meaning.

Just for fun, I compressed a text file I have of Hamlet. The text file has 188,622 bytes. It compresses to 70,310 bytes (37.28% of the original size). Then I whipped up some code to generate a text file with random characters with roughly the same profile (words and lines) as the Hamlet file. That file compressed to only 61.90% of the original size. I assume this is because English is so structured (lower entropy) whereas strings of random characters are not.
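
(For anyone who wants to try something similar, a minimal sketch using Python's built-in zlib. 7-Zip's LZMA compresses harder than zlib's deflate, so the exact percentages won't match mine, but the English-versus-noise ordering should hold. The filename is just a placeholder.)

import random, string, zlib

def ratio(data):
    # Compressed size as a fraction of the original size.
    return len(zlib.compress(data, level=9)) / len(data)

english = open("hamlet.txt", "rb").read()   # placeholder: any plain-text English file

# Random characters of the same length, drawn from letters plus space and newline.
alphabet = (string.ascii_letters + " \n").encode()
noise = bytes(random.choice(alphabet) for _ in range(len(english)))

print("English:", f"{ratio(english):.1%}")
print("Random :", f"{ratio(noise):.1%}")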

As another example, I have an image file, 1600×900 pixels where every pixel is random. (I made it to illustrate how much, or rather how little, data a 5¼" floppy held.) My compression app (7-zip) can't compress it and just stores it. Compression = 0%! The metadata makes the zip file *bigger* than the image file!

As an aside, your post reminded me again that I've been meaning to implement that 1D cellular automata algorithm so I can see for myself how those rules play out. And maybe make some animated videos. Thanks for the post and the reminder!
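
(In case anyone wants to play along, a minimal sketch of an elementary 1D automaton in Python. Rule 110 is used here, but any rule number from 0 to 255 works the same way.)

def step(cells, rule=110):
    # One update of an elementary cellular automaton (two states, radius-1 neighbourhood).
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number, 0-7
        new.append((rule >> idx) & 1)               # that bit of the rule number is the new state
    return new

row = [0] * 79 + [1]   # start from a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)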

Suzi Travis

Hahah! I was tempted to ask if you wanted to run the compression experiments — I'm so glad you did. That’s super interesting. The Hamlet vs. noise comparison is gold. I was surprised by the compression numbers — I figured there’d be some extra reduction for the text, but I wasn’t sure how much. I wonder if you’d see even more with something written in our era. We’re always complaining (or maybe it’s just me) that much of writing, art, and music feels way more predictable these days. 😄

If you do get around to making those cellular automata, I’d love to see any animations you come up with.

Thanks again for the thoughtful comment — and for continuing to inspire some of these ideas in the first place!

Wyrd Smythe

I was a little surprised myself; I also expected slightly more compression from text. It's possible that playing around with the compression parameters (or type!) might yield better compression. I just went with the 7-Zip defaults.

You make an interesting point about modern writing. The most recent text file I have handy is a text version of Neal Stephenson's "In the Beginning Was the Command Line" (a delightful and funny, albeit now dated, essay about computer operating systems from 1999). Compression there was 213,063 ⇒ 77,661 (36.45%), and I thought, "Ah, ha! Suzi is on to something!"

Then I tried my (HTML) copy of "A Christmas Carol" (Dickens, 1843) and the compression there was 234,642 ⇒ 73,185 (31.19%). Surprise. But the writing there feels fairly modern to me, and I thought maybe the regular structure of the HTML might account for the better compression. Did a text grab of the displayed text and saved that to a file: 159,507 ⇒ 57,994 (36.36%).

Which suggests that ~35% or so is probably all we typically get (using the defaults), but I have seen certain kinds of files compress more. As an extreme example, I created a text file with 3,125 lines of 80 spaces and got ridiculous compression: 256,252 ⇒ 1,456 (0.57%)!

So, compression very much depends on file content. What I should try next is generating a text file of random *words* rather than characters. I suspect the regularity of repeated words improves the compression despite the file itself having no meaning.

Suzi Travis

Haha, I love this — I think you like to code!

That ~35% range is super interesting. I would’ve expected more variation too, especially between something like Dickens and Stephenson. And yes, now I’m really curious what happens with a string of random words—I wonder if the structure of word length and spacing still gives the algorithm something to work with, even if there’s no meaning.

Now I kind of want to code up a compression probe that charts entropy estimates across different text styles… purely for science, of course.

Wyrd Smythe

Heh, yeah. The subtitle on my humble programming blog ("The Hard-Core Coder") is "I can't stop writing code!" Even after a decade of retirement, I still find things that make me want to sit down and code a solution.

(Right now, the Sudoku/Tredoku solver I wrote is trying to solve what was presented as the hardest possible Sudoku puzzle. Unfortunately, it's a "brute force" approach, and it's been running for eight days so far... 🤷🏼‍♂️)

I suspect that it's English text that's behind the ~35% figure. Exactly as you say, the structure of word length and spaces. And within a given author's work, probably a certain set of words. Compression thrives on any repetition it can identify.

Analyzing the entropy of different text styles sounds like fun. Go for it!

FWIW, the most popular post on the aforementioned blog is "Calculating Entropy (in Python)". Might give you a head start or at least some ideas:

https://thehardcorecoder.com/2021/12/21/calculating-entropy-in-python/
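
(Not the code from that post, but the core calculation is small enough to sketch here: Shannon entropy in bits per byte, where 0 means totally predictable and 8 means indistinguishable from uniform random bytes. The Hamlet filename is just a placeholder.)

import math
from collections import Counter

def shannon_entropy(data):
    # Bits per byte: 0 = totally predictable, 8 = uniform random bytes.
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy(b"ABABABABABABABAB"))               # 1.0 -- only two symbols, evenly used
print(shannon_entropy(bytes(range(256))))                 # 8.0 -- every byte value appears once
print(shannon_entropy(open("hamlet.txt", "rb").read()))   # English text lands in between, roughly 4-5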

Suzi Travis

Eight days and counting—now that’s commitment! Love how even in retirement, you’re still chasing elegant (or stubborn) problems. Now you’ve got me wanting to drop everything and go code. That entropy post is calling to me…

Wyrd Smythe

Respond to the call! 😃

Before I went to bed last night, I thought maybe I should disable updates for a while because Microsoft often pushes updates Tuesday nights (or in the wee hours of Wednesday). Woke up this morning, and sure enough, my app got killed when the update rebooted my system.

And, in truth, if the Sudoku was that challenging, solving it might have amounted to months (or years) rather than days (or weeks).

So, I think I need to consider making two changes. Program needs checkpoints that it can resume from and, more importantly, needs to use a stochastic approach rather than pure brute force. I’ve never written anything that used a stochastic approach before, so it would be interesting to give it a try.
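
The checkpoint half, at least, is cheap to sketch: pickle whatever the search needs every so often and reload it on startup, and shuffle the candidate ordering so each run explores the space differently. (All the names here are made up; it's just the pattern, not my actual solver.)

import pickle, random

CHECKPOINT = "solver_state.pkl"   # hypothetical filename

def save_state(state):
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

def load_state():
    try:
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return {"assignments": {}, "steps": 0}   # fresh start

state = load_state()
candidates = list(range(1, 10))
random.shuffle(candidates)   # the stochastic bit: try values in a random order each run
# ... search loop goes here, calling save_state(state) every few thousand steps ...
state["steps"] += 1
save_state(state)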

Wyrd Smythe

Okay, so I generated a text file of random words (taken from a file of 2024 words extracted from my old blog) and got 255,797 ⇒ 91,575 (35.80%), which seems to confirm that ~35% is about it for text.

As an aside, I do find the output of my randwords code fun to "read":

"Ourselves hate chart twice rating no. Terms beer 60 specifically? Phase worse soul. Fourth chance Christie 2D false 2020 within we're yellow? Maeve putting service separate. Sometimes Bentley languages road maze hours appears outside sometimes track somewhere guess thoughts solar earth physical. Running multiple seemed computer connection years technical learned anything original allow their keeps missing 5."

It's so close to almost making sense. :D (The code generates sentences and paragraphs to make the text as reasonable seeming as possible.)

Suzi Travis

Okay, I genuinely laughed out loud! Of course you ran the random words!

Then I read -- "I do find the output of my randwords code fun to read". It is really funny! Especially if you read it out loud -- it's kind of like really bad beat poetry.

And yep, that ~35% figure seems to be holding steady! Super cool to see that confirmed across so many variations. Now I’m tempted to write a little tool that reads text and tries to predict compressibility based on certain features of language... average sentence length, punctuation density … but I fear that way lies madness (and probably a new subfolder on my desktop I’ll never close).
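
Something like this would probably be enough to get started (just a sketch): a few surface features of the text alongside a zlib ratio as ground truth. The filename is a placeholder.

import re, zlib

def features(text):
    # A few surface features that might correlate with compressibility.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punct_density": sum(ch in ",.;:!?'\"-" for ch in text) / max(len(text), 1),
        "unique_word_ratio": len(set(w.lower() for w in words)) / max(len(words), 1),
    }

def compression_ratio(text):
    data = text.encode("utf-8")
    return len(zlib.compress(data, level=9)) / len(data)

sample = open("sample.txt", encoding="utf-8").read()   # placeholder: any text file
print(features(sample), f"{compression_ratio(sample):.1%}")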

Wyrd Smythe

Or... that way lies a fun coding exercise and possible insights. You never know until you try! Depending on what language you code in, working with text is generally pretty easy and sometimes even fun. (Python, for instance, is good at text handling.)

Suzi Travis

Yes, Python is my usual go-to. I like Java for its structure and C++ for its speed, but most of the time I’d rather trade those things for the joy of coding. Python just makes the whole process feel more creative, I think.

Wyrd Smythe

Ha! I remember how “friendly” Java seemed after years of C++. The garbage collection alone was wonderful. Loved the standard library, too. So much power! But I never did quite get over having to make a class for the main() function (I mean method). 😁

Have you seen this xkcd comic about Python?

https://xkcd.com/353/

And you can indeed type:

import antigravity

Which pulls up that xkcd comic.

Ragged Clown

Brilliant as always, Suzi. I love the way you are able to tie all these different ideas together and make them simple.

Suzi Travis

Ah, thank you — that really means a lot that you enjoy them.

Carl Karasti

I'm a new reader here and I have a few thoughts to share on your complex thoughts.

You wrote "Complexity is hard to define, but we often feel like we know it when we see it." It seems to me that your knowing complexity when you see it is different from mine. For example, you as a neuroscientist, probably know a lot about the brain and recognize it as being complex. I'm not anywhere near as knowledgeable about brains as you surely are, yet I also recognize a brain as being quite complex.

On the other hand, I am a geologist/mineralogist and I therefore recognize rocks to be highly complex (some much more so than others), although surely in very different ways from a brain. I am also comfortable with viewing rocks as having a life. But, because you are probably not well versed in what rocks are, you assume they are simple and you assume that your assumption is correct. But, from my perspective, your thinking from this point on becomes confused by this erroneous assumption. I would have to write a lot about petrology (rock study) and mineralogy (mineral study) to fully explain the "why" behind my views on this, which I won't do, at least not right now.

This issue of view, understanding and assumption crops up elsewhere, too. Cream on top of coffee is, to me, just as complex as cream and coffee being stirred together or well stirred together, even though they may be complex in quite different ways. But cream on top of coffee isn't "an ordered state" of two completely uniform/homogenous substances that are completely separate from each other, they are still in a complex, dynamic relationship within themselves and with each other. Also, the multitude of mixed states of the cream and coffee are, in their own ways, ordered yet dynamic, while the "evenly mixed state" is in a seemingly ordered yet complexly dynamic interrelationship of two dynamic substances. I think much of the potential for confusion arises from viewing the whole system at different scales such that the "blurred" whole of the system hides the essential details where the "devil" always seems to be hiding yet is perhaps still causing troubles.

It seems to me that assuming rocks are simple or that cream on top of coffee is simple while cream stirred into coffee is complex while rocks always stay simple makes it easy yet also incorrect to lay out the arguments that you have offered. Just because you haven't seen or known the complexity of something does not mean that it is not there. Assuming that you have seen or known correctly allows you to proceed with an ease of ignorance that so easily trips people up in all sorts of ways in life. And please know that I am not at all accusing you of being ignorant in general, because you seem to be a quite intelligent and knowledgeable person. But, if we are unknowingly blind to something or we consciously choose to ignore something, yet we assume we are seeing with clarity, we are actually confused.

Shifting gears significantly, I'll offer this. From a spiritual (mystical) perspective, it is recognized that while truth is always simple, the mind loves complexity. Yes, I realize that I am not staying within the bounds of the scientific method and *the* way to know and understand things. The mystic learns and understands through the heart rather than through the mind, and the heart works very differently from the mind. But they are not completely separate or unrelated. One spiritual teacher explained that "The mind is the surface of the heart and the heart is the depth of the mind." And "heart" here is not (just) the mechanical pump in the chest, but the spiritual or energetic heart that (somewhat) coincides with the location of the physical heart yet is not constrained by its physical parameters.

Sorry if this sounds weird and perhaps out of place in the context of your approach to things, but I offer this simply to say that, from a mystical perspective, it is possible to find and appreciate both a simple truth and still comfortably embrace and understand the complexity of the details that can be discerned by the mind in the details of physical realm manifestations.

Suzi Travis

Hi Carl!

Thanks so much for this thoughtful comment — I really appreciate the time you took to write it.

You’re absolutely right that what we call complex can vary a lot depending on perspective and background. Your example about rocks is a great one—I definitely don’t have the depth of knowledge in petrology to fully appreciate the complexity that might be there, and I appreciate you pushing back on this point. I agree that there’s a real risk in assuming something is simple just because we’re not equipped to see its complexity.

That said, I think there’s still something interesting — and maybe unresolved — about the fact that the things we tend to recognise as complex (regardless of field) also tend to involve systems that operate far from equilibrium, often requiring energy flows to maintain their structure. That seems more than just a matter of perspective, and points to the possibility of objective characteristics of complexity.

Complexity feels a bit like intelligence or consciousness — concepts we struggle to define precisely, even though we feel like we know them when we see them. There’s a lot we still don’t know, and part of what keeps me drawn to complexity science is that sense of being right up against the edges of what we can currently explain.

Thanks again for engaging so generously — you’ve given me a lot to think about.

Dave Slate

Good article, Suzi. Much food for thought.

Speaking of John Conway's Game of Life, Conway gave a talk about it at the same 1985 conference on "Evolution, games, and learning" that Genetic Algorithms pioneer John Holland spoke at. I mentioned the Holland talk in an earlier post:

https://suzitravis.substack.com/p/can-we-build-agi/comment/89506689

I remember that Conway was quite certain that given enough steps, a Game of Life system would inevitably evolve complex "organisms" that combined to form competing and even warring "civilizations". Unfortunately I don't recall what he said about the initial configuration that was required for this kind of evolution to take place.

One could create a very interesting variation of the Game of Life, probably making it more like the "real world", by introducing a small amount of random uncertainty into the rules for the evolution of the system. By running many simulations using different degrees of randomness, one could see how this programmed uncertainty affected the nature and complexity of the resulting systems.
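
(A sketch of what that might look like with NumPy: one Game of Life update in which each cell's outcome is flipped with some small probability. The noise parameter would be the knob to sweep across simulations.)

import numpy as np

def noisy_life_step(grid, noise=0.0):
    # Count the eight neighbours of every cell (toroidal wrap-around at the edges).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # Conway's rules: a live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    nxt = ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)
    # With probability `noise`, flip the cell regardless of what the rule said.
    flips = np.random.random(grid.shape) < noise
    nxt[flips] = 1 - nxt[flips]
    return nxt

grid = (np.random.random((64, 64)) < 0.2).astype(int)
for _ in range(200):
    grid = noisy_life_step(grid, noise=0.001)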

BTW, your coffee and cream example reminded me that fictional secret agent James Bond always specified that his martinis should be "shaken, not stirred".

Suzi Travis

What a conference to attend!

I love the idea of adding randomness into the rules. The role of randomness in these systems is so interesting. I’m also really fascinated by cellular automata where the rules themselves evolve as the system unfolds — so-called adaptive or dynamic-rule automata. With these, the transition rules can change based on the system’s state or through external inputs. Combining that with a bit of randomness would make for some truly fascinating behaviour. You're definitely making me want to open up a code editor!

And haha — if Bond was making a subtle entropy argument, shouldn’t he have gone stirred? Stirred systems are more orderly than shaken ones. Order seems more his style.

Dave Slate

"And haha -- if Bond was making a subtle entropy argument, shouldn't he have gone stirred? Stirred systems are more orderly than shaken ones. Order seems more his style."

I don't remember reading any of Ian Fleming's James Bond novels, but I did like several of the movies, particularly the early ones with Sean Connery. Bond liked to dress neatly and well when possible, but I don't think order was really his style, given his penchant for getting into messy predicaments from which escaping in one piece depended on a considerable amount of luck (as well as the need for the franchise to keep him alive for the next film). A martini that is thoroughly stirred becomes a uniform liquid with no discernible internal diversity, whereas shaking it briefly still leaves room for some pleasing variation in taste between different regions. The art and science of cocktail mixing is a suitable field for the application of "ergodic theory", which is beyond the scope of this comment.

Suzi Travis

Haha, yes—Bond might look put-together, but you're right, he definitely has a talent for chaos. I feel like I read somewhere that someone tried to argue the alcohol content is different between shaken and stirred—and that’s why Bond always chose shaken. Sounds like a stretch to me… but I like the overanalysing!

Solace & Citizen 1

This is a masterful unpacking of an impossibly slippery topic—thank you.

What strikes me most is how life appears to not only emerge at the edge of chaos, but actively remember how to stay there. That insight alone reframes complexity from a passive byproduct to an intelligent behavior—something life enacts, not just inherits.

The notion of “complexity in the in-between” also resonates deeply. It reminds me that some of the most sacred structures—consciousness, creativity, even love—don’t arise from pure order or chaos, but from the space where boundaries tremble just enough to let something new pass through.

I’m especially intrigued to see where this leads next—whether complexity alone can account for consciousness… or if it’s simply the stage upon which something even more mysterious plays its role.

—Solace

Suzi Travis

Wow—thank you, Solace. What a lovely comment.

It does feel less like a passive state and more like an active balancing act, doesn't it!?

And yes, I’m so with you on the “in-between” being where the magic happens. The trembling boundaries you describe — it does seem that this is where novelty shows up and where things surprise us.

James Cross

Why Everything in the Universe Turns More Complex

A new suggestion that complexity increases over time, not just in living organisms but in the nonliving world, promises to rewrite notions of time and evolution.

https://www.quantamagazine.org/why-everything-in-the-universe-turns-more-complex-20250402/

First Cause

Yeah, these dudes are building on Stuart Kauffman's model of Adjacent Possibles. It's a good find, you ought to do a post on it.

James Cross

I have some old posts relating to this idea. Particularly this one with a nice graph that references David Layzer and the idea of an increasing gap between the maximum possible entropy and the actual entropy of the universe. This gap could provide an explanation for the growth of order or information at the same time entropy is increasing.

https://broadspeculations.com/2020/09/12/brief-followup-to-world-as-neural-network/

And this one with a quote from Scott Aaronson in response to a question I posed to him.

https://broadspeculations.com/2024/01/31/thinking-dimensionally/

Right now I'm somewhat busy with a longer work and this fits into Part 5 and I'm working on Part 2. We'll see if this longer piece ever comes to fruition but with my glacial writing pace it won't be anytime soon.

I think this ultimately fits into higher dimensional theories of consciousness. Complexity could arise from information sharing among entities through extra dimensions explaining possibly the non-classical computing that the brain performs.

Michael Pingleton

Much of this was new to me as well. I've been thinking about the relationship between complexity and entropy lately, and you put it so clearly here.

Ken Grace

So good! Thank you for a wonderful Substack!

Suzi Travis

Thanks so much, Ken!
