67 Comments
Comment deleted (Mar 20)
Suzi Travis:

Thank you! It's fascinating, isn't it!? It's amazing how much of cognition, survival, and even happiness boils down to how well we process information -- whatever that means ;)

Mike Smith:

An excellent discussion on the relationship between information and entropy Suzi.

One addendum I've been making to the disorder definition of entropy for the last few years is disorder for transformation. In a system ordered for transformation, the energy gradients are arranged to enable spontaneous transformations. In a disordered system, the gradients have become fragmented and isolated, which makes transformation unlikely, at least without energy from outside the system.

But a system that is highly disordered for transformation also contains an enormous amount of information. Or maybe it's easier to accept if we say, it's a system that would require an enormous amount of information to describe. Of course, for systems we normally label "high entropy", it's information that's hidden from us, again without injecting energy. But this also implies that every high information system is also a high entropy one, dependent on incoming energy for its ongoing dynamics.

Consider that the brain is the most energy-intensive organ in the body. Or all the energy that data centers now require. Or the fact that my phone and laptop need frequent charging to avoid turning into useless bricks.

I've never been particularly concerned about Maxwell's demon. It's always struck me as adding a non-physical concept to posit an exception to physical laws. How does the demon get its information? How does it exert energy to open and close the door? And of course, how does its brain work? To me, the answers to all these questions resolve any dilemmas. But I'll admit, like many thought experiments, it clarifies intuitions and generates interesting discussions!

John:

Very nice essay. Glad environmental catastrophe has spared you and you are up and running again!

This topic has occupied me for many years and, even so, your thoughts here are clear and succinct (hard to do much but poetry outwith the mathematics imho - but you have managed to).

Looking forward to reading more. Thank you, Suzi.

Suzi Travis:

Thanks, John! It was tricky not to get lost in the math on this one, so I’m really glad you enjoyed it.

Johnnie Burger:

‘If that’s true, then the whole argument is circular reasoning’ - I am starting to fear that all thought experiments tend to lead to circular reasoning. Because why is ‘there is no such demon’ not simply the end of it?

Suzi Travis:

That’s a fair worry! Thought experiments do tend to walk a fine line between revealing insights and just restating assumptions in a clever (or not so clever) way. Maybe the trick is finding the ones that actually push our intuitions somewhere new. A lot of them were created when we knew less, so feel less relevant now.

Johnnie Burger:

A reply more in the spirit of celebrating thought experiments: because of the presence of the interacting demon, the system is no longer a closed system. On top of that: whatever powers the demon (eating souls, blood, whatever) accounts for the difference.

Malcolm Storey:

I've always taken entropy to be just an obvious statistical effect.

If it takes energy to sort the hot molecules from the cold ones, then the reverse must be true if they start sorted and are allowed to mix. This should be testable.

In a perfect gas, any tagged group of molecules must tend to move towards random distribution through diffusion. So I could tag all the molecules in one half and let them disperse. Obviously there would be no energy effect in the gas since the tagging is purely conceptual in my mind.

Suzi Travis:

Yep, entropy is, at its core, a statistical effect — just a matter of probabilities playing out over time.

Your example of tagging molecules and watching them diffuse is a cool idea. If it takes energy to separate hot and cold molecules, does it make sense that reversing that process should also have some kind of cost?

I'm no physicist, but I wonder if the response would be something like this: diffusion happens naturally because there are vastly more ways for molecules to be mixed than to stay neatly separated. That’s why the universe tends toward higher entropy — not because it’s being pushed there, but just because high-entropy states are overwhelmingly more probable.

But doesn’t that still mean that moving to a higher-entropy state involves some kind of energy transformation? Even in simple diffusion, molecules are constantly moving and colliding, redistributing their energy. That doesn’t require an external energy input, but wouldn’t it still contribute to increasing entropy by spreading energy into a state with more possible microscopic arrangements, making it less locally structured? 🤔
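
For a rough feel of that counting argument, here's a small illustrative sketch (not from the article; the particle number is an arbitrary choice). It counts how many arrangements of N labelled molecules put all of them on one side versus half on each side:

```python
from math import comb

N = 100  # number of labelled molecules (arbitrary, illustrative choice)

# Number of arrangements (microstates) with a given count in the left half.
all_on_one_side = comb(N, N)     # all N on the left: exactly 1 arrangement
evenly_mixed = comb(N, N // 2)   # half on each side

print(f"all on one side: {all_on_one_side} microstate")
print(f"evenly mixed:    {float(evenly_mixed):.3e} microstates")
# With N = 100, the mixed macrostate has roughly 1e29 arrangements versus 1,
# which is why diffusion needs no push, only probability.
```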

Wyrd Smythe:

Yes, I believe that's right. As you point out in the article, Boltzmann entropy can be viewed as a statement about how much *useful* work you can get out of a system. Essentially, it's about heat differentials. In a high-entropy gas, no region is much hotter than any other, so there's no work that can be extracted from the system.

Malcolm Storey:

OK, so the point is that Maxwell's demon (about energy and entropy) separated hot and cold molecules, and in doing so created a useful energy differential.

My scheme (about entropy/information) only separates arbitrary molecules so there's no energy differential.

So the question then is if Maxwell's demon separated arbitrary molecules with no energy implications, would it take less energy (or entropy) to do this? It's hard to see what part of the process would have a changed cost.

Wyrd Smythe:

I believe the answer is that the demon does the same amount of work in both cases. The demon has to keep track of all the molecules, which requires memory and energy. If it *does* anything to the molecules, it exerts even more energy. The original demon opened a door when molecules it wanted to sort approached. Here it would be tracking tags rather than energy levels, but I don't see how the situation is much different.

Yet, in tagging half the molecules, even though they have roughly the same energy (temperature), it seems to me that the tagging creates a virtual low-entropy situation, and that virtual entropy increases over time until the two sides are mixed. Essentially, there are far more states with the tagged molecules distributed throughout than states with them all on one side.

FWIW, I did some short videos exploring things like this. One of them applies well to the situation you describe. In this video, the "particles" are tagged red or blue but are identical in energy levels. Their natural motions mix them:

https://youtu.be/dNB3fFTLPqc?si=zhveaPF0EXbuvHlz
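
In the same spirit as that video, here is a rough toy version in code (purely illustrative, not taken from the video): tagged and untagged particles swap randomly across the midline, and the tagged fraction on the left drifts from 1.0 toward 0.5.

```python
import random

N = 1000                   # particles per half (illustrative choice)
left = ["tagged"] * N      # start: all tagged particles on the left
right = ["plain"] * N      # all untagged particles on the right

random.seed(0)
for step in range(20001):
    # Swap one randomly chosen particle from each side: a crude stand-in
    # for molecules wandering across the midline with no energy bias.
    i, j = random.randrange(N), random.randrange(N)
    left[i], right[j] = right[j], left[i]
    if step % 5000 == 0:
        frac = left.count("tagged") / N
        print(f"step {step:6d}: tagged fraction on left = {frac:.3f}")
# The fraction relaxes toward ~0.5 simply because mixed arrangements
# vastly outnumber separated ones; nothing is pushing it there.
```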

Malcolm Storey:

There is an implication here that you can't make a passive molecular non-return valve that only allows one-way passage as this would effectively be a perpetual motion machine.

Wyrd Smythe:

Indeed. The big takeaway about entropy is that everyone pays.

Suzi Travis:

Yes — exactly!

Eric Borg:

I’m going to disagree that information erasing solves the riddle of missing entropy in the Maxwell’s demon thought experiment. Was it circular reasoning? Maybe. But as soon as we start positing magical demons that alter the function of causality, then I’d say we pretty much revoke our ability to posit natural places for missing entropy to be found. The missing entropy should actually reside in the effects of the magical door, and regardless of any supposed lack of friction. (Edit — And don’t forget lack of mass too!)

Let’s instead consider the thought experiment without a magical door that keeps fast particles on one side and slow particles on the other. Instead we could posit the exact same situation to occur, though only through incredibly improbable chance would we get hot and cold sides. So no magic this time. Might we then epistemologically say that the second law of thermodynamics was broken? Sure, but that would simply reflect human ignorance regarding the dynamics of causality itself. Causality would mandate this situation to occur regardless of our supposedly unbreakable laws.

Unfortunately I didn’t get the chance to posit that information should only be said to exist as such to the extent that something causally appropriate becomes informed by it. But at least I was able to help dispel the notion that we can learn about how reality works, by positing magical situations.

Suzi Travis:

I agree that introducing a magical demon can make things murky — it’s always a bit risky to use supernatural entities in a thought experiment while still expecting naturalistic conclusions.

There’s been a long-standing debate about whether the demon in Maxwell’s Demon is a misleading distraction. Some physicists and philosophers argue that people focus too much on the idea of an intelligent, supernatural being rather than the physical mechanism involved in the sorting process.

One way to avoid the magic problem is to replace the demon with a tiny robot or mechanical sorting device. The paradox (it is argued) still holds: if it could separate hot and cold molecules without increasing entropy elsewhere, we’d have a serious problem. But once we consider how that little robot would actually work, we run into Landauer’s insight — any mechanism that processes and erases information has a thermodynamic cost.
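
For a sense of scale, Landauer's bound works out to k_B · T · ln 2 per erased bit. A quick back-of-the-envelope sketch (the temperature is just the usual room-temperature illustration):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

e_per_bit = k_B * T * math.log(2)   # Landauer's minimum cost to erase one bit
print(f"minimum erasure cost per bit at {T:.0f} K: {e_per_bit:.2e} J")
# About 2.9e-21 joules per bit: tiny, but never zero, which is what
# keeps the second law safe from the demon's bookkeeping.
```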

I see what you’re saying — if something happens, then by definition it was allowed within the laws of physics. But does that mean causality is actually doing any work here? The universe evolves based on its present state, not its past, so entropy increases simply because high-entropy states vastly outnumber low-entropy ones. The second law isn’t about what must happen — it’s just overwhelmingly likely. Wouldn’t the real surprise be if entropy didn’t increase?

Wyrd Smythe:

As an aside, in the same way we reverse the expansion of the universe to decide that there must have been a Big Bang at the beginning, the relentless increase of entropy points to the universe having extraordinarily low entropy at the beginning. Which seems at odds with the sense that the CMB has very high entropy. Way beyond the scope of your article, but the resolution to the paradox, in a word, seems to be "gravity".

Eric Borg:

Agreed Suzi, we’re surprised when things that we consider overwhelmingly unlikely, happen anyway. So an apparent void of entropy would surprise us. In truth however I’ve always been uncomfortable with the fascination that physicists have with entropy. Yes your suggestion of “high entropy states outnumber lower” is better than the “disorder conquers order” concept that I was generally taught in school. But I’ve always been a strong determinist and therefore figured that all human uncertainty about what will happen, should ultimately be explained by human ignorance. If we live in a fully natural world then why worry about humans being surprised by things that they don’t grasp? Instead of positing that entropy always increases, we might do better to posit that systemic causality never fails, and regardless of human ignorance that creates human surprise.

On Landauer’s insight, shouldn’t we already presume that any mechanism that processes and erases information would thus have thermodynamic costs? Flipping through Wikipedia I do realize that the physics here is well above my pay grade, but I wouldn’t think physicists would consider any element of causality to have no such effects.

Suzi Travis:

You've got me thinking...

Do you mean that we fail to predict simply because we don’t know enough? The question I’ve been thinking about is: even if determinism is true, does that necessarily mean uncertainty is just ignorance? What about systems that are deterministic but still unpredictable — like chaotic weather models or Wolfram’s cellular automata, where you can’t shortcut the computation?

And on causality... the fundamental physical laws work just as well forward as backward in time — they don’t distinguish between cause and effect. In physics, thermodynamics, and information theory, causality is seen as something that emerges statistically, not something baked into the laws themselves. If we go by the math, causality looks more like a principle we impose because it helps us make sense of the world.

So when you say systemic causality never fails, do you mean that causality is more fundamental than entropy — that it's somehow built into the deepest layer of physical law? Or do you see it more as a conceptual framework we use because it's so useful for explanation and prediction?

Eric Borg:

I’ll try to be more explicit about what I’m suggesting here Suzi. It’s that perhaps the reason physicists invented entropy was because they needed a metaphysical premise from which to build, though the people who are technically in charge of metaphysics (philosophers) have never provided them with any generally accepted understandings from which to do so. So for the metaphysical grounding of their field perhaps they invented the flawed concept of “entropy always increases”, but would have been better served by presuming “systemic causality never fails”? And why make my suggested metaphysical presumption? Because if it does ever fail, then such magic would make science obsolete in that respect.

One implication of grounding physics upon this ontological premise is that all things that ever have happened, or ever will, are fixed on the basis of that systemic causality. So for any coordinates of x, y, z, t, (and so on should more dimensions exist), there will always be a unique situation of existence given that presumed absolute systemic causality.

Yes our supposed “fundamental laws” work the same forward or backwards, “cause” or “effect”, but that should merely be because they’re just models that attempt to describe how reality works. They aren’t “truth”. Theoretically we could imagine something outside our systemic causality which could perfectly grasp everything about our system, and thus predict all that ever has or ever will occur here, including quantum mechanics. But in practice I can’t imagine anything more ridiculous than such an outside understander. And clearly we’ll never be gods either.

In this comment thread I see that as an aside Wyrd Smythe hinted at a potential way for the second law of thermodynamics to be causally violated (or at least that’s my interpretation of what he said). Perhaps gravity could get this done? Again this is all well above my pay grade, but maybe so. Currently physicists presume that the Big Bang will forever expand existence until there’s no more effective heat — no more work left to do ever again. But maybe this presumption is false. Maybe causality in the form of gravity (or something) will eventually bring the system back together again for an extreme violation of our second law of thermodynamics, thus causing another Big Bang? What I like most about this idea is that it would leave reality as a perpetually functioning system rather than a one-way system that begins with function and then goes on and on perpetually under “heat death”.

Suzi Travis:

Like always, you’ve raised some big questions here, Eric! Let me see if I’ve understood you right:

-- You suggest that physicists needed a foundational assumption—a metaphysical starting point—to ground their theories.

-- Then you propose that philosophers (who, in your view, are responsible for metaphysics) failed to provide this.

-- So physicists, needing something to build on, “improvised” by using entropy as a kind of stand-in foundation—instead of grounding everything in causality.

So if I’m following, your claim is that philosophers should have done a better job providing science with an ontological foundation—and because they didn’t, physicists invented one: entropy.

That’s the part I’m struggling with. Entropy doesn’t feel like a metaphysical placeholder—it comes from statistical mechanics, not armchair metaphysics. It’s a mathematical tool that describes what large systems tend to do, based on how microstates relate to macrostates.

I think you're absolutely right that scientific models aren’t "truth" in any final sense. Science doesn’t deal in certainty; it deals in probabilities. It’s always about building the best possible models based on the current evidence and being ready to revise them when better data comes along. In that sense, laws like the second law of thermodynamics aren’t metaphysical absolutes—they’re just incredibly reliable generalisations that have held up across a huge range of systems.

But the idea that “philosophers deliver metaphysics to physicists” strikes me as odd. That seems to suggest philosophers define what exists and then scientists go build models on top of that foundation. But don’t questions about what exists ultimately depend on observation, data, and empirical modelling? Philosophers can help clarify these questions, but they can’t resolve them in isolation.

Philosophy plays an essential role—it helps analyse, clarify, and critique assumptions. But it doesn’t dictate ontology to science.

If metaphysics is about what exists, shouldn’t those claims be based on evidence? Isn’t that what scientists are doing—building metaphysical models grounded in observation, rather than waiting for philosophers to tell them what’s real?

As for cosmology—I love the idea of a cyclical universe too. But it’s worth remembering that when we say “the universe,” we really mean the observable universe. There may be far more that we simply can’t see, which complicates how we think about endings—or beginnings, for that matter.

Eric Borg:

I’d say you’ve got my project roughly right Suzi, though perhaps you don’t grasp it fully, or why I consider it so important. And since reading your article and having this conversation with you, I’ve only now been considering the possibility that my long held single principle of metaphysics might also help physics entropically. That physicists have been stymied by the implications of a magical scenario for over a century does not inspire confidence in me that their house is in metaphysical order. Their second law of thermodynamics can’t deal with a frictionless, massless door that nevertheless magically alters particle function by forcing fast particles one way and slow the other, except by means of a theorized “information erasing” which occurs inside the magic? I’m very much a physicalist regarding all that’s real however, so I’m sure that what you’ve been working on to ground information in the physical will succeed.

At least in a literal sense metaphysics suggests “before physics”, and physicists specialize in physics rather than what comes before physics. Philosophers specialize in metaphysics, but they famously agree on nothing and so their ability to found science in this regard has been limited. I’m not saying that philosophers should stop doing what they’re currently doing. Instead I’m saying that in addition to that, a respectable community of “meta scientists” needs to become established whose specific purpose would be to provide agreed-upon metaphysical, epistemological, and axiological principles from which to found science. That way scientists shouldn’t need to focus so much on the structure of science but could instead leave such questions to dedicated specialists. And I’d hope for certain scientists, like Sabine Hossenfelder, to become members of such a community. Mind you that a given community wouldn’t inherently need to become sanctioned by any current professional organizations. If my plan were to grow legs however then I’d expect competing newsletters to be written by individual groups who are vying to become the currently dominant regulators of science. Of course there’d be all sorts of nasty politics and such in the fight to win influence and become so distinguished. The end goal however would be for dedicated specialists to structure the function of science so that scientists might then be able to focus on what they’re actually supposed to be focusing on, which is to say science rather than the philosophy behind science. In practice I’d expect the greatest gains to be made in the fields that have struggled the most. For example there seems to be far more disorder to potentially improve upon in modern psychology than physics.

Dave Slate:

One problem with determinism is that the laws of quantum mechanics imply that there is an essential degree of randomness and unpredictability to the physical world. Albert Einstein, who didn't like the idea that this was a fundamental characteristic of nature, famously quipped: "God does not play dice with the universe".

Physicist Roger Penrose, in his book "The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics", speculated about the connections between quantum mechanics, consciousness, and the possibility of free will.

I think there may be some validity to the idea, advanced by some scientists (e.g. former NASA physicist Thomas Campbell), that the universe we inhabit is really a gigantic complex dynamical simulation, like that in a video game, governed by a set of elegant mathematical rules, and complete with its own random number generator in the form of quantum mechanics. This simulation is presumably being conducted by some higher beings, for purposes unknown. If that's true, then those higher beings may themselves be inhabitants of a yet even larger simulation, and so on.

Eric Borg:

Right Dave, many physicists do interpret quantum mechanics that way. Conversely I stand with Einstein by interpreting our associated perceptions of fundamental randomness, to reflect our ignorance of what’s happening. Furthermore it seems to me that this position is “natural” while the other mandates a “super” modifier. So there’s that too. But I’ve noticed that physicists who oppose Einstein in this sense, tend not to like reductions to this effect. So it goes…

Dave Slate:

As Hamlet says in Shakespeare's play: "There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy."

Wyrd Smythe:

Good post. Entropy does seem often misunderstood, especially in the differences between Boltzmann and Shannon entropy. I've long been bothered by the tendency to view the former as some kind of force. I don't think it is — it's just the statistical probability that the system moves to a more probable macrostate. And immediately there's a problem in how macrostates are defined — which is both arbitrary AND observer-dependent.

I've also long been bothered by the notion that unbreaking an egg (or a glass) is possible. Egg shells and glass form over time, and I don't see how any careful reversal of forces can — in the same amount of time it took to break them — fuse them seamlessly.

Something I've never seen discussed is the entropic cost of not erasing but overwriting old data with new data. "Erasing" a file on disc often just means changing a single byte in the directory so the system "forgets" it has that file (freeing up the space to be overwritten later). So, in this sense, the cost is tiny. If instead one "erased" every byte (which is sometimes done for security), this is usually done by (often repeated) over-writing. I would think the energy the system uses to perform any of these outweighs the entropic cost involved in the actual bytes on disc.
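
A toy sketch of that distinction (purely illustrative, not how any real filesystem is implemented): a quick "delete" only drops the directory entry, while a secure erase actually overwrites the bytes.

```python
# Toy "disk": a directory mapping filenames to block numbers, plus raw blocks.
blocks = {0: b"top secret data", 1: b"holiday photos"}
directory = {"secrets.txt": 0, "photos.jpg": 1}

def quick_delete(name):
    # Forget the file exists; the underlying bytes are left untouched.
    directory.pop(name)

def secure_erase(name):
    # Overwrite the data itself, then forget the file.
    block = directory.pop(name)
    blocks[block] = b"\x00" * len(blocks[block])

quick_delete("secrets.txt")
print(blocks[0])   # b'top secret data' is still sitting on the "disk"

secure_erase("photos.jpg")
print(blocks[1])   # now all zero bytes; the old data is gone
```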

And I think likewise the requirements of the Demon's very existence and information gathering are more likely to account for the decrease in entropy. (But I've never been one to swim in the main stream. Iconoclast has always been my secret middle name. :)

Suzi Travis:

Treating entropy like a force feels really intuitive — it’s easy to see why people fall for it. But it’s just probability doing its thing.

Your point about macrostates being observer-dependent is something I’ve been thinking about too. Macrostates are a highly useful way of talking -- useful for prediction -- but not necessarily fundamental. Even so, they're still very much real to us.

I also like your point about overwriting vs. erasing data. Even if erasure is just marking a file for overwriting, doesn’t that still involve a physical change somewhere? Landauer’s principle ties entropy cost to logical irreversibility, so I wonder if delayed overwriting is just shifting the cost rather than avoiding it? I don't know enough to know the answer to this one.

And, Iconoclast is a great secret middle name! 😆

Wyrd Smythe:

Indeed. Some even believe that *time* emerges from entropy or, somewhat less fantastically, that entropy is why time runs only one way. My sense is that both put the cart before the horse. My suspicion is that time is fundamental and axiomatic.

That macro states are observer-dependent is one of the reasons I can't see entropy as a force or the basis of anything. Without a clear definition for macrostates, there's no way to measure it in complex systems. Yet it remains true that systems probabilistically evolve towards higher entropy states. (Shannon entropy is so much clearer and cleaner on this that I tend to see Shannon entropy as distinctly different from Boltzmann entropy despite their key equations being somewhat similar. I wish Shannon had picked a different name, though.)
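
For what it's worth, here is the Shannon side in its "clean" form, as a small illustrative sketch computing H = -sum(p * log2 p) for a few toy distributions:

```python
import math

def shannon_entropy(probs):
    # H = sum(p * log2(1/p)), in bits; terms with p = 0 contribute nothing.
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin, maximally uncertain
print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits: a biased coin, less surprise
print(shannon_entropy([1.0]))        # 0.0 bits: a certainty carries no information
# Boltzmann's S = k_B * ln(W) has the same log-of-multiplicity shape,
# which is why the two formulas look alike despite measuring different things.
```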

Yeah, the erasing/overwriting thing is too far into the weeds for me to truly grasp. My thinking, FWIW: Often, "deleting" a file just means the O/S changes the directory entry such that the filesystem "forgets" it has that file and will eventually reuse that disc space with new data. It strikes me that deleting a big file takes (at least initially) exactly the same effort as deleting a small file. But at this point, the original data is still on the disc and in many cases can be recovered. Only when enough of the original data has been overwritten can that file be said to be gone.

But this seems to entangle the storing of new data with the overwriting (and therefore "deleting") of old data, and I have no idea how to view this in terms of entropy. Writing new data to never-used sectors should be the same as overwriting existing data, so it seems the entropy bill should be sent to the storage side of things. [shrug]

Suzi Travis:

This is such a great comment. I’m going to be thinking about this for days.

One thing I’ve been wondering — since we, as biological and cognitive systems, are embedded in an entropic process, maybe our experience of time is inseparable from that fact. If our brains track change, and change is just the accumulation of entropy, then maybe time, for us, is simply what entropy feels like?

This is making my brain hurt.

Wyrd Smythe:

I think one can argue that our experience of time is necessarily connected to entropy because, as you say, as large biological machines we are entropic, though we ourselves are islands of low entropy. For that matter, so, in some sense, is the Earth and its biosphere. The question, then, is how much of our sense of time is from entropy versus our self-organization.

I would push back slightly on “change is just the accumulation of entropy” — change can also be the reduction of entropy via some process + energy. One might wonder if our sense of time has anything to do with memory and learning. If I judge passing time by counting heartbeats, my accumulating count in some mental register seems a reduction in entropy (at the cost of fuel consumption and an overall environmental increase in entropy).

So, yeah, brain pain and not a clue, but since entropy is in the mix one way or the other, it does seem involved somehow.

Suzi Travis:

Good point — when we look locally, not all observed change is entropy increasing. Self-organising systems (like brains) can reduce entropy in their own structures. But overall, entropy is still increasing in the larger environment. So in the end — if we’re taking a physicalist view — our sense of time must have something to do with entropy. In some sense everything does. Whether that's through memory, learning, or something else. Either way, it's all part of the bigger entropic picture.

Michael Pingleton:

Your description of microstates vs macrostates reminds me of how pixels work to form an image on a monitor. I've also always viewed things like this in a hierarchical manner; basically layers of abstraction. In this sense, it reminds me of your other article titled "The Problem with Complex Things."

This idea of entropy also somewhat ties into the neural networking engine that I'm working on. In a sense, it starts off small, then grows as it learns to process more complex information. The microstate/macrostate idea works here as well, as we distinguish between the individual neurons and the network as a whole.

Some very interesting thoughts here, Suzi. Thanks for sharing!

Suzi Travis:

The pixel analogy is great! It gets at the micro-macro distinction. Let's just hope our pixels don't start trending toward disorder over time.

Your neural networking engine seems to be getting at complexity -- how does complexity grow in a world of increasing entropy. Which is a super interesting question (that I'll get to in a couple of weeks).

Looking forward to learning more about this neural networking engine.

Michael Pingleton:

Yes, the idea of pixels on a screen was the first thing that came to mind. Though, when these pixels become disordered, you get static. Yes, that's how TV static works.

As for the neural networking engine, that grows as we add more and more neurons to the network. The growth is focused on individual neurons, rather than the network as a whole. When viewing the network as a whole, you might say that entropy is increasing. However, when thinking strictly about input vs output data, that doesn't matter. As long as we get the expected output for a given input, the entropy within the network itself doesn't matter.

It does make me think about how the human brain works though; we don't really understand the network itself that well, but we can still predict certain outputs when given certain inputs.

Dave Slate:

Good thought-provoking article, Suzi.

One thing that puzzles me about Maxwell's Demon: isn't the demon itself part of the "closed system" of the partitioned box? In which case, isn't its nature drastically different, in a physical and informational sense, from the other components of the system, such as the box's perfectly reflecting interior walls, the partition dividing the box's two chambers, and the molecules themselves?

As an aside, I met Claude Shannon once, in 1980. I, along with several other participants in the "Third World Computer Chess Championship" in Linz, Austria, traveled from Linz to Vienna after the tournament to have dinner with Claude Shannon and his wife Betty at the Hotel Sacher, where the Shannons were staying. We probably enjoyed some of the Hotel's famous "Sachertorte" for dessert, although that was too long ago for me to remember clearly. Claude Shannon had written a paper in 1950 titled "Programming a Computer for Playing Chess", which described methods, including the "Shannon Type A" strategy, that were widely used in subsequent computer chess programs, including ours. But actually testing Shannon's ideas was not possible at the time of his paper, since they required computing hardware that was not yet available.

Suzi Travis:

Yes, good point. I think the demon should be considered part of the closed system rather than something external. This is a great point because it forces us to think about whether the system is truly isolated or whether the demon, by necessity, interacts with the molecules in a way that makes it not truly closed. The demon is not just another gas molecule bouncing around but an entity that processes information and makes decisions.

Wow, you met Claude Shannon! That must have been an incredible experience. And you were involved in computer chess as well? That's fascinating — what did you think about early programs, like Belle or Deep Blue?

Dave Slate:

Suzi Travis wrote: "Wow, you met Claude Shannon! That must have been an incredible experience. And you were involved in computer chess as well? That's fascinating -- what did you think about early programs, like Belle or Deep Blue?".

My hazy recollection of meeting Claude Shannon was mostly that he was an "interesting character". And yes, in a previous life, or as I sometimes like to say, "A long time ago, in a galaxy far, far away", I made computers play chess. Here is my page on the "Chess Programming WIKI": https://www.chessprogramming.org/David_Slate

The third photo on this page was taken in Claude and Betty Shannon's room at the Hotel Sacher in Vienna, Austria, in 1980, following the "Third World Computer Chess Championship" tournament (won by Belle) in Linz, Austria. From left to right:

Ben Mittman: Director of the Vogelback Computing Center at Northwestern University, where I worked for many years. Ben provided a lot of support for our computer chess activities.

Monty Newborn: Co-author of computer chess program "Ostrich" and professor emeritus of computer science at McGill University in Montreal, Canada.

Tony Marsland, Canadian computer scientist and game researcher.

Dave Slate: American computer scientist and co-author (with Larry Atkin) of the winning program at the Second World Computer Chess Championship in Toronto, Ontario, Canada, in 1977. See https://www.chessprogramming.org/WCCC_1977

David Levy: British chess master, director of several computer chess tournaments, and author of numerous books on chess and computers. He also wrote the 2007 book "Love and Sex with Robots", which he promoted on an episode of the TV show The Colbert Report. I watched that episode, which aired many years after I had last seen Levy in person, and was quite amused by his and Colbert's efforts to "keep a straight face" while discussing the book.

Claude Shannon: American computer scientist, known as the "father of information theory".

Ken Thompson: American pioneer of computer science, author of the original Unix operating system, and co-creator (with Joe Condon) of the Belle chess-playing machine, which was the first computer to achieve master-level play.

Betty Shannon: American mathematician and Claude Shannon's wife.

Tom Truscott: American computer scientist and co-author of "Duchess" (Duke University Chess Program).

Deep Blue's victory over chess world champion Garry Kasparov in 1997 was quite an achievement of both hardware and software. Since then, chess program development has reached the point where programs running on ordinary PCs, such as StockFish, play at superhuman grandmaster level. More recently, Google's Deep Mind division has produced programs that teach themselves from scratch to play games like Chess and Go at extremely high levels. Deep Mind is hoping to apply the same technology to tackle real world problems: https://deepmind.google/research/breakthroughs/alphazero-and-muzero/

But are any of today's chess-playing contraptions "conscious" in the sense we humans understand the term? I don't think so. Will any man-made "AI" machine ever achieve consciousness? That's a different question. The problem is that we tend to attribute consciousness to any creatures or things that behave sufficiently like us. The only creature I actually know to be conscious is me. I infer that other humans (and animals) are conscious only based on the similarity of their appearance and behavior to mine. Will it ever be possible to distinguish actual consciousness from the mere appearance thereof? That question is at the core of the mind-body problem, and I don't pretend to know the answer.

Cheers.

Suzi Travis:

Wow — you were part of such an iconic chapter of AI history. That photo (and the who's who in it!) is just incredible. It’s strange to think that, at the time, no one could have fully imagined what was coming.

Looking at that photo — not really that long ago, despite how it might feel — it’s amazing how far we’ve come. I remember when the AlphaGo news broke (was it around 2016?). I was floored. It made me question a lot of assumptions I’d been carrying without even realising.

This was such a fun trip through history -- thank you for sharing. And I love where you land at the end. It's so true. There is a deep uncertainty about consciousness — what it is, whether we can ever know -- it's that pesky problem of other minds, isn’t it?

Jon Rowlands:

I think about this while folding laundry. Solving small puzzles over and over, in order to erase the information stored in the crumpled tee shirt. Near the end of the pile, when I start wanting to throw the rest away and get new ones, I make myself feel better by believing that I'm increasing the entropy of the universe most slowly.

Suzi Travis:

Thinking about entropy can feel depressing. So can that never-ending pile of laundry. I fully support your coping mechanism 😆

Wild Pacific:

Very beautiful as always.

The definition of physical information, I think, is not fleshed out enough for readers.

I will admit, it was close to my view before. Since then, I’ve felt that IIT is not the right framework, and I see information as a recursive fractal.

While not strictly on this topic, Sam Harris’s conversation with Sara Walker covered some of the probability fractal problem.

https://www.samharris.org/podcasts/making-sense-episodes/388-what-is-life

Some examples in the article are in this vein: it is not possible, in principle, I believe, to unbreak an egg. Because, as with the demon story, the process of rebuilding an egg requires information that is not measurable at the very basic level (not just a lack of tools).

If one works with electronic systems, it’s quite well known that to emulate a virtual transistor requires several orders of magnitude more transistors, if we are talking about high precision voltage emulation.

I believe the universe holds the same pattern: in order to have an object created, we need each assembly element which contains information to be emulated and reversed.

Cats jump out of the bag using their own energy, but to catch them and put them back in the bag requires an external system (a human) to do a lot of work. Not just forgetting, like the demon, but actually doing more work than is possible.

It bends the curve of probability of each fractal level.

Suzi Travis:

Thank you — I'm really intrigued by this idea of information as a recursive fractal. I think I follow what you’re getting at: are you saying that the reason entropy is so difficult to reverse is because each layer of disorder creates deeper layers of lost information that need to be reconstructed? And is that why you say the process of rebuilding an egg requires information that isn’t measurable at the most basic level?

This makes me think of Stephen Wolfram’s ideas about computational irreducibility—where some systems evolve in ways that can’t be reversed or predicted without running the full computation. Do you think entropy works in a similar way? That reconstructing order isn’t just difficult, but fundamentally irreducible because information is lost across multiple scales?

Wild Pacific:

Yes, I think each time a possibility of an event exists, like gas atoms moving in the box, there is a possibility of reversing it that sits at a much higher level. I have edited my comment to add a link to a podcast with Sam Harris and Sara Walker (she works with Lee Cronin on Assembly Theory).

It was super educational for me on this topic. https://www.samharris.org/podcasts/making-sense-episodes/388-what-is-life

There is an answer there, for instance, that it is just not possible that somewhere in the universe an iPhone 16 would spontaneously assemble from the elements, despite the endlessness of the universe.

Suzi Travis:

That's great, thank you. Somehow, I had missed Sam's episode on 'What is Life?', but that sounds very much along the lines of what I am interested in exploring atm. So, thanks so much

Ian Hill:

Whenever my wife randomises my Rubik's cube, I jokingly complain that she is contributing to the heat death of the universe.

Suzi Travis:

I loved everything about this comment! Thanks for the laugh 🤣

EntropyWave:

Ah, but we never truly "erase" information do we? The bit that is erased becomes entangled with the environment in a complicated manner, but it is still there in the sense that we could, in theory, extract its information once again if we were able to track the quantum state of the system it was dissolved into. Of course, we will never likely be able to do such a thing because of decoherence, where the quantum state is, for all practical purposes, lost to us. We call this loss of information entropy. Yet, even though we may lose information through decoherence, without it we would have no classical world. It is ironic that our conscious perception of the world seems to depend on information being hidden from us!

Suzi Travis:

I was waiting for someone to mention the idea that information is never really erased! Is perception just decoherence with a point of view? 😉

ASHWIN RAJAN:

What's a place with absolutely no information called?

Suzi Travis:

I guess it depends who you ask. A cosmologist might call it the heat death of the universe. A philosopher might call it "the void". And in information theory, it might just be called "pure noise".

ASHWIN RAJAN:

Pure noise?

Suzi Travis:

In information theory, pure noise refers to a signal that is completely random -- it’s perfectly unpredictable, carries no usable message, and cannot be distinguished from randomness.
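
A small illustrative way to see that in code: estimate the per-byte entropy of a genuinely random stream versus a constant one. The random stream sits near the 8-bit maximum; the constant stream carries nothing.

```python
import math, os
from collections import Counter

def entropy_bits_per_byte(data):
    # Empirical Shannon entropy of a byte stream, in bits per byte.
    n = len(data)
    return sum((c / n) * math.log2(n / c) for c in Counter(data).values())

noise = os.urandom(100_000)   # "pure noise": every byte value equally likely
flat = bytes(100_000)         # all zeros: perfectly predictable

print(entropy_bits_per_byte(noise))  # close to 8.0 bits per byte
print(entropy_bits_per_byte(flat))   # 0.0 bits per byte
```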

CredenzaCompound:

I assume next week will address the semantic content (which I take to be what we typically mean when we use the i-word, unless we’re working in specialized fields)? Because it’s never been obvious to me that we aren’t playing tricks on ourselves by smuggling mental kinds into physics by equivocation.

Suzi Travis:

Yes, I wonder about that too. I’ll definitely get to the distinction between semantic and syntactic information — but there are a few other things I want to unpack from a physicalist perspective first. It’s such a tricky space, and I think you’re right to flag the risk of smuggling mental kinds into physics. We do have to be careful here.

Allen Ring:

Scientists’ concept of entropy is aggravating. The element gold was formed in a star much larger than our own that went nova and blew the atoms across a huge expanse of space. That would be an extremely high level of entropy, just slightly less than the quantum flux. Add time, and you get a highly concentrated pure vein of gold in a mountain in California. That is an extremely low level of entropy. What is the mechanism?

Suzi Travis:

It feels totally counterintuitive, doesn’t it? How can something as ordered as a gold vein come out of something as chaotic as a star exploding?

I’m no physicist, but my guess is they’d say that on average entropy always increases overall — as in, across the entire universe — but that doesn’t mean it has to increase locally. Local order (like a gold vein or a human being) can absolutely exist, as long as it's part of a larger process that increases entropy elsewhere. They call these pockets of order — and they can exist as long as the entropy price is paid somewhere else.

Suzi Travis:

Thanks for the link! If you have time, perhaps you might like to say a little bit about what it's about and why you found it interesting and relevant to the conversation. It helps others get a sense of what they’re clicking into.

Imperceptible Relics:

Yes, I'd be happy to. A few years ago, a researcher from Sandia National Labs left and started a company to lower the energy cost of computers, which, with Moore's Law, create increasingly more heat and entropy. The researchers utilize Landauer's limit, the minimum energy to erase a bit of information, to develop a reversible process called adiabatic computing.

The process, which still requires some energy, embeds the circuit into a resonator, which is like a swinging pendulum. The circuit they design minimizes the friction for the pendulum to swing, because it would operate endlessly if it were not for friction. Thus the input they generate gives a boost to the pendulum to keep it from slowing down. (I am curious how this circuit would operate in outer space — a vacuum.)

In a way, they are reifying Shannon's information theory — making the abstract theory concrete/real/physical. However, a biophysicist has suggested that Maxwell's Demon can be found in nature: "Real-life versions of Maxwellian demons occur, but all such 'real demons' or molecular demons have their entropy-lowering effects duly balanced by increase of entropy elsewhere." https://en.wikipedia.org/wiki/Maxwell%27s_demon#Applications

In 2013, Werner R. Loewenstein wrote a book that explored that, called "Physics in Mind". There are other interesting theories how neural information/memories are stored, such as quantum consciousness: https://www.quantamagazine.org/a-new-spin-on-the-quantum-brain-20161102/

I had only encountered this connection to Maxwell's demon after reading some articles several months ago about this startup, and some of the originating theory. Some additional illustrations make this connection between Maxwell, Landauer, and Adiabatic computing: https://www.regulations.gov/comment/NIST-2022-0002-0077

Suzi Travis:

Amazing! Thank you.
