47 Comments
[Comment deleted, May 27]

Suzi Travis:

I agree — I think we’ve forgotten the important insights we get from the theory of evolution. There’s something important about the fact that brains (and other evolution-shaped systems) aren’t static machines. They’re dynamic, constantly adapting, shaped by pressures over time. It often seems like we’ve been trying to explain something fluid using tools designed for fixed systems.

[Comment deleted, Jun 4]

Suzi Travis:

Interesting! I don’t know a whole lot about Kauffman’s concept of the Adjacent Possible, but I wonder -- do you think it risks becoming a kind of one-size-fits-all explanation? Might the mechanisms of change differ across domains like biology, economics, culture, and technology? And if so, do we need to be careful about how we apply the idea in each case?

[Comment deleted, May 27]

Suzi Travis:

Ah! The Matrix rabbit hole! It does feel like we’re in a loop: brain activity triggering more brain activity, with no clear “outside” to check against. This is similar to the brain in a vat thought experiment.

But philosopher Hilary Putnam has an interesting argument -- if we really were just a brain in a vat, the very idea of “a vat” wouldn’t mean anything to us — because our words and thoughts only get meaning from the world we're actually in. So if we were in a vat, we couldn't think we were in a vat.

[Comment deleted, May 28]

Suzi Travis:

True. Perhaps if we think of the brain as a passive receiver, the brain in a vat idea is easier to swallow.

The brain in a vat idea assumes the brain is isolate-able — that you can pull it out of the evolutionary, bodily, worldly soup it emerged from and still expect it to “work” the same way.

But if we think brains are tools forged in and for interaction, their structures presuppose a history.

Then again, someone might argue that even an active brain -- one that sends responses out -- could work with the brain in a vat idea too.

But the world we move through is not a pre-rendered movie; at any moment we can inspect the texture on the coffee mug or bend down to tie a loose shoelace. A good brain in a vat setup would have to anticipate -- or simulate on demand -- the causal consequences of anything the brain might do.

I guess a brain in a vat could be set up to have expectations, surprise, and prediction. Would those patterns of surprise ever mismatch the input? Could the brain “catch” the Matrix through its own internal workings? Or are we assuming a perfect simulation — one that mimics not only input, but feedback, too?

Apart from the conceptual puzzles, the amount of energy required to run such a thing doesn't make much sense. Keeping billions of neurons alive and perfectly fooled, while also simulating an entire physics-rich environment, would dwarf the resources needed to keep the same brain ticking in an ordinary body.

The People Geek:

Really enjoying this series. Can't wait to see your thoughts and approach to areas such as direct perception.

Suzi Travis:

Thanks so much — I’m glad you’re enjoying it!

Direct perception is such an interesting topic. And yes, I’m looking forward to digging into that one too.

Mike Funnell:

I wonder about the difference between 'misrepresentation' and 'misrecognition' here.

My apologies if I've missed something you've already covered (I've been very busy, and have not caught up.)

But: I'm reminded of an Aboriginal bloke I got to know rather well up north of the Daintree. I'm pretty damned sure he had a good 'representation' of "a crocodile" in his head. He also had a good representation of "a log". His trouble was not 'representation' it was 'recognition' - and the difference could be life-changing or life-ending.

He showed me a lovely, gorgeous, inlet with cool-looking waters on a hot day. I asked him: "how would you know if it's safe to swim there?" - he answered "I'd only trust it safe as the old women told me it was safe".

He was in his 40s, and *very* familiar with the area. Even then, he wouldn't trust his own recognition - only the old women. He told me "I had mates who wouldn't wait for the old women. Croc got 'em. I'll wait."

That was my first thought when I read about "dogs" vs "watering cans"...

Suzi Travis:

Hi Mike!

I love everything about this comment. I hadn’t realised you were in Australia too!

My husband tells a story about visiting his cousins up in northern Queensland. When they stopped at an inlet, he asked the same question you did: “How would you know if it’s safe to swim here?” Their answer wasn’t nearly as wise as your friend’s. They said, “We count the tracks going in and out. If it’s even, it’s safe. If it’s odd, it’s not.”

On the difference between misrepresentation and misrecognition: misrepresentation is the term usually used by those working within a representationalist framework — the idea that brains (or other systems) contain internal states that stand for something, and sometimes get it wrong.

Misrecognition, by contrast, tends to show up in views that reject representationalism — like enactivist, phenomenological, or ecological models of cognition. In these accounts, perception is something we do, not something that happens inside. So when things go wrong, it’s not because the brain assigned the wrong label — it’s because our expectations didn’t match how the world responded to us.

Mike Smith:

I've historically been suspicious about the "filling in" phrase for the blind spot, but I hadn't heard that there was an active V1 area corresponding to it. Interesting.

Based on what I've read, I tend to think representational thinking is fine, as long as we don't get hung up on the idea that it's a contiguous area in the brain, but instead understand it as a network of activations, portions of which might light up for different representations.

And it pays to think about what representations are more fundamentally. Here I think the predictive coding theories have a lot going for them. In that case, a misrepresentation is a prediction error, which isn't a problem since all predictions are probabilistic. And representational drift (another phenomenon I've seen used to challenge representationalism) is the prediction getting fine-tuned and adjusted over time.
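
A minimal sketch of that framing, with an invented signal, learning rate, and noise level (none of it from any particular paper):

```python
import random

# The stored "representation" is just a running prediction of the input.
prediction = 0.0
learning_rate = 0.1

for _ in range(200):
    sensory_input = 1.0 + random.gauss(0, 0.05)  # a noisy but stable world
    error = sensory_input - prediction           # "misrepresentation" = prediction error
    prediction += learning_rate * error          # "drift" = gradual fine-tuning

print(round(prediction, 2))  # settles near 1.0 -- probabilistic, never exactly right
```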

But I definitely think we have to be careful about importing too much from digital computing. I get what the dynamical system folks are concerned about. Many of the concepts we bring over, like the idea of information being exchanged between regions, are probably only hazy approximations of the causal effects propagating through the messy biology.

I will note that the word "representation" has long bugged me. It seems to imply that something is being re-presented to some inner observer, rather than the underlying models used in the process of observation (or imagination as the case may be). But like so much terminology, it seems like we're stuck with it for historical reasons.

James Cross:

"network of activations"

Yes. There isn't a single representation of "dog" but a series of patterns relating to legs, fur, floppy ears, barks, dogs from memory, and the word/concept of "dog." When we make a mistake, the other "dog" patterns are sufficiently present to cause the word/concept of "dog" to activate.
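
Something like this toy sum, maybe -- every feature value, weight, and the threshold here are invented for illustration:

```python
# Evidence that each "dog" sub-pattern is active. A watering can in poor
# light might push "legs" and "fur" surprisingly high (invented values).
evidence = {"legs": 0.9, "fur": 0.7, "floppy ears": 0.2, "bark": 0.0}
weights = {"legs": 0.3, "fur": 0.3, "floppy ears": 0.2, "bark": 0.2}

activation = sum(weights[f] * evidence[f] for f in evidence)
print(activation > 0.5)  # 0.52 here, so the "dog" concept fires by mistake
```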

Suzi Travis:

Ah, I relate to so much of this. I, too, was (and still am) suspicious of “filling in” explanations — I think because I also share your wariness around how easily representational talk slips into computational metaphors. But I don't think we need to invoke a homunculus here. We don't need to claim “filling in” in the old-school way (like painting in the blanks); we can simply say the brain is just doing what it does best — generating patterns that are adaptive.

Malcolm Storey:

Interesting read, as ever.

I wondered about the rat experiment. What's it like to be a rat? (A bit like a dog or an ant, I suspect, with scent playing a major part.) So I wondered if they were actually following scent trails (or even smelt the food — but surely they thought of that! I guess you don't actually need the food in the test).

Looked up the paper to see if the mazes were single use or rotated, but it was behind a paywall.

The abstract (if I've read it correctly) says they found the bait by spatial map on day 8 but by day 16 they used “turn left to get food”. But I guess day 8 proves they do have a spatial map, even if they later choose not to use it.

Tina Lee Forsee:

I had the same thought about smell. I don't know about rats and what their sense of smell is like, but it occurred to me that if my dog could easily find one tiny thyme leaf placed inside a ziplock bag which I hid somewhere in the house — after taking that bag into various rooms to throw him off the scent trail — then it didn't seem like a stretch that a rat could smell the food in such a simple maze.

Suzi Travis:

Hi Tina!

Yes — I had the exact same thought! Rats do have a powerful sense of smell, so it’s a really fair question.

I just responded to Malcolm with a bit more detail about the study setup and how they ruled out smell as the main cue. The quick version: the rats switched strategies depending on which brain region was temporarily inactivated — so smell doesn't seem to fit this data.

Tina Lee Forsee:

I will check it out. Thanks!

John:

Yes. They use smell, and that's a potential confounder — I often wonder about some of the historical experiments' findings. It has been recognised for a long time now, and the steps taken to control for its effects — cleaning, masking odours, and other experimental designs such as water mazes — have been successful to a greater or lesser extent. Good point.

Malcolm Storey:

Either way it's pretty obvious a lot of animals including insects have spatial maps. You can use scent on the ground, but anything that flies and habitually returns to the same place must be using some sort of spatial map, even if it's only a few landmarks.

The old Sphex experiments required the wasp to refind its burrow, and it's hard to see how it might do that without a spatial map — and the fact that moving a pine cone confuses it shows it's relying on one.

The bee waggle dance is also a spatial map.

Suzi Travis:

Thanks Malcolm!

You should be able to download the paper here:

https://www.researchgate.net/publication/14529151_Inactivation_of_Hippocampus_or_Caudate_Nucleus_with_Lidocaine_Differentially_Affects_Expression_of_Place_and_Response_Learning

I wondered about smell, too! But I didn't explain the whole study in the essay. What the researchers also did was temporarily disable different brain regions. And they found that depending on what part was taken offline, the rats would switch strategies.

So, when the hippocampus was taken offline, they turned left (response learning). Intact overtrained rats, who had received hundreds of trials, also turned left — almost like when you learn something so well you no longer need to think about it. But when they took another part of the brain offline (the striatum), the rats went back to turning right (what looks like relying on a spatial map). So, it looks like when the conditioned response was not available, they returned to using spatial maps.

If smell alone were guiding them, we’d expect them to keep turning toward the food. But that’s not what happened. The strategy seemed to depend on which system was available.
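
Here's a toy restatement of that behavioural pattern (simplified labels I've made up; not the paper's data or analysis):

```python
def probe_trial_strategy(hippocampus_online, striatum_online, overtrained):
    """Which strategy a rat shows on the probe trial, per the pattern above."""
    if overtrained and striatum_online:
        return "response: turn left (habit system)"
    if hippocampus_online:
        return "place: turn right (spatial map)"
    if striatum_online:
        return "response: turn left (habit system)"
    return "impaired"

# An overtrained rat with its striatum offline falls back on the spatial map:
print(probe_trial_strategy(hippocampus_online=True, striatum_online=False,
                           overtrained=True))  # place: turn right (spatial map)
```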

James Cross:

Fascinating article!

To me, the periodic hexagonal lattice in 2D does suggest some kind of geometric arrangement or mapping between neural representations and the external world. The simple mapping breaks down in 3D, but that isn't to say there's no geometric mapping occurring. The mapping could still be periodic, just not periodic in 3D. It may require extra dimensions, like a quasicrystal.
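
For what it's worth, the standard textbook idealization of that 2D pattern is just three cosine gratings rotated 60 degrees apart -- a sketch, not the article's model or data:

```python
import numpy as np

def grid_cell_rate(x, y, spacing=1.0, orientation=0.0):
    """Idealized grid-cell firing rate: summing three cosine gratings
    60 degrees apart produces a hexagonal lattice of firing fields."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the field spacing
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    g = sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in angles)
    return (g + 1.5) / 4.5  # rescale from [-1.5, 3] to [0, 1]

print(grid_cell_rate(0.0, 0.0))  # 1.0 -- a firing field at the origin
print(grid_cell_rate(0.0, 1.0))  # ~1.0 -- the next field, one spacing away
```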

James Cross:

BTW, in the article, there is also a link to a related article by the same author:

https://www.quantamagazine.org/the-brain-maps-out-ideas-and-memories-like-spaces-20190114/

I've been thinking that the original basis of consciousness is the spatial-temporal mapping from the hippocampus and entorhinal cortex in vertebrates. So, it isn't surprising (to quote from the article):

"Recent insights have prompted some researchers to propose that this same coding scheme can help us navigate other kinds of information, including sights, sounds and abstract concepts. The most ambitious suggestions even venture that these grid codes could be the key to understanding how the brain processes all details of general knowledge, perception and memory."

So, the answer to how the brain represents something may be that it uses a spatial-temporal grid of some sort.

David Keith Johnson:

Ah, and the donut hole in the middle of the brain — the Self that receives the sum of whatever this process may be. Emergent from the ebb and flow of neural activity? Something lurking in the realm of microtubules? Now that’s a hard problem. Your excellent essay points the way into the mystery we must persist to wrestle with.

Suzi Travis:

Thanks David! Ah yes — that donut hole!

Ragged Clown:

On another substack, we were talking about aphantasia. One person said he couldn't imagine how people recognise things when they can't conjure up an image of a memory. I can't conjure up an image of a memory either, but I never thought of it as aphantasia.

For me, I can remember everything about a scene, but I can't actually see it when I remember it. Take a beach scene, for example. I can remember that my girlfriend has a red bikini and she is standing over there, and the sea is blue and there is a little boy playing with a stripey beach ball. I imagine it as a series of "labels" (not really labels, but I hope you'll get what I mean). There's a label that says the sea is blue, and it's over there. There's a label that says my girlfriend's bikini is red etc.

The model you describe, where an image hits your eyes, sends a signal down your optic nerve, and your visual cortex turns it into a representation that gets stored somewhere in your memory… and it's not an image that actually gets stored… that makes me wonder.

I don't quite know what a representation would look like, but I assume that's what I recall when I recall an image, and I am a little suspicious of both those people who say they can see an image and those who say they can't see an image. Maybe they both see the same "representation" that I see.

Do you know anything about aphantasia? Any thoughts on how it might fit into your model of representations?

Suzi Travis:

I do! Actually some good friends and past colleagues are interested in aphantasia.

They distinguish between “shallow” and “deep” aphantasia. Derek (one of the authors of the paper below) fits the shallow type, which means he can’t see mental images but has normal perception and memory of visual features (much like you describe). Loren (the other author) also lacks the ability to imagine, but she has atypical perceptual experiences (e.g., she can’t see certain visual illusions). This suggests there might be a range of aphantasia, with differing cognitive and perceptual profiles.

Many aphantasics still remember visual details — just not as pictures, which sounds similar to your idea of having “labels” for things.

What they think differentiates those with imagery from those without may involve feedback loops in the brain — especially from frontal areas to sensory regions. That could mean some people (like you) store and access information, but don’t get the neural reactivation that causes it to feel like a “picture.”

Here's a link to their latest article:

https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1374349/full

Ragged Clown:

Excellent, thank you!

The article mentions that Derek (shallow) can reimagine audio (songs & symphonies). Me too! Loren (deep) can't even have inner monologues (I can!). Loren can paint, but not from memory (me too!)

I was actually going to ask about this before. I wonder if it affects all the senses. I think I mentioned before that I have a brain tumour that was diagnosed from phantom smells (phantosmia). Thinking about it now, perhaps that's a neural reactivation that regenerates the "picture" of a smell even when I'm not trying — like a hallucination. I wonder, too, if people can voluntarily recreate a smell like they can a picture or a song. (I can't.)

The paper was fascinating. I learned a lot. Now I am interested to know what a visual representation might be like and how it might be used to regenerate an image. That's my project for this afternoon!

Wild Pacific:

Definitely representation.

Recognition is always fuzzy — that’s why the brain relies on heuristics.

Your series is very educational. My first steps into this subject came earlier, with other authors, who naturally took different routes.

Why recognition is like a “weak king” who needs a “queen” to win the game: it operates with lefty-brainy certainties on a short scale.

Representation, though, is a vast collection of frames of reference — maybe trillions of them. We don’t just see a tail or a nose in a watering can. We can “recognize” the dog in a croissant. In the scent of a garden. In the soft depression on a pillow where he used to sleep. In the pawprints. In the poop a neighbor didn’t pick up. In the hole he dug because a rabbit must have hidden there. In a half-chewed branch. In any branch.

In the soft absence of warmth when you drop in for Netflix and he’s not there. In the passenger side window, when it’s open, even if empty. In the lint roller in a discount store.

Even years after he’s gone — in the reeds that now grow closer into the walking trail, because no one breaks them anymore. In a brown coat you decide not to buy because the color is too close. In your not going to the restaurant near that vet. In the new couch you finally got — pristine, unscratched. In the car dealership you pass on the way to work, considering a car without too much space in the back. In the quicker gait you’ve developed, no longer adjusting for tired small legs.

Representation is strong, but imprecise. It’s embedded all throughout your “self.” It is the self.

It’s representation all the way down.

Suzi Travis:

I don’t want to ruin your comment with a reply. So, I'm just going to say... thanks for putting it so beautifully.

Michael Pingleton:

The blindspot reminds me of a science experiment we did in high school. We drew a dot and a plus sign on a piece of paper, then held the paper in front of our faces while focusing on the plus sign; the dot would disappear as it hit the blind spot. I found it really interesting how our brains seemed to just “fill in the blanks,” so to speak.

I do agree that we need to be more tactful when borrowing metaphors from the computing world. This makes me think of your previous article on complexity; such metaphors usually only make sense at one layer of complexity, but quickly fall apart at any deeper layer. I’ve actually heard people ask questions like “how many gigabytes of memory can our brains store?” Of course, the brain doesn’t work with bits and bytes like a computer does, so such a question already doesn’t make sense. I do really like what you said at the end of that section: representations can be a useful tool, but they’re not the whole story.

Here’s another thought about language, this time relating to misrepresentation. Cognate words have always been a tricky thing, mainly because a word can sound and look similar between two languages but have different meanings. If you speak two languages, you might end up misunderstanding something someone said on account of having a different understanding of a seemingly similar word. The same thing can happen in reverse, too. For example, “dog” in English, “собака” in Russian, and “perro” in Spanish are three different representations for the same concept. This gets tricky sometimes, and accidental code-switching may take place. I might say something like “¿Usted tiene un собака?” Oops!

Great work as always, Suzi!

Suzi Travis:

Ah yes — I used to make my students do that blind spot experiment too!

I also really love your point about language and misrepresentation. It does feel like we can be wrong in different ways. We can be wrong iconically, indexically, or symbolically, and each kind of misrepresentation tells us something different about how representations are built. Errors might be clues that reveal the architecture of representation.

James Cross:

The problem with representations is much like the problem of memories in the brain. There doesn't seem to be one spot where they are stored. They are all over the cortex. Representations might be memories deriving their meaning from learning and experience.

So, there isn't a single P for DOG, but a whole bunch of Ps, some of which represent legs and fur. If the watering can, through poor lighting or odd positioning, looks to have legs and fur, then the P for the concept "dog" could activate. At that point, the brain might even fill in a bark from background noise to confirm DOG.
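
That "filling in" step can be sketched with a toy Hopfield-style network -- the stored feature patterns here are invented:

```python
import numpy as np

# Features: [legs, fur, bark, spout], coded +1 (present) / -1 (absent).
patterns = np.array([[ 1,  1,  1, -1],    # DOG
                     [-1, -1, -1,  1]])   # WATERING CAN
W = patterns.T @ patterns                 # Hebbian weights
np.fill_diagonal(W, 0)

cue = np.array([1, 1, 0, 0])              # legs and fur seen; bark, spout unknown
for _ in range(5):                        # settle to the nearest stored pattern
    cue = np.sign(W @ cue)
print(cue)  # [ 1  1  1 -1]: the network "fills in" the bark
```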

Suzi Travis:

Fantastic point! What we call a “representation” is almost certainly not a single thing. DOG isn’t a lookup entry — if it is anything, it's a pattern.

John:

Thank you Suzi. Fabulous exposition whilst identifying a framework for taking the questions further. I love the history lessons and the systematic approach almost as much as following up on some of the referenced papers. Thanks again for doing this and sharing it. I wouldn’t get the cross disciplinary exposure any other way myself these days.

Sunny:

Agreed. Suzi’s cross-disciplinary approach that weaves neuroscience, philosophy, coding/AI, and even history, continues to shake my assumptions. Has anyone come across anything like Suzi’s Substack?

Suzi Travis:

Thanks so much, Sunny!

Suzi Travis:

Thank you, John!

Eric Borg:

It seems clear to me that the brain not only accepts input information and algorithmically processes it for output function, and so “computes”, but that the consciousness it creates can only exist in terms of representations rather than anything more. So what’s the problem? The problem, I think, is that people commonly consider the brain to also exist as consciousness. Thus here the brain is supposedly creating its own representations. Instead, consider the thought that consciousness isn’t “brain” but rather something that the brain creates and uses. And why does the brain go to all this trouble? Because consciousness inherently functions in a value-driven way that provides a sense of purpose or meaning. That’s why consciousness evolved. Here the brain could let the value-driven computer figure things out so that its algorithms could continue on from such presumptions.

1. Reverse inference: No, there aren’t any “face” areas of the brain, but rather functions that become known as faces over time by means of memory.

2. The brain isn’t a digital computer: Of course not. But I don’t see this mistake being made. It’s a massively parallel computer that accepts input information and processes it for output function.

3. The grounding problem: There is no grounding problem if, as I believe, consciousness is an inherently purposeful or meaningful form of computer. Theoretically it’s driven by the desire to feel as good as possible from moment to moment.

4. Misrepresentations: It’s not the brain that is wrong when you misinterpret an image, since the brain inherently doesn’t understand anything anyway. It merely provides information for the serial thinker to potentially figure out.

Saj:
May 28 (edited)

Hello Suzi, is there a difference between 'representation' and 'association' in this context? If the experience of 'seeing a dog' is associated with a particular neural activation pattern, is that what representation effectively means? This obviously isn't explaining what the neurons are doing or how the experience of dog is generated; it's just stating that there seems to be a reliable association between these two things.

Also, regarding the misrepresentation problem, why can we not explain this using an error rate? If we think of the neural association (or representation) as a 'test' — i.e., how often it's correct — then we could use the positive predictive value to say something like "when you see a dog, you're likely to be correct 99.9% of the time". We only learn it was incorrect if the dog representation is later replaced by a different representation, such as a watering can. We won't know we had a misrepresentation until the dog representation is replaced with something that is not a dog.
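
A quick worked example of that framing, with invented counts:

```python
# Treat "I'm seeing a dog" as a diagnostic test.
true_positives = 999   # "dog!" fired and a dog was really there
false_positives = 1    # "dog!" fired at a watering can
ppv = true_positives / (true_positives + false_positives)
print(f"P(actually a dog | represented as dog) = {ppv:.3f}")  # 0.999
```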

Jim Owens:

If I'm getting this, the problem with misrepresentation is that you need two representations: your current representation of a "dog," and a stored representation of a dog. At first, you compare your current representation with your stored representation, and they match, so you think the watering can is a dog. But then you look again, and you see that they don't match. Your current representation actually matches "watering can" more closely.

We can re-state this in terms of predictions. Based on your current representation, your brain predicts a dog out there in the garden. But the prediction turns out to be wrong. It doesn't match the stored representation after all. So your brain re-calibrates and predicts a watering can, and that matches, so all is good.

The problem with this, and I think Suzi was explaining it with propositional logic, is that you're stuck in your brain, and you don't know what a "dog" is. So when you see a watering can that matches your stored representation of "dog," and then find out that it's a little different, you have the option to update your stored representation to include watering-can-like things as dogs. Why not? It's an improved representation, based on additional data. Nobody gave you a Platonic form of "dog" as a standard to start with. You're building a representation, supposedly trying to match it to some "real" thing, but you have no alternative connection to "real" things, so you have nothing independent to match it with. You're making up the representation as you go.

This is the grounding problem in a nutshell.
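
A toy sketch of that predicament (the feature vectors and learning rate are invented): a nearest-prototype recognizer that updates its stored representations from whatever it labels, with nothing outside the system to correct it:

```python
import numpy as np

# Stored representations as feature prototypes: [furriness, leggedness].
prototypes = {"dog": np.array([0.9, 0.8]),
              "watering can": np.array([0.1, 0.2])}

def recognize_and_update(percept, lr=0.1):
    # Match the percept to the closest stored representation...
    label = min(prototypes, key=lambda k: np.linalg.norm(percept - prototypes[k]))
    # ...then pull that representation toward the percept. No external signal
    # ever says whether the label was "really" right.
    prototypes[label] += lr * (percept - prototypes[label])
    return label

# A dog-shaped watering can gets labeled "dog", and the stored "dog"
# representation quietly drifts toward watering cans.
print(recognize_and_update(np.array([0.6, 0.7])))  # dog
```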

Dave Slate:

Jim Owens wrote:

"So when you see a watering can that matches your stored representation of "dog," and then find out that it's a little different, you have the option to update your stored representation to include watering-can-like things as dogs. Why not?"

That's a good question, and I think the answer is that upon closer inspection, the watering can momentarily misidentified as a dog lacks other dog-like characteristics such as a head, body, limbs, fur, signs of life, etc., which the brain has already incorporated into its representations of dogs. The brain, however, may add the watering can to its stored representation of the concept of objects that are sometimes mistaken for living creatures.
