35 Comments
Mike Smith:

I had heard about the flipping experiments, but not that, while adapting, people were selective in what they perceived to be upright. That's interesting, and to me reveals how distributed and not necessarily unified our perceptions are.

I do think all of this fits with the predictive coding theories. Of course, they predate those theories, and probably were part of the impetus for them, so that makes sense.

No worries on becoming less predictable, Suzie. Many of us have blogs ourselves and know how time consuming they can be. I'm subscribed by both email and RSS. I'll be happy to read your posts whenever they come!


Suzi Travis:

Thanks, Mike!

Yes, I found that part about selective flipping fascinating too. I think it’s interesting that the flip was described as happening at the object level. This, to me, fits nicely with the idea that we don’t actually take in the entire visual field at once, despite feeling like we do.

It seems like a real challenge to the idea of a unified, static “representation.” And like you said, it aligns beautifully with predictive coding -- even if those early researchers didn’t frame it in those terms.

Thanks again for the lovely encouragement about the posting rhythm. Your blog is a treasure trove, and I’m really looking forward to having more time to explore it.

John:

Sounds like a good compromise if we get to stay with you on this journey of discovery! All the best, John.

Suzi Travis:

Thanks so much, John!

James Cross:

"something else too — memory"

How about this?

Perception is where past experience and future prediction meet present reality in the brain.

"In other words, the brain preserves the layout of the retina."

Wouldn't that mean the layout would be inverted with goggles compared to the original image?

Normal vision:

Upright image -> Inverted in retina -> long chain of wiring -> Upright mapping in cortex

Inverted vision:

Upright image -> Inverted by glasses -> Upright in retina -> long chain of wiring -> Inverted mapping in cortex
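As a purely illustrative toy model (my sketch, not anything from the post), each stage in the two chains above can be treated as either a vertical flip or a pass-through, and then composed:

```python
# Toy model of the two chains above: each stage either flips the image
# vertically or passes it through unchanged.
def flip(img):
    return img[::-1]          # vertical inversion (eye optics, or goggles)

def identity(img):
    return img                # retinotopic wiring: layout preserved as-is

def through_stages(img, stages):
    for stage in stages:
        img = stage(img)
    return img

scene = ["sky", "ground"]     # top-to-bottom layout of the world

# Normal vision: eye optics invert; retina-to-cortex wiring preserves layout.
normal = through_stages(scene, [flip, identity])         # -> ["ground", "sky"]

# Inverting goggles add one extra flip before the eye, so the final
# mapping ends up opposite to the normal case.
goggles = through_stages(scene, [flip, flip, identity])  # -> ["sky", "ground"]
```

In this toy, nothing in the "wiring" stage changes when the goggles go on; only the input to the chain does, which is the point of the question.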

Are we just saying that you can't put on the glasses and expect the entire chain of wiring from retina to cortex to be changed? I don't know why anyone would expect that.

Suzi Travis:

Nice! I like this.

When I said “the brain preserves the layout of the retina,” I just mean that V1’s map is retinotopic: every point on the retina has a matching point on the cortical sheet. That doesn't change. I don't think we should think about V1 as representing the world as upright or inverted; it simply represents the retina -- whatever that happens to be.

Where the trouble starts is 'later on', in areas that combine V1’s retinotopic data with signals about eye position, head orientation, vestibular cues, proprioception, etc. Those 'higher-level' transformations expect the relationship between retinal images and other inputs to be a certain way relative to gravity and body axes. The goggles break that expectation, so the recalibration has to happen in those multisensory and sensorimotor circuits -- think parietal cortex, cerebellum, frontoparietal loops, and frontal eye fields.

So yes -- your chain-of-wiring diagram highlights how the adaptation happens. What I think we need to avoid is thinking that some neural representation (like V1) must remap itself like rotating an image on a screen. That doesn't need to happen; we just need to update the higher-level relationships.

James Cross:

"I don't think we should think about V1 as representing the world as upright or inverted"

"those 'higher‐level' transformations expect the relationship between retinal images and other inputs to be a certain way relative to gravity and body axes"

I was thinking the same as your first statement I quoted. But then the "wiring" from the retina to the cortex does, in fact, flip the mapping so it is upright. If it didn't need to, why did it do it? Maybe some sort of efficiency is gained by flipping early in visual processing, and that won out in evolution. It might be that the vestibular sense has to work with gravity as it is, and everything works better if vision aligns with it.

This could have a lot to do with OBEs.

https://www.sciencedirect.com/science/article/pii/S258900422302624X

Malcolm Storey:

I remember reading about the original experiments when the ink was hardly dry.

If you've ever tried to manipulate anything under a compound microscope (no mobile stage, no micromanipulators) you'll have been in this world. I always found the easiest approach was to activate my inner grouch and try to make the opposite move to what was needed, but it rapidly started to feel natural.

Car steering wheels are always aligned so movement of the top of the wheel matches the desired change of direction (a design decision at some point), yet most drivers hold the sides or bottom. I wonder how long it would take to learn to drive a simulator with an inverted steering wheel?

[In a very trivial sense there's something ironic about an Australian telling a European about making the world look upside down !!! :-) ]

Suzi Travis:

Hahaha! Well, of course, it all depends on perspective, right?

https://www.mapshop.com/wp-content/uploads/2022/03/UpsideDownMap-2048x1265.jpeg

The steering wheel idea is great -- especially since most of us don’t hold it in the intuitively 'mapped' way, but it still works.

I looked it up -- someone did try the inverted-wheel simulator experiment.

https://www.sciencedirect.com/science/article/abs/pii/S1369847817303364

Like prism adaptation, drivers adapted; complete recalibration typically took minutes, not days. And like the goggle wearers, when normal steering was restored, people initially over-compensated in the opposite direction.

Malcolm Storey:

And as the paper says, there's steering-wheel gain -- i.e., how much turn you get for a given motion.

Same problem with computer mouse-speed. Takes a while to recalibrate yourself on somebody else's PC.

PS: anyway, the world is of course flat with cyclic boundary conditions. It all depends on what coordinate system you use. Good luck planning your next driving trip in 3D Cartesian coordinates!

Eric Borg:

Though I can generally be counted upon to say something controversial, not this time. Yes, the brain should try to reconcile what’s seen with other spatial senses. Also, it’s interesting that in modern inversion experiments, participants say their vision remains flipped even though they’ve adjusted. 1950s-era experiments shouldn’t always be treated as reliable.

Don’t worry about less frequent essays on my account Suzi! I’m amazed how long you were able to go weekly. This stuff needs to just be for fun.

Suzi Travis:

Thanks, Eric! Controversial or not, I always enjoy your take!

I agree: it makes sense that the brain tries to reconcile vision with everything else it knows about space. That’s why the modern findings are so intriguing. On one hand, participants' behaviour shows clear adaptation; on the other, they still report an upside-down view.

The earlier work is often criticised for its lack of rigour. And it is hardly the last word. Later studies certainly had tighter controls. I couldn't find any later studies that went much past 10 days though. I do wonder about that -- whether longer time wearing the goggles would produce different results.

Something else I was wondering about was how much our modern experience with screens plays a role. Could spending more time interacting with input from screens change how we perceive the world? There’s growing evidence that spending lots of time with 2-D or VR screens can nudge the brain’s calibration rules (though not in the dramatic “upside-down world” sense of prism goggles). Long-term effects are obviously a difficult thing to test, but it does make me wonder.

Thanks, too, for the kind words about cadence. Weekly is a lot of fun. But it is intense. Dialling back a bit should keep it fun without the weekly scramble.

Eric Borg:

I suppose if anyone wanted to see if our widespread use of electronic media screens has had certain specific effects on us, then they could find people with little such exposure and attempt some reasonably controlled testing. And what about just flipping our screens? I suspect people would get less and less irritated by flipped television screens as they adapted. Flipped computer screens should certainly be an adjustment for reading. I just tried reading your response with my iPhone screen upside down. This should ultimately become second nature given that I could at least do it. That’s also to be expected since all symbols are inherently conventional — there is no true “wrong”.

Note however that if my keyboard were flipped, it should take several years for my typing to reach similar proficiency. What’s the difference? While vision is something that I mainly process consciously, typing is mainly handled by my brain (an instrument that does not itself function consciously). Conscious repetition, as in the case of typing or even enunciating words, hands many tasks from the value based computer (consciousness), to the algorithm based computer (the brain).

Tyrone Lai:

Putting on a pair of inverting goggles is a simple thing to do. But it seems it has turned out to be a major disruption to your bodily system. To make sense of the outer world, the brain relies on ciphers. There could be ciphers within ciphers. Ciphers could branch out in different directions. How to adjust all these many interlocking ciphers could be a major task for the brain. How are we to find out how the brain does it? Usually, we solve simple ciphers first, then graduate upwards. Usually we solve ciphers one by one, not a whole bundle of them at the same time.

It seems I have to wait to see what happens next.

Suzi Travis:

Yes, exactly!

Neuroscience is slowly moving from the 'one cipher at a time' approach toward a 'many mismatches at once' approach. Multi-sensor VR cages are playing a huge role here. But the ability to capture full-body motion and sync that with high-density neural data might be an interesting area to watch. I can imagine that it won't be long (if it is not being done already) before we have models that can predict when a new mismatch will feel weird, how long the weirdness will last, and which circuits will update.

Mike Funnell:

Commenting on a footnote: #2 "This is true, but it is context-dependent (e.g., vestibular cues can override vision in the dark)."

This is a major problem in aviation - if you trust your vestibular system in the absence of visual cues. Flying in cloud, or dark (or under a training hood) when you can't see the horizon or otherwise visually know your orientation means a loss of control and most likely "Very Bad Outcomes" as they say in the classics. Instrument flying, especially use of the attitude indicator (or "artificial horizon"), is necessary and takes a lot of training. I can assure you that trusting your instruments doesn't come naturally at first!

It's interesting that it's thought some bird species can maintain controlled flight in cloud, at night etc. though how many, how well, and just plain "how do they do that?" is neither well known nor well-studied.

Suzi Travis:

"I can assure you that trusting your instruments doesn't come naturally at first!"

I bet! I've heard of 'the graveyard spiral'. I assume that is caused by becoming disoriented in the air, and that it is worse in bad weather conditions? I wonder if it is worse in space, too. It seems, at least to me, that the mismatch would be even sharper in space, because micro-gravity removes the one cue (gravity) that normally lets the vestibular system veto visual errors.

Birds are a fascinating contrast. Someone needs to fund more bird-in-a-cloud simulator studies!

Mike Funnell:

Yes - “the graveyard spiral” is one name. It essentially (with exceptions) only happens in bad weather / at night when instruments aren’t used (properly).

The trouble is that the vestibular system is *wrong* in conditions of flight. In a turn, the vestibular system perceives ‘down’ as “towards the bottom of the aircraft”, but that is at an angle to real ‘down’ (the centre of the earth). If you can’t see the horizon - or fly to an artificial one - then you don’t know if you’re turning, which way you’re turning - so don’t know what corrective action to take (and can easily make things worse if you try).

Without control input, when you randomly start to turn (and you will), but don’t know it, the nose drops, causing the aircraft to speed up, which tightens the turn; causing the nose to drop further, which tightens the turn... to a very bad “etcetera”. This can be corrected - but you have to (a) know it’s happening and (b) apply the correct control input. Without an ‘horizon’ - real or artificial - you really can’t.

Your vestibular system is not just useless but actively misleading.

Jamie House:

Great read! I'm a school teacher wondering if you have written about or have insights on how tools or technology extend our senses and therefore change human perception of the world. Thanks again for your writing.

Suzi Travis:

Thanks so much, Jamie!

I haven't written specifically on this topic. But the idea that human perception is shaped by technology is key to much that I write about.

We might even argue that human perception has always been shaped by the tools we use. From the invention of eyeglasses and hearing aids to today’s digital gadgets, technology has been extending our sensory capabilities and mediating how we experience reality for a very long time.

There are the obvious ones, like the relationship between attention and smartphone use. While smartphones extend our senses with information, they can also distract or distort our attention, and therefore our awareness. We know that smartphone use can narrow one’s attention to the screen, reducing awareness of one’s surroundings. So texting while walking (or, worse, while driving) has huge effects on what we perceive.

This gets tricky when we move to augmented reality -- where digital information is layered onto our real-world perception. We see this in the new head-up displays being implemented in cars. There is concern that drivers who pay attention to the head-up information are not paying attention to their surroundings. We call this inattentional blindness. As the visual complexity of AR graphics rose, drivers’ ability to notice real-world stimuli dropped, and inattentional blindness was most severe right in the centre of the visual field. This problem has been documented in aviation for many years. Police crash databases do not yet code for whether the driver was using a head-up display, so we have a missing-data problem on this one.

Then there is a whole bunch of work done on sensory substitution which is super interesting. I wrote a bit about that here: https://suzitravis.substack.com/p/brain-computer-interface-a-primer

If you are interested in this topic, I recommend looking up Neil Harbisson. He wears a sensor that lets him “hear” colours beyond normal human vision (including infrared and ultraviolet hues) by translating light waves into audible vibrations.

Artificial intelligence is increasingly being intertwined with our sensory tools, acting as an intelligent filter or interpreter for the data our devices collect. AI-enhanced systems can process and augment sensory information in real time, in ways our brains alone might not achieve. This, I think, is fascinating.

I covered a little more about what effects the internet and AI are having on our brains in this note:

https://substack.com/@suzitravis/note/c-131563813

Jamie House:

Awesome! Thanks for the reply!

I'm really excited about the work you do and how it relates to my work.

It has me thinking about Jean Baudrillard, who mused about how in society we often use symbols to represent real things, but there is a progression where eventually symbols can represent other symbols -- copies of copies -- to the point where they don't represent anything real but are treated as real themselves. The implications for perception and knowledge are interesting. When shifting from augmented to something more artificial, our perceptive experience can be completely saturated by artificial stimuli. Love your work! Thank you again for the reply.

Saj (Jul 2, edited):

Hello Suzie, I'm reminded of the McGurk effect which occurs when visual information from lip movements conflicts with the auditory information, leading to a different 'integrated' perception that isn't strictly one or the other. It's an example of how our perceptions integrate information from different modalities, and apparently the effect is less pronounced in people with schizophrenia which indicates some sort of impairment in sensory integration.

On the point about retinotopic mapping, does this not suggest the 'flipping' must be occurring downstream of the primary visual cortex? If I look at a picture of a beach and then flip it upside down, different light signals (i.e. where there was sky there is now ground) are hitting the same bits of my retina and therefore activating the same cells in V1.

Optical illusions aside, we only experience one representation of an object at a time. This must mean that the representation is a single unified fusion of all available information (memory, prediction and sensory), which raises the question of how much weight is given to each contributing element.

One interpretation of these flipping studies (no swearing intended) is that it takes almost a week for all the other 'unflipped' inputs to overpower / outweigh the flipped visual information. As you point out, it is only in the transition phase when there is a discrepancy between inputs that we experience something as flipped; when all inputs are aligned (one way up or the other) then there is no flipping experience (again, no swearing intended).
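A minimal sketch of that weighting question, assuming the textbook inverse-variance rule for fusing independent cues (the specific numbers here are made up purely for illustration):

```python
# Reliability-weighted cue fusion: each cue's estimate of "which way is up"
# (in degrees) is weighted by its inverse variance -- the standard
# maximum-likelihood rule for combining independent Gaussian cues.
def fuse(estimates, variances):
    weights = [1.0 / v for v in variances]
    weighted = sum(w * e for w, e in zip(weights, estimates))
    return weighted / sum(weights)

# Hypothetical cues: flipped vision says "up" is at 180 degrees; vestibular
# and proprioceptive cues both say 0 degrees.

# Early in adaptation, vision is still trusted (low variance), so the fused
# estimate sits much closer to the visual cue.
early = fuse([180.0, 0.0, 0.0], [1.0, 10.0, 10.0])   # -> 150.0

# Later, the mismatched visual cue is down-weighted (high variance), and the
# fused estimate migrates toward the other senses.
late = fuse([180.0, 0.0, 0.0], [50.0, 1.0, 1.0])     # roughly 1.8
```

On this picture, the week-long transition would be the period in which the visual weight is being re-learned; once the weights settle, there is no residual "flipped" signal left to experience.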

Suzi Travis:

Exactly. The McGurk illusion is a textbook case of multisensory integration. Vision gets “ga”, audition gets “ba”, and the brain goes with “da”. The reduced McGurk effect in schizophrenia is interesting. From memory, I believe the same principle shows up in autism and aging studies.

Good point. There is no reason to think that V1 would not stay loyal to whatever the retina gives it. I also really like the way you phrased the final point: that it’s only during the transition phase -- when the weights are in flux -- that we experience the world as flipped. Or flipping!

btw in Australia, “flipping” is considered polite conversation.

Ragged Clown:

I wonder how these experiments would be interpreted by people who are not materialists. At what point do the qualia turn upside down? Is that something the brain does or the mind does?

Suzi Travis:

Good question! I could imagine the traditional dualist explaining it as the brain registering the new data, while the immaterial mind does the final act of “presenting” the right-side-up quale. The trouble, of course, is that if adaptation is graded, it might be difficult to explain on a traditional dualist view.

James of Seattle:

An excellent post. I was not aware of all those details about the inversion experiments, especially with regard to the idea of some individual objects being uninverted. This brought to my mind the Thatcher effect, where an image of a face can be inverted, and subparts (mouth, eyes) can be un-inverted, and yet the image is perceived as a normal upside down face. Seems like a starting point for experiments. I wonder if there could be an animal model to study.

I also wanted to push back a little on there being no internal Cartesian theatre. I think the thalamus mainly acts as a series of screens for audiences in the cortex. Others have described nested screens as well, although they haven’t implicated the thalamus, yet. You can google “cartesian multiplex” or look at this from Friston, Ramstead, Safron, et al.: https://osf.io/preprints/psyarxiv/6afs3_v1

James Cross:

There is some pretty interesting stuff in the paper you link to.

Not too unlike the idea of phenomenal space having its own dimension(s), which could be described as screens.

Suzi Travis:

Hey James!

I agree, Friston's ideas are very interesting.

When people speak of “screens” it can sound as if there’s a literal cinema with a hidden observer inside. What I like about Friston’s Markov-blanket idea is that it points instead to a statistical boundary -- a set of sensory and active states that filters what the organism can sample or influence. In Friston’s framework the “screens” (a metaphor for Markov blankets) are defined statistically, not materially. Reifying the blanket (or imagining a hidden spectator) would miss the whole point of the free-energy account.

James Cross:

This paper seems to go a bit beyond the standard FEP explanations. It seems to be arguing the Markov blanket(s) is a sort of holographic screen(s). Once we are into holography, we are certainly into dimensional space. It could be debated whether the "space" is really space or really something else space-like but even more ephemeral.

My argument is the observer does exist but it simply exists as a part of phenomenal space.

The paper also says this:

"There is a real sense in which this hypothesis amounts to the positing of an inner homunculus (Lycan, 1996)— witness to projections on the internal screen (a “Cartesian theatre”; see (Safron, 2021a))."

Did this paper make it out of pre-print?

Suzi Travis:

Yes, the Thatcher illusion is a great example. I believe there has been some work done in rhesus macaques. Single-unit recordings in their face areas pick up the local/global conflict.

https://www.jneurosci.org/content/35/27/9872

On the thalamus as a “Cartesian theatre”.

The thalamus is quickly becoming the 'it' structure of the brain, isn't it!? Recently, a lot of papers have been published about its role in conscious awareness. It is a fascinating area to watch.

I'm a big fan of Friston's work. The idea of layers of inner screens that shift prediction errors up the cortical hierarchy is an attractive metaphor. And I think this sort of talk is totally fine, as long as we keep that homunculus on a leash. I know Friston would agree.

We need to always be asking "Who is supposed to be doing the viewing of this screen?" If we can’t answer that question without positing another homunculus (or ghost in the machine), then we know we haven't explained everything we need to explain.

James of Seattle:

I think we can (or I can) answer who is supposed to be doing the viewing of the screen. The answer is Ruth Millikan’s unitrackers, which are essentially pattern recognition units. (I can go into more detail on the neural anatomy on request.) And note I used the word “audience”, as there will be a number of unitrackers watching a given screen looking for the specific patterns they are tracking, and there will be a top level of unitrackers (in the prefrontal cortex) which do not provide input to a higher screen. Ya gotta run out of thalamus at some point. All of this anatomy maps to the hierarchy described by Friston’s group.

Michael Pingleton:

Such fascinating ideas and experimentation about the way our brains can map and remap relationships between different points of data. I really am in awe that our brains can adapt to, for instance, putting on the inversion goggles or losing our vision in an accident or something.

Also, I'm glad to have found your publication as well; you've challenged my perspective and thinking as I've read your articles and replies. I don't blame you for wanting to take a shift in your writing schedule though. I've been facing similar challenges myself as well. I do appreciate the work you do whenever you're able to. Cheers!

Suzi Travis:

Thanks so much, Michael!

As Substack grows, there are more and more great writers here. I am looking forward to reading more of that great writing — including yours!

Michael Pingleton:

Thank you! I actually have a few large projects that I'm working on that I'm quite excited to talk about soon!

Jim Owens:

It's strange that some modern studies fail to replicate the older ones: that the participants don't report full reversal in the same time frame, or that reports of partial reversal are considered "anecdotal" (and by the way, what isn't anecdotal about reports of experience?), or that modern subjects no longer report reversion to a "right-side-up" world at the end of the experiment. Were there methodological differences that might account for these variations?
