We know about the world because we have senses.
Sight. Sound. Touch. Taste. Smell.
Balance. Movement. Body state.1
Our senses are the messengers.
This seems obvious.
Most of the time, our senses work in harmony. Your eyes see a ball bounce, and your ears hear the thud at (almost) the same moment.
But not always.
Sometimes, our senses fall out of sync. The messages don’t line up. And when that happens, perception doesn’t go quite the way we expect.
So what happens when what we see doesn’t match what we hear? Or when our brain starts ignoring input from one sense in favour of another? Or when an entire sensory channel goes offline?
This essay starts a new series about how our brains use different senses to verify reality (or good-enough reality) — and what happens when those inputs don’t go as planned. As usual, throughout the series, we’ll also consider how biological brains compare to the way machines work — and whether those differences are the kind of differences that matter.
This week, we’re starting with vision. For most of us, vision is the sense we tend to trust the most.2 Intuitively, vision feels direct. Like we have a live feed from the outside world.
So what happens when we mess with our most trusted sense?
To find out, let’s answer three questions:
What happens when we change the input our eyes get?
What happens if that change lasts for days?
What can these strange experiments tell us about how perception really works?
Q1: What happens when we change the input our eyes get?
You might remember this from high school biology: the image projected onto your retina is upside down.
And yet, somehow, you don’t see the world that way.
Maybe your teacher explained it by saying, “The brain flips the image.”
It’s a nice story — but that story isn’t quite right.
We’ll come back to this point in a bit. But first, let’s look at what happens when we deliberately invert the input that hits our retinas.
Imagine waking up one morning and putting on a pair of inverting goggles. These strange-looking things flip the entire visual field upside down. Imagine how disorientating this would be. Imagine trying to navigate the world if the sky were down and the ground were up!
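Mechanically, the goggles do nothing exotic to the input: they apply a vertical flip. Here is a minimal sketch (mine, not from any of the studies) that treats the visual field as a small image array:

```python
import numpy as np

# Toy "scene": a 4x3 grayscale image, rows running from the top of the
# visual field to the bottom. The numbers are placeholders, not real data.
scene = np.arange(12).reshape(4, 3)

def inverting_goggles(image):
    """Flip the visual field top-to-bottom, as the goggles do."""
    return np.flipud(image)

flipped = inverting_goggles(scene)
assert (flipped[-1] == scene[0]).all()  # the old top row is now the bottom row
```

The transformation itself is trivial. Everything interesting happens downstream, in how the brain copes with it.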

Strange as it sounds, that’s exactly what volunteers did in a series of classic experiments.3 These weren’t quick lab sessions, either. The goggles stayed on for days — sometimes weeks. In a few rare cases, even months.4
Imagine that!
Imagine reaching for your coffee pot. But your coffee pot is upside down and somewhere near the ceiling, which looks like the floor. If you reach up, you see your hand move down — away from the pot. If you reach down — away from where the pot appears — your hand moves toward it. Your eyes and your body no longer agree.
Things get especially weird when you try to pour yourself a cup of coffee. According to what you see, gravity works backwards. Liquids fall up. But of course, gravity hasn’t changed at all.
Yesterday, your vision was your most reliable sense. But today it has become your least reliable one. Your other senses that tell you about up and down — your balance, touch, proprioception, and sound — are all telling you the world is as it has always been: upright. But your vision disagrees.
Q2: So what happens when you wear these inverting goggles for days?
At first, as you might guess, it was chaos for these volunteers.5
They stumbled around, missed door handles, tripped over steps, and poured coffee onto the floor. Activities like riding a bike or climbing stairs were absurdly difficult.
But you will probably not be surprised to learn that this chaos did not last. The brain, as we know, is very good at adapting.
In those early 1950s studies, it took just five to six days of wearing the goggles before volunteers said the world looked normal again — it no longer seemed upside down.6
That alone is remarkable: the brain seemed to completely adapt to input that had been flipped on its head in just under a week!
But what’s more interesting is how the volunteers said this change happened.
They didn’t just wake up one day and suddenly see the world the right way up. The shift, they reported, was gradual.
And not only gradual — but strangely selective. Certain objects seemed to “flip” before others, as if the brain were recalibrating its sense of uprightness piece by piece.7
With vision no longer lining up with the other senses, the brain had to relearn the link between movement and sight. It could have relied on head movement, eye movement, sound, or gravity to relearn what up meant. For example, moving the head down used to reveal more of the floor. But with the goggles on, moving the head down suddenly showed more of the ceiling.
Out of all the strategies the brain might have tried, one stood out: touch.
By around day three, something strange began to happen. When the volunteers reached out and touched an object — traced its edges, felt its contours — they reported that the object they were touching, and only that object, suddenly seemed right-side up.8
The volunteers claimed that objects they were interacting with looked upright, while the rest of the world remained upside down.
Imagine how strange that must have been.
Using their hands seemed to make all the difference, too: volunteers who were instructed not to move around or interact with their environment adapted much more slowly, and some never adapted at all.9
So what happened when these volunteers finally took the goggles off?
Well… they saw the world as upside down.
This aftereffect didn’t last long, though — in some cases it was just a few minutes — and then the world seemed upright again.
Q3: What can we learn from these strange old experiments?
At first glance, the inverted goggle studies might seem like little more than a quirky story — some odd thing a scientist tried a long time ago. Interesting, sure, but not much more than that.
But how we interpret the results of these experiments raises deeper philosophical questions.
Some take these findings as evidence that perception requires action — that we learn to see the world correctly not just by looking, but by doing.
Others interpret the data quite differently. To them, perception is a representation of the outside world, and if you give the brain enough time, it will recalibrate and flip that internal representation back to how it’s supposed to be.
As odd as these experiments are, how to interpret their findings is still up for debate.
Take a more recent study, where volunteers wore inverting goggles for ten days.
Just like in the early research, they adapted surprisingly well. Within days, they were walking through town, shopping for groceries — even skiing down a slope — as if nothing were out of the ordinary.10
Watching them, you’d assume their brains had quickly adjusted. And if adaptation were simply a matter of flipping an internal representation, you might expect to see that flip reflected somewhere in the brain’s visual maps.
When we talk about visual maps, we’re usually referring to early visual areas like V1. These regions are retinotopically organised — meaning every point of light that hits the retina has a matching point in V1. In other words, the brain preserves the layout of the retina.
So, if the brain truly flipped the representation from upside down to right-side up, we might expect to see that flip reflected in this map. A reversal in perception should show up as a reversal in the brain’s representation.
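To make “a flip in the map” concrete, here is a toy version of the idea (my illustration; real retinotopy is continuous and distorted, not a neat grid):

```python
# Reduce retinotopy to a lookup from retinal locations (row, col)
# to cortical locations in V1. Grid size is arbitrary.
H, W = 4, 3

# Retinotopy as the textbooks describe it: V1 preserves the retina's layout.
retina_to_v1 = {(r, c): (r, c) for r in range(H) for c in range(W)}

# What a genuinely flipped representation would require: the mapping
# itself reverses along the vertical axis.
flipped_retina_to_v1 = {(r, c): (H - 1 - r, c)
                        for r in range(H) for c in range(W)}
```

If adaptation worked by flipping an internal representation, imaging should find something like the second mapping.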
But that’s not what the brain scans showed.
Activity in primary visual cortex remained unchanged. There was no flip in the map. No rewiring of the visual coordinates.
Some have taken this as evidence that the brain doesn’t flip a representation. Others say it provides evidence that there is no representation to flip.
But the story is more complicated than that.
Because while volunteers in the original studies claimed the world eventually looked upright again, participants in more recent experiments reported that this wasn’t their experience.
Even though they were able to function normally — walk through town, ride bikes, even ski down slopes — they didn’t say the world had flipped upright. They said it remained upside down.
So what’s going on?
To start unpacking that, let’s imagine vision were your only sense. No hearing, no balance, no sense of touch. No proprioception. No sense of gravity. Just visual input.
Now suppose you put on goggles that flipped your visual field upside down. Could you say your vision had been inverted?
Probably not.
Without other senses to compare against, there’d be no baseline — no up or down to refer to. Inversion only makes sense if there’s something else to anchor it. The same goes if all your senses were inverted at once. If everything flipped — sight, balance, proprioception — how would you even notice? What would you compare your experience to?
The volunteers in the inverting goggle studies weren’t in that situation. They had other senses to rely on — balance, movement, and the sense of gravity.
And they had something else too — memory. They remembered what the world looked like before the flip. Those extra anchors gave them something to recalibrate against.
Neuroscientists call this multisensory calibration. And we do it constantly.
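One common way to formalise this (the cue-integration and Bayesian re-weighting models mentioned in footnote 12) is to treat each sense as an estimate with some reliability, and weight accordingly. A minimal sketch, with made-up numbers:

```python
# Reliability-weighted cue combination: each cue is (estimate, variance),
# and the fused estimate weights each cue by 1/variance.
def fuse(cues):
    weights = [1.0 / var for _, var in cues]
    weighted_sum = sum(w * est for w, (est, _) in zip(weights, cues))
    return weighted_sum / sum(weights)

# Estimating an object's width (in cm) from two senses:
vision = (5.0, 0.25)  # low variance, so it dominates
touch  = (6.0, 1.00)  # noisier, so it counts for less

print(fuse([vision, touch]))  # 5.2, pulled toward the reliable cue
```

On this view, part of what changes during adaptation is the weighting: goggled vision loses credibility, while touch and balance gain it.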
Watch a basketball game, and your brain is syncing sight and sound to follow the ball. Light and sound travel at different speeds, so they hit your brain on different timelines. Yet we don’t experience the sound after we see the ball hit the ground. The brain calibrates.
This is why when sight and sound are slightly out of sync (like in dubbed movies), the brain can recalibrate to reduce the mismatch.11
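To put rough numbers on that: sound travels at about 343 m/s in air, while light’s travel time is negligible over everyday distances, so sound lags by roughly 3 ms per metre. Checking a few distances against the ~±200 ms tolerance from footnote 11:

```python
SPEED_OF_SOUND = 343.0  # m/s in air, approximate
TOLERANCE_S = 0.200     # rough audiovisual fusion window (footnote 11)

for distance_m in (10, 30, 70):
    lag_s = distance_m / SPEED_OF_SOUND  # light arrives essentially instantly
    verdict = "fused" if lag_s <= TOLERANCE_S else "noticeably out of sync"
    print(f"{distance_m} m: sound lags ~{lag_s * 1000:.0f} ms -> {verdict}")
```

From a courtside seat, the thud arrives well inside the window, and the brain stitches sight and sound into a single event.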
But this can sound like there is a little theatre in the head. And we know there is no such theatre.
Plus scans show that even after days of wearing inversion goggles, the visual cortex doesn’t flip its map.
So what, exactly, is being recalibrated when the brain recalibrates?
One idea is that the brain is recalibrating the relationship between signals.12
Your brain is constantly predicting inputs and comparing those predictions to what it actually gets from its multiple senses — vision, balance, touch, and proprioception.13
When volunteers first put on the inverting goggles, the brain’s predictions are very wrong. The relationship between input from the eyes and input from other senses, like the inner ear and the hands, is entirely opposite to what the brain predicts.14 The brain’s expectations about that relationship need updating. So it gets to work — not flipping a representation, but rebuilding the relationship between inputs.
Touch becomes a powerful ally.15 When you reach out and feel the shape of an object, your brain uses that input as solid, dependable feedback. It grounds reality in what you feel and learns the new relationship.
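To make “rebuilding the relationship” concrete, here is a toy learner of my own (not a model from any of these studies): a single gain g predicts how the scene should move when the head moves, and each touch-confirmed prediction error nudges g toward the new relationship.

```python
TRUE_GAIN = -1.0     # with the goggles on: look down, see more ceiling
g = 1.0              # the brain's prior: vision moves with the head
LEARNING_RATE = 0.2

for _ in range(30):
    command = 1.0                         # e.g. "tilt head down"
    predicted = g * command               # what the brain expects to see
    observed = TRUE_GAIN * command        # what actually arrives, confirmed
                                          # by touch and proprioception
    error = observed - predicted          # sensory prediction error
    g += LEARNING_RATE * error * command  # update the relationship

print(f"learned gain: {g:.3f}")  # approaches -1.0
```

Nothing in this loop flips an image. What changes is the mapping between movement and expected input, which fits with the scans showing no flip in V1.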
But of course, prediction doesn’t answer all the questions we might want to ask about perception.
We might still wonder: what allows the volunteers to say the world seems upright — or inverted?
Many think we’re still left with that tricky question:
What exactly is perception?
So, how do you think we answer that question?
A Huge Thank You and a Change in Gears
When I started this little newsletter about 18 months ago, I did so mostly because I love writing and thinking about the intersection of neuroscience, philosophy of mind, and artificial intelligence.
I saw it as a way to explore that intersection more deeply. But I also quietly hoped that by writing, I might find others who are just as curious about these questions as I am.
And I have. We found each other.
This community has become one of the most rewarding parts of my life. Your comments, questions, and perspectives challenge me, teach me, and often take the ideas in directions I hadn’t considered. That conversation — this conversation — is the reason I keep going.
So, thank you. Truly.
The one thing I didn’t anticipate was just how limited time can be. I don’t write this newsletter full-time. I also run a company. So, the writing and the replying happen in the margins: early mornings, late evenings, or in little gaps in the day. And as many of you have noticed, replies sometimes come late — or, despite my best efforts, slip through the cracks entirely.
Until now, I’ve been prioritising the writing and squeezing in the responses when I can. But I want to try flipping that.
So, moving forward, the essays might come a little less predictably.
It’s a tradeoff, I know. But I think it’s the better one. The comments are where so many of the best ideas are discussed.
So, I’m going to try this. If it turns out terribly, I’ll just try something else.
And thanks again for being here. I mean that more than you know.
By balance I mean the vestibular sense. By movement I mean proprioception (body position and movement). By body state I mean interoception (hunger, heart rate, visceral stretch, etc.). We might want to add pain and temperature (nociception and thermoception) to this list, too. Textbooks often count 10-15 distinct sensory systems, not just the 5 senses you learnt in school.
This is true, but it is context-dependent (e.g., vestibular cues can override vision in the dark).
George Stratton’s work was done in 1896–97; Ivo Kohler’s series (1950–63) is the classic modern set of experiments.
Stratton: 8 days. Kohler: up to ~21 days. A few later cases hit 4–5 weeks, but published peer-reviewed accounts beyond a month are sparse.
All classic reports (Stratton 1896, Erismann & Kohler 1950, Snyder 1950s) describe staggering, grasp errors, and nausea.
Stratton wrote that fleeting moments of uprightness appeared after ~87 h (~4 days) and stabilised later. Kohler’s best-documented subject (himself) said “normal” vision returned around day 10. Modern replications (Linden 1999) find no full perceptual flip even after 6–10 days.
Kohler’s diary mentions some items appearing upright first, but there’s no quantitative data. This is all anecdotal evidence.
The 1950 Innsbruck film shows Kohler declaring a touched object now upright, and early authors emphasised active haptic exploration. Early observers believed touch was crucial; later work shows any self-generated movement speeds adaptation.
Held & Bossom 1961 and Held & Freedman 1963 showed that passive movement meant little or no adaptation.
Linden, D. E. J., Kallenbach, U., Heinecke, A., Singer, W., & Goebel, R. (1999). The myth of upright vision: A psychophysical and functional imaging study of adaptation to inverting spectacles. Perception, 28, 469–481. doi:10.1068/p2820.
We seem to tolerate roughly ±200 ms of audiovisual lag.
Supported by multisensory-cue–integration and Bayesian re-weighting models.
Predictive processing and the forward-model framework represent the mainstream view in motor control and perception.
This is the classic sensory prediction error explanation. We think the cerebellum & posterior parietal cortex are essential for this process.
Early diaries flagged touch, but modern work shows active, self-generated movement (which includes touch) is the stronger predictor of successful recalibration. We now think touch helps because it is tightly coupled to proprioception and shortens the visuomotor loop, but it’s not uniquely privileged.