How the Blind Use Echolocation to See
Yes, Actually See
When the mood takes him, Daniel likes to do something that might seem reckless. The kind of thing most of us would never dare attempt.
He rides his bike.
It’s reckless — or it seems that way — because Daniel is blind.
When Daniel was just a toddler, doctors removed both of his eyes because of eye cancer.
Daniel can ride his bike because he echolocates.
He makes clicking sounds. He snaps his tongue against the roof of his mouth as he rides — kind of like a bat. The clicks bounce off everything around him and return to his ears, telling him what is out there. The lamppost up ahead, the car parked at the curb, even the hedge along the sidewalk — Daniel senses his world through sound.
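If you like numbers, the physics of a single click is simple enough to sketch. Here is a back-of-the-envelope version in Python (the 12-millisecond figure and the function name are my own illustrative assumptions, not measurements of Daniel's clicks): the click travels out, bounces off an object, and returns, so the echo's delay tells you roughly how far away the object is.

```python
# A back-of-the-envelope sketch of echo ranging (illustrative numbers only).
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 °C

def distance_from_echo(delay_seconds: float) -> float:
    """Distance to the reflecting object, given the round-trip delay of the echo."""
    # The sound travels to the object and back, so halve the round trip.
    return SPEED_OF_SOUND_M_PER_S * delay_seconds / 2.0

# An echo arriving 12 ms after the click puts the object about 2 metres away.
print(distance_from_echo(0.012))  # ~2.06
```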
Daniel has been using echolocation to find his way in the world since he was a kid, so he has become very good at it. He does most things a sighted person might like to do. He hikes, he goes rock climbing, he navigates busy city streets, and, yes, he rides a bicycle.
It seems incredible. But spend any time listening to Daniel, and you’ll quickly learn that he doesn’t think his abilities are all that amazing.1
Daniel says he can see. In a very real sense. And he’s not the only blind person who makes such claims.
Which makes us wonder — what the heck is going on here!?
To find out, let’s ask three questions:
1. What’s going on in the brain when someone learns to echolocate?
2. Do we need eyes to see?
3. What might all of this tell us about consciousness?
This is Part 3 of a short series. We started this series by discussing Thomas Nagel’s famous paper What Is It Like to Be a Bat? If you haven’t read Parts 1 and 2 yet, no worries! You don’t need them to follow along with this article. Although, in Question 3, I will circle back to Parts 1 and 2 to connect some dots.
Q1: What’s going on in the brain when someone learns to echolocate?
When people lose one sense, they often get better at others. Deaf people tend to have sharper vision, amputees develop more sensitive touch, and blind people get remarkably good at processing sound.
We may wonder, what’s going on in the brain to make this possible?
When someone loses sight, we might expect that the visual cortex would go dark — so to speak. After all, there’s no input coming from the eyes.
But scientists have discovered something quite remarkable. When they put blind people in brain scanners and played sounds for them, the visual cortex — the part of the brain that normally only cares about vision — responded to the sounds.
To understand what might be happening here, first, we need to review what goes on in the brain under normal circumstances — when someone is not blind.
The visual cortex is organized into different regions, like V1, V2, V3, V4, and V5. Each region (normally) maps the input from our eyes in a very precise way.
Imagine laying a grid over the retina in your eye — each point in that grid connects to a specific point in each of these brain regions. So, each region is like a map. And each of these maps specializes in processing different aspects of the world. Some maps handle the orientation of edges, others process motion, and others deal with colour.
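To make the idea of a point-for-point map a little more concrete, here is a loose analogy in Python (purely illustrative; the grid size and the two "features" are my own inventions, not real anatomy): the same retinal grid feeds several maps, and each map pulls out a different aspect of the scene while keeping the spatial layout.

```python
import numpy as np

# A toy "retina": an 8 x 8 grid of light intensities between 0 and 1.
retina = np.random.rand(8, 8)

# Point-for-point wiring: position (row, col) on the retina lines up with the
# same (row, col) in each cortical map, so the spatial layout is preserved.
brightness_map = retina.copy()  # one map keeps the raw intensities

# Another map responds to edges: big jumps in brightness between neighbours.
edge_map = np.abs(np.diff(retina, axis=1, prepend=retina[:, :1]))

# Both maps share the retina's geometry; they just specialise in different
# aspects of the same input.
print(brightness_map.shape, edge_map.shape)  # (8, 8) (8, 8)
```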
We can think of the visual cortex as a sensory map. It takes in input from our sense organs — typically, those sense organs are our eyes. But it’s worth noting that the brain has other types of maps, too — like the somatosensory cortex and the motor cortex that map the body, and the hippocampus, which acts as a cognitive map for memory and navigation.
So, if the visual cortex typically acts as a sensory map, we may wonder what happens when it doesn’t get input from the eyes and starts responding to sounds instead.
One possibility is that the brain changes in a general way. It simply reassigns the now-available neurons to other tasks — like processing sound. With more neural real estate devoted to sound, more detail gets processed, so hearing improves. There’s a lot of truth to this idea — in the brain, more processing power often does mean more detailed sensing.
But there might be something else going on here, too. Something that has to do with how the different maps are organized in the first place.
To understand what else might be going on, we need to ask a strange question.
What if the visual cortex’s main function isn’t vision at all?
What if the real job of the visual cortex is to create sensory maps of the world around us? And the brain doesn’t mind what inputs it uses to build these maps — it just tries to use the best possible input for figuring out where things are, how they’re moving, and what shape they are. Eyes are usually the best way to get this input because our eyes are very good sensors. But ears can work, too, if need be.
Scientists have found some evidence to support this idea.
Typically, when something moves past a person, the light it reflects sweeps across their retina and the sound it makes shifts from one ear to the other. Normally, area V5 in the visual cortex is especially responsive to the motion of light across the retina.
But in blind people, particularly those who use echolocation, V5 becomes active when they hear sounds moving from one ear to the other. It’s as if V5 doesn’t mind whether it gets its motion information from light or sound — it just wants to track movement.
All of this raises an intriguing question: if seeing our world is about the visual cortex mapping our world — and if the visual cortex can create these maps from sound — does that mean blind people who echolocate are, in some sense, seeing?
Q2: Do we need eyes to see?
In 1974, a baby who was blind from birth — let’s call him Baby X — tried something no baby had ever tried before.2
Baby X wore a special pair of glasses. These glasses sent out ultrasonic waves — like a bat’s sonar — that would bounce off objects in the environment. The reflections of these waves would then be converted into sounds the baby could hear. High-pitched sounds meant something was far away; low-pitched sounds meant it was close. Loud sounds meant the object was large; soft sounds meant it was small.
The glasses were essentially turning visual information into an audio landscape that the baby might learn to understand.
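To make the mapping concrete, here is a tiny sketch of the idea in Python. The function name, the numeric ranges, and the linear scaling are all my own assumptions for illustration; they are not the workings of the actual 1974 device.

```python
def object_to_tone(distance_m: float, size_m: float) -> tuple[float, float]:
    """Map an object's distance and size to a (pitch_hz, volume) pair.

    Farther objects -> higher pitch; larger objects -> louder tone,
    following the scheme described above (illustrative numbers only).
    """
    NEAR_M, FAR_M = 0.1, 5.0         # assumed working range of the sonar
    LOW_HZ, HIGH_HZ = 200.0, 4000.0  # assumed pitch range

    # Clamp distance into the working range, then scale linearly to pitch.
    d = min(max(distance_m, NEAR_M), FAR_M)
    pitch_hz = LOW_HZ + (d - NEAR_M) / (FAR_M - NEAR_M) * (HIGH_HZ - LOW_HZ)

    # Bigger objects reflect more of the ultrasonic pulse, so make them louder.
    volume = min(size_m / 2.0, 1.0)  # 0.0 (tiny) up to 1.0 (about two metres or more)
    return pitch_hz, volume

# A mug 40 cm away: quiet and fairly low. A doorway 3 m away: loud and high.
print(object_to_tone(0.4, 0.1))
print(object_to_tone(3.0, 2.0))
```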
Even on the very first day, the findings were remarkable. The scientist slowly moved an object to and from the baby’s nose. On the fourth pass, Baby X’s eyes converged as the object approached his nose. Remember, Baby X is blind. He was acting just like a sighted baby would. A few passes later, Baby X started putting up his hands as if reaching for the object.
Later in the same session, Baby X was sitting on his mother’s knee. He slowly turned his head away, moving her out of the sound field, then turned back to bring her into it again. He did this over and over, giggling every time his mother’s face came into the sound field. He was playing peek-a-boo with sound.
Over the next few months, Baby X started doing things that blind babies typically don’t do. He would point to his favourite toy without touching it. He would reach for objects with both hands. He would even search for things hidden behind other objects.
Baby X appeared to be seeing with sound.
Seeing with sound serves a similar function to seeing with light — it helps create a map of the world around us. But seeing with sound wouldn’t be exactly the same as seeing with light. Because seeing with sound cannot pick up differences in light, it would not provide details about colour. And it’s likely that fine details, like surface textures, would be harder to detect. However, because we have two ears and because sound bounces differently off different materials and shapes, the brain can still figure out what objects are made of, where they are located, their general size and shape, and even how they are moving. In other words, while the inputs might be different, the brain can still build a sensory map — and if that’s what seeing is, then maybe we don’t need eyes to see.
The implications of technology like the glasses Baby X wore are enormous. So, you might be wondering why we don’t see these types of glasses being used today. The answer is that the technology back then was bulky and heavy — not exactly the kind of thing a baby could wear all day. The idea was remarkable, but progress has had to wait for better technology.
In the early 1980s, a Dutch physicist named Peter Meijer had an idea. Instead of using ultrasonic waves like Baby X’s glasses, what if you could take a regular video feed and turn it directly into sound? By the early 90s, he had created a system he called vOICe (where OIC stands for Oh, I See). The system worked like this: height became pitch (taller things made higher sounds), left-to-right position became timing (like a sound moving from one ear to the other), and brightness became volume (brighter things made louder sounds).
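As a rough illustration of that recipe, here is a toy sketch in Python (my own simplification, not Peter Meijer's actual algorithm; the sample rate, pitch range, and function name are assumptions): each column of a small grayscale image becomes a moment in the left-to-right sweep, each row a sine tone whose pitch rises with height, and each pixel's brightness sets how loud that tone is.

```python
import numpy as np

SAMPLE_RATE = 44100          # audio samples per second
SWEEP_SECONDS = 1.0          # one left-to-right scan of the image per second
LOW_HZ, HIGH_HZ = 200, 4000  # bottom row gets a low pitch, top row a high pitch

def image_to_sweep(image: np.ndarray) -> np.ndarray:
    """Turn a 2D grayscale image (values 0-1) into a mono audio sweep."""
    n_rows, n_cols = image.shape
    samples_per_col = int(SAMPLE_RATE * SWEEP_SECONDS / n_cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    freqs = np.linspace(HIGH_HZ, LOW_HZ, n_rows)  # top row maps to the highest pitch

    columns = []
    for col in range(n_cols):
        tones = np.zeros(samples_per_col)
        for row in range(n_rows):
            brightness = image[row, col]  # 0 = dark and silent, 1 = bright and loud
            tones += brightness * np.sin(2 * np.pi * freqs[row] * t)
        columns.append(tones / n_rows)    # keep the mix within a sensible range
    return np.concatenate(columns)

# Example: a bright diagonal line on a dark background becomes a rising sweep.
image = np.eye(16)[::-1]        # diagonal running from bottom-left to top-right
audio = image_to_sweep(image)   # save as a .wav (e.g. with scipy.io.wavfile) to listen
```

If you run it and save the result, the diagonal comes out as a tone climbing from low to high over one second, which is the basic trick: spatial layout turned into pitch and timing.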
Explaining how vOICe works with words makes it seem more complicated than it is. It’s the type of thing that’s better to experience for yourself. Which you can do at seeingwithsound.com/webvoice. If you want to try it, you will need to allow access to your computer’s camera, but otherwise, using vOICe is very straightforward.
What you’ll hear at first will probably seem like a cacophony of meaningless noise. But play around with it. Notice how the sound changes when you put different objects in front of the camera — perhaps an open hand with fingers spread compared to a closed hand.
This is a pause… if you’d like to go try it.
If the sounds vOICe makes seem confusing at first — like a meaningless buzz of tones — consider what it might have been like for Baby X wearing the ultrasonic glasses for the first time. Or imagine what it is like for any baby trying to make sense of the world. Psychologist William James famously suggested that newborns experience the world ‘as one great blooming, buzzing confusion’.
This tells us something interesting about what it might mean to see.
Perhaps seeing isn’t simply about having working eyes (or ears) — it’s about the brain learning to create meaning. Whether the input comes through the eyes, the ears or even touch might not matter as much as we think. What matters is that the brain can learn to create meaning from the blooming, buzzing confusion that it receives. What we typically call seeing might be the brain creating this meaning.
(What this meaning is, how it is created, and who exactly it is for are questions for another day.)
That brings us to perhaps the most fascinating question of all:
Q3: What might all of this tell us about consciousness?
Over the past few weeks, we discussed Thomas Nagel’s ideas about consciousness. He describes consciousness as what it feels like — the qualia, or the felt sensory experience — like the redness of an apple, the taste of strawberries, the smell of the ocean. When Nagel talks about what it feels like — is he talking about the blooming buzzing confusion? Is it the input — the raw feels — that Nagel thinks cannot be reduced to brain states?
It seems not.
When Nagel asks us to imagine what it is like to be a bat, he seems to want us to consider the subjective experience of a bat — what it means for the bat to be a bat.
He seems to be getting at something deeper than raw feels. He seems to want us to consider the meaning. Consider not just what the input feels like but what it’s like to understand that input. What is it like to have a bat-like understanding? To understand what an echo means for the bat?
Understanding meaning in the brain, it turns out, is a very different question from understanding raw sensory experience. Meaning, it seems, requires experience.
The question is, does this prove anything about the nature of reality or the limits of science?
Next week...
Let’s return to the question of experience and raw feelings and discuss Mary and her black-and-white room. What does Mary learn when she sees red for the first time? And is this type of knowledge — knowledge that comes from experience — different from other types of knowledge?
To see Daniel demonstrate his remarkable echolocation abilities and hear him describe the experience firsthand, watch his fascinating TED talk:
Bower TGR (1978), Perceptual development: Object and space, in Handbook of Perception, vol. 8, Perceptual Coding, ed. EC Carterette and MP Friedman (New York: Academic Press).
Thank you.
I want to take a small moment to thank the lovely folks who have reached out to say hello and joined the conversation here on Substack.
If you’d like to do that, too, you can leave a comment, email me, or send me a direct message. I’d love to hear from you. If reaching out is not your thing, I completely understand. Of course, liking the article and subscribing to the newsletter also help the newsletter grow.
If you would like to support my work in more tangible ways, you can do that in two ways:
You can become a paid subscriber
or you can support my coffee addiction through the “buy me a coffee” platform.
I want to personally thank those of you who have decided to financially support my work. Your support means the world to me. It’s supporters like you who make my work possible. So thank you.
A fascinating description of how seeing is much more than just processing light.
It's also worth noting that we don't see the pattern that hits our retina. That pattern has high acuity (resolution) and color in the center but becomes increasingly lower acuity and colorless as we move closer to the periphery. And we have a hole where the optic nerve connects the retina to the brain. Our impression of a rich visual field is a construction, possibly a prediction framework with incoming signals acting as error correction. It shouldn't surprise us that it could be constructed from alternate pathways.
On seeing through hearing, I wonder if anyone has tried to incorporate color into something like that. Probably too much information to wedge in, particularly if we want to give it the same saliency as reds and yellows have in comparison to greens and blues. If we did manage it, it seems like a blind person could come to form many of the same learned associations with color that we do. So they could come to understand what a sighted person means by red being associated with hotness, or blue with coolness.
Which would raise the question: are they now having the experience of redness or blueness? If not, what would they be missing?
Excellent, as always Suzi!
Thanks for another lovely one Suzi!
I bet one big advantage of the clicking echolocation that people use is that it should still leave the standard sense of hearing quite intact. Conversely the sounds from the vOICe app seem pretty overpowering. Of course someone could turn the vOICe sounds down to get more standard sound information as well, though probably at the expense of not getting quite as much nuanced vOICe information. So there ought to be a trade-off between the two. Maybe it would be most effective to alternate between the vOICe information and just standard hearing as appropriate from time to time? But what do blind people themselves now find? That would be where the rubber actually meets the road. Outfitting someone with such technology today, and even a baby, should not be expensive. Unless there are major problems with this particular technology then I’d expect blind people to now be using it quite a lot.
Much of this article addresses the concept of brain plasticity. Furthermore sometimes there are time limits, which I presume is why baby development should be important here. This reminds me of the tragic case of “feral children”. Beyond horrific psychological trauma, apparently without exposure they lose the potential to develop any of our natural languages — their brains appropriate those areas for other things. So are people, and even blind babies, now being fitted with such technology? And hopefully even babies are taught to turn their vOICe system on and off for samples when they’d like information about what’s around them rather than standard sound information.
Anyway back to Nagel, I suspect that even he couldn’t quite nail down what he was getting at with “something it is like”. Perhaps he just figured that brain states weren’t appropriate for this mysterious thing? Jackson too. But perhaps I can enunciate what they could not. Perhaps it’s the goodness to badness of existing? In the end that’s all I think it is. I consider this to essentially be the fuel which drives the conscious form of function, and somewhat like the electricity that drives our computers.