A fascinating description of how seeing is much more than just processing light.
It's also worth noting that we don't see the pattern that hits our retina. That pattern has high acuity (resolution) and color in the center but becomes progressively lower in acuity and colorless as we move toward the periphery. And we have a hole where the optic nerve connects the retina to the brain. Our impression of a rich visual field is a construction, possibly a prediction framework with incoming signals acting as error correction. It shouldn't surprise us that it could be constructed from alternate pathways.
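To make that error-correction idea concrete, here's a toy prediction-plus-correction loop. It's purely illustrative -- the "brightness" value, the learning rate, and the update rule are all made up, not a model of real visual processing:

```python
import numpy as np

# Toy predictive loop: the "brain" keeps a running prediction of a sensory
# value and uses each incoming sample only to correct its prediction error.

rng = np.random.default_rng(0)
true_brightness = 0.8      # the world
prediction = 0.0           # the brain's current guess
learning_rate = 0.2        # arbitrary illustrative value

for step in range(20):
    sample = true_brightness + rng.normal(0, 0.05)  # noisy retinal input
    error = sample - prediction                     # prediction error
    prediction += learning_rate * error             # correct the model, not the image
    print(f"step {step:2d}  prediction={prediction:.3f}  error={error:+.3f}")
```

The point is just that the running prediction, not the raw sample, is the thing that gets "seen"; the incoming signal only nudges it.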
On seeing through hearing, I wonder if anyone has tried to incorporate color into something like that. Probably too much information to wedge in, particularly if we want to give it the same saliency as reds and yellows have in comparison to greens and blues. If we did manage it, it seems like a blind person could come to form many of the same learned associations with color that we do. So they could come to understand what a sighted person means by red being associated with hotness, or blue with coolness.
Which would raise the question: are they now having the experience of redness or blueness? If not, what would they be missing?
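On the practical question of wedging color in, here's one rough guess at how a vOICe-style soundscape could carry an extra channel. The column sweep, pitch-for-height, and loudness-for-brightness mapping is the general idea behind such devices; the function name, the `hue` channel, and the hue-to-tremolo mapping are entirely hypothetical, just to show where the extra information might go:

```python
import numpy as np

def toy_soundscape(image, hue, sweep_time=1.0, sr=16000):
    """Map a small image to sound: columns sweep left to right over time,
    row height sets pitch, pixel brightness sets loudness.
    `hue` (0 = cool/blue .. 1 = warm/red) is a made-up extra channel mapped
    to a tremolo rate -- a guess at how color salience *might* be wedged in."""
    n_rows, n_cols = image.shape
    samples_per_col = int(sweep_time * sr / n_cols)
    t = np.arange(samples_per_col) / sr
    freqs = np.geomspace(200.0, 4000.0, n_rows)[::-1]  # top row = high pitch
    out = []
    for c in range(n_cols):
        col = np.zeros_like(t)
        tremolo = 1.0 + 0.5 * np.sin(2 * np.pi * (2 + 10 * hue[:, c].mean()) * t)
        for r in range(n_rows):
            col += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        out.append(col * tremolo / n_rows)
    return np.concatenate(out)

# Tiny 4x4 "scene": a bright warm blob in the upper left.
img = np.array([[1, .8, 0, 0], [.8, .5, 0, 0], [0, 0, 0, 0], [0, 0, 0, .3]])
hue = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
audio = toy_soundscape(img, hue)
print(audio.shape)  # one second of soundscape at 16 kHz
```

Whether a listener could actually attend to that extra dimension, and give warm hues the salience reds and yellows get visually, is exactly the open question.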
Great point -- what we experience is more about constructing useful models than faithfully reproducing sensory data.
Learning colour through sound raises interesting questions. If a blind person learned all the same associations with "red" that sighted people have (heat, stop signs, blood, etc.) and could reliably identify red things through sound-based vision, would they be experiencing redness? Would that be functionally equivalent -- and is there a difference? Is the subjective "feel" of redness separable from the functional role it plays in our experience and behaviour?
One thing I find fascinating is how people describe their experience with technologies like vOICe. At first, they hear just an annoying buzz of sounds. But quickly those sounds start to mean something - they become signifiers rather than just sounds.
It reminds me of the difference between hearing a language you understand versus one you don't. When you understand the language, you don't really notice the raw sounds anymore - you just grasp the meaning. The sounds become (somewhat) transparent to their significance.
Somewhere deep inside my brain my awareness is curled up, nice and warm, on a sofa, sipping a bottomless cup of endorphins, and watching a giant screen showing a two-and-a-half-D simulation based on the visual data abstracted from my environment. It's also connected to further simulations of my body and the make-believe worlds of sound, smell and taste. All this feeds into a more vague 3D map of my surroundings, mainly built from memory of what's been seen, but I also get inklings from sound cues, like the difference in the sound of footsteps on entering a large room from a corridor, or going outside.
The interesting part is the awareness's interaction with the semi-autonomous autopilot that handles day-to-day activities, like walking (even talking sometimes). The awareness can seamlessly take over foot placement and withdraw again without breaking your rhythm. And you don't even think about balance.
While I love the idea of my awareness curled up on a sofa watching a giant screen, I can't help but be reminded of the homunculus fallacy! But I get your point.
The seamless switching between conscious and unconscious has fascinated me for years. When we switch from doing something unconsciously to consciously, what's actually going on in that switch? You've hit on one of my most favourite questions to try to tackle -- a question I spent many years researching.
Maybe it's just me, or I'm imagining it, but when I hear a loud noise with my eyes closed I frequently see a flash of light.
I think probably the cortexes do have specializations, because (I think) in normal development the eyes get wired to the visual cortex and ears to the audio. However, because information is widely distributed in the brain likely some of the audio goes to the visual and vice versa. Those connections would grow if one of the cortexes lost its original source of input, so it could be possible for the audio cortex to take on visual processing.
Mriganka Sur rewired new-born ferret brains so that the visual input went to where auditory input normally is processed. The result was the part of the brain thought to be only able to process auditory input developed fully functional visual processing capability.
The brain is highly interconnected -- you're totally correct on that. And, yep, it's also highly plastic -- constantly changing. Every time you do anything -- so all the time -- it's changing.
That flash of light you see with loud noises is fascinating! It's called a phosphene, and it's a great example of how our senses aren't as separate as we might think. The visual and auditory systems have lots of connections and cross-talk.
The ferret study you mention is mind-blowing, isn't it? It really challenges the idea that brain regions are rigidly specialised for specific senses. Instead, it seems like these regions can learn to process whatever input they receive, especially early in development. This fits with what we know about brain plasticity -- our neural circuits are constantly being reshaped by experience.
Since similar pyramidal neuron structures span across the cortex, it makes sense that there would be a degree of interchange capability for particular functions. I've wondered what to make of the fact that the allocortex has fewer layers than the neocortex. Perhaps simply a holdover from evolution or perhaps indicative of some unique functionality. If you lose a hippocampus like H.M., another part of cortex isn't going to take over for it.
What an interesting thought! When you mention it might be "a holdover from evolution," it makes me wonder - what advantages might this simpler (or less layered) structure offer? Could there be something about having fewer layers that's actually optimal for the functions of regions like the hippocampus?
It's fascinating how the brain balances standardisation with specialisation, isn't it!?
I would be mostly guessing, but it could be that the structures of the allocortex are more specialized and fewer layers work well for what they do; whereas the neocortex is less specialized, as can be seen from the fact that one region can take over for other regions. From an evolutionary standpoint, expansion of layers in the allocortex might have introduced dysfunctional mutations, so instead of adding new capabilities to the allocortex, expansion on top of it provided better selective advantage.
Rocky in Project Hail Mary also 'sees' through echolocation. Excellent book; excellent article. Careful Suzi, soon they will be making movies of your work.
I loved that book! It was my favourite read of 2021.
Hahaha -- that would be quite something! Though I think I'll stick to writing non-fiction and leave fiction for those with better imaginations than I 😉
I bet one big advantage of the clicking echolocation that people use is that it should still leave the standard sense of hearing quite intact. Conversely, the sounds from the vOICe app seem pretty overpowering. Of course someone could turn the vOICe sounds down to get more standard sound information as well, though probably at the expense of not getting quite as much nuanced vOICe information. So there ought to be a trade-off between the two. Maybe it would be most effective to alternate between the vOICe information and just standard hearing as appropriate from time to time? But what do blind people themselves now find? That would be where the rubber actually meets the road. Outfitting someone with such technology today, even a baby, should not be expensive. Unless there are major problems with this particular technology, I'd expect blind people to now be using it quite a lot.
Much of this article addresses the concept of brain plasticity. Furthermore sometimes there are time limits, which I presume is why baby development should be important here. This reminds me of the tragic case of “feral children”. Beyond horrific psychological trauma, apparently without exposure they lose the potential to develop any of our natural languages — their brains appropriate those areas for other things. So are people, and even blind babies, now being fitted with such technology? And hopefully even babies are taught to turn their vOICe system on and off for samples when they’d like information about what’s around them rather than standard sound information.
Anyway back to Nagel, I suspect that even he couldn’t quite nail down what he was getting at with “something it is like”. Perhaps he just figured that brain states weren’t appropriate for this mysterious thing? Jackson too. But perhaps I can enunciate what they could not. Perhaps it’s the goodness to badness of existing? In the end that’s all I think it is. I consider this to essentially be the fuel which drives the conscious form of function, and somewhat like the electricity that drives our computers.
Thanks, Eric! You always raise fascinating points.
Visually impaired people often develop enhanced hearing. And vOICe users seem to learn to process its sounds as meaningful information rather than noise. But, good point, the technology does have some major limitations -- especially in noisy environments. This might be why, as you note elsewhere, the field seems to be moving more toward brain-computer interfaces (BCIs) for restoring sight than toward technology like vOICe.
Yes, good point -- early development is crucial. The brain is indeed most plastic when we're young. This plasticity difference between childhood and adulthood is probably why sensory substitution technologies might be most effective when introduced early.
Interesting connection between affect and consciousness! In his new book (Chapter 2) Nagel suggests that affect (the felt quality of experience being good or bad) might be the best place to start when studying consciousness. He presented this idea at a conference recently. There was an interesting pushback from an attendee who suggested a different starting point - though I'll have to check my notes to remember what she proposed instead.
I haven’t read anything promising about BCI for sight yet (and I presume largely because we’re still clueless about what sight happens to be made of) but hopefully soon!
It’s good to hear that Nagel is still trying to figure this stuff out! And wow, perhaps he’s even supportive of my own position regarding value?
I have a feeling that I already know the general theme to the pushback you recall to Nagel identifying affect as the fundamental component to consciousness. I’ve long referred to this as the evolved social tool of morality. Observe that if in the end we’re all affect-centric products of our circumstances, then our inherent selfishness should influence us to outwardly imply that people must not consider their own happiness to be paramount to their existence. It’s ironically a selfish move because having this stance tends to help us get in better favor with others. Most of the time I doubt we even grasp that we’re doing it. (But I do suspect that humanity’s most effective sales people actively acknowledge their falseness to themselves, and in order to help them continue perfecting their craft of persuasion.) Scientists should not be immune. This is why I think it’s been so difficult to found psychology upon an effective utility based premise. Thus while modern psychology may develop effective models regarding learning, memory, and other peripheral issues, the social tool of morality seems to prevent it from developing basic models of what we are that seem effective. Though I have no use for the ideas of Sigmund Freud, at least he did attempt to get fundamental. To truly do so however, it may be that psychology will need some outside axiological help.
While we're still in the early days, there is some fascinating work happening with optogenetics and BCI. Recent studies show we can stimulate specific patterns of neural activity to produce reliable visual percepts. Of course, these are still quite basic compared to natural vision, but they suggest we may be making progress in this area. I think we'll be reading much more about optogenetics in the coming years. I wrote a little about this here: https://suzitravis.substack.com/p/the-code-that-cures-blindness
I looked up my notes -- the commenter made a distinction between hedonic and non-hedonic attentional systems and asked Nagel whether looking for an attentional mechanism that was both hedonic and unconscious might be the way to go forward.
Your perspective on affect and social dynamics is fascinating. If I'm understanding correctly, you're suggesting something akin to how we might resist reducing love to cognitive mechanisms, like cognitive dissonance, even when cognitive dissonance might explain love quite well? The resistance isn't necessarily because the explanation is wrong, but because acknowledging it feels somehow wrong?
This raises such interesting questions about how our social and emotional needs might influence even our scientific theories. When you suggest that psychology needs an "effective utility based premise," are you essentially arguing that we need to be more willing to look at uncomfortable truths about our nature?
I wonder though - even if we could reduce everything to mechanisms, should we? What happens to a society where people see love primarily as cognitive dissonance? Does understanding the mechanics of something change how we experience it? Perhaps some of our "illusions" serve important functions in ways we don't fully appreciate?
Interesting vision article! The challenges for BCI here seem appropriate, though a potential way forward nonetheless.
I haven’t read Nagel’s book so I can only guess whether or not such criticism seems reasonable. Regarding my point itself however, I’ll try to be more plain.
I believe that value exclusively exists as feeling good rather than bad for anything, anywhere — a physics based element of reality (and whether EMF based or something else). Because psychology does not yet formally accept this or any other value premise, I consider it appropriate that the field has only been able to develop effective peripheral rather than central models of our nature so far. If the human functions on the basis of value, though psychologists do not formally get into what’s valuable, then it makes sense to me that psychology should currently have foundational voids.
Unless you believe that psychologists already do formally acknowledge that feeling good/bad is what constitutes good/bad, you probably wonder what I think has prevented the formal acceptance of this position? My suspicion is social backlash — our selfishness naturally encourages us to instead celebrate altruism. I’m open to other explanations though.
Apparently philosophers have gotten around such backlash by ignoring the concept of good/bad altogether to instead focus on the rightness to wrongness of behavior. While I don’t exactly mind their morality move here, that shouldn’t help scientists effectively grasp our nature itself. So I suspect that science will need instruction from a respected community of “meta scientists” regarding metaphysics and epistemology, though to effectively model ourselves in basic ways, an accepted value premise ought to be critical.
I was tempted to write something about efference copy in this one, but in the end decided to take it out. So, thanks for bringing it up in the comments. The short answer is, yes. I think those who use echolocation do have efference copies. While researching, I learned that echolocation is better for those that produce the sounds themselves (like mouth clicks or cane tapping) than for those that have the sounds produced by technology (like vOICe). To me, this suggests that there is something to the idea that action is important for perception.
When they actively generate clicks or tap a cane, the brain would send a copy of that motor command to the sensory areas -- just like it does during normal sight. It can use this copy as a prediction of the echoes it expects to get back.
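To make that prediction step concrete, here's a toy version of the comparison. The function names, the distances, and the idea of boiling it all down to a single delay number are illustrative only -- nothing here is meant as a claim about how neurons actually implement it:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def predicted_echo_delay(expected_distance_m):
    """Efference-copy style prediction: if I click now and expect a wall
    at this distance, the echo should return after a round trip."""
    return 2 * expected_distance_m / SPEED_OF_SOUND

def surprise(expected_distance_m, measured_delay_s):
    """Mismatch between the predicted and measured echo delay --
    a stand-in for the prediction error described above."""
    return measured_delay_s - predicted_echo_delay(expected_distance_m)

# Expecting a wall 2 m away, but the echo comes back a little early:
print(predicted_echo_delay(2.0))   # ~0.0117 s round trip
print(surprise(2.0, 0.009))        # negative: the object is closer than expected
```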
As for blind spots -- that's a fascinating question! While blind echolocators don't have the anatomical blind spot that sighted people have (caused by the optic nerve's exit point in the retina), they might experience "acoustic shadows" where objects block echoes from reaching them. However, I'm not sure we'd call this a 'blind spot'. Unlike the fixed position of the visual blind spot, these acoustic shadows can be overcome by moving one's head or position to get echoes from a different angle.
Wonderful, thank you. Just one further question on the blind spots and, by extension, the definition of “echolocation.” Obviously even sighted people can locate constant sources of noise outside of themselves. But suppose a source generated a constant e sharp sound from a specific, constant location. In other words, it wouldn’t be an “echo” but a constant, externally generated sound. Would that disappear from an echolocator’s “vision” after a while? I would guess it would, and those are the blind spots to which I was referring.
That's a fascinating way to think about blind spots! I never thought of sound blind spots, but it makes a lot of sense. We do "tune out" constant sounds - which could create a kind of functional blind spot. Though I suspect active echolocators might avoid this by constantly generating new signals. On this, there's probably a big difference between passive hearing and active echolocation.
Okay, let me pose the question this way: suppose there’s a blind echolocator in a room with a 4 foot tall speaker that is emitting at a non-distressing level a continuous e sharp note. After ten minutes the blind person gets up and walks towards a spot located on the other side of the speaker. Does she bump into the speaker? If she knows the room, she might not make her own sounds, and I’m wondering whether the mechanisms of efference turn the speaker into a blind spot for her. And now suppose she makes her own clicks or noises - would they override the continuous e sharp message she’s been getting from the speaker? Or would her brain filter her new input out as a type of distraction or mistake (in other words, hierarchize signals)? One final variation, since her movement itself toward the speaker might create a difference in sound volume: what if the speaker was sensitive to that and modified its volume so that the input she received was constant. Does she bump into it in this cruel experiment?
Believe it or not I discussed this question at length with my son, a musician into music theory, and when we got into thinking about ways to torment the test subject he brought up the possibility of psychoacoustics. Are you familiar with that phenomenon? In short, it's playing two notes from different sources which, because of the frequency difference, produce a third note in the mind of the listener. And the two notes you start with could be inaudible, so that the only note our poor test subject heard would be occurring only in the brain and have no external source at all. So if our subject, instead of clicking, blew a whistle with an e flat note (son had objections to e sharp) and you could generate a psychoacoustic e flat out of your speaker... well, it would give her a hard time. That's all we could figure.
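For what it's worth, the arithmetic behind that third note is usually modelled as a difference tone at |f1 − f2|. A quick sketch with made-up primary frequencies (whether truly inaudible primaries would really produce the effect is a separate question):

```python
# E flat above middle C is ~311.13 Hz (equal temperament, A4 = 440 Hz).
# The two primary frequencies below are chosen purely for illustration.

def difference_tone(f1_hz, f2_hz):
    """The 'third note' heard with no source of its own: |f1 - f2|."""
    return abs(f1_hz - f2_hz)

E_FLAT_4 = 311.13
f1 = 20_000.0          # near the top of human hearing
f2 = f1 + E_FLAT_4     # chosen so the difference lands on E flat

print(round(difference_tone(f1, f2), 2))   # 311.13 -> an E flat with no E flat source
```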
Hi Jack! I found your questions about echolocation and psychoacoustics so fascinating I did some digging! I couldn't easily find the answer, so I got chatting with a friend who knows more about auditory processing than I do.
We're not sure what would happen, but this is our guess:
In the continuous E flat speaker scenario: We think the blind person would almost certainly not bump into the speaker, even with continuous sound. Here's why: Our brains are remarkably good at detecting changes in acoustic patterns (this is especially true for people who can echolocate). As the person moves toward the speaker, we think they would perceive subtle changes in the sound patterns from the room, spatial cues from head movements, and changes in the sound's spectral qualities based on distance. We suspect these cues would persist even with constant volume, because the sound is still coming from a source located in the room. If the blind echolocator were wearing noise-cancelling headphones that eliminated all sounds except the one E flat, things might be different -- but then we'd have to worry about something else: extreme auditory deprivation can lead to visual and auditory hallucinations.
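One way to make the "cues persist even at constant volume" point concrete: in a room, the direct sound falls off with distance while the reverberant field stays roughly constant, so the balance between the two shifts as she approaches the speaker even if its output is compensated. A toy calculation under textbook free-field and diffuse-field assumptions (the 1.5 m "critical distance" is an arbitrary example, not a measured value):

```python
import math

def direct_to_reverberant_db(distance_m, critical_distance_m=1.5):
    """Toy room-acoustics cue: ratio of direct to reverberant energy in dB.
    At the critical distance the two are equal; closer in, direct sound dominates."""
    ratio = (critical_distance_m / distance_m) ** 2
    return 10 * math.log10(ratio)

for d in (4.0, 2.0, 1.0, 0.5):
    print(f"{d:>4} m from the speaker: {direct_to_reverberant_db(d):+5.1f} dB direct/reverb")
```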
Regarding whether their own clicks would override the speaker's sound: The brain actually excels at processing multiple sound sources simultaneously. So, we think the clicks might provide additional spatial information rather than competing with the speaker's sound. Our brains are pretty good at separating and processing multiple acoustic streams.
We spent a lot of time discussing the psychoacoustics idea. It's clever. In the end we decided that it probably wouldn't create the confusion we might think it would. Even with combination tones (the illusory third note), the spatial information from the original sound sources would still be present and usable for navigation. But it definitely made us think.
Fascinating stuff, Suzi. I am wondering about the meaning of the verb “to see.” The link to meaning as the product of seeing suggests to me that a blind person responding in conversation with “Oh, I see what you mean” is using the word in the same sense as a sighted person. “I plan to see my doctor tomorrow”—same thing. “They are seeing each other on a regular basis” could be like the meaning of “I see what you mean.” Interestingly, if a deaf person were to say “I hear you” to signify “I see what you mean” I suspect people might consciously sense something unusual about the expression. We don’t say “They’ve been hearing each other for a month now” or “I have an appointment to hear my dentist.” My point is “see” has become a term that functions the same way for blind as well as sighted people except for the most basic sense as in “She sees well enough to thread a needle.” Could it be that humans “knew” the brain well enough to differentiate the verb to see from the verb to hear long before neuroscientists understood the physiology and neurology and plasticity of the brain?
When we say "I see what you mean," we're talking about understanding or grasping meaning -- a process that transcends any particular sensory modality. So, it seems we use the verb 'to see' in a number of ways.
"To see" has evolved to represent understanding and perception broadly, while "hear" remains more tied to its specific sensory meaning. This might reflect something about how important vision is -- understanding how our world is structured is very important for not just getting around in the world but there's evidence that our memories might be spatial. What I mean by that is that the hippocampus -- which is very important for memory -- is also crucial for spatial navigation. It creates cognitive maps not just of physical spaces, but also of abstract relationships and concepts.
The response from a vOICe user captures this perfectly. She says "It is sight. I know what sight looks like. I remember it." When she says this, she's not just talking about sensory input -- she's talking about the higher-level process of understanding spatial relationships and patterns, which can be achieved through different sensory channels.
So yes, I think you're onto something important -- the flexibility of the verb "to see" in our language might reflect an intuitive understanding that "seeing" is really about understanding.
I’d love to read a post about the spatial aspects of the hippocampus. Makes me think of your point about differences between making your way through a carnival using a cell phone camera vs a pair of eyeballs during your commentary on Her. Thinking about say the spatial features of narratives vs information and within information say cause and effect structure vs comparison contrast. Concepts come in different shapes as well. Boy oh boy you got me thinking:)
One more thing: When we close our eyes to visualize, say, trying to get better at making free throws, we are able to focus more intently on an object than when we physically see it. I’m assuming blind people use what they’ve learned about the visual realm through echolocation and are able to visualize (though it would seem they would have to close their ears).
This behavior suggests something about consciousness itself: Visual processing might be more deeply connected to our conscious experience, and we can more easily "direct" visual attention than auditory attention. The visual system might be more integrated with executive function than the auditory???
The visual system's flexibility extends not just to processing different types of input, but also to switching between external and internal sources of information. We can literally "turn off" external visual input in a way we can't do with hearing. We have eyelids.
You're right that it's easier to shut off visual input than auditory input -- we have eyelids but not earlids. But I wonder whether blind echolocators can control their input by choosing when they use mouth clicks (or tap their cane). In this way, they seem to have (at least some) control similar to eyelids. They can choose when to sample their environment through self-generated sounds.
Yep, I think the connection you draw to consciousness is correct. There's evidence that visual imagery and visual perception use many of the same brain networks. When blind people use echolocation, they activate their "visual" cortex. I think this is evidence that vision is really about spatial processing and pattern recognition -- which is highly integrated with our conscious experience. It's difficult to think about a conscious experience without that conscious experience being spatial in some way. For example, colour is always colour of something -- and that something is always in some spatial location.
This active control over sensory sampling might be key to why self-generated echolocation (through clicking or cane tapping) works better than technological solutions that generate the sounds for you. Just as we can choose where to look and when to close our eyes, expert echolocators can choose when and where to direct their clicks to build up a mental image of their environment.
Your point about executive function connects nicely to this. Both vision and active echolocation are highly controllable -- we can choose where to look/click and when to sample our environment. This might indeed suggest that spatial understanding, regardless of sensory modality, is almost certainly integrated with executive control.
It’s certainly a lot easier to open the eyes and see than to calculate the rhythm, frequency, and direction of clicks or taps. What an amazing thing this brain is. Vision does intuitively seem to be the best evolutionary option. Hawks see food on the ground from miles away. I need to spend some time with this interchange. Thank you for sharing your expertise with me!
Northern Sea Robin (a fish) has evolved taste buds on its legs to detect buried prey. The taste buds must be wired up appropriately even tho they're in a different place (and are presumably routed via different vertebrae). This is the converse problem to reusing hearing and wiring it up to vision.
Makes me wonder if originally everything connects to everything and then the unused connections are lost (synaptic pruning); otherwise you've got a complicated engineering problem to solve each time.
That's really cool! I didn't know that about Sea Robin!
When a baby is born, it has relatively few neural connections. What's amazing is that in the first few years of life, an infant's brain forms new connections at a rate of 700-1,000 per second! After that, then, yes, there's a lot of synaptic pruning that goes on.
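Just to put that rate in perspective, a quick back-of-the-envelope using the figure above (the three-year window is my own rough assumption):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

low, high = 700, 1_000   # new connections per second (figure quoted above)
years = 3                # rough early-childhood window -- my own assumption

for rate in (low, high):
    total = rate * SECONDS_PER_YEAR * years
    print(f"{rate}/s for {years} years is roughly {total / 1e9:.0f} billion new connections")
```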
The thing I find particularly fascinating about neurons is the synapse -- they aren't actually physically connected. If they were wired up like a typical electrical circuit, the death of one neuron could disrupt the entire downstream network. Instead, the synaptic gap architecture of neurons allows for remarkable flexibility - if one neuron dies, other neurons can easily form new connections with its former targets. The synaptic architecture makes the brain incredibly adaptable!
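Here's a toy way to picture that flexibility -- neurons as nodes in a directed graph, where losing one lets its inputs form new connections onto its former targets. Purely illustrative; real synaptic remodelling is nothing this tidy:

```python
# Neurons as a directed graph: each key synapses onto the neurons in its set.
network = {
    "A": {"B"},        # A synapses onto B
    "B": {"C", "D"},   # B synapses onto C and D
    "C": set(),
    "D": set(),
}

def lose_neuron(net, dead):
    """Remove a neuron and let its inputs reconnect to its former targets."""
    targets = net.pop(dead, set())
    for outputs in net.values():
        if dead in outputs:
            outputs.discard(dead)
            outputs.update(targets)   # new synapses bridge the gap
    return net

lose_neuron(network, "B")
print(network["A"])   # A now synapses directly onto C and D
```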
I would think the resilience is more due to parallelism than the nature of the synapse. You can't stop walking while your neurons find the right connections to rebuild the circuit.
The interesting thing is that babies build brains that are remarkably similar. It's not a free for all and each bit of brain must have a preassigned function.
Presumably our distant ancestors were segmented with each sense organ or body part attached to a different segment and this has been retained through cephalisation.
Clearly segmented organisms that have different structures on different segments must have a signalling system so each segment knows which one it is, so if you have eyes on segment 2 it has to know that it is segment 2 and we've apparently retained this. Good old Hox!
I wonder if all those retinal maps originated from eyes on different segments?
It's just dawned on me: the reason we need the blood-brain barrier is cos blood circulation is a great homogeniser, but the brain needs to maintain diverse chemical concentration gradients (guidance cues) to enable axons to find their way around.
I’m intrigued by the idea of the visual cortex being about building maps rather than merely seeing. I wonder if you’ll be telling us about what happens next — after the visual cortex has done its job of seeing things.
I have what feels like a visual map of the world around me - even of the things I can’t see because they are behind me or because I have my eyes closed. I can still find the light switch in the dark because I can ‘see’ where it is on my map. Does all this happen in the visual cortex? Or is it somewhere else in the brain?
What about recognising faces or written words? Or elephants? What does the visual cortex send to the temporal lobe to say “Here is a word.”? Does it send something like “this might be a word. See if you can figure out what it is”? Or does it send something closer to raw visual data?
Thanks for this fascinating question! (and I'm sorry for the slow reply).
That sense you describe of having a map even with your eyes closed is interesting. The map in the visual cortex is not the only kind of map we have. So, I'm curious... when you reach for that light switch in the dark, does it feel like you have a visual experience of where the light switch is? Or is it more like you are drawing on a memory? The reason I ask is because people differ in their ability to visualise. But in any case, I suspect that the hippocampus is very much involved in this type of light switch memory -- there are specialised cells in the hippocampus that are very interested in where you are in space.
On your question about what the visual cortex sends to other brain regions -- it's probably best to think about communication between areas not as a one-way street. It's more like an ongoing conversation between brain regions. When you see what might be a word (or an elephant), other parts of your brain are already sending their predictions to bias the incoming information, even as the visual information is still being processed.
So, the visual cortex sends sensory data, but not all data is sent with equal priority -- it's biased. And this bias depends on things like your expectations, goals, and attention.
> when you reach for that light switch in the dark, does it feel like you have a visual experience of where the light switch is? Or is it more like you are drawing on a memory?
A couple of years ago, I did that little survey of my friends where you ask them to visualise a beach scene with their eyes closed. Out of all my friends, only two could actually “see” the beach. I definitely can’t. I can’t really visualise anything from memory.
But when I close my eyes in my bedroom right now, I can ‘see’ exactly where everything is. There is the door. There is the piano. There is the light switch. There are the photos on the wall. I can't ‘see’ any features of anything and I have nothing like an image. But I know exactly where everything is. I guess you could call it a map but it’s in 3D and it is very accurate.
I knew a little about the prediction idea but I guess I didn't think about the idea that it works for words or for different concepts (like faces and elephants) that are handled in different parts of the brain. That poor hippocampus must be very busy!
I wish I had some time to study this in detail. It's so fascinating!
Great post! It's hard to imagine riding a bicycle without vision. If I were that guy's mother I would be very nervous. On the other hand, while I was listening to this I was roasting coffee and ended up burning the beans because I should have been relying on my hearing rather than my eyes. Who knows which way is more reliable.
I appreciate your bringing up the difference between raw sensory experience and what-it's-like-ness. It's so often assumed these are the same thing. When William James talks of infants experiencing the world as a blooming, buzzing confusion, that leans heavily on the notion that raw sensory data make up our experiences (maybe like building blocks?) and we come along and interpret or impose structure on them. But when have we ever experienced raw sensory data? Maybe it's not possible.
"The question is, does this prove anything about the nature of reality or the limits of science?"
Great question. I doubt it proves anything—hardly anything does!—but it does show how meaning is suffused in experience, perhaps inextricably so, and we should probably keep that in mind in our theories about reality.
I keep forgetting that I can share comments as Notes (and then I turn around and complain, "What am I supposed to do with Notes?") Go figure.
Thanks for this wonderful comment, Tina! (and I'm sorry for the slow reply).
It's a common assumption, isn't it? -- that we somehow start with raw sensations and then layer meaning on top. But isn't it difficult to imagine what truly raw sensory data would be like? It really seems like that blooming, buzzing confusion James describes must already have some structure to it. Are we ever really a blank slate?
Now, you've got me thinking... is meaning baked into experience from the start? If so, what does that tell us about how we make sense of the world?
(And hey, I get you with the Notes thing. I feel like I'm always the last one to figure out how to best use platform features!)
A fascinating description of how seeing is much more than just processing light.
It's also worth noting that we don't see the pattern that hits our retina. That pattern has high acuity (resolution) and color in the center but becomes increasingly lower acuity and colorless as we move closer to the periphery. And we have a hole in the center where the optic nerve connects the retina o the brain. Our impression of a rich visual field is a construction, possibly a prediction framework with incoming signals acting as error correction. It shouldn't surprise us that it could be constructed from alternate pathways.
On seeing through hearing, I wonder if anyone has tried to incorporate color into something like that. Probably too much information to wedge in, particularly if we want to give it the same saliency as reds and yellows have in comparison to greens and blues. If we did manage it, it seems like a blind person could come to form many of the same learned associations with color that we do. So they could come to understand what a sighted person means by red being associated with hotness, or blue with coolness.
Which would raise the question: are they now having the experience of redness or blueness? If not, what would they be missing?
Excellent, as always Suzi!
Hey Mike!
Great point -- what we experience is more about constructing useful models than faithfully reproducing sensory data?
Learning colour through sound raises interesting questions. If a blind person learned all the same associations with "red" that sighted people have (heat, stop signs, blood, etc.) and could reliably identify red things through sound-based vision, would they be experiencing redness? Would that be functional equivalent -- and is there a difference? Is the subjective "feel" of redness separable from its functional role it plays in our experience and behaviour?
One thing I find fascinating is how people describe their experience with technologies like vOICe. At first, they hear just an annoying buzz of sounds. But quickly those sounds start to mean something - they become signifiers rather than just sounds.
It reminds me of the difference between hearing a language you understand versus one you don't. When you understand the language, you don't really notice the raw sounds anymore - you just grasp the meaning. The sounds become (somewhat) transparent to their significance.
Somewhere deep inside my brain my awareness is curled up, nice and warm, on a sofa, sipping a bottomless cup of endorphins, and watching a giant screen showing a two and half D simulation based on the visual data abstracted from my environment. It's also connected to further simulations of my body and the make-believe worlds of sound, smell and taste. All this feeds into a more vague 3D map of my surroundings, mainly built from memory of what's been seen, but I also get inklings from sound cues, like the difference in the sound of footsteps on entering a large room from a corridor, or going outside.
The interesting part is the awareness's interaction with the semi-autonomous autopilot that handles day to day activities, like walking, (even talking sometimes). The awareness can seemlessly take-over foot placement and withdraw again without breaking your rhythm. And you don't even think about balance.
I assume your experience is likewise.
While I love the idea of my awareness curled up on a sofa watching a giant screen, I can't help but be reminded of the homunculus fallacy! But I get your point.
The seamless switching between conscious and unconscious has fascinated me for years. When we switch from doing something unconsciously to consciously, what's actually going on in that switch? You've hit on one of my most favourite questions to try to tackle -- a question I spent many years researching.
Again an informative and thought provoking essay. Could not wish for more. Thank you, John.
Thanks, John!
Maybe it's me or I'm imagining it, but when I hear a loud noise with my eyes closed I frequently see a flash of light with my eyes.
I think probably the cortexes do have specializations, because (I think) in normal development the eyes get wired to the visual cortex and ears to the audio. However, because information is widely distributed in the brain likely some of the audio goes to the visual and vice versa. Those connections would grow if one of the cortexes lost its original source of input, so it could be possible for the audio cortex to take on visual processing.
Mriganka Sur rewired new-born ferret brains so that the visual input went to where auditory input normally is processed. The result was the part of the brain thought to be only able to process auditory input developed fully functional visual processing capability.
The brain is highly interconnected -- you're totally correct on that. And, yep, it's also highly plastic -- constantly changing. Every time you do anything -- so all the time -- it's changing.
That flash of light you see with loud noises is fascinating! It's called a phosphene, and it's a great example of how our senses aren't as separate as we might think. The visual and auditory systems have lots of connections and cross-talk.
The ferret study you mention is mind-blowing, isn't it? It really challenges the idea that brain regions are rigidly specialised for specific senses. Instead, it seems like these regions can learn to process whatever input they receive, especially early in development. This fits with what we know about brain plasticity -- our neural circuits are constantly being reshaped by experience.
Since similar pyramidal neuron structures span across the cortex, it makes sense that there would be a degree of interchange capability for particular functions. I've wondered what to make of the fact that the allocortex has fewer layers than the neocortex. Perhaps simply a holdover from evolution or perhaps indicative of some unique functionality. If you lose a hippocampus like H.M., another part of cortex isn't going to take over for it.
What an interesting thought! When you mention it might be "a holdover from evolution," it makes me wonder - what advantages might this simpler (or less layered) structure offer? Could there be something about having fewer layers that's actually optimal for the functions of regions like the hippocampus?
It's fascinating how the brain balances standardisation with specialisation, isn't it!?
I would be mostly guessing, but it could be that the structures of the allocortex are more specialized and less layers work well for what they do; whereas, the neocortex is less specialized as can be seen from the fact one region can take over for other regions. From an evolutionary standpoint, expansion of layers in the allocortex might have introduced dysfunctional mutations, so instead of adding new capabilities to the allocortex, expansion on top of it provided better selective advantage.
Rocky in Project Hail Mary also ‘see’s’ through echolocation. Excellent book; excellent article. Careful Suzi, soon they will be making movies of your work.
I loved that book! It was my favourite read of 2021.
Hahaha -- that would be quite something! Though I think I'll stick to writing non-fiction and leave fiction for those with better imaginations than I 😉
Thanks for another lovely one Suzi!
I bet one big advantage of the clicking echolocation that people use, is that it should still leave the standard sense of hearing quite intact. Conversely the sounds from the vOICe app seem pretty overpowering. Of course someone could turn the vOICe sounds down to get more standard sound information as well, though probably at the expense of not getting quite as much nuanced vOICe information. So there ought to be a trade off between the two. Maybe it would be most effective to alternate between the vOICe information and just standard hearing as appropriate from time to time? But what do blind people themselves now find? That would be where the rubber actually meets the road. Outfitting someone with such technology today, and even a baby, should not be expensive. Unless there are major problems with this particular technology then I’d expect blind people to now be using it quite a lot.
Much of this article addresses the concept of brain plasticity. Furthermore sometimes there are time limits, which I presume is why baby development should be important here. This reminds me of the tragic case of “feral children”. Beyond horrific psychological trauma, apparently without exposure they lose the potential to develop any of our natural languages — their brains appropriate those areas for other things. So are people, and even blind babies, now being fitted with such technology? And hopefully even babies are taught to turn their vOICe system on and off for samples when they’d like information about what’s around them rather than standard sound information.
Anyway back to Nagel, I suspect that even he couldn’t quite nail down what he was getting at with “something it is like”. Perhaps he just figured that brain states weren’t appropriate for this mysterious thing? Jackson too. But perhaps I can enunciate what they could not. Perhaps it’s the goodness to badness of existing? In the end that’s all I think it is. I consider this to essentially be the fuel which drives the conscious form of function, and somewhat like the electricity that drives our computers.
Thanks, Eric! You always raise fascinating points.
Visually impaired people often develop enhanced hearing. And vOICe users seem to learn to process its sounds as meaningful information rather than noise. But, good point, the technology does have some major limitations -- especially in noisy environments. This might be why, as you note elsewhere, the field seems to be moving more toward brain-computer interfaces (BCIs) for restoring sight, than technology like vOICe.
Yes, good point -- early development is crucial. The brain is indeed most plastic when we're young. This plasticity difference between childhood and adulthood is probably why sensory substitution technologies might be most effective when introduced early.
Interesting connection between affect and consciousness! In his new book (Chapter 2) Nagel suggests that affect (the felt quality of experience being good or bad) might be the best place to start when studying consciousness. He presented this idea at a conference recently. There was an interesting pushback from an attendee who suggested a different starting point - though I'll have to check my notes to remember what she proposed instead.
I haven’t read anything promising about BCI for sight yet (and I presume largely because we’re still clueless about what sight happens to be made of) but hopefully soon!
It’s good to hear that Nagel is still trying to figure this stuff out! And wow, perhaps he’s even supportive of my own position regarding value?
I have a feeling that I already know the general theme to the pushback you recall to Nagel identifying affect as the fundamental component to consciousness. I’ve long referred to this as the evolved social tool of morality. Observe that if in the end we’re all affect-centric products of our circumstances, then our inherent selfishness should influence us to outwardly imply that people must not consider their own happiness to be paramount to their existence. It’s ironically a selfish move because having this stance tends to help us get in better favor with others. Most of the time I doubt we even grasp that we’re doing it. (But I do suspect that humanity’s most effective sales people actively acknowledge their falseness to themselves, and in order to help them continue perfecting their craft of persuasion.) Scientists should not be immune. This is why I think it’s been so difficult to found psychology upon an effective utility based premise. Thus while modern psychology may develop effective models regarding learning, memory, and other peripheral issues, the social tool of morality seems to prevent it from developing basic models of what we are that seem effective. Though I have no use for the ideas of Sigmund Freud, at least he did attempt to get fundamental. To truly do so however, it may be that psychology will need some outside axiological help.
While we're still in the early days, there is some fascinating work happening with optogenetics and BCI. Recent studies shows we can stimulate specific patterns of neural activity to produce reliable visual percepts. Of course, these are still quite basic compared to natural vision, but they suggest we may be making progress in this area. I think we'll be reading much more about optogenetics in the coming years. I wrote a little about this here: https://suzitravis.substack.com/p/the-code-that-cures-blindness
I looked up my notes -- the commenter made a distinction between hedonic and non-hedonic attentional systems and asked Nagel whether looking for an attentional mechanism that were both hedonic and unconscious might be the way to go forward.
Your perspective on affect and social dynamics is fascinating. If I'm understanding correctly, you're suggesting something akin to how we might resist reducing love to cognitive mechanisms, like cognitive dissonance, even when cognitive dissonance might explain love quiet well? The resistance isn't necessarily because the explanation is wrong, but because acknowledging it feels somehow wrong?
This raises such interesting questions about how our social and emotional needs might influence even our scientific theories. When you suggest that psychology needs an "effective utility based premise," are you essentially arguing that we need to be more willing to look at uncomfortable truths about our nature?
I wonder though - even if we could reduce everything to mechanisms, should we? What happens to a society where people see love primarily as cognitive dissonance? Does understanding the mechanics of something change how we experience it? Perhaps some of our "illusions" serve important functions in ways we don't fully appreciate?
Interesting vision article! The challenges for BCI here seem appropriate, though a potential way forward nonetheless.
I haven’t read Nagel’s book so I can only guess whether or not such criticism seems reasonable. Regarding my point itself however, I’ll try to be more plain.
I believe that value exclusively exists as feeling good rather than bad for anything, anywhere — a physics based element of reality (and whether EMF based or something else). Because psychology does not yet formally accept this or any other value premise, I consider it appropriate that the field has only been able to develop effective peripheral rather than central models of our nature so far. If the human functions on the basis of value, though psychologists do not formally get into what’s valuable, then it makes sense to me that psychology should currently have foundational voids.
Unless you believe that psychologists already do formally acknowledge that feeling good/bad is what constitutes good/bad, you probably wonder what I think has prevented the formal acceptance of this position? My suspicion is social backlash — our selfishness naturally encourages us to instead celebrate altruism. I’m open to other explanations though.
Apparently philosophers have gotten around such backlash by ignoring the concept of good/bad altogether to instead focus on the rightness to wrongness of behavior. While I don’t exactly mind their morality move here, that shouldn’t help scientists effectively grasp our nature itself. So I suspect that science will need instruction from a respected community of “meta scientists” regarding metaphysics and epistemology, though to effectively model ourselves in basic ways, an accepted value premise ought to be critical.
Fascinating and great as always. Do the echolocating blind have efference copies? Can the blind have blind spots?
Thanks, Jack! And great comment, as always.
I was tempted to write something about efference copy in this one, but in the end decided to take it out. So, thanks for bringing it up in the comments. The short answer is, yes. I think those who use echolocation do have efference copies. While researching, I learned that echolocation is better for those that produce the sounds themselves (like mouth clicks or cane tapping) than for those that have the sounds produced by technology (like vOICe). To me, this suggests that there is something to the idea that action is important for perception.
When they actively generate clicks or tap a cane, the brain would send a copy of that motor command to the sensory areas -- just like it does during normal sight. It can use this copy as a prediction of the echos it expects to get back.
As for blind spots -- that's a fascinating question! While blind echolocators don't have the anatomical blind spot that sighted people have (caused by the optic nerve's exit point in the retina), they might experience "acoustic shadows" where objects block echoes from reaching them. However, I'm not sure we'd call this a 'blind spot'. Unlike the fixed position of the visual blind spot, these acoustic shadows can be overcome by moving one's head or position to get echoes from a different angle.
Wonderful, thank you. Just one further question on the blind spots and, by extension, the definition of “echolocation.” Obviously even sighted people can locate constant sources of noise outside of themselves. But suppose a sound generated a constant e sharp sound from a specific, constant location. In other words, it wouldn’t be an “echo” but a constant, externally generated sound. Would that disappear from an echolocator’s “vision” after a while? I would guess it would, and those are the blind spots to which I was referring.
That's a fascinating way to think about blind spots! I never thought of sound blind spots, but it makes a lot of sense. We do "tune out" constant sounds - which could create a kind of functional blind spot. Though I suspect active echolocators might avoid this by constantly generating new signals. On this, there's probably a big difference between passive hearing and active echolocation.
Okay, let me pose the question this way: suppose there’s a blind echolocator in a room with a 4 foot tall speaker that is emitting at a non-distressing level a continuous e sharp note. After ten minutes the blind person gets up and walks towards a spot located on the other side of the speaker. Does she bump into the speaker? If she knows the room, she might not make her own sounds, and I’m wondering whether the mechanisms of efference turn the speaker into a blind spot for her. And now suppose she makes her own clicks or noises - would they override the continuous e sharp message she’s been getting from the speaker? Or would her brain filter her new input out as a type of distraction or mistake (in other words, hierarchize signals)? One final variation, since her movement itself toward the speaker might create a difference in sound volume: what if the speaker was sensitive to that and modified its volume so that the input she received was constant. Does she bump into it in this cruel experiment?
Believe it or not I discussed this question at length with my son, a musician into music theory, and when we got into thinking about ways to torment the test subject he brought up the possibility of psychoacoustics. Are you familiar with that phenomenon? In short, it's playing two notes from different sources which, because of the wavelength differential produce a third note in the mind of the listener. And the two notes you start with could be inaudible, so that the only note our poor test subject heard would be occurring only in the brain and have no external source at all. So if our subject, instead of clicking, blew a whistle with an e flat note (son had objections to e sharp) and you could generate a psychoacoustic e flat out of your speaker... well, it would give her a hard time. That's all we could figure.
Hi Jack! I found your questions about echolocation and psychoacoustics so fascinating I did some digging! I couldn't easily find the answer, so I got chatting with a friend who knows more about auditory processing than I do.
We're not sure what would happen, but this is our guess:
In the continuous E flat speaker scenario: We think the blind person would almost certainly not bump into the speaker, even with continuous sound. Here's why: Our brains are remarkably good at detecting changes in acoustic patterns (this is especially true for people who can echolocate). As the person moves toward the speaker, we think they would perceive subtle changes in the
sound patterns from the room, spatial cues from head movements, changes in the sound's spectral qualities based on distance. We suspect that these cues would persist even with constant volume. This is because the sound is coming from the speaker. If the blind echolocation were wearing noise cancelling headphones that eliminated all sounds except the one E Flat, things might be different -- but then we have to worry about something else -- extreme auditory deprivation can lead to visual and auditory hallucinations.
Regarding whether their own clicks would override the speaker's sound: The brain actually excels at processing multiple sound sources simultaneously. So, we think the clicks might provide additional spatial information rather than competing with the speaker's sound. Our brains are pretty good at separating and processing multiple acoustic streams.
We spent a lot of time discussing the psychoacoustics idea. It's clever. In the end we decided that we think it probably wouldn't create the confusion we might think it would. Even with combination tones (the third note illusory tone), the spatial information from the original sound sources would still be present and usable for navigation. But it definitely made us think.
Fascinating stuff, Suzi. I am wondering about the meaning of the verb “to see.” The link to meaning as the product of seeing suggests to me that a blind person responding in conversation with “Oh, I see what you mean” is using the word in the same sense as a sighted person. “I plan to see my doctor tomorrow”—same thing. “They are seeing each other on a regular basis” could be like the meaning of “I see what you mean.” Interestingly, if a deaf person were to say “I hear you” to signify “I see what you mean,” I suspect people might consciously sense something unusual about the expression. We don’t say “They’ve been hearing each other for a month now” or “I have an appointment to hear my dentist.” My point is that “see” has come to function the same way for blind as well as sighted people, except in its most basic sense, as in “She sees well enough to thread a needle.” Could it be that humans “knew” the brain well enough to differentiate the verb to see from the verb to hear long before neuroscientists understood the physiology, neurology, and plasticity of the brain?
Brilliant, I love this.
When we say "I see what you mean," we're talking about understanding or grasping meaning -- a process that transcends any particular sensory modality. So, it seems we use the verb 'to see' in a number of ways.
"To see" has evolved to represent understanding and perception broadly, while "hear" remains more tied to its specific sensory meaning. This might reflect something about how important vision is -- understanding how our world is structured is very important for not just getting around in the world but there's evidence that our memories might be spatial. What I mean by that is that the hippocampus -- which is very important for memory -- is also crucial for spatial navigation. It creates cognitive maps not just of physical spaces, but also of abstract relationships and concepts.
The response from a vOICe user captures this perfectly. She says "It is sight. I know what sight looks like. I remember it." When she says this, she's not just talking about sensory input -- she's talking about the higher-level process of understanding spatial relationships and patterns, which can be achieved through different sensory channels.
So yes, I think you're onto something important -- the flexibility of the verb "to see" in our language might reflect an intuitive understanding that "seeing" is really about understanding.
I’d love to read a post about the spatial aspects of the hippocampus. Makes me think of your point, during your commentary on Her, about the differences between making your way through a carnival using a cell phone camera vs. a pair of eyeballs. Thinking about, say, the spatial features of narratives vs. information, and within information, say, cause-and-effect structure vs. comparison-contrast. Concepts come in different shapes as well. Boy oh boy, you got me thinking :)
One more thing: When we close our eyes to visualize, say, trying to get better at making free throws, we are able to focus more intently on an object than when we physically see it. I’m assuming blind people use what they’ve learned about the visual realm through echolocation and are able to visualize (though it would seem they would have to close their ears).
This suggests something about consciousness itself: visual processing might be more deeply connected to our conscious experience, and we can more easily "direct" visual attention than auditory attention. Might the visual system be more integrated with executive function than the auditory system?
The visual system's flexibility extends not just to processing different types of input, but also to switching between external and internal sources of information. We can literally "turn off" external visual input in a way we can't do with hearing. We have eyelids.
You're right that it's easier to shut off visual input than auditory input -- we have eyelids but not earlids. But I wonder whether blind echolocators can control their input by choosing when they use mouth clicks (or tap their cane). In this way, they seem to have (at least some) control similar to eyelids. They can choose when to sample their environment through self-generated sounds.
Yep, I think the connection you draw to consciousness is correct. There's evidence that visual imagery and visual perception use many of the same brain networks. When blind people use echolocation, they activate their "visual" cortex. I think this is evidence that vision is really about spatial processing and pattern recognition -- which is highly integrated with our conscious experience. It's difficult to think about a conscious experience without that conscious experience being spatial in some way. For example, colour is always colour of something -- and that something is always in some spatial location.
This active control over sensory sampling might be key to why self-generated echolocation (through clicking or cane tapping) works better than technological solutions that generate the sounds for you. Just as we can choose where to look and when to close our eyes, expert echolocators can choose when and where to direct their clicks to build up a mental image of their environment.
Your point about executive function connects nicely to this. Both vision and active echolocation are highly controllable -- we can choose where to look/click and when to sample our environment. This suggests that spatial understanding, regardless of sensory modality, is tightly integrated with executive control.
It’s certainly a lot easier to open the eyes and see than to calculate the rhythm, frequency, and direction of clicks or taps. What an amazing thing this brain is. Vision does intuitively seem to be the best evolutionary option. Hawks see food on the ground from miles away. I need to spend some time with this interchange. Thank you for sharing your expertise with me!
Cell Press. "These fish use legs to taste the seafloor." ScienceDaily. ScienceDaily, 26 September 2024. <www.sciencedaily.com/releases/2024/09/240926132111.htm>.
The Northern Sea Robin (a fish) has evolved taste buds on its legs to detect buried prey. The taste buds must be wired up appropriately even though they're in a different place (and are presumably routed via different vertebrae). This is the converse problem to reusing hearing and wiring it up to vision.
Makes me wonder if originally everything connects to everything and then the unused connections are lost (synaptic pruning); otherwise you've got a complicated engineering problem to solve each time.
That's really cool! I didn't know that about Sea Robin!
When a baby is born, it has relatively few neural connections. What's amazing is that in the first few years of life, an infant's brain forms new connections at a rate of 700-1,000 per second! After that, yes, there's a lot of synaptic pruning that goes on.
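To put that rate in perspective, here's a quick back-of-the-envelope calculation using those figures -- the numbers are rough, illustrative estimates, nothing more.

```python
# Rough arithmetic using the 700-1,000 new connections per second figure above.
# Purely illustrative; the real rate varies by age and brain region.

SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

for rate_per_second in (700, 1_000):
    per_day = rate_per_second * SECONDS_PER_DAY
    print(f"{rate_per_second}/s -> roughly {per_day / 1e6:.0f} million new connections per day")
```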
The thing I find particularly fascinating about neurons is the synapse -- they aren't actually physically connected. If they were wired up like a typical electrical circuit, the death of one neuron could disrupt the entire downstream network. Instead, the synaptic gap architecture of neurons allows for remarkable flexibility - if one neuron dies, other neurons can easily form new connections with its former targets. The synaptic architecture makes the brain incredibly adaptable!
I would think the resilience is more due to parallelism than the nature of the synapse. You can't stop walking while your neurons find the right connections to rebuild the circuit.
The interesting thing is that babies build brains that are remarkably similar. It's not a free-for-all; each bit of brain must have a preassigned function.
Presumably our distant ancestors were segmented with each sense organ or body part attached to a different segment and this has been retained through cephalisation.
Clearly, segmented organisms that have different structures on different segments must have a signalling system so each segment knows which one it is: if you have eyes on segment 2, it has to know that it is segment 2. We've apparently retained this. Good old Hox!
I wonder if all those retinal maps originated from eyes on different segments?
It's just dawned on me: the reason we need the blood-brain barrier is cos blood circulation is a great homogeniser, but the brain needs to maintain diverse chemical concentration gradients (guidance cues) to enable axons to find their way around.
I’m intrigued by the idea of the visual cortex being about building maps rather than merely seeing. I wonder if you’ll be telling us about what happens next — after the visual cortex has done its job of seeing things.
I have what feels like a visual map of the world around me - even of the things I can’t see because they are behind me or because I have my eyes closed. I can still find the light switch in the dark because I can ‘see’ where it is on my map. Does all this happen in the visual cortex? Or is it somewhere else in the brain?
What about recognising faces or written words? Or elephants? What does the visual cortex send to the temporal lobe to say “Here is a word”? Does it send something like “this might be a word. See if you can figure out what it is”? Or does it send something closer to raw visual data?
Thanks for this fascinating question! (and I'm sorry for the slow reply).
That sense you describe of having a map even with your eyes closed is interesting. The visual cortex doesn't hold the only kind of map we have. So, I'm curious... when you reach for that light switch in the dark, does it feel like you have a visual experience of where the light switch is? Or is it more like you are drawing on a memory? The reason I ask is that people differ in their ability to visualise. But in any case, I suspect the hippocampus is very much involved in this type of light-switch memory -- there are specialised cells in the hippocampus (place cells) that are very interested in where you are in space.
On your question about what the visual cortex sends to other brain regions -- it's probably best not to think of the communication between areas as a one-way street. It's more like an ongoing conversation. When you see what might be a word (or an elephant), other parts of your brain are already sending their own inputs (or predictions) to bias that processing, even as the visual information is still coming in.
So, the visual cortex sends sensory data, but not all data is sent with equal priority -- it's biased. And this bias depends on things like your expectations, goals, and attention.
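If a toy example helps make "biased" concrete, here's a minimal sketch -- not a model of real cortical circuitry, just an illustration of how a top-down expectation can tilt how ambiguous bottom-up evidence gets read. All of the numbers and the weighting scheme are made up for illustration.

```python
# Toy illustration of top-down bias: a prediction is blended with ambiguous
# bottom-up evidence. Nothing here models real neurons; the weights and
# values are arbitrary choices for the example.

def combine(prediction: float, evidence: float, prediction_weight: float) -> float:
    """Blend a top-down prediction with bottom-up evidence.

    A prediction_weight near 1.0 means expectations dominate;
    near 0.0 means the raw sensory evidence dominates.
    """
    return prediction_weight * prediction + (1.0 - prediction_weight) * evidence

# "How word-like is this blob of ink?" on a 0-1 scale.
evidence = 0.4  # an ambiguous bottom-up signal

reading_a_page = combine(prediction=0.9, evidence=evidence, prediction_weight=0.6)
photo_album = combine(prediction=0.1, evidence=evidence, prediction_weight=0.6)

print(f"While reading a page:  {reading_a_page:.2f}")  # nudged toward "it's a word"
print(f"While browsing photos: {photo_album:.2f}")     # nudged away from "it's a word"
```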
Thank you. Wonderful as always, Suzi.
> when you reach for that light switch in the dark, does it feel like you have a visual experience of where the light switch is? Or is it more like you are drawing on a memory?
A couple of years ago, I did that little survey of my friends where you ask them to visualise a beach scene with their eyes closed. Out of all my friends, only two could actually “see” the beach. I definitely can’t. I can’t really visualise anything from memory.
But when I close my eyes in my bedroom right now, I can ‘see’ exactly where everything is. There is the door. There is the piano. There is the light switch. There are the photos on the wall. I can't ‘see’ any features of anything and I have nothing like an image. But I know exactly where everything is. I guess you could call it a map but it’s in 3D and it is very accurate.
I knew a little about the prediction idea, but I guess I didn't think about it working for words or for different concepts (like faces and elephants) that are handled in different parts of the brain. That poor hippocampus must be very busy!
I wish I had some time to study this in detail. It’s so fascinating!
Great post! It's hard to imagine riding a bicycle without vision. If I were that guy's mother I would be very nervous. On the other hand, while I was listening to this I was roasting coffee and ended up burning the beans because I should have been relying on my hearing rather than my eyes. Who knows which way is more reliable.
I appreciate your bringing up the difference between raw sensory experience and what-it's-like-ness. It's so often assumed these are the same thing. When William James talks of infants experiencing the world as a blooming, buzzing confusion, that leans heavily on the notion that raw sensory data make up our experiences (maybe like building blocks?) and that we come along and interpret or impose structure on them. But when have we ever experienced raw sensory data? Maybe it's not possible.
"The question is, does this prove anything about the nature of reality or the limits of science?"
Great question. I doubt it proves anything—hardly anything does!—but it does show how meaning is suffused in experience, perhaps inextricably so, and we should probably keep that in mind in our theories about reality.
I keep forgetting that I can share comments as Notes (and then I turn around and complain, "What am I supposed to do with Notes?") Go figure.
Thanks for this wonderful comment, Tina! (and I'm sorry for the slow reply).
It's a common assumption, isn't it? -- that we somehow start with raw sensations and then layer meaning on top. But isn't it difficult to imagine what truly raw sensory data would be like? It really seems like that blooming, buzzing confusion James describes must already have some structure to it. Are we ever really a blank slate?
Now, you've got me thinking... is meaning baked into experience from the start? If so, what does that tell us about how we make sense of the world?
(And hey, I get you with the Notes thing. I feel like I'm always the last one to figure out how to best use platform features!)