So if you have motion blindness, you're effectively seeing life as a time-lapse video. Except less coherent, since you have no sense of transition. So maybe it's more like holding an old strip of camera film, with its individual frames, in front of you.
What's curious to me is that, as far as I understand your description, this "blindness" only applies to your VISUAL perception. So if you e.g. close your eyes, you still retain a coherent understanding of your body and limbs moving through space and time?
If that's the case, it must be a completely mind-warping experience to observe yourself pouring a cup of coffee with your eyes open. Your other senses feel the full motion of you picking up the kettle, tipping it, and filling your cup, while your eyes are only seeing sporadic freeze-frames of that experience?
And if that's true, it almost means that you're better off with your eyes closed if you want to perform fluid actions?
Or does motion blindness also affect other ways to perceive motion along with the visual cues?
Maybe I'm overthinking this one.
Hey Daniel!
Yep, that is correct. The 'blindness' only applies to vision. It really would be the strangest of experiences.
I probably should have noted this in the article, but much of what we know about motion blindness comes from just one patient. She described her experience of pouring a cup of tea as seeing the tea "frozen in air" - I guess this would be like looking at a glacier of tea. She would be surprised when the cup would overflow.
I believe she learned to do things to avoid this deficit. For example, she would put the tip of her finger in the cup so she could feel when the cup was almost full. And when she was crossing the street, she would use sound to estimate the distance of moving cars.
Interestingly, there are some studies that have tried to replicate this effect in normal healthy people using TMS (Transcranial Magnetic Stimulation), a technique which can cause short-term disruption of brain activity. These studies found that if they applied TMS over area MT, the participants had difficulty judging movement.
Very interesting! And sounds like I wasn't too far off in how I was picturing this sense-confusion in my mind.
I have a BA in Psychology from years ago and this was a nice, updated synopsis of brain flow learning. I am a fan of the panpsychism concept - everything is interconnected and conscious right down to the dancing electrons bopping in and out of existence. What else could explain their/photons' mysterious independent behavior in the classic Double Slit experiment?
Hey Christopher! Thanks so much.
Panpsychism certainly seems to be gaining popularity these days! I'm always interested in how people come to their views on consciousness. Was there a book or thinker who inspired you to explore panpsychism as a potential explanation for consciousness?
David Chalmers and Stuart Hameroff
https://youtu.be/i_RXrclM7Bc?si=z_aSBVlIAAjYk7AM
https://youtu.be/mog-Pw0Z9mU?si=a_pbo3GB7B9drFLp
Suzi, what is known about visualizing during the reading process? There is research showing that readers report different levels of creating pictures in their minds as they process text. Also, concrete words representing, say, blue or red mugs are processed more quickly than abstract words (cf: Rayner and Pollatsek 1989 for a synthesis of decades of research using eye movement photography). A related question—have researchers studied the visual cortex when the subject is viewing a sculpture or a painting, when higher-order cognitive activity may accompany basic perception? Finally, I’ve seen maps of the brain that locate something called “executive function” in the front of the brain, which I assumed is involved in what I now know to be the binding problem. Is this naming of a function and attaching the name to a physical location just one possible explanation, is it an incorrect inference, totally bogus, or what? I’m very interested in how visual processing is accomplished not just during the physical uptake of print but also in the downstream experience of imagery.
Amazing questions!
On the first question. Yes! There is research on this. Actually, a colleague and good friend of mine is interested in these sorts of questions. He is interested in what he calls 'imagined sensory experiences'. It turns out that people vary a lot in their ability to imagine experiences. Some people cannot imagine experiences at all. This is a condition known as aphantasia. Other people seem to have overly active imaginations (hyperphantasia).
I can't think of any research looking specifically at differences in viewing sculpture or paintings. But I do know of some published papers that looked at visual perception in experts -- like athletes. As a general rule expertise seems to make a difference.
Ah! The brain with function labels! What is bogus is the idea that one area of the brain is solely responsible for a specific function. We know that certain areas of the brain seem to specialise in specific functions, but this is more likely a case of being necessary, but not sufficient.
Your question, however, is at the heart of a central issue. Are the brain's cognitive functions the result of the entire brain operating in harmony, or do they stem from specialised brain regions functioning with some degree of independence? This question is probably the key one fuelling much of the work being done in cognitive neuroscience.
The phrase "Necessary but not sufficient" primes me to think about how we are conditioned to think "every part has a definite task" and "breaking things into identifiable units" can help us decipher the whole.
All of this priming is rooted in a mindset shaped by the "industrial revolution" school of thought - which says that every whole can be broken into one or more sorts of "units" and that every part has a unique, autonomous function.
Given that consciousness has evaded us through the last five centuries of scientific progress, this one area needs fundamentally different mental models of research and, likewise, a different engineering approach to augment that research.
As always, a very beautiful and thought-provoking (neuron-firing?!) piece!
Thanks Nirav!
Fascinating point! I like the idea that the way we think might limit our understanding, especially about consciousness. This reminds me of language - how it's so embedded in our thinking that it seems impossible to extract it from our ideas.
Though I originally had other reasons to suspect that consciousness exists as a unified electromagnetic field, later I decided that EMF consciousness also works pretty well as a potential solution for “the binding problem”. The thing that I think we should infer from the color, edges, and motion of a blue mug being processed in different parts of the brain, is that each part should ultimately be informing a single thing in order for that blue mug to consciously result from moment to moment as something singular. But in a systemically causal world, what might that informed single thing be? What might consciousness effectively be made of?
When neurons fire, we know that each charge results in a slight electromagnetic disturbance. So could these separate firings effectively be algorithms which essentially inform a unified electromagnetic field that itself exists as the experiencer of a blue mug? To me this seems like a potential solution. (And note that this potential explanation does not solve “the hard problem of consciousness”. It does not illustrate to us why the physics work like this. But I also don’t think it should. Science exists to help us understand what we can, though I don’t think we should assume we’d ever grasp everything, and certainly not this particular thing today given well-known softness in associated fields.)
I also like to ponder this question from an opposing perspective. Let’s say we know that a unified blue mug is perceived (as we do), and also that it doesn’t ultimately exist as a thusly produced electromagnetic field (which we don’t). What would be a second possibility that these neurons in various parts of the brain might be informing to exist as the experiencer of a blue mug? Try as I might, I just don’t know of another causal element of reality which might fit the bill, which is to say be informed by these elements of the brain to exist as a unified experiencer of a blue mug. The unification of its color, edges, motion, and so on seems to mandate that something unified exists as such, though beyond light-speed field dynamics I’m not sure of a second potential causal explanation.
Suzi—another excellent article. In color and motion perception—in your example for binding—what is the current theory of what is going on in these areas with color synesthesia? Is this a type of hyper-binding? Or is it hyper-unbinding? My own experience is that color is implicit in thinking and memory—for example, how does one keep track of days / know what day it is, beyond the label “Tuesday or Wednesday. . .”, if you don’t know what color the day is? And in your example only a deep ultramarine or Prussian blue makes sense for a hot drink—an icy teal blue mug with a hot drink is almost revolting (I already know this isn’t going to taste good because the color changes the way things taste). So I filled in a color image which worked with the context when reading your description. I accept this isn’t a common perception but wonder how one lives without strong color associations. Would we say someone who can’t taste color, who doesn’t feel color, and/or doesn’t structure thoughts with color is ‘under-bound’? (Even if this is the most common experience.)
Which makes me wonder: how can one determine over- or under-active imagination? Perhaps the extreme outliers could be identified—the 1% at each end of a spectrum—but the grouping would seem to be very random and dependent upon a moment’s median range, which could be unfortunately far under or over what is beneficial. Which raises the question: beneficial for what? Quantifying levels of imagination seems like tape-measuring affection or love.
Thanks Dean! It's fascinating to hear you're a synesthete. My PhD adviser dedicated much of his early career to studying synesthesia - it's truly an intriguing phenomenon.
Hyper-binding is currently the leading explanation for synesthesia, suggesting the rest of us might be "under-bound" in comparison. However, emerging theories propose that synesthetes may share similarities with hyperphantasics, possessing an exceptional capacity for imagination.
Synesthesia remains largely enigmatic. It's often viewed as a developmental condition, but it's not that simple -- acquired synesthesia can result from epilepsy, psychoactive substances, or psychoses.
Interestingly, research has shown that synesthetes born in the U.S. during the 1970s and 1980s often share remarkably similar colour associations for certain letters. These associations closely match the colours of popular Fisher-Price refrigerator magnets from that era (e.g., A was red, B was blue). This finding sparked considerable discussion in the field, raising questions about the universality of synesthetic development.
My perspective is that synesthesia likely has multiple pathways of development. The diversity of synesthetic experiences and their origins suggests there's no single route to becoming a synesthete. And maybe synesthesia is not one single phenomenon.
Thank you. Very enlightening. Not sure if it’s acquired or not in myself—I cannot remember a time when very specific color wasn’t key to memory, associations—taste, feelings, etc., and imagination—even very early childhood memories from the early 60s. Conversely, it can be extremely disturbing when something is the wrong color. My youngest daughter has tactile-gustatory synesthesia, so I wonder if there is some sort of inherent pattern or structure to fundamental synesthesia that reveals itself in different and more complex—personal (individualized)—ways. The hyperphantasia similarities make sense—at least my own experience and understanding of my daughter support this. I’d like to read more of your thoughts on this imagination topic sometime.
Super interesting! I have so many questions. I won't bombard you with every question, but I am curious: would you say your synesthesia colour experience is different from your perceptual colour experience? When you say it's disturbing when something is the wrong colour, is this because the perceptual colour feels wrong, or because there is a conflict in perception? Or something else?
I definitely have plans to write on synesthesia and hyperphantasia. It'd be great to see if the science lines up with your and your daughter's experiences.
Lots of things that seem unified or seamless are actually the product of opposing forces (e.g. walking, the earth’s orbit). Why would consciousness be different?
Interesting. The sensation of being fixed in place is important for survival. I look at a fence post that is in motion on a spinning planet and I fully expect it to be there in an hour or a week—at least not to move of its own volition.
More important than the sensation of being fixed in place is the ability to distinguish when one is moving from when one is being moved.
Hi Misha! Good point. When we think about the purpose of the brain, one increasingly popular idea is that the brain's primary function is to generate actions and evaluate the consequences of those actions. This cycle is commonly referred to as the perception-action cycle (some might argue that it should actually be called the action-perception cycle -- but that's an argument for another day).
As you point out, it's important for our brain to distinguish between when we're actively moving versus when we're being moved. When we initiate an action, our brain prepares the action, initiates it (i.e., it moves), and uses sensory information to evaluate the consequences of that action.
But when we're passively moved (like being pushed or riding in a car), the sensory input doesn't line up with whatever action the body happened to be doing at that time.
This distinction probably has a lot to do with how we experience causation.
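Here is a tiny comparator-style sketch of that active/passive distinction (my own toy illustration in Python; the numbers and function names are invented, and real sensorimotor prediction is far richer): treat the motor command as a prediction of its sensory consequences, and flag a mismatch when the senses report movement the command didn't predict.

```python
# Toy comparator sketch of active vs. passive movement.
# Everything here is illustrative: movement is reduced to a single number,
# and "sensory feedback" simply echoes whatever movement occurred.

def sensory_consequence(movement):
    """Pretend the senses faithfully report the movement that happened."""
    return movement

def classify(motor_command, actual_movement):
    predicted = sensory_consequence(motor_command)   # efference-copy-style prediction
    sensed = sensory_consequence(actual_movement)    # what actually comes back in
    mismatch = abs(sensed - predicted)
    return "self-generated" if mismatch < 0.1 else "externally caused"

print(classify(motor_command=1.0, actual_movement=1.0))  # I moved -> "self-generated"
print(classify(motor_command=0.0, actual_movement=1.0))  # I was moved -> "externally caused"
```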
‘Tis, Misha. Yours is a profound thought. As true in the social as in the atomic spheres of reality, participants in literacy events can occupy space reserved for rocks. Unfortunately, inertia in high-poverty schooling reproduces a recalcitrant loop where teachers push and learners move. Learners can learn to unlearn dogmatic training. The brain is a stable structure, but as near as I can tell, it is flexible. Suzi knows about this topic I’m sure. Provocative comments, Misha, aka same difference. My strong suit is splitting hairs, which explains my loss of them:)
This is a huge topic! It's also a complex one.
I suspect there has to be a significant difference between active and passive engagement in learning. If we take the view that learning is the brain changing (e.g., forming new connections, strengthening some connections, and weakening others), then passive learning (which I take to mean learning with little attention and action, e.g., not engaging with the content), cannot have the same effect as active learning. But to make these sorts of claims, I have to take quite a few leaps from basic research (i.e., research looking at simplified processes) to far more complex ones (e.g. teaching and learning in schools). There are risks to taking this sort of leap.
But I assume there's enough behavioural evidence showing that passive learning is far less effective than active learning that we needn't worry too much about why or how this happens in the brain.
This issue sits at the heart of the reading wars. Pure phonics advocates prescribe passive learning of graphological and phonological codes for the first ten weeks of school—I am thrilled with your response! You reaffirm precisely what sociocultural and cognitive research supports: active engagement of the whole brain, not teaching to a specific loop in the brain. I call it Whole Brain with apologies to Whole Language.
Btw, reading researchers concur that some readers report much greater experiences of visualization during reading than others—both in the ability to link abstract technical diagrams to words in text and in the capacity to “make a movie in your mind.” But there is modest evidence that guided experiences can increase self-reports. There are lots of problems with this method of data collection. It would be cool to add brain imaging data to a study.
I just thought of an area of research you might find interesting -- visual statistical learning. This is the ability to extract information about the probabilities of shapes during passive viewing. This type of research suggests that we are sensitive to regularities in our environment. And it is thought to be important for a bunch of things we do, including reading.
Have you read much about statistical learning in relation to reading?
The interesting thing about visual statistical learning is that it seems like it can be done implicitly and (maybe) without attention. I'm making huge leaps, simplifications, and generalisations here. But it might be an area worth exploring.
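To make "extracting probabilities of shapes" a little more concrete, here's a minimal sketch (entirely my own, with an invented shape stream) that simply tallies how often one shape follows another, which is roughly the kind of regularity these studies suggest we pick up during passive viewing:

```python
from collections import Counter, defaultdict

# Minimal sketch of visual statistical learning as transition probabilities.
# The shape sequence below is made up purely for illustration.
stream = ["circle", "square", "star", "circle", "square", "star",
          "triangle", "circle", "square", "star"]

pair_counts = defaultdict(Counter)
for first, second in zip(stream, stream[1:]):
    pair_counts[first][second] += 1          # count how often `second` follows `first`

for first, followers in pair_counts.items():
    total = sum(followers.values())
    for second, count in followers.items():
        print(f"P({second} | {first}) = {count / total:.2f}")
```

A learner sensitive to these transition probabilities would, for example, come to expect 'square' after 'circle' without ever being told to look for the pattern.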
Thank you!
Great question! Unfortunately, the answer is not simple. I don't think I will be able to give you a full answer here (without writing a book).
But let's give it a try... It's true that we make perceptual errors. My guess is that we don't remember these errors because they are less likely to happen when we 'pay attention' and attention seems necessary for that sort of memory.
Also, if the predictive processing theory of the brain is correct, then 'seeing' a blue apple might be very unlikely. According to this view, our brain is constantly making predictions and our perceptions are constrained by those predictions. So, we may be very unlikely to make the error of seeing an apple as blue because we have very little experience of seeing blue apples and, therefore, our brain would be unlikely to make that prediction.
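One crude way to picture how a strong prior could squash an unlikely percept is a toy Bayesian sketch (my own; the numbers are invented, and this is not the actual machinery predictive processing theories propose):

```python
# Toy sketch: a strong prior against "blue apple" outweighs bluish-looking evidence.
prior = {"red apple": 0.98, "blue apple": 0.02}        # lifetime experience with apples
likelihood = {"red apple": 0.3, "blue apple": 0.7}     # this particular signal looks bluish

posterior = {h: prior[h] * likelihood[h] for h in prior}
total = sum(posterior.values())
posterior = {h: round(p / total, 3) for h, p in posterior.items()}

print(posterior)   # {'red apple': 0.955, 'blue apple': 0.045} -- the prior still wins
```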
Your question is very thought provoking. I think to answer your question fully, we would need to keep a few things in mind.
One, Terry is onto something. Adding eye movements does make the situation more complex.
We are constantly moving our eyes. We call these rapid eye movements saccades. On average we make about 3-5 saccades per second. But, around the time that we move our eyes, our brain suppresses visual sensitivity. It does this (we think) because if it didn't, the world would look blurry during an eye movement (kinda like when you move your camera quickly while filming something).
Two, it's tempting to think that neurons in our brain fire once and then send their messages on to other neurons further along in the system. So we might think that neurons in V4 process the colour once and that's what accounts for our perception. But this isn't really how it works. Areas of the brain work in parallel. And neurons in V4 (and other areas) continue to fire. They are not just receiving input from incoming signals; they are also receiving inputs from neurons 'higher' up in the brain. This 'feedforward' and 'feedback' is constantly happening. Areas in the visual cortex use the incoming 'feedforward' information and the 'feedback' information from higher areas to 'fine-tune' their responses. So we don't think perception is really a once and done thing.
"So we don't think perception is really a once and done thing." By "perception " I assume you mean pattern matching. Def ongoing - those optical illusions that can be interpreted in two ways change "before your eyes" as you look at them. And of course distant unfamiliar objects often morph into familiar ones as you get closer. When you see somebody you recognise that adds detail to what you see (does for me anyway) presumably either pasted in from memory, or at least it gives extra info to resolve some uncertainties in the visual signal.
These are great examples! You make great points. Adding to what you've said -- perception isn't just a bottom-up process of collecting sensory input. Our brain is constantly shaping what we perceive through top-down influences (things like expectations, attention, and prior knowledge). That's why those optical illusions can flip, and familiar faces can seem to gain detail.
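If it helps, here's a small sketch of that "not once and done" settling idea (my own simplification; the update rules and numbers are invented): an area's estimate is repeatedly nudged by both the incoming feedforward evidence and feedback from a higher-level expectation, and both keep updating together.

```python
# Toy sketch of iterative feedforward/feedback settling between two "areas".
sensory_input = 0.8      # feedforward evidence (say, how blue the mug looks)
expectation = 0.5        # feedback from a higher area's current best guess
estimate = 0.0           # this area's evolving response

for step in range(5):
    feedforward_error = sensory_input - estimate
    feedback_error = expectation - estimate
    estimate += 0.5 * feedforward_error + 0.2 * feedback_error
    expectation += 0.3 * (estimate - expectation)    # the higher area updates too
    print(f"step {step}: estimate={estimate:.2f}, expectation={expectation:.2f}")
```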
Fine to explain vision as a linear process or a set of linear processes, but the brain/consciousness is a neural net and nets don't work that way.
This is what I was getting at with my post about the homunculus.
It seems to me that the neural net maintains a (near) real-time simulation of the sensed environment, constantly updated via the sensory input channels. This seamlessly brings together inputs from disparate sources, like seeing your hand and feeling a touch, and makes touching your own hand feel completely different from somebody else touching it.
You're absolutely right that the brain isn't simply processing information in a linear way!
I tend to think about it not just as a simulation of the sensed environment; I suspect what we are doing is simulating a prediction of what is expected. The brain samples the environment to check if its predictions are correct and constantly updates those predictions. In this way, the process is more active than passive.
Even simulating the present is a prediction cos of latency - anyway we have to synchronise the inputs from different channels. There's a lot of error correction as well, e.g. removal of blinks and saccades, and not hearing your heart beat. But the simulation that we sense directly and think of as reality is current.
Yes we also have to be able to predict the future, but only in small regions at a time. You throw one ball at a tennis player and they'll hit it, throw ten and they'll give up.
We also have a number of autonomous subsystems: balance, walking, even talking (have you never gone into ChatGPT mode?) that are running their own simulations. Balance: I can't access; walking: I can access when I wish to; talking: I usually drive, but I can drift out; typing: I always have to drive. I can't access the future simulation either - this is why you have to LEARN a sport (train the subsystem) until it's automatic - if you try to think it, you'll fail.
I think I know what saccades are for. Pixera has/had a digital camera patent for it under the name DiRactor. They shifted the sensor by half a pixel in each direction (four frames) and then did some clever processing, to (almost) double the resolution.
In the human case it corrects for irregularities in the rod and cone grid, hides the effects of dead sensors, and gives a bit more resolution. This is why we stare.
That’s an interesting analogy! Saccades allow the brain to gather more details by moving the fovea across the visual field, and this constant sampling might help compensate for some perceptual irregularities. It’s interesting to think about how saccades are similar and different to technology like DiRactor.
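For anyone curious what the half-pixel idea amounts to, here's a rough one-dimensional sketch (my own simplification, not Pixera's actual processing): capture the same scene twice, shifted by half a pixel, and interleave the samples onto a grid twice as fine.

```python
import numpy as np

def scene(x):
    """A continuous 'scene' the sensor is pretending to look at."""
    return np.sin(2 * np.pi * x / 8.0)

pixels = np.arange(16, dtype=float)
capture_a = scene(pixels)          # sensor in its normal position
capture_b = scene(pixels + 0.5)    # sensor shifted by half a pixel

# Interleave the two captures onto a grid with twice the sampling density.
high_res = np.empty(2 * len(pixels))
high_res[0::2] = capture_a
high_res[1::2] = capture_b

print(len(capture_a), "samples per capture ->", len(high_res), "samples combined")
```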
That's such an interesting point you make. Surprise is a big part of predictive processing. But I suspect what is happening in the gorilla/basketball example is that your attention is busy doing something else -- counting passes between the white t-shirt players. If you were to just watch the video without having to count the passes, I suspect you would see and be surprised by the gorilla.
In reading research, eye movement photography studies have concluded that it can take 50 ms for an image of a word to fade and make room for a new image. Color words printed in different colors (/green/ printed in red letters) are retrieved with more errors and can take longer to decay. Once word perception (aka lexical access) is achieved, the eye fixation (a still eye) jerks (a saccade) and a new fixation commences. Your question about suppression of errors (the momentary blue apple fading unnoticed) is very interesting.
The eyes can perceive words only when their sweep stops or fixates on a point in a line of print. The mean span of perception during a fixation within foveal vision is 7-15 letters. Once this display fades from the foveal area, the eyes are ready to sweep or jump together and then stop for another fixation, and so on. Readers do not perceive individual words so much as process every letter within fixations—perhaps four or five fixations per line of print at the length of the lines in these comment windows.
Readers do not have to accomplish lexical access (identify the meaning of a word) for the eyes to receive a signal to jump. But if a string of letters has been processed and the cutting edge of comprehension experiences confusion, readers make regressions, backpedaling over letter strings previously processed. Cognitive load driven by attention slows the reader as meaning is repaired. Ordinary automatic mature processing moves between 250 and 300 words per minute, with some readers under certain conditions achieving perhaps 400 or so.
There is a perceptual phase of reading occurring automatically, without effort or awareness, unless comprehension fails. The eyes move by alternating stops/fixations and jumps/saccades. The eye-voice span is interesting. The eye always outpaces the voice when readers read aloud. If you suddenly black out upcoming print in an ongoing display, readers will give voice to 3-5 words that were blacked out. It takes 200 ms to program speech articulation structures for duty.
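As a rough sanity check, figures like these hang together arithmetically. Here is a back-of-the-envelope sketch; the fixation and saccade durations are typical values from the eye-movement literature rather than numbers stated above, and the span figure is taken from the 7-15 letter range mentioned earlier:

```python
# Back-of-the-envelope check of the reading-rate figures above.
# Assumed values (mine): fixation and saccade durations are typical
# literature figures; the span is from the 7-15 letter range quoted above.
fixation_ms = 225          # a typical fixation duration
saccade_ms = 30            # a typical saccade duration
letters_per_fixation = 7   # within the 7-15 letter perceptual span
letters_per_word = 6       # an average word plus the following space

fixations_per_second = 1000 / (fixation_ms + saccade_ms)
words_per_minute = 60 * fixations_per_second * letters_per_fixation / letters_per_word
print(f"~{words_per_minute:.0f} words per minute")   # lands near the 250-300 wpm range
```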