Thank you so much for sharing this interesting article, Suzi. I wasn't familiar with this particular theory, and since I'm currently reading a neuroscience book, I found it especially appealing. Naturally, I don't have the knowledge to support or reject the theory you describe, but, in the abstract, such a perspective would suggest many implications for a whole series of research fields connected to neuroscience and psychology. I am thinking, for example (partly because of my own background), of understanding the processes that guide consumers during a purchase, or the emotional reactions arising from an advertisement, a brand initiative, or the relationship with a product itself.
Hey Riccardo! There seems to be a lot of attention lately on trying to link what we know about neuroscience with marketing -- people call this area neuro-marketing. I suspect most of this work stretches the science a little too far, but the underlying psychological principles seem to apply. It's a fascinating area that taps into research on human decision making, heuristics, and biases.
What neuroscience book are you reading? Do you recommend it?
The book is 'The Happy Brain', but I've also read some neuro-marketing books in the past.
I think Place is right. Wittgenstein's Tractatus (1921) nailed the very thing this debate mirrors: philosophical problems are artifacts of natural language, which is one reason bots are not the best at reasoning. Claude 3 Opus is horrible at solving truth tables for truth-functions; there is too much negation, which is a type of ambiguity. Equivocation, ambiguity, puns: all these quirks in natural language create havoc when one intends one specific meaning for one specific thing. "The mind is brain and nothing else", a structural ambiguity, could be true to one person but not to another, leading to recriminations and possibly fisticuffs :)

Do you know about Word Grammar (WG) (cf. Richard Hudson)? Word Grammar is superordinate to syntax, in that syntax is dependent on words in phrases. For example, ship inherits information from oceangoing in the background, in the same way that rowboat inherits information from river-going, spreading activation to other abstract nodes of meaning that activate fishing or cargo or banks or waves. "X isa Y" is an inheritance relationship. So in Word Grammar, "a mind isa brain" means that mind inherits information from brain in a semantic network.

Here's what interests me. In WG, words can take affixes, which alter the semantic network big time. Farm, for example, becomes farmer. In this case, farm might refer to Venus the way "morning star" does, but farmer does not refer to the same Venus at all. A category error would start blinking red, yet if we take one step back, these words (farm and farmer) are so tightly connected we really can't have one without the other. We can have a minder that becomes a reminder, but we can't have a farmer that becomes a refarmer. The inheritance properties of farm are very different from those of mind in level of abstraction. But brain is concrete while mind is abstract. We can remind but we can't rebrain. We can be thoughtful and mindful but we can't be brainful.
Each suffix is a syntactic signal that pushes the word toward a nuanced inheritance. Although in the end I think Place is right, I don’t think it’s the end of the argument.
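For what it's worth, the "isa" inheritance described above can be sketched in a few lines of code. This is a toy illustration, not Hudson's actual formalism; the node names and properties are invented for the example:

```python
# A toy sketch of Word Grammar-style default inheritance via "isa" links.
# Not Hudson's formalism -- the nodes and properties here are invented.

class Node:
    def __init__(self, name, isa=None, **props):
        self.name = name
        self.isa = isa        # the "isa" (inheritance) link to a parent node
        self.props = props    # locally stored properties; these act as overrides

    def get(self, prop):
        """Look a property up, following "isa" links until one supplies it."""
        node = self
        while node is not None:
            if prop in node.props:
                return node.props[prop]
            node = node.isa
        return None           # no ancestor supplies this property

vessel  = Node("vessel", habitat="water")
ship    = Node("ship", isa=vessel, setting="oceangoing", carries_cargo=True)
rowboat = Node("rowboat", isa=vessel, setting="river-going", carries_cargo=False)

print(ship.get("habitat"))     # "water" -- inherited from vessel via the isa link
print(rowboat.get("setting"))  # "river-going" -- stored locally on the node
```

On this picture, "mind isa brain" would just add an isa link from a mind node to a brain node, letting mind inherit brain's properties by default, which is exactly where the farm/farmer-style asymmetries start to bite.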
Terry! I love this. I've always loved how interconnected the philosophy of mind is to the philosophy of language.
The analogy I've been thinking about a lot lately is a country. We can point to a map and say, "That is the United States of America," and other people will understand what we mean. We can also talk about the physical aspects of a country in a statement like, "The United States of America is ecologically diverse, encompassing vast prairies, towering mountain ranges, arid deserts, lush forests, fertile river valleys, and coastal regions." We can also talk about the politics of a country: "The United States of America is a federal republic with a strong democratic tradition and a system of checks and balances." Even though we use the same word, the politics of a country does not refer to the physical landscape at all. But just as we can't have a farmer without a farm, we can't have The United States of America (in the political or cultural sense) without the physical land. I wonder whether we should think of the mind and brain in a similar way? As you say, a country's land (brain) is concrete while a country's politics or culture (mind) are abstract.
I agree, I think Place makes some good points, but the story doesn't end there.
We’ve got this thing called language that works for us a lot of the time, but when precision is the uppermost demand, it can stand between brain and mind, making things more difficult for us. Science in this case loses some while Poetry gets a leg up.
Another good article with a clear explanation.
My main objection is that all the so-called "reasons" to think identity theory is true merely assume their conclusion. The theory doesn't address the hard problem: that consciousness transcends any appeal to structure and function. Physics gives us properties like mass, dimension, and charge, but not... pain.
Physics deals in quantitative properties; experience is purely qualitative. That divide between quantity and quality can't be bridged by more correlations. It is far more than a difference between layers of abstraction.
"We're physical beings, therefore consciousness is physical" doesn't address the fact that consciousness doesn't have any properties we think of as physical.
The appeal to evolution doesn't explain what consciousness is or its place in the world; it only says it's useful, so if it's physical it's likely to be selected.
And mind-brain correlations are a statement of the problem, not a reason to prefer any solution.
Completely agree
Thanks Prudence!
The identity theory definitely has its problems. The idea that consciousness is just a different level of abstraction is a difficult one to swallow for most. It just doesn't seem to be the case. If we look at the brain and go more micro we get things like neurons, glial cells and neurotransmitters. We don't seem to get consciousness.
Going the other direction to a more constructed or conceptual level doesn't seem to work either. If consciousness is a different level of abstraction, as the identity theory claims, it doesn't seem to be like any other abstraction we know about.
On your other point -- yes! Correlation is not causation. I completely agree. But I'm interested in how you might account for claims of causal evidence. For example, if I deliver a transcranial magnetic stimulation (TMS) pulse over my friend's visual cortex, he reports seeing phosphenes (flashes of light without light entering the eye). If I give him a sham pulse (it sounds and feels like a pulse, but no pulse is actually delivered), he reports no light. This sort of evidence is considered causal (not correlational). I'm curious how someone might account for it. Is the idea that the neural firing that happens during the TMS pulse produces the physical behavioural response, but the conscious experience of seeing the light is separate and independent of this physical response?
I'm not knowledgeable enough to answer that question, but why would we see the TMS pulse as any different, in principle, from delivering an electromagnetic light ray to the eyeball? And isn't the explanatory gap what the hard problem highlights between "brain state" and "qualia" (and vice versa)?
If I "believe" there is a burglar, that activates my xyz neurons, which causes my body to move to lock the door. That is a different causal sequence from when my "boredom" activates abc neurons and moves my leg muscles to the couch to watch Netflix.
I assume science can give (in principle at least) a complete causal explanation starting from "xyz neurons" through to the leg muscles involved in walking. But this gap between "mental state" and "xyz neurons" is what seems like an unbridgeable explanatory gap.
As for how, say, idealism or panpsychism explains those things, there is an implicit expectation that any explanation must provide a brain mechanism. Whereas idealists are going to explain those things by telling you why the conscious self is bound to its material desires and how to liberate ourselves from this physical Netflix existence lol. The beatific vision will provide the ultimate truth of consciousness!
I do find the question of the necessity of a neuroscience paradigm interesting. I wonder if the hard problem is a wild goose chase for neuroscience, and whether the approach will always be reductive -- e.g. Tononi's IIT gives us the ability to detect consciousness in coma patients, brain-computer interfaces solve this or that problem, etc. Science excels at practicalities.
Btw I thought you might be interested in this article - https://substack.com/@mwj4719
The debate does seem to come down to the explanatory gap, doesn't it? People seem to argue about what we should put into that gap, whether we could ever find the thing that explains the gap, or whether there is even a gap at all to explain.
From the neuroscience perspective, I think it simply feels like we're onto something. Do this and we get this conscious experience, do that and we get that one... The evidence seems to point to the brain having a lot to do with it. But you might be right, we could be on a wild goose chase, searching for the 'something' that will fill the gap, but that something might not be a something we can ever find.
Thanks for the link, I will check it out.
Interesting stuff as always, Suzi. Though imo claiming that the mind is simply the brain is an arrogant and reductionist way to try to quantify consciousness, which I believe is an emergent phenomenon with much deeper secrets to discover.
Thanks Matthew!
The identity theory runs into problems, doesn't it!? Of course there are always claims and counter-claims and counter-counter-claims that seem to go on and on, but in the end, it just doesn't sit right with many people.
Suzi—I'm thoroughly enjoying this series—thank you so much. And thank you for engaging in the comments. The discussion on syntax is interesting—I was a little ways into this topic professionally for other reasons. Your statement—"The is of composition needs . . . science" (I've condensed and somewhat paraphrased)—by this it seems you mean it needs empirical evidence—data gathered from our human senses. When the proposed theory is "the mind is the brain"—using 'the is of composition', as you point out—the proposing mechanism, the observing mechanisms, the data or evidence gathering mechanism, the data sorting mechanism, and the data rationalizing mechanisms are all sourced at the same place—the human brain. With so much bias and thumb on the scale—or in this case literal neurons on the scale—how can this be seen as scientific? The theory is horribly tainted before it ever gets to evidence collecting—which will just further pollute the basis of the theory. It appears very self-preservationist for a group of neurons in a particular human's head to suggest that their mind is their brain, and then to construct a model to convince other groups of neurons in other heads that it is so. (How is that for rational reasoning in this case of 'the is of composition'?)
Thanks so much, Dean!
Yes, you are right, I was using the word science to mean empirical evidence. Empirical evidence would have been a better term to use.
Good point, the neurons are on the scale, as you say. But if our brain is doing all the work, wouldn't this be true about everything we think is real and true? Even logical claims (which are often distinguished from empirical ones) are made using the brain. It seems we could question everything. Do we know nothing at all? Or do we make some assumptions? Do we assume we have a relatively accurate model of the world? If we take an evolutionary perspective, we might assume that our brains have evolved to model our world to increase the likelihood of survival and reproduction. Would those who have a more accurate model of the world be more likely to survive and reproduce? Perhaps? Or perhaps The Matrix got it right -- reality is a Wonderland and the rabbit hole is deep.
Yes, I think (intentionally using the vague term here) there is a role for evolutionary development in the discussion on consciousness. Honestly, how would any of us conclusively know the difference—is it evolution or devolution, or a bit of both—some things evolved, some pruned? That seems likely, given how a human brain still develops from conception to adulthood to old age—through injury, repair, or even diminished cognitive function. And yet the 'beingness' of the person seems still there, beyond or behind the biological curtain—so to speak. I've had a few experiences with this myself. To skip the story and cut down to the point: it was a very strange experience to be / to 'know' I was fully 'me' and yet be unable to understand much of what was going on, where I was, or communicate with anyone. I think (vague term again) consciousness might be more faceted, diverse, even divergent, and more complexly layered than 'the mind is brain'. At least this is what my own empirical experience suggests. (And by the way, I like your questions—do we know nothing at all? Or do we make some assumptions?—after 6+ decades of being alive, I'd admit to both: we know nothing—we sense very little—and we make workable assumptions individually and collectively (and over time) from these minutely assembled snippets, flawed as they are. It's the work of beings.) Your series and writing are excellent—thank you again. I've also been listening to the articles after reading them sometimes—thanks for taking the time to record them. You have a wonderful voice and pace. Thanks again.
Thank you so much, Dean! I thoroughly enjoy reading your comments.
I agree, I think consciousness might be more diverse and layered than we tend to think it is. And I'm fascinated with the idea that our understanding of the world, including our conscious experiences of it, are flawed and almost certainly wrong in many ways. One of the most interesting questions, I think, is how wrong are we?
This was an especially interesting episode in the series, as I've never given Identity Theory much thought. If physicalism is true, then, at the least, mind supervenes on brain, but that doesn't imply an ISA relationship. Suppose consciousness arises from brains the way laser light arises from certain configurations of materials. A dependence but not an identity in the ISA sense. (Chalmers introduced me to the word "supervenes", which I think is the perfect term.)
I think Place is right about the IS of composition, though the implied bidirectional equality seems less precise, the dependency and direction less obvious than with "supervenes". To borrow your example, both "his table" and "old packing case" can be reversed or replaced with other objects, so there's no sense one depends on the other. So, I think even Identity(Composition) misses the mark.
I like the laser analogy because laser light is a specialized phenomenon that emerges from a specific physical configuration (plus energy). That the light supervenes on the laser, but different configurations and materials are possible, has positive implications for systems that emulate the physical brain. That consciousness might be in the light rather than the lasing process has possible negative implications for systems that try to simulate the mind.
Thank you!
Oh! I like your laser analogy. It provides an interesting way to think about the relationship between consciousness and the physical brain. And it nicely brings up the idea of supervenience, which might capture the dependency relationship more precisely than the identity theory's claims.
The idea that the mind supervenes on the brain also brings up one of the biggest attacks against the identity theory -- multiple realisability -- a topic which I'll discuss in next week's article on functionalism.
It seems a fairly apt analogy, one that focuses on some key aspects of the hard problem. Laser light is an emergent phenomenon fully explained within physicalism, so there's no dualism in the analogy, no magic.
Exactly as you say, multiple configurations of matter can lase, but only certain very specific such configurations can lase. By analogy, multiple configurations of matter might be conscious, but only certain very specific ones. Things that closely enough resemble brains might have minds, but we can say little about things that are unlike brains (such as software).
On that front, the analogy does end up restating the "simulated water isn't wet" argument (an argument I think has power). We understand lasers well enough to make accurate software simulations that fully predict their behavior. But software can never produce photons. Further, the numbers the software outputs can be interpreted as milliwatts or megawatts just by moving a decimal point.
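The decimal-point observation can be made concrete with a toy sketch. The model below is invented purely for illustration and is not real laser physics:

```python
# A deliberately fake "laser" model: the names and formula are illustrative only.

def simulated_output(gain, loss, pump):
    """Output is positive only when pumped gain exceeds cavity loss."""
    return max(gain * pump - loss, 0.0)

x = simulated_output(gain=2.0, loss=1.5, pump=1.0)

# The simulation yields only the number 0.5. Whether we read it as
# milliwatts or megawatts is a label we attach afterwards; either way,
# no photon is ever produced by running this code.
reading_small = f"{x} mW"
reading_large = f"{x} MW"
```

The simulation describes lasing behavior perfectly well, but the physical interpretation of its output lives entirely outside the computation, which is the crux of the "simulated water isn't wet" point.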
If consciousness is indeed analogous to the emission of photons from a lasing substance, this has major implications for software simulations. They’ll be able to describe consciousness but never produce it, though I’m not entirely clear on what difference that difference makes.