David Deutsch has a great conversation with Naval about how LLMs will never be fully creative because, by definition, you're constraining their outputs, and they're never allowed to ask why. Only by letting them question and get things wrong will they arrive at human-level general intelligence.
Rick Rubin has a much more woo-woo interpretation of creativity, which I resonate with as well.
https://open.substack.com/pub/matthewharris/p/deep-dive-the-creative-act-a-way?r=298d1j&utm_medium=ios
Thanks for another great article!
Amazing! Thank you. I will definitely watch that and read your article. This topic fascinates me.
For sure! Here’s the Deutsch episode
https://open.spotify.com/episode/6nE2aDcXQye5R402hYIbGI?si=Skq7WOrlRjqjJuppfKlrow
I commend your restraint in putting your hard work on hold because someone beat you to the punch. I'd probably have simply hit publish and yelled "FIIIIIIIIRST!!!!"
I look forward to reading the re-examined version of it.
I like pondering about what constitutes a novel idea. In some ways, even the most groundbreaking and counterintuitive insights are at least partially outcomes of an author's existing understanding of the world and ideas they absorbed along the way.
When I was growing up in Ukraine, there was a popular saying that goes something like "Everything new is well-forgotten old," which kind of comes from the same realization that nothing is truly new and everyone ultimately builds on prior knowledge and ideas.
Now that we're on the subject of new ideas: I came up with this novel concept of a non-human digital brain that can do tasks that humans can do. I even have a name for it: AI, or "Automated Intellect." I can't wait to see what people think of it!
Oh, and thanks a lot for the nod to my Midjourney post.
PS: I must say, bearephant looks so epic that I kind of wish it was real!
I love that -- "everything new is well-forgotten old". There seems to be so much truth in that saying. I suspect many of our new ideas are simply new ways of saying old things.
Automated Intellect -- brilliant! I'm going to hit publish and yell.... FIIIIIRST!!!!
Ah, yes, bearephants. This needs to be a thing.
You've hit dead center on the same thing I explored last year in whether AI could be creative. The challenge I found was having to step back and reassess how creative we think humans are. I ended up having a discussion with AI and asking it to write me a sonnet on a topic for which no sonnets existed.
https://www.polymathicbeing.com/p/can-ai-be-creative
Amazing! Another example of people independently arriving at similar ideas.
Lol... apropos to the introduction of your essay.
These are great insights on originality of ideas and creations, Suzi! I’d like to reference it in my new article series on ethics of generative AI for music, if that’s ok?
I would be honoured!
I'm also looking forward to reading your series -- it sounds fascinating. I've added a link to your introductory article so others can easily find it.
https://sixpeas.substack.com/p/intro-unfair-use-genai-music-ethics
Thank you, Suzi 😊
I've seen it myself. If I had to guess how the process works, it combines parts of concepts and applies some kind of lateral thinking. Most of them are bad, but some have been quite useful.
Everything we remember and know comes from either 1) personal experience, 2) social learning from others, or 3) originates from an individual's innovation. An innovation is something that transcends perception, experience, and reality itself. Innovative or transcendental concepts include dreams, inventions, jokes, music, fantasies, categories, our sense of time and space, mathematics, conspiracy theories, symbols, and tools. Yuval Harari captured the essence of transcendental concepts when he said, "There are no gods in the universe, no nations, no money, no human rights, no laws, and no justice outside the common imagination of human beings."
Analogy has much to do with transcendental concepts. According to Douglas Hofstadter, analogy is the core of all thinking. Metaphor (George Lakoff and Mark Johnson) and conceptual blending (Gilles Fauconnier and Mark Turner) are forms of analogy.
We humans like to think we are unique in our ability to innovate and create tools. But consider the following analogy:
>> Big hole containing food is to finger or beak, as a small hole containing food is to what?
If you are a chimpanzee, a crow, or even the diminutive Galapagos woodpecker finch, your answer might be “a twig”. All three animals use twigs as tools to extract insects from holes in logs that are too small for fingers or beaks to reach. The generalization used to solve this analogy is “something that fits in a hole to get food”. It is reasonable to believe that each animal had experience of retrieving food from large holes but got frustrated when their beaks or fingers were too big for smaller holes. A twig is long and thin like a finger or beak and, at some point, a clever animal recognized utility in a twig they might never have recognized if not for the analogy.
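The analogy-by-generalization idea above can be put as a toy program. Everything here -- the tiny case base, the role labels, the affordance table -- is invented purely for illustration; no claim is made that animals (or LLMs) represent concepts this way:

```python
# Toy sketch: solve an analogy by finding the abstract role that worked
# in a structurally similar past case, then a candidate that can play it.

# Past solutions: (situation, goal) -> (tool used, abstract role it played).
known_cases = {
    ("big hole", "get food with finger"): ("finger", "thing that fits in the hole"),
    ("big hole", "get food with beak"): ("beak", "thing that fits in the hole"),
}

# Candidate objects and the abstract roles they can plausibly play.
affordances = {
    "twig": {"thing that fits in the hole"},
    "rock": {"heavy thing that breaks shells"},
}

def solve(situation, goal_prefix):
    """Return a candidate object that plays the same generalized role
    that solved a past case with a matching goal."""
    roles_that_worked = {
        role
        for (_, goal), (_, role) in known_cases.items()
        if goal.startswith(goal_prefix)
    }
    for candidate, roles in affordances.items():
        if roles & roles_that_worked:
            return candidate
    return None

# Small hole containing food is to... what?
print(solve("small hole", "get food"))  # twig
```

The point of the sketch is only that the leap from "finger" to "twig" goes through the generalization ("something that fits in a hole to get food"), not through any surface similarity between the objects themselves.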
A tool is an innovative conceptual construct that assigns utility or meaning to something that is not intrinsic or obvious. It can be a twig used to get ants, a rock used to break the shells of nuts, a symbol warning of radiation or the number "7", or a dime used as a screwdriver. A dime is a perceived object, but the dime-as-a-screwdriver and the dime-as-money are transcendental concepts. Tools do not originate from the senses, though their parts (dimes, screws, twigs, ants, holes) might. You may gain most of your tools from others, but some innovative individual had to invent them. A tool is a concept that originates in the mind through a process of analogy.
I am not aware of any LLMs currently using analogy to create innovative concepts. Please let me know if you are aware of any.
For more on innovation and the EvoInfo model of natural intelligence, see https://tomrearick.substack.com/p/the-story-of-intelligence-part-one
Wonderful comment! Thank you.
I am not aware of any large language models (LLMs) currently using analogy to create innovative concepts. I've been thinking about why this might be the case for a while. There are probably a number of reasons, but one potential reason I keep pondering is that LLMs never ask "why." They are never curious enough to ask the questioner a question. I wonder whether this sort of curiosity is essential for innovative ideas. The chimpanzee, crow, or human child needs to know about their world. There is an evolutionary advantage to exploring and developing an accurate model of their world. To do this, they must be curious. LLMs don't need to do this.
Only the curious ask questions. LLMs don't need to be curious because they are statistical classifiers programmed by uber intelligences. The use of the terms "learning" and "neural networks" in the same sentence is metaphorical hogwash.
Recently an emotion center has been discovered that rewards curiosity: Ahmadlou, Mehran, Janou H. W. Houba, Jacqueline F. M. van Vierbergen, Maria Giannouli, Geoffrey Alexander Gimenez, Christiaan van Weeghel, Maryam Darbanfouladi, et al. “A Cell Type–Specific Cortico Subcortical Brain Circuit for Investigatory and Novelty-Seeking Behavior.” Science 372, no. 6543 (May 14, 2021). https://doi.org/10.1126/science.abe9681.
Until the cognitive sciences and AI recognize the contribution of affective neuroscience to the study of emotion, we cannot have autonomous learning. I view the eight emotion centers in the midbrain (seven, per Jaak Panksepp) as LUST, CARE, PLAY, RAGE, FEAR, PANIC, SEEKING, and now CURIOSITY. I believe these collectively represent an innate guide that determines what should be remembered and what is inconsequential. The role of emotion is to assess each experience with respect to valence (good/bad) and arousal (weak/strong) for each of these basic-level emotions. Over time, learned concepts will trigger more complex emotional responses, e.g., feelings of love (CARE) and resentment (RAGE) toward an abusive parent.
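That appraisal idea -- each experience scored on valence and arousal per emotion system, with arousal deciding what gets remembered -- can be sketched in a few lines. The emotion labels follow the list above; the scores, threshold, and retention rule are my own invention for illustration, not anything from the affective-neuroscience literature:

```python
# Toy sketch of emotional appraisal as a memory filter.
from dataclasses import dataclass

EMOTIONS = ["LUST", "CARE", "PLAY", "RAGE", "FEAR", "PANIC", "SEEKING", "CURIOSITY"]

@dataclass
class Appraisal:
    valence: float  # -1.0 (bad) .. +1.0 (good)
    arousal: float  #  0.0 (weak) .. 1.0 (strong)

def worth_remembering(appraisals, threshold=0.5):
    """Retain an experience if any emotion system is strongly aroused,
    regardless of whether its valence is good or bad."""
    return any(a.arousal >= threshold for a in appraisals.values())

# A novel object strongly engages CURIOSITY; everything else stays quiet.
novel_object = {e: Appraisal(0.0, 0.1) for e in EMOTIONS}
novel_object["CURIOSITY"] = Appraisal(0.8, 0.9)

print(worth_remembering(novel_object))  # True
```

The design choice worth noticing is that arousal, not valence, gates memory here: a terrifying experience and a delightful one are both retained, while the emotionally flat one is discarded.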
Can I make a challenge to Situation 3?
What if Situation 3 is just Situation 2 performed by a conceptual thinker? Situation 2 is very cozy for us because the combination in visual art or music is intra-systemic: two pictures of animals merged to create a third is firmly locked in the systems of "animals" and "pictures." I say systems because we are implicitly "borrowing" chunks like trunk-ness or a particular riff, not combining pixels and soundwaves. But the consistency of the systems, biology and music, constrains the combination to "conforming" outcomes.
But we could see Mendel's insight as doing the same with heterogeneous conceptual systems. He was merging the "ancestry system," which was well known, with the "statistics system," which was emerging right as he was doing his work. This is not combining an elephant and a giraffe; it is combining an elephant with algebra.
Such combinations are much harder because the systemic consistency one gets from operating in one arena is not available. Many animals share traits, so sense-making is more obvious: a trunk goes where a nose goes, or cilia are like hair. Combining heterogeneous systems, like merging statistics and breeding, requires a mind that can see the applicability, or perhaps compatibility, of the two systems. Breeding and statistics are wildly different, except that both can accommodate iteration, variation, and scalar values. Bringing them together can create "novel" offspring, but that may be just heterogeneous Situation 2.
True Situation 3 may be possible, but only if muses really exist that can inject fully formed novel systems into our brains. Even "eureka" moments like benzene rings or water displacement are instances of merging two systems, though perhaps subconsciously.
Hi Matt!
Yes, I think this is entirely possible. It's likely what's happening in most cases when we believe we're observing Situation 3. I like the way you put it -- Situation 3 is combining different systems -- which is much harder, but perhaps not conceptually that different from combining ideas within the same system. Your elephant-algebra example was great! 😁
Your last point about eureka moments possibly happening subconsciously is a fascinating one to me. A big part of my research was looking into how much of what we do happens subconsciously, and what can and cannot be done subconsciously (or unconsciously). It's a highly debated question, and I'm looking forward to exploring it in future articles.
Your article will still be original, but it will not be first. Let's not confuse copyright with intellectual 'ownership'. It is the difference between management and leadership 😉
Good point!
Heh, I've experienced that déjà vu recently re-reading Penrose's "The Emperor's New Mind", which I first read back in the early 1990s. Things I've been talking or writing about for decades that I would have said were a blend of things I'd learned -- I'm finding very similar accounts in Penrose's pages. I hadn't realized how much I really did absorb.
FWIW, SF has imagined some creative neo lifeforms. Crystal lifeforms or giant balloon lifeforms. Still remixes of inanimate objects, one might say. But to some extent, everything necessarily is a mix of primitive forms. But one reason I read so much SF is the new-ish ideas.