Hello Curious Humans!
Before we start imagining conscious robots — don’t worry, we will get to that shortly, I promise — let’s start with imagining humans.
Imagine a human.
Imagine the human is watching a movie. It’s a comedy, something filled with unexpected twists and laugh-out-loud moments. As the plot unfolds, our human starts making regular, patterned noises and their eyes and mouth contract and relax repeatedly.
Most of us would recognise this behaviour as laughter. We would naturally think that the human is laughing because they've understood a joke or something amusing happened in the movie. We assume the human gets the joke on a conscious level, and their laughter is a direct response to that understanding.
When it comes to consciousness, we judge by watching behaviour. We observe what others do and infer that they are experiencing things much as we would. If we see someone laughing at a joke, we think they find it funny. We make this assumption because, from our own experience, we usually laugh at jokes when we find them funny.
But this is just an educated guess. We can’t get inside someone else’s head. In philosophy, this problem is called the problem of other minds. We believe we understand others based on their actions and how we would feel or act in similar situations, but in reality, we can't directly access or experience what's going on in someone else's mind. We're essentially making our best guess based on their behaviour and comparing it to our own experiences.
This is a form of abductive reasoning similar to —
“if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.”
However, the situation changes when we think about AI. People start to reason differently here. Let’s take the same thinking exercise as above but slot in an AI robot instead of a human…
Imagine an AI robot. Now, imagine that robot is watching a movie. It's a comedy, something filled with unexpected twists and laugh-out-loud moments. As the plot unfolds, our robot starts making regular, patterned noises and their eyes and mouth contract and relax repeatedly.
For some, the question of whether the robot is, in fact, laughing is not so easily answered. We might question whether the laughing behaviour is caused by a conscious understanding of the humour or just by clever programming.
The same abductive reasoning we used for our imagined human doesn’t seem to work as well here.
If it looks like a duck and quacks like a duck but it needs batteries, you probably have the wrong abstraction.
– Derick Bailey
So here’s the rub. If the AI robot’s laughing behaviour was, in fact, the result of a true conscious understanding of humour, how would we know?
Discerning whether an AI robot's laughter is a product of true conscious understanding or a well-programmed response is a real challenge. These sorts of questions aren't just about the technology itself; they ask a much deeper question: how do we know what we know? That deeper question is at the root of the branch of philosophy called Epistemology, which urges us to carefully examine the methods, criteria, and evidence we consider necessary to confidently claim knowledge. This week's question, "If AI were conscious, how would we know?", requires us to reflect on what it means to truly know.
In this week’s issue of When Life Gives You AI let’s:
Take a look at the most popular test of AI consciousness — The Turing Test
Investigate a popular criticism of the Turing Test — Searle’s Chinese Room Thought Experiment, and
Find out what science is doing to try to answer this question
1. The Turing Test
The Turing Test, proposed by Alan Turing in 1950, is a method for determining whether a machine can think. It's a foundational concept in the field of artificial intelligence. If you need a refresher, here's how it works:
A human judge engages in a conversation with one human and one machine, both of which are hidden from the judge's view. The conversation typically happens through a computer interface.
The judge's task is to determine which participant is human and which is the machine based solely on their responses.
If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
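To make "cannot reliably tell" a little more concrete, here is a minimal, hypothetical way of scoring such a trial (my own sketch, not part of Turing's paper): count how often the judges correctly pick out the machine and compare that rate with the 50% they would get by guessing.

```python
# A toy, hypothetical scoring scheme for a Turing-test-style trial (illustrative only).
# verdicts[i] is True if judge i correctly identified which participant was the machine.

def identification_rate(verdicts: list[bool]) -> float:
    """Fraction of judges who correctly picked out the machine."""
    return sum(verdicts) / len(verdicts)

verdicts = [True, False, True, False, False, True, False, False]  # made-up data
rate = identification_rate(verdicts)
print(f"Judges identified the machine {rate:.0%} of the time")

# A rate close to 50% means the judges are doing no better than chance,
# i.e. the machine is hard to tell apart from the human.
```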
Turing called this test the Imitation Game. Originally, his idea was that if a machine did well in the Imitation Game, people could reasonably infer that the machine was thinking. Over the years, the phrase The Turing Test has become a more general idea that refers to any form of behavioural test of consciousness. It is this latter interpretation that I use throughout this article (but note that we haven’t actually defined consciousness yet).
The concept of passing the Turing Test is contentious, and the criteria can vary. But there have been several notable developments that are worth mentioning:
In 1966, Joseph Weizenbaum created ELIZA, one of the very first chatbots that could mimic conversation. It worked by searching the user's input for keywords and building its response around whichever keyword it found, often producing vague, generic replies (a toy sketch of this keyword-matching approach follows this list). Although some judges were fooled into thinking they were conversing with a human, it's hard to imagine a judge being fooled by ELIZA in our current technology climate.
In 1972, PARRY was a more advanced program that simulated the behaviour of a paranoid schizophrenic. Psychiatrists analysing conversation transcripts were fooled 48% of the time, an impressive result for its time.
In 2014, a program called Eugene Goostman, which simulated a 13-year-old Ukrainian boy, made headlines for allegedly passing the Turing Test by convincing 33% of human judges that it was human. However, the setup and methodology of this test faced criticism, particularly regarding the number of judges and the portrayal of the AI as a young non-native English speaker, which might have lowered the judges' expectations.
Last year, in 2023, there were two notable events — AI in Advertising and GPT-4.
AI in Advertising: In a competition, AI-generated advertisements proved nearly indistinguishable from those created by humans. A panel of marketing experts had only a 57% accuracy rate in distinguishing between AI-generated ads and those made by marketing students.
GPT-4: In a public online Turing Test, GPT-4 showed significant improvement over previous models like GPT-3.5. The best-performing GPT-4 prompt had a success rate of 41%, outperforming GPT-3.5’s best score of 14%.
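To give a feel for how mechanical ELIZA-style keyword matching is, here is a deliberately toy sketch (my own hypothetical example, not Weizenbaum's actual program). It scans the input for a known keyword, fills in a canned template, and falls back on a vague reply when nothing matches. There is nothing in it you could mistake for understanding.

```python
# A toy, hypothetical ELIZA-style responder (illustrative only, not Weizenbaum's code).

RULES = {
    "mother": "Tell me more about your mother.",
    "movie": "What did you like about the movie?",
    "sad": "Why do you think you feel sad?",
}

FALLBACKS = ["Please go on.", "I see. Can you elaborate?"]

def respond(user_input: str, turn: int = 0) -> str:
    text = user_input.lower()
    for keyword, template in RULES.items():
        if keyword in text:  # pure pattern matching, no understanding
            return template
    return FALLBACKS[turn % len(FALLBACKS)]  # vague fallback when nothing matches

print(respond("I watched a funny movie last night"))
# -> "What did you like about the movie?"
```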
While some have suggested the Turing Test is a reasonable test to determine machine consciousness, many have major objections to the test itself or to how the test results are interpreted. Indeed, there are so many objections to the Turing Test that I could not possibly cover them all here (I’ll leave that for another issue). But I will mention just two.
Some of you might be thinking that using the Turing Test in this way makes a huge assumption: that AI consciousness will be just like human consciousness. The original Turing Test was designed as a test of a machine's ability to exhibit intelligent behaviour that is indistinguishable from that of a human. But why should human intelligence (or consciousness) be the criterion? What if AI consciousness were entirely different from human consciousness?
The second criticism comes from the philosopher John Searle.
2. Searle’s Chinese Room Thought Experiment
If you regularly read about AI (or consciousness), you’ve probably heard of Searle’s Chinese Room Thought Experiment. But, just in case you haven’t, or you need a refresher, here’s the set-up:
Imagine a person who does not understand Chinese sitting inside a room. This room contains a comprehensive set of rules in English (a sort of manual) that dictates how to respond to Chinese characters fed into the room.
Chinese characters are passed into the room, and the person inside uses the rule book to find the correct responses to these characters, even though they don't understand a word of Chinese. The responses are then passed out of the room.
To an outside observer, it appears the room understands Chinese because it can take Chinese questions and produce accurate Chinese answers. However, the person inside the room doesn’t understand Chinese at all; they're simply following a set of instructions.
Searle argues that, similar to the Chinese Room, a computer running a program can process inputs (like language) and produce outputs (like responses) without understanding the input or output in any meaningful way. The computer, like the person in the room, is simply following a set of syntactical rules without any comprehension of the meaning behind them.
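To see the shape of Searle's point, here is a deliberately tiny sketch (my own illustration, not Searle's): a program that maps Chinese questions to Chinese answers by pure symbol lookup. It can produce sensible-looking output while understanding nothing at all.

```python
# A toy "rule book" (my own hypothetical example): incoming Chinese strings are
# mapped to outgoing Chinese strings by pure lookup. The program manipulates
# symbols it does not understand, which is exactly Searle's point.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather today?" -> "The weather is nice today."
}

def chinese_room(incoming: str) -> str:
    # Follow the rule book; if no rule applies, hand back a stock reply.
    return RULE_BOOK.get(incoming, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```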
Critics of the Turing Test use the Chinese Room Thought Experiment to argue that behavioural tests don’t tell us anything about true understanding. While a machine might act like it has conscious understanding and pass the Turing Test, this does not mean the same thing as genuine understanding.
Searle's thought experiment has been both influential and controversial, prompting extensive debate and a range of counterarguments in the fields of AI and cognitive science.
Here are three common objections:
The first objection comes from the philosopher Daniel Dennett. Dennett argues that while the individual (in the room) might not understand Chinese, the system as a whole (the person plus the input plus the rule book) could be said to understand it. Likewise, individual neurons in our brains don't understand language, but the brain as a system does.
The second objection criticises the thought experiment itself. The Chinese Room, as Searle describes it, would never convince anyone that it was conscious. The simple look-up table he imagines, which matches incoming Chinese characters to scripted Chinese responses, could never produce the sort of answers that would pass the Turing Test; it simply lacks the required complexity. Imagine we asked our laughing robot to explain why it was laughing. Any response that convinced us it understood the joke would require far more complexity than a simple look-up table.
The third objection is that behavioural tests like the Turing Test are, in fact, sufficient to determine consciousness — in fact, we use them all the time to do just that. Earlier, when we imagined a human laughing at a funny movie, we had no problem ascribing a conscious understanding of the joke to the human just by observing behaviour.
When we imagined a laughing robot, I suggested that some of us would question whether the laughing behaviour was caused by a conscious understanding of the humour or just by clever programming. But is that true?
In popular films like Her and Ex Machina, when AI characters display behaviours typically associated with consciousness—expressing emotions, making decisions, or engaging in meaningful conversations—we easily treat these as signs of a conscious mind. In movies, behaviour is enough for us to assume that the AI is conscious. And if we think it’s not, should we be questioning whether the humans are conscious, too?
The idea that observable behaviour is enough to determine consciousness is rooted in functionalism, the theory that mental states, consciousness included, are defined by the functional roles they play rather than by what they are made of. If an AI functions as though it is conscious, showing responses and adaptations akin to a conscious being, functionalism suggests that it is valid to treat the AI as conscious.
A form of functionalism called computational functionalism is currently the most widely held view among artificial intelligence researchers and many neuroscientists.
So let’s take a look at how neuroscientists are aiming to answer our question — If AI were conscious, how would we know?
3. The Science of Consciousness
Recently, the renowned journals Nature and Science each published articles addressing our exact question — If AI were conscious, how would we know? The articles were about a report that suggests using what we know about the brain and consciousness to figure out if and how AI could be conscious. The report proposes a checklist of features to look for in AI systems to see how close they are to having a form of consciousness. Right now, no AI has all these features, but the report suggests that it's possible to work towards this.
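To picture how such a checklist might work in practice, here is a purely illustrative sketch. The indicator names are placeholders I have made up for the example; they are not the report's actual criteria.

```python
# A purely illustrative checklist (placeholder indicator names, not the report's
# actual criteria). Each entry records whether the AI system under assessment
# shows that feature.

system_indicators = {
    "recurrent_processing": True,               # hypothetical placeholder
    "global_broadcast_of_information": False,   # hypothetical placeholder
    "higher_order_self_monitoring": False,      # hypothetical placeholder
    "agency_and_embodiment": False,             # hypothetical placeholder
}

satisfied = sum(system_indicators.values())
total = len(system_indicators)
print(f"System satisfies {satisfied} of {total} indicators")

# Counting satisfied indicators gives a rough sense of how close a system comes
# to having the features the report says to look for.
```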
While this concept of a consciousness checklist is intriguing, it has sparked considerable debate within the scientific community. Objections centre around the following key points.
The theories included in the report are based mostly on data from humans. Some have questioned the relevance of using human-based theories when assessing consciousness in AI.
Others have raised questions about what neuroscience is actually measuring. Most neuroscientists who research consciousness do not make the strong claim that they are measuring consciousness itself; rather, they mostly measure the neural activity associated with a conscious report (i.e., what people say they are experiencing), which, some argue, is not necessarily the conscious experience itself.
Others object to the checklist itself. To create it, a group of scientists drew indicators of consciousness from popular scientific theories, but a theory was included only if it was compatible with a computational functionalist view of consciousness. That is, the theories had to align with the idea that consciousness is simply a matter of performing the right kind of computations in the right kind of way. You might be wondering: is this consciousness?
All of these objections centre around one fundamental question — what is consciousness?
If you are questioning whether our laughing robot was, in fact, laughing, you might also be wondering whether the scientists are missing something here. Can computation account for what it feels like to experience joy, sadness, daydreams, touch, taste, smells, aspirations, apprehensions, uncertainties, excitement, remorse, and the colour red? Is consciousness computational?
In the next issue…
Is consciousness computational?
Spend any time in the tech community, and you’ll run into the widely held view that sufficiently advanced AI might one day be conscious. Some might even argue that it is not just a future possibility but an imminent reality — sentient computers are already here. But could a computer running algorithms really have conscious experiences? Is consciousness computational?
Find it in your inbox on Tuesday, January 30, 2024.