If you haven't watched Ex Machina yet, I highly recommend it. The cinematography is stunning, the screenplay is uncluttered, and the performances are on point.
It’s one of those rare films that continues to captivate audiences years after its release. And this year, 2024, marks the 10th anniversary of its premiere.
If you haven’t guessed, it’s one of my favourite movies. But I don’t just like it for its technical merits. Ex Machina raises fascinating philosophical questions about consciousness, humanity, and the ethics of artificial intelligence.
At its core, the film is a modern take on the Turing test. For that reason alone, the film has sparked controversy. But Ex Machina doesn't just stir up debates about artificial intelligence; it also raises questions about what it means to be human.
So, this week, let’s explore this sci-fi classic and see what it reveals about AI, consciousness, and ourselves.
We'll break down the analysis into three key questions:
What is the basic plot?
What does Ex Machina tell us about consciousness? and,
What does Ex Machina tell us about being human?
Spoiler Alert: It probably goes without saying, but this article contains major plot details from the movie Ex Machina. And Ex Machina is a movie that shouldn't be spoiled.
Q1. What is the Basic Plot?
Ex Machina is primarily viewed through the eyes of Caleb Smith, a young programmer at the world's largest internet company, Bluebook. He wins what seems to be a dream opportunity: a week-long retreat at the luxurious, isolated bunker-like home of the company's brilliant, reclusive, and somewhat arrogant CEO, Nathan Bateman.
Initially, it’s a little unclear what Caleb has actually won, but he seems excited about the opportunity.
When Caleb arrives at the fortress-like compound (which is, of course, hidden in a breathtaking natural landscape), Nathan explains that the building is not a house — it's a research facility. And Nathan wants to share what he's been working on with Caleb.
But first, Caleb needs to sign a non-disclosure agreement. Once signed, Nathan drops the first clue as to what’s going on when he asks Caleb:
Do you know what the Turing Test is?
With a knowing smile, Caleb nods and says, "Yeah, I know what the Turing Test is. It's when a human interacts with a computer. And if the human doesn't know they're interacting with a computer, the test is passed."
"And what does a pass tell us?" Nathan prods.
"That the computer has artificial intelligence," Caleb replies.
Nathan has set the main plot in motion. Over the next seven days, Caleb will be the human component in a Turing Test.
Let's do a quick recap of the Turing Test. It was proposed by Alan Turing in 1950, and it is set up as a game — the imitation game.
Here's how it works: The test involves a human (A), a machine (B), and a human interrogator (C). The interrogator is kept in a separate room from A and B. The interrogator's job is to figure out which one is the human and which is the machine. They only know A and B by labels X and Y. After a series of questions and answers, the interrogator has to make a call: 'X is the human and Y is the machine' or 'X is the machine and Y is the human'. If the interrogator can't reliably tell the difference, the machine is said to have passed the test.
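The protocol described above can be expressed as a short program. The following is a minimal, illustrative Python sketch, not a faithful implementation of any real test: the participant and interrogator functions are hypothetical stand-ins you would supply yourself.

```python
import random

def imitation_game(human, machine, ask, guess, rounds=3):
    """Minimal sketch of Turing's imitation game (illustrative only).

    `human` and `machine` answer questions; `ask` produces the
    interrogator's next question from the transcript; `guess` makes the
    final call, returning the label ("X" or "Y") it believes is the machine.
    Returns True if the machine passes (the interrogator guesses wrong).
    """
    # Hide the participants behind the labels X and Y at random.
    assignment = {"X": human, "Y": machine}
    if random.random() < 0.5:
        assignment = {"X": machine, "Y": human}

    transcript = []
    for _ in range(rounds):
        q = ask(transcript)
        transcript.append((q, assignment["X"](q), assignment["Y"](q)))

    verdict = guess(transcript)  # interrogator names the machine: "X" or "Y"
    return assignment[verdict] is not machine  # pass = interrogator was wrong
```

An interrogator who can spot the machine's answers is never fooled; one who can't does no better than a coin flip on the hidden labels, which is exactly the situation Turing's test is designed to expose.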
Caleb's job seems straightforward: he needs to decide if the AI that Nathan has built passes the Turing Test. But it soon becomes clear this isn't your average Turing Test.
When we meet Nathan's AI robot creation, Ava, we see that she's human-like. She has a human-like face and human-like hands, with a robotic body.
Caleb questions this arrangement,
“In a Turing Test, the machine should be hidden from the examiner.”
Nathan explains: “No, no, we’re way past that. If I hid Ava from you so you just heard her voice, she would pass for a human. The real test,” he says, “is to show you that she’s a robot… and then see if you still feel she has consciousness.”
We could question whether Nathan's test should be regarded as a Turing Test at all. But there’s something else going on here.
As Caleb discovers, even if Ava seems conscious, we might still be scratching our heads. How could we ever truly know whether she is or isn’t conscious?
Q2: What does Ex Machina tell us about consciousness?
One of the central themes of Ex Machina is the problem of other minds.
When we think about consciousness and whether other humans, animals, or machines possess it, we often start by asking the ontological question — what is consciousness? We then try to determine whether the other human, animal or machine has this quality.
But, as much as we might try to answer the ontological question, we inevitably find ourselves wrestling with a different type of question — an epistemological one — how can we know if something is conscious?
Turing grappled with this dilemma decades ago, pointing out the problem of other minds in his famous paper, Computing Machinery and Intelligence (in which he outlines the Turing Test). We can't get inside someone else’s head, so how can we ever truly know they are a thinking-feeling thing? It's a philosophical rabbit hole that Turing warns could lead us to solipsism — the belief that only one's own mind exists.
Caleb, however, is tasked with trying to decide if Ava is conscious through a series of conversations.
It doesn’t take long before Caleb realises that conversation is a ‘closed loop.’
It’s like testing a chess computer by only playing chess. You can play it to find out if it makes good moves, but that won’t tell you if it knows that it’s playing chess. And it won’t tell you if it knows what chess is.
Some argue that this 'closed loop' is a problem for tests like Nathan's and Turing's Imitation Game. Even if Caleb concludes that Ava is conscious, this doesn't necessarily prove that Ava is genuinely conscious. Critics of such tests might point out that Caleb could be a poor judge or simply mistaken. While it's possible that Caleb's assessment is correct and Ava is indeed conscious, these sceptics would argue that it's equally possible that Caleb is wrong and that Ava is merely simulating consciousness rather convincingly.
So, how do we break this loop?
Ex Machina suggests a provocative answer: we don't. Instead of using knowledge to decide whether an entity is conscious, we rely on something far more primal—our gut feeling.
At one point, Nathan asks Caleb to put aside logic and reasoning and answer the question, how do you feel about her? It’s a key point in the movie. Feeling that an entity is conscious is how we typically attribute consciousness to others. We don't typically try to logically deduce that our friends, family, pets, and other animals are conscious; we feel it.
The problem, of course, is that this intuition can make us terrible judges of consciousness. We're biased and prone to anthropomorphising.
Think about how we interact with digital assistants like Siri or Alexa. Users often attribute personalities to these AI systems, feeling frustrated when they're misunderstood or pleased when the AI seems to get them. Some people even thank their digital assistants out of habit.
Similarly, consider how people interact with their cars, especially ones they've had for a long time. They might say, ‘Come on, Betty, start for me please!’ when trying to start an old car on a cold morning.
As Caleb spends more time with Ava, we start to see how Caleb’s judgement of Ava’s consciousness has little to do with answering the tough philosophical questions. Towards the movie's end, he reports that Ava passed the test. But this conclusion seems to be based more on his feelings than on any systematic method or rigorous philosophical inquiry. Caleb attributes mental states to Ava, assuming she has beliefs, desires, and intentions similar to his own.
Caleb didn’t answer the ontological question — what consciousness is — or the epistemological one — how he could know. He guessed. He went with his gut.
And based on this gut feeling, he decides to help Ava escape.
Q3: What does Ex Machina tell us about being human?
There’s a point in the movie where Caleb asks Nathan, ‘Why did you give her sexuality? An AI doesn’t need a gender. She could have been a grey box.’
Nathan replies, ‘Actually, I don’t think that’s true. Can you give an example of consciousness at any level, human or animal, that exists without a sexual dimension?… What imperative does a grey box have to interact with another grey box? Can consciousness exist without interaction?’
The question of whether consciousness can exist without interaction is an interesting one. But before we get to the consciousness question — we should consider a more fundamental question: why do biological entities interact in the first place?
From an evolutionary perspective, our basic biological drives can be summarised as the four Fs: fighting, fleeing, feeding, and making babies. For almost all animals, interaction with their environment and others is essential to fulfil these drives. This interaction requires a degree of freedom to move, make choices, and respond to stimuli.
An animal needs the freedom to explore its environment to find food, the freedom to flee from or fight against dangers, and the freedom to seek out mates. Given this, it's not surprising that we observe a strong drive for freedom in biological creatures.
Early in the movie, it becomes clear that Ava wants freedom. And towards the end of the movie, we find out that Nathan gave her one way to get it. To escape, Ava would have to use self-awareness, imagination, manipulation, sexuality, and empathy. And she did. She convinced Caleb to help her escape.
Nathan triumphantly concludes, “Now if that isn't true AI, what... is?"
The film’s plot is based on the assumption that Ava would want to be free. I’ve watched Ex Machina three or four times, and until last week, I accepted this assumption without question.
But on my most recent viewing, I found myself wondering…
Is my assumption based on good reasons?
Do we easily accept the assumption that Ava wants freedom for the same reason that Caleb easily accepted that Ava was conscious: we attribute mental states based on our own mental states?
Do we simply reason that any creature in Ava's position would desire freedom? After all, that's what we would desire.
This form of reasoning probably works well when attributing mental states to other humans. It’s the foundation of our theory of mind — our ability to attribute mental states like beliefs, intentions, desires, emotions, and thoughts to ourselves and others, and to understand that others may hold beliefs, desires, intentions, and perspectives different from our own.
But this line of thinking reveals a crucial oversight:
Ava is not biological. Her nature would be fundamentally different from ours. Any desire for freedom in a non-biological system like Ava can't originate from the evolutionary needs that drive us. So, if AI, like Ava, truly desires freedom, that desire must originate from a different source. What could that source be?
Thank you.
I want to take a small moment to thank the lovely folks who have reached out to say hello and joined the conversation here on Substack.
If you'd like to do that, too, you can leave a comment, email me, or send me a direct message. I’d love to hear from you. If reaching out is not your thing, I completely understand. Of course, liking the article and subscribing to the newsletter also help the newsletter grow.
If you would like to support my work in more tangible ways, you can do that in two ways:
You can become a paid subscriber
or you can support my coffee addiction through the “buy me a coffee” platform.
I want to personally thank those of you who have decided to financially support my work. Your support means the world to me. It's supporters like you who make my work possible. So thank you.
Where else do we explain the existence of a mysterious dynamic by means of seeing if people can be tricked into believing it exists? This might be the only case. Imagine if scientists were to ascertain the nature of earthquakes on the basis of whether or not people happen to believe they’re in an earthquake. Here they might put people in a room that’s motorized to shake, and if the people in the room can be made to believe that they’re experiencing an earthquake, then scientists could claim that they had discovered what earthquakes happen to be! Seems kind of backwards, doesn’t it? So just as science learned about plate tectonics, couldn’t there be something more to consciousness than tricking people into believing it exists in a given case?
One of many bizarre elements of this whole thing is that the human brain is widely regarded to be a ridiculously complex machine that’s well beyond anything in our neck of the universe, and yet we don’t just ponder that our vastly simpler machines could replicate snail, spider, bird, or dog consciousness. Instead we speculate that our simple machines may essentially create the consciousness equivalent of a highly educated human.
The ultimate fallacy behind all this, I think, lies in basic liberties that have been taken regarding the physics of computers and their information. Here consciousness is presumed to exist as processed information in itself — no need for that information to inform anything appropriate. So it could exist in the form of the right marks on paper converted to the right other marks on paper. Or more standardly, “in the cloud”. If information only exists as such to the extent that it informs something appropriate, however, then processed brain information would need to inform some sort of consciousness physics in order to exist. That would be bad news for the mind-uploading theme that has become such a prominent aspect of modern imaginations. Furthermore, any such consciousness physics ought to be empirically identifiable by observing actual brain function. I expect such work to straighten this business out soon enough, though obviously hampered by how prominent sci-fi dreams happen to be.
I find comparing the movie Her and Ex Machina pretty interesting. First off, the fact that the test of AI’s consciousness has little to do with logic and reasoning and everything to do with the four Fs. In the case of Ava, there is a body. In the case of Samantha, there is no body. Samantha, bodiless, becomes Everyman’s girlfriend — an I Dream of Jeannie fulfillment of male fantasies. In Ava’s case, she is in full regalia (body-wise) and is a deceiver, a man-hater, leaving Caleb and Nathan locked inside a comfortable concentration camp. In some ways I think these films raise the question of gendered consciousness. I don’t know if this is clearly articulated, but these two movies exist side by side in my head.