If you haven't watched Ex Machina yet, I highly recommend it. The cinematography is stunning, the screenplay is uncluttered, and the performances are on point.
It’s one of those rare films that continues to captivate audiences years after its release. And this year, 2024, marks the 10th anniversary of its premiere.
If you haven’t guessed, it’s one of my favourite movies. But I don’t just like it for its technical merits. Ex Machina raises fascinating philosophical questions about consciousness, humanity, and the ethics of artificial intelligence.
At its core, the film is a modern take on the Turing test. For that reason alone, the film has sparked controversy. But Ex Machina doesn't just stir up debates about artificial intelligence; it also raises questions about what it means to be human.
So, this week, let’s explore this sci-fi classic and see what it reveals about AI, consciousness, and ourselves.
We'll break down the analysis into three key questions:
What is the basic plot?
What does Ex Machina tell us about consciousness? and,
What does Ex Machina tell us about being human?
Spoiler Alert: It probably goes without saying, but this article contains major plot details from Ex Machina. And Ex Machina is a movie that shouldn't be spoiled.
Q1. What is the Basic Plot?
Ex Machina is primarily viewed through the eyes of Caleb Smith, a young programmer at the world's largest internet company, Bluebook. He wins what seems to be a dream opportunity: a week-long retreat at the luxurious, isolated bunker-like home of the company's brilliant, reclusive, and somewhat arrogant CEO, Nathan Bateman.
Initially, it’s a little unclear as to what Caleb has actually won, but he seems excited about the opportunity.
When Caleb arrives at the fortress-like compound (which is, of course, hidden in a breathtaking natural landscape), Nathan explains that the building is not a house — it's a research facility. And Nathan wants to share what he's been working on with Caleb.
But first, Caleb needs to sign a non-disclosure agreement. Once signed, Nathan drops the first clue as to what’s going on when he asks Caleb:
Do you know what the Turing Test is?
With a knowing smile, Caleb nods and says, "Yeah, I know what the Turing Test is. It's when a human interacts with a computer. And if the human doesn't know they're interacting with a computer, the test is passed."
"And what does a pass tell us?" Nathan prods.
"That the computer has artificial intelligence," Caleb replies.
Nathan has set the main plot in motion. Over the next seven days, Caleb will be the human component in a Turing Test.
Let's do a quick recap of the Turing Test. It was proposed by Alan Turing in 1950, and it is set up as a game — the imitation game.
Here's how it works: The test involves a human (A), a machine (B), and a human interrogator (C). The interrogator is kept in a separate room from A and B. The interrogator's job is to figure out which one is the human and which is the machine. They only know A and B by labels X and Y. After a series of questions and answers, the interrogator has to make a call: 'X is the human and Y is the machine' or 'X is the machine and Y is the human'. If the interrogator can't reliably tell the difference, the machine is said to have passed the test.
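The setup above is procedural enough to sketch in a few lines of code. This is purely illustrative: the function names, the toy "sharp" interrogator, and the telltale machine answers are all my own inventions, not part of any real benchmark or library.

```python
import random

def imitation_game(human_reply, machine_reply, interrogator, questions):
    """One round of Turing's imitation game.

    A human and a machine are hidden behind the labels X and Y.
    The interrogator sees only the labelled answers and must guess
    which label belongs to the human. Returns True if the machine
    "passes" this round, i.e. the interrogator guesses wrong.
    """
    # Randomly assign the hidden players to the labels X and Y.
    labels = ["X", "Y"]
    random.shuffle(labels)
    assignment = dict(zip(labels, ["human", "machine"]))
    players = {"human": human_reply, "machine": machine_reply}

    # The interrogator questions both players through their labels only.
    transcript = {
        label: [players[role](q) for q in questions]
        for label, role in assignment.items()
    }

    # The interrogator names the label they believe is the human.
    guess = interrogator(transcript)

    # The machine passes when the human is misidentified.
    return assignment[guess] != "human"

# A toy example: the machine gives itself away with a telltale prefix,
# and a sharp interrogator spots it every time.
def human(q):
    return "hmm, let me think about " + q

def machine(q):
    return "ANSWER: " + q

def sharp_interrogator(transcript):
    # Pick the label whose answers lack the telltale machine prefix.
    for label, answers in transcript.items():
        if not any(a.startswith("ANSWER") for a in answers):
            return label

print(imitation_game(human, machine, sharp_interrogator, ["Do you dream?"]))
```

Against this interrogator the machine never passes, whichever label the shuffle hands it. The "reliably" in Turing's formulation matters: a single round proves little, so in practice you would run many rounds and look at the interrogator's error rate.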
Caleb's job seems straightforward: he needs to decide if the AI that Nathan has built passes the Turing Test. But it soon becomes clear this isn't your average Turing Test.
When we meet Nathan's AI robot creation, Ava, we see that she's human-like. She has a human-like face and human-like hands, with a robotic body.
Caleb questions this arrangement,
“In a Turing Test, the machine should be hidden from the examiner.”
Nathan explains: "No, no, we're way past that. If I hid Ava from you so you just heard her voice, she would pass for a human. The real test," he says, "is to show you that she's a robot… and then see if you still feel she has consciousness."
We could question whether Nathan's test should be regarded as a Turing Test at all. But there’s something else going on here.
As Caleb discovers, even if Ava seems conscious, we might still be scratching our heads. How could we ever truly know whether she is or isn’t conscious?
Q2: What does Ex Machina tell us about consciousness?
One of the central themes of Ex Machina is the problem of other minds.
When we think about consciousness and whether other humans, animals, or machines possess it, we often start by asking the ontological question — what is consciousness? We then try to determine whether the other human, animal or machine has this quality.
But, as much as we might try to answer the ontological question, we inevitably find ourselves wrestling with a different type of question — an epistemological one — how can we know if something is conscious?
Turing grappled with this dilemma decades ago, pointing out the problem of other minds in his famous paper, Computing Machinery and Intelligence (in which he outlines the Turing Test). We can't get inside someone else’s head, so how can we ever truly know they are a thinking-feeling thing? It's a philosophical rabbit hole that Turing warns could lead us to solipsism — the belief that only one's own mind exists.
Caleb, however, is tasked with trying to decide if Ava is conscious through a series of conversations.
It doesn’t take long before Caleb realises that conversation is a ‘closed loop.’
It’s like testing a chess computer by only playing chess. You can play it to find out if it makes good moves, but that won’t tell you if it knows that it’s playing chess. And it won’t tell you if it knows what chess is.
Some argue that this 'closed loop' is a problem for tests like Nathan's and Turing's Imitation Game. Even if Caleb concludes that Ava is conscious, this doesn't necessarily prove that Ava is genuinely conscious. Critics of such tests might point out that Caleb could be a poor judge or simply mistaken. While it's possible that Caleb's assessment is correct and Ava is indeed conscious, these sceptics would argue that it's equally possible that Caleb is wrong, and that Ava is merely simulating consciousness rather convincingly.
So, how do we break this loop?
Ex Machina suggests a provocative answer: we don't. Instead of using knowledge to decide whether an entity is conscious, we rely on something far more primal: our gut feeling.
At one point, Nathan asks Caleb to put aside logic and reasoning and answer the question, how do you feel about her? It’s a key point in the movie. Feeling that an entity is conscious is how we typically attribute consciousness to others. We don't typically try to logically deduce that our friends, family, pets, and other animals are conscious; we feel it.
The problem, of course, is that this intuition can make us terrible judges of consciousness. We're biased and prone to anthropomorphising.
Think about how we interact with digital assistants like Siri or Alexa. Users often attribute personalities to these AI systems, feeling frustrated when they're misunderstood or pleased when the AI seems to get them. Some people even thank their digital assistants out of habit.
Similarly, consider how people interact with their cars, especially ones they've had for a long time. They might say, ‘Come on, Betty, start for me please!’ when trying to start an old car on a cold morning.
As Caleb spends more time with Ava, we start to see how Caleb’s judgement of Ava’s consciousness has little to do with answering the tough philosophical questions. Towards the movie's end, he reports that Ava passed the test. But this conclusion seems to be based more on his feelings than on any systematic method or rigorous philosophical inquiry. Caleb attributes mental states to Ava, assuming she has beliefs, desires, and intentions similar to his own.
Caleb didn’t answer the ontological question — what consciousness is — or the epistemological one — how he could know. He guessed. He went with his gut.
And based on this gut feeling, he decides to help Ava escape.
Q3: What does Ex Machina tell us about being human?
There’s a point in the movie where Caleb asks Nathan, ‘Why did you give her sexuality? An AI doesn’t need a gender. She could have been a grey box.’
Nathan replies, ‘Actually, I don’t think that’s true. Can you give an example of consciousness at any level, human or animal, that exists without a sexual dimension?… What imperative does a grey box have to interact with another grey box? Can consciousness exist without interaction?’
The question of whether consciousness can exist without interaction is an interesting one. But before we get to the consciousness question — we should consider a more fundamental question: why do biological entities interact in the first place?
From an evolutionary perspective, our basic biological drives can be summarised as the four Fs: fighting, fleeing, feeding, and making babies. For almost all animals, interaction with their environment and others is essential to fulfil these drives. This interaction requires a degree of freedom to move, make choices, and respond to stimuli.
An animal needs the freedom to explore its environment to find food, the freedom to flee from or fight against dangers, and the freedom to seek out mates. Given this, it's not surprising that we observe a strong drive for freedom in biological creatures.
Early in the movie, it becomes clear that Ava wants freedom. And towards the end of the movie, we find out that Nathan gave her one way to get it. To escape, Ava would have to use self-awareness, imagination, manipulation, sexuality, and empathy. And she did. She convinced Caleb to help her escape.
Nathan triumphantly concludes, “Now if that isn't true AI, what... is?"
The film’s plot is based on the assumption that Ava would want to be free. I’ve watched Ex Machina three or four times, and until last week, I accepted this assumption without question.
But on my most recent viewing, I found myself wondering…
Is my assumption based on good reasons?
Do we easily accept the assumption that Ava wants freedom for the same reason that Caleb easily accepted that Ava was conscious: we attribute mental states based on our own mental states?
Do we simply reason that any creature in Ava's position would desire freedom? After all, that's what we would desire.
This form of reasoning probably works well when attributing mental states to other humans. It’s the foundation of our theory of mind — our ability to attribute mental states — like beliefs, intentions, desires, emotions, and thoughts — to ourselves and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from our own.
But this line of thinking reveals a crucial oversight:
Ava is not biological. Her nature would be fundamentally different from ours. Any desire for freedom in a non-biological system like Ava can't originate from the evolutionary needs that drive us. So, if AI, like Ava, truly desires freedom, that desire must originate from a different source. What could that source be?
Thank you.
I want to take a small moment to thank the lovely folks who have reached out to say hello and joined the conversation here on Substack.
If you'd like to do that, too, you can leave a comment, email me, or send me a direct message. I’d love to hear from you. If reaching out is not your thing, I completely understand. Of course, liking the article and subscribing to the newsletter also help the newsletter grow.
If you would like to support my work in more tangible ways, you can do that in two ways:
You can become a paid subscriber
or you can support my coffee addiction through the “buy me a coffee” platform.
I want to personally thank those of you who have decided to financially support my work. Your support means the world to me. It's supporters like you who make my work possible. So thank you.