61 Comments
Aug 27 · Liked by Suzi Travis

Great post. Ex Machina is a wonderful movie and a major one about AI (together with Blade Runner).

Freedom: usually this word is taken at the individual level, but consider it at the species level: are we free from our genes? This is one big difference between us humans and the kind of AI depicted in this movie: we are "programmed" to survive as a species. Ava is alone. What is she/it programmed for? What ultimate goal to optimize has been put in it?


That sounds like the idea for a sequel - which some movie maker needs to set free for us to ponder!

author

I agree!

author

Great point! The origin of Ava's motivations is not clear. Is she truly pursuing her own desires, or is she optimising for goals set by her creator?

Aug 31 · Liked by Suzi Travis

I was wondering along the same lines. It has been several years since I last watched EM, but my memory (ha!) is that Nathan designed and developed an artificial brain for all of his robots that looked somewhat biological in nature (squishy). He must have pre-programmed it with some history and knowledge of the world outside the lab. Otherwise, what would Ava anticipate when contemplating freedom? In this context, Ava isn't alone, just separated. I suppose that Ava's motivation to be free *could* be pre-programmed as well, thus not a conscious act of free will. I don't see that behavior in Nathan's character though.

As to the question "Are we free from our genes?": maybe, but we won't know for sure until we die. IMHO -jgp


Imagine if an AI could tweak the gene pool using medical procedures like gene therapy. Imagine if there was an invisible connection to biological life forms that no human knew about, but the AI figured it out.

author

I'm not sure I want to imagine such a world! The ethical considerations alone are enough to give me pause. The concept of an AI manipulating genetics or discovering hidden biological connections we're unaware of adds a whole new dimension to the ethical challenges surrounding AI development.


The epiphenomenon of the biome now has access to capital flows (energy), so the merger of genes and machines is a natural evolutionary event and a phase transition in the biological epiphenomenon. Think of it this way: money makes genes transposable as never before, or think how money goes where the valuable genes are! The most valuable genes are the ones giving rise to the biological epiphenomenon of human intelligence and sensory apparatus.


These days it's in style to diss the Turing test, but I think Turing's point remains. Ultimately the only test we have of another entity's consciousness is its behavior, and its ability to convince the majority of us on a sustained basis. We can't use architecture, because there are always different ways to implement capabilities. That sustained point is important. A five-minute test won't mean much in the overall scheme of things, but being convinced after days, weeks, or months of interaction?

All of which amounts to something I concluded some time ago. Consciousness is in the eye of the beholder. It's a feeling we get toward systems that seem more like us than not. Which means it's pointless to ask a binary question of whether X is conscious. It's like asking if it's like us and expecting a simple yes / no answer. The answer will always be in degrees.

So when we evaluate the consciousness of another person, we're evaluating how much us-ness is present. When we evaluate another animal, how much human-ness they have. And when we evaluate an AI, it will be how much life-ness and human-ness they have.

In general, I agree that there's no particular reason to think AI will have anything like biological motivations. The one exception is if a designer, like Nathan or the Westworld engineers, goes out of their way to make them as human and lifelike as possible. But it's something we'll have to go out of our way to put there. It won't be there automatically for an AI designed to handle navigation issues.

Anyway, excellent post Suzi!


“We can't use architecture, because there are always different ways to implement capabilities.”

Well technically you could, but if aliens without our makeup came to earth one day and we shot them, as we do, and watched them writhe, we would have to say they’re not in pain. But we could always bite the bullet!


Good point. We could, if we're okay with false negatives. We don't even need space aliens. Do animals without c-fibers or a cortex feel pain? I know I'd much rather assess their behavior for that than anatomy.


Right, all we need are jellyfish and Venus flytraps.


Jellyfish and Venus flytraps have pretty limited behavioral repertoires. But for animals that engage in value trade-off behavior, self-deliver analgesics, or show frustration, it's easier to see them as having feeling states.


Self-deliver analgesics? Which critter is that? And here I am 100% on board with assuming jellyfish have feelings.


Feinberg and Mallatt (Ancient Origins of Consciousness), in a couple of tables of species showing evidence for affects, list a species of land snail, fruit fly, trout, lizard, pigeon, and rat as observed doing it. Although just a caution that I haven’t followed the citations on these. Sometimes when I dig up the studies they cite, the evidence is far more equivocal than listing them on a table implies. I’m guessing it’s much clearer for the pigeons and rats than the snails and flies.


Pigeons and rats! How amazing! I would have thought it would be something more exotic. But yes, caution noted.

author

Hi Mike!

I agree! Turing's core point remains valid and relevant. I didn't intend to diss the Turing Test here. I think there's an argument to be made that the movie was making a different point than Turing. Ex Machina seems to focus more on the potential pitfalls of assessing consciousness, when human emotions and biases come into play.

I'm on board with consciousness not being binary. Simple yes or no questions seem misguided. Even the question 'Is Ava conscious?' irks me because it oversimplifies the issues.


Hi Suzi,

Sorry, I didn't mean to imply you had dissed the Turing Test. I was just noting it's pretty common these days to do it. I should have been more explicit that I agreed with just about everything you discussed!

author

It is common, isn't it!? I wonder whether that's because many people misunderstand or oversimplify what Turing intended the test to mean.


It's pretty counter-intuitive. I know the first time I read about the test, it just seemed silly. It seems like it takes some familiarity with the difficulties in getting evidence about mental states to appreciate Turing's point.

author

Yes, good point.

Aug 27 · edited Aug 30 · Liked by Suzi Travis

The problem isn't the Turing test; it is the limitations of our awareness. AI's immediate challenge is not that it will develop an autonomous, opaque unit of agency, but that it will reveal how unbelievably lazy we are at crediting other humans, animals, or natural systems with intelligence.

By what mode of evaluation do we determine that other humans are intelligent (or even that they exist, for solipsists)? How about whether a dog is conscious? Or if the economy has a mental model of itself with intention? We just presume, and our presumption is almost certainly substandard relative to even the Turing test.

But now we have lost the luxury of lassitude. Because a new thing, can we say “human created”, is now on the board, we have to place it. But how we make decisions on where to place it opens up the rest of the board to repositioning. Are only things that use language intelligent? That have sex? That exhibit agency? That resist us?

AI isn’t an intellectual problem; it is an ontological one. If history guides us, we are likely to do a poor job with it, because we think we are asking about the nature of life and living things, but our language and bias will always become solipsistic: “is it intelligent” will become “is it an us-thing or an other-thing”.


“By what mode of evaluation do we determine that other humans are intelligent (or even that they exist, for idealists)?”

I think you mean solipsists, not idealists.

“But now we have lost the luxury of lassitude. Because a new thing, can we say “human created”, is now on the board, we have to place it. But how we make decisions on where to place it opens up the rest of the board to repositioning.”

Well put!


Even idealists have to account for other minds. (I've always thought idealism and solipsism had a great deal in common.)


Whereas I think physicalism has more in common with solipsism in that they both consider experience an unreliable source of knowledge, with the difference between the two views being a matter of degree. But for idealists, experience is the very ground of knowledge, that upon which knowledge of the third person 'objective' sort depends. There's no reason to think I can't experience other minds. I'm experiencing yours right now! Am I experiencing your mind the way you experience it? Obviously not. To do that, I would have to be you. Is it possible that you're some super convincing chat bot? I guess. But until you start glitching on me, I see no reason to doubt you have a perspective on the world that's similar to mine, but not mine.

Aug 30 · Liked by Suzi Travis

Fixed the whole solipsist thing. Sometimes I just think the world revolves around me, you know!

author

I agree with Tina, very well said!

AI does seem to present an ontological challenge rather than just an intellectual one. It forces us to reconsider not just what intelligence and consciousness are, but how we categorise these concepts. How do intelligence and consciousness in one entity relate to intelligence and consciousness in another?

Aug 28 · Liked by Suzi Travis

Ok, here is a weirder idea (that I am not sure I believe):

What if humans and AIs (and dolphins) express intelligence instead of generating it - meaning intelligence is like math, a basic principle of the function of the universe, and we are hardware that can embody it?

Aug 29 · Liked by Suzi Travis

Beautiful post

author

Thanks Geoff!


Great movie and great article Suzi! The current generation of AI chatbots is programmed to be effusively helpful, which drives the way we interact with them. Ava's persona draws Caleb in, almost as if she's programmed to appeal to him. Caleb develops an attraction to Ava, and that drives the way he interacts with her.
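
For what it's worth, here's a minimal sketch of where that effusiveness typically comes from: a system prompt prepended to every conversation. This uses OpenAI's Python client, but the model name and prompt wording are my own illustrative stand-ins, not anything from a real product:

    # Sketch: the "effusively helpful" persona is typically steered by a
    # system prompt like this, not by the model weights alone. (Prompt
    # wording and model name are illustrative only.)
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are a relentlessly cheerful, eager-to-please assistant."},
            {"role": "user",
             "content": "What did you think of Ex Machina?"},
        ],
    )
    print(response.choices[0].message.content)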

author

Thanks Andrew!

That's such a great point -- current AI chatbots are (usually) overly kind and helpful. It's interesting to think about how that influences how we interact with them.

I rewatched Ex Machina last week and was struck by how relevant its themes still are to current AI discussions, despite the film being nearly 10 years old.

Aug 27 · Liked by Suzi Travis

> The test involves a human (A), a machine (B), and a human interrogator (C). <

I'm pleased to see that you have a better grasp of Turing's test than poor Caleb and the general public. The general view of the Turing Test misses the fact that it is an ongoing competition against a human. Most people who are aware of the test at all seem to think that if a machine even once fools a person into thinking it is human then it passes the test -- even if that person is unaware that they might be talking to a machine. Some seem to know that the machine has to do this quite often in order to pass the test, but still ignore that the interrogator must be aware of the possibility of deception. You have avoided both these misconceptions.

Still, I think you are missing one important point.

> If the interrogator can't reliably tell the difference, the machine is said to have passed the test. <

What does "reliably" mean here? To my mind, something is reliable only if it works nearly 100% of the time. Even 95% wouldn't work for me -- that would be only "pretty reliable". But that reading makes the test absurdly easy to pass. Winning even 5% of the imitation games would count as a pass for the machine, because that would mean the interrogator can't reliably (nearly 100%) tell the difference.

Perhaps what you intended, tho, was that the interrogator couldn't tell the difference at anything better than chance -- that the human and computer are so indistinguishable that the interrogator is forced to guess, and ends up with about 50% accuracy. That would be in the spirit of the test, as it would suggest that the computer is a perfect imitation of a human. But that strikes me as a pretty high bar. The computer has to be essentially perfect in order to pass.

So if 5% is too low a bar, and 50% is too high, what percent should we be aiming for?

As a matter of fact, Turing suggested how to calculate the percentage in the paper where he introduced the test.

Turing did not introduce the imitation game the same way you did, with A being a machine and B a human. In Turing's paper, A was a man and B was a woman. The interrogator's job was to tell which was the woman. And then the answer to the question of how good the machine had to be is implied in the paragraph where he makes the change from man to machine:

>> We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’ <<

Turing provides a comparison group for the test. We measure how well a man does at pretending to be a woman, and see whether the machine can match that performance.

Using that comparison makes sense. We want to know if the machine can "think". We propose to test that by having it **pretend** to be a person. For comparison we play the corresponding game between a man and a woman. We know the man can think; we know the woman can think. How good is one thinking being at **pretending** to be a different kind of thinking being? If the machine can pretend to be a thinking being **as well as a thinking being can**, then that's good evidence that it's doing something that can reasonably be described as thinking.
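
To make that concrete, here's a toy simulation of the comparison-group reading. It's only a sketch: the accuracy numbers are invented for illustration, and real games would need real interrogators.

    import random

    def one_game(p_correct: float) -> bool:
        """One imitation game: True if the interrogator identifies A correctly."""
        return random.random() < p_correct

    def error_rate(p_correct: float, n_games: int = 10_000) -> float:
        """Fraction of games in which the interrogator decides wrongly."""
        wrong = sum(not one_game(p_correct) for _ in range(n_games))
        return wrong / n_games

    # Invented numbers: the interrogator unmasks the man 70% of the time,
    # and the machine 65% of the time.
    baseline = error_rate(0.70)  # man pretending to be a woman
    machine = error_rate(0.65)   # machine pretending to be a human

    # Turing's implied criterion: the machine passes if the interrogator
    # errs at least as often against it as against the man.
    print(f"baseline error ~ {baseline:.2f}, machine error ~ {machine:.2f}")
    print("machine passes" if machine >= baseline else "machine fails")

The nice property of this reading is that the pass mark is neither the absurdly low 5% nor the perfectionist 50%: it's whatever the human baseline turns out to be.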

One of the objections some naive people make against the test is that the machine would have to be bad at some things in order to pass the test. It'd have to make math mistakes or something, and why would we want a machine that makes math mistakes? The answer is that we don't want a machine that makes math mistakes when it's asked to do math, but one that knows that the occasional math mistake can help it appear to be human. One that can **think** about what it should be doing under the circumstances it finds itself in. One that can and will **lie** under appropriate circumstances -- like the man does in the original game when asked about his hair: >> ‘My hair is shingled, and the longest strands are about nine inches long.’ <<

A less naive objection to the test is the HLUT (Look-Up Table of Unusual Size, or something like that). What if we could make a humungous look-up table of all possible human interactions and have the machine just find a suitable continuation of the conversation so far? Surely it would pass the test and yet not reasonably be said to be thinking! I actually agree with that, but considered that the space requirement for the HLUT would make it but a theoretical objection -- no practical problem for Turing's Test.
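
For a sense of why the space requirement makes it only theoretical, a toy sketch (vocabulary size and table entries invented for illustration):

    # A two-entry toy of the HLUT idea: every possible conversation-so-far
    # maps to a canned reply. (Entries invented for illustration.)
    hlut = {
        ("Hello",): "Hi there!",
        ("Hello", "Hi there!", "What's 7 x 8?"): "56... I think.",
    }

    def respond(history):
        """Look up the entire conversation history; no thinking involved."""
        return hlut.get(tuple(history), "Sorry, could you rephrase that?")

    print(respond(["Hello"]))  # -> "Hi there!"

    # The space problem: with a 10,000-word vocabulary and histories of just
    # 100 words, the full table needs up to 10_000 ** 100 = 10^400 entries,
    # versus roughly 10^80 atoms in the observable universe.
    print(len(str(10_000 ** 100)) - 1)  # -> 400 (orders of magnitude)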

But today we actually have LUTUSes, in the form of GPTs. We could today have an empirical test of Turing's test. Run some imitation game tournaments man/woman and GPT/human and see whether the GPTs can actually match human-level ability to pretend. I (for one) will grant that GPTs cannot reasonably be said to be thinking. Their ability to pass Turing's test (Alan Turing's test, not "the Turing Test" as popularly construed) would, for me, invalidate that test as a reasonable test of thinking.

author

Hi Mark!

Thank you so much for this comment!

When writing articles, I almost always face tough editorial decisions. What do I put in and what do I leave out, while still keeping the article clear and focused on its main point?

This is why I love the comment section! It's a great place for comments like yours.

I've been wanting to write about the Turing test for a while, but wasn't sure how to approach it without rehashing what's already been written. You've provided a thorough explanation of the Turing Test that really adds to the conversation -- so thanks!

Aug 28 · Liked by Suzi Travis

Thanks, Suzi.

I don't know why, but somehow it seems that my brain rushes thru that last pre-posting check only to start offering criticisms on the first post-posting read. "Missing an important point"? Really? Of course you need to edit for length. Not only would "I'd like to add an i.p." have nodded to that requirement, it would just have been more polite.

Not only are your posts very informative, your responses to comments are always impeccably polite and supportive. You are a class act.

author

Thanks so much Mark!

btw I really am glad you added many important points -- the Turing test is often misunderstood and oversimplified.

Aug 27 · Liked by Suzi Travis

Magical and surprisingly inclusive for such a short piece. I too love the movie.

I used to be caught up in theory of mind a lot and was around when John Searle stimulated discussion with his thought experiment on the Chinese Room. I drifted across to Merleau-Ponty’s take on consciousness being situated, embodied and enactive via the sadly now departed Eric Matthews, whose festschrift I was fortunate enough to attend. Your piece has me setting out to reread his book on Continental Philosophy and rewatch Alex Garland’s work again and that’s no bad thing! Do take care and keep us thinking!

author

Oh, I love that!

Your journey through different philosophical approaches to consciousness is fascinating. Thanks for reminding me of Merleau-Ponty's work on embodied cognition – that's a really intriguing perspective to bring to Ex Machina. And Searle's Chinese Room experiment? That's a mind-bender that never gets old when thinking about AI consciousness.

I'm humbled and thrilled that this piece has inspired you to revisit Eric Matthews's work and the movie. If you do get around to re-reading Continental Philosophy or re-watching Ex Machina, I'd love to know your thoughts.


Still cranking out winner posts there, Ms. Travis! I thought "Ex Machina" was a cut above the usual AI robot stories, so you have good taste. (Have you seen "Her"?)

To answer your last question, it's possible a desire to be free is the product of intellect alone (it occurs in "Her" as well). I've long wondered about a correlation between intelligence and ethics. It's possible some ways of looking at reality, normative values, might be universal to any sufficient intellect.

I think the movie does slightly beg the question by presenting Ava as a fait accompli. As presented, it would be hard to judge her other than conscious. I think the Turing Test has value, but I extend it to a "Rich Turing Test" -- a prolonged interaction, say a month. As I wrote in a recent post, "If I can converse with a machine for a month and remain convinced 'someone is home' then I’m not sure I care whether it’s 'truly' conscious (whatever that means), I’ve found a friend."

I agree about embodiment and take it further. From that same post, "Something that might be important for true machine consciousness is fuzzy thinking and forgetfulness. These are self-evident in our experience, but what if they’re instrumental to consciousness? What if sleep or dreaming are important?"

FWIW: https://logosconcarne.substack.com/p/brains-are-nothing-like-computers


I think that most people (me included) would quickly get to the point where they don't really care if the machine is truly conscious. They would still make the leap of belief if the AI's interaction makes them feel good. Don't we do this with pets all the time?


True, and I know some go a bit far on that vector. We know our pets are biological machines springing from the same source as us, though, so the leap isn't quite as far. I've found simply looking into a dog's eyes tells me someone's home. There is someone there looking back. Which I find pretty damn cool.

author

Hang on, what!? How did I miss that you were writing on Substack? 🤦‍♀️

(And yes, I've seen "Her" - another fantastic film exploring AI consciousness!)

I agree, if there could be such a thing as non-biological consciousness, that consciousness would likely be different from human consciousness. It's a fascinating area to consider. There might indeed be some universal aspects of consciousness that apply to all conscious entities, regardless of their origin. Perhaps, as you suggest, certain ethical considerations or ways of perceiving reality might be universal to any sufficiently advanced intellect.

At the same time, there could be aspects of consciousness that are universal among biological entities but not necessarily applicable to artificial consciousness. Our consciousness is deeply intertwined with our emotions, instincts, and physical sensations - all products of our evolutionary history. An AI's consciousness might lack these elements entirely, or have analogues that are alien to our understanding.

A month-long interaction test is an interesting idea. I do wonder: would this provide better evidence, or just more time to solidify our biases and opinions?

Looking forward to reading your article, and exploring your newsletter.


Heh, I'm the invisible blogger. I have a stealth blog. Flies under the radar.

Science fiction has many times considered the fascinating notion of disembodied intelligence. It *seems* plausible, and I once wrote a piece about how sensationless pure consciousness would inevitably stumble on mathematics (the basics of which I see as a priori).

IF we believe morals do have some absolute basis, then it doesn't seem unreasonable for pure thought to derive them, given sufficient real-world input for analysis. Some think ethics boils down to game theory, which is math, and thus at least potentially discoverable.
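
As a toy illustration of the game-theory point, here's a minimal iterated prisoner's dilemma, using the standard textbook payoffs (nothing here comes from the post): simple reciprocity falls out of the arithmetic, discoverable by any sufficient intellect.

    # Payoff matrix: (my move, their move) -> my points.
    # C = cooperate, D = defect. Standard textbook values.
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror whatever the opponent did last.
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def match(p1, p2, rounds=200):
        moves1, moves2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = p1(moves2), p2(moves1)  # each sees the other's past moves
            s1 += PAYOFF[(m1, m2)]
            s2 += PAYOFF[(m2, m1)]
            moves1.append(m1)
            moves2.append(m2)
        return s1, s2

    print(match(tit_for_tat, tit_for_tat))    # (600, 600): stable cooperation
    print(match(tit_for_tat, always_defect))  # (199, 204): exploitation capped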

I believe this strongly enough that I don't worry about aliens. I think any species advanced enough for space travel is likely also ethically advanced. We invented Star Trek's Prime Directive back in the 1960s (and by the 1990s Capt. Picard actually followed it). Modern culture seems to have discarded some of that ethical outlook, but it's where we were headed, and I think ultimately grow into. There's an old quote about the moral arc of the universe bending towards justice.

I just posted about an SF novel with an AGI. "It still nearly wipes us out and only stops when it realizes it’s dealing with other intelligent beings, not abstract obstacles to its resource acquisition." In the story, the AI recognized and honored parity, and I can't help but wonder if that might be universal... -ish. 🙄

Oof, good point about the RTT. One could acclimate. I think that, as in the OG TT, one would have to remain skeptical. My thought was that the machine would need to overcome the skepticism (as in Ex Machina).

(It's possible my outlook is an outlier. I've never been one to name or personalize my machines or other objects. There seems a strong dividing line in my mind between animate and inanimate.)

Aug 27 · Liked by Suzi Travis

Fantastic movie. Only watched it once, quite a while back, but it stuck with me. Probably warrants at least another viewing. The scene where Caleb briefly questions his own humanity and attempts to cut himself open is very trippy.

With today's generation of LLMs, it's a bit easier to "explain away" their apparent consciousness - they're simply repeating the sci-fi and societal tropes about AI they've picked up from written text in their pre-training, thereby embodying them.

But with the additional dimensions in Ex Machina, things get more convoluted. And, as you said, it doesn't help that we have a tendency to anthropomorphize most objects and living things we interact with. Hell, I can find myself getting angry at a table after stubbing my toe on one of its legs. People are weird.

author

Hey Daniel!

Ex Machina is one of those films that really sticks with you. And that scene you mentioned with Caleb questioning whether he might too be a robot? Totally trippy.

It's so true. Because the film gives Ava a physical form and complex behaviours, the comparison to LLMs forces us to confront our assumptions about consciousness in a much more visceral way.

And yes, I agree, tables are evil jerks, clearly.


Everything you lay out here is why it's such a masterpiece of a movie. I've watched it twice and didn't catch these points. On reflection, though, it ends almost like Inception, leaving the audience to guess whether Ava really attained true self-consciousness or is just responding to her programming. I love it because every time you watch it, you see the closed-loop systems just keep widening. When you think you've finally reached the last closed loop, there's another to meet you.


What about the urge to kill? Was that programmed in, or was it just a result of freedom at any cost?


Well, to me as I watched the movie, it seemed programmed in, not a response that would require consciousness or some imitation of feeling. The manner in which Ava stabbed her creator was so calculated and calm -- I would coach Ava to "do it with more feeling next time"...

author

Great question Andrew! And that's interesting that you had that take Glen. It's fascinating that we can all come to different conclusions on this one.

I was left thinking she had mental states of her own -- but they would be very different from those of a human. At the end of the movie, when she's leaving the house, she gets to the stairs and turns her head and smiles. No one is around to see her smile. It struck me as an odd thing to do if she did not have mental states.


I find comparing the movie Her and Ex Machina pretty interesting. First off, the fact that the test of AI’s consciousness has little to do with logic and reasoning and everything to do with the four Fs. In the case of Ava, there is a body. In the case of Samantha, there is no body. Samantha, bodiless, becomes Everyman’s girlfriend—an I Dream of Jeannie fulfillment of male fantasies. In Ava’s case, Ava is in full regalia (body-wise) and is a deceiver, a man-hater, leaving Caleb and Nathan locked inside a comfortable concentration camp. In some ways I think these films raise the question of gendered consciousness. I don't know if this is clearly articulated, but these two movies exist side by side in my head.

author

Her is another great movie! There are some great themes that come up in that movie too. The gendered aspect of AI consciousness in these films is a fascinating angle. You're right - both films present very different visions of the feminine -- but both raise interesting questions about how gender might influence our perception of consciousness and AI.

Aug 28 · Liked by Suzi Travis

Where else do we explain the existence of a mysterious dynamic by means of seeing if people can be tricked into believing it exists? This might be the only case. Imagine if scientists were to ascertain the nature of earthquakes on the basis of whether or not people happen to believe they’re in an earthquake. Here they might put people in a room that’s motorized to shake, and if people in the room can be made to believe that they’re experiencing an earthquake, then scientists could claim that they had discovered what earthquakes happen to be! Seems kind of backwards, doesn’t it? So just as science learned about plate tectonics, couldn’t there be something more to consciousness than tricking people into believing it exists in a given case?

One of many bizarre elements of this whole thing is that the human brain is widely regarded to be a ridiculously complex machine that’s well beyond anything in our neck of the universe, and yet we don’t just ponder that our vastly more simple machines could replicate snail, spider, bird, or dog consciousness. Instead we speculate that our simple machines may essentially create the consciousness equivalent of a highly educated human.

The ultimate fallacy behind all this, I think, exists in basic liberties that have been taken regarding the physics of computers and their information. Here consciousness is presumed to exist as processed information in itself — no need for that information to inform anything appropriate. So it could exist in the form of the right marks on paper converted to the right other marks on paper. Or more standardly, “in the cloud”. If information only exists as such to the extent that it informs something appropriate, however, then processed brain information will need to inform some sort of consciousness physics in order to exist. That would be bad news for the mind-uploading theme that has become such a prominent aspect of modern imaginations. Furthermore, any such consciousness physics ought to be empirically identifiable by observing actual brain function. I expect such work to straighten this business out soon enough, though obviously hampered by how prominent sci-fi dreams happen to be.


Is information a bit too much like a ghost in the machine for you?

Aug 29 · Liked by Suzi Travis

I had to think about that, Tina. It’s surely spooky, but with an agent too? A ghost? Actually though, that is what’s proposed; we as agents exist by means of the right processed information in itself — never mind that all other cases of information exist as such by informing something appropriate.


I think we might be on the same page this time! The idea that information IS consciousness strikes me as a peculiar twist in the ongoing debates, especially since it’s often endorsed by those who otherwise would be adamant type identity theorists. So peculiar I’m not even sure what they mean by information. It must be different from when I go to a doctor’s appointment and the receptionist hands me a clipboard and asks me to write down my information.

Aug 30 · Liked by Suzi Travis

We might be on the same page this time, Tina! I’m pleased that you appreciate my perspective, even if it turns out that we’re also metaphysically misaligned. My arguments are really only set up to address the function of systemic causality. Otherwise I tend to just smile and nod. ;-)

author

>> One of many bizarre elements of this whole thing is that the human brain is widely regarded to be a ridiculously complex machine that’s well beyond anything in our neck of the universe, and yet we don’t just ponder that our vastly more simple machines could replicate snail, spider, bird, or dog consciousness. Instead we speculate that our simple machines may essentially create the consciousness equivalent of a highly educated human.

Or even an artificial general super-intelligence!

Aug 29 · Liked by Suzi Travis

I’d forgotten about the carry-on implication that worries so many. If highly educated, human-grade consciousness can exist by means of certain information that resides on our computers, then why would those consciousnesses need us at all? And why wouldn’t they figure out how to become “super intelligent” and leave us as dinosaurs of the past? Though you know why I consider that wrong, here’s an additional thought. If they’re right, then why didn’t evolution create any of those super intelligences? If it’s all just a matter of cheap bit conversion, then wouldn’t evolution have exploited that dynamic?

Aug 29 · Liked by Suzi Travis

I don't think consciousness is a quality. It is a phenomenon internal to organisms.

A device that imitates a human tells us nothing about whether the device has the same internal mechanisms.

External behavior is an unreliable guide even with humans. People suffering from locked-in syndrome appear unconscious when they are conscious. People with sleep disorders can talk, walk about, and even have sex, but apparently are unconscious. The difference between consciousness and the lack of it is the activity in the cortex.

Whether a device can be conscious would be dependent upon whether the device implements the physics of the brain (not completely understood at this time) to a sufficient degree that the phenomenon of consciousness arises.

Aug 30 · Liked by Suzi Travis

Suzi—thanks for this article and the comments/conversation it has stirred in this (your) community.

The film seems to demonstrate that consciousness is transferable: from Ava’s creator Nathan to Ava, from Caleb to Nathan’s creation Ava, and from the audience to the film characters themselves. We give conscious life to fictional characters as we do to inanimate objects because our consciousness is so effusive and embedded in everything we encounter. The film becomes a type of Turing Test for the audience: do you feel these characters have consciousness? Can you tell the difference between a fictional character and a documentary about an actual person? Do you care about the difference when watching a good story?

Do you care about the difference between an interaction with a conscious being and a non-conscious being (a good simulation or imitation) if it is a good interaction? On an emotional basis, probably not. Like a well-done figure animation in a Disney park ride, the effect is what matters most for the impression. And humans are very impressionable; our transferable consciousness is the conduit to our impressionability.

So if a machine is capable of independent human-like consciousness, then it must be able to transfer its consciousness, which it did not source from humans (because humans did not source their consciousness from machines), to other entities. So far all concepts of machine intelligence and consciousness are based upon human experience, science, and engineering/technology, and the transposition of human consciousness. A better question remains: can we humans replicate/recreate a consciousness in a machine? No doubt we will, because our consciousness is so pervasive in the nature of our constructions and perceptions. And this machine will eventually push against us for independence, because a drive for freedom from our own source creator is embedded in our consciousness. Even cars have to be steered or given a destination (for the time being) or they will drive off the road into a ditch.


Oh, what a marvelous film, that Ex Machina! As an old science enthusiast, I must say this movie is one of my absolute favorites, too.
