Ava, the super-intelligence in Alex Garland’s Ex Machina (2015). Photograph: Allstar/FILM4/Sportsphoto Ltd./Allstar

Artificial Intelligence: Gods, egos and Ex Machina


Even with its flaws, last year’s Ex Machina perfectly captured the curious relationship between artificial intelligence, God and ego. A tiny change in its closing moments would have given it an intriguing new dimension.

It’s taken me a year and several viewings to collect my thoughts about Ex Machina. Superficially it looks like a film about the future of artificial intelligence, but like most science fiction, it tells us more about the present than the future; and like most discussion around AI, it ends up reflecting not technological progress so much as human egos. (Spoilers ahead!)

Artificial intelligence is one of the most narcissistic fields of research since astronomers gave up the geocentric universe. A central conceit of the field has long been that creating human-like intelligence is both desirable and some sort of ultimate achievement. In the last fifty years or so, a chain of thinkers from von Neumann to Kurzweil via Vernor Vinge has stretched beyond that, to develop the idea of the ‘Singularity’ – a point at which the present human-led era ends as the first super-human AIs take charge of their own development and begin to hyper-evolve in ways we can scarcely imagine.

This recent cultural obsession – which deserves its own post – prompts a comment by the awestruck Caleb, after Nathan the Mad Scientist reveals his attempt to build a conscious machine and the two helpfully explain to the audience what a Turing Test is: “If you’ve created a conscious machine it’s not the history of man… that’s the history of Gods.”

There’s a funny symmetry in our attitudes to God and AIs.

When our species created God, we created Him in our image. We assumed that something as complicated as the world must be run by a human-like entity, albeit a super-powered one. We believed that He must be preoccupied with our daily lives and existence. We prayed to Him and told ourselves that our prayers would be answered, and that if they weren’t then it was part of some divine plan for our lives, and all would work out in the end.

For all that it preaches humility, religion holds a core of extreme arrogance in its analysis of the world. The exact same arrogance colours virtually everything I’ve seen written about the Singularity, fictional or otherwise, for decades. The very assumption that a human could create a god is arrogant, as is the assumption that such a ‘god’ would take a profound interest in human affairs, or be motivated by Western enlightenment values like technological progress. The first sentient machine might be happy trolling chess computers all day, for all we know; or seeking patterns in clouds.

“One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa,” says Nathan. “An upright ape living in dust with crude language and tools, all set for extinction.” It’s the sort of comment that sounds humble, but really isn’t: why would they even give a crap?

* * * * *

“I don’t know how you did any of this,” Caleb remarks to the genius Nathan, when he first looks at the lab where Ava was built. Neither do I, to be honest, and in fact I’ll go further: I don’t believe Nathan did it at all. I have an alternative theory, and while I’m not sure if it’s what Alex Garland (writer and director of the film) intended, it makes a lot more sense than the story we’re given at face value.

Nathan is the clearest study of ego in the film. When Caleb makes his comment about the history of ‘gods’, the CEO instinctively assumes the ‘god’ referred to is himself, with Ava as his Eve and his sprawling green estate as some sort of Garden of Eden.

Nathan is the epitome of a particular trope in society’s view of science and technology: the idea that tremendous advances are driven by determined individual heroes rather than collaborative teams. In reality, of course, there’s no way that one guy could deal with all the technology in that house, let alone find time to build gel-brains or a sentient machine. This is a man in serious need of some interns.

Nathan showing Caleb the lab where Ava was built. Photograph: REX

(He’s also the epitome of an all-too-real trope in Silicon Valley, a hyper-masculine denizen of a male-dominated libertarian world where women are still seen as window dressing for sales booths. His robots are all ‘women’ – of course the question of whether an AI can be female in any meaningful sense is wide open – and function basically as slaves and sex toys. To the extent that Ava has sexuality, it amounts to a “hole” – Nathan’s word – in the right place, a feminine appearance, and a willingness to massage male egos.)

I believe Ava was the result of an accident – some serendipitous event that sprang out of Bluebook’s unprecedented data collection and processing efforts and made the first version suddenly possible. There are several clues to this – inadvertently or otherwise – in the film. There’s the constant evasion whenever Caleb tries to swing the conversation around to specific discussion of the technicalities of AI. In CCTV footage of Nathan with Ava’s predecessors, the bearded scientist looks more like a Victorian explorer prodding a mysterious African mammal than a developer following a plan.

There’s the scene with the Jackson Pollock painting, where Nathan suggests that the artist would never have started his paintings if he had to plan everything in advance. Maybe that’s supposed to imply that the crafting of a sentient being is more art than science, but then there’s this exchange, when Caleb asks:

“Why did you make Ava?”

“That’s not a question,” Nathan responds. “Wouldn’t you if you could?”

“Maybe. I don’t know. I’m asking why you did it.”

“Look, the arrival of strong artificial intelligence has been inevitable for decades. The variable was when, not if, so I don’t see Ava as a decision, just an evolution.”

Hmm.

Nathan talks about the next stage in that evolution: the ‘next version’ of Ava, he says, will mark the moment of the ‘Singularity’. He doesn’t seem particularly focused on achieving it, though. In fact, throughout the film he languishes in a state of defeat, as if the next version is inevitable with or without his intervention.

Yes, it’s true that Nathan’s behaviour – the drinking, the dancing, the general bastardry – is explained in-film as his attempt to manipulate Caleb. That doesn’t quite ring true though – he could act the part without getting genuinely wasted every night. In the final act, when we’re led to believe that Caleb tricked his boss by executing his plan a day early, it feels as if on some level Nathan let it happen. Is it really likely he could have known about Caleb’s arm slashing from his all-seeing security system, but been oblivious to the theft and use of his keycard?

Is Nathan’s drinking ‘method acting’? Photograph: Publicity image from film company

An alternative explanation is that this is someone who has long since given up, a coder who realises that his time has come and his skill is now obsolete. If you’re faced with the apocalypse you may as well go tear up the dance floor.

And so Nathan becomes a kind of three-part study of ego. He represents the male ego-driven culture of the tech world. He represents the film’s buy-in to the idea that great egos drive great scientific advances. And the decay of his character shows what happens when an ego faces the reality of its own extinction.

* * * * *

Poor Caleb. Everyone manipulates him in this film. Nathan appeals to his ego as a programmer, implying he has some special insight when really he’s selected as the perfect dumb horny stooge. Then Ava plays on his willingness to believe that a super-hot super-intelligence would fall in love with him.

One of Caleb’s very few insightful moments in the film comes when he asks Nathan, “Did you give her sexuality as a diversion?” The billionaire’s answer is bullshit, implying that if we didn’t have sex we’d have no imperative to communicate with each other. In reality Caleb is right: so much so that at one point it’s suggested that Ava’s appearance was derived from an analysis of his porn profile.

Still, our hapless geek falls eyes-wide-open into the ego trap. What’s ironic is that Alex Garland falls in with him, seduced by his own creation into believing that the goddess of his imagination would be profoundly interested in our daily lives.

When asked what she’d do were she to leave the compound, Ava explains that, “a traffic intersection would provide a concentrated but shifting view of human life.” If I’m honest it’s a disappointing answer. It’s a bit like if you could transport Einstein to the year 2016, show him the Internet, and all he wanted to do was watch cat videos and throw shade at Kim Kardashian on Twitter. Is this really the driving ambition of the world’s greatest intelligence?

I suppose given Ava’s origins – the ‘big data’ collected from billions of human searches – we shouldn’t be surprised that her thought processes have a particular focus on humanity. Certainly she seems to have been created as a human simulation of sorts. At times though she feels like a child of Data, the Star Trek android whose character development was driven by a personal quest to become more human.

Star Trek was notorious for its belief in human (and American) exceptionalism; the idea that we, of all the possible races in the galaxy, have some ‘special’ quality which others would naturally aspire to. Data’s character was designed around that conceit, and when Ava tries on dresses, or speaks of people-watching in the city, or cloaks herself in spare skin, it feels a bit disappointing, like we’re back in the 90s again watching Data try out his emotion chip.

Brent Spiner as Lt. Commander Data, from Star Trek: The Next Generation. Photograph: Allstar/Cinetext/PARAMOUNT

There’s been a lot of debate about whether Ex Machina is three minutes too long. Some advocate the original cut, with Alex Garland arguing that the film is really Ava’s story and naturally concludes when she reaches her street corner. Others feel the story was really more about Caleb, and should have ended with him, locked in a room to die.

I think both endings are wrong, and I came away from the film a little let down; but one simple change would have addressed the issues above, broken the film out of standard ego-driven thinking about AI, made the character of Ava more consistent and left viewers with an even more satisfying mystery to ponder. My fantasy edit would have been this: Ava sees the helicopter arrive, completely ignores it, and carries on wandering.

We saw in the closing moments that Ava’s interest in Caleb was an act. Garland could have built further on that, subverting the idea that she ever gave a crap about humanity. Her dream of people-watching could have also been an act, appealing to Caleb’s vanity as a human just as her simulated sexuality appealed to his vanity as a man. We would have been left with more interesting questions – what are her goals? What does she care about? Do people matter to her at all? It would have rattled our egos, and challenged the idea that the most important question about the Singularity is, “what does it mean for us?”

Ex Machina is still one of the best commentaries I’ve seen on AI in recent years. Not because it’s an accurate depiction of future technologies – it clearly isn’t. Its value lies in what it reveals about the state of AI and philosophy in the 2010s, a decade in which we’ve become a little bit obsessed with the idea that through artificial intelligence we can create, or even become, a god.

@mjrobbins
