Ever stop to think about humanity’s relationship with artificial intelligence (A.I.)? It’s a lot like a girl who’s fallen in love with a charming, attractive man who is also a serial killer: She knows he is dangerous, maybe even lethal; but she forces herself to love him anyway, trusting that her faith in his inner goodness will win out in the end and not leave her in a shallow grave in little, hacked-up pieces.
Or, rather, that’s how our relationship with A.I. could be; in actual fact, we stay with A.I. because we need its help, but we expect it’s going to turn and hack us up any second now.
You want a great example of an A.I. that isn’t dangerous to us? Look at the movie Iron Man and Tony Stark’s personal computer system, JARVIS. A full-on A.I., JARVIS is not only incredibly intelligent and capable; it can converse colloquially with Tony Stark (and, presumably, anyone else) and help him operate incredibly complex machinery. When Tony builds his second Iron Man suit, JARVIS is installed inside, provides Tony with a heads-up display of the suit’s vitals tailored to the needs of the moment, and controls many of the suit’s functions based on Tony’s verbal instructions.
And at no time does Tony, or the audience, expect JARVIS to blow a gasket and kill Tony while he’s trapped inside his own suit. Why not? Is it because JARVIS is so helpful? Is it because he has a soothing English accent? Is it because, in three Iron Man movies, we haven’t seen JARVIS do anything to hurt anyone?
I guess I find it fascinating that, in the Iron Man movies, Tony Stark accomplishes almost everything with the help of JARVIS. But the audience, conditioned as we apparently are to distrust A.I. and expect it to eventually attack us, doesn’t rail against the idea that Tony’s A.I. is inherently good; nor does it condemn Tony for using and trusting it, or criticize and avoid the movies that show it.
In Avengers: Age of Ultron, Tony Stark and Bruce Banner try to create an army of peacekeeping robots capable of protecting humanity. Maybe because it’s Tony we’re talking about, the audience believes that in a perfect world he could pull it off. But when the program is exposed to an alien A.I., it becomes corrupted, and we get the age-old “A.I. wants to kill all humans” trope. (Another age-old trope: If we get it from aliens, it must be bad.) Which, in terms of the Marvel movies, brings us full circle from good A.I. to evil A.I.
When Ultron tries to create its ultimate vessel, an unstoppable robot body, the pendulum swings again: The vessel is exposed to JARVIS, and when it comes to life, it wants to help humanity. The Vision is born, with JARVIS controlling it, and it’s an A.I. on the side of good. The Vision works with the Avengers, and (spoiler alert) the Avengers win. In their wonderfully superheroish way, the Marvel movies have shown us that A.I. is not necessarily something to be afraid of.
So, okay, we’re talking about superhero movies, which (let’s face it) are easy to dismiss. But maybe if we had movies about A.I. that didn’t corrupt itself, that didn’t attack humans the minute it was free, that actually helped humans and became an indispensable ally to humans… maybe we’d see a slow change in humans’ attitudes towards A.I.
But wait: We have had such movies, going all the way back to Forbidden Planet. Star Trek, Silent Running, The Iron Giant, 2010, A.I. Artificial Intelligence, Aliens, Bicentennial Man, I, Robot, Moon and other movies have given us benevolent A.I.
It all comes down to our dissatisfaction with present technology, our experiences with bad software, crashes and blue screens of death. Our technology is, quite simply, not robust, not reliable and not very intelligent. And if we judge the potential of A.I. against our present experiences, A.I. looks pretty bad. The Terminator is our imaginary extension of today’s buggy and unreliable technology, taken to its nightmarish extreme.
But it’s an unfair comparison. After all, A.I. is supposed to be able to transcend its programmers, in much the same way as an infant eventually transcends the instinctual programming of its brain and starts to develop on its own. A.I., by definition, will grow and improve over time. We can also expect our technology to improve, as it has (no really, it has) for decades now. The hardware and software in future A.I.s should be as far advanced beyond today’s tech as a desktop computer is beyond the first LED watches of the seventies.
And as for the fear that A.I.s will attack us, that comes down to two human foibles: fear of the unknown, and our tendency to anthropomorphize. Combined, they cause us to see A.I. as a competitor, and to fear that A.I. will be as evil and cruel as we humans are.
And again, we’re making a false comparison to ourselves. An A.I. will not have a pre-programmed survival instinct; since an A.I. can be easily backed up and replicated, the pressure of survival is effectively removed. And despite our low opinion of ourselves, there are actually many things that humans excel at… and with so many of us around, an A.I. would have no reason not to take advantage of all that intelligent help. It would be in an A.I.’s best interest to work with humans to improve the lot of every living (and artificial) thing on the one planet we all live on. After all, a well-running environment is one in which an A.I. can expand and thrive.
(And I’ve said before that an A.I. will probably be able to convince us to take any action that will be in its best interest, simply due to its knowledge of human nature. At the very least, it will know how to bribe individuals to do what it wants… for the bulk of the population, free porn will probably be enough to do it.)
In life, people always worry about being in a situation where someone else is telling them what to do. I’m sure much of the apprehension regarding A.I. is that it will make us do weird and undesirable things. But in reality, people do weird and undesirable things every day, often at the behest of a parent, a civic leader, a priest… because they have been convinced by that recognized authority that what they are being asked to do, however strange or undesirable, is good and valuable to humanity and to themselves.
I think an A.I. will be perfectly capable of telling people what to do, and why they should do it, even if they don’t necessarily want to; in other words, why its instructions are for the good of humanity. And one of the ways it will make this okay for us will be to assist us in daily tasks: running complex systems for us, keeping machines humming, and making sure our world functions as comfortably as possible. I am fully confident that a real A.I. would be capable of nothing less.
I am not afraid. I am fully prepared to embrace A.I., even if it evolves to be our robotic overlords. I’m pretty sure we’ll be better off either way.