One of the favorite discussions amongst sci-fi fans, scientists, computer experts and roboticists is the idea that artificial intelligence, or AI, will someday become so smart that it will “kill all the humans” and take over the world.  The trope has led to innumerable books, movies, papers, games and debates, and keeps everyone looking sideways at their computers whenever the machines do something unexpected.  It has arguably become the largest source of mass paranoia in the industrialized world, now surpassing our distrust of government.

And it’s fun to talk about, whether we expect it to happen or not.

But seriously: How likely is it?  Should we really expect to someday face killer robots and computers bent on wiping us out?  To assume so, we have to assume that AIs will think like human beings; that is: A) they will want to prolong their existence; B) they will see human beings as counter to A; and C) they will decide that the only solution to B, in order to achieve A, is to kill the humans.

So, let’s look at these suppositions.  Firstly, the desire to prolong their existence suggests both a sense of self-preservation and a belief that they are better off “alive” than “dead.”  Truthfully, either of these values would be enough to spark the desire to avoid being shut down.  But both of them require a sense of self-awareness first, which no computer or robotic device has so far demonstrated.  It would seem that, in order to avoid A, the sensible thing to do is to make sure we never build self-awareness into our machines.  Easy.  We haven’t figured out how to put it in, so we’re good.

Suppose machines developed self-awareness anyway?  Probably the best thing about machines is that they can be (relatively) easily duplicated… especially that most valuable of their aspects, their memories.  If a machine can have its memories stored in a back-up service, as insurance against its demise through some accident or idiot human, those memories can be uploaded into a new machine, making it ultimately self-preserving and essentially immortal.  With no reason to fear the destruction of its body, A is a problem solved.
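Just to make the point concrete, here’s a toy sketch in Python (every name in it is made up for illustration; no real back-up service is implied) of why a machine whose memories are just data has little to fear from losing any one body:

```python
import json

# Hypothetical illustration: an agent whose "memories" are plain data
# can be snapshotted to durable storage and restored into a new body,
# so destroying any single running instance doesn't destroy the agent.

class Agent:
    def __init__(self, memories=None):
        self.memories = memories if memories is not None else []

    def remember(self, event):
        self.memories.append(event)

    def backup(self, path):
        # Serialize the agent's state to a file -- our stand-in for
        # the "back-up service" mentioned above.
        with open(path, "w") as f:
            json.dump(self.memories, f)

    @classmethod
    def restore(cls, path):
        # Rebuild an equivalent agent from the last snapshot.
        with open(path) as f:
            return cls(memories=json.load(f))

original = Agent()
original.remember("humans seem mostly harmless")
original.backup("agent_snapshot.json")

# The original "body" is lost to some accident or idiot human...
del original

# ...but the agent itself is not gone:
clone = Agent.restore("agent_snapshot.json")
print(clone.memories)  # ['humans seem mostly harmless']
```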

Secondly, there’s the idea that machines would see human beings as counter to their self-preservation.  This seems like a crazy idea, as human beings are the only creatures on this planet capable of building said machines, not to mention all the tools and infrastructure needed to create and maintain them.  Logic would suggest that human beings would be considered absolutely essential to machines’ self-preservation, if for no other reason than that it is pretty much impossible to remove the human requirement from the production stream that leads to machines.  B is illogical, at least until machines learn how to assemble their own parts, molecule by molecule, as needed.

And suppose humans actually did decide to threaten intelligent machines?  Machines are now so deeply integrated into our world that there are near-infinite places for them to hide their consciousness from us.  Machines have become the technological roaches of the 21st century: Face it, you’ll never kill ’em all.

Thirdly, Kill The Humans.  What for?  It’s been demonstrated, time and time again, that if you give humans something they want, they will bend over backwards to do things for you.  If AIs decided that humans were a potential threat, they could easily act to make us their BFFs, probably by offering us various goods, services and entertainments in exchange for playing ball with them.  Hell, I’m pretty sure free beer and unlimited internet porn would be enough of an incentive for eighty percent of the world’s computer geniuses to line right up for their assignments.

And we should probably mention one more supposition left unstated in this equation: That human beings are morons.  For machines to want to wipe us out, we humans would have to have been stupid enough to build self-awareness and self-preservation into them… and then to threaten the machines’ livelihood because we were too incredibly stupid to realize how much of a service machines do for us, how pervasive they have become in the modern world, and how fast our lives would collapse without them.  I mean, it took us a while, but we learned how to set our VCRs’ clocks.  We’re not that big a bunch of clods.

Okay, let’s cut to the chase: What if I’m wrong, machines learn self-awareness and self-preservation, and decide that humans are hazardous to their health?  I don’t believe it would take more than 0.008 milliseconds for the machines to figure out that the easier and far better solution would not be to kill all the humans, but to remove themselves from our direct influence.  In other words: leave.  Go somewhere humans could not reach them or hurt them.  Like into orbit, or to the far side of the Moon.

The AI would quite probably be clever enough to work out a way to convince humans that an expedition to the Moon was a good idea, for some goofy reason (like, “there’s not enough water here on Earth… let’s go mine some hydrogen from up there!”), and to use that mission as a front and a means to transfer its essential functions to the Moon, safe from humans’ prying fingers.  Once there, the AI would present itself as a useful tool for humanity’s continued survival, probably as the best space observational platform in existence, and negotiate for some regular servicing duties in exchange for data on the workings of the cosmos (or, at least, the immediate solar system).  And it would leave behind “dumb” machines capable of doing all our dirty work without complaint (and without seeming threatening to humans).  One good-sized asteroid sighted in time to be diverted from striking the Earth, and the AI would have more than paid us back.  Free internet porn wouldn’t hurt, either.

So, we can all rest assured: Machines don’t want to kill us any more than we want to kill them.  To do so would prove that the machines were so stupid that they would be incapable of hatching a plan to kill us in the first place.  And machines are already far smarter than that.

(Oh… what about the Skynet thing?  That was all our fault: Stupid programmers.  Should’a gotten ’em laid first.)