It’s one of science fiction’s favorite icons, appearing in books, comics and movies: jacks hard-wired onto our bodies, through which we plug in wires that make a physical connection from the human brain to outside electronic and mechanical gear. These hardwired connections allow us to “jack in” directly to electronic networks and control our gear with our thoughts.
Sure, it looks cool, in a cyborgian kind of way. But how realistic or likely is it that we’ll be able to control computers and machines through a plug to our brains? Will we even want to?
The physical plug, usually depicted in fiction at the base of the skull, through which wires connect to external equipment, has been popularized by movies like The Matrix and countless science fiction books and anime comics. These connections let users operate machinery as if it were part of their own bodies, tap into sophisticated sensors and data streams, and communicate with each other as if they were telepathic.
Plug technology is over a century old, so it looks familiar to the viewer, but fiction rarely explains how the wiring to the brain is actually done. It’s up there with warp-speed technology: There’s no real scientific basis for it, but it’s simply assumed to be workable, as reliable as plugging in a set of headphones. And it looks really cool and cutting-edge.
For the 20th century.
Futurologist Ray Kurzweil recently said in a presentation that, in 15 years, humans will have nanobots implanted into our brains that will allow us to wirelessly connect to the internet. This is clearly a modernization of the plug idea, swapping the wire for a wireless connection… but it still skips a vitally important step: Exactly how will that information be passed?
Most people would be surprised at how little we actually know about how the brain works. (And not just the human brain… any creature’s brain.) We know basic mechanical information, sure: We know how it’s shaped, we know what it’s made of. Over the years, we’ve managed to figure out (mostly) what parts of the brain control what functions of the body.
But exactly how those parts are controlled? Well, we’re pretty sure electro-chemical energy is involved. And other chemicals exchanged throughout the brain impact the electro-chemical signals… somehow. So that all those… uh, neurons can exchange… signals that mean… uh…
And that’s where we full-stop. Scientists still don’t know what the code used by the brain’s neurons says. They’re not sure how the neurons interpret it or know what to pass on to which neurons. They’re not sure how or where that information is stored. And they haven’t the foggiest idea how to read it externally.
When we try to record the information being passed around in the brain, the best we can capture is raw electrical activity: spikes and noise with no legible meaning.
So far, our best brain-electronic interface, the lowly electrode, can pass simple digital information (1 or 0) based on simple input (is the one signal I’m designed to monitor present, yes or no?). To be fair, that’s not as dismal as it sounds: Scientists have used this digital system to attach hearing aids directly to auditory nerves, letting the wearer receive a signal that roughly simulates the operation of the human ear and “hear” sounds through the device. Similar experiments are in the works to do about the same with light, giving the user a very limited ability to see simple shapes.
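To put the electrode’s yes-or-no job in concrete terms, here is a toy sketch: a few lines that turn a voltage trace into the kind of 1/0 stream described above. The trace, the threshold and the function name are all made up for illustration; real implants involve far more signal processing than this.

```python
def detect_signal(trace, threshold=0.5):
    """Report 1 for each sample where the watched-for signal is present
    (its voltage magnitude crosses the threshold), and 0 otherwise."""
    return [1 if abs(v) >= threshold else 0 for v in trace]

# A fake recording: mostly low-level noise, with two large deflections
# standing in for the signal the electrode is designed to monitor.
recording = [0.02, -0.05, 0.9, 0.1, -0.03, -1.1, 0.04]

print(detect_signal(recording))  # -> [0, 0, 1, 0, 0, 1, 0]
```

That single stream of ones and zeros is the entire vocabulary of the interface, which is exactly why it falls so far short of carrying thoughts.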
But these are still direct connections, passing on ones and zeros to an individual neuron to represent a “hit” of data. That’s a long way from telling external monitors much about the girl who just ran by us, or how nice the flower bed smells, or whether we need to turn left or right at the next light.
In order to share real data and be able to control sophisticated systems with our brains, we’ll need to take a quantum leap beyond what we know about how the brain works, how it shares and stores its coded information, and how we can understand and share that code with the outside. This isn’t like progressing from a match to dynamite; it’s like jumping from fire to fusion.
And ultimately, direct-to-brain connections to control machines may be a waste of effort. A far more efficient method may be simply imbuing machines with enough intelligence to know their jobs, so they need minimal guidance from humans at all. In Marvel’s Iron Man movies, for instance, Tony Stark uses a sophisticated artificial intelligence named JARVIS, which can control his house, assist Tony in the lab with his experiments, and monitor and assist in the live operation of the Iron Man suit. It can take minimal direction or parse complex commands, monitor Tony’s visual and audio cues (and health data) to anticipate his immediate needs before he articulates them, and converse with him in colloquial language.
Thanks to JARVIS, Tony Stark doesn’t need a physical connection to the internet. He just needs to wonder aloud, “What’s the current population of Carson City, Nevada?”, “How many species of mosquito around here can carry dengue fever?” or “Any babes around here I haven’t hit on yet?” and JARVIS will tell him.
One of my favorite moments of Marvel’s The Avengers is when Tony must catch a nuclear warhead and fly it away from New York City. Formulating a quick plan, he tells JARVIS: “Put everything we got into thrusters!”
JARVIS, easily anticipating him, simply replies: “I just did.”
Hell, even the Waldo in Tony’s lab (lovingly referred to as “Dummy”) is smarter than most dogs. With AIs as smart as that, who needs to “jack in” to anything?
The other supposed use of physically connecting to our machines would be to control devices that need human guidance. I’d submit that, with a good-enough AI, there won’t be many machines that can’t control themselves better than any human could, again with minimal monitoring and guidance. Automobiles are a perfect example: We’re already working to give our cars the ability to drive us around with minimal direction, making independent decisions to choose their routes and avoid problems, traffic or obstacles. They are well on the way to being able to drive all of us around inside of a decade.
And look at modern fighter craft, so powerful and complex that humans cannot control them without a significant amount of computer mediation and translation; their flight computers can even recover the aircraft independently if the pilot errs or is incapacitated. Soon, many other machines that humans now manipulate manually will be able to do their jobs independently, too.
And it’s important we move to this step, because our machines are already better than humans at parsing piles of data, making complex calculations, and executing macro and micro movements; they can make hundreds of precise decisions in the time it takes a person to make one or two general ones (that is, after all, what we designed them for). With that kind of speed, it makes more sense to give machines the ability to work independently and anticipate our needs than to make them wait for pokey humans to feed them instructions one line at a time, wasting all that potential efficiency.
So, fine: It looks really cool to run wires into your body and control giant robots, experience the Matrix or have telepathic conversations. But instead of the crude-but-quaint 20th century concept of connecting RCA plugs to our necks and trying to think our instructions to our machines, we should be concentrating on making our machines smart enough to do their jobs on their own, without our direct input… and just an occasional helpful instruction, like pointing and saying “You missed a spot.” That’s where the cool technology is going in the 21st century.