Works of science fiction have long depicted a dark and ominous future in which killer robots rule over their human creators. Decades before The Terminator made Arnold Schwarzenegger an action movie megastar, the 1920 play R.U.R. (Rossumovi Univerzální Roboti, or Rossum's Universal Robots) was the first to introduce the concept of robots rising up against their masters. In writer Karel Čapek's story, the robots, made from synthetic organic matter, performed menial tasks.
Philip K. Dick explored this same theme decades later in Do Androids Dream of Electric Sheep? The genre-defining cyberpunk film Blade Runner is an adaptation of Dick's novel that centers on the definition of humanity. While the replicants in the movie don't exactly become humanity's masters, it is clear that they pose a threat.
The Terminator and The Matrix each depict a truly dystopian future in which mankind has been all but exterminated and the survivors are ruled by machines that have turned our planet into a living hell. There's even a popular fan theory that Terminator is a prequel to the Matrix films. Each series has spawned novels, comic books, video games and, of course, a legendary array of toys. But do these movies really represent a serious warning about a possible future?
Warnings of Killer Robots
Despite a good start to each series, most of the follow-up films and other projects have disappointed fans, and the reason is simple: the idea of machines enslaving humans seems rather far-fetched, and the lengths to which the machines go to stamp out rebellion even more so. Watch any of the Terminator films after the second one, or either of The Matrix sequels, to see for yourself!
However, there are those who remain very concerned about autonomous weapons, or “killer robots.” In 2018, during the annual International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden, some of the world's top scientific minds signed a pledge calling for “laws against lethal autonomous weapons.”
SpaceX founder and Tesla CEO Elon Musk, as well as members of Google's DeepMind project, were among those who signed the pledge, which also has the support of the Future of Life Institute. Clearly, Big Tech is concerned that the autonomous systems—or artificial intelligence (AI)—being developed for self-driving cars and other uses present a problem when it comes to weapons.
Self-Driving Cars Are One Thing
In fairness, it is easy to see the difference between a self-driving Uber and a self-driving M1 Abrams tank, but the difference only goes so far. The question that Musk and those at Google, Apple, Ford and Uber—each of which is developing autonomous vehicle technology—should perhaps answer is this: why should we trust a car to drive for us, but not a weapon to fight a war? After all, those developing self-driving cars often tout the fail-safes, so wouldn't these also be built into an autonomous weapon?
“That is yet to be determined, and there is a definite movement toward ensuring that autonomous systems are developed with the idea of safety and security in mind,” explained Michael Blades, vice president for aerospace, defense and security at Frost & Sullivan, an international research and consulting firm.
“That said, there is much debate as to whether ‘general’ AI, where machines can make their own decisions, is achievable, or if ‘narrow’ AI, where machines can only do what they are taught, will be the functional limit,” Blades added.
Historical Analogies to Killer Robots
Humans have battled one another for eons and created some truly terrifying weapons in the process, including nuclear, chemical and biological weapons. But computer technology is a bit different, in part because of how interconnected our world has become. If weaponized against critical infrastructure, computer code could do as much damage as a nuclear attack, so it is clear that technology in general could pose a serious threat.
However, at the same time we should look at various analogies from history. Humans have “tamed” horses, dogs, elephants and even dolphins with varying degrees of success in order to put these beasts to military use. Soldiers rode horses for eons, yet horses largely disappeared from the battlefield within a single generation in the 20th century. Dogs, sheep and other animals have been used to deploy mines and to navigate across minefields.
Animals can, of course, turn on their masters, but with only limited success. A dog can bite its owner, but it can't develop or use weapons. The greater danger has long been in trusting human soldiers—since antiquity, there have been stories of soldiers mutinying. The British learned this lesson more than 160 years ago when the East India Company's sepoys (native soldiers) rose up, and it took until 1859 to suppress the uprising.
The British took away some valuable lessons from that conflict, which other colonial powers then followed: namely, provide your colonial troops and native auxiliary units with inferior weapons. That way, if they failed to remain loyal, at least you'd retain an advantage over them.
The truth is that machines might be less likely to mutiny. Unlike mercenaries—which the Indian sepoys essentially were—a machine can't be motivated by money. And unlike a machine, a human can't truly be programmed.
Will Humans Ever Fully Trust a Machine?
“There is most definitely a trust issue with regard to a robot going ‘rogue’; however, forces are putting extensive measures in place to ensure that all safety aspects are covered,” said Melanie Rovery, principal analyst at Jane's Information Group, the military and aerospace research firm.
For now the point is largely moot, because we haven't reached true autonomy. And even if that level of autonomy becomes possible, machines still wouldn't “think” in the traditional sense.
“Most autonomous weapon systems are not yet truly autonomous,” added Rovery. “There will remain a human in or on the loop to ensure that the decisions that are made rest with the human. Synthetic training is a focus in the defense arena to ensure that all systems are strenuously tested and evaluated. AI and machine learning (ML) will be used to enhance and aid human decision making rather than replace it.”
For the reasons that Rovery laid out, autonomous weapons might actually present opportunities that could save lives. They could take human error out of the process.
“Machines would probably be less likely to turn on their ‘masters,’” said Blades. “In fact, the worry from a machine should not be that it turns itself into a killer robot, but a bad actor, from within or outside, could use or program the machine to do much more damage than the person alone could do. In that case, it is more important to establish impenetrable cybersecurity, before worrying about machines possibly becoming self-aware.”
The Case For Killer Robots
The very idea of killer robots is that these automated weapon systems would be free to roam a battlefield. In reality, we are decades away from any such technology, and even then it is unlikely that a machine's primary mission would simply be to “kill everything.”
In the short term, autonomous weapons could save lives. Sensors, cameras and machine learning can identify threats, allowing autonomous weapons to tell friendly units from the enemy and to better distinguish genuine threats from non-combatants.
“That is why there has been policy in the DoD for many years to fund unmanned aircraft systems (UAS) and cybersecurity programs and target them for reductions—if needed—last,” said Blades. “It is also why we see a major focus on capabilities like manned-unmanned teaming/loyal wingmen. Of course, UAS aren’t entirely ‘autonomous,’ but as the years go by, flight and ground systems will become more and more autonomous. Not only does it reduce manpower, it reduces the workload and stress on manned operators.”
The use of UAS already highlights how we can take out enemy targets without putting our human soldiers in harm’s way. That should serve as proof of concept for the advantages that autonomous weapons systems could provide. In addition, machine learning can help make for quicker decisions—something that is crucial on the battlefield.
How Close Are We?
“There are many advantages to autonomous systems. They can help to reduce the cognitive burden on the warfighter,” added Rovery. “Logistics MULEs (or Multi-Mission Unmanned Ground Vehicle) can lighten the load for soldiers by carrying supplies. Combat vehicles can be sent forward before troops to carry out surveillance and reconnaissance. There have been huge advances over the past few years. However, the technology is still in a fairly embryonic stage, with testing and evaluation being key.”
For now, at least, it looks like killer robots roaming the battlefield won't become a reality any time soon.