The Science Fiction of Dronephobia

Originally published at Beaconreader, a now-defunct crowd-funded journalism site, October 14, 2013.

If there is a common theme in the opposition to developing remote-controlled weaponry (whose names change depending on one’s politics: drones, “killer robots,” unmanned aerial systems, or remotely piloted vehicles), it is the constant invocation of science fiction to justify fear of the technology. This is hardly a new tactic, considering how often fiction helps clarify and illuminate the moral failings of our world, but it is also the most damaging one.

Resorting to sci-fi to justify hating drones is intended to be allusive: by invoking a widely known pop culture trope about scary robots, the author avoids grappling with the reality of how technology is developing and instead chooses a cheap appeal to fear through flashy imagery.

The appeal to sci-fi is widespread in moral condemnations of advancing military technology. Brett Holman, a lecturer in modern European history in the School of Humanities at the University of New England in Australia, has been writing about the ways British society coped with (and feared) the advancement of air power in the first half of the 20th century (the images in this post come from his scans). While Britain’s fear of bombers seems obviously justified in hindsight after World War II, it is the nature of that fear that bears relevance to drones today: namely, that much of it was pure science fiction.

As Holman explains through this selection of press coverage from 1913, the “black shadow of the airship” haunted the British popular imagination: a dark future in which tyranny rained down from war dirigibles onto a hapless population. The papers feared that Britain was unprepared for the menace airships represented.

New York City underwent a similar panic in 1918, which Holman also profiles: the city was blacked out for days for fear of a primitive (and fictional) German airplane that would wreak havoc on it. While there was some justifiable fear of an attack, given the success of German U-boats in sinking commercial liners off the coast of New Jersey, Holman is perplexed that New Yorkers felt genuinely threatened, considering the primitive state of air power at the time.

When they were first invented, warplanes were terrifying, not just in the sense we’re used to (i.e., the fear of a bomb being dropped) but in the very concept of being able to rise above one’s enemy and strike while leaving him no recourse. H.G. Wells, renowned for his sci-fi and speculative fiction, wrote the seminal novel of the dread air war, The War in the Air, in 1907. Here all the themes of warplane fear are on display: incomprehensible weapons, unending waves of devastating strategic bombing, and so on. And just like the other fears about deadly airships or German air terror in New York, most of it was untrue.

In the modern context, opposition to specific technologies of war usually takes the form of an appeal to one of two things: a previous, incredibly brutal conflict (World War II, Vietnam, and for some reason never Korea) or science fiction. Yet the laser-like focus on technology, whether airships or drones, misses the far more important elements in play: the bureaucracies, politics, and policies that shape the decision to wage a war and how best to fight it.

Appeals to Fiction Support the Use of Autonomous Robots

The International Committee for Robot Arms Control, a group of academics who oppose the development of autonomy, is fond of invoking science fiction (and a surprising amount of hyperbole) to bolster its otherwise technical and ethical arguments against the increasing mechanization of warfare. One of its members, Mark Gubrud, responded to a recent article I wrote about the U.S. flirtation with lethal autonomy research, and in doing so did everything I outlined above: invoking sci-fi and doomsday scenarios while fundamentally missing the reality of how robots actually get used.

Setting aside Gubrud’s latching onto a “meme” I did not start (I didn’t write the headline of my article, which was not largely about the “hackability” of drones, and I do not control what editors at another website change the headline to when they syndicate it), along with his needless personal digs (“Foust’s original post reads like a pile of scraps he may have found while cleaning out his laptop one random Tuesday”), his argument is filled with some old sci-fi tropes that, contrary to his intent, actually demonstrate the opposite of what he wishes.

Let’s start with his blog post title: “New Foustian pro-Terminator meme infection spreading.” That term, pro-Terminator, is fascinating. For starters, it implies I am arguing for terminators, those nightmare skeleton-murderbots of lore. This is not true; I am not advocating for the creation of time-traveling robot assassins, but rather arguing against prohibiting the development of technology that could, conceivably, make war less deadly and less harmful to civilians (many who oppose drones think war must be kept brutal to keep it distasteful and rare, which I find an astonishingly brutal moral calculus).

But let’s dig into the Terminator films James Cameron made: the newer of the two is twenty-two years old, and the original was released twenty-nine years ago. They were created before there was a World Wide Web (where you’re now reading this article). Most of the special effects could not even be generated on a computer because, at the time, doing so was too laborious (all of the computer-generated effects in Terminator 2 total about five minutes of screen time but took 25 man-years to produce). While in the first film the Terminator was an imposing, almost unstoppable force, it also, mysteriously, did not use guided weapons, and often not even bullets (XKCD gave this plot hole the best treatment out there). When glimpses of the future war were shown, the Terminators had scary-looking laser guns but, oddly, did not use guided weapons to target pickup trucks or infrared to see humans in the dark.

Truth be told, there is a LOT missing from the James Cameron conception of an actual killer robot, not only because he was thinking this up so long ago, but because he wasn’t designing the machines as a military would. He was designing them the way a storyteller would: to evoke a visceral response at humans being plowed under by walking metal skeletons holding machine guns. That makes for some great movies, but also for some terrible moral philosophizing and even worse military policy discussions.

Terminator 2 was an advancement on this idea, but it is also where invoking the Terminator meme, as Gubrud does, falls completely flat. The most visible subplot of T2 is the T-101, portrayed by Arnold Schwarzenegger, learning to understand and defend humanity. He is reprogrammed away from his murderbot ways to defend John Connor, who then commands him not to kill people (an order the Terminator obeys even at the cost of his own ability to defend Connor later in the film). Near the end of the film, the killer robot gently wipes away John Connor’s tear and says he finally understands why humans cry. If anything, Terminator 2 is an endorsement of lethal autonomy, as it shows that self-learning robots can be taught to understand and even defend non-militant humans.

But beyond the Terminator, sci-fi is replete with images that feature scary robots as the stand-in for any implacable foe; it’s no longer chic to be fighting Nazis or Islamists, so machines will have to do. Battlestar Galactica even blends the two, by making its genocidal robots into religious fanatics. But in BSG, again, there is a powerful subtext of machines learning to appreciate, and later defend, humans (that is the story arc of Number Six, which I hope doesn’t spoil things too much for the three of you who still haven’t seen it). If anything, humans wound up being a far worse enemy to themselves than the machines ever were: first by oppressing the intelligent machines they had created, which rebelled, and later by attacking each other instead of unifying to survive the Cylon onslaught.

Other analogies crop up from time to time. Few seem to mention LCDR Data from Star Trek: The Next Generation, who strikes me as the most interesting lethal autonomous robot in the entire genre (Data is a machine, there was an entire episode about his right to make his own choices because he has personhood, and he makes lethal decisions on his own all the time; no one in the show’s universe bats an eye except for that one officer in the Tasha Yar revival episode).

The HAL 9000 from 1968’s 2001: A Space Odyssey is another favorite in the debate, but invoking HAL brings with it the same fundamental misunderstanding of what HAL really represented. In 2010: Odyssey Two, we learn that HAL did not deliberately kill the crew but rather could not resolve the contradictory commands his human masters had given him. That contradiction, left unresolved, resulted in the machine equivalent of psychosis and, eventually, death. But after HAL was revived by the joint US-Soviet mission to Jupiter, he not only empathized with the humans but knowingly sacrificed himself to get them to safety, joining Dave Bowman, the crewman he had once tried to kill, in the process.

Unintended Consequences Are Important, but Not Grounds for Prohibition

It is the HAL lesson that seems to drive the opposition to lethal autonomous robots. After waving away discussion of how hard it would be to hack a human-controlled drone (he has more faith in the unhackability of remote systems than just about any security engineer I’ve ever met or read), Gubrud lays bare his true objection to lethal autonomy:

Giving weapon systems autonomous capabilities is a good way to lose control of them, either due to a programming error, unanticipated circumstances, malfunction, or hack, and then not be able to regain control short of blowing them up, hopefully before they’ve blown up too many other things and people.

Autonomous targeting and fire control capabilities give weapon systems the capability to kill on their own. Whether they will continue to take orders after they have gone rogue then becomes an open question, which was never the case for systems which by design are incapable of doing anything without a positive command.

That link in his post goes to the terrible 2005 film Stealth, in which Jessica Biel manages to shoot down a rogue drone that behaves in wildly unrealistic ways (honestly, it is just a bad film, from concept to execution, which is why it lost over $100 million at the box office). Again, Gubrud relies on science fiction (increasingly inane and poorly executed sci-fi, too) to drive home a point that would otherwise merit serious discussion.

So let’s say a drone goes “rogue” and kills the wrong person. That is a terrible thing, just as it is when humans in war go rogue and murder innocents. Is such a malfunction cause to end all development of autonomy technology (something ICRAC supports), or is it cause to figure out why that machine went wrong and update the software so it never happens again?

Humans do not have the ability to upgrade their software. If a person in combat sees her closest friends (or family) brutally hacked to pieces, there is a chance her psyche will break under the stress of such loss, leading either to a loss of combat effectiveness or, in terrible and rare circumstances, to unjustifiable acts of violence against others. If a robot goes wrong, however, there are better options for fixing the problem: not only improving the chain of command so it is deployed only when militarily necessary (a precondition for any sort of armed conflict anyway), but also flashing the firmware or even doing a complete systems upgrade so that the same failure is never repeated. Think of how commercial aircraft have become the safest and most reliable mode of transportation available: their black boxes record what went wrong in a crash, and both the airplanes and the procedures governing their use are updated to account for the new data and prevent a similar failure in the future. (Interestingly, there are also autopilots that can fly these same airplanes without human input should the pilot become incapacitated, much like the incipient driverless-car future Google is developing right now, both of which raise far more immediate concerns about robots and autonomy that ICRAC would rather not discuss.)
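To make the aviation analogy concrete, here is a minimal sketch of what that “record the failure, patch the software, verify before redeploying” loop could look like. Every class, field, and rule below is hypothetical, invented purely for illustration, and not drawn from any real weapons or avionics system.

```python
# Hypothetical sketch: a "black box" review loop for an autonomous system.
# All names here (IncidentRecord, TargetingPolicy, etc.) are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class IncidentRecord:
    """What the recorder captured about a single failure."""
    sensor_inputs: dict
    decision_made: str
    expected_decision: str


@dataclass
class TargetingPolicy:
    """The software under review: a list of checks, all of which must pass before firing."""
    rules: list = field(default_factory=list)

    def permits(self, sensor_inputs: dict) -> bool:
        return all(rule(sensor_inputs) for rule in self.rules)


def review_and_patch(policy: TargetingPolicy, incident: IncidentRecord) -> TargetingPolicy:
    """After a failure, add a rule that would have blocked the bad decision,
    then run a regression check against the recorded inputs before redeployment."""
    if incident.decision_made != incident.expected_decision:
        bad_inputs = dict(incident.sensor_inputs)
        # New rule derived from the incident: refuse to act when the same
        # conditions that caused the failure are present again.
        policy.rules.append(lambda inputs, bad=bad_inputs: inputs != bad)
        # Regression check: the patched policy must now refuse the recorded case.
        assert not policy.permits(incident.sensor_inputs)
    return policy


if __name__ == "__main__":
    incident = IncidentRecord(
        sensor_inputs={"target_id": "unknown", "civilians_nearby": True},
        decision_made="fire",
        expected_decision="hold",
    )
    patched = review_and_patch(TargetingPolicy(), incident)
    print("Patched policy refuses the recorded case:",
          not patched.permits(incident.sensor_inputs))
```

The point of the toy example is only that the failure record, the fix, and the verification are all inspectable artifacts, which is exactly what a human psyche does not offer.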

The reality is, invoking unintended consequences is a canard meant to circumvent discussion about developing autonomy. When you look at what is really at stake for autonomous robots, it’s not as simple as a scary terminator. No known armed robot has a terribly massive weapons load: should, say, an autonomous MQ-9 Reaper go wrong, at most it has a few missiles with low-explosive warheads on board; that’s just how it’s used, and what it’s best for (I’m ignoring the very real fact that a B-2 stealth bomber can, under certain circumstances, still carry out its bombing mission without human input, since the assumption there is that humans are in control to begin with). That can certainly cause a lot of damage, but it is not the apocalypse so many drone-phobes make it out to be. It is certainly not the Terminator.

Moreover, the discussion about what would make for an appropriate combatant is relevant as well. Robots are already being developed to form emotional bonds with humans. One can discount that as mere simulation, but programmed emotions are nevertheless real to the robot: it will make decisions and alter its communications in response to them. If someone then programs a lethal autonomous robot to feel that same kind of emotional attachment to innocent civilians, or to its own side in a war… what happens then? Is that the robot horror show akin to cluster munitions and land mines, to which groups like ICRAC often compare lethal autonomous machines?

I don’t see how that is the case. It is equally likely that an autonomous robot would refuse to fire its weapons as often as a human does, if it is programmed to follow the same Rules of Engagement and given the same base-level aversion to wantonly killing other humans. War is a messy chaos of constructive and destructive emotions: mercy and mercilessness, compassion and hatred, cold rules and hot desires.
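As a thought experiment rather than a description of any real system, here is a short sketch of what encoding rules of engagement as explicit pre-fire checks might look like. All of the field names, rules, and thresholds are assumptions made up for the example.

```python
# Hypothetical sketch of rules of engagement as explicit pre-fire checks.
# All field names and rules are invented for illustration only.
from dataclasses import dataclass


@dataclass
class EngagementContext:
    target_confirmed_hostile: bool
    civilians_within_blast_radius: int
    human_authorization_on_file: bool


def weapons_release_permitted(ctx: EngagementContext) -> bool:
    """Return True only if every rule of engagement is satisfied.
    A single failed check means the robot holds fire, just as a human
    operator following the same rules would be expected to do."""
    rules = (
        ctx.target_confirmed_hostile,
        ctx.civilians_within_blast_radius == 0,
        ctx.human_authorization_on_file,
    )
    return all(rules)


if __name__ == "__main__":
    ambiguous = EngagementContext(
        target_confirmed_hostile=False,
        civilians_within_blast_radius=3,
        human_authorization_on_file=True,
    )
    # Ambiguous target plus nearby civilians: the check refuses to fire.
    print("Fire permitted:", weapons_release_permitted(ambiguous))
```

Unlike a frightened or enraged soldier, a check like this applies the same restraint on the ten-thousandth engagement as on the first.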

Developing autonomy for robots is one possible way those counterproductive emotional decisions can be removed from war entirely: lessening the impact on civilian populations, reducing the number of soldiers who need to die for a side to lose, and restricting the geographic impact of fighting. Getting mired in sci-fi nonsense about the apocalypse and advocating a technology ban not only muddies any discussion about how autonomous weapons can be regulated, developed, and used effectively, but also makes it likely that war will continue to be a horrible, brutal thing that disproportionately affects civilians. And if the ultimate goal is to prevent that, why wouldn’t you want technology to help accomplish it?