The Science Fiction of Dronephobia

Originally published at Beaconreader.com, October 14, 2013.

If there's a common theme to the opposition to developing remote-controlled weaponry — whose names change depending on one's politics: drones, "killer robots," unmanned aerial systems, or remotely piloted vehicles — it is the constant invocation of science fiction to justify fear of technology. This is hardly a new development, considering how often fiction helps clarify and illuminate the moral failings of our world, but it is also the most damaging form the opposition takes.

Resorting to sci-fi to justify hating drones is intended to be allusive: by invoking a widely known pop culture trope about scary robots, the author avoids grappling with the reality of how technology is developing and instead chooses a cheap appeal to fear through flashy imagery.

The appeal to sci-fi is widespread in moral condemnations of advancing military technology. Brett Holman, lecturer in modern European history in the School of Humanities at the University of New England in Australia, has been writing about the ways British society coped with (and feared) the advancement of air power in the first half of the 20th century (the images in this post come from his scans). While Britain's fear of bombers seems obviously justified after World War II, it is the nature of that fear that bears relevance to drones today — namely, that much of it was pure science fiction.

As Holman explains through this selection of press coverage from 1913, the "black shadow of the airship" haunted the British popular imagination — a dark future where tyranny rained down from war dirigibles on a hapless population. The papers feared that Britain was unprepared for the menace airships represented.

New York City underwent a similar panic in 1918. Holman profiles one such panic, when the city was blacked out for days in fear of a primitive (and fictional) German airplane that would wreak havoc on the city. While there was some justifiable fear of an attack, given the success of German U-boats in sinking some commercial liners off the coast of New Jersey, Holman is perplexed that New Yorkers felt genuinely threatened, given the primitive state of air power at the time.

When they were first invented, warplanes were terrifying — not just in the sense we're used to (i.e., the fear of a bomb being dropped) but in the very concept of being able to rise above one's enemy and strike without recourse. H.G. Wells, renowned for his sci-fi and speculative fiction, wrote the seminal novel of the dreaded air war, 1908's The War in the Air. Here all the themes of warplane fear are on display: the incomprehensible weapons, unending waves of devastating strategic bombing, and so on. And just like the other fears about deadly airships or German air-terror in New York, it was mostly untrue.

In the modern context, opposition to specific technologies of war usually takes the form of an appeal to one of two things — a previous, incredibly brutal conflict (World War II, Vietnam, never for some reason Korea), or science fiction. Yet the laser-like focus on technology, whether it's airships or drones, misses the far more important element in play — the bureaucracies, politics, and policies that make up the decision to wage a war and how best to fight it.

Appeals to Fiction Support the Use of Autonomous Robots

The International Committee for Robot Arms Control, a group of academics who oppose autonomy development, is fond of invoking science fiction (and surprising amounts of hyperbole) to justify its otherwise technological and ethical arguments against the increasing mechanization of robot warfare. One of its members, Mark Gubrud, responded to a recent article I wrote about the U.S. flirtation with lethal autonomy research, in which he does everything I outlined above: invoking sci-fi and doomsday, and fundamentally missing the reality of how robots actually get used.

Set aside Gubrud's latching onto a "meme" I did not start (I didn't write the headline of my article, which was not largely about the "hackability" of drones, and I do not control what editors at another website change the headline to when they syndicate it), and set aside the needless personal digs ("Foust's original post reads like a pile of scraps he may have found while cleaning out his laptop one random Tuesday"). What remains of Gubrud's argument is filled with old sci-fi tropes that, contrary to his intent, actually demonstrate the opposite of what he wishes.

Let's start with his blog post title: "New Foustian pro-Terminator meme infection spreading." That term, pro-Terminator, is fascinating. For starters, it implies I am arguing for terminators, those nightmare skeleton-murderbots of lore. This is not true; I am not advocating for the creation of time-traveling robot assassins, but rather arguing against prohibiting the development of technology that could, conceivably, make war less deadly and less harmful to civilians (many who oppose drones think war must be kept brutal to keep it distasteful and rare, which I find an astonishingly brutal moral calculus).

But let's dig into the Terminator films — they are, at their newest, twenty-two years old (the original Terminator film was released twenty-nine years ago). They were created before there was a World Wide Web (where you're now reading this article). Most of the special effects could not even be generated on a computer because doing so was, at the time, so laborious (all of the computer-generated effects in the film total five minutes of screen time but took 25 man-years to generate). While in the first film the Terminator was an imposing, almost unstoppable force, it also, mysteriously, did not use guided weapons or even bullets most of the time (XKCD gave this plot hole the best treatment out there). When glimpses of the future war were shown, the Terminators had scary-looking laser guns but, oddly, did not use guided weapons to target pickup trucks or infrared to see humans in the dark.

Truth be told, there is a LOT missing from the James Cameron conception of an actual killer robot — not only because he was thinking this up so long ago, but because he wasn't designing them as a military would. He was designing those machines the way a storyteller would: to evoke a visceral response at humans being plowed under by walking metal skeletons holding machine guns. It makes for some great movies, but also for some terrible moral philosophizing and even worse military policy discussions.

Terminator 2 was an advance on this idea, but this is where invoking the Terminator meme, as Gubrud does, falls completely flat. The most visible subplot of T2 was the T-101, portrayed by Arnold Schwarzenegger, learning to understand and defend humanity. He was reprogrammed away from his murderbot ways to defend John Connor, who then commands him not to kill people (an order the Terminator obeys even at the cost of his own ability to defend Connor later in the film). In the very last scene, the killer robot gently wipes away John Connor's tear and says he finally understands why humans cry. If anything, Terminator 2 is an endorsement of lethal autonomy, as it shows that self-learning robots can be taught to understand and even defend non-militant humans.

But more than the Terminator, sci-fi is replete with images that feature scary robots as the stand-in for any implacable foe — it's no longer chic to be fighting Nazis or Islamists, so machines will have to do. Battlestar Galactica even blends the last two, by making the genocidal robots into religious fanatics. But in BSG, again, there is this powerful subtext of machines learning to appreciate, and later defend, humans (that is the story arc of Number Six, which I hope doesn't spoil things too much for the three of you who still haven't seen it). If anything, humans wound up being a far worse enemy to themselves than the machines ever were — first by oppressing the intelligent machines they had created, which rebelled, and later by attacking each other instead of unifying to survive the Cylon onslaught.

Other analogies crop up from time to time. Few seem to mention LCDR Data from Star Trek: The Next Generation, who strikes me as the most interesting lethal autonomous robot in the entire genre (Data is a machine — there was an entire episode about his right to make his own choices, since he has personhood — and he makes lethal decisions on his own all the time, and no one in the show's universe bats an eye except for that one guy in the Tasha Yar revival episode).

The HAL 9000 from 1968's 2001: A Space Odyssey is another favorite in the debate — but invoking HAL also brings the same fundamental misunderstanding of what HAL really represented. In 2010: Odyssey Two, we learn that HAL did not deliberately kill the crew but rather could not resolve the contradictory commands his human masters had given him. That contradiction, left unresolved, resulted in the machine equivalent of psychosis and, eventually, death. But after HAL was revived by the joint US-Soviet mission to Jupiter, he not only empathized with the humans, he knowingly sacrificed himself to get them to safety, joining his last intended victim, Dave Bowman, in the process.

Unintended Consequences Are Important, but Not Grounds for Prohibition

It is the HAL lesson that seems to drive the opposition to lethal autonomous robots. After waving away discussions about how hard it would be to hack a human-controlled drone (he has more faith in the unhackability of remote systems than just about any other security engineer I've ever met or read), Gubrud lays bare his true objection to lethal autonomy:

Giving weapon systems autonomous capabilities is a good way to lose control of them, either due to a programming error, unanticipated circumstances, malfunction, or hack, and then not be able to regain control short of blowing them up, hopefully before they've blown up too many other things and people.
Autonomous targeting and fire control capabilities give weapon systems the capability to kill on their own. Whether they will continue to take orders after they have gone rogue then becomes an open question, which was never the case for systems which by design are incapable of doing anything without a positive command.

That link in his post goes to the terrible 2005 film Stealth, where Jessica Biel manages to shoot down a rogue drone that was doing things in unrealistic ways (honestly, it is just a terrible film, from concept to execution, which is why it lost over $100 million at the box office). Again, Gubrud relies on science fiction — increasingly inane and poorly executed sci-fi, too — to drive home a point that would otherwise merit serious discussion.

So let's say a drone goes "rogue" and kills the wrong person. That's a terrible thing — just as when humans in war go rogue and murder innocents. Is such a malfunction cause to end all development of autonomy technology (something ICRAC supports), or is it cause to figure out why that machine went wrong and update the software so it never happens again?

Humans do not have the ability to upgrade their software. If a person in combat sees her closest friends (or family) brutally hacked to pieces, there is a chance her psyche will break under the stress of such loss — leading either to a loss of combat effectiveness or, in terrible and rare circumstances, to unjustifiable acts of violence against others. If a robot goes wrong, however, there are better options for fixing the problem: not only improving the chain of command so it is deployed only when militarily necessary (a necessary precondition for any armed conflict anyway), but also flashing the firmware or even performing a complete systems upgrade so that the same failure is never repeated. Think of how commercial aircraft have become the most reliable and safest mode of transportation available: their black boxes record what went wrong in a crash, and both the airplanes and the procedures governing how they're used are updated to account for the new data and prevent a similar failure in the future. (Interestingly, there are also autopilots that can fly these same airplanes without human input should the pilot become incapacitated, much like the incipient driverless-car future Google is developing right now — which raises far more immediate concerns about robots and autonomy that ICRAC would rather not discuss.)
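To make the aviation analogy slightly more concrete, here is a minimal, hypothetical sketch (in Python) of the kind of decision logging that analogy implies. Every name in it is invented for illustration and describes no real system; the point is only that a machine's choices can be recorded, audited after a failure, and used to update the software so the same failure is not repeated.

```python
# A purely illustrative "black box" style decision log for an autonomous system.
# All names (DecisionRecord, DecisionLog, etc.) are invented for this sketch.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class DecisionRecord:
    timestamp: float      # when the decision was made
    sensor_summary: dict  # what the system believed it was observing
    action: str           # what it chose to do
    rule_fired: str       # which rule or model output drove the choice


@dataclass
class DecisionLog:
    records: list = field(default_factory=list)

    def record(self, sensor_summary: dict, action: str, rule_fired: str) -> None:
        # Append a trace of each decision, flight-data-recorder style.
        self.records.append(DecisionRecord(time.time(), sensor_summary, action, rule_fired))

    def dump(self, path: str) -> None:
        # Persist the log so a post-incident review can reconstruct what went wrong
        # and the software or procedures can be patched to prevent a repeat.
        with open(path, "w") as handle:
            json.dump([asdict(r) for r in self.records], handle, indent=2)
```

The particulars do not matter; the auditability does. Unlike a human combatant's memory under stress, a record like this can be replayed exactly, and the rules that produced a bad decision can be changed.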

The reality is, invoking unintended consequences is a canard meant to circumvent discussion about developing autonomy. When you look at what is really at stake for autonomous robots, it's not as simple as a scary terminator. No known armed robot has a terribly massive weapons load: should, say, an autonomous MQ-9 Reaper go wrong, at most it has a few missiles with low-explosive warheads on board — that's just how it's used, and what it's best for (I'm ignoring the very real fact that a B-2 stealth bomber can, under certain circumstances, still carry out its bombing mission without human input, since the assumption there is that humans are in control to begin with). That can certainly cause a lot of damage, but it is not the apocalypse so many drone-phobes paint it out to be. It is certainly not the Terminator.

Moreover, the discussion about what would make for an appropriate combatant is relevant as well. Robots are already being developed to form emotional bonds with humans. One can discount that as mere simulation, but programmed emotions are nevertheless real to the robot — it will make decisions and alter its communications in response to them. If someone then programs a lethal autonomous robot to feel that same kind of emotional attachment to innocent civilians, or to its own side in a war… what happens then? Is that the robot horror show akin to cluster munitions and land mines, to which groups like ICRAC often compare lethal autonomous machines?

I don't see how that is the case. It is just as plausible that an autonomous robot would refuse to fire its weapons as often as a human does, if it is programmed to follow the same Rules of Engagement and given the same base-level emotional aversion to wantonly killing other humans. War is a messy chaos of constructive and unconstructive emotions — both mercy and mercilessness, compassion and hatred, cold rules and hot desires.
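As a purely hypothetical illustration of what "programmed to follow the same Rules of Engagement" could look like, here is a minimal Python sketch in which refusal is the default and firing requires every condition to be affirmatively satisfied. The field names and conditions are invented for this example; no real rules of engagement or targeting system is described.

```python
# A hypothetical default-to-refuse engagement gate. All fields and thresholds
# are invented for illustration only.
from dataclasses import dataclass


@dataclass
class EngagementRequest:
    target_confirmed_hostile: bool  # positive identification under the applicable rules
    civilians_at_risk: int          # estimated noncombatants in the danger area
    inside_authorized_area: bool    # whether the target lies within the cleared engagement area


def may_engage(request: EngagementRequest, max_civilians_at_risk: int = 0) -> bool:
    # Refusal is the default: every condition must be affirmatively satisfied.
    if not request.target_confirmed_hostile:
        return False
    if request.civilians_at_risk > max_civilians_at_risk:
        return False
    if not request.inside_authorized_area:
        return False
    return True
```

The design choice that matters is the direction of the default: the machine holds fire unless everything checks out, which is exactly the "refuse to fire" behavior described above.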

Developing autonomy for robots is one possible way those counterproductive negative emotional decisions can be removed from war entirely — lessening the impact on civilian populations, reducing the number of soldiers who need to die for a side to lose, and restricting the geographic impact of fighting. Getting mired in sci-fi nonsense about the apocalypse and advocating a technology ban not only muddies any discussion about how autonomous weapons can be regulated, developed, and used effectively; it also makes it likely that war will continue to be a horrible, brutal thing that disproportionately affects civilians. And if the ultimate goal is to prevent that from happening, why wouldn't you want technology to accomplish it?