Technology Bans Don’t Work

[Image: Data from Star Trek, crying]

Yesterday the UN Human Rights Council debated whether to preemptively ban the development of lethal autonomous weapons systems — that is, platforms like drones that can decide on their own whether or not to use force. Twenty-six states, along with the special rapporteur, said this sort of technology is so morally abhorrent that even its development should be forbidden, the same way we have forbidden land mines, cluster munitions, and chemical, biological, and even most nuclear weapons.

I don’t get it. At a basic level, if a weapon system conforms to the laws of war, yet still inspires moral abhorrence among critics, is the weapon at fault, or are the laws of war at fault? At a fundamental level, the reason chem/bio weapons are forbidden, and the reason there remains a strong norm against nuclear weapons use, is that they violate the most basic principles behind all laws of conflict: military necessity, distinction, and proportionality.

Put another way, chemical, biological, and nuclear weapons are by design not discriminate, proportionate, or even in most cases militarily necessary (the most common, and scariest, use case for a nuclear weapon is a “dead man’s trigger” for a failing regime — which serves no military purpose beyond inflicting heavier losses on a triumphant opponent). There is literally no conflict where their use can be justified under International Humanitarian Law or the Laws of Armed Conflict. Land mines and cluster munitions can be viewed in a largely similar light: they cannot discriminate between combatants and non-combatants once deployed, and their lingering presence long after the cessation of conflict makes their proportionality questionable. Land mines and cluster munitions can debatably be said to have military necessity, but that necessity is drowned out by the severe costs they impose on civilians.

Drones, on the other hand, do not suffer from those shortcomings. By design, they are discriminate and proportionate, at least as much as any weapons system can be. They are not always used in a way that upholds distinction, but that is an issue of user choices, not weapon design. Unlike other banned weapons technology, there is nothing inherent to drones that violates the norms of armed conflict. In fact, many officials in government argue that the rise of drone warfare is actually a human rights response to counterterrorism as practiced under President George W. Bush: fewer dead civilians, less societal disruption, less troop risk, and less political blowback.

What really concerns drone opponents, and what has metastasized into opposition to autonomy research, are the concepts of human agency and moral hazard.

The moral hazard argument is probably the most pernicious, as it relies on a specious understanding of the history and evolution of modern warfare. At a basic level, the more advanced militaries become, and the more socially progressive the societies supporting those militaries become, the less often conflict happens… and when it does happen, fewer people die than ever before. There is quantitative data behind this conclusion: despite continued, legitimate worries over conflict, it is actually less common and less deadly than at any point in history. War just isn’t as horrific as it was even 20 years ago, to say nothing of 30 years ago or more. Without excusing war, surely the lower frequency of conflict, and the lower casualties that result, are a rather stunning advancement for human well-being.

Even within the U.S., the clear target of the international campaigns against drones (despite muffled cries that other countries might one day employ military force somewhere), increased technology has made waging war easier than ever… yet politicians face far higher barriers to waging war than they did in the 20th century. Compare the debate today over intervening in Syria with the non-debate that accompanied, say, US involvement in Colombia’s civil war in the early 1960s.

But then there’s human agency to grapple with. And here is where autonomy critics focus their greatest poetry: a belief in the predictable moral rectitude of humans under duress. This argument presupposes that humans have a predictable moral impulse not to commit atrocities or wantonly murder civilians during warfare — a truly curious assertion given the behavior of actual humans in warfare. Indeed, the timing of the Human Rights Council’s deliberations was amusing: it coincided with Staff Sergeant Robert Bales announcing his intention to plead guilty to last March’s slaughter of 16 unarmed civilians in southern Afghanistan. The predictability and morality of humans really didn’t stop him from sneaking off his base in the middle of the night, literally cutting children open, stacking their bodies, and lighting them on fire.

No person is perfect… but then again, neither is any machine. And if an autonomous machine cannot at least match the performance of a human soldier, there is no way in hell any government would deploy a substandard, underperforming robot when a person could do the job perfectly well.

Yet it is at this point that the absolute poison of the “stop killer robots” campaign hits: by pushing for a preemptive prohibition on even the development of such systems, the movement threatens to criminalize technology itself. Maybe that’s the reason why human rights advocates have suddenly developed faith in the moral judgment of soldiers under fire, or why they’re suddenly endorsing the concept of personal danger as inherent to making war scary. Because underneath the talk of human rights, of distinction and proportionality, even of international law — all of which is not nearly as settled in their favor as they’d like — there is a creeping Luddism, a fear of “killer robots,” as they’re misleadingly called.

Indeed, because these systems do not really exist as critics imagine them to be, the entire campaign is based on science fiction — and a tendentious reading of science fiction at that. That’s why, instead of illustrating this post with a screen capture from a 20-year-old James Cameron film (where the machines of the future apparently lack infrared cameras or guided missiles), I chose Data from Star Trek, an autonomous robot who makes lethal decisions and commands starships yet still cries when he finds his pet cat in a shipwreck. Autonomy cuts both ways.

Or at least it could. Should the preemptive ban on autonomy research go through at the UN, vast areas of computer science and robotics research would suddenly be on the chopping block. Want advanced autopilots that could land a commercial airliner should the crew become incapacitated? Sorry, that might also help a drone fly itself. Want advanced image recognition to help doctors, analysts, security personnel, and scientific expeditions collect more data to improve our lives? Well, maybe we shouldn’t, because that could also be used in a drone flying over a conflict zone. Want to use machine learning so complicated research can proceed much faster than humans ever could on their own, improving our economy and way of life? Too bad, it might also be used in a drone to make complex moral decisions about when to use force. Care about better systems integration, so your self-driving car doesn’t accidentally crash on a Nevada highway? I guess you’re just out of luck, because that might also help someone build a drone.

This is the dark side of technology bans. Unlike the indiscriminate weapons listed at the top of this post, which serve only one purpose and whose technology must be narrowly developed to inflict unacceptable harm, drones are built out of common, multi-use technology. The software underpinning them also runs the Internet shopping you love; the hardware comes from off-the-shelf vendors who build innocuous drones. The processes that make them function also create the airliners you fly on to vacation in the Caribbean.

A technology ban on lethal autonomy research is not only infeasible, it’s probably not even possible without criminalizing most of the high-tech industry. And considering how misguided the opprobrium levied at this sort of research already is, I don’t see any reason to expect a sudden awakening of rationality in the debate either. This is too bad: rather than serving as an effective brake and regulatory force on when and how autonomous weapons can or should be deployed, the movement to prevent even their development is taking such an extreme position that it is more likely to fizzle out of existence than to contribute meaningfully to the development of less horrible conflict. Which means, perversely, that it is actually making the world a worse place.