Technology Bans Don’t Work

[Image: Data from Star Trek, crying.]

Yesterday the UN Human Rights Council debated whether to preemptively ban the development of lethal autonomous weapons systems — that is, platforms like drones that can decide on their own whether or not to use force. Twenty-six states, along with the special rapporteur, said this sort of technology is so morally abhorrent that even its development should be forbidden, the same way we have forbidden land mines, cluster munitions, and chemical, biological, and even most nuclear weapons.

I don’t get it. At a basic level, if a weapon system conforms to the laws of war yet still inspires moral abhorrence among critics, is the weapon at fault, or are the laws of war? At a fundamental level, the reason chemical and biological weapons are forbidden, and the reason there remains a strong norm against nuclear weapons use, is that they violate the most basic principles behind all laws of conflict: military necessity, distinction, and proportionality.

Put another way, chemical, biological, and nuclear weapons are by design not discriminate, proportionate, or even in most cases militarily necessary (the most common, and scariest, use case for a nuclear weapon is a “dead man’s trigger” for a failing regime — which serves no military purpose beyond inflicting heavier losses on a triumphant opponent). There is literally no conflict in which their use can be justified under International Humanitarian Law or the Laws of Armed Conflict. Land mines and cluster munitions can be viewed in a largely similar light: they cannot discriminate between combatants and non-combatants once deployed, and their lingering presence long after the cessation of conflict makes their proportionality questionable. Land mines and cluster munitions can debatably be said to have military necessity, but that necessity is drowned out by the severe costs they impose on civilians.

Drones, on the other hand, do not suffer from those shortcomings. By design, they are discriminate and proportionate, at least as much as any weapons system can be. They are not always used in a way that upholds distinction, but that is an issue of user choices, not weapons design. Unlike other banned weapons technologies, there is nothing inherent to drones that violates the norms of armed conflict. In fact, many officials in government argue that the rise of drone warfare is actually a human rights response to counterterrorism as practiced under President George W. Bush: fewer dead civilians, less societal disruption, less risk to troops, and less political blowback.

What really concerns drone opponents, and what has metastasized into opposition to autonomy research, is the question of human agency and moral hazard.

The moral hazard argument is probably the most pernicious, as it relies on a specious understanding of the history and evolution of modern warfare. At a basic level, the more advanced militaries become, and the more socially progressive the societies supporting those militaries become, the less often conflict happens… and when it does happen, fewer people die than ever before. There is quantitative data behind this conclusion: despite continued, legitimate worries over conflict, war is actually less common and less deadly than ever before in history. War just isn’t as horrific as it was even 20 years ago, to say nothing of 30 years ago or more. Without excusing war, surely the lower frequency of conflict, and the lower casualties that result, is a rather stunning advancement for human well-being.

Even within the U.S. (the clear target of the international campaigns against drones, despite muffled cries that other countries might one day employ such force somewhere), increased technology has made waging war easier than ever… yet politicians face far higher barriers to waging war than they did in the 20th century. Compare the debate today over intervening in Syria with the non-debate that accompanied, say, U.S. involvement in Colombia’s civil war in the early 1960s.

But then there’s human agency to grap­ple with. And here is where auton­o­my crit­ics focus their great­est poet­ry: a belief in the pre­dictable moral rec­ti­tude of humans under duress. This argu­ment pre­sup­pos­es that humans have a pre­dictable moral impulse not to com­mit atroc­i­ties or wan­ton­ly mur­der civil­ians dur­ing war­fare — a tru­ly curi­ous asser­tion giv­en the behav­ior of actu­al humans in war­fare. Indeed, the tim­ing of the Human Rights Coun­cil’s delib­er­a­tions was amus­ing: it coin­cid­ed with Staff Sergeant Robert Bales announc­ing his inten­tion to plead guilty to last March’s slaugh­ter of 16 unarmed civil­ians in south­ern Afghanistan. The pre­dictabil­i­ty and moral­i­ty of humans real­ly did­n’t stop him from sneak­ing off his base and lit­er­al­ly cut­ting chil­dren open in the mid­dle of the night, mak­ing their bod­ies into stacks, and light­ing them on fire.

No person is perfect… but then again, neither is any machine. And if an autonomous machine cannot at least match the performance of a human soldier, there is no way in hell any government would deploy a substandard, underperforming robot when a person could do the job perfectly well.

Yet this is the point where the absolute poison of the “stop killer robots” campaign hits: by pushing for a preemptive prohibition on even the development of such systems, the movement threatens to criminalize technology itself. Maybe that’s why human rights advocates have suddenly developed faith in the moral judgment of soldiers under fire, or why they’re suddenly endorsing the concept of personal danger as inherent to making war scary. Because underneath the talk of human rights, of distinction and proportionality, even of international law — all of which is not nearly as settled in their favor as they’d like — there is a creeping Luddism, a fear of “killer robots,” as they’re misleadingly called.

Indeed, because these systems do not really exist as critics imagine them, the entire campaign is based on science fiction — a tendentious reading of science fiction, too. That’s why, instead of illustrating this post with a screen capture from a 20-year-old James Cameron film (where the machines of the future apparently don’t have infrared cameras or guided missiles), I chose Data from Star Trek, an autonomous robot who makes lethal decisions and commands starships yet still cries when he finds his pet cat in a shipwreck. Autonomy cuts both ways.

Or at least it could. Should the preemptive ban on autonomy research go through at the UN, vast areas of computer science and robotics research would suddenly land on the chopping block. Want advanced autopilots that could land a commercial airliner should the crew become incapacitated? Sorry, that might also help a drone fly itself. Want advanced image recognition to help doctors, analysts, security personnel, and scientific expeditions collect more data to improve our lives? Well, maybe we shouldn’t, because that could also be used in a drone flying over a conflict zone. Want to use machine learning so complicated research can go much faster than humans ever could, improving our economy and way of life? Too bad; it might also be used in a drone to make complex moral decisions about when to use force. Care about better systems integration, so your self-driving car doesn’t accidentally crash on a Nevada highway? I guess you’re just out of luck, because that might also help someone build a drone.

This is the dark side of technology bans. Unlike the indiscriminate weapons listed at the top of this post, which serve only one purpose and rely on technology narrowly developed to inflict unacceptable harm, drones are built out of common, multi-use technology. The software underpinning them also runs the Internet shopping you love; the hardware comes from off-the-shelf vendors who also build innocuous drones. The processes that make them function also create the airliners you use to vacation in the Caribbean.

A technology ban on lethal autonomy research is not just infeasible; it’s probably not even possible without criminalizing most of the high-tech industry. And considering how misguided the opprobrium levied at this sort of research already is, I don’t see any reason to expect a sudden awakening of rationality in the debate either. This is too bad: rather than serving as an effective brake and regulatory force on when and how autonomous weapons can or should be deployed, the movement to prevent even their development is taking such an extreme position that it is more likely to fizzle out of existence than to contribute meaningfully to the development of less horrible conflict. Which means, perversely, that it is actually making the world a worse place.

Joshua Foust used to be a foreign policy maven. Now he helps organizations communicate strategically and build audiences.
