For PBS: The false fear of autonomous weapons

Last month, Human Rights Watch raised eyebrows with a provocatively titled report about autonomous weaponry that can select targets and fire at them without human input. "Losing Humanity: The Case Against Killer Robots," the headline blasts, and the report argues that autonomous weapons will increase the danger to civilians in conflict.

In this report, HRW urges the international community to "prohibit the development, production, and use of fully autonomous weapons" because these machines "inherently lack human qualities that provide legal and non-legal checks on the killing of civilians."

While such concern is understandable, it is misplaced. For starters, as HRW concedes in its report, no country, including the U.S., has decided to either develop or deploy fully autonomous armed robots. Shortly after the report was published, the Pentagon released a directive on the development of autonomy that called for "commanders and operators to exercise appropriate levels of human judgment over the use of force."

So if the Pentagon doesn't want fully autonomous weapons, why is there such concern about them?

Part of the reason, arguably, is cultural. American science fiction, in particular, has made clear that autonomous robots are deadly. From the Terminator franchise and both versions of Battlestar Galactica to the Matrix trilogy, the clear thrust of popular science fiction is that making machines function without human input will be the downfall of humanity.

It is under this sci-fi "understanding" of technology that some object to autonomous weaponry. However, the Pentagon directive shows that the military certainly doesn't want total weapon autonomy. A deeper look at this type of weapon reveals that the perceived threat may not be valid. In fact, re-examination might suggest more plausible responses to this technology than full-bore prohibition.

Many of the processes that go into making lethal decisions are already automated. The intelligence community (IC) generates around 50,000 pages of analysis each year, culled from hundreds of thousands of messages. Every day, analysts reviewing targeting intelligence populate lists for the military and CIA via hundreds of pages of documents selected by computer filters and automated databases that discriminate for certain keywords.

In war zones, too, many decisions to kill are at least partly automated. Software programs such as Palantir collect massive amounts of information about IEDs, analyze it without human input, and spit out lists of likely targets. No human could possibly read, understand, analyze, and output so much information in such a short period of time.

Automated systems already decide to fire at targets without human input, as well. The U.S. Army fields advanced counter-mortar systems that track incoming mortar rounds, swat them out of the sky, and fire a return volley of mortars in response without any direct human input. In fact, the U.S. has employed similar (though less advanced) automated defensive systems for decades aboard its Navy vessels. Additionally, heat-seeking missiles don't require human input once they're fired: on their own, they seek out and destroy the nearest intense heat source, regardless of identity.

It's hard to see how, in that context, a drone (or rather the computer system operating the drone) that automatically selects a target for possible strike is morally or legally any different than weapons the U.S. already employs.

The real debate should surround the philosophy behind utilizing these systems: what are they designed to do, and can they be made to do it more effectively? Humans are imperfect: targets may be misidentified, vital intelligence can be discounted because of cognitive biases, and outside information just might not be available to make a decision. Autonomous systems can dramatically improve that process so that civilians are actually much better protected than by human inputs alone.

Additionally, automated analysis systems reflect the attitudes and assumptions of the people who program them; American values are reflected in how these systems analyze data and in why certain pieces of information are highlighted or ignored. In other words, automated systems already reflect us and our priorities, and there is no reason to think more automation would change that. The fear that autonomous drones would use less discretion before firing a weapon, for example, describes a deliberate design choice, not something inherent to automation.

Alternatively, these programs could be changed to better reflect our values and priorities. The possibility of full autonomy poses a number of questions about the use of force and how to maintain accountability when people take a less active role. But it could also make warfare less deadly, more accountable, and ultimately more humane; knee-jerk reactions against such a future don't further the debate any more than uncritically embracing such technology does.

This was originally posted at PBS.org.

