For PBS: The false fear of autonomous weapons

Last month, Human Rights Watch raised eyebrows with a provocatively titled report about autonomous weaponry that can select targets and fire at them without human input. “Losing Humanity: The Case Against Killer Robots,” the headline blasts; the report argues that autonomous weapons will increase the danger to civilians in conflict.

In this report, HRW urges the international community to “prohibit the development, production, and use of fully autonomous weapons” because these machines “inherently lack human qualities that provide legal and non-legal checks on the killing of civilians.”

While such concern is understandable, it is misplaced. For starters, as HRW concedes in its report, no country, including the U.S., has decided to develop or deploy fully autonomous armed robots. Shortly after the report was published, the Pentagon released a directive on the development of autonomy that called for “commanders and operators to exercise appropriate levels of human judgment over the use of force.”

So if the Pentagon doesn’t want fully autonomous weapons, why is there such concern about them?

Part of the reason, arguably, is cultural. American science fiction, in particular, has made clear that autonomous robots are deadly. From the Terminator franchise and both the original and rebooted Battlestar Galactica to the Matrix trilogy, the clear thrust of popular science fiction is that building machines that function without human input will be the downfall of humanity.

It is through this sci-fi “understanding” of technology that some object to autonomous weaponry. Yet the Pentagon directive shows that the military does not want total weapon autonomy, and a deeper look at this type of weapon reveals that the perceived threat may not be valid. In fact, re-examining it suggests more plausible responses to this technology than outright prohibition.

Many of the processes that go into making lethal decisions are already automated. The intelligence community (IC) generates around 50,000 pages of analysis each year, culled from hundreds of thousands of messages. Every day, analysts reviewing targeting intelligence populate lists for the military and CIA, working from hundreds of pages of documents selected by computer filters and automated databases that flag certain keywords.
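For readers curious what that kind of filtering looks like in practice, here is a minimal, hypothetical sketch in Python. The watch-list terms and documents are invented, and real systems are far more sophisticated, but the basic idea of flagging messages that contain certain keywords for human review is the same.

```python
# Hypothetical keyword filter, illustrating the automated triage step
# described above. The watch-list terms and documents are invented.

WATCHLIST = {"ied", "mortar", "safehouse", "courier"}

def flag_documents(documents, watchlist=WATCHLIST):
    """Return documents whose text contains any watch-list keyword."""
    flagged = []
    for doc in documents:
        words = set(doc["text"].lower().split())
        hits = words & watchlist
        if hits:
            flagged.append({"id": doc["id"], "matched": sorted(hits)})
    return flagged

if __name__ == "__main__":
    sample = [
        {"id": "msg-001", "text": "Routine supply convoy departed on schedule"},
        {"id": "msg-002", "text": "Courier seen leaving suspected safehouse"},
    ]
    print(flag_documents(sample))  # only msg-002 is flagged
```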

In war zones, too, many decisions to kill are at least partly automated. Software platforms such as Palantir collect massive amounts of information about IEDs, analyze it without human input, and spit out lists of likely targets. No human could possibly read, understand, analyze, and act on so much information in such a short period of time.
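Again purely as illustration (this is not Palantir’s software or any real targeting tool), automated analysis at its simplest means aggregating many reports and ranking what comes up most often. The toy sketch below, using made-up incident data, shows the kind of pattern-surfacing that no human team could do at scale by hand.

```python
# Toy illustration of automated pattern analysis over incident reports.
# The data and the scoring rule are invented for this example.

from collections import Counter

def rank_locations(incident_reports, top_n=3):
    """Count reports per location and return the most frequently reported ones."""
    counts = Counter(report["location"] for report in incident_reports)
    return counts.most_common(top_n)

if __name__ == "__main__":
    reports = [
        {"location": "Route A", "type": "IED"},
        {"location": "Route B", "type": "IED"},
        {"location": "Route A", "type": "IED"},
    ]
    print(rank_locations(reports))  # [('Route A', 2), ('Route B', 1)]
```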

Automated systems already decide to fire at targets without human input as well. The U.S. Army fields advanced counter-mortar systems that track incoming mortar rounds, swat them out of the sky, and fire a return volley of mortars in response, all without direct human input. In fact, the U.S. has employed similar (though less advanced) automated defensive systems aboard its naval vessels for decades. Additionally, heat-seeking missiles don’t require human input once they’re fired; on their own, they seek out and destroy the nearest intense heat source regardless of its identity.
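The logic of such a defensive system can be sketched in a few lines. The rule below is hypothetical and enormously simplified (real systems fuse radar tracks, ballistics, and safety constraints), but it captures the point: the decision to fire is made by software against pre-set criteria, not by a human at the trigger.

```python
# Hypothetical, heavily simplified engagement rule for a counter-mortar
# system: intercept only if the projectile is inbound and predicted to
# land inside the defended zone. All values are invented.

DEFENDED_RADIUS_M = 500  # protected area around the base, in meters

def should_engage(track):
    """Decide automatically whether to fire an interceptor at a tracked round."""
    inbound = track["closing_speed_mps"] > 0
    lands_inside = track["predicted_impact_distance_m"] <= DEFENDED_RADIUS_M
    return inbound and lands_inside

if __name__ == "__main__":
    incoming = {"closing_speed_mps": 180.0, "predicted_impact_distance_m": 120.0}
    outgoing = {"closing_speed_mps": -30.0, "predicted_impact_distance_m": 900.0}
    print(should_engage(incoming))  # True: the software fires without human input
    print(should_engage(outgoing))  # False
```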

It’s hard to see how, in that context, a drone (or rather the computer system operating the drone) that automatically selects a target for possible strike is morally or legally any different from the weapons the U.S. already employs.

The real debate should center on the philosophy behind these systems: what are they designed to do, and can they be made to do it more effectively? Humans are imperfect: targets may be misidentified, vital intelligence can be discounted because of cognitive biases, and the outside information needed to make a decision may simply not be available. Autonomous systems can improve that process dramatically, so that civilians are far better protected than they would be by human judgment alone.

Additionally, automated analysis systems reflect the attitudes and assumptions of the people who program them; American values shape how these systems analyze data and why certain pieces of it are highlighted or ignored. In other words, automated systems already reflect us and our priorities, and there is no reason to think more automation would change that. If autonomous drones were to use less discretion before firing a weapon, for example, that would be a deliberate design choice, not something inherent to automation.

Alternatively, these programs could be changed to better reflect our values and priorities. The possibility of full autonomy raises a number of questions about the use of force and about how to maintain accountability when people take a less active role. But it could also make warfare less deadly, more accountable, and ultimately more humane; knee-jerk reactions against such a future don’t further the debate any more than uncritically embracing the technology does.

This was originally posted at PBS.org.
