Technology Bans Don’t Work

Yesterday the UN Human Rights Council debated whether to preemptively ban the development of lethal autonomous weapons systems — that is, platforms like drones that can decide on their own whether or not to use force. Twenty-six states, along with the special rapporteur, argued that this sort of technology is so morally abhorrent that even its development should be forbidden, the same way we have forbidden land mines, cluster munitions, and chemical, biological, and even most nuclear weapons.

I don’t get it. At a basic level, if a weapon system conforms to the laws of war yet still inspires moral abhorrence among critics, is the weapon at fault, or are the laws of war? At a fundamental level, the reason chem/bio weapons are forbidden, and the reason there remains a strong norm against nuclear weapons use, is that they violate the most basic principles behind all laws of conflict: military necessity, distinction, and proportionality.

Put another way, chemical, biological, and nuclear weapons are by design not discriminate, proportionate, or even, in most cases, militarily necessary (the most common, and scariest, use case for a nuclear weapon is a “dead man’s trigger” for a failing regime — which serves no military purpose beyond inflicting heavier losses on a triumphant opponent). There is literally no conflict in which their use can be justified under International Humanitarian Law or the Laws of Armed Conflict. Land mines and cluster munitions can be viewed in a largely similar light: they cannot discriminate between combatants and non-combatants once deployed, and their lingering presence long after the cessation of conflict makes their proportionality questionable. They can debatably be said to have military necessity, but that necessity is drowned out by the severe costs they inflict on civilians.

Drones, on the other hand, do not suffer from those shortcomings. By design, they are discriminate and proportionate, at least as much as any weapons system can be. They are not always used in a way that upholds distinction, but that is an issue of user choice, not weapons design. Unlike other banned weapons technologies, there is nothing inherent to drones that violates the norms of armed conflict. In fact, many government officials argue that the rise of drone warfare is actually a human rights response to counterterrorism as practiced under President George W. Bush: fewer dead civilians, less societal disruption, less risk to troops, and less political blowback.

What really concerns drone opponents, and what has metastasized into opposition to autonomy research, are the concepts of human agency and moral hazard.

The moral hazard argument is probably the most pernicious, as it relies on a specious understanding of the history and evolution of modern warfare. At a basic level, the more advanced militaries become, and the more socially progressive the societies supporting those militaries become, the less often conflict happens… and when it does happen, fewer people die than ever before. There is quantitative data behind this conclusion: despite continued, legitimate worries over conflict, it is actually less common and less deadly than at any point in history. War just isn’t as horrific as it was even 20 years ago, to say nothing of 30 years ago or more. Without excusing war, surely the lower frequency of conflict, and the lower casualties that result, are a rather stunning advancement for human well-being.

Even within the U.S. (the clear target of the international campaigns against drones, despite muffled cries that other countries might one day employ military force somewhere), increased technology has made waging war easier than ever… yet politicians face far higher barriers to waging war than they did at any point in the 20th century. Compare the debate today over intervening in Syria with the non-debate that accompanied, say, U.S. involvement in Colombia’s civil war in the early 1960s.

But then there’s human agency to grapple with. And here is where autonomy critics focus their greatest poetry: a belief in the predictable moral rectitude of humans under duress. This argument presupposes that humans have a predictable moral impulse not to commit atrocities or wantonly murder civilians during warfare — a truly curious assertion given the behavior of actual humans in warfare. Indeed, the timing of the Human Rights Council’s deliberations was darkly amusing: it coincided with Staff Sergeant Robert Bales announcing his intention to plead guilty to last March’s slaughter of 16 unarmed civilians in southern Afghanistan. The predictability and morality of humans did not stop him from sneaking off his base in the middle of the night, literally cutting children open, stacking their bodies, and lighting them on fire.

No person is perfect… but then again, neither is any machine. And if an autonomous machine cannot at least match the performance of a human soldier, there is no way in hell any government would deploy a substandard, underperforming robot when a person could do the job perfectly well.

Yet it is at this point that the absolute poison of the “stop killer robots” campaign hits: by pushing for a preemptive prohibition on even the development of such systems, the movement threatens to criminalize the technology itself. Maybe that’s why human rights advocates have suddenly developed faith in the moral judgment of soldiers under fire, or why they’re suddenly endorsing personal danger as an essential part of what makes war scary. Because underneath the talk of human rights, of distinction and proportionality, even of international law — all of which is not nearly as settled in their favor as they’d like — there is a creeping Luddism, a fear of “killer robots,” as they’re misleadingly called.

Indeed, because these systems do not really exist as critics imagine them, the entire campaign is based on science fiction — and a tendentious reading of science fiction at that. That’s why, instead of illustrating this post with a screen capture from a 20-year-old James Cameron film (where the machines of the future apparently lack infrared cameras and guided missiles), I chose Data from Star Trek, an autonomous robot who makes lethal decisions and commands starships yet still cries when he finds his pet cat in a shipwreck. Autonomy cuts both ways.

Or at least it could. Should the preemptive ban on autonomy research go through at the UN, vast areas of computer science and robotics research would suddenly land on the chopping block. Want advanced autopilots that could land a commercial airliner should the crew become incapacitated? Sorry, that might also help a drone fly itself. Want advanced image recognition to help doctors, analysts, security personnel, and scientific expeditions collect more data and improve our lives? Well, maybe we shouldn’t, because that could also be used in a drone flying over a conflict zone. Want to use machine learning so complicated research can proceed far faster than humans could ever manage, improving our economy and way of life? Too bad, it might also be used in a drone to make complex moral decisions about when to use force. Care about better systems integration, so your self-driving car doesn’t accidentally crash on a Nevada highway? I guess you’re just out of luck, because that might also help someone build a drone.

This is the dark side of technology bans. Unlike the indiscriminate weapons listed at the top of this post, which serve only one purpose and whose technology must be narrowly developed to inflict unacceptable harm, drones are built out of common, multi-use technology. The software underpinning them also runs the Internet shopping you love; the hardware comes from off-the-shelf vendors who build innocuous drones. The processes that make them function also create the airliners you use to vacation in the Caribbean.

A technology ban on lethal autonomy research is not just infeasible; it’s probably not even possible without criminalizing most of the high-tech industry. And considering how misguided the opprobrium levied at this sort of research already is, I don’t see any reason to expect a sudden awakening of rationality in the debate either. This is too bad: rather than serving as an effective brake and regulatory force on when and how autonomous weapons can or should be deployed, the movement to prevent even their development has taken such an extreme position that it is more likely to fizzle out of existence than to contribute meaningfully to making conflict less horrible. Which means, perversely, that it is actually making the world a worse place.