Human Agency and the Moral Imperative of Robot Warfare

There have been a number of responses to my FP article on robot autonomy and warfare, some serious, some laughably unserious. Among the more serious and considered is Jay Adler's. Writing for The Algemeiner, he raises the biggest and, I think, most serious objection to increasing automation in warfare: human agency. He also, perhaps unintentionally, implies a further argument about the deterrent power of personal risk that is worth discussing.

Implied by all Foust argues is that human moral advancement in the conduct of war – a problematic, though nonetheless genuine notion acknowledged by just war, among other, theories – is exemplified by diminished numbers of casualties, especially civilian and what would amount to more effective winning. This is a seductively appealing argument on the face of it. If we must sometimes fight wars (well, really, we have to admit, it is far more often than sometimes) let us at least do it by killing as few people as possible, certainly as few women and children, in the classic formulation, and as few innocent civilians.

These are certainly goals to pursue, and the militaries of liberal democracies do most of the time pursue them. But I do not think this goal is the essence of human moral advancement in war. First, effectiveness in winning war has never been a problem. Since wars began, whenever exactly that was – two clans fighting over a cave and a fire? – most of the time one side has managed some kind of victory. Warring groups have always been effective at winning.

This is a fascinating counterargument, if only because the biggest challenge in the current "war" (I'm less and less comfortable using the term "war" to describe our many counterterrorism policies) is that there are not, in fact, clearly defined conditions under which one side or the other can declare victory. While al Qaeda very plainly has as its goal the overthrow of enemy regimes and their replacement with a caliphate preaching its values, the U.S. has no comparably clear victory conditions.

This is a problem with how most American wars have been fought post-9/11: victory defined through absence, goals at once vaguely defined and easily falsified, little strategic thinking, and no consideration of alternatives. It is true of Afghanistan, and it is true of the "war on violent extremism," or whatever it's called this week, too.

So if we don't really know what victory looks like, how can one argue that "effectiveness in winning war has never been a problem"? That is actually the heart of the issue here: when it's unclear what winning a war even means, are there lesser forms of violence that might at least manage the problem so that warfare isn't necessary? Adler doesn't say.

On the score of diminished civilian casualties, whatever increased human concern with what are called the _laws of war_, through the mid-twentieth century it can hardly be argued that humanity had achieved any form of advancement. More effectively lethal weapons produced, in fact, more killing, and more civilian death, on a scale previously unimaginable. Since the second half of the twentieth century a pronounced characteristic of war, in the lethality of weaponry, has been that of profound technological disparity between warring parties. This has been so in all of the conflicts of the United States, of Israel over the past more than thirty years, and of the Soviet Union and of Russia in Chechnya, for example. This has produced markedly lower comparative casualties on one side (not always a clear winner, as in the U.S. in Vietnam or Israel in Lebanon in 2006), though sometimes still comparatively massive casualties, even mostly civilian, as in Vietnam and the Iraq War, on the other. This disparity may be a happy development for the side with low numbers – not necessarily a winner, and not by any inherent necessity deserving of the benefit – but it cannot easily be argued that such a development is an advancement in the protection of human rights in war.

Unfortunately, this relies on a rather poisonous assumption that lies beneath the latest round of human rights advocacy against drones and against warfare. Weapons are indeed more effective at killing than ever before, but something remarkable has happened alongside that. Conflict, in general, is less common and less deadly than at any point in human history. The Norwegian Ministry of Foreign Affairs even registered a remarkable change during the deadliest part of the Iraq War: for two years running, no new conflicts had broken out anywhere in the world. They continue:

The number of ongoing conflicts has declined since shortly after the end of the Cold War, and the severity of armed conflict has generally declined since World War II. This fact sharply contradicts many pessimistic perspectives bolstered by media headlines from Iraq, Afghanistan, and Darfur. Research conducted by the Centre for the Study of Civil War at PRIO, using the most recently updated data collected in collaboration with the Uppsala Conflict Data Program (UCDP) at Uppsala University, indicates a more complex situation, with both reassuring and disturbing trends.

After a period of steady decline in the number of armed conflicts in the world, the downward trend has ended. Data from PRIO and Uppsala University indicates that the number of active conflicts is no longer sinking, but has held steady at 32 for three years in a row. Secondly, we are now in the longest period since World War II without interstate war (those fought between two or more countries). Moreover, we register no new conflicts of any type over the previous two years; this is the first time in the postwar period in which two years have passed with no new conflicts having broken out.

Obviously, new conflicts have broken out since that study was published, most visibly in the Middle East thanks to the Arab Spring. But despite these new conflicts, casualties remain remarkably low in historical context. King Leopold's campaign in the Congo, for example, which killed some 10 million people, has never been matched since. Even the Congo's own civil war of the late 1990s and 2000s, one of the bloodiest conflicts of the modern age, did not directly kill anywhere near that many innocents. Most of the roughly 8 million people who have died as a result of that war died because they were denied access to critical infrastructure and resources, not from the fighting itself.

Wars just aren't as massively deadly as they once were. When the Soviets invaded Afghanistan, prompting a massive U.S.-funded insurgency to unseat them, they eventually lost nearly 15,000 soldiers in combat. Upwards of 1.5 million Afghan civilians died as well. The current round of war there has killed a small fraction of that number over a longer period of conflict. Even the war in Iraq, though horrifying by any standard, killed only about a tenth as many civilians as the Iran-Iraq war two decades earlier, which killed over a million people.

None of this excuses wars, especially wars of choice based on lies, as Iraq was. They are abhorrent stains on our conscience; they brutalize without purpose, and there is simply no justification for them.

But the numbers are difficult to avoid. War just isn't as horrific as it was even 20 years ago, to say nothing of 30 years ago or more. Without excusing war, surely the lower frequency of conflict, and the lower casualties that result, amount, contra Adler, to a rather stunning advancement in human well-being.

Back to Adler:

This is, indeed, essential to the more general debate over the use of drones; in the current consideration, though, the matter is not machines using force (really, being used for), but machines using force autonomously. Autonomous weaponry removes the human moral agency of killing in war, could remove it, ultimately, from war altogether. Yet if anything can redeem the essential human crime of war, enact justice in the waging of it, it is precisely the complementary human moral agency of it.

This rests on a couple of interesting assumptions:

  1. Machine autonomy removes human moral agency for killing;
  2. Machine autonomy might remove moral agency from warfare altogether; and
  3. Human moral agency is necessary for “redeeming the essential human crime of war.”

Adler shares Human Rights Watch's assumption that "fully autonomous machines" will lack empathy and moral agency. But, as I explained in my piece, this misunderstands what people mean when they talk about autonomous machine uses of force. At a basic level, the policymakers who approve the use of drones do not lack moral agency; if anything, the continuing lawsuits against the CIA and the Pentagon over drone strikes show a clear chain of moral agency, even if the courts refuse to countenance those suits. The public holds officials accountable for what they order their soldiers, pilots, and machines to do, and even though the program's secrecy clouds any effort to hold it accountable, there is little debate that the decision to use force must be critiqued at least as heavily as the specific method used to deliver it.

When one delves into the depths of what "full autonomy" means (and that is the category of automation HRW is campaigning against), we are dealing with machines that not only make decisions on their own but also have the capacity to learn. The assumption that machines will only ever learn to do evil, and will never learn the value of doing good (showing restraint, practicing a degree of precision humans just don't have the patience for, behaving in a limited way), is just that: an assumption, born more of lazy allusions to science fiction written before the Internet than of anything in the science of how machines learn.

Put simply, if an autonomous learning machine is programmed with our values, it will reflect those values. Think of Data from Star Trek (an autonomous machine capable of deciding without human input when and how to use deadly force, including when in command of a starship), rather than the Terminator (a favorite image of the anti-machine crowd).
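To make that claim slightly more concrete, here is a minimal, purely illustrative sketch of what "programming our values" into an autonomous system could look like. It is a toy example in Python, not a description of any real weapons system; every name, field, and threshold below is invented for illustration. The point is only that the values (positive identification, a proportionality limit, a human veto) are chosen by people in advance, and the machine defaults to restraint whenever any of them is unmet.

```python
# Purely illustrative sketch, not any real system: human-chosen constraints
# encoded as hard preconditions an autonomous system must satisfy before it
# may act. All names and thresholds here are invented for the example.

from dataclasses import dataclass


@dataclass
class Assessment:
    positive_id: bool               # target positively identified as hostile
    confidence: float               # confidence in that identification (0-1)
    expected_civilian_harm: int     # estimated noncombatants put at risk
    human_override_available: bool  # a human can still veto the action


def may_engage(a: Assessment,
               min_confidence: float = 0.99,
               max_civilian_harm: int = 0) -> bool:
    """Return True only if every human-set constraint is satisfied."""
    if not a.positive_id or a.confidence < min_confidence:
        return False      # uncertainty -> hold fire
    if a.expected_civilian_harm > max_civilian_harm:
        return False      # proportionality limit exceeded -> hold fire
    if not a.human_override_available:
        return False      # no human veto available -> hold fire
    return True


# A borderline case is refused: restraint is the default, not the exception.
print(may_engage(Assessment(True, 0.97, 0, True)))  # False
```

None of this settles the philosophical question, but it illustrates why "autonomous" does not have to mean "unconstrained": the restraint Adler values can itself be written into the machine.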

Moreover, the question of human agency, if anything, fatally undermines Adler's argument. If the trend of warfare over time is for increasing sophistication to lead to decreased casualties and a decreased burden on civilian populations (a point Adler partly concedes in his final paragraph), then moral agency is still being exercised: it is employed to lower casualties further through increasingly sophisticated machines of war.

In many ways, officials appeal to drones as a human rights-driven response to counterterrorism: fewer belligerent casualties than outright warfare (true) and fewer civilian casualties than even traditional air wars (also true). The moral agency Adler locates in "pulling the trigger" doesn't disappear when the decision is made to employ an autonomous weapon, any more than moral agency disappears when an automated counter-mortar battery happens to kill people laying timed rockets to attack a base in Afghanistan. It is just expressed differently. The decision to employ hyper-precise machines is a moral decision to limit collateral damage, not an insidious decision to ignore it.

Such a decision, pace Adler, is not dehumanizing. It is, in fact, very deeply humanizing: showing that even “enemy” life has value that should be safeguarded to the greatest extent possible with human ingenuity.