Human Agency and the Moral Imperative of Robot Warfare

There have been a number of responses to my FP article on robot autonomy and warfare — some serious, some laughably unserious. Among the more serious and considered is Jay Adler’s. Writing for The Algemeiner, he raises the biggest and, I think, most serious objection to increasing automation in warfare: human agency. He also, perhaps unintentionally, implies a further argument about the deterrent power of personal risk that is worth discussing as well.

Implied by all Foust argues is that human moral advancement in the conduct of war – a problematic, though nonetheless genuine notion acknowledged by just war, among other, theories – is exemplified by diminished numbers of casualties, especially civilian, and what would amount to more effective winning. This is a seductively appealing argument on the face of it. If we must sometimes fight wars (well, really, we have to admit, it is far more often than sometimes) let us at least do it by killing as few people as possible, certainly as few women and children, in the classic formulation, and as few innocent civilians.

These are certainly goals to pursue, and the militaries of liberal democracies do most of the time pursue them. But I do not think this goal is the essence of human moral advancement in war. First, effectiveness in winning war has never been a problem. Since wars began, whenever exactly that was – two clans fighting over a cave and a fire? – most of the time one side has managed some kind of victory. Warring groups have always been effective at winning.

This is a fascinating counterargument, if only because the biggest challenge in the current “war” (I’m less and less comfortable with using the term “war” to define our many counterterrorism policies) is that there are not, in fact, clearly defined conditions under which one side or another can declare victory. While al Qaeda very plainly has as its goal the overthrow of enemy regimes and their replacement with a caliphate preaching its values, the U.S. does not have victory conditions.

This is a problem with how most American wars have been fought post-9/11: victory defined through absence, easily falsifiable and vaguely defined goals, little strategic thinking, and no consideration of alternatives. It is true of Afghanistan, and it is true of the “war on violent extremism,” or whatever it’s called this week, too.

So if we don’t really know what victory looks like, how can one argue that “effectiveness in winning war has never been a problem”? That is actually the heart of the issue here: when it’s unclear what winning a war even means, are there lesser forms of violence that might at least manage the problem so warfare isn’t necessary? Adler doesn’t say.

On the score of diminished civilian casualties, whatever increased human concern with what are called the laws of war, through the mid twentieth century it can hardly be argued that humanity had achieved any form of advancement. More effectively lethal weapons produced, in fact, more killing, and more civilian death, on a scale previously unimaginable. Since the second half of the twentieth century a pronounced characteristic of war, in the lethality of weaponry, has been that of profound technological disparity between warring parties. This has been so in all of the conflicts of the United States, of Israel over the past more than thirty years, of the Soviet Union and of Russia in Chechnya, for example. This has produced markedly lower comparative casualties on one side (not always a clear winner, as in the U.S. in Vietnam or Israel in Lebanon in 2006), though sometimes still comparatively massive casualties, even mostly civilian, as in Vietnam and the Iraq War, on the other. This disparity may be a happy development for the side with low numbers – not necessarily a winner, and not by any inherent necessity deserving of the benefit – but it cannot easily be argued that such a development is an advancement in the protection of human rights in war.

Unfortunately, this relies on a rather poisonous assumption that lies beneath the latest round of human rights advocacy against drones and against warfare. Weapons are indeed more effective at killing than ever before, but something remarkable has happened alongside that: conflict, in general, is less common and less deadly than at any point in human history. The Norwegian Ministry of Foreign Affairs even registered a remarkable change during the deadliest part of the Iraq War: for two years running, no new conflicts had broken out anywhere in the world. The report continues:

The number of ongoing conflicts has declined since shortly after the end of the Cold War and the severity of armed conflict has generally declined since World War II. This fact sharply contradicts many pessimistic perspectives bolstered by media headlines from Iraq, Afghanistan, and Darfur. Research conducted by the Centre for the Study of Civil War at PRIO, using the most recently updated data collected in collaboration with the Uppsala Conflict Data Program (UCDP) at Uppsala University, indicates a more complex situation, with both reassuring and disturbing trends.

After a period of steady decline in the number of armed conflicts in the world, the downward trend has ended. Data from PRIO and Uppsala University indicates that the number of active conflicts is no longer sinking, but has held steady at 32 for three years in a row. Secondly, we are now in the longest period since World War II without interstate war (those fought between two or more countries). Moreover, we register no new conflicts of any type over the previous two years; this is the first time in the postwar period in which two years have passed with no new conflicts having broken out.

Obviously, new conflicts have broken out since that study was published, most visibly in the Middle East thanks to the Arab Spring. But despite these new conflicts, casualties remain remarkably low in historical context. King Leopold’s campaign in the Congo, for example, which killed 10 million people, has never been matched before or since. Even the Congo’s own civil war, which ran from the 1990s into the 2000s and was one of the bloodiest conflicts of the modern age, directly killed only a small fraction of that number. Most of the 8 million or so who have died as a result of that conflict died because they were denied access to critical infrastructure and resources, not from the fighting itself.

Wars just aren’t as massively deadly as they once were. When the Soviets invaded Afghanistan, prompting a massive U.S.-funded insurgency to unseat them, they eventually lost nearly 15,000 soldiers to combat. Upwards of 1.5 million Afghan civilians died, too. The current round of war there has killed a fraction as many during a longer period of conflict. Even the war in Iraq, though horrifying by any standard, killed only about a tenth as many civilians as the Iran-Iraq war two decades earlier, which killed over a million.

None of this excuses wars, especially those of choice based on lies, as Iraq was. Such wars are abhorrent stains on our conscience; they brutalize without purpose, and there is just no justification for them.

But the numbers are difficult to avoid. War just isn’t as horrific as it was even 20 years ago, to say nothing of 30 years ago or more. Without excusing war, surely the lower frequency of conflict, and the lower casualties that result, are, contra Adler, a rather stunning advancement for human well-being.

Back to Adler:

This is, indeed, essential to the more general debate over the use of drones; in the current consideration, though, the matter is not machines using force (really, being used for), but machines using force autonomously. Autonomous weaponry removes the human moral agency of killing in war, could remove it, ultimately, from war altogether. Yet if anything can redeem the essential human crime of war, enact justice in the waging of it, it is precisely the complementary human moral agency of it.

This rests on a couple of interesting assumptions:

  1. Machine autonomy removes human moral agency for killing;
  2. Machine autonomy might remove moral agency from warfare altogether; and
  3. Human moral agency is necessary for “redeeming the essential human crime of war.”

Adler shares Human Rights Watch’s assumption that “fully autonomous machines” will lack empathy and moral agency. But, as I explained in my piece, this misunderstands what people mean when they talk about autonomous machine uses of force. At a basic level, the policymakers who approve the use of drones do not lack moral agency; if anything, the continued round of lawsuits directed at the CIA and Pentagon over drone strikes shows a clear chain of moral agency, even if courts refuse to countenance the suits. The public holds officials accountable for what they order their soldiers, pilots, and machines to do. And despite the secrecy of the program clouding any effort to hold it accountable, there is little debate that the decision to use force must be critiqued at least as heavily as the specific method used to deliver it.

When one delves into the depths of what “full autonomy” means — and that is the category of automation HRW is campaigning against — then we’re dealing with machines that not only make decisions on their own, but that also have the capacity to learn. The assumption that machines will only learn to do evil but will never learn the value of doing good (showing restraint, practicing extreme precision humans just don’t have the patience for, behaving in a limited way) is just an assumption — one born more of lazy allusions to science fiction written before the Internet than of anything based in the science of how machines learn.

Put simply, if an autonomous learning machine is programmed with our values, it will reflect those values. Think of Data from Star Trek (an autonomous machine capable of deciding without human input when and how to use deadly force, including when in command of a starship), rather than the Terminator (a favorite image of the anti-machine crowd).

Moreover, human agency is immaterial to this point, and in a way that fatally undermines Adler’s argument. If the trend of warfare over time is for increasing sophistication to lead to decreased casualties and a decreased burden on civilian populations — a point Adler all but concedes in his final paragraph — then moral agency is still being employed to further lower casualties through increasingly sophisticated machines of war.

In many ways, officials appeal to drones as a human rights-driven approach to counterterrorism — fewer belligerent casualties than outright warfare (true) and fewer civilian casualties than even traditional air wars (also true). The moral agency Adler appeals to with “pulling the trigger” doesn’t disappear when the decision is made to employ an autonomous weapon, any more than moral agency disappears when an automated counter-mortar battery happens to kill a group of people laying timed rockets to attack a base in Afghanistan. It is just expressed differently. The decision to employ hyper-precise machines is a moral decision to limit collateral damage, not an insidious decision to ignore collateral damage.

Such a decision, pace Adler, is not dehumanizing. It is, in fact, very deeply humanizing: showing that even “enemy” life has value that should be safeguarded to the greatest extent possible with human ingenuity.

