A Treasury of Terminator Images For Stories About Autonomous Drones

First off, I should point out that XKCD, the venerable stick people comic about math and science, has by far the best treatment of exactly what is so wrong (if fun) about the Terminator films:

[XKCD comic: a more accurate take on the robot uprising]

The point being: actual robot assassins will look like drones do now: relatively downmarket, slow-moving, and pretty familiar. Fast-flying jet drones are no good for what people worry about when it comes to drones: they cannot move low and slow over a target, meticulously collecting intelligence in order to build a targeting profile for a strike.

There's another bonus to drones working this way (and they will, for the foreseeable future): the ones that carry weapons are big by necessity, easy to spot, and (especially for a national military) rather trivial to shoot down. That means they really cannot fly where they are unwelcome: even top-secret stealth drones that drifted into Iranian airspace from Afghanistan, for example, have been grounded and captured for study.

So lost in all of the heated rhetoric about drones and drone strikes and the era of assassination is the very simple reality that drones only fly with consent: no one "forces" them on a country (not even Pakistan), and when they do go somewhere they are unwanted, they are swatted from the sky.

So when people talk about "killer robots," what they really mean is the drones we already fly, only with better targeting packages and the ability to complete their missions should humans be cut out of the decision loop somehow. But that reality doesn't scare people very effectively, so the activists trying to strangle autonomy research and remote-flight technology in its infancy have come up with a much more effective scare tactic: the Terminator.

[Images: four news stories about autonomous weapons, each illustrated with a Terminator still]

Granted, there is some variation on this theme, including a wonderful shout-out to RoboCop.

[Images: three more examples, including the RoboCop and Ultron variations]

Even so (I mean, really? Ultron?), the point is clear: autonomous robots are basically evil incarnate, and there is no possible military use for them other than apocalypse.

Does it matter that the "experts" in question have either no experience at all with artificial intelligence systems (such as Stephen Hawking or Elon Musk) or no experience at all with military policy (most of the computer scientists)? No.

These images are used for a very specific editorial purpose: not to accurately represent the potential, both beneficial and malign, of increased automation on certain weapons platforms, but to predispose readers to immediately associate any advancement with, essentially, the destruction of humankind.

Granted! Some people think "strong," or "general," AI (that is, a piece of software that can think and innovate on its own, and essentially self-program) is inherently destructive to humanity. But that is far from a given, and the very idea of a destructive singularity is based almost entirely on science fiction like The Matrix. Much like the Terminator, it is great as entertainment but terrible as an analytic predictor of the future.

Something like The Matrix is stupid, from a machine perspective. So, too, is the Terminator. The human biped form is one of the least efficient for stealthily targeting and killing people in warfare. And in those stories, the way the machines decide to wipe out humanity is dumb as well: they would essentially kill themselves, either by starving themselves of their own power supply or by frying themselves with the electromagnetic pulses that result from global thermonuclear war.

But what makes this use of imagery even worse, from an editorial perspective, is that it is not just a silly bit of emotional manipulation. It is actively deceitful:

[Image: news story claiming killer robots are "feasible within years, not decades"]

Put simply: the Terminator is not "feasible within years, not decades." And while drones are easy to build, they are also easy to kill, so the nuclear-weapons-style concerns about proliferation and danger are almost entirely hot air. The boring, reassuring reality is that humanoid robots are either ponderously slow, or so unstable that they are useless outside of confined settings (like a boat or a collapsed building) where they could serve a search-and-rescue function. Path detection and image recognition in other robots are similarly rudimentary: self-driving cars function, in part, because they do not need to recognize individuals, only the outline of a person and the outline of a car. And heaven help you if a humanoid robot were to aim a gun from its hand. There's no point! The idea that a modern military, whether in the West or in Russia or China, would build a deadly robot that senses the world around itself so poorly that it would kill its own side is just as science-fictional as a Terminator: fodder for an entertaining action film, but so unlikely as to be laughable.
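To see why "outline, not individual" matters, consider how commodity perception actually works. Here is a minimal sketch, assuming torchvision's COCO-pretrained Faster R-CNN (the model and the COCO label ids for "person" and "car" are real; the image path and confidence threshold are placeholders I made up). Note what the detector gives you: labeled boxes, and nothing that could distinguish one person from another.

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

COCO_PERSON, COCO_CAR = 1, 3  # standard COCO category ids

# COCO-pretrained detector; it predicts class labels, never identities.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_outlines(image_path, score_threshold=0.8):
    """Return (class_name, bounding_box) pairs for people and cars.

    Nothing here recognizes *which* person: the model only outputs
    a class label, a box outline, and a confidence score.
    """
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        (pred,) = model([image])  # the model takes a list of image tensors
    hits = []
    for label, box, score in zip(pred["labels"], pred["boxes"], pred["scores"]):
        if score >= score_threshold and label.item() in (COCO_PERSON, COCO_CAR):
            name = "person" if label.item() == COCO_PERSON else "car"
            hits.append((name, [round(v, 1) for v in box.tolist()]))
    return hits

# Hypothetical usage: prints class-level outlines only, e.g. ("car", [x1, y1, x2, y2]).
print(detect_outlines("street_scene.jpg"))
```

Picking a specific individual out of those boxes would require an entirely separate re-identification pipeline, which is exactly the capability the scare stories quietly assume into existence.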

But the singularity, you say! Ultron was strong AI, and we might be swallowed up or subsumed by our own inventions! Well, sure, we should consider whether the singularity (the point at which computer systems outside of human control shape and determine our fate) is already upon us. And there is good reason to think it is: high-frequency trading already controls global financial markets; our shopping habits, the marketing we are exposed to, and the movies we watch and like are all determined by computer programs; and so on. More to the point, all of us have become reliant on our phones, which let us "off-load" our memories, schedules, contacts, and knowledge into an easily queried format. We interact via devices, select mates through computers, and rely on computers to get our cars safely (or not) from point A to point B.

Even in the military, the filtering, selection, and interpretation of information is so automated that it is unlikely anyone could trace, from beginning to end, exactly where there was meaningful human input in identifying and acting upon information before the point at which action became inevitable. U.S. Navy vessels carry automated guns that track and fire at incoming anti-ship missiles without human input; U.S. Army bases are guarded by "defensive" weapons that automatically track and return fire on suspected mortar emplacements; high-flying blimps automatically assemble targeting packages and threat tracks for the surrounding countryside; soon, eyepieces will scan a wide range of the radiation spectrum to tell a soldier who is a threat and who is not. These processes are, in any meaningful sense, already detached from human judgment: we simply trust that the inputs our gadgets and sensors and software give to soldiers and commanders are accurate and can be acted upon.
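For a sense of how unglamorous this kind of automation is, here is a toy sketch of the closed engagement loop such point-defense systems embody. Every name, threshold, and the fire() stub below is hypothetical, invented for illustration (real systems are vastly more complex); the only point is structural: nowhere does the loop pause for a human decision.

```python
from dataclasses import dataclass

@dataclass
class Track:
    range_m: float             # distance to the contact, meters
    closing_speed_mps: float   # how fast it is approaching, meters/second
    hostile: bool              # output of an upstream (also automated) classifier

ENGAGE_RANGE_M = 2_000.0       # hypothetical engagement envelope
MIN_CLOSING_SPEED_MPS = 200.0  # ignore slow contacts (birds, friendly aircraft)

def fire(track: Track) -> None:
    # Stand-in for the actual weapon interface.
    print(f"engaging contact at {track.range_m:.0f} m")

def defense_loop(tracks: list[Track]) -> None:
    """Engage every qualifying track. Note what is absent: any human decision."""
    for track in tracks:
        if (track.hostile
                and track.range_m <= ENGAGE_RANGE_M
                and track.closing_speed_mps >= MIN_CLOSING_SPEED_MPS):
            fire(track)

defense_loop([
    Track(range_m=1_500.0, closing_speed_mps=680.0, hostile=True),   # engaged
    Track(range_m=5_000.0, closing_speed_mps=680.0, hostile=True),   # too far
    Track(range_m=1_200.0, closing_speed_mps=60.0, hostile=False),   # ignored
])
```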

But even if the singularity remains distant, who cares? Far from the Terminator image, autonomous weapons will be mundane: just the automation of yet another process at the end of a very long series of automated processes. And just as with other processes in a bureaucracy (and the military is a bureaucracy, no matter how much people try to portray it as something more malign), there are controls, norms, operating procedures, and laws governing how they will, or should, be used. There are serious issues to work out before anyone will accept a fully autonomous machine, including whether one could ever even function.

Unlike nuclear weapons, autonomous weapons are still theory; people can theorize about what they might look like (I have my own, boring idea), or they can stay stuck in fantasy and imagine Austrian-bodybuilder skeletons marching over their skulls, shooting fricking laser beams out of their hands. I mean, the great thing about an unsupportable, unproven theory is that you can project whatever you want onto it: your hopes, fears, assumptions, and red lines.

The proliferation of Terminator imagery is really just the proliferation of Luddism in the public sphere: robots are scary. Even if it is true that people are scary too (we nuked entire cities before there were robots to worry about), robots sit on the wrong end of the uncanny valley, so clearly mechanical and foreign that they invoke a special form of revulsion. It is that visceral revulsion these images reflect, not any grounded concern or even opposition to the development of advanced software. Just animal fear: an "ick" reaction, one that is currently overpowering the discourse and silencing considered discussion of the issue.

Maybe in a few years the fog will clear, and we can have a proper public debate about what this all means.

Joshua Foust used to be a foreign policy maven. Now he helps organizations communicate strategically and build audiences.