Why Experts Fret

In the 16th century, the Swiss biologist Conrad Gessner, the founder of modern zoology, wanted to catalogue all the books in existence. Over an extended period he created a comprehensive index of all the books in Europe, which he titled Bibliotheca universalis. But the prospect of creating this index troubled him. Gessner railed, at length and in print, about the “confusing and harmful abundance of books” that were flooding Europe in the 1500s, and he called upon the monarchs and principalities of the land to tightly regulate and restrict the printing press.

Obviously, Gessner was overwrought about the danger of so many books being printed. But he was hardly alone: innovation and technological progress are always accompanied by fretting elites, from Socrates worrying that the invention of writing would destroy memory (“this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves”) to modern-day psychiatrists asserting that email hurts one’s IQ “more than pot.”

This sort of enlightened Luddism, or hatred of technology, is just an exaggeration of the normal old-people crankiness about new gadgets, no different from Gen Xers wondering what the hell a Snapchat is and how it’s making those goddamned kids so stupid these days compared to when people talked over AIM like normal human beings. It combines old people’s dislike of new technology with pseudo-philosophical fears about the Destruction Of Thought: a potent combination under ordinary circumstances, but practically catnip in the modern era of clickbait outrage porn.

Enter The Guardian, which, as true to form as it could possibly be, presents an “open letter” written by technologists who worry that their own inventions might be bad if the wrong people use them.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold for going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

This is an old theme on this blog (see here, here, here, here, here, here, and here, for example). Put simply, this is a hilarious Luddite view, one born of all of these researchers’ poor grasp of the military and how it approaches technology, along with a worrying ignorance of what, exactly, nuclear weapons are and how they changed the nature of warfare.

While there is a lot to unpack in this piece, it seems clear that these researchers and Very Smart People are either being incredibly disingenuous or are simply ignorant of what weapons are like and of where the boundaries of feasible artificial intelligence systems lie (both politically and technologically).

For starters, who is “describing” artificial intelligence that way? This is a 1,000-strong group of experts, and not one of them is willing to have the group decisively state the threat autonomy may pose. They have to couch it in a passive-voice reference to who-knows-whom having “described” them (not “persuaded,” not “authoritatively asserted,” not even “supported with evidence”) as a third wave (which, come on, I used to work for Alvin Toffler, and the Third Wave language, always meant to evoke a Western horror of the Penultimate from our collective memory of the Holy Trinity, is very played out), on par with nuclear weapons, which fundamentally altered our international politics but also did not destroy the planet (important point, that), because politics and policy combined to institute some common-sense and broadly agreeable global regulations on their development and use.

And that’s the thing: the “nuclear option” in security fear-mongering has become so tiresome, and so played out, that it is practically its own law now: the greater the technological sophistication of a policy challenge, the more likely it is to be compared to nuclear weapons. Doing so should automatically disqualify an argument from serious consideration, since not even very smart computers will mean we can destroy the planet a thousand times over the way we could have with global thermonuclear war. This was the case when it was merely dumb human-piloted drones (which David Remnick compared to nukes), and it is certainly the case with the mild forms of autonomy these people are referencing.

One thing that unites all of these worry-pieces about technology is how extremely difficult the remedies they propose would be to implement in reality. If we have learned anything from the negotiations over Iran’s nuclear program, it is that determining the difference between peaceful and non-peaceful uses of even highly enriched uranium can be incredibly difficult, and requires years of painstaking work to establish (and in the case of Iran it still cannot be generalized to other nuclearization challenges, because of how unique Iran’s posture is in the Middle East).

That is because all technology is dual-use, which is why technology bans never work. Not ever. Ballistic missiles also gave us the space program; model airplanes gave us drones; nuclear weapons also gave us nuclear energy; oil also gave us plastics; World War II gave us jet engines and modern computers; you can do this forever. In the modern era, artificial intelligence gives us Siri and Google Now and Cortana; it gives us the Amazon Echo; it gives us Facebook engines and recommendation algorithms, website analytics that allow for incredibly small pornography niches, autopilots and credit alerts, smartphones and self-driving cars. But it also might give us weapons.

You know, the cars bit is interesting. I recently heard someone ask: if we had known how dangerous the car would be, would we have wanted to ban it at its invention? From the start of the Global War on Terror to the end of 2013, more than 540,000 Americans died in car crashes. That is 25% more than died in World War II. Under any other circumstance, a piece of technology that killed 500,000 people in a decade would be the subject of intense, vengeful fury and tight regulation.

But cars, despite being horrifically dangerous, have not been regulated down to airbag-padded pods rolling along at less than 20 mph. And besides, considering all of the other gains we have seen since their invention, would anyone in their right mind want to ban the car? I’m sure there’s some alt-history buff who would like to think about it, but realistically no one could live a life they could possibly recognize without the car (or the plane, or the computer). Yet, despite the harm cars cause, and the great lengths to which people go to ameliorate that harm (lengths that are succeeding, since we have more people and more cars than ever but car deaths continue to fall each year), no one is seriously suggesting we regulate the car out of existence. Just as no one thinks writing, or books, or email should be regulated out of existence.

But drones and artificial intelligence! They’re just as bad as nukes, these experts warn! They are not. There is a reason that Guardian article is illustrated with a Terminator, a work of science fiction. Most of the fears about what computers and drones can do are just that: science fiction. Terminators are supposedly intelligent machines that do not have infrared sensors or guided weapons; somehow they nuked the planet but didn’t suffer from the massive electromagnetic pulses that would have fried their circuits; they go back to the 21st century (in the latest version of the movies and TV show) but somehow cannot use Google or understand that killing John Connor would create a paradox that would destroy their universe and create one in which they might never exist.

Really, the debate about A.I. (and its comparison to nuclear weapons) has not evolved, not one bit, since Dr. Strangelove, a 1960s film meant to parody nuclear warfare.

It’s also neat to think about how one would regulate artificial intelligence the way we have tried to regulate nuclear weapons. One of the reasons the nuclear treaties between the U.S. and the USSR (and our respective political blocs) were effective was widespread revulsion at the idea of destroying the planet with nuclear bombs. But equally important was the bipolar nature of the world during the Cold War: the U.S. (or really, NATO) and the USSR could realistically sit down and, between the two of them, hammer out a realistic, enforceable weapons ban that both sides could adhere to and agree with. And indeed, neither side wanted to be nuked, and while both sides fudged at the margins of these treaties, they largely worked.

It is noteworthy that not a single militarily advanced country has agreed to a universal ban on weapons autonomy. Since no one can say what such weapons actually are (no terms are defined well in the activism on this front, from the distinction between automated and autonomous weapons to what “offensive” and “defensive” mean in practice), no country wants to preemptively cripple its own technology sector. More to the point, unlike nuclear weapons, small autonomous weapons do not terrify militaries simply by their nature; militaries want to ensure they can control weapons, not set them loose and look away. Autonomous drones can do interesting things, but if they run rampant, you just need to wait a few hours for the battery or gas tank to run dry. They might fire their two Hellfire missiles, but that’s just two very small warheads, not global thermonuclear annihilation.

Plus, it is rather doubtful that either Russia or China would ever agree to give up a technological edge; if it is only the U.S. and Europe agreeing to a ban, who does that help?

Because really, this whole debate comes down to something very simple that the enlightened Luddites simply cannot grasp: there is no widespread revulsion at advanced computers. Nothing on par with nuclear weapons, at any rate. This could be due to ignorance, but I think it’s more because most people benefit from smart programs and smart gadgets, and they don’t see them as security threats the way nuclear weapons are. Even looking only at their use as weapons, there is a strong likelihood that more autonomy in targeting and firing will be better for human rights, despite the general distaste people have for the idea of a machine, and not a person, deciding to shoot. Just as computers can manufacture machines better than people can, so too can computers aim, select, and fire better than people.

Besides which, the actual Singularity, as so many people imagine it, is rather silly. Robots are really bad at doing things beyond analysis. They can barely walk over uneven terrain, much less carry out a sustained attack upon humanity. Quite unlike nuclear weapons, they are fundamentally limited in nature, and possess zero capacity to destroy the world. And because of that, there is no one on the planet (apart from a madman who wouldn’t adhere to a technology ban anyway) who would deliberately deploy a dumb war robot that cannot function and can barely recognize the environment around itself. There is too much danger of such a unit killing its own side as readily as anyone else. And no military would risk it: they just wouldn’t (which, if these experts knew anything about how militaries actually function, as opposed to their imaginary caricature, they would know).

Lastly, there is something odd about people like Steve Wozniak and Elon Musk, who owe their fortunes to creating the very advanced software and machines they now fret over, joining this crusade. It goes without saying that none of them have any background whatsoever in military politics, international relations, modern warfare, or bureaucracy. They don’t know how militaries work, especially not their own, so they fret and worry but they don’t engage. Musk in particular is bizarre: he envisions himself as the visionary who will colonize Mars at a profit, yet somehow plans to do so without the advanced robots and ultrasmart software he is opposing in this letter.

Frankly, there is a much greater danger in putting gadgets like the Amazon Echo into your home, where Amazon has an always-on microphone listening to everything you say and shout and argue and joke about, on the off chance you might present it with an opportunity to charge your credit card for, like, laundry or whatever. If the NSA were to build such a device (I believe they’re called “bugs,” and they “wiretap” people’s homes), there would be a vociferous outcry from privacy advocates (and imagine what the Chinese internal security services could do with such a dataset). But the engineers aren’t fretting about things like that, because they profit from their development. They don’t see the casually pervasive corporate surveillance network that we have all built as dangerous, because they make a ton of money off it. It is much easier to fret (in the passive voice, of course, because not even supposed experts can have agency in their fears) about something far off, even more technological, and especially (ick) military.

Marvin Minsky, who co-founded MIT’s artificial intelligence laboratory, predicted, “Once the computers got control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.” He said this in 1970. That people are still fretting about such a thing, even though it is impossible with our current understanding of technology (to say nothing of the silliness of anthropomorphizing a piece of software), should say something. And that is that this current era of shirt-tearing over computers will pass, and will eventually seem utterly silly, just like the man worrying that there are too many books, or that writing is bad for your brain. It is just noise from people seeing progress and worrying they might get left behind.
