A World Of Infinite Paper Clips

An article in the Sydney Morning Herald raises a common apocalyptic thought experiment about artificial intelligence and machines that is worth considering:

Of all the reasons robots might rise up and destroy humanity, making paper clips is not an obvious one. But the popular theory goes that an artificially intelligent machine programmed to produce as many paper clips as possible, might one day decide to do away with its makers, lest they try to stop it from achieving its aim. Ultimately, the entire planet could be stripped of whatever resources the relentless robot needs to build the biggest pile of paper clips imaginable – not that anyone would be around to use them.

Never mind an Arnold Schwarzenegger-style Terminator extinguishing the human race: Stationery could apparently bring on the robot-led apocalypse.

Thinking this all the way through, you can see that it is really a repackaging of the “grey goo” scenario: if you teach a robot to make a thing, along with the ability to make more of itself, it will eventually destroy everything around it. Variations exist in science fiction literature (my favorite is the Greenfly in Alastair Reynolds’s Revelation Space series, which ends up destroying the universe), but it also exists as a sort of Rachel Carson-like warning about technology.

Of course, all forms of the grey goo scenario are overheated rubbish. From a logical, material, and energy perspective, they are physically and theoretically impossible.

Logically, a self-replicating machine, even one meant to manufacture paper clips rather than just copies of itself, will not be deployed by anyone, ever. The fears over a runaway manufacturing process are a self-defeating prophecy: any conceivable machine that could theoretically escape the bounds of its programming in that way will have some sort of kill switch built into it. And if worst comes to worst, we have things like electromagnetic pulses to simply disable anything electronic, and thus save the planet from being gobbled up by an infinity machine.

Materially, there is only so much metal in the world. Sure, a smart enough machine might throw a temper tantrum if it cannot keep building paper clips, but if it killed all the humans, who would mine the aluminum and steel needed to build them? It surely does not have the skills and means to repurpose other facilities to build and operate such machines for itself. And while it would suck if some intelligent, hyper-capable robot hoarded every last bit of the planet’s finite supply of metal, we really would not end as a species.

From an energy standpoint, escaping the material constraints is impossible. If a self-replicating machine somehow escaped the fundamental constraints of energy production, storage, and utilization, it would have more or less stumbled upon a method for infinite energy. That is a very big deal! We should celebrate that achievement! Likewise, if it learned how to convert other materials into metal to make more paper clips, it would have solved the problem of matter-to-energy conversion and literally invented alchemy. That is a very big deal! We should also celebrate that achievement!

So the entire paper clip apocalypse idea is just rubbish. It is a lazy artifact of fear-mongering, based on a fundamental disrespect for the intelligence of people who know better (a group that does not seem to include the many journalists, pundits, commentators, and Silicon Valley executives who are joining this crusade against programming).

So what good will self-learning machines be?

Oxford academic Stuart Armstrong, from the Future of Humanity Institute, recently suggested that a supercomputer tasked to “prevent human suffering” could decide with lethal logic to “kill all humans” and so end our suffering altogether.

Ugh, no. Since that could not possibly happen all at once, the “lethal logic” would also have to account for the anguish and suffering of humans who knew they were next to be murdered. It would do the opposite of “preventing human suffering.” An ostensibly smart person said this.

Also among the signatories was Toby Walsh, professor of AI at the University of NSW and the NICTA research organisation, who says “offensive autonomous weapons” will lower the threshold for waging war. “It’s a technology that’s going to allow presidents and prime ministers to think they can slip into battle without bodies coming home in bags,” he tells Fairfax Media.

“It’s not likely to make the world a better place – it’s likely to escalate conflict and to cause more damage to humans.”…

Professor Walsh says AI should be dedicated towards tackling pressing problems such as inequality and poverty, or the rising cost of healthcare. But such technology can also be used to inflict unnecessary harm. “Certainly, it would help assuage fears about killer robots if we didn’t have killer robots,” he says.

I remain deeply fascinated by how persistent these ideas are in the discourse about artificial intelligence, wrapped into some incredibly lazy derp about drones. It is based on two false assumptions: that an asymmetric confrontation involving “autonomous” and “offensive” weapons (good luck defining those terms, since none of these ostensible experts bothered to try, or even to grapple with how incredibly hard they really are to define from any perspective) will lead to more war and more bloodshed, and that a confrontation where both sides employ “autonomous” weapons will be bad for humans.

The first is easy to tackle: assuming we cannot meaningfully distinguish between offensive and defensive weapons (since we cannot), the case of the United States suggests that employing drones actually leads to lower-intensity conflicts with lower body counts. (This mirrors the global trend toward fewer conflicts affecting fewer people over time.)

The U.S. has had drones firing missiles at targets in Pakistan for approximately the same period of time (a hair under ten years) that it had troops deployed in Iraq. In more than 400 strikes, about 4,000 people have died, with around 20% of those killed being innocent civilians. When other forms of violence are included, closer to 50,000 people have died in conflicts in northwest Pakistan, around 45% of whom were civilians. During a similar timeframe in Iraq, more than 165,000 Iraqis died in violent conflict, and every researcher who has looked at the data is convinced that a huge number of the dead remain unaccounted for.
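To make the comparison concrete, here is a rough back-of-the-envelope calculation, sketched in Python; the inputs are only the approximate figures cited above, which are estimates rather than precise casualty data:

```python
# Rough comparison of the approximate casualty figures cited above.
# All inputs are the text's estimates, not authoritative data.

drone_strike_deaths = 4_000            # ~4,000 killed in more than 400 drone strikes
drone_civilian_share = 0.20            # ~20% of those killed were civilians

nw_pakistan_conflict_deaths = 50_000   # ~50,000 killed in northwest Pakistan overall
conflict_civilian_share = 0.45         # ~45% of those were civilians

iraq_war_deaths = 165_000              # 165,000+ Iraqis killed over a similar timeframe

drone_civilian_deaths = drone_strike_deaths * drone_civilian_share                # ~800
conflict_civilian_deaths = nw_pakistan_conflict_deaths * conflict_civilian_share  # ~22,500

print(f"Civilian deaths from drone strikes:      ~{drone_civilian_deaths:,.0f}")
print(f"Civilian deaths in the wider conflict:   ~{conflict_civilian_deaths:,.0f}")
print(f"Drone strikes' share of civilian deaths: ~{drone_civilian_deaths / conflict_civilian_deaths:.0%}")
print(f"Iraqi deaths over a similar timeframe:    {iraq_war_deaths:,}+")
```

On these numbers, drone strikes account for only a few percent of the civilian deaths in northwest Pakistan, and the overall toll there is itself a fraction of Iraq’s.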

So, when drones are used asymmetrically, they do not meaningfully result in a greater likelihood of war (the Pakistani government was at war in northwest Pakistan long before drones came on the scene), and they do not meaningfully result in more bloodshed (far more people have died and been displaced in battles between insurgents and the Pakistani military than in drone strikes).

But what about when two forces use drones against each other? I have a hard time caring. If you send two groups of robots to do battle somewhere, then yes, we should all worry about getting out of the way, but in the long run do we really care about that as much as we would if it were soldiers and pilots doing the fighting? Imagine the landings at Incheon, only with robots fighting robots. I somehow doubt the meaning and horror of such a battle would be the same, and for good reason. Human life does matter, and if we end up outsourcing our carnage to robots we will probably be better off in the long run.

Then we’re left with the moral agency so many computer scientists ascribe to intelligent machines, seemingly without thinking it through. If the fear about artificial intelligence is that an unintended consequence might doom our species, would you really want to program AI with an imperative to tackle, as Professor Walsh puts it, “pressing problems such as inequality and poverty, or the rising cost of healthcare”? Those areas seem just as ripe for catastrophic unintended consequences as improving autopilot and targeting routines. The 20th century was soaked in blood because of regimes that thought they could tackle inequality and poverty and healthcare using monstrous means (like the kolkhozy of the early Soviet period or China’s Great Leap Forward).

More to the point, why on earth wouldn’t AI be used simply to generate additional wealth for the person or organization that develops and/or deploys it? That seems like the far more likely outcome: a supercharged version of a high-frequency trading algorithm, for example, or a predictive method of patent trolling. From an optics perspective, too, I don’t see Professor Walsh making grandiose statements that tech companies and their apps are inherently anti-democratic or downright destructive of humanity, even though in effect they really are for a lot of people (especially the non-wealthy in the San Francisco Bay Area). Silicon Valley has done almost nothing for “tackling inequality and poverty or the rising cost of healthcare,” but few computer scientists wring their hands about what native advertising, privacy-nuking data mining, and GPS-enabled advertisements are doing to people’s lives.

Ultimately, every single one of these arguments comes back to the terribly fraught issue of the “human values” that are supposed to guide artificial intelligence. I’ve tackled before just how asinine and limited this concept really is (they really mean “secular liberal values in an industrialized, wealthy, Western context”), but it has a real-world consequence. HitchBot, the lovable hitchhiking robot, was just beheaded by unknown assailants in Philadelphia. It wasn’t a robot that literally cut the head off of an inoffensive object; it was a person. A human value to destroy, if you will.

I, for one, am terrified that artificial intelligence might adopt values that are a little too human. Destroying humanity for the thrill of it, or out of disgust for our society, is a far more plausible malicious outcome than a physically improbable manufacturing accident. I would instead suggest that intelligent machines would be less likely to arbitrarily or deliberately kill humanity: with great capability comes great precision, and thus less collateral death.

Really, the weird Manichean logic otherwise very smart people employ to talk about AI is immensely disappointing. It is Malthusian logic, born of fear and technophobia rather than a realistic understanding of how the various systems embedded in our world interrelate and self-correct over time. It is like Alvin Toffler worrying in 1970 that the megacities of the future would be so densely populated that they would run out of oxygen: a fear that might have seemed prescient at the time but, once you think about it even slightly, appears increasingly unhinged and bizarre. “Use this technology for ends I deem acceptable, or you are immoral” arguments are interesting and sure as hell make for great website headlines, but they really aren’t anything more than a teenage pout about feeling left behind by a changing world.

Joshua Foust used to be a foreign policy maven. Now he helps organizations communicate strategically and build audiences.