A World Of Infinite Paper Clips

An article in the Sydney Morning Herald raises a common apocalyptic thought experiment about artificial intelligence and machines that is worth considering:

Of all the reasons robots might rise up and destroy humanity, making paper clips is not an obvious one. But the popular theory goes that an artificially intelligent machine programmed to produce as many paper clips as possible, might one day decide to do away with its makers, lest they try to stop it from achieving its aim. Ultimately, the entire planet could be stripped of whatever resources the relentless robot needs to build the biggest pile of paper clips imaginable – not that anyone would be around to use them.

Never mind an Arnold Schwarzenegger-style Terminator extinguishing the human race: Stationery could apparently bring on the robot-led apocalypse.

Thinking this all the way through, you can see that it is really a repackaging of the “grey goo” scenario: if you teach a robot to make a thing, along with the ability to make more of itself, it will eventually destroy everything around it. Variations exist in science fiction literature (my favorite is the Greenfly in Alastair Reynolds’s Revelation Space series, which ends up destroying the universe), but it also exists as a sort of Rachel Carson-like warning about technology.

Of course, all forms of the grey goo scenario are overheated rubbish. From a logical, material, and energy perspective, they are physically and theoretically impossible.

Logically, a self-replicating machine, even one meant to manufacture paper clips rather than just copies of itself, will not be deployed by anyone, ever. The fear of a runaway manufacturing process is a self-defeating prophecy: any conceivable machine that could theoretically escape the bounds of its programming in that way will have some sort of kill switch built into it, and if worst comes to worst, we have things like electromagnetic pulses to simply disable anything electronic and thus save the world from being gobbled up by an infinity machine.

Materially, there’s only so much metal in the world. Sure, a smart enough machine might throw a temper tantrum if it cannot keep building paper clips, but if it killed all the humans, then who would mine the aluminum and steel needed to build them? Surely it does not have the skills and means to repurpose other facilities to build and operate such machines for itself. Moreover, there is only a finite amount of metal on our planet: it would certainly suck if some intelligent, hyper-capable robot hoarded all of it, but it would not end us as a species.

From an energy standpoint, escaping the material constraints is impossible. If a self-replicating machine somehow managed to escape the fundamental constraints of energy production, storage, and utilization, it would have more or less stumbled upon a method for infinite energy. That is a very big deal! We should celebrate that achievement! Likewise, if it learned how to convert other materials into metals to make more paper clips, then it would have solved elemental transmutation and literally invented alchemy. That is a very big deal! We should also celebrate that achievement!

So the entire paper clip apocalypse idea is just rubbish. It is a lazy artifact of fear mongering, based on a fundamental disrespect for the intelligence of people who know better (a group that does not seem to include the many journalists, pundits, commentators, and Silicon Valley executives who are joining this crusade against programming).

So what good will self-learning machines be?

Oxford academic Stuart Armstrong, from the Future of Humanity Institute, recently suggested that a supercomputer tasked to “prevent human suffering” could decide with lethal logic to “kill all humans” and so end our suffering altogether.

Ugh, no. Since that could not possibly happen all at once, the “lethal logic” would also have to account for the anguish and suffering of the humans who knew they were next to be murdered. It would do the opposite of “preventing human suffering.” An ostensibly smart person said this.

Also among the signatories was Toby Walsh, professor of AI at the University of NSW and the NICTA research organisation, who says “offensive autonomous weapons” will lower the threshold for waging war. “It’s a technology that’s going to allow presidents and prime ministers to think they can slip into battle without bodies coming home in bags,” he tells Fairfax Media.

“It’s not likely to make the world a better place – it’s likely to escalate conflict and to cause more damage to humans.”…

Professor Walsh says AI should be dedicated towards tackling pressing problems such as inequality and poverty, or the rising cost of healthcare. But such technology can also be used to inflict unnecessary harm. “Certainly, it would help assuage fears about killer robots if we didn’t have killer robots,” he says.

I remain deeply fascinated by how persistent these ideas are in the discourse about artificial intelligence, wrapped into some incredibly lazy derp about drones. It rests on two false assumptions: that an asymmetric confrontation fought with “autonomous” and “offensive” weapons will make war more likely and bloodier (good luck defining those terms, since none of these ostensible experts bothered to try, or even to grapple with how incredibly hard they really are to define from any perspective), and that a confrontation where both sides employ “autonomous” weapons will be bad for humans.

The first is easy to tackle: assuming we cannot meaningfully distinguish between offensive and defensive weapons (since we cannot), the case of the United States suggests that employing drones actually leads to lower-intensity conflicts with lower body counts. (This mirrors the global tendency toward less conflict affecting fewer people over time.)

The U.S. has had drones shooting missiles at Pakistan for approximately the same period of time (a hair under ten years) that it had troops deployed to Iraq. In more than 400 strikes, about 4,000 people have died, with around 20% of those killed being innocent civilians. When other forms of violence are included, closer to 50,000 people have died in conflicts in northwest Pakistan, around 45% of them civilians. During a similar timeframe in Iraq, more than 165,000 Iraqis died in violent conflict, and every researcher who has looked at the data is convinced that a huge number of dead remain unaccounted for.
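To put those rough figures side by side, here is a quick back-of-the-envelope comparison, a sketch that simply plugs in the approximate numbers cited above rather than any precise casualty data:

```python
# Rough comparison using the approximate figures cited in the paragraph above.
# These are rough public estimates, not precise casualty data.

drone_deaths = 4_000                # killed in ~400 U.S. drone strikes over ~10 years
drone_civilian_share = 0.20         # ~20% of those killed were civilians

pakistan_conflict_deaths = 50_000   # all conflict deaths in northwest Pakistan
pakistan_civilian_share = 0.45      # ~45% of those were civilians

iraq_deaths = 165_000               # Iraqi deaths from violence over a similar timeframe

drone_civilian_deaths = drone_deaths * drone_civilian_share
pakistan_civilian_deaths = pakistan_conflict_deaths * pakistan_civilian_share

print(f"Civilian deaths from drone strikes:          ~{drone_civilian_deaths:,.0f}")
print(f"Civilian deaths in the wider conflict:       ~{pakistan_civilian_deaths:,.0f}")
print(f"Drone strikes' share of all conflict deaths: {drone_deaths / pakistan_conflict_deaths:.0%}")
print(f"Iraq deaths vs. northwest Pakistan deaths:   {iraq_deaths / pakistan_conflict_deaths:.1f}x")
```

By these numbers, drone strikes account for well under a tenth of the deaths in that conflict, and an even smaller fraction of its civilian toll.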

So, when drones are used asymmetrically, they do not meaningfully increase the likelihood of war (the Pakistani government was at war in northwest Pakistan long before drones came on the scene) and do not meaningfully increase the bloodshed (far more people have died and been displaced in battles between insurgents and the Pakistani military than in drone strikes).

But what about when two forces use drones against each other? I have a hard time caring. If you send two groups of robots to do battle somewhere, then yes, we should all worry about getting out of the way, but in the long run do we really care about that as much as we would if it were soldiers and pilots doing the fighting? Imagine the landings at Incheon, only with robots fighting robots. I somehow doubt the meaning and horror of such a battle would be the same, and for good reason. Human life does matter, and if we end up outsourcing our carnage to robots we’ll probably be better off in the long run.

Then we’re left with the moral agency so many computer scientists ascribe to intelligent machines, seemingly without thinking it through. If the fear about artificial intelligence is that an unintended consequence might doom our species, would you really want to program AI with an imperative to, as Professor Walsh puts it, tackle “pressing problems such as inequality and poverty, or the rising cost of healthcare”? Those areas seem just as ripe for catastrophic unintended consequences as improving autopilot and targeting routines. The 20th century was soaked in blood by regimes that thought they could tackle inequality and poverty and healthcare using monstrous means (like the kolkhozy in early Leninism or China’s Great Leap Forward).

More to the point, why on earth wouldn’t AI be used simply to generate additional wealth for the person or organization that develops and/or deploys it? That seems like the far more likely outcome: a supercharged version of a high-frequency trading algorithm, for example, or a predictive method of patent trolling. From an optics perspective, too, I don’t see Professor Walsh making grandiose statements that tech companies and app-building are inherently anti-democratic or downright destructive of humanity in effect, even though they really are for a lot of people (especially the non-wealthy in the San Francisco Bay Area). Silicon Valley has done almost nothing toward “tackling inequality and poverty or the rising cost of healthcare,” yet few computer scientists wring their hands about what native advertising, privacy-nuking data mining, and GPS-enabled advertisements are doing to people’s lives.

Ultimately, every single one of these arguments comes back to the terribly fraught issue of the “human values” that are supposed to guide artificial intelligence. I’ve tackled before just how asinine and limited this concept really is (they really mean “secular liberal values in an industrialized, wealthy, Western context”), but it has a real-world consequence. HitchBot, the lovable hitchhiking robot, was just beheaded by unknown assailants in Philadelphia. It wasn’t a robot that literally cut the head off of an inoffensive object; it was a person. A human value to destroy, if you will.

I, for one, am terrified that artificial intelligence might adopt values that are a little too human. Destroying humanity for the thrill of it, or out of disgust for our society, is a far more likely malicious outcome than a physically improbable manufacturing accident. I would suggest instead that intelligent machines would be less likely to arbitrarily or deliberately kill humans: with great capability comes great precision, and thus less collateral death.

Really, the weird Manichean logic employed by otherwise very smart people to talk about AI is immensely disappointing. It is Malthusian logic, born of fear and technophobia rather than a realistic understanding of how the various systems embedded in our world interrelate and self-correct over time. It is like Alvin Toffler worrying in 1970 that the megacities of the future would be so densely populated that they would run out of oxygen: a fear that might have seemed prescient at the time, but that, once you think about it ever so slightly, appears increasingly unhinged and bizarre. “Use this technology for ends I deem acceptable, or you are immoral” arguments are interesting and sure as hell make for great website headlines, but they really aren’t anything more than a teenaged pout about feeling left behind by a changing world.
