Why Experts Fret
In the 16th century, the Swiss biologist Conrad Gessner, the founder of modern zoology, wanted to catalogue all the books in existence. Over many years, he created a comprehensive index of all the books in Europe, which he titled Bibliotheca universalis. But the prospect of creating this index troubled him. Gessner railed, at length and in print, about the “confusing and harmful abundance of books” that were flooding Europe in the 1500s, and he called upon the monarchs and principalities of the land to tightly regulate and restrict the printing press.
Obviously, Gessner was overwrought about the danger posed by a lot of printed books. But he was hardly alone: innovation and technological progress are always accompanied by fretting elites, from Socrates worrying that the invention of writing would destroy memory (“this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves”) to modern-day psychiatrists asserting that email hurts one’s IQ “more than pot.”
This sort of enlightened Luddism, or hatred of technology, is just an exaggeration of the normal sort of old-people crankiness about new gadgets — no different from Gen Xers wondering what the hell a Snapchat is and how it’s making those goddamned kids so stupid these days compared to when people talked over AIM like normal human beings. It combines old people’s dislike of new technology with pseudo-philosophical fears about the Destruction Of Thought: a potent combination under ordinary circumstances, but practically catnip in the modern era of clickbait outrage porn.
Enter The Guardian, which, true to form, presents an “open letter” written by technologists who worry that their own inventions might be bad if the wrong people use them.
The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold for going to battle and result in greater loss of human life.
Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.
“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.
This is an old theme on this blog (see here, here, here, here, here, here, and here, for example). Put simply, this is a hilarious Luddite view, one born of these researchers’ poor grasp of the military and how it approaches technology, along with a worrying ignorance of what, exactly, nuclear weapons are and how they changed the nature of warfare.
While there is a lot to unpack in this piece, it seems clear that these researchers and Very Smart People are either being incredibly disingenuous or simply ignorant of what weapons are like and of where the boundaries of feasible artificial intelligence lie (both politically and technologically).
For starters, who is “describing” autonomous weapons that way? This is a 1,000-strong group of experts, and not one of them is willing to have the group decisively state the threat autonomy may pose. They have to couch it in a passive-voice reference to who-knows-whom having “described” these weapons (not “persuaded,” not “authoritatively asserted,” not even “supported with evidence”) as a third wave (which, come on: I used to work for Alvin Toffler, and the Third Wave language, always meant to invoke a Western horror of the Penultimate from our collective memory of the Holy Trinity, is very played out), on par with nuclear weapons. Nuclear weapons fundamentally altered our international politics, but they also did not destroy the planet (important point, that), because politics and policy combined to institute some common-sense and broadly agreeable global regulations on their development and use.
And that’s the thing: the “nuclear option” in security fearmongering has become so tiresome, and so played out, that it is practically its own law now: the greater the technological sophistication of a policy challenge, the more likely it is to be compared to nuclear weapons. Such a comparison should automatically disqualify an argument from serious consideration, since not even very smart computers will let us destroy the planet a thousand times over the way global thermonuclear war could have. This was the case when the subject was merely dumb human-piloted drones (which David Remnick compared to nukes), and it’s certainly the case with the mild forms of autonomy that these people are referencing.
One thing that unites all of these worry-pieces about technology is how extremely difficult the controls they call for are to implement in reality. If we’ve learned anything from the negotiations over Iran’s nuclear industry, it is that determining the difference between peaceful and non-peaceful uses of even highly enriched uranium can be incredibly difficult and requires years of painstaking work (and in Iran’s case the result still cannot be generalized to other nuclearization challenges, because Iran’s posture in the Middle East is so unique).
That is because all technology is dual-use, which is why technology bans never work. Not ever. Ballistic missiles also gave us the space program; model airplanes gave us drones; nuclear weapons also gave us nuclear energy; oil also gave us plastics; World War II gave us jet engines and modern computers; you can do this forever. In the modern era, artificial intelligence gives us Siri and Google Now and Cortana; it gives us the Amazon Echo; it gives us Facebook engines and recommendation algorithms, website analytics that allow for incredibly small pornography niches, autopilots and credit alerts, smartphones and self-driving cars. But it also might give us weapons.
You know, the cars bit is interesting. I recently heard someone ask: if we had known how dangerous the car would be, would we have wanted to ban it at its invention? From the start of the Global War on Terror to the end of 2013, more than 540,000 Americans died in car crashes. That is 25% more than died in World War II. Under any other circumstance, a piece of technology that killed 500,000 people in a decade would be the subject of intense, vengeful fury and tight regulation.
But cars, despite being horrifically dangerous, are not swaddled in airbags and restricted to rolling along at less than 20 mph. Besides which, considering all of the other gains we have seen since their invention, would anyone in their right mind want to ban the car? I’m sure there’s some alt-history buff who would like to think about it, but realistically no one could live a life they could possibly recognize without a car (or a plane, or a computer). Yet, despite the harm cars cause, and despite the great lengths people go to in order to ameliorate that harm (efforts which have been successful: we have more people and more cars than ever, yet car deaths continue to fall each year), no one is seriously suggesting we regulate the car out of existence. Just as no one thinks writing, or books, or email should be regulated out of existence.
But drones and artificial intelligence! They’re just as bad as nukes, these experts warn! They are not. There is a reason that Guardian article is illustrated by a Terminator — a work of science fiction. Most of the fears about what computers and drones can do are just that: science fiction. Terminators are supposedly intelligent machines that do not have infrared sensors or guided weapons; somehow they nuked the planet but didn’t suffer from the massive electromagnetic pulses that would have fried their circuits; they go back to the 21st century (in the latest version of the movies and TV show) but somehow cannot use Google or understand that killing John Connor would create a paradox that would destroy their universe and create one in which they might never exist.
Really, the debate about A.I. (and its comparison to nuclear weapons) has not evolved — not one bit — since Dr. Strangelove, the 1964 film that parodied nuclear warfare.
It’s also neat to think about how one would regulate artificial intelligence the way we have tried to regulate nuclear weapons. One of the reasons the nuclear treaties between the U.S. and the USSR (and our respective political blocs) were effective was the widespread revulsion at the idea of destroying the planet with nuclear bombs. But equally important was the bipolar nature of the world during the Cold War — the U.S. (or really, NATO) and the USSR could realistically sit down and, between the two of them, hammer out a realistic, enforceable weapons ban that both sides could accept and adhere to. And indeed, neither side wanted to be nuked, and while both sides fudged at the margins of these treaties, the treaties largely worked.
It is noteworthy that not a single militarily advanced country has agreed to a universal ban on weapons autonomy. Since no one can say what autonomous weapons actually are (no terms are well defined in the activism on this front, from the distinction between automated weapons and autonomous weapons to what “offensive” and “defensive” mean in practice), no one wants to preemptively cripple their own technology sector. More to the point, unlike nuclear weapons, small autonomous weapons do not terrify militaries simply by their nature — militaries want to ensure they can control weapons, not set them loose and look away. Autonomous drones can do interesting things, but if they run rampant, you just need to wait a few hours for the battery or gas tank to run dry. They might fire their two Hellfire missiles, but that’s just two very small warheads — not global thermonuclear annihilation.
Plus, it is rather doubtful that either Russia or China would ever agree to give up a technological edge; if it is only the U.S. and Europe agreeing to a ban, who does that help?
Because really, this whole debate comes down to something very simple that the enlightened Luddites simply cannot grasp: there is no widespread revulsion at advanced computers. Nothing on par with nuclear weapons, at any rate. This could be due to ignorance, but I think it’s more because most people benefit from smart programs and smart gadgets, and they don’t see them as security threats the way nuclear weapons are. Even looking only at their use as weapons, there is a strong likelihood that more autonomy in targeting and firing will be better for human rights, despite the general distaste people have for the idea of a machine, and not a person, deciding to shoot. Just as computers can manufacture machines better than people can, so too can computers aim, select, and fire better than people.
Besides which, the actual Singularity as so many people imagine it is rather silly. Robots are really bad at doing things beyond analysis. They can barely walk over uneven terrain, much less carry out a sustained attack upon humanity. Quite unlike nuclear weapons, they are fundamentally limited in nature and possess zero capacity to destroy the world. And because of that, no one on the planet (apart from a madman who wouldn’t adhere to a technology ban anyway) would deliberately deploy a dumb war robot that cannot function and can barely recognize the environment around itself. Such a unit would pose as much danger to its own side as to anyone else. And no military would risk it: they just wouldn’t (which these experts would know, if they understood anything about how militaries actually function beyond their imaginary caricature).
Lastly, there is something odd about people like Steve Wozniak and Elon Musk, who owe their fortunes to creating the very advanced software and machines that they now fret over, joining this crusade. It goes without saying that none of them have any background whatsoever in military politics, international relations, modern warfare, or bureaucracy. They don’t know how militaries work, especially not their own, so they fret and worry but they don’t engage. Musk is especially bizarre: he envisions himself as the visionary who will colonize Mars at a profit, yet somehow plans to do so without the advanced robots and ultrasmart software he opposes in this letter.
Frankly, there is a much greater danger in placing gadgets like the Amazon Echo in your home — where Amazon has an always-on microphone, listening to everything you say and shout and argue and joke about on the off chance you might present it with an opportunity to charge your credit card for, like, laundry or whatever. If the NSA were to build such a device — I believe they’re called “bugs” that “wiretap” people’s homes — there would be a vociferous outcry from privacy advocates (and imagine what the Chinese internal security services could do with such a dataset). But the engineers aren’t fretting about things like that, because they profit from their development. They don’t see the casually pervasive corporate surveillance network that we have all built as dangerous, because they make a ton of money off of it. It is much easier to fret (in the passive voice, of course, because not even supposed experts can have agency in their fears) about something far off, even more technological, and especially (ick) military.
Marvin Minsky, who co-founded MIT’s artificial intelligence laboratory, predicted, “Once the computers got control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.” He said this in 1970. That people are still fretting about such a thing, even though it is impossible with our current understanding of technology (especially given how it anthropomorphizes a piece of software), should tell us something: this current era of shirt-tearing over computers will pass, and will eventually seem utterly silly — just like the man worrying that there were too many books, or that writing was bad for your brain. It is just noise: people seeing progress and worrying they might get left behind.