An Expansion of the Science Fictions of Dronephobia
Over at Beacon, I discussed why so many opponents of drones segue into science fiction to make their case — a habit that is actually poisonous to reasoned debate about kinetic policies and how they can be reformed to better safeguard civilian lives.
When they were first invented, warplanes were terrifying, not just in the sense we’re used to (i.e. the fear of a bomb being dropped) but in the very concept of being able to rise above one’s enemy and strike without recourse. H.G. Wells, renowned for his sci-fi and speculative fiction, wrote the seminal novel of the dread air war, 1908’s The War in the Air. All the themes of warplane fear are on display there: the incomprehensible weapons, unending waves of devastating strategic bombing, and so on. And just like the other fears of deadly airships or German air-terror in New York, it was mostly untrue.
In the modern context, opposition to specific technologies of war usually takes the form of an appeal to one of two things — a previous, incredibly brutal conflict (World War 2, Vietnam, never for some reason Korea), or science fiction. Yet the laser-like focus on technology, whether it’s airships or drones, misses the far more important element in play — the bureaucracies, politics, and policies that make up the decision to wage a war and how to best fight it.
There’s more over at Beacon, and I hope you’ll consider subscribing to my writing there. But I want to re-emphasize how important this point is: the hand-waving, fuzzy definitions, arguments from fallacy, and shortcut allusions that have by and large taken over the anti-drone side of the debate are really poisonous to any reasoned discourse.
In this post, I discuss a blogger, a physicist doing his postdoc on global security and a member of an international academic association opposed to drone research, who took this article I wrote for Defense One and concluded from it that I was endorsing drone autonomy as the only way to safeguard drones from hacking. This is the section that blogger took issue with:
Though the pace of drone strikes has slowed down — only 21 have struck Pakistan in 2013, versus 122 in 2010, according to the New America Foundation — unmanned vehicles remain a staple of the American counterinsurgency toolkit. But drones have built-in vulnerabilities that military planners have not yet grappled with. Last year, for example, an aerospace engineer told the House Homeland Security Committee that with some inexpensive equipment he could hack into a drone and hijack it to perform some rogue purpose.
Drones have been hackable for years. In 2009, defense officials told reporters that Iranian-backed militias had used $26 worth of off-the-shelf software to intercept the video feeds of drones flying over Iraq. And in 2011, it was reported that a virus had infected some drone control systems at Creech Air Force Base in Nevada, raising concerns about the security of unmanned aircraft.
It may be that the only way to make a drone truly secure is to allow it to make its own decisions without a human controller: if it receives no outside commands, then it cannot be hacked (at least not as easily). And that’s where LARs might be the most attractive.
Based on this, the blogger goes on to:
- Call me “pro-Terminator,” as if I am really excited by the idea of unstoppable murder-robots, along with a half dozen other allusions to science fiction;
- Complain that several other publications found the article and its viewpoints fascinating and re-posted them;
- Accuse me of lazy writing because I interviewed several experts on ethics (including some from within his own academic association) and policymaking;
- Accuse me of not having an argument or having unclear views (which contrasts with calling me “pro-Terminator”);
- Declare drones unhackable anyway, on the grounds that military radios are secure and good programming means they cannot be compromised.
That last point is arguably the easiest to disprove — one of the biggest objections to drone autonomy, from the start, has been that the military often rushes incomplete or poorly designed machines into wartime use. The faith that those same institutions would nevertheless engineer remote-controlled machines that cannot be hacked or disrupted is, at a fundamental level, laughable (and ignorant of the real vulnerabilities built into drone technology and command-and-control systems).
But beyond that, the bit about hacking was a tiny aside in a bigger article about how these platforms can be used, and how they should NOT be used, in a conflict setting. It was the opposite of a pro-Terminator article, since the last half of it was devoted to the downsides and dangers of employing them outside of narrow circumstances. Moreover, the blogger kept coming back to science fiction, not just in the title of his blogpost but repeatedly throughout it, to justify his argument and to smear my character or my intentions.
So I wrote a post about how science fiction is a stupid distraction from the real issues at play. Which it is.
It’s interesting: dronephobes (the term is literally descriptive, as they cannot see a legitimate use for remotely piloted airplanes) routinely employ science fiction to discuss a military weapon in the abstract terms of fearing the future, rather than grappling with the policy governing its use today. As a result, what should be a useful corrective to a policy all too prone to overreach is instead self-marginalized by ludicrous hyperbole and what amounts to shrieking into the ether.
It’s a damned shame. Targeted killing needs good, principled opponents but far too many are little more than credentialed nutbags.
Update: a stroll through some recent articles about drones brought this September piece in the Washington Post to my attention. It is surely germane:
Still, U.S. officials and aviation experts acknowledge that unmanned aircraft have a weak spot: the satellite links and remote controls that enable pilots to fly them from thousands of miles away.
In July 2010, a U.S. spy agency intercepted electronic communications indicating that senior al-Qaeda leaders had distributed a “strategy guide” to operatives around the world advising them how “to anticipate and defeat” unmanned aircraft. The Defense Intelligence Agency (DIA) reported that al-Qaeda was sponsoring simultaneous research projects to develop jammers to interfere with GPS signals and infrared tags that drone operators rely on to pinpoint missile targets.
Anyway.