Basics Matter in Killer Robots Debate
Last year, Human Rights Watch launched a campaign to ban the development of autonomous weapons systems, which it dysphemistically calls "killer robots." The report and the accompanying email campaign have several problems, which Greg McNeal ably lays out here: namely, that the campaign is built entirely on oversimplified fear-mongering and ignores the reality of the technology, the policies already in place, and even basics like current human fallibility.
Indeed, there is a fascinating science-fiction element to this debate that deserves a longer, separate post. Appealing to fiction and fear instead of facts has the potential to up-end the debate about the laws of armed conflict and how weapons are used, surely the opposite of what HRW would prefer.
Nonsense like the anti-robots video HRW produced is a key example. It is built on half-truths, outright falsehoods, and leaps of logic that are just silly.
Noel Sharkey, who's famous for his anti-drone rants on the BBC, is a professor of artificial intelligence and robotics at the University of Sheffield. Yet he appears unaware of the basics of active pattern recognition for understanding human behavior, of measurement, and of image-processing research: all of them crucial to the design of autonomous weapons.
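To be concrete about what "the basics" look like: recognizing a human figure in an image is no longer exotic research; it ships in free, off-the-shelf libraries. Here is a minimal sketch using OpenCV's stock HOG-plus-SVM pedestrian detector (the image file name is a placeholder I've invented for illustration). This is nowhere near weapons-grade perception, but it shows how routine elementary pattern recognition of human behavior has become.

```python
# Minimal sketch: person detection with OpenCV's built-in
# HOG + linear-SVM pedestrian detector. "street.jpg" is a
# hypothetical file name, used here purely for illustration.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("street.jpg")
if image is None:
    raise FileNotFoundError("street.jpg not found")

# Slide a detection window across the image at multiple scales.
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)

# Draw a rectangle around each detected person.
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("street_detected.jpg", image)
print(f"Detected {len(boxes)} people")
```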
Then an HRW senior researcher asserts that the X-47B, a Navy drone that can take off from and land on an aircraft carrier without human input (an incredibly difficult and complex task), will carry weapons. The U.S. Navy has said categorically, on the record, that it will not.
Lastly, in what is meant as a coup de grâce, a Nobel laureate points to her success in securing international treaties banning land mines and cluster bombs, and says the world should duplicate that success for autonomous weapons. The trouble is, the United States has signed neither the land-mine treaty nor the cluster-munitions convention; neither have Russia, China, or Israel. So what good are those treaties as a model?
This video, like the broader campaign against autonomous robots, rests on a whole lot of false fears and misleading presentations of easily googleable facts. That is a real shame, because there are serious challenges to the development and use of these weapons systems that need to be debated. This campaign, however, forecloses that discussion.
Besides which, humans aren't that great in warfare anyway. Even highly trained U.S. troops make mistakes and kill children. The police mistakenly shoot unarmed people with depressing regularity. Really: how much worse could machines, with strict rules and factory precision, actually be?