How (Not) to Regulate the Internet
In 2012, the famous computer security expert Bruce Schneier worried about the rise of what he called “security feudalism”: users placing their trust in a given vendor to safeguard their data and devices, whether through automatic updates, automatic backups, or required two-factor authentication. Echoing the old paranoid hacker ethos, he lamented that “Trust is our only option. In this system, we have no control over the security provided by our feudal lords.”
Of course, Schneier conceded, this trust isn’t all bad. “For the average user, giving up control is largely a good thing,” because these services do a better job of securing themselves than normies ever can. For people who are not inclined to tinker, which is to say 99% of users, this is a feature and not a bug. People do not want oppressive technical barriers to using a device; they want it to just work. That is why Apple phones are so popular, and it is why secure communications services are so unpopular. They’re just too hard.
Schneier’s solution? “It’s time we step in in our role as governments (both national and international) to create the regulatory environments that protect us vassals.”
Fast forward a couple of years, and we are now facing the prospect of insecure Internet of Things devices that can be hijacked to take down the internet. It is a thorny problem: these devices are built by sub-sub-subcontractors overseas, rebranded here, and assembled into appliances sold at big box stores with no real security built in. When someone exploits these weaknesses to launch a devastating denial-of-service attack, as happened recently, there is no real way to trace who might be responsible. As I wrote at the time:
“Products ship with a default username and password; they have to, in order to be usable by a normal person (there are other schemes that might work, like manufacturing unique product keys the way Microsoft does with Windows, but again those are very expensive and given how easy it is to “crack” those product keys it probably wouldn’t work anyway).”
Most of these devices are cracked within six minutes of being connected to the internet, a painfully short window in which to change the default password and lock out malware.
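To make the alternative scheme from that quote a little more concrete, here is a minimal, purely illustrative sketch (in Python, with a made-up factory secret and serial format) of how a manufacturer could derive a unique default password for each unit instead of shipping one shared credential. It is not how any particular vendor does it, and as the quote notes, it adds real manufacturing cost.

```python
import hashlib
import hmac

# Hypothetical sketch of the "unique credential per device" idea mentioned
# above, assuming the manufacturer keeps a per-product-line secret at the
# factory (ideally in an HSM, never in shipped firmware). The secret and
# the serial format here are illustrative, not any real vendor's practice.
FACTORY_SECRET = b"per-product-line-secret-kept-at-the-factory"

def default_password(serial_number: str) -> str:
    """Derive a device-unique default password from the device serial number."""
    digest = hmac.new(FACTORY_SECRET, serial_number.encode("utf-8"), hashlib.sha256)
    # Truncate to something short enough to print on the label and type in.
    return digest.hexdigest()[:12]

if __name__ == "__main__":
    # Each unit gets a different out-of-the-box password, so a botnet's
    # dictionary of shared factory defaults finds nothing to reuse.
    print(default_password("XM-2016-000123"))
```

A per-device default would blunt the shared-credential dictionaries these botnets rely on, but it does nothing about buggy firmware or exposed debug services, which is part of why the problem is so hard to regulate away.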
Reacting to this tenuous state of affairs, Schneier has again sounded his clarion call, this time in the Washington Post, for government regulation of the IoT market.
The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.
Ahh, the details. Here’s the thing: liability is a meaningless concept for IoT manufacturing. Requiring U.S.-based sellers to certify the information security of the devices they sell would introduce a galaxy of lawsuits against the industry. Can you imagine Microsoft being sued because someone got a virus? I can’t, and I would be shocked if a judge allowed such a suit to be brought forward. Similarly, can you imagine Dyn suing Philips for a malware-infected lightbulb, or XiongMai Technologies for building one without adequate security? Can you imagine Apple being held liable for people misusing their iPhones to commit crimes?
To call such a legal regime problematic is an understatement. It would essentially grind the industry to a halt. Right now, Section 230 of the Communications Decency Act says:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”
This means that an Internet Service Provider cannot be held liable for illegal activities carried out by a consumer of its service: if a person uses Comcast to access something like child pornography, it is the person accessing it who is liable, not Comcast. The provider of the service is not responsible for people criminally misusing that service.
This is a bedrock principle of both speech online and of regulatory frameworks on liability for abusive conduct online. When I was chased off Twitter some months ago by a mob of hate-trolls, I couldn’t hold Twitter itself liable for the conduct of its users. Their “right” to be dicks is protected.
Schneier’s concept of holding device manufacturers responsible when a hacker criminally misuses their products to create havoc would turn this principle on its head. It would place the onus for preventing illegal activity on the makers rather than on the criminals. There is some precedent for doing this in a limited way (people have sued cellphone manufacturers over traffic deaths, and there is a movement to remove the shield that protects gun manufacturers from being sued by shooting victims), but to pretend that such a regime would not be extraordinarily disruptive to the technology industry is simply dishonest.
The call for (international) regulation poses similar issues. It is an easy thing to call for, just as with his example of environmental regulations, but the consequences of such a call are vast. Security researchers like Schneier are routinely up in arms about how flimsy government laws on computer security already are: the poor phrasing of the Computer Fraud and Abuse Act, the inability of bureaucrats to keep pace with bleeding-edge cryptography research, and so on. And his solution is to make that reach even more pervasive by having the government regulate the security of your DVR.
It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites.
Again, as with establishing liability, to call such a thing problematic is to call the Death Star a paperweight. I strongly doubt Schneier would be comfortable with the current majorities in the House of Representatives and the Senate sitting down to draft a broadly applicable, yet constantly updateable, definition of what “security” means in an IoT appliance. If he can think of legal wording that stipulates exactly how much security is enough, and how to determine when security is insufficient, he should bring it into the open.
Lastly, Schneier relies on a bizarre logical construct in trying to narrow the scope of how international these regulations would have to be.
If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets.
This does not follow at all. The U.S. is not even remotely the largest market for cellphones: China and India have between three and four times as many cellphones as the U.S., and Brazil, Russia, and Indonesia have almost as many. Within a very short period of time, the IoT markets in those countries will look similar, especially as prices continue to drop. In what universe is a common regulatory regime on device security possible between the U.S., the BRICs, and Indonesia? And more pressingly, why would a manufacturer not create a luxury “secure” version of a device for wealthy Western markets and a cheap, insecure version for everyone else?
Think of something like the pharmaceutical industry: whereas Schneier fears the internet is becoming a life-and-death issue (yeah, not so much, not for a while yet), here is an actual life-and-death issue. Over a million people die every year from counterfeit drugs, but that hasn’t stopped their spread, because genuine drugs are so expensive. How would a narrow regulatory regime do anything about the global problem of these devices being conscripted into massive botnets?
The reality is, government regulation is not the answer. You cannot meaningfully regulate sound device security when you can’t even do it for web browsers or operating systems. Moreover, an international regime is not only impractical but, at a basic level, impossible to enforce or verify. The solution to preventing future massive DDoS attacks on the internet’s backbone is going to come from somewhere else; the cat is already out of the bag with consumer devices.