An Unavoidable Wave of Internet Regulation
On Twitter, the blue “verified” checkmark next to a person’s account is meant to be a symbol of trust. It says that the service has verified the person’s identity because they are of public interest and at risk of impersonation. While Twitter used to have staff individually verify accounts in an opaque process, last year they opened up the process to public application. They go to great pains to note that a verification does not imply endorsement by the company.
At the same time, Twitter has made a token effort to address the torrent of abusive content on its site: the endless waves of trolling, propaganda, misinformation, hate speech, and calls to violence. It has become the subject of personal statements by Twitter’s most senior leadership, and when it comes to Islamist extremism, such as al Qaeda and ISIS, they have expended effort to automatically block accounts that promote extremist content.
So it was a bit confusing when Twitter approved the verification of Jason Kessler, the organizer of the Charlottesville Nazi rally that resulted in one death and dozens of injuries when one of the marchers drove his car, ISIS-style, into a crowd of anti-fascist protesters. Afterward, Kessler mocked the victims in nasty, personal ways, and has been arrested repeatedly (including for doxing activists in an effort to subject them to violent reprisal). Verifying such a person would seem to cut against Twitter’s stated goal of stamping out hateful, abusive content. But it has bigger implications than that.
The digital media companies that dominate our political discourse and news consumption are in the midst of an existential debate about themselves, but they don’t seem to realize it. With the recent contentious Congressional hearings featuring the general counsels of the three largest digital media companies — more on that term in a minute — it seems clear that the Internet’s largest firms are about to face a wave of unwanted government regulation. Yet the firms themselves show no urgency to police their own platforms in order to avoid it.
What is going on? Let’s start with my decision to refer to Twitter, Facebook, and Google as digital media companies, rather than as social media companies. It is a designation that the companies’ executives vehemently reject (see Facebook COO Sheryl Sandberg’s recent rejection of the term during a tour of Europe).
Yet it seems everyone else thinks of them as digital media: strong majorities of both Twitter and Facebook users say the services are their primary means of reading the news, and the President uses Twitter as a press release machine. Google even operates a News service, which acts as a customizable front page for people who plug in certain preferences, topics, and locations. To differentiate these firms from more traditional media companies simply because they produce little of their own content requires too many leaps of logic — especially considering the moves all three have taken to curate, edit, and produce news summaries and breaking stories.
I suspect one reason none of these firms want to be called media companies is because that would subject them to certain types of scrutiny and regulation — such as campaign advertising laws, which would require them to be far more transparent about the groups spending money on political advertisements than they have been willing to be so far. Put simply, one of the ways these firms remain effective at serving ads and mining consumer data is by avoiding that sort of transparency.
It is easy to measure how someone reacts to a piece of content when they don’t know where it’s coming from, which makes it easier to sell valuable advertising services to high-spending clients. The Trump campaign, for example, made use of this exact type of service, and hosted Facebook employees at its campaign headquarters to drill down into microtargeted ads. This is one more way the social media companies are really digital media companies: they actively sell their services as advertising and broadcast platforms, the same way a TV station or newspaper would. Regulation and transparency would gum up this very lucrative process.
Despite this clear threat to the corporate bottom line, these companies nevertheless seem reluctant to engage in the difficult self-policing that would be necessary to avoid such an outcome. Their services are still hotbeds of misinformation, propaganda, and lies. Moreover, whether performed by humans or by algorithms, they remain frustratingly inconsistent in distinguishing fraudulent stories from actual news and real facts. Just this weekend, as a gunman murdered 26 people at a church in Texas, Google’s search results amplified maliciously false Twitter memes about the alleged shooter.
This is not what’s supposed to happen. Google announced months ago that it has cracked down on “billions” of pieces of fake news, yet while it has laudably disrupted the ad revenue for websites that peddle conspiracy theories, it still cannot seem to tamp down on outrageously false stories bubbling to the top of its algorithmic searches. Similarly, Twitter is pondering a “flag” for users to identify misinformation, though much like its lamentably inadequate anti-harassment tools, it’s difficult to see how this will do anything other than exacerbate the tribalism that already makes people vulnerable to fake news in the first place. (Disclosure: I permanently left Twitter after a coordinated harassment campaign left me unable to cope with the volume of abusive messages; at the time, Twitter even ruled that posts I had reported for containing violent imagery of me did not violate their terms of service).
There is no question that fake news, propaganda, misinformation, and even coordinated abuse campaigns are difficult challenges to address, especially in the context of supporting a platform for free speech. But a flood of hate speech does not encourage the free flow of ideas — if anything, it prevents that exchange by raising the cost of public engagement so high that most people simply opt out of the conversation. And if the digital media companies that currently dominate the discussion were not so essential to our politics, it probably would not be a huge deal when people call to regulate the industry. But Facebook recently revealed that fake content generated by the Russian government reached 126,000,000 Americans during the election — a degree of reach that is utterly unprecedented in other forms of media. This was also a dramatic revision of last month’s estimate of 10,000,000 people, which was itself a revision of CEO Mark Zuckerberg’s post-election dismissal of the entire issue as “absurd.”
YouTube, which is owned by Google, is experimenting with one way to deal with the problem. It directs searchers away from obvious conspiracy theories and fake news, and it has begun to limit the potential ad revenue from channels that peddle hateful and abusive messages. This has obviously upset some YouTube creators, especially because it has been applied to sexual minorities in addition to obviously abusive content, but it is at least one method for handling the problem — and one that YouTube itself will undoubtedly refine as it figures out the right balance to strike between being a totally amoral, neutral platform and actually taking an interest in the social impact of its products.
The other large social/digital media companies are transitioning to this view, but probably not quickly enough. When even publications like The Economist are asking whether these firms are bad for democracy, it is a leading indicator that a serious backlash is underway. Despite the sober tone adopted by the general counsels in Congress, the big three companies themselves remain strangely blithe about the real-world effects of their platforms… and about how that real world is on the verge of yanking their leash back, hard.
Over the summer, Germany passed a harsh law that imposes fines of up to €50 million on social media companies that don’t remove hate speech within 24 hours. It was the result of years of Facebook inaction against individuals who engage in hate speech on the site, such as falsely accusing innocent people of terrorism. Facebook is in hot water elsewhere as well for refusing to police the content on its servers more thoroughly; in India, for example, a lawsuit alleges the service refused to take down graphic images of child abuse for over a year.
It isn’t just Facebook, either. Twitter has the capacity to filter Nazis out of its service in Germany (one assumes it does not want to pay such a heavy fine for each account posting swastikas), in an effort similar to how it removes ISIS and al Qaeda accounts. Yet it refuses to apply that capacity anywhere else on its service, a point of growing contention even where it cannot be compelled by law to do so, as it can be in Europe.
Whether Germany’s legislation becomes a model for other countries remains to be seen; it illustrates the downside of firms declining to self-police until a government forces them to. While such a law would certainly run afoul of the US First Amendment, it might provide a model for other, more abusive governments to use as justification to crack down on political and social dissent. Everyone loses in that scenario, especially the very free speech advocates these companies say they value. Moreover, this legislation specifically targets hate speech, not fake news — though there are common threads between the two problems, they are distinct challenges.
Still, at this point, after years of foot-dragging over racist and sexist abuse, and months of foot-dragging over fake news, it seems like regulation is inevitable. For whatever reason, none of these companies began to take concrete action about the problem until large numbers of people essentially forced governments to intervene, and some major reworking of how these companies operate seems unavoidable.
The recent moves to crack down on the avalanche of garbage are worth praising — Facebook, in particular, was much more proactive at taking down conspiracies and foreign-financed advertising during the French and German elections than it was during the UK Brexit vote or the US election last year. Yet, the decision to treat these companies like the media companies they are, rather than the neutral platforms they wish to be, is almost certain to happen. That means that social media is in for some disruption of its own pretty soon.
This is a real shame, because Facebook, Twitter, and Google all provide extremely valuable services for public life today. Even apart from its search functions, the way Google revolutionized ad services and similar monetization schemes for websites has helped to salvage some of the extreme damage Craigslist wrought upon newspaper advertising — not enough to stop the decline in reporting jobs, but enough to at least lighten the financial load for small, independent news sites. Similarly, there is no question that Twitter and Facebook have both revolutionized how journalism is created and consumed, by helping journalists connect with sources and experts to interview for stories and by allowing regular people to better understand the reporters who filter and provide information. Social media has also allowed journalists to build up their own personal brands and sell their work to a much broader audience than they would ordinarily be able to access. This is how I built my own brand when I worked as a journalist years ago. It is difficult to imagine the current media landscape existing without these companies, even though they are relatively new to the game.
But the idealism of Silicon Valley executives was never a very good match for the brutal realities of international politics or domestic elections; today it feels more like naïveté. People can and will play dirty, and these services were built on the assumption that users will engage in good faith. They don’t, and they won’t in the future, and this strikes me as a fatal challenge to these companies’ business models. It seems impossible to know for now exactly how far-ranging the regulations on these companies will ultimately become, or how they will have to alter their business models as a result. The only certain thing right now is that some sort of change is coming. Hopefully the executives at these firms are no longer in denial about it, or else they will be blindsided by the coming backlash.