An Unavoidable Wave of Internet Regulation

On Twitter, the blue “verified” checkmark next to a person’s account is meant to be a symbol of trust. It says that the service has verified the person’s identity because they are of public interest and at risk of impersonation. While Twitter used to have staff individually verify accounts in an opaque process, last year it opened the process to public application. It goes to great pains to note that verification does not imply endorsement by the company.

At the same time, Twitter has made a token effort to address the torrent of abusive content on its site: the endless waves of trolling, propaganda, misinformation, hate speech, and calls to violence. It has become the subject of personal statements by Twitter’s most senior leadership, and when it comes to Islamist extremism, such as al Qaeda and ISIS, they have expended effort to automatically block accounts that promote extremist content.

So it was a bit confusing when they approved the verification of Jason Kessler, the organizer of the Charlottesville Nazi rally that resulted in one death and dozens of injuries when one of the marchers drove his car, ISIS-style, into a crowd of anti-fascist protesters. Afterward, Kessler mocked the victims in nasty, personal ways, and has been arrested repeatedly (including for doxing activists in an effort to subject them to violent reprisal). Verifying such a person would seem to cut against Twitter’s stated goal of stamping out hateful, abusive content. But it has bigger implications than that.

The digital media companies that dominate our political discourse and news consumption are in the midst of an existential debate about themselves, but they don’t seem to realize it. With the recent contentious Congressional hearings featuring the general counsels of the three largest digital media companies — more on that term in a minute — it seems clear that the Internet’s largest firms are about to face a wave of unwanted government regulation. Yet the firms themselves show no urgency to self-police in order to avoid it.

What is going on? Let’s start with my decision to refer to Twitter, Facebook, and Google as digital media companies, rather than as social media companies. It is a designation that the companies’ executives vehemently reject (see Facebook COO Sheryl Sandberg’s recent rejection of the term during a tour of Europe).

Yet it seems everyone else thinks of them as digital media: strong majorities of both Twitter and Facebook users say the services are their primary means of reading the news, and the President uses Twitter as an information press release machine. Google even operates a News service, which acts as a customizable front page for people who plug in certain preferences, topics, and locations. To differentiate these firms from more traditional companies simply because they produce little of their own content requires too many leaps of logic — especially considering the moves all three have taken to curate, edit, and produce news summaries and breaking stories.

I suspect one reason none of these firms want to be called media companies is because that would subject them to certain types of scrutiny and regulation — such as campaign advertising laws, which would require them to be far more transparent about the groups spending money on political advertisements than they have been willing to be so far. Put simply, one of the ways these firms remain effective at serving ads and mining consumer data is by avoiding that sort of transparency.

It is easy to see how someone reacts to a piece of content if they don’t know where it’s coming from, which makes it easier to sell valuable advertising services to high-spending clients. The Trump campaign, for example, made use of this exact type of service, and hosted Facebook employees at its campaign headquarters to drill down into microtargeted ads. This is one more way the social media companies are really digital media companies: they actively sell their services as advertising and broadcast platforms, the same way a TV station or newspaper would. Regulation and transparency would gum up this very lucrative process.

Despite this clear threat to the corporate bottom line, these companies nevertheless seem reluctant to engage in the difficult self-policing that would be necessary to avoid such an outcome. Their services are still hotbeds of misinformation, propaganda, and lies. Moreover, whether performed by humans or by algorithms, they remain frustratingly inconsistent in distinguishing fraudulent stories from actual news and real facts. Just this weekend, as a gunman murdered 26 people at a church in Texas, Google amplified maliciously false memes from Twitter about the alleged shooter.

This is not what’s supposed to happen. Google announced months ago that it had cracked down on “billions” of pieces of fake news, yet while it has laudably disrupted the ad revenue for websites that peddle conspiracy theories, it still cannot seem to tamp down on outrageously false stories bubbling to the top of its algorithmic searches. Similarly, Twitter is pondering a “flag” for users to identify misinformation, though much like its lamentably inadequate anti-harassment tools, it’s difficult to see how this will do anything other than exacerbate the tribalism that already makes people vulnerable to fake news in the first place. (Disclosure: I permanently left Twitter after a coordinated harassment campaign left me unable to cope with the volume of abusive messages; at the time, Twitter even ruled that posts I had reported for containing violent imagery of me did not violate their terms of service.)

There is no question that fake news, propaganda, misinformation, and even coordinated abuse campaigns are difficult challenges to address, especially in the context of supporting a platform for free speech. But a flood of hate speech does not encourage the free flow of ideas — if anything, it prevents that exchange by raising the cost of public engagement so high that most people simply opt out of the conversation. And if the digital media companies that currently dominate the discussion were not so essential to our politics, it probably would not be a huge deal when people call to regulate the industry. But Facebook recently revealed that fake content generated by the Russian government reached 126,000,000 Americans during the election — a degree of reach that is utterly unprecedented in other forms of media. This was also a dramatic revision of last month’s estimate of 10,000,000 people, which was itself an update from CEO Mark Zuckerberg’s post-election declaration that the entire issue was “absurd.”

YouTube, which is owned by Google, is experimenting with one way to deal with the problem. It directs searchers away from obvious conspiracy theories and fake news, and it has begun to limit the potential ad revenue from channels that peddle hateful and abusive messages. This has obviously upset some YouTube creators, especially because it has been applied to sexual minorities in addition to obviously abusive content, but it is at least one method for handling the problem — and one that YouTube itself will undoubtedly refine as it figures out the right balance to strike between being a totally amoral, neutral platform and actually taking an interest in the social impact of its products.

The other large social/digital media companies are transitioning to this view, but probably not quickly enough. When even publications like The Economist are asking whether these firms are bad for democracy, it is a leading indicator that a serious backlash is underway. Despite the sober tone adopted by the general counsels in Congress, the big three companies themselves remain strangely blithe about the real-world effects of their platforms… and how that real world is on the verge of yanking their leash back, hard.

Over the summer, Germany passed a harsh law that imposes a €50 million fine on social media companies that don’t remove hate speech within 24 hours. It was the result of years of Facebook inaction against individuals who engage in hate speech on the site, like falsely accusing innocent people of terrorism. Facebook is in hot water elsewhere as well for refusing to police the content on its servers more thoroughly, like in India, where a lawsuit alleges the service refused to take down graphic images of child abuse for over a year.

It isn’t just Facebook, either. Twitter has the capacity to filter out Nazis from its service in Germany (one assumes it will not pay such a heavy fine for each account posting swastikas), in an effort similar to how it removes ISIS and al Qaeda accounts. Yet it refuses to apply that capacity anywhere else on its service — a point of growing contention even where it cannot be compelled by law to do so, as it is in Europe.

Whether Germany’s legislation becomes a model for other countries remains to be seen. This is the downside of firms declining to self-police until a government forces them to: while such a law would certainly fall foul of the US First Amendment, it might provide a model for other, more abusive governments to use as a justification to crack down on political and social dissent. Everyone loses in that scenario, especially the very free speech advocates these companies say they value. Moreover, this legislation specifically targets hate speech, and not fake news — though there are common threads between the two problems, they are distinct challenges.

Still, at this point, after years of foot-dragging over racist and sexist abuse, and months of foot-dragging over fake news, it seems like regulation is inevitable. For whatever reason, none of these companies began to take concrete action on the problem until large numbers of people essentially forced governments to intervene, and some major reworking of how these companies operate seems unavoidable.

The recent moves to crack down on the avalanche of garbage are worth praising — Facebook, in particular, was much more proactive at taking down conspiracies and foreign-financed advertising during the French and German elections than it was during the UK Brexit vote or the US election last year. Yet the decision to treat these companies like the media companies they are, rather than the neutral platforms they wish to be, is almost certain to happen. That means that social media is in for some disruption of its own pretty soon.

This is a real shame, because Facebook, Twitter, and Google all provide extremely valuable services for public life today. Even apart from its search functions, the way Google revolutionized ad services and similar monetization schemes for websites has helped to salvage some of the extreme damage Craigslist wrought upon newspaper advertising — not enough to stop the decline in reporting jobs, but enough to at least lighten the financial load for small, independent news sites. Similarly, there is no question that Twitter and Facebook have both revolutionized how journalism is created and consumed, by helping journalists connect with sources and experts to interview for stories and by allowing regular people to better understand the reporters who filter and provide information. It has also allowed journalists to build up their own personal brands and sell their work to a much broader audience than they would ordinarily be able to access. This is how I built my own brand when I worked as a journalist many years ago. It is difficult to imagine the current media landscape existing without these companies, even though they are relatively new to the game.

But the idealism of Silicon Valley executives was never a very good match for the brutal realities of international politics or domestic elections. It feels more like naïveté today. People can and will play dirty, and these services were built on the assumption that people will use them in good faith. They don’t, and they won’t in the future, and this strikes me as a fatal challenge to their business models. It seems impossible to know for now exactly how far-ranging the regulations on these companies will ultimately become, or how they will have to alter their business models as a result. The only certain thing right now is that some sort of change is coming. Hopefully the executives at these firms are no longer in denial about it, or else they will feel blindsided by the coming backlash.

Joshua Foust used to be a foreign policy maven. Now he helps organizations communicate strategically and build audiences.