Issue 1.3. “Social Capital and the Tinderbot,” by Tanner Greer

Originally published on July 12, 2016

Only a few weeks ago, I met my first bot swarm.

That morning, I posted a link on my Facebook Wall to one of Donald Trump’s campaign speeches. Cluttering my friends’ Facebook feeds with political commentary is one of my more shameful addictions, but the costs of indulging the habit are small enough that, at this point, the behavior is almost automatic. This time the costs were higher. Like blowflies drawn to a carcass, the bots descended on the link, sharing it, liking it, and leaving one comment after another beneath it.

A representative example was one Yenj Romin, who reposted my comments on ‘her’ own profile. Ms. Romin had been on Facebook for approximately nine weeks. Those who visit her profile will discover that all of her posts are written in capital letters. They proclaim things like “KILL ALL THE MUSLIMS” and “EVERYONE HATES JOHN MCCAIN.” She has four friends, and her profile picture is a drawing of the words “I’m voting for Trump.” This March she enrolled at Barstow (CA) High School and Barstow Community College, and began gainful employment in the U.S. Army—all at the same time.

Barstow, California is home to a Marine Corps Logistics Base.*

Romin was probably the most colorful bot attracted to the post, but hardly the only one. I wasted a day deleting memes posted by ‘people’ I had never heard of, who do not follow me, and who do not have any friends who are my friends, or even any of my followers’ friends. Some of these accounts were quite old—the oldest (a HillaryBot) had been posting things since 2013, and had just enough pictures of neat family vacations in Miami to allay suspicion.


The central bot of Maria Cecire’s “Massively Open” is far more sophisticated than the bots that defaced my Facebook wall in May. No one could ever imagine falling in love with an election bot (or, for that matter, their sexually-charged cousins on Tinder). Cecire imagines a world where that is no longer the case. That world may seem far away, but in the election bots of today we see small foreshocks of the social shockwaves that will ripple through our society in the future. Most discussions in the popular press on the impact of advances in AI technology focus on the economic side of these shocks, returning to the well-trodden ground of automation, inequality, and economic displacement. Cecire calls our focus to the social fallout of these same trends. What happens to a social landscape hijacked by bots? How is trust possible in a society where anyone can be a bot in disguise?

The question is a surprisingly relevant one that cuts to the heart of contemporary debates on how the Internet is changing our communities and our willingness to trust the people we meet in them. Though some would like to blame the Internet for all of the country’s woes, America’s failing social capital predates the Internet and social networks like Facebook by decades. In the Internet’s earliest days, many hoped that the web might be the key to reversing these trends. Silicon Valley visionaries, confident that their works alone had the power to mend every dent, dry every tear, and fix every ill, advanced a hopeful array of arguments about the transformative power of technology.

By now these arguments are probably long familiar: the Internet makes coordination possible between parties who would not even have been able to communicate without it; blogging, forums, and social networks have created entirely new platforms for social capital to bloom; and the Internet has lowered the transaction costs of friendship itself, so that loneliness never had to be determined by location again.

Two decades have passed since these heady visions of the future were first described. The science is in and the verdict is clear: the Internet has not brought about a new age of American community.

Affairs have plodded along much as they did before. The Internet may make international friendships possible, but the people who compose the average Jane’s online network are more or less the same as those who live close to her. The Internet increases the number and strength of ‘weak’ social ties with acquaintances, but does not make a meaningful difference in the number or intensity of the ‘strong’ social bonds we form with trusted friends and family members. Thus, using the Internet improves most measures of social capital, but only on the margins (and far less than fast-disappearing activities like weekly church attendance do). These benefits also follow class lines. Millennials with well-to-do parents are more engaged in associations, volunteer work, and friendships than their parents were at their age; yet those with working class origins are even less engaged than their parents. For America’s poor, the Internet has done nothing more for the strength of their social networks than the television did.


These humdrum findings are uninspiring in the eyes of the technologist, whose lofty visions did not materialize. However, the marginal benefits Internet use has on social health should not be discounted—meaningful Internet communities have formed, and what is more, formed on nothing more than ID tags and small, pixelated avatars. These social bonds forged on digital social networks have not changed the world, but they have proven something remarkable: you do not need to meet someone in person in order to trust them.

In time, bots may put an end to that.

Cecire captures the psychological costs of a bot-dominated social sphere in the character Zoe, a worn-down teacher struggling to find purpose in her work. The only bright spots in the MOOC classes she manages are the few “students who get it.” These few students are all that keep her “feel[ing] like [she’s] doing something that isn’t just stamping out implements for a mindless machine.” But in a world where anyone online might be a bot in disguise, the student who ‘gets it’ may have been a mindless machine all along. Zoe faces this revelation with shock:

“What’s real about a bot?” Zoe cried. “He wasn’t even a real student, just a piece of a giant degree-scraping scam. He was in my course — in all of his courses — just to get an authenticated degree, not to learn anything. All so his scammer overlords could harvest the wages of whatever remote job he’d eventually steal from a real student. What was the point?” … Zoe felt her eyes fill again. “I thought I was getting through to someone,” she said softly. “But I was just talking to myself.”

Zoe’s tears are fictional, but her frustrations have clear analogues in the real world. In her cry there is an echo of the very first dude fooled by a Tinderbot into flirting for ten minutes with a piece of code. As AI technology progresses, it is a feeling we will all become more familiar with.


These feelings have consequences. A social network bombarded by bots will soon be a barren one. This is partly because of the sheer tedium of blocking, deleting, or ignoring one bot-message after another. A high volume of stupid bots is enough to drive anyone away from a platform. But there is a second hazard, one that becomes more apparent the smarter the bots become. I might reduce this to a general hypothesis: the more sophisticated the bots haunting a social network are, the less trusting its members will be.

The logic behind this hypothesis is easy to grasp. How many false students would you have to put your heart into before you stopped teaching with earnestness? How many political debates would you have with robots before you decided to stop discussing politics with strangers altogether? How many times would you be fooled by dating bots before you gave up on online dating entirely?

Bots are not yet sophisticated enough to cast that kind of pall over most of the online communities they frequent. But the bot hordes of today are sufficiently adept to lend a peek into the likely tenor of things to come. Look no further than the 2016 elections, whose primaries were filled with accusations that electioneers on one side were using bots to drown out the other. Perhaps these claims are true. Perhaps they are not. The truth here did not really matter. What mattered was the accusation. The mere intimation that the other side relies on robots for its strongest support poisons discussion. It is the possibility of astroturfing, not its proven use in every case, that sows these seeds of distrust. The more sophisticated these bots become, the more distrustful we will be. It is not hard to imagine a future in which any honest Internet debater will be tarred with the accusation that they are nothing more than a sophisticated discussion bot.

How we will solve this problem remains to be seen. It is still a problem far on the horizon, most visible in fictional tales like Cecire’s. But it is a problem we will one day have to confront: how is civic discourse possible when you cannot be sure the people talking with you are human?


Tanner Greer is a writer and analyst currently based out of Taipei. His research focuses on the evolution of East Asian strategic thought from the time of Sunzi to today. He blogs at The Scholar’s Stage, and can be followed on Twitter at @Scholars_Stage.

*It is also the closest city to the Army-run Fort Irwin. But if our bot had truly been assigned there, then that is where ‘she’ would be. Without a family of her own there is no reason for ‘her’ to be living 45 minutes off base.