On Treating Tools as Utilitarian Objects

There is a growing consensus among western computer researchers, technologists, celebrity intellectuals, and pundits about the ethics and morals of creating artificial intelligence. One of the technical voices leading the charge (as opposed to entrepreneurs and celebrity physicists) is artificial intelligence pioneer Stuart Russell, who recently issued a call for computers to be “provably aligned” with what he believes to be “human values.”

[T]o the extent that our values are revealed in our behavior, you would hope to be able to prove that the machine will be able to “get” most of it. There might be some bits and pieces left in the corners that the machine doesn’t understand or that we disagree on among ourselves. But as long as the machine has got the basics right, you should be able to show that it cannot be very harmful.

This is, to put it gently, a remarkable exercise in hogwashery. Setting aside the problem of demanding that a computer (much less a human) “provably align” itself with a set of values, there is a more fundamental problem of defining what those values are. In the quote above, Russell seems to have a hard time defining “human values” in a meaningful way, even admitting that they might never be definable, while still suggesting that computers must be provably aligned with them anyway. It is self-contradictory nonsense.

The values that one ascribes to humanity are much more flexible than most western technologists realize. What people usually mean when they discuss values that they think apply to humanity as a whole are really just WEIRD values, that is, values unique to Western, Educated, Industrialized, Rich, and Democratic societies. Yet, as Mark Movsesian notes, WEIRD cultures (especially American culture!) have beliefs that are in a distinct minority when examined globally.

To summarize some basic issues with assigning universality to human values, consider the role that privacy, free expression, political choice, and entitlement to services play in American culture. As Americans learned in reacting to the horrors of the Charlie Hebdo massacre, not even our most culturally similar societies in Europe hold the right to free, and freely offensive, speech as absolutely as we do. Americans tend to value religious faith to a degree almost no one else in the developed world does, to the point of deliberately enabling Christian religious extremism through probably unconstitutional laws (often disguised as “religious freedom” laws that allow openly prejudicial behavior toward gays and lesbians).

So, while one can talk a lot about “human values,” the reality is that most values are much less “human” and much more “social” than the people assigning value judgments to artificial intelligence really want to admit. You can even look at the attempt to create a universal declaration of the rights all humans are entitled to and find billions of people exempting themselves from its clauses about religion, about freedom from offense, and so on.

But there is another angle to this: I would suggest, contra Russell, that computers and AI already reflect our values. That is, they are accurate, if not exhaustive, representations of what we care about and strive for.

Think about what we use advanced software and robots to do right now: precision manufacturing, complicated search and analysis patterns, commerce, trading, warfare, and so on. Our widespread use of smartphones is drastically changing how our brains process and understand information, and that will only continue as time goes on.

So “values” are not a thing to be programmed into computers, but something inherent to how they are designed; and neither those values, nor that design, is a universal constant.

This is a point lost on many technologists, who get so excited by the opportunity (and, one would imagine, salary) of working on this new frontier that they don’t consider the values they are promoting.

Last year, for example, Andrew Ng, a pioneering AI researcher for Google, left the company to work for Baidu, the Chinese search engine giant. He described his decision in laudable terms: working for a new, less dominant player in the world market, focusing on new internet users, getting to push his research in new directions, and so on. And as far as I can tell, Ng genuinely believes those things and wishes no one any harm.

However, his company, and the technology it produces, is a different matter. Baidu, being a Chinese company, cannot escape the clutches of the Chinese government, which recently coopted Baidu’s traffic through a clever (and likely state-directed) takeover of China’s internet backbone to attack the website GitHub, along with millions of other users, for days on end. I do not think Ng in any way supports China coopting Chinese companies to censor and destroy websites hosted abroad, but those are the values built into the design of the technology on which he now works. Whatever his intentions are, it is unavoidable (this is, of course, on top of the everyday experience of Baidu willingly helping the Chinese government censor and oppress free speech in the country).

China is hardly the only place where cultural values, expressed through technology, differ dramatically from the West. In Russia, Eugene Kaspersky, the founder of the eponymous computer security firm, not only is a former KGB officer but also openly socializes with senior FSB officers while targeting the American and British governments for embarrassing disclosures about their online espionage activities. I don’t doubt that many of Kaspersky Lab’s security researchers genuinely believe they are making the web a safer place, but there are inherent values built into an enterprise that runs as a quasi-official arm of the Russian intelligence service.

But maybe even thinking about computers in terms of values is nonsense. Joanna Bryson, another artificial intelligence researcher, has made the rather forceful argument that assigning human values to machines is fundamentally nonsensical. We cannot assign biological values to machines, she says, “because our complete authorship [of robots and AI] is fundamentally different from our relationship to humans or other evolved systems.”

Rather, Bryson argues, “People want to make AI they owe obligations to, can fall in love with, etc. – ‘equals’ over which we have complete dominion.”

It simply is not what computers are, nor does it represent what they do.

Even the smartest artificial intelligence system is a dumb machine. It does, and is capable of, precisely what is programmed into it. It does not exist outside the bounds of what humans create it to do, and, the many problematic allusions to science fiction aside, there is no known mechanism for how a computer could exceed those boundaries.

As it stands now, computers are extremists. They are limited to a binary logic: on/off, True/False. They are built on the assumption that things are either true or false, and one of the hardest concepts to grasp when you first learn programming is how to think in these stark, black-and-white terms. It simply is not intuitively human, and people who do think in strictly Manichean terms (with us or against us, no in between) are usually considered extremists.

While moving beyond boolean logic structures is the very cutting edge, at the end of the day the machine code is still binary: on or off, 1 or 0. It’s great for pattern recognition, but assuming that that can be built into moral reasoning is just that: an assumption. We simply haven’t cracked the code for how humans build moral reasoning yet, so to assume that computers should have that built in before they’re even built strikes me as little more than a cynical cover for technophobia.
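To make that point concrete, here is a minimal Python sketch of my own invention; it is not drawn from Russell, Bryson, or any real system, and the function, word list, and threshold are all made up. A toy “pattern recognizer” produces a continuous score, but the program still has to collapse that score into a hard True or False before it can act on it.

```python
# A toy illustration, not any real system: even when software produces a
# continuous "confidence" score, the program ultimately collapses it into
# a hard yes-or-no branch. The names and threshold are invented.

def score_offensiveness(text: str) -> float:
    """Pretend pattern recognizer: returns a score between 0.0 and 1.0."""
    flagged_words = {"hogwash", "manichean"}
    words = text.lower().split()
    hits = sum(1 for word in words if word in flagged_words)
    return min(1.0, 10 * hits / max(len(words), 1))

THRESHOLD = 0.5  # an arbitrary cutoff, chosen by a human

def is_offensive(text: str) -> bool:
    # Whatever nuance the score carried vanishes here: the answer is True or False.
    return score_offensiveness(text) >= THRESHOLD

print(is_offensive("what remarkable hogwash"))      # prints True
print(is_offensive("a perfectly polite sentence"))  # prints False
```

However fuzzy the score in the middle, the decision the rest of the program sees is a binary one.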

And are machines incapable of moral reasoning all that bad? Underneath all of the interesting discussions about robots and morals I keep getting left with a big “so what?” Do you care if a machine only simulates politeness, a basic and rough concern for sensibilities, and tries to maximize human life when presented with the trolley problem? Is the end result, the user experience if you will, all that different if the decision came from rigid and unfeeling rule sets rather than a heartfelt agony over what the best decision is?
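To show what such a rigid and unfeeling rule set might look like, here is a deliberately crude Python sketch; the single rule (spare more lives) and the names are mine, invented purely for illustration, not anyone’s proposed system.

```python
# A deliberately crude "rule set" for the trolley problem. The rule and the
# names below are invented for illustration only.

def choose_track(lives_on_main_track: int, lives_on_side_track: int) -> str:
    """Divert the trolley only if the side track costs fewer lives."""
    if lives_on_side_track < lives_on_main_track:
        return "divert"
    return "stay"

# From the outside, the output is indistinguishable from a considered moral
# choice, even though no agonizing happened anywhere in the process.
print(choose_track(lives_on_main_track=5, lives_on_side_track=1))  # prints "divert"
```

The machine feels nothing, but the lever still gets pulled the same way.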

Maybe. But I don’t see it, for the time being. And I certainly don’t see the big deal over whether Google’s next search algorithm has moral values attached to it, so long as it finds the things I want it to on the Internet. And maybe that’s just going to be good enough for any of us.
