What Do “Values” Even Mean to an Artificial Intelligence?

Over at Future Tense, Adam Elkus poses a very interesting question I’ve covered here before:

Which brings us back to Russell’s optimistic assumptions that computer scientists can sidestep these social questions through superior algorithms and engineering efforts. Russell is an engineer, not a humanities scholar. When he talks about “tradeoffs” and “value functions,” he assumes that a machine ought to be an artificial utilitarian. Russell also suggests that machines ought to learn a cross-section of human values from human cultural and media products. So does that mean a machine could learn about American race relations by watching the canonical pro-Ku Klux Klan and pro-Confederacy film The Birth of a Nation?

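To make the jargon concrete: in decision-theoretic terms, a “value function” is just a scoring rule, and an “artificial utilitarian” is an agent that picks whatever action that rule scores highest. Here is a minimal sketch of my own (the outcome categories and weights are invented for illustration, not drawn from Russell’s work) showing how every such rule bakes in its author’s judgments:

```python
# Toy "artificial utilitarian": rank candidate actions by a hand-written
# utility function. Every weight below is a value judgment made by whoever
# wrote the program -- the categories and numbers here are purely invented.

from typing import Dict

UTILITY_WEIGHTS: Dict[str, float] = {
    "lives_saved": 10.0,      # someone decided a life is "worth" ten points
    "dollars_spent": -0.0001, # and that money trades off against lives
    "rules_broken": -5.0,     # and how much rule-breaking "costs"
}

def utility(outcome: Dict[str, float]) -> float:
    """Sum the weighted features of a predicted outcome -- the 'value function'."""
    return sum(UTILITY_WEIGHTS.get(k, 0.0) * v for k, v in outcome.items())

def choose_action(options: Dict[str, Dict[str, float]]) -> str:
    """Pick the action whose predicted outcome scores highest."""
    return max(options, key=lambda name: utility(options[name]))

if __name__ == "__main__":
    candidates = {
        "intervene": {"lives_saved": 3, "dollars_spent": 50_000, "rules_broken": 1},
        "do_nothing": {"lives_saved": 0, "dollars_spent": 0, "rules_broken": 0},
    }
    print(choose_action(candidates))  # whichever the hand-picked weights favor
```

Nothing in that scoring rule is discovered; all of it is asserted by the programmer, which is exactly the opening Elkus is pressing on.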
This is one of the most important questions to grapple with when thinking about whether computer systems have values and what those values might mean. Elkus and I agree that, right now, intelligent systems will reflect the values of the society that created them. The challenge is that those values are not only not universal; even within a culture they vary. After all, if you read the Constitution, slavery is just as much an American “value” as individual liberty from government control. Much of American political history has been devoted to wrestling with what, exactly, American values really mean: helping the poor? Segregating dark-skinned and light-skinned people? Access to abortion?

And even putting values in terms of “American” values carries a lot of context that probably won’t make sense for developing artificial intelligence. A common refrain among left-wing activists (and, ahem, Sting) during the Cold War was that “Russians love their children too,” so, the thinking went, why would we need nuclear weapons? This is an attempt to take a common human value (regard for one’s offspring) and assume that a shared regard for one’s own offspring should compel people not to hate or commit acts of violence.

The obvious nuttiness of such an idea (German Nazis loved their kids but they still committed genocide) should speak to the nuttiness of trying to program “human value” into an artificial mind.

But I want to go a step beyond where Elkus goes. He asks what those values should be, rightly notes that a wide range of values exists across humanity, and argues that it is simply not appropriate to have straight white cisgendered male scientists making that determination. And he is correct to do so! But why should we assume that “human values” will be meaningful at all to a machine? We know that even humans with similar values will do unspeakably terrible things to each other (the common heritage of European liberal values plunged the entire world into horrors unknown in our history during the 20th century), so why should we expect that a machine somehow programmed with those values would do anything else?

After all, humans raised in the same environment by the same parents will often exhibit a wide range of behavioral and psychological outcomes later in life. Is that an issue of their receiving the right “values”? Hell, a big chunk of American pop culture is about cheeky teens telling their parents to shove their boring adult values where the sun won’t shine. Who in their right mind would want a machine to grapple with such a thing?

Therefore, I suggest that maybe the entire discussion of “values” in artificial intelligence is little more than a moral panic, because “values” as we conceive them (with all of the many unspoken assumptions that go into such a concept) are meaningless to a machine. The A.I. researcher Joanna Bryson argues that we cannot assign biological values to machines “because our complete authorship [of robots and AI] is fundamentally different from our relationship to humans or other evolved systems.”

Think of what goes into a “human value.” You could say it is a set of normative assumptions about moral beliefs, tied to a set of personal rules, steeped in an idea of justice, bounded by social constraints on behavior, and on and on and on. Even then, it will be henpecked to death by philosophers and social scientists and theorists and critics over what any of those words even mean, because we do not have a solid understanding of what universalist values are for humanity. In a real way, we don’t have any. So demanding that machines somehow be programmed with these ephemeral, intersubjective, vague, and relative values is an impossible demand. I wonder if it’s intentionally made impossible, so as to forestall meaningful development.

Because we have no theoretical way of overcoming the halting problem, even the smartest artificial intelligence system we can create will, ultimately, be just a dumb machine. It will never be able to do more or less than precisely what it is programmed to do. And the bounds within which that machine is programmed will reflect values that are not “human” in any universal way, but societal and cultural. In a major way, our machines already reflect our values.
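The point is, if anything, sharper for systems that “learn” values from cultural products, as Russell suggests. Here is another toy sketch of my own (the corpus and labels are entirely invented): a model of this kind just hands back whatever judgments its curators put into the training data, bias and all.

```python
# Toy sketch of "learning values from cultural products": memorize which
# labels a (hypothetical, hand-curated) corpus attached to each situation.
# Whatever bias the corpus carried comes back out unchanged.

from collections import Counter
from typing import Dict, List, Tuple

def train(corpus: List[Tuple[str, str]]) -> Dict[str, Counter]:
    """Count how often each situation was labeled each way in the corpus."""
    counts: Dict[str, Counter] = {}
    for situation, label in corpus:
        counts.setdefault(situation, Counter())[label] += 1
    return counts

def judge(model: Dict[str, Counter], situation: str) -> str:
    """Return the majority label from the corpus -- there is no reasoning here."""
    if situation not in model:
        return "no opinion"  # the machine has no value it was not handed
    return model[situation].most_common(1)[0][0]

if __name__ == "__main__":
    # A deliberately lopsided, invented "cultural" corpus.
    corpus = [
        ("protest", "unacceptable"),
        ("protest", "acceptable"),
        ("protest", "unacceptable"),
        ("charity", "acceptable"),
    ]
    model = train(corpus)
    print(judge(model, "protest"))  # "unacceptable" -- the corpus's skew, verbatim
    print(judge(model, "war"))      # "no opinion"
```

Feed such a system The Birth of a Nation as its corpus and you get Elkus’s worry back out, mechanically.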

At the end of the day, we, as a species, have a very poor understanding of how and why we make moral decisions, how our values interact with our decisions, and how those decisions actually get made. At most, we can infer certain types of bounded hypotheses from limited observation of small control groups; this is a poor foundation for creating an entire psychosociology of a new class of artificial brain we still cannot theoretically describe. So why does the discussion seem stuck on the debate over which “values” a machine should be programmed with? It’s like starting the alphabet at the letter “L” instead of “A.”

So could we possibly expect an artificial intelligence, steeped in none of the acculturation and biological urges that (very complexly) make us who we are as value-based beings, to reflect our exact moral priorities? To me, this is a non sequitur. We have no right to demand or assume that a software-based system will have any moral analog to what we consider to be values: it might, but it probably won’t. It will have its own values: they might align with ours, and we should try to make sure they don’t clash with ours. But why on earth would we try to make a non-human have human values? It’s like asking a dog to have human values: it never will, and it is deeply unfair to expect a dog to do so.

Further reading:
The Evolution of Robot Panic
Why Experts Fret About A.I.
Common Misconceptions in A.I. Apocalypse Scenarios
Tools Are Utilitarian Objects

Joshua Foust used to be a foreign policy maven. Now he helps organizations communicate strategically and build audiences.