On Treating Tools as Utilitarian Objects

There is a growing consensus among Western computer researchers, technologists, celebrity intellectuals, and pundits about the ethics and morals of creating artificial intelligence. One of the technical voices leading the charge (as opposed to entrepreneurs and celebrity physicists) is artificial intelligence pioneer Stuart Russell, who recently issued a call for computers to be “provably aligned” with what he believes to be “human values.”

[T]o the extent that our values are revealed in our behavior, you would hope to be able to prove that the machine will be able to “get” most of it. There might be some bits and pieces left in the corners that the machine doesn’t understand or that we disagree on among ourselves. But as long as the machine has got the basics right, you should be able to show that it cannot be very harmful.

This is, to put it gently, a remarkable exercise in hogwashery. Setting aside the problem of demanding that a computer (much less a human) “provably align” itself with a set of values, there is a more fundamental problem: defining what those values are. In the quote above, Russell struggles to define “human values” in any meaningful way, even conceding that parts of them may never be pinned down or agreed upon, while still insisting that computers must be provably aligned with them anyway. It is self-contradictory nonsense.

The values that one ascribes to humanity are much more flexible than most Western technologists realize. What people usually mean when they discuss values they think apply to humanity as a whole is really just WEIRD values — that is, values unique to Western, Educated, Industrialized, Rich, and Democratic societies. Yet, as Mark Movsesian notes, WEIRD cultures (especially American culture!) hold beliefs that are in a distinct minority when examined globally.

To summarize some basic issues with assigning universality to human values, consider the role that privacy, free expression, political choice, and entitlement to services play in American culture. As Americans learned while reacting to the horrors of the Charlie Hebdo massacre, not even our most culturally similar societies in Europe hold the right to free, and freely offensive, speech as absolutely as we do. Americans tend to value religious faith to a degree almost no one else in the developed world does, to the point of deliberately enabling religious extremism by Christians through probably unconstitutional laws (often disguised as “religious freedom” laws to allow openly prejudicial behavior toward gays and lesbians).

So, while one can talk a lot about “human values,” the reality is that most values are much less “human” and much more “social” than the people assigning value judgments to artificial intelligence really want to admit. You can even look at the attempt to create a universal declaration of the rights all humans are entitled to and find billions of people exempting themselves from its clauses about religion, about freedom from offense, and so on.

But there is another angle to this: I would suggest, contra Russell, that computers and AI already reflect our values. That is, they are accurate, if not exhaustive, representations of what we care about and strive for.

Think about what we use advanced software and robots to do right now: precision manufacturing, complicated search and analysis, commerce, trading, warfare, and so on. Our widespread use of smartphones is drastically changing how our brains process and understand information, and that will only continue as time goes on.

So “values” are not a thing to be programmed into computers, but something inherent to how they are designed — and neither those values, nor that design, is a universal constant.

This is a point lost on many technologists, who get so excited by the opportunity (and, one would imagine, salary) of working on this new frontier that they don’t consider the values they are promoting.

Last year, for example, Andrew Ng, a pioneering AI researcher for Google, left the company to work for Baidu, the Chinese search engine giant. He described his decision in laudable terms: working for a new, less dominant player in the world market, focusing on new internet users, getting to push his research in new directions, and so on. And as far as I can tell, Ng genuinely believes those things and wishes no one any harm.

However, his company, and the technology it produces, are a different matter. Baidu, being a Chinese company, cannot escape the clutches of the Chinese government, which recently coopted Baidu’s traffic through a clever (and likely state-directed) takeover of China’s internet backbone to attack the website GitHub, along with millions of other users, for days on end. I do not think Ng in any way supports China coopting Chinese companies to censor and destroy websites hosted abroad, but those are the values built into the design of the technology on which he now works. Whatever his intentions are, it is unavoidable (this is, of course, on top of the everyday experience of Baidu willingly helping the Chinese government censor and oppress free speech in the country).

China is hardly the only place where cultural values, expressed through technology, differ dramatically from the West. In Russia, Eugene Kaspersky, the founder of the eponymous computer security firm, is not only a former KGB officer but also someone who openly socializes with senior FSB officers while targeting the American and British governments for embarrassing disclosures about their online espionage activities. I don’t doubt that many of Kaspersky Lab’s security researchers genuinely believe they are making the web a safer place, but there are inherent values built into an enterprise that runs as a quasi-official arm of the Russian intelligence service.

But maybe even thinking about computers in terms of values is nonsense. Joanna Bryson, another artificial intelligence researcher, has made the rather forceful argument that assigning human values to machines is fundamentally nonsensical. We cannot assign biological values to machines, she says, “because our complete authorship [of robots and AI] is fundamentally different from our relationship to humans or other evolved systems.”

Rather, Bryson argues, “People want to make AI they owe obligations to, can fall in love with, etc. – ‘equals’ over which we have complete dominion.”

That simply is not what computers are, nor does it describe what they do.

Even the smartest artificial intelligence system is a dumb machine. It does, and is capable of, precisely what is programmed into it. It does not exist outside the bounds of what humans create it to do, and the many problematic allusions to science fiction aside, there is no known mechanism for how a computer could exceed those boundaries.

As it stands now, computers are extremists. They are limited to binary logic: on/off, True/False. They are built on the assumption that things are either true or false, and one of the hardest concepts to grasp when you first learn programming is how to think in these stark, black-and-white terms. It simply is not intuitively human, and people who do think in strictly Manichean terms (with us or against us, no in-between) are usually considered extremists.
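To make that black-and-white thinking concrete, here is a minimal Python sketch. The function names and the “banned words” list are entirely hypothetical, not drawn from any real system; the point is only that the machine must collapse every judgment into True or False.

```python
def is_offensive(text: str) -> bool:
    # Toy classifier: the machine has to commit to True or False,
    # even for speech a human would file under "it depends."
    banned_words = {"slur_a", "slur_b"}  # placeholder list, purely illustrative
    return any(word in banned_words for word in text.lower().split())


def moderate(text: str) -> str:
    # There is no "sort of offensive" branch; the logic is strictly binary.
    return "removed" if is_offensive(text) else "published"
```

There is no branch for ambiguity here because the language gives you nowhere to put one.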

While moving beyond boolean logic structures is the very cutting edge of the field, at the end of the day the machine code is still binary: on or off, 1 or 0. It’s great for pattern recognition, but assuming that pattern recognition can be built up into moral reasoning is just that: an assumption. We simply haven’t cracked the code for how humans develop moral reasoning yet, so to demand that computers have it built in before they’re even built strikes me as little more than a cynical cover for technophobia.

And are machines incapable of moral reasoning all that bad? Underneath all of the interesting discussions about robots and morals, I keep getting left with a big “so what?” Do you care if a machine only simulates politeness, a basic and rough concern for sensibilities, and tries to maximize human life when presented with the trolley problem? Is the end result, the user experience if you will, all that different if the decision comes from rigid and unfeeling rule sets rather than from heartfelt agony over what the best decision is?
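For what it’s worth, the “rigid and unfeeling rule set” version fits in a few lines. This is a hedged Python sketch, entirely hypothetical and not any real system’s logic, of a trolley-problem policy that simply minimizes the number of people put at risk:

```python
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    people_at_risk: int  # how many people this choice endangers


def choose(options: list[Option]) -> Option:
    # "Maximize human life" reduced to a single rule: pick whichever
    # option endangers the fewest people. No agony, no weighing of who
    # those people are -- just arithmetic over an integer field.
    return min(options, key=lambda o: o.people_at_risk)


print(choose([Option("stay the course", 5), Option("pull the lever", 1)]).name)
# -> pull the lever
```

The entire “moral” content lives in the choice of the people_at_risk field; everything else is bookkeeping.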

Maybe. But I don’t see it, for the time being. And I certainly don’t see the big deal over whether Google’s next search algorithm has moral values attached to it, so long as it finds the things I want it to on the Internet. And maybe that’s just going to be good enough for any of us.
