How Artificial Intelligence Can Turn Us Into Bigots
Growing up as a bookish, bullied kid in 1970s Birmingham, Alabama, I became obsessed with science fiction.
I couldn’t get enough far-out science fantasy, particularly the stories about intelligent computers or androids. In retrospect, I think their struggles as artificial people resonated with my difficulties fitting in as a little boy. Superficially those stories may have been about robots, but I think they were really about understanding humanity. Today, after a 30-year career working in AI, I’m amazed that newspapers are filled with science fiction, but of a sort that I think may actually undermine humanity.
In 2016, The Atlantic reported that Google’s AlphaGo program, which bested a human Go master four out of five games, “imitates human intuition”. As an AI scientist and engineer, I can tell you that how this program plays the ancient Chinese board game is in no way like what we know scientifically about intuition.
In October last year, The Washington Post reported that a robotic hand “learned to solve Rubik’s Cube on its own – just like a human”. In fact, the “hand” didn’t learn to solve the cube at all; the computers behind it used a rote program, and only “learned” (again, in a very non-human way) to turn the cube’s faces – and not reliably, at that: the machine drops the cube quite frequently.
In the same month, The New Yorker reported that “AIs … can write for you”, including in its article “text that an artificial intelligence predicted would come next”. A careful reading revealed that the human writer selected those paragraphs from several alternatives, generated by a machine programmed carefully for that specific article.
Numerous studies say that AIs will soon be able to do our jobs, leading to a World Without Work. This, while employment increasingly dominates people’s lives: in the UK, the intrusion of work emails into personal time has gone so far as to become a political issue. While millions of people’s fundamental rights are denied or undermined, some are concerned about the rights of sentient robots, even though such machines are far from actually existing. Others believe that big-data-analysing computers will soon know us better than our friends, our partners, or even ourselves. While it is true that the data we generate is increasingly surveilled and exploited by computers, do they really “know” us? If so, their insights display some disturbingly retrograde trends.
A Google image search for “unprofessional hair” returned pictures of black women, while another of the search giant’s automatic image classification algorithms labelled a black couple “gorillas”. A UK government site repeatedly rejected a black man’s passport photo because his lips were algorithmically mistaken for an open mouth. A Microsoft Twitter bot learned to say “Hitler was right I hate the jews”.
The Apple Card’s credit scoring algorithm granted vastly larger limits to men than women. Google autosuggest completed the phrases “are Jews” and “are women” with the word “evil” (and clicking through on the resulting searches led to sites that answered that machine-generated query in the affirmative).
My experience makes it easier for me to recognise the fictions in headlines about AI, but the appearance of old-school intolerance in algorithms is as troubling to me as it is disturbingly familiar from my youth in Alabama. But I think I know why it’s happening.
While the computing involved in AI is often incomprehensibly vast and fast, it always rests upon foundations of simplifying, quantifying, and generalising. And that’s okay: it’s how engineers traditionally solve problems, by breaking them down into simple parts, simplifying where necessary, and building in safety margins to account for inaccuracies.
But history shows that science which simplifies people has always demanded especially careful human consideration to keep it from turning into bigotry. Today it is machines, not people, doing this simplification of people. And it is pure science fiction to think we have AI that is in any way capable of careful, human-like consideration in such matters.
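The mechanics are easy to sketch. As a purely hypothetical illustration (the data, features, and threshold below are all invented), consider a classifier that “learns” by averaging its training examples: whoever differs from the learned average, however qualified, falls outside the generalisation.

```python
# Toy sketch (invented data): a system that generalises by averaging
# its training examples inherits whatever skew those examples contain.

def train_centroid(samples):
    """Reduce a group of people to a single average profile --
    the 'simplifying, quantifying, generalising' step."""
    n = len(samples)
    return [sum(feature) / n for feature in zip(*samples)]

def accepts(person, centroid, threshold=1.0):
    """Accept anyone 'close enough' to the learned average."""
    dist = sum((a - b) ** 2 for a, b in zip(person, centroid)) ** 0.5
    return dist <= threshold

# Training data drawn almost entirely from one majority pattern:
majority = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0]]
centroid = train_centroid(majority)

# Someone matching the majority pattern passes; someone who merely
# differs from the average is rejected, with no 'consideration' at all.
print(accepts([1.0, 1.0], centroid))  # True
print(accepts([3.0, 0.5], centroid))  # False
```

Nothing in this sketch is malicious; the exclusion falls out of the averaging itself, which is why the careful human consideration described above cannot be delegated to the machine.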
The sci-fi characters I read about as a kid – from Frankenstein to HAL to Star Trek’s Lieutenant Commander Data, and even the Terminator (at least in the second film) – were all explorations of humanity through the eyes of the ultimate outsiders. In contrast, today’s science fictions about AI mask its entirely quantitative perspective on people behind over-hyped headlines about its capabilities.
Without some human understanding of this reality, those who don’t fit into simplified, quantitatively driven categories, or those who fall into the inevitable minority categories, could be pushed further to the outskirts of social inclusion.
However, there is hope. If we can all gain a more scientifically realistic perspective on machine thinking, we could begin to turn back the clock on pseudoscientific thinking that has for centuries attempted to quantify human qualities, rank human beings, and place them into restrictive, socially counterproductive categories such as race and gender. If we learn to see through misleading headlines about AI, today’s manifest contrasts between science fiction and science reality could redirect the way we treat not only machines but one another.
Featured Image: Disney