
Racial bias in AI setting up future where ‘people of colour are erased’, says Cambridge study

‘Sophia’, the UN’s ‘innovation champion’, could be perpetuating racial stereotypes

The algorithm that marked down A-level grades for pupils from disadvantaged backgrounds while benefitting pupils at private schools is just one example of bias baked into technology - and University of Cambridge researchers have found that the problem extends to people of colour too.

Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) have concluded that “the overwhelming ‘whiteness’ of artificial intelligence – from stock images and cinematic robots to the dialects of virtual assistants – removes people of colour from humanity’s visions of its high-tech future”.

The researchers conducted an investigation into search engines, and found that “all non-abstract results for AI had either Caucasian features or were literally the colour white”.

A typical example of AI imagery adorning book covers and mainstream media articles is ‘Sophia’: the hyper-Caucasian humanoid declared an “innovation champion” by the UN development programme. But this recent iteration is just the tip of the iceberg, say researchers.

“Stock imagery for AI distils the visualisations of intelligent machines in western popular culture as it has developed over decades,” said Stephen Cave, executive director of CFI.

“From Terminator to Blade Runner, Metropolis to Ex Machina, all are played by white actors or are visibly white onscreen. Androids of metal or plastic are given white features, such as in I, Robot. Even disembodied AI – from HAL-9000 to Samantha in Her – have white voices. Only very recently have a few TV shows, such as Westworld, used AI characters with a mix of skin tones.”

Dr Kanta Dihal, who leads CFI’s ‘Decolonising AI’ initiative and is an author of the new paper making the case for decolonising AI, published this month in the journal Philosophy and Technology, points out that even works clearly based on slave rebellion, such as Blade Runner, depict their AIs as white.

“AI is often depicted as outsmarting and surpassing humanity,” said Dr Dihal. “White culture can’t imagine being taken over by superior beings resembling races it has historically framed as inferior.

“Images of AI are not generic representations of human-like machines: their whiteness is a proxy for their status and potential.

“Portrayals of AI as white situate machines in a power hierarchy above currently marginalised groups, and relegate people of colour to positions below that of machines. As machines become increasingly central to automated decision-making in areas such as employment and criminal justice, this could be highly consequential.

“The perceived whiteness of AI will make it more difficult for people of colour to advance in the field. If the developer demographic does not diversify, AI stands to exacerbate racial inequality.”

Honda robot: whiteness is normalised - even by Japanese designers

Cultural depictions of AI as white need to be challenged, say the researchers, as they “do not offer a post-racial future but rather one from which people of colour are simply erased”. AI, they say, is - perhaps inadvertently - “creating a racially homogenous workforce of aspiring technologists, building machines with bias baked into their algorithms”.

They argue that there is a long tradition of crude racial stereotypes when it comes to extraterrestrials – from the “orientalised” alien of Ming the Merciless to the Caribbean caricature of Jar Jar Binks.

But artificial intelligence is portrayed as white because, unlike species from other planets, AI has attributes used “to justify colonialism and segregation in the past: superior intelligence, professionalism and power”.

“Given that society has, for centuries, promoted the association of intelligence with white Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a white machine,” said Dr Dihal. “People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialised as white, that could have dangerous consequences for humans that are not.”

The paper brings together recent research from a range of fields, including human-computer interaction and critical race theory, to demonstrate that “machines can be racialised, and that this perpetuates real-world racial biases”.

This includes work on how robots are seen to have distinct racial identities, with black robots receiving more online abuse, and a study showing that people feel closer to virtual agents when they perceive shared racial identity.

“One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard white middle-class English,” concludes Dr Dihal. “Ideas of adding black dialects have been dismissed as too controversial or outside the target market.”
