- A tool called Face Depixelizer grabbed the attention of the artificial intelligence research community this weekend.
- The tool takes pixelated pictures of people and uses AI to reconstruct sharp images of them.
- When given a pixelated photograph of Barack Obama, Face Depixelizer turned him into a white man.
- This is an illustration of ingrained algorithmic racial bias, which happens when the data sets algorithms are trained on are made up primarily of white male faces.
A new artificial intelligence tool for making clear pictures of people's faces from pixelated images has become a prime, if unwitting, example of algorithmic racial bias.
The tool, called Face Depixelizer, was built by a coder and put out on Twitter. It is designed to take pixelated photographs of people and reconstruct a sharp, accurate image of their faces using AI.
[Embedded tweet from Bomze (@tg_bomze), June 19, 2020]
Users soon noticed, however, that the system was not particularly accurate when processing Black faces. One user shared what happened after they fed in a pixelated picture of Barack Obama: Face Depixelizer turned him into a white man.
[Embedded tweet from Chicken3gg (@Chicken3gg), June 20, 2020]
Another user ran the same image through the tool multiple times, and each time its reconstructed version of Obama was white.
[Embedded tweet from Ken Chic (@bitcashio), June 20, 2020]
Machine learning blogger Robert Osazuwa Ness then ran other pictures of people of color through the system, including himself, Rep. Alexandria Ocasio-Cortez, and actress Lucy Liu. Face Depixelizer consistently reconstructed their faces to look white.
[Embedded tweet from Robert Osazuwa Ness (@osazuwa), June 20, 2020]
One explanation for why Face Depixelizer does this could lie in its training data set.
Machine learning algorithms are trained on large data sets, from which they identify patterns and teach themselves what they are supposed to be looking for. A lack of diversity in widely used data sets means these systems are often trained predominantly on images of white men, so they infer that white male characteristics are the default.
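To make that concrete, here is a minimal toy sketch in Python (purely illustrative, with made-up numbers; it is not the Face Depixelizer code). It shows how a model that simply minimizes average error over an imbalanced data set ends up sitting close to the over-represented group, effectively treating it as the default:

```python
import numpy as np

# Toy data set of "face feature vectors": 90% of examples come from group A,
# 10% from group B. (Purely illustrative; nothing here is real face data.)
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=(900, 8))  # over-represented group
group_b = rng.normal(loc=5.0, scale=1.0, size=(100, 8))  # under-represented group
dataset = np.vstack([group_a, group_b])

# The single point that minimizes mean squared error over the whole data set
# is the data set mean, which sits far closer to group A than to group B.
best_guess = dataset.mean(axis=0)
print("distance to group A center:", np.linalg.norm(best_guess - 0.0))
print("distance to group B center:", np.linalg.norm(best_guess - 5.0))
```

A model trained this way will tend to produce something that looks like the majority group whenever the input is ambiguous, which is exactly what a heavily pixelated photo is.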
Although Face Depixelizer is an illustration of how these systems can fail at identifying Black people's faces, the implications of algorithmic racial bias go far beyond a depixelating proof-of-concept tool circulating on Twitter.
One example of how this bias plays out in the real world is facial recognition. Quite apart from the fact that police are more likely to use facial recognition to target communities of color, studies have shown that commercially available facial recognition software is far more likely to misidentify women and people with darker skin tones.
Another possible explanation for the tool's behavior is a phenomenon called "mode collapse."
Mode collapse affects the kind of AI used to build Face Depixelizer, called a generative adversarial network, or GAN. GANs work by pitting two algorithms against each other: one generates fake images while the other tries to spot the fakes. Mode collapse means that even a small bias in the data can be exaggerated by the GAN, because the generating algorithm gravitates toward whatever output fools the other one most easily.
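Below is a minimal, self-contained GAN sketch in Python using PyTorch (a toy one-dimensional example under assumed settings, not PULSE or the actual Face Depixelizer code). A generator turns random noise into fake samples, a discriminator tries to tell real samples from fakes, and because one mode of the "real" data dominates, the generator can collapse onto it:

```python
import torch
import torch.nn as nn

# Toy "real" data with two modes: an over-represented one near 0 and an
# under-represented one near 5. (Illustrative numbers, not real face data.)
real_data = torch.cat([
    torch.randn(900, 1) + 0.0,  # over-represented mode
    torch.randn(100, 1) + 5.0,  # under-represented mode
])

# Generator maps 4-dimensional noise to a fake sample; discriminator outputs
# a logit saying how "real" a sample looks.
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: label real samples 1 and generated fakes 0.
    real = real_data[torch.randint(len(real_data), (64,))]
    fake = generator(torch.randn(64, 4)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call its fakes real.
    fake = generator(torch.randn(64, 4))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Inspect what the trained generator produces from fresh noise.
samples = generator(torch.randn(1000, 4)).detach()
print("mean of generated samples:", samples.mean().item())
print("spread of generated samples:", samples.std().item())
```

If mode collapse occurs, nearly all of the generator's outputs will cluster around the dominant mode near 0 rather than covering both modes, mirroring how a face-reconstruction GAN trained mostly on white faces could drift toward producing white-looking faces.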