
The selfie tool going viral for its weirdly specific captions is really designed to show how bigoted AI can be

Sep 17, 2019, 20:20 IST

ImageNet Roulette classifies people's selfies. Isobel Asher Hamilton/Business Insider

  • A website called ImageNet Roulette went viral on Twitter by letting people upload their selfies and have an AI try to guess what kind of person they are.
  • The AI was trained on a huge and significant dataset of images called ImageNet. The classifications it can come up with are incredibly wide-ranging, including terms like "computer-user," "grandma," and "first offender."
  • Some people of color, including New Statesman journalist Stephen Bush, noticed that some of the classifier's terms were racist.
  • Showing these terms is deliberate: ImageNet Roulette is partly designed to show the dangers of AI bias.
  • Visit Business Insider's homepage for more stories.

A new viral tool that uses artificial intelligence to label people's selfies is demonstrating just how weird and biased AI can be.

The ImageNet Roulette site, shared widely on Twitter on Monday, was created by AI Now Institute cofounder Kate Crawford and artist Trevor Paglen. The pair are examining the dangers of training AI on datasets with ingrained biases, such as racial bias.

ImageNet Roulette's AI was trained on ImageNet, a database of 14 million labelled images compiled in 2009. ImageNet is one of the most important and comprehensive training datasets in the field of artificial intelligence, in part because it's free and available to anyone.

The creators of ImageNet Roulette trained their AI on the 2,833 sub-categories of "person" found in ImageNet.


Users upload photographs of themselves, and the AI uses this training to try to fit them into these sub-categories.
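
Mechanically, this is standard image classification: a network pretrained on ImageNet maps a photo to scores over the label set. The article doesn't publish ImageNet Roulette's actual model, so the sketch below uses torchvision's off-the-shelf ResNet, which covers ImageNet's 1,000 object classes rather than the project's 2,833 person sub-categories, and "selfie.jpg" is a placeholder file name.

```python
# A minimal sketch of classifying a photo with a pretrained ImageNet model.
# Not ImageNet Roulette's code: this uses torchvision's standard 1,000-class
# ResNet-50, and "selfie.jpg" is a hypothetical input file.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, and normalize as the model expects

img = Image.open("selfie.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
print(f'{weights.meta["categories"][top_idx.item()]}: {top_prob.item():.1%}')
```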

This Business Insider reporter tried uploading a selfie and was identified by the AI as "myope," a short-sighted person. I wear glasses, which would seem the most likely explanation for the classification.

Some of the classifications the engine came up with were more career-oriented or even abstract. "Computer user," "enchantress," "creep," and "pessimist" were among the classifications thrown up. Plugging a few more pictures of myself in yielded such gems as "sleuth," "perspirer, sweater," and "diver."

Other users were variously bewildered and amused by their classifications:

However, a less amusing side soon became apparent, as the classifier threw up disturbing labels for people of color. New Statesman political editor Stephen Bush found a picture of himself classified not only along racial lines, but with racist slurs like "negroid."


Another of his photos was labelled "first offender."

And a photo of Bush in a Napoleon costume was labelled "Igbo," an ethnic group from Nigeria.

 

However, this isn't a case of ImageNet Roulette going unexpectedly off the rails like Microsoft's social media chatbot Tay, which had to be shut down less than 24 hours after being exposed to Twitter denizens who manipulated it into being a Holocaust denier.

Instead, creators Crawford and Paglen wanted to highlight what happens if the fundamental data used to train AI algorithms is bad. ImageNet Roulette is currently on display as part of an exhibition in Milan.


Read more: Taylor Swift once threatened to sue Microsoft over its chatbot Tay, which Twitter manipulated into a bile-spewing racist

"ImageNet contains a number of problematic, offensive and bizarre categories - all drawn from WordNet. Some use misogynistic or racist terminology," the pair wrote on the site.

"Hence, the results ImageNet Roulette returns will also draw upon those categories. That is by design: we want to shed light on what happens when technical systems are trained on problematic training data. WordNet is a database of word classifications formulated at Princeton in the 1980s and was used to label the images in ImageNet."

Crawford tweeted that although ImageNet was a "major achievement" for AI, being such a huge database, the project revealed fundamental problems with bias: "be it race, gender, emotions or characteristics. It's politics all the way down, and there's no simple way to 'debias' it."

AI bias is far from a theoretical problem. In 2016, a ProPublica investigation found that COMPAS, a computer programme used to predict the likelihood of criminals re-offending, displayed racial bias against black people. Similarly, Amazon scrapped an AI recruitment tool it was working on last year after finding that the system downgraded applications from women.

