
Twitter is investigating after anecdotal data suggested its picture-cropping tool favors white faces

Isobel Asher Hamilton   

  • Twitter is investigating whether its automatic image cropper may be racially biased following some rudimentary tests conducted by users this weekend.
  • Users began to notice that the algorithm behind Twitter's automatic cropping tool appeared to be systematically favoring white faces.
  • The evidence so far is anecdotal, but Twitter has promised to investigate its systems.

Twitter is looking into the possibility that its automated tool for selecting which part of a picture to preview in tweets is racially biased against Black people.

For several years, Twitter has used machine learning to find the most "interesting" part of photos and crop accordingly for better image previews. The upshot is that as you scroll through Twitter, you'll likely see photo previews focused on faces rather than, say, necks or foreheads.
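In rough terms, a cropper like this scores every pixel for "interestingness" (saliency) and then keeps the fixed-size window with the highest total score. The sketch below illustrates only that windowing step, with a hand-made saliency map standing in for the learned model; the function names and the use of NumPy are illustrative assumptions, not Twitter's actual code.

```python
import numpy as np

def crop_to_salient_window(image, saliency, crop_h, crop_w):
    """Return the crop_h x crop_w window of `image` whose total saliency
    is highest. `saliency` is a per-pixel score map; in a real system it
    would come from a trained model, here it is supplied by the caller."""
    h, w = saliency.shape
    # Integral image: gives each candidate window's saliency sum in O(1).
    I = np.pad(saliency.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    best_score, best_yx = -np.inf, (0, 0)
    for y in range(h - crop_h + 1):
        for x in range(w - crop_w + 1):
            s = (I[y + crop_h, x + crop_w] - I[y, x + crop_w]
                 - I[y + crop_h, x] + I[y, x])
            if s > best_score:
                best_score, best_yx = s, (y, x)
    y, x = best_yx
    return image[y:y + crop_h, x:x + crop_w]

# Toy usage: a 100x200 image whose saliency "hot spot" sits on the right,
# so the crop follows it there.
rng = np.random.default_rng(0)
img = rng.random((100, 200, 3))
sal = np.zeros((100, 200))
sal[40:60, 150:180] = 1.0  # pretend a model scored this region as salient
print(crop_to_salient_window(img, sal, 80, 80).shape)  # -> (80, 80, 3)
```

The key point is that the crop simply follows whatever the saliency model scores highest, so any skew in those scores translates directly into which faces appear in previews.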

Questions about whether Twitter's photo previews might be racially biased arose from a tweet by PhD student Colin Madland about Zoom erasing a Black man's face when he used a virtual background.

This tweet prompted other Twitter users to test out the automated cropping, including developer Tony Arcieri.

Arcieri ran a simple test using two images, each containing photos of Barack Obama and Mitch McConnell separated by a wide stretch of white space, essentially forcing the algorithm to pick just one face for the image preview.

Obama's and McConnell's positions were swapped between the two images, but in both cases the preview zeroed in on McConnell's face.

Twitter replied to Arcieri's tweet saying it had tested its cropping algorithm for bias before building it into the platform, but indicated it would investigate the matter more deeply.

"We tested for bias before shipping the model & didn't find evidence of racial or gender bias in our testing. But it's clear that we've got more analysis to do. We'll continue to share what we learn, what actions we take, & will open source it so others can review and replicate," Twitter said.

Individual Twitter engineers also weighed in to say they'd take a closer look at the algorithm.

Zehan Wang, engineering lead at Twitter's machine learning research division Cortex, commented on Madland's original thread: "We'll look into this," adding that the algorithm Twitter currently uses has been in place since 2017 and doesn't rely on face detection.

Twitter's chief design officer Dantley Davis also weighed in, saying the algorithm could be picking up on things other than skin color.

CTO Parag Agrawal added that Twitter's systems need "continuous improvement."

Algorithmic bias is an issue that extends far beyond how Twitter crops photos.

Machine learning algorithms like the one used by Twitter rely on vast data sets. If these data sets are weighted in favor of a particular race, gender, or anything else, the resultant algorithm can then reflect that bias.
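As a toy illustration of that failure mode (not Twitter's data or model): if a system learns a single "prototype" of a salient face from training examples in which one group vastly outnumbers another, the prototype is pulled toward the overrepresented group, and faces from the underrepresented group score lower. All names and numbers below are made up for the example.

```python
import numpy as np

# Synthetic "face embeddings": group A outnumbers group B 20:1 in training.
rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=1.0, size=(2000, 8))  # overrepresented
group_b = rng.normal(loc=3.0, scale=1.0, size=(100, 8))   # underrepresented
train = np.vstack([group_a, group_b])

# The learned "prototype" here is just the training mean, so it sits almost
# entirely with the overrepresented group.
prototype = train.mean(axis=0)

def salience(x):
    # Higher score for embeddings closer to the learned prototype.
    return -np.linalg.norm(x - prototype)

test_a = rng.normal(0.0, 1.0, size=8)  # new face from group A
test_b = rng.normal(3.0, 1.0, size=8)  # new face from group B
print(salience(test_a) > salience(test_b))  # True: group A scores higher
```

Real saliency models are far more complex than a mean of embeddings, but the mechanism is the same: a skewed training set quietly becomes a skewed scoring function.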
