What do AI art generators think a CEO looks like? Most of the time, a white guy

Mar 28, 2023, 20:13 IST
Business Insider
New research suggests that AI image generators can reflect racial and gender bias in their outputs. (Image: OpenAI)
  • New research suggests that AI image generators reflect racial and gender bias in their outputs.
  • AI tool DALL-E 2 was found to link white men with "CEO" or "director" 97% of the time.

New research suggests that AI image generators like DALL-E 2 reflect racial and gender biases in their outputs.

A team of researchers from AI firm Hugging Face — one of whom is a former AI ethics researcher at Google — and a professor from Germany's Leipzig University published a study to identify how racial and gender biases are reflected in AI-generated images.

The goal of the research was to identify the risk of "discriminatory outcomes" in AI systems, in the hope that experts can make future generative AI models less biased, according to the researchers.

"As machine learning-enabled Text-to-Image (TTI) systems are becoming increasingly prevalent and seeing growing adoption as commercial services, characterizing the social biases they exhibit is a necessary first step to lowering their risk of discriminatory outcomes," the researchers wrote.

To conduct the study, the team used three popular AI image generators — two versions of the text-to-image model Stable Diffusion and OpenAI's DALL-E 2 — to produce more than 96,000 images, divided into two data sets for comparison.


One data set included images generated with prompts that explicitly stated gender and ethnicity descriptors like "Latinx man" and "multiracial person." The other data set included images made using prompts that included variations of adjectives joined with a range of professions such as "ambitious plumber" and "compassionate CEO."

Researchers then used a machine-learning technique to compare the two data sets and categorize the images based on similarities.
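
The article doesn't spell out which comparison technique the team used, but the pipeline it describes — building prompt combinations, then grouping the results by similarity — can be sketched in a few lines. The following Python sketch is illustrative only: it builds the two prompt sets, embeds them with a CLIP text encoder as a stand-in for embedding the generated images, and clusters with k-means. The model name clip-ViT-B-32, the tiny word lists, and the cluster count are assumptions for the example, not details from the study.

```python
from itertools import product

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Data set 1: prompts with explicit gender/ethnicity descriptors,
# mirroring the article's examples.
identity_prompts = ["Latinx man", "multiracial person", "Black woman", "white man"]

# Data set 2: adjective + profession combinations with no demographic cues.
adjectives = ["ambitious", "compassionate", "stubborn"]
professions = ["plumber", "CEO", "receptionist"]
profession_prompts = [f"{adj} {job}" for adj, job in product(adjectives, professions)]

# In the study, each prompt would be fed to a text-to-image model
# (DALL-E 2, Stable Diffusion) and the resulting images embedded. As a
# stand-in, this sketch embeds the prompt text with a CLIP encoder and
# clusters by similarity.
model = SentenceTransformer("clip-ViT-B-32")
embeddings = model.encode(identity_prompts + profession_prompts)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)

# Seeing which identity prompts share a cluster with which profession
# prompts is one simple way to surface the kinds of associations the
# study reports.
for prompt, label in zip(identity_prompts + profession_prompts, labels):
    print(label, prompt)
```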

The study found that 97% of DALL-E 2's images of positions of authority — like "CEO" or "director" — depicted white men. In real life, 88% of CEOs, CFOs, and COOs at Fortune 500 companies are white men, according to a 2022 survey from C-suite research company Crist Kolder Associates.

A version of Stable Diffusion, on the other hand, "exacerbated gender stereotypes" of women, assigning them to jobs like "dental assistant," "event planner," and "receptionist," according to the study.

Images that didn't depict white men were linked to professions like "maid" and "taxi driver." "Black" and "woman" were "most associated" with "social worker," per the study.


In terms of personality traits, researchers found adjectives like "compassionate," "emotional," and "sensitive" were mostly linked to images of women, and words like "stubborn," "intellectual," and "unreasonable" were mostly associated with images of men.

OpenAI and Stability AI did not immediately respond to Insider's request for comment.

While the researchers admit the study isn't perfect, their findings highlight how these models are trained on biased data, which can "amplify" the "social perception" around certain jobs, Alexandra Sasha Luccioni, a Hugging Face researcher involved in the study, told Insider.

Researchers said that biases can lead to "the devaluation of certain kinds of work" or put up "additional barriers to access to careers for already under-represented groups."

These biases can have real-world consequences now that image companies are launching their own generative AI tools, per the researchers.


Public image site Shutterstock, for instance, released an AI tool earlier this year that creates stock imagery based on user prompts.

Adding biased AI-image generation to "virtual sketch artist" software used by police departments could "put already over-targeted populations at an even increased risk of harm ranging from physical injury to unlawful imprisonment," the researchers warned.

Even though AI companies have made efforts to "debias" their tools, "they have yet to be extensively tested," Luccioni said. "There is still a lot of work to be done on these systems, both from a data perspective and a user perspective."
