Google suspends Gemini from making AI images of people after a backlash complaining it was 'woke'
- Google says it plans to fix issues flagged with its AI model, Gemini.
- Users complained Gemini generated historically inaccurate images of people of color.
Google says it will fix Gemini, its answer to OpenAI's GPT-4, after users complained that the multimodal AI model's image-generating feature was "woke."
On Thursday, the company said in a statement sent to Business Insider that it was pausing Gemini from generating AI images of people while it makes the changes.
Social media users complained that Gemini was producing images of people of color in historically inaccurate contexts, as BBC News was among the first outlets to report.
For example, software engineer Mike Wacker posted on X that a prompt to generate images of the Founding Fathers produced an image of a woman of color and, later, a man of color wearing a turban.
Others on X complained that Gemini had gone "woke," citing instances where prompts regularly resulted in responses including the word "diverse."
Debarghya Das, a computer scientist who used to work at Google, said on X that "it's embarrassingly hard to get Google Gemini to acknowledge that white people exist."
In a statement provided to Business Insider via email, a Google spokesperson said the company was "working to improve these kinds of depictions immediately."
The spokesperson highlighted the importance of generating images representing a diverse range of people, given Gemini's global user base, but admitted that it's "missing the mark here."
A post from Google on X acknowledged that Gemini was "offering inaccuracies in some historical image generation depictions."
In further comments provided to BI by email on Thursday, a Google spokesperson said Google would temporarily pause Gemini's feature that generates images of people while the changes are made.
The spokesperson added that Google will "re-release an improved version soon."