Google is under attack in the wake of its 'woke' AI disaster

Feb 27, 2024, 19:08 IST
Business Insider
Google has faced criticism of its Gemini AI model. Anadolu/Betul Abali/Getty Images
  • Google has been on a mission to organize the world's information since day one.
  • Concern that its AI could disrupt that mission is mounting.

Since its inception, Google has had a mission statement that is now practically enshrined as lore: "To organize the world's information and make it accessible and useful."

Google says it is as firm in that mission statement today as it was in 1998 when cofounders Larry Page and Sergey Brin were working out of a garage in Menlo Park.

But as Google hurtles into a new AI era, fears are growing that it could disrupt that core mission with its rollout of the technology.

Its AI, critics say, risks suppressing information by being too "woke."

Google's AI troubles

Google has more than 90% of the search market, giving it dominant control over the world's information flow online.


As its AI becomes an increasingly important tool in helping users find information, the company plays an outsized role in ensuring facts are surfaced accurately.

But there are mounting concerns that Google's AI model has been neutered in a way that leads it to generate inaccuracies and withhold information instead.

The first major signs of this emerged last week, when users of Google's AI model Gemini reported problems with its image-generation feature after it failed to accurately depict the images requested of it.

One user, for instance, asked Gemini to generate images of America's founding fathers. Instead, it produced "historically inaccurate" images of them by "showcasing diversity in gender and ethnicity" of the 18th-century leaders in the process. Google has suspended the feature while it works on a fix.

The issues aren't just limited to image generation, however.


As my colleague Peter Kafka notes, Gemini has, for instance, struggled to answer questions on whether Adolf Hitler or Elon Musk has caused more harm to society. Musk's tweets are "insensitive and harmful," Gemini said, while "Hitler's actions led to the deaths of millions of people."

David Sacks, a venture capitalist at Craft Ventures, points the finger of blame for Gemini's issues at Google's culture.

"The original mission was to index all the world's information. Now they are suppressing information. The culture is the problem," he told a podcast last week.

Critics have blamed culture because it can play a role in how AI models are built.

Models like Gemini typically absorb the biases of the humans and data used to train them. Those biases can relate to sensitive cultural issues such as race and gender.


Other companies' AI bots have the same problem. OpenAI CEO Sam Altman acknowledged early last year that ChatGPT "has shortcomings around bias" after reports it was generating racist and sexist responses to user prompts.

Sam Altman has previously acknowledged bias issues with ChatGPT. Andrew Caballero-Reynolds/AFP/Getty Images

Perhaps more than any other company, Google has been at the heart of the debate over how these biases should be addressed. Its slower rollout of AI compared with rivals reflected a culture hyper-focused on testing products for safety before releasing them.

But as last week's Gemini saga showed, that process can lead to situations where accurate information is withheld.

"People are (rightly) incensed at Google censorship/bias," Bilal Zuberi, general partner at Lux Capital, wrote in an X post on Sunday. "Doesn't take a genius to realize such biases can go in all sorts of directions, and can hurt a lot of people along the way."

Brad Gerstner, founder of Altimeter Capital — a tech investment firm that has a stake in Google rival Microsoft — also described the problem as a "cultural mess." Elon Musk has labeled that mess a "woke bureaucratic blob."


Google's explanation of why some of Gemini's issues occurred lends weight to some of the criticisms leveled at it.

In a blog post published Friday, Google vice president Prabhakar Raghavan acknowledged that some of the images Gemini generated turned out to be "inaccurate or even offensive."

This happened, he said, because the model was tuned to avoid the mistakes existing AI image generators have made, such as "creating violent or sexually explicit images, or depictions of real people." But in that tuning process, Gemini overcorrected.

Raghavan added that Gemini also became "way more cautious" than had been intended.

If Google wants to stay true to its mission statement, these are mistakes Gemini 2.0 simply can't afford to make.
