
A Google researcher – who said she was fired after pointing out biases in AI – says companies won't 'self-regulate' because of the AI 'gold rush'

Sawdah Bhaimiya   

  • Ex-Google AI researcher said the current AI "gold rush" means firms won't "self-regulate."
  • Timnit Gebru was fired from Google shortly after writing a paper that pointed out biases in AI.

A former Google researcher, who said she was fired after pointing out the biases in AI, has warned that the current AI "gold rush" means companies won't "self-regulate" unless they are pressured.

Timnit Gebru, a computer scientist who specializes in artificial intelligence, was co-lead at Google's ethical AI team before a clash with the company resulted in her being ousted in 2020. Gebru opened up about her experience with The Guardian in an interview published Monday.

The 40-year-old, who also had stints at Apple and Microsoft, explained that the hype around AI means safeguards are being cast aside.

"Unless there is external pressure to do something different, companies are not just going to self-regulate," she told The Guardian. "We need regulation and we need something better than just a profit motive."

"It feels like a gold rush," Gebru added. "In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it's humans who decide whether all this should be done or not. We should remember that we have the agency to do that."

Gebru left Google shortly after she co-authored a research paper with colleagues pointing out the built-in biases in AI tools.

Senior management at Google reportedly took issue with the paper and asked her to either withdraw it or remove her name and her colleagues' names from it, per Insider's report at the time. Gebru told The Guardian that she refused to retract the paper and said she would only remove the authors' names if Google made its criticisms clear.

Gebru said she was fired after that, but Google claimed she resigned, per The Guardian.

"We believe our approach to AI must be both bold and responsible. That means developing AI in a way that maximizes the positive benefits to society while addressing the challenges, guided by our AI Principles," Google said in a statement to Insider.

Insider reached out to Gebru for comment but didn't immediately hear back.

"The paper doesn't say anything surprising to anyone who works with language models," one employee, who reviewed the report, told Insider at the time. "It basically makes what Google's doing look bad."

OpenAI's chatbot, ChatGPT, was launched in November 2022 and became the fastest-growing consumer app in internet history. Google released its own version, Bard, in March.
