Google is trying to be 'safe and responsible' with AI, says the engineer who got fired after sentience claim

Apr 30, 2023, 20:54 IST
Business Insider
Blake Lemoine said Google has "far more advanced technology" it hasn't released yet. Getty Images
  • The Google engineer fired after saying an AI chatbot was sentient said the company is being "responsible".
  • Alphabet isn't "being pushed around by OpenAI", Blake Lemoine told Futurism.

A Google engineer who was fired after saying the company's AI chatbot had gained sentience said Google is approaching artificial intelligence in a "safe and responsible" way.

Blake Lemoine, a former member of Google's Responsible AI team, told Futurism he doesn't think Google is "being pushed around by OpenAI" and that the company behind ChatGPT had not affected "Google's trajectory."

"I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something," he said.

Lemoine also claimed Bard was in development in mid-2021, well before ChatGPT was released in late 2022.

"It wasn't called Bard then, but they were working on it, and they were trying to figure out whether or not it was safe to release it," he said. "They were on the verge of releasing something in the fall of 2022. So it would have come out right around the same time as ChatGPT, or right before it. Then, in part because of some of the safety concerns I raised, they deleted it."


The engineer, who joined Google in 2015 according to his LinkedIn profile, also told Futurism that the company has "far more advanced technology" that it hasn't released yet.

He said a product that essentially had the same capabilities as Bard could've been released two years ago, but Google has been "making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that."

Lemoine told The Washington Post last June he believed Google's Language Model for Dialogue Applications (LaMDA) became a sentient entity after he chatted with it. He also shared an "interview" he carried out with LaMDA in a Medium post, which he claimed was evidence of its independent thoughts.

He was fired later that month, with Google saying he had violated its confidentiality policy. A company representative told Insider at the time that his sentience claims were unsupported and that there was no evidence LaMDA was conscious.

Google didn't immediately respond to a request for comment from Insider, made outside normal working hours.
