
A suspended Google engineer who believes the company's AI chatbot is sentient says it's like 'any child' and could grow up to be 'bad'

Jun 28, 2022, 03:05 IST
Business Insider
Blake Lemoine in San Francisco, California on Thursday, June 9, 2022. Martin Klimek for The Washington Post via Getty Images.
  • Google engineer Blake Lemoine compared its LaMDA AI to a child in a Fox News interview.
  • Lemoine said that just like any child, the chatbot "has the potential to grow up to be a bad person and do bad things."

A Google engineer who tested the company's AI chatbot said he considered the machine to be akin to a "child" which has been alive for around a year.

Speaking to Fox News host Tucker Carlson last week, Blake Lemoine, a senior software engineer at Google who tested the company's conversation technology, LaMDA — or Language Model for Dialogue Applications — said the machine was a "very intelligent person."

Pressed by Carlson on whether the machine could escape human control and turn against people, Lemoine said that wasn't the right way to think about it, adding that "any child has the potential to grow up to be a bad person and do bad things."

Lemoine had previously claimed that LaMDA had gained sentience and had published a conversation with the chatbot on Medium. He previously told The Washington Post that if he didn't know the chatbot was a computer programme, he would have thought it was a seven- or eight-year-old child.

The engineer told Carlson that more work needed to be done on the chatbot to discover whether his personal perceptions of it were accurate.


Lemoine was placed on leave earlier in June. Google's HR department said this was because he had breached the company's employee confidentiality policy.

The suspension came a day after he handed over documents to a US senator, which Lemoine claimed contained evidence that Google's technology had been involved in instances of religious discrimination, The New York Times previously reported.

The engineer told Carlson that he did not think Google had considered the implications of creating a "person." Lemoine said that when he escalated the conversation he had with LaMDA to management, Google did not have a plan of action.

A Google spokesperson previously told The Post that there was no evidence to support Lemoine's claims that the machine was sentient.

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," the company said in a statement to Insider Monday.
