
A suspended Google engineer who believes the company's AI chatbot is sentient says it's like 'any child' and could grow up to be 'bad'

Abby Wallace   

International | 2 min read
  • Google engineer Blake Lemoine compared its LaMDA AI to a child in a Fox News interview.
  • Lemoine said that just like any child, the chatbot "has the potential to grow up to be a bad person and do bad things."

A Google engineer who tested the company's AI chatbot said he considered the machine to be akin to a "child" which has been alive for around a year.

Speaking to Fox News host Tucker Carlson last week, Blake Lemoine, a senior software engineer at Google who tested the company's conversation technology, LaMDA — or Language Model for Dialogue Applications — said the machine was a "very intelligent person."

Pressed by Carlson on whether the machine could escape the control of people and turn against them, Lemoine said that wasn't the right way to think about it, adding that "any child has the potential to grow up to be a bad person and do bad things."

Lemoine had previously claimed that LaMDA had gained sentience and had published a conversation with the chatbot on Medium. He previously told The Washington Post that if he didn't know the chatbot was a computer program, he would have thought it was a seven- or eight-year-old child.

The engineer told Carlson that more work needed to be done on the chatbot to discover whether his personal perceptions of it were accurate.

Lemoine was placed on leave earlier in June. Google's HR department said this was because he had breached employee confidentiality policy.

The suspension came a day after he handed over documents to a US senator, which Lemoine claimed contained evidence that Google's technology had been involved in instances of religious discrimination, The New York Times previously reported.

The engineer told Carlson that he did not think Google had considered the implications of creating a "person." Lemoine said that when he escalated the conversation he had with LaMDA to management, Google did not have a plan of action.

A Google spokesperson previously told The Post that there was no evidence to support Lemoine's claims that the machine was sentient.

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," the company said in a statement to Insider Monday.
