Google engineer believed chatbot had become an 8-year-old child. Experts say it's not sentient — just programmed to sound 'real'

Jun 14, 2022, 15:44 IST
Business Insider
  • Last week, a Google engineer was put on leave after he claimed the company's chatbot was sentient.
  • Insider spoke with seven experts who said the chatbot likely isn't sentient.

It's unlikely — if not impossible — that a Google chatbot has come to life, experts told Insider after one of the search giant's senior engineers was suspended for making startling claims.

The engineer told The Washington Post that in chatting with Google's interface called LaMDA — or Language Model for Dialogue Applications — he had begun to believe that the chatbot had become "sentient," or able to perceive and feel just like a human. Blake Lemoine, the engineer, worked in Google's Responsible Artificial Intelligence Organization.

But Lemoine, who didn't respond to a request for comment from Insider, is apparently on his own when it comes to his claims about the artificial intelligence-powered chatbot: A Google spokesperson said a team of ethicists and technologists reviewed Lemoine's claims and found no evidence to support them.

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," the spokesperson said.

Seven experts Insider contacted agreed: They said the AI chatbot probably isn't sentient and that there is no clear way to gauge whether the AI-powered bot is "alive."


"The idea of sentient robots has inspired great science fiction novels and movies," Sandra Wachter, a professor at the University of Oxford who focuses on the ethics of AI, told Insider. "But we are far away from creating a machine that is akin to humans and the capacity for thought," she added.

A simple system

Another Google engineer who has worked with LaMDA told Insider that the chatbot, while capable of carrying on a multitude of conversations, follows relatively simple processes.

"What the code does is model sequences in language that it has harvested from the web," the engineer, who prefers to remain anonymous due to Google media policies, told Insider. In other words, the AI can "learn" from material scattered across the web.


The engineer said it is extremely unlikely, in any physical sense, that LaMDA could feel pain or experience emotion, despite conversations in which the machine appears to convey emotion. In one conversation Lemoine published, the chatbot says it feels "happy or sad at times."

It's difficult to distinguish 'sentience'

The Google engineer and several experts told Insider that there is no clear way to determine "sentience," or distinguish between a bot that has been designed to mimic social interactions versus one that might be capable of actually feeling what it conveys.


"You couldn't somehow distinguish between feeling and not feeling based on the sequences of words that come out because they are just patterns that have been learned," the engineer told Insider. "There is no 'gotcha' question.'"

Laura Edelson, a postdoctoral researcher in computer science at NYU, told Insider the subject matter of the conversation between Lemoine and LaMDA does little to show proof of life. And the fact that the conversation was edited makes it even hazier, she said.

The Google logo is seen at the company's headquarters in Mountain View, California. Marcio Jose Sanchez/AP

"Even if you had a chatbot that could have a surface-level conversation about philosophy, that's not particularly different than a chatbot that can have a surface-level conversation about movies," Edelson said.

Giada Pistilli, a researcher specializing in AI ethics, told Insider it's human nature to ascribe emotions to inanimate objects — a phenomenon known as anthropomorphization.

And Thomas Dietterich, an emeritus professor of computer science at Oregon State University, said it's relatively easy for AI to use language involving internal emotions.


"You can train it on vast amounts of written texts, including stories with emotion and pain and then it can finish that story in a manner that appears original," he said. "Not because it understands these feelings, but because it knows how to combine old sequences to create new ones."

Dietterich told Insider the role of AI in society will undoubtedly face further scrutiny.

"SciFi has made sentience this magical thing, but philosophers have been struggling with this for centuries," Diettrich said. "I think our definitions of what is alive will change as we continue to build systems over the next 10 to 100 years."
