
Google employees who tested its ChatGPT rival Bard reportedly called it 'cringeworthy' and a 'pathological liar,' and said the AI's responses could 'result in serious injury or death'

Apr 20, 2023, 01:35 IST
Business Insider
Google employees tasked with testing Google's AI chatbot Bard called it "cringeworthy" and a "pathological liar," according to Bloomberg. Jakub Porzycki/NurPhoto via Getty Images
  • Google employees who tested Bard called it "cringeworthy" and a "pathological liar," per Bloomberg.
  • One employee reportedly said the AI chatbot's advice could "result in serious injury or death."

Google employees tasked with testing their employer's AI chatbot Bard were not all that pleased by what they found, according to Bloomberg.

After testing the bot, one employee reportedly called Bard "cringeworthy," and another called it a "pathological liar," according to screenshots of internal discussions obtained by Bloomberg.

Others found the AI chatbot could be potentially dangerous. One employee said they asked Bard for tips on how to land a plane; the chatbot spit out advice that would cause a crash, Bloomberg reported. Another reportedly said Bard generated scuba diving advice "which would likely result in serious injury or death."

One employee even pleaded with Google to think twice before launching its chatbot, Bloomberg reported.

"Bard is worse than useless: please do not launch," the employee wrote in a February internal message that was viewed by almost 7,000 people, per Bloomberg


Employees tasked with assessing the safety and ethics of Google's new products told Bloomberg that their employer discouraged them from trying to slow the development of its AI technology, which they said lowered their morale.

They added that their employer is deprioritizing AI ethics in order to release a chatbot that can beat OpenAI's ChatGPT as quickly as possible, Bloomberg reported.

"Responsible AI remains a top priority at the company, and we are continuing to invest in the teams that work on applying our AI Principles to our technology," a Google spokesperson told Insider.

Despite employee concerns, Google still decided to reveal Bard to the public during a disastrous demo in February that featured the chatbot making a factual error. A couple of days later, John Hennessy, Alphabet's chair, said Google wasn't "really ready" to release its AI. Google employees called the announcement "rushed" and "botched."

The criticism has only continued. A month after the demo, Google launched a beta version of Bard. Like the employees, users with early access weren't impressed.


Users alleged that Bard spread misinformation, plagiarized articles, and got basic math wrong, according to Gizmodo.

As tech giants like Microsoft continue to release their own AI chatbots, the question of how these companies will address the ethics of their AI tools remains, Meredith Whittaker, a former Google manager, told Bloomberg.

"AI ethics has taken a back seat," Whittaker said. "If ethics aren't positioned to take precedence over profit and growth, they will not ultimately work."

Read Bloomberg's full report on Google employees' responses to Bard here.
