- Google employees who tested Bard called it "cringeworthy" and a "pathological liar," per Bloomberg.
- One employee reportedly said the AI chatbot's advice could "result in serious injury or death."
Google employees tasked with testing the company's AI chatbot, Bard, were not pleased with what they found, according to Bloomberg.
After testing the bot, one employee reportedly called Bard "cringeworthy," and another called it a "pathological liar," according to screenshots of internal discussions obtained by Bloomberg.
Others found the AI chatbot potentially dangerous. One employee said they asked Bard for tips on how to land a plane; the chatbot spat out advice that would cause a crash, Bloomberg reported. Another reportedly said Bard generated scuba diving advice "which would likely result in serious injury or death."
One employee even pleaded with Google to think twice before launching its chatbot, Bloomberg reported.
"Bard is worse than useless: please do not launch," the employee wrote in a February internal message that was viewed by almost 7,000 people, per Bloomberg
Employees tasked with assessing the safety and ethics of Google's new products told Bloomberg that the company discouraged them from trying to delay the development of its AI technology, which they said hurt morale.
They added that Google was deprioritizing AI ethics in order to release a rival to OpenAI's ChatGPT as quickly as possible, Bloomberg reported.
"Responsible AI remains a top priority at the company, and we are continuing to invest in the teams that work on applying our AI Principles to our technology," a Google spokesperson told Insider.
Despite employee concerns, Google revealed Bard to the public during a disastrous demo in February that featured the chatbot making a factual error. A couple of days later, John Hennessy, Alphabet's chairman, said Google wasn't "really ready" to release its AI. Google employees called the announcement "rushed" and "botched."
The criticism has only continued. A month after the demo, Google launched a beta version of Bard. Like the employees, users with early access weren't impressed.
Users alleged that Bard spread misinformation, plagiarized articles, and got basic math wrong, according to Gizmodo.
As tech giants like Microsoft continue to release their own AI chatbots, the question of how these companies will address the ethics of their AI tools remains, Meredith Whittaker, a former Google manager, told Bloomberg.
"AI ethics has taken a back seat," Whittaker said. "If ethics aren't positioned to take precedence over profit and growth, they will not ultimately work."