- Amazon Web Services unveiled Q, its business-focused generative AI chatbot, last week.
- Amazon employees said the chatbot could leak confidential information, according to Platformer.
Amazon's Q, the AI chatbot for workers that its cloud division unveiled on Tuesday, appears to have a few issues.
Employees using the chatbot said Q could reveal confidential information — including the location of AWS data centers or unreleased features — according to leaked internal communications obtained by Platformer, a tech newsletter.
The bot is also "experiencing severe hallucinations," a phenomenon in which AI confidently presents inaccuracies as if they were facts, the employees said.
In Q's case, the bot could return legal advice bad enough to "potentially induce cardiac incidents in Legal," as one employee put it in a company Slack channel, according to Platformer.
Amazon told Business Insider it had not identified any security issues related to Q and denied that Q had leaked confidential information.
"We appreciate all of the feedback we've already received and will continue to tune Q as it transitions from being a product in preview to being generally available," the company said in the statement.
It's not uncommon for generative AI chatbots to falter.
It wasn't long after Microsoft released Bing Chat, its consumer-focused generative AI assistant, that the bot went viral for hallucinations of its own, delivered under its internal codename, Sydney. But Q's missteps are all the more ironic given that the bot was designed to be a safer and more secure option that businesses could rely on.
Q was built to help workers generate emails, summarize reports, troubleshoot, research, and code. It was also designed to give workers helpful answers, but only ones drawing on content "each user is permitted to access," Amazon said in a blog post about Q.
Correction: December 3, 2023 — This story has been updated to include a statement from Amazon and to note that some of the problems employees mentioned were hypothetical scenarios.