
Amazon's AI chatbot, Q, might be in the throes of a mental health crisis

Dec 3, 2023, 20:27 IST
Insider
Amazon employees have been sounding off about Q's struggles with accuracy and privacy since the chatbot's release this week. (SOPA Images)
  • Amazon Web Services unveiled Q, its business-focused generative AI chatbot, last week.
  • Amazon employees said the chatbot is leaking confidential information, according to Platformer.

Amazon's Q, the workplace AI chatbot that its cloud division unveiled on Tuesday, appears to have a few issues.

Employees using the chatbot said Q could potentially reveal confidential information — including the location of AWS data centers or unreleased features — according to leaked internal communications obtained by Platformer, a tech newsletter.

The bot is also "experiencing severe hallucinations," a phenomenon in which an AI confidently presents inaccuracies as if they were facts, the employees said.

In Q's case, it could deliver legal advice bad enough to "potentially induce cardiac incidents in Legal," as one employee put it in a company Slack channel, according to Platformer.

Amazon told Business Insider it had not identified any security issues related to Q and denied that Q had leaked confidential information.

"We appreciate all of the feedback we've already received and will continue to tune Q as it transitions from being a product in preview to being generally available," the company said in the statement.

It's not uncommon for generative AI chatbots to falter.


It wasn't long after Microsoft released its consumer-focused generative AI chatbot in Bing, internally codenamed Sydney, that the bot went viral with its own hallucinations. But Q's transgressions are all the more ironic given that it was designed to be a safer and more secure option that businesses could rely on.

Q was built to help workers generate emails, summarize reports, troubleshoot, research, and code. It was also designed to provide them with helpful answers, but only related to content "each user is permitted to access," Amazon said in a blog post about Q.

Correction: December 3, 2023 — This story has been updated to include a statement from Amazon and to note that some of the problems employees mentioned were hypothetical scenarios.
