
Google's shiny new AI gave the wrong information in a promo video — again

May 15, 2024, 12:26 IST
Business Insider
Google CEO Sundar Pichai presents Google's Gemini. Google
  • Google's Gemini in Search demo video, released Tuesday, made a factual error.
  • Gemini suggested opening a film camera without a dark room, which would ruin the photos.

In two back-to-back days of big launches, OpenAI and Google showed the world their newest artificial intelligence projects.

They released impressive demo videos featuring all the new things OpenAI's GPT-4o can do, and how Google's Gemini will revolutionize Search as we know it.

But Google's Tuesday video shows one of the major pitfalls of AI: advice that isn't just unhelpful but flatly wrong. A minute into the flashy, quick-paced video, Gemini AI in Google Search presented a factual error, first spotted by The Verge.

A photographer takes a video of his malfunctioning film camera and asks Gemini: "Why is the lever not moving all the way?" Gemini provides a list of solutions right away — including one that would destroy all his photos.

The video highlights one suggestion from the list: "Open the back door and gently remove the film if the camera is jammed."


Professional photographers — or anyone who has used a film camera — know that this is a terrible idea. Opening a camera outdoors, where the video takes place, could ruin some or all of the film by exposing it to bright light.

Screen grab from Gemini in Search's demo video. Google

Google has faced similar issues with earlier AI products.

Last year, a Google demo video showed the Bard chatbot incorrectly claiming that the James Webb Space Telescope was the first to photograph a planet outside our solar system.

Earlier this year, the Gemini chatbot was hammered for refusing to produce pictures of white people. It was criticized for being too "woke" and for generating images riddled with historical inaccuracies, such as Asian Nazis and Black founding fathers. Google leadership apologized, saying they "missed the mark."

Tuesday's video highlights the perils of AI chatbots, which have been producing hallucinations — confidently stated but false output — and giving users bad advice. Last year, users of Bing, Microsoft's AI chatbot, reported strange interactions with the bot: it called users delusional, tried to gaslight them about what year it was, and even professed its love to some of them.


Companies deploying such AI tools may also be legally responsible for what their bots say. In February, a Canadian tribunal held Air Canada liable after its chatbot gave a passenger wrong information about bereavement discounts.

Google did not immediately respond to a request for comment sent outside standard business hours.
