
Google's shiny new AI gave the wrong information in a promo video — again

Google CEO Sundar Pichai presents Google's Gemini. Google
  • Google's Gemini in Search demo video, released Tuesday, made a factual error.
  • Gemini suggested opening a film camera outside a darkroom, which would ruin the photos.
  • Google's demo videos have made such mistakes in the past too.

In two back-to-back days of big launches, OpenAI and Google showed the world their newest artificial intelligence projects.

They made impressive demo videos featuring all the new things OpenAI's GPT-4o can do, and how Google's Gemini will revolutionize Search as we know it.

But Google's Tuesday video shows one of the major pitfalls of AI: advice that isn't just bad, but factually wrong. A minute into the flashy, quick-paced video, Gemini AI in Google Search presented a factual error, first spotted by The Verge.

A photographer takes a video of his malfunctioning film camera and asks Gemini: "Why is the lever not moving all the way?" Gemini provides a list of solutions right away — including one that would destroy all his photos.


The video highlights one suggestion from the list: "Open the back door and gently remove the film if the camera is jammed."

Professional photographers — or anyone who has used a film camera — know this is a terrible idea. Opening a camera outdoors, where the video takes place, could ruin some or all of the film by exposing it to bright light.

Screen grab from Gemini in Search's demo video. Google

Google has faced similar issues with earlier AI products.

Last year, a Google demo video showing the Bard chatbot incorrectly said that the James Webb Space Telescope was the first to photograph a planet outside our own solar system.


Earlier this year, the Gemini chatbot was hammered for refusing to produce pictures of white people. It was criticized for being too "woke" and for generating images riddled with historical inaccuracies, like Asian Nazis and Black Founding Fathers. Google leadership apologized, saying the company "missed the mark."

Tuesday's video highlights the perils of AI chatbots, which have been producing hallucinations — fabricated answers presented as fact — and giving users bad advice. Last year, users of Bing, Microsoft's AI chatbot, reported strange interactions with the bot. It called users delusional, tried to gaslight them about what year it was, and even professed its love to some users.

Companies using such AI tools may also be legally responsible for what their bots say. In February, a Canadian tribunal held Air Canada responsible for its chatbot giving a passenger wrong information about bereavement fares.

Google did not immediately respond to a request for comment sent outside standard business hours.
