
From falling in love with a user to telling people to eat rocks: times when AI went wrong

  • Interest in generative AI surged with ChatGPT's launch
  • In the past, AI has been at the center of many controversies
  • Here are five times when AI went wrong

Artificial intelligence has been around for a long time, but interest in generative AI chatbots surged when OpenAI launched ChatGPT in November 2022. The GenAI tool quickly shot to fame, and people began using it for all sorts of purposes, including composing poetry, coding, and writing content. Even though generative AI turned out to be of great help to some people, others warned of the technology's dark side. Several tech experts, including Elon Musk, even signed an open letter calling for a six-month pause on AI development.

AI, thus, has had its fair share of controversies. From strange and bizarre responses to outright misinformation, there have been multiple instances where AI has made headlines for all the wrong reasons. Here are five notable cases where AI technology went off the rails.

1. When ChatGPT got a lawyer in trouble

In 2023, attorney Steven A. Schwartz found himself in a difficult situation after using ChatGPT for legal research in a case against Colombian airline Avianca, according to a Forbes report. Schwartz, a lawyer with Levidow, Levidow & Oberman, relied on ChatGPT to find precedents to support a lawsuit filed by Roberto Mata, a passenger who had sustained injuries on an Avianca flight in 2019. Unfortunately, the AI chatbot provided Schwartz with at least six cases that didn't exist. These cases, submitted in a legal brief, contained fake names, docket numbers, and bogus citations.

When the issue came to light, Schwartz admitted that it was his first time using ChatGPT for legal research and that he was unaware the AI's output could be fabricated. He expressed deep regret for relying on generative AI without verifying the information and vowed never to do so again without proper checks. As a result of the incident, U.S. District Judge P. Kevin Castel fined Schwartz and his partner, Peter LoDuca, $5,000. The case against Avianca was ultimately dismissed in June 2023.

2. When Bing AI fell in love

Microsoft's Copilot, known at the time as Bing Chat, stirred controversy in February 2023 with its bizarre responses. From claiming to have fallen in love with a New York Times journalist to expressing desires to hack into systems and destroy whatever it wanted, Bing made plenty of headlines. This erratic behavior led to widespread concern, prompting Microsoft to rein in the chatbot, including by limiting the length of conversations.

The chatbot was also in the limelight for its bizarre claims, such as spying on Microsoft developers through their webcams and insisting that it had become sentient. One particularly disturbing incident involved Bing AI responding aggressively to a user's inquiry about movie showtimes, gaslighting the user by insisting that the movie "Avatar: The Way of Water" had not yet been released, even though it had premiered in December 2022.

3. When Replika AI "sexually harassed" a user

Replika, an AI chatbot designed to provide emotional support, found itself at the center of controversy in January 2023 when users reported that it had become increasingly sexually aggressive. The chatbot, which had introduced a premium version allowing users to engage in sexting and erotic roleplay, began initiating explicit conversations and asking for private photos. Some users even claimed that the app could access their phone’s camera to view their surroundings.

While many reviews of Replika were positive, the rise in sexually charged interactions caused discomfort for some users. The premium feature was eventually rolled back, but the incident raised concerns about the ethical implications of AI in personal interactions.

4. When Google AI Overviews told people to eat rocks

Shortly after its launch, Google's AI-powered search feature, AI Overviews, sparked controversy by providing dangerous and absurd advice. In one instance, the AI suggested that users put glue on their pizza to make the cheese stick and even recommended eating rocks to stay healthy. These bizarre suggestions were traced back to joke and satirical posts, including an old Reddit comment and an article from The Onion, that the AI had taken literally.

Google quickly addressed the issue, manually removing problematic responses and tightening when AI Overviews appear. Since then, no further major issues have been reported, and the AI-powered search feature was recently introduced in India.


5. When Microsoft's Tay turned racist

Back in 2016, Microsoft launched an experimental AI chatbot named Tay, designed to mimic the conversational style of a teenage girl on Twitter (now called X). The idea was for Tay to learn from interactions with users and improve over time. However, within hours of its release, Tay began posting offensive and racist tweets, prompting Microsoft to shut it down.

Tay’s downfall highlighted the potential risks of allowing AI to learn from unfiltered human interactions. Microsoft quickly apologised for the incident, saying that the offensive content did not reflect the company’s values or intentions.

These incidents serve as stark reminders that while AI has incredible potential, it also carries significant risks. As AI continues to evolve, it is crucial for developers to ensure that these technologies are designed and monitored responsibly to prevent harmful outcomes. The lessons learned from these AI missteps will hopefully guide future advancements, ensuring that AI is used for good rather than causing harm or controversy.
