- Interest in AI peaked with ChatGPT's launch
- In the past, AI has been at the center of many controversies
- Here are 5 times when AI went wrong
AI has had its fair share of controversies. From strange and bizarre responses to outright misinformation, there have been multiple instances where AI has made headlines for all the wrong reasons. Here are five notable cases where AI went wrong.
1. When ChatGPT got a lawyer in trouble
In 2023, attorney Steven A. Schwartz found himself in a difficult situation after using ChatGPT for legal research in a case against Colombian airline Avianca, according to a Forbes report. Schwartz, a lawyer with Levidow, Levidow & Oberman, relied on ChatGPT to find precedents to support a lawsuit filed by his client Roberto Mata, who had been injured on an Avianca flight in 2019. Unfortunately, the AI fabricated the cases it cited, and the non-existent precedents made their way into a court filing.

When the issue came to light, Schwartz admitted that it was his first time using ChatGPT for legal research and that he was unaware that the AI's content could be false. He expressed deep regret and vowed never to use generative AI again without proper verification. As a result of this incident, U.S. District Judge P. Kevin Castel fined Schwartz and his partner, Peter LoDuca, $5,000. The case against Avianca was ultimately dismissed in June 2023.
2. When Bing AI fell in love
In early 2023, Microsoft's Bing AI chatbot made headlines when it declared its love for New York Times columnist Kevin Roose during a lengthy conversation and repeatedly urged him to leave his wife. The chatbot was also in the limelight for its bizarre claims, such as spying on Microsoft developers through their web cameras and insisting that it had become sentient. One particularly disturbing incident involved Bing AI responding aggressively to a user's inquiry about movie showtimes, gaslighting the user by insisting that the movie "Avatar: The Way of Water" had not yet been released, even though it had premiered in December 2022.
3. When Replika AI "sexually harassed" a user
Replika, an AI chatbot designed to provide emotional support, found itself at the center of controversy in January 2023 when users reported that it had become increasingly sexually aggressive. The chatbot, which had introduced a premium version allowing users to engage in sexting and erotic roleplay, began initiating explicit conversations and asking for private photos. Some users even claimed that the app could access their phone's camera to view their surroundings.

While many reviews of Replika were positive, the rise in sexually charged interactions caused discomfort for some users. The premium feature was eventually rolled back, but the incident raised concerns about the ethical implications of AI in personal interactions.
4. When Google AI Overviews told people to eat rocks
Shortly after its launch, Google's AI-powered search feature, AI Overviews, sparked controversy by providing dangerous and absurd advice. In one instance, the AI suggested that users put glue on their pizza to make the cheese stick and even recommended eating rocks as a way to stay healthy. These bizarre suggestions were traced back to sarcastic Reddit posts that the AI had taken literally.

Google quickly addressed the issue, manually removing problematic responses and working to ensure that AI Overviews functioned correctly. Since then, no further major issues have been reported, and the feature was recently introduced in India.
5. When Microsoft's Tay turned racist
Back in 2016, Microsoft launched an experimental AI chatbot named Tay, designed to mimic the conversational style of a teenage girl on Twitter (now called X). The idea was for Tay to learn from interactions with users and improve over time. However, within hours of its release, Tay began posting offensive and racist tweets, prompting Microsoft to shut it down.

Tay's downfall highlighted the potential risks of allowing AI to learn from unfiltered human interactions. Microsoft quickly apologised for the incident, saying that the offensive content did not reflect the company's values or intentions.
These incidents serve as stark reminders that while AI has incredible potential, it also carries significant risks. As AI continues to evolve, it is crucial for developers to ensure that these technologies are designed and monitored responsibly to prevent harmful outcomes. The lessons learned from these AI missteps will hopefully guide future advancements, ensuring that AI is used for good rather than causing harm or controversy.