
The internet has more AI-generated content today than human-created and this could mean trouble

  • More than half of the content on the internet is AI-generated, a new study finds
  • This could have some serious implications
  • AI tends to hallucinate when it is trained on its own content

When OpenAI launched ChatGPT in November 2022, the internet was a different place. Chatbots that could hold conversations and produce human-like content were uncommon, and few imagined it would soon be possible to get an entire article written from a few text prompts. Then ChatGPT entered our lives and changed them. For better or worse, that's yet to be decided.

After ChatGPT, AI tools like Microsoft's Bing (now called Copilot) and Google's Bard (now called Gemini) were also launched, and other AI content generators like Claude and Writesonic gained popularity. As these tools spread, the internet is slowly filling up with AI-generated content.

57 percent of internet content is AI-generated

A study by Amazon Web Services, as reported by a recent Forbes report, says that 57 percent of the content on the internet is generated by AI, meaning more than half of what you read online today may not have been written by a human. The Forbes report also quoted an expert who predicted that 90 percent of all internet content is likely to be AI-generated by 2025. And this could mean a drop in the quality of content available on the internet.


What happens when AI fills the internet?

A new study by Dr. Ilia Shumailov and a team of researchers, published in Nature, says that generative AI models degrade rapidly when they rely solely on AI-generated content. The researchers observed that after just two generations of training on its own output, the quality of a model's responses begins to drop, becoming nonsensical by the ninth.

This phenomenon, termed "model collapse," occurs when AI continues training on its own outputs, leading to distorted results and loss of accuracy, especially for minority or less-represented data.
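The loss of less-represented data can be illustrated with a toy sketch (this is not the paper's experiment, and the `train`/`sample` names and class labels are invented for illustration): treat a "model" as nothing more than the frequency table of its training data, then repeatedly train each new generation only on samples from the previous one. A rare category that randomly fails to appear in one generation's output can never return, so diversity is eventually lost.

```python
import random

random.seed(0)

def train(data):
    """'Train' a toy model: just the empirical frequencies of its data."""
    freqs = {}
    for x in data:
        freqs[x] = freqs.get(x, 0) + 1 / len(data)
    return freqs

def sample(model, n):
    """Generate n outputs by sampling from the model's distribution."""
    items = list(model)
    return random.choices(items, weights=[model[k] for k in items], k=n)

# Generation 0 trains on "real" data: 95% majority class, 5% minority class.
data = ["common"] * 95 + ["rare"] * 5
model = train(data)
generation = 0
while len(model) > 1 and generation < 100_000:
    data = sample(model, 100)   # each new model trains only on model output
    model = train(data)
    generation += 1

print(f"after {generation} generations the model retains only: {list(model)}")
```

Because a category with zero probability can never be sampled again, its disappearance is permanent, which is the absorbing dynamic behind the "distorted results" the researchers describe.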

On a related note, there have been several instances where AI-generated content got people in trouble by producing inaccurate results.

For instance, in March this year, a Canadian lawyer faced investigation after using ChatGPT for legal research when the AI chatbot came up with fictitious cases. Despite her apology and steps to correct the error, the incident highlighted the risks of AI in legal research, prompting an inquiry by the Law Society of British Columbia.
