
Even high school interns were able to use common AI tools to bypass security and launch bots on X, Facebook and other social media: report

Oct 16, 2024, 17:17 IST
Business Insider India
AI bots on social media (Image: iStock/wildpixel)
Social media platforms like X (formerly Twitter) and Facebook have established safeguards to combat bot accounts, including policies and technical mechanisms. However, the effectiveness of these measures is increasingly being questioned as AI-driven bots evolve and find ways to bypass restrictions.

AI bot evolution

While AI bots can serve legitimate purposes like marketing and customer service, some are designed to manipulate public discussions, incite hate speech, and spread misinformation. Research from the University of Notre Dame investigated the bot policies of platforms like LinkedIn, TikTok, and X, testing how well these platforms enforced their rules. The findings were troubling.

"Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren't effectively enforcing their policies," said Paul Brenner, a Director at Notre Dame.

Even interns with only a high school education and minimal training were able to launch the test bots using technology that is readily available to the public, Brenner explained. The bots were created using commonly available AI tools such as OpenAI's GPT-4o and DALL-E 3, and the tests were run on LinkedIn, Mastodon, Reddit, TikTok, X, Facebook, Instagram and Threads.
According to the tests, none of these platforms had sufficient safeguards to keep the bots out. The researchers successfully published benign test posts from bots on every platform, with X and Reddit posing minimal resistance. Meta's platforms proved somewhat harder to crack due to stronger enforcement mechanisms, but they too could ultimately be bypassed, even by relatively inexperienced users.

Why does it matter?

In India — and the rest of the world, for that matter — the rise of AI-driven bots has significant implications for public discourse, particularly in the political arena. Research has indicated that a substantial portion of misinformation circulating during electoral periods can often be traced back to automated accounts. Many experts emphasise that the pervasive influence of these accounts can negatively shape public perception and discussion, and is often used to target certain sensitive groups and stoke communal tension.

As platforms struggle to keep pace with the evolving tactics of malicious bots, the need for stronger regulations becomes increasingly evident. Considering that AI is evolving at a rapid pace that makes it supremely difficult to tell bots apart from real human accounts — even for the tech-savvy — Brenner argues for US legislation that mandates platforms to identify human versus bot accounts. Similar measures are needed in India, where the lack of accountability for social media platforms allows harmful bot activity to proliferate.

What needs to be done?

Addressing the bot problem requires multiple complementary solutions. For starters, enhanced technology, such as more sophisticated AI detection systems, could help platforms identify and mitigate bot activity more effectively. Recent advances in machine learning are paving the way for tools that can better analyse user behaviour and detect anomalies that indicate bot activity.
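To give a flavour of what "detecting anomalies in user behaviour" can mean in practice, here is a deliberately simple, hypothetical sketch (not the researchers' method, and far cruder than any production system): automated accounts often post at suspiciously regular intervals, so the variability of the gaps between posts can serve as one weak signal among many.

```python
import statistics

def bot_likelihood(post_timestamps):
    """Toy heuristic: bots often post at near-uniform intervals,
    so a very low coefficient of variation in the gaps between
    posts is treated as suspicious. Illustrative only; real
    detectors combine many behavioural features."""
    if len(post_timestamps) < 3:
        return 0.0  # too little data to judge
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return 1.0  # simultaneous posts: maximally suspicious
    cv = statistics.stdev(gaps) / mean_gap  # coefficient of variation
    return max(0.0, 1.0 - cv)  # near 1.0 = clockwork-regular posting

# An account posting exactly every 60 seconds scores 1.0 (bot-like);
# irregular, human-like gaps score much lower.
scheduled = bot_likelihood([0, 60, 120, 180, 240])
irregular = bot_likelihood([0, 45, 400, 520, 3000])
```

Real systems would feed dozens of such behavioural features (posting cadence, content similarity, network structure) into a trained classifier rather than relying on any single rule.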
However, educating users about identifying bots and understanding online discourse is almost as important. Platforms can implement user-friendly tools that highlight potentially automated accounts, empowering users to make more informed decisions about whom to engage with.

Legislation could also play a pivotal role in shaping how social media platforms handle bot accounts. By requiring transparency about bot activity and holding companies accountable for enforcement, policymakers can create an environment where genuine discourse thrives and misinformation is curtailed. In India, the proposed Digital India Act could be an avenue for strengthening regulations around digital platforms and enhancing user safety.

While platforms like X and Facebook have made strides in developing detection methods, the sophistication of AI-driven bots continues to outpace current safeguards. This situation requires urgent attention from both tech companies and policymakers to ensure a safer social media environment.

The findings of this research have been published on a pre-print server.