Facebook's Comments tool promised to bring 'higher quality discussions' to the internet. Instead, it's riddled with spam.
- Facebook's feature that lets people use their Facebook accounts to leave comments on other websites is overrun with spam.
- The Facebook Comments Plugin promises to use Facebook's tech to support "higher quality conversations."
- But instead, websites that use it, including BuzzFeed News, Ozy, and Kirkus Reviews, have been targeted with blatant spam and scam comments.
- Facebook's failure raises further questions about its ability to police malicious content online, and comes shortly after Facebook had major issues policing scams on its official user support forum.
- It also shows how a tool designed to promote conversational health online can instead end up having the opposite effect.
Facebook's "Comments Plugin," built to let users leave comments on websites using their Facebook accounts, promises to help deliver "higher quality conversations" across the internet.
Instead, it has prompted a wave of spam across popular websites.
On sites including BuzzFeed, Ozy, and Kirkus Reviews, the Facebook Comments Plugin that sits below articles to enable user comments has been overwhelmed with blatant spam from seemingly fake accounts, Business Insider has found - frustrating readers and raising questions about the social network's ability to police malicious content online.
The offending comments typically promise questionable get-rich-quick schemes, and appear below everything from BuzzFeed News articles about the novel coronavirus to Kirkus reviews of books about the 1980s punk scene and Ozy features on Kentucky basketball. Some of the comments are months old, indicating the problem is not just a one-off incident.
The comments are unsophisticated, repetitive, and easily recognizable as spam. "I have just received $17953 by working online from home in my part time. I have joined this job 3 months ago and in just 3 months i have earned $49k+ from this online job," one comment reads, before providing a link to a dodgy-looking website promising ways to make easy money.
Another reads: "Facebook gives me $ 10000 to $ 18000 every month by just doing some copy paste job on my Laptop. Its amazing to work and stay at home. Start Your Income online now and Earn upto $ 2520 every week."
Facebook has faced more than two years of scrutiny over its content moderation efforts - but the disarray in Facebook comment boxes across the web illustrates how entire categories of problematic content are still slipping through the cracks.
This new spam detection failure comes after Business Insider discovered in December 2019 that Facebook's official support forum had also been overrun by scammers trying to defraud users looking for help, and that Facebook had done nothing about it for months.
In an emailed statement, Facebook spokesperson Joe Osborne said: "We're constantly working to limit the spread of spam on Facebook, including across the Comments Plugin."
Facebook promises 'best-in-class' spam detection. The reality is very different.
Originally introduced in 2009, Facebook Comments Plugin is an easy way for online publishers and other websites to let their users comment on articles. Rather than building comment functionality and user accounts from scratch, developers just integrate Facebook's code, which inserts a ready-made discussion thread users can comment on using their existing Facebook accounts.
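For reference, a minimal sketch of the kind of embed markup Facebook's developer documentation describes is below. The page URL, SDK version, and attribute values here are illustrative placeholders, not details drawn from this reporting.

```html
<!-- Rough sketch of a Facebook Comments Plugin embed; URL and version are placeholders. -->
<!-- Load Facebook's JavaScript SDK, which renders any plugin markup on the page. -->
<div id="fb-root"></div>
<script async defer crossorigin="anonymous"
        src="https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v12.0"></script>

<!-- The plugin inserts a ready-made comment thread tied to the page named in data-href. -->
<div class="fb-comments"
     data-href="https://example.com/some-article"
     data-numposts="5"
     data-width="100%"></div>
```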
For publishers, that meant not only less internal work, but also an end to the trolls and spammers notorious for polluting the comment sections of news sites. By tying comments to users' (theoretically) real Facebook identities, Facebook's commenting tool promised to produce better discussions than those found in a site's homemade comments section or one provided by a third-party tool like Disqus.
"When people comment with their Facebook profile it leads to higher quality discussions and less spam,"reads Facebook's developer website promoting the plugin.
Website operators can police comments manually using Facebook's "powerful" moderation tools - but a key attraction is the purported access to Facebook's sophisticated automatic spam filters. "We've also improved our spam detection and filtering with best-in-class detection and automation systems," the promotional material says.
However, these systems have clearly failed on multiple websites - with a product intended to promote conversational health instead eroding online discourse and putting ordinary people at risk.
What's more, Facebook's page promoting the Comments Plugin didn't appear to have been updated in several years. It listed popular websites as users of the feature - HuffPost, Yahoo Japan, MSN, ESPN, TechCrunch, and Fox Sports - but all appear to have since stopped using it. (It also featured an outdated HuffPost logo that was replaced nearly two years ago.) After Business Insider reached out, the logos were removed.
It's not clear how many employees at Facebook are working on the feature today, and a company spokesperson declined to share a figure.
Some other popular websites that use the Facebook Comments Plugin (including Consumer Reports and Out.com) don't appear to have spam comments; it's not clear whether those sites moderate comments manually far more actively, or whether they simply haven't been targeted by spammers.
Facebook is spending big on safety and security
Facebook has widely touted investments into safety and security initiatives over the past few years, after repeated scandals over content on its platform.
The company has more than 35,000 people working on safety and security initiatives, including content moderation, and CEO Mark Zuckerberg has repeatedly said the company now spends more on safety and security each year than its entire revenue at the time of its 2012 initial public offering. It's not clear why Facebook's automated spam filters have failed to catch the junk user comments, which are often highly repetitive.
Back in 2015, security firm Symantec published a report detailing how Facebook comment sections were being targeted by scammers posting spam designed to infect readers with malware.
Five years later, the nature of the spam has changed - but Facebook still hasn't managed to put a stop to it.
Got a tip? Contact this reporter using a non-work device via encrypted messaging app Signal (+1 650-636-6268), encrypted email (robaeprice@protonmail.com), standard email (rprice@businessinsider.com), Telegram/WeChat (robaeprice), or Twitter DM (@robaeprice). PR pitches by standard email only, please.