Greg Sandoval/Business Insider
- Executives from Facebook, Twitter, and YouTube took part in a panel discussion that focused on how hate speech and disinformation online are impacting democracy.
- The panelists seemed to agree that there are no easy answers to a problem they aren't certain even exists.
Representatives from the big social networks got a chance Thursday evening to talk about their woes in trying to combat hate speech, propaganda, and fake news before a largely sympathetic crowd.
During a panel discussion on the campus of Stanford University, executives from Facebook, Twitter, and Google's YouTube told an audience of law students and lawyers that the issue is complex, has no easy answers, and that they aren't certain there even is a problem.
The panel focused on free speech on the internet and its impact on democracy. The discussion is timely: in recent years, government regulators in the US and abroad have grilled the top social networks about what they're doing to curb the spread of racist propaganda and disinformation on their sites. The fear is that all three companies are unwittingly helping to influence elections.
This time, the executives spoke at Stanford Law School in Palo Alto, California. As one of the coordinators put it, they were speaking within America's "cradle of free speech." The crowd before them was far gentler than the regulators they have faced.
Though the executives weren't forced to field anything but softball questions, it was still interesting to hear unscripted thoughts on these issues. Some of the notable comments included:
- Nick Pickles, senior public policy manager for Twitter in the UK, said that the company is challenging an average of 6.4 million suspicious accounts per week as part of its fight against those spreading false information and other bad actors. He did not say how many of those accounts are suspended on average, only noting that the number was "huge." He also did not specify how the people behind those suspended accounts were stopped from creating new ones.
- Pickles also said that Twitter has begun tweaking its APIs to limit the use of multiple accounts, and that this has contributed to a 90 percent decrease in malicious automated content posted via TweetDeck, the web tool that lets users manage multiple accounts.
- When the discussion turned to whether the US Constitution should be amended to address the new threats posed by the internet, Elliot Schrage, Facebook's vice president of communications and public policy, said: "I'm really really nervous about looking at a recent phenomenon and deciding that we need to disrupt fundamental or foundational principles on which our society is based." He added that this is "an area where public conversation is driven more by anecdote than by data."
The reps from the social networks suggested that nobody knows just how much misinformation and hate speech exists or what its impacts are. They said it was not at all clear, or even known, whether hate speech actually influences anybody, or in what way.
Juniper Downs, YouTube's global head of public policy and government relations, agreed that the problem has been overblown and disputed the notion that "democracy is dying."
She told the audience that problems with misinformation and propaganda have existed throughout the history of the United States. Indeed, all the social networks took that position: bad actors will continue to exist online and off, and rushing to create new laws and policies without fully understanding the issues is a mistake.