The new policy, first reported by Axios on Saturday, stems from concerns over how Facebook ads sold to Russia-linked accounts may have influenced the 2016 US presidential election.
In his tweetstorm, Stamos called out what he sees as "the real gap between academics/journalists and SV (Silicon Valley)," after reading a thread from Quinta Jurecic, a member of the Washington Post's editorial board.
Jurecic wrote that Facebook's move to have humans, rather than algorithms, monitor content perpetuates the idea that code is a "neutral god." She said it's problematic when people forget that algorithms are designed by humans and contain human errors and biases as well.
Stamos, however, said the people behind Facebook's algorithms are well aware of these human biases. He said such critiques of Silicon Valley trivialize how complicated the issues are, and overlook the fact that people on the inside are actively aware of some of the biggest concerns.
Here's the full thread:
I appreciate Quinta's work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV. https://t.co/CWulZrFaso
- Alex Stamos (@alexstamos) October 7, 2017
I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.
- Alex Stamos (@alexstamos) October 7, 2017
Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.
- Alex Stamos (@alexstamos) October 7, 2017
In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.
- Alex Stamos (@alexstamos) October 7, 2017
For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.
- Alex Stamos (@alexstamos) October 7, 2017
Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.
- Alex Stamos (@alexstamos) October 7, 2017
A bunch of the public research really comes down to the feedback loop of "we believe this viewpoint is being pushed by bots" -> ML
- Alex Stamos (@alexstamos) October 7, 2017
So if you don't worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it's easy!
- Alex Stamos (@alexstamos) October 7, 2017
Likewise all the stories about "The Algorithm". In any situation where millions/billions/tens of Bs of items need to be sorted, need algos
- Alex Stamos (@alexstamos) October 7, 2017
My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.
- Alex Stamos (@alexstamos) October 7, 2017
And to be careful of their own biases when making leaps of judgment between facts.
- Alex Stamos (@alexstamos) October 7, 2017
If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased
- Alex Stamos (@alexstamos) October 7, 2017
If your piece assumes that a problem hasn't been addressed because everybody at these companies is a nerd, you are incorrect.
- Alex Stamos (@alexstamos) October 7, 2017
If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.
- Alex Stamos (@alexstamos) October 7, 2017
If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad
- Alex Stamos (@alexstamos) October 7, 2017
Likewise if your call for data to be protected from governments is based upon who the person being protected is.
- Alex Stamos (@alexstamos) October 7, 2017
A lot of people aren't thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.
- Alex Stamos (@alexstamos) October 7, 2017
Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN
- Alex Stamos (@alexstamos) October 7, 2017