
Facebook's effort to stop suicides reveals a worrisome gap between Silicon Valley tech companies and healthcare experts

May 5, 2019, 20:30 IST


  • Facebook has a suicide-monitoring tool that uses machine learning to identify posts that may indicate someone is at risk of killing themselves.
  • The tool was involved in sending emergency responders to locations more than 3,500 times as of last fall.
  • A Harvard psychiatrist is worried the tool could worsen health problems by homing in on the wrong people or escalating mental-health crises.
  • Facebook does not consider the tool to be health research and hasn't published any information on how it works or whether it's successful.

Facebook knew there was a problem when a string of people used the platform to publicly broadcast their suicides in real time.

Staff at the company had been thinking about the issue of suicide since 2009, when a cluster of suicides occurred at two high schools near the company's headquarters in Palo Alto. Then things became personal. After the company rolled out a video livestreaming tool called "Facebook Live," several people used it to broadcast themselves taking their own lives. First it was a 14-year-old girl, then a 33-year-old man, both in the US. Later, in the fall, a young man in Turkey broadcast himself dying by suicide.

Facebook, led by Chief Executive Officer Mark Zuckerberg, tasked its safety-and-security team with doing something about it.

The result was Facebook's suicide-monitoring algorithm, which has been running since 2017 and was involved in sending emergency responders to people more than 3,500 times as of last fall, according to the company.


Using pattern-recognition technology, the tool identifies posts and livestreams that appear to express suicidal intent. It scans the text of a post along with the comments on it, such as "Are you OK?" When a post is ranked as potentially suicidal, it is sent first to a content moderator and then to a trained staff member tasked with notifying emergency responders.
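Facebook has not disclosed how its classifier actually works, so any concrete rendering of the pipeline is guesswork. Purely as an illustrative sketch of the triage flow described above (score a post's text and its comments, then route flagged posts to human review), here is a hypothetical Python example; the phrase weights, threshold, and function names are all assumptions standing in for Facebook's undisclosed machine-learning model.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for a trained classifier. The real system is a
# machine-learning model, not keyword matching; these weights are invented.
CONCERN_PHRASES = {"want to end it": 0.6, "goodbye everyone": 0.4}
COMMENT_PHRASES = {"are you ok": 0.3, "please call someone": 0.4}

@dataclass
class Post:
    text: str
    comments: list[str] = field(default_factory=list)

def risk_score(post: Post) -> float:
    """Score a post's text and its comments (assumed scoring logic)."""
    text = post.text.lower()
    score = sum(w for phrase, w in CONCERN_PHRASES.items() if phrase in text)
    for comment in post.comments:
        lowered = comment.lower()
        score += sum(w for phrase, w in COMMENT_PHRASES.items() if phrase in lowered)
    return min(score, 1.0)

def triage(post: Post, threshold: float = 0.5) -> str:
    """Route a post: flagged posts go to a human content moderator first,
    who can escalate to trained staff who may notify emergency responders."""
    if risk_score(post) >= threshold:
        return "send_to_content_moderator"
    return "no_action"

if __name__ == "__main__":
    post = Post("goodbye everyone", comments=["Are you OK?"])
    print(triage(post))  # -> send_to_content_moderator
```

The one design point the article does confirm is the routing order: the model never contacts responders directly; a flagged post passes through a content moderator and then a trained staff member before anyone is dispatched.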

Harvard psychiatrist and tech consultant John Torous only learned of the tool's existence last year, from a journalist. He said he's concerned it may be doing more harm than good.

'We as the public are partaking in this grand experiment'

"We as the public are partaking in this grand experiment, but we don't know if it's useful or not," Torous told Business Insider last week.

Torous has spent years collaborating with tech giants like Microsoft on scientific research. He hadn't heard about Facebook's suicide-monitoring algorithm because Facebook hasn't shared information about the tool with researchers like him or with the broader medical and scientific community.

In fact, Facebook hasn't published any data on how its tool works. The company's view is that the tool isn't a health product or research initiative but more akin to calling for help if you see someone in trouble in a public space.


"We are in the business of connecting people with supportive communities. We are not mental health providers," Antigone Davis, Facebook's global head of safety, previously told Business Insider.

But without public information on the tool, Torous said, the big questions about it are impossible to answer. He is worried the tool might home in on the wrong users, discourage frank discussions about mental health on the platform, or escalate, or even create, a mental-health crisis where there wasn't one.

In sum, Torous said Facebook's use of the tool could be hurting more people than it's helping.

"It's one thing for an academic or a company to say this will or won't work. But you're not seeing any on-the-ground peer-reviewed evidence," Torous said. "It's concerning. It kind of has that Theranos feel."

Clinicians and companies disagree on the definition of health research

Facebook's suicide-monitoring tool is just one example of how the barriers that separate tech from healthcare are crumbling. A growing array of products and services - think Apple Watch, Amazon's Alexa, and even the latest meditation app - straddle the gap between health innovation and tech disruption. Clinicians see red flags. Tech leaders see revolution.


"There's almost this implicit assumption that they play by a different set of rules," Torous said.

At Facebook, the safety-and-security team spoke with experts at several suicide-prevention nonprofits, including Daniel Reidenberg, the founder of Save.org. Reidenberg told Business Insider that he helped Facebook create a solution by sharing his experiences, bringing in people who'd struggled personally with suicide, and having them share what helped them.

Reidenberg told Business Insider that he thinks Facebook is doing good work in suicide prevention, but because its efforts are in uncharted waters, he thinks everyday issues will arise with the tool. He disagrees with Torous' view that the efforts amount to health research.

"There isn't any company that's more forward-thinking in this area," Reidenberg said.

Still, it is unclear how well Facebook's suicide-monitoring tool works. Because of privacy issues, emergency responders can't tell Facebook what happened at the scene of a potential suicide, Davis said. In other words, emergency responders can't tell Facebook if they reached the scene too late to stop a death, showed up to the wrong place, or arrived only to learn there was no real problem.


Torous, a psychiatrist who's familiar with the thorny issues in predicting suicide, is skeptical of how well that prediction will play out in the suicide-monitoring tool. He points to a review of 17 studies in which researchers analyzed 64 different suicide-prediction models and concluded that the models had almost no ability to successfully predict a suicide attempt.

"We know Facebook built it and they're using it, but we don't really know if it's accurate, if it's flagging the right or wrong people, or if it's flagging things too early or too late," Torous said.

