Inside Facebook's suicide algorithm: Here's how the company uses artificial intelligence to predict your mental state from your posts

Jan 6, 2019, 21:49 IST

Hollis Johnson/Business Insider

  • Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk.
  • Facebook passes the information along to law enforcement for wellness checks.
  • Privacy experts say Facebook's failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence.

Following a string of suicides that were live-streamed on the platform, the company sought to proactively address a serious problem by using an algorithm to detect signs of potential self-harm.

But more than a year later, following a wave of privacy scandals that called Facebook's data use into question, the idea of Facebook creating and storing actionable mental health data without user consent has numerous privacy experts worried about whether the company can be trusted to make and store inferences about the most intimate details of our minds.

Facebook is creating new health information about users, but it isn't held to the same privacy standard as healthcare providers

Facebook automatically scores all posts in the US and select other countries on a scale from 0 to 1 for risk of imminent harm. Photo Illustration by Ute Grabowsky/Photothek via Getty Images

The algorithm touches nearly every post on Facebook, rating each piece of content on a scale from zero to one, with one representing the highest likelihood of "imminent harm," according to a Facebook representative.
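
Facebook has not published the model itself, so the mechanics can only be illustrated in outline. Below is a minimal sketch, in Python, of what scoring-then-flagging of this kind looks like; the phrase weights and the review threshold are invented for illustration and are not Facebook's.

def score_post(text: str) -> float:
    """Toy risk score in [0, 1] from weighted phrase matches (illustrative only)."""
    risk_phrases = {"can't go on": 0.6, "goodbye everyone": 0.5, "end it all": 0.7}
    text = text.lower()
    return min(sum(weight for phrase, weight in risk_phrases.items() if phrase in text), 1.0)

REVIEW_THRESHOLD = 0.8  # assumption: Facebook has not disclosed its actual cutoff

def needs_human_review(text: str) -> bool:
    """Posts scoring at or above the threshold would go to content moderators."""
    return score_post(text) >= REVIEW_THRESHOLD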


That data creation process alone raises concern for Natasha Duarte, a policy analyst at the Center for Democracy and Technology.

"I think this should be considered sensitive health information," she said. "Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such."

Data protection laws that govern health information in the US currently don't apply to the data that is created by Facebook's suicide prevention algorithm, according to Duarte. In the US, information about a person's health is protected by the Health Insurance Portability and Accountability Act (HIPAA), which mandates specific privacy protections, including encryption and sharing restrictions, when handling health records. But these rules apply only to organizations providing healthcare services, such as hospitals and insurance companies.

Companies such as Facebook that make inferences about a person's health from non-medical data sources are not subject to the same privacy requirements; according to Facebook, the company knows as much and does not classify the information it creates as sensitive health information.

Facebook hasn't been transparent about the privacy protocols surrounding the suicide-related data it creates. A Facebook representative told Business Insider that suicide risk scores that are too low to merit review or escalation are stored for 30 days before being deleted, but Facebook did not respond when asked how long, and in what form, data about higher suicide risk scores and subsequent interventions is stored.


Facebook would not elaborate on why data was being kept if no escalation was made.
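
The 30-day rule Facebook did describe amounts to a simple retention policy for the low-risk case. A rough sketch follows, assuming a hypothetical record with escalated and scored_at fields; Facebook has not described its actual storage format or its handling of escalated records.

from datetime import datetime, timedelta, timezone

LOW_RISK_RETENTION = timedelta(days=30)  # per Facebook: low scores are deleted after 30 days

def purge_expired_scores(records, now=None):
    """Keep escalated records (retention undisclosed); drop low-risk scores older than 30 days."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if r["escalated"] or now - r["scored_at"] < LOW_RISK_RETENTION]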

Could Facebook's next big data breach include your mental health data?

Facebook's algorithm is meant to be a next step from suicide hotlines, which only screen callers who are actively seeking help. ANWAR AMRO/AFP/Getty Images

The risk of storing such sensitive information without proper protection and foresight is high, according to privacy experts.

The clearest risk is the information's susceptibility to a data breach.

"It's not a question of if they get hacked, it's a question of when," said Matthew Erickson of the consumer privacy group the Digital Privacy Alliance.


In September, Facebook revealed that a large-scale data breach had exposed the profiles of around 30 million people. For 400,000 of those users, posts and photos were exposed. Facebook would not comment on whether or not data from its suicide prevention algorithm had ever been the subject of a data breach.

Following the public airing of data from the hack of married dating site Ashley Madison, the risk of holding such sensitive information is clear, according to Erickson: "Will someone be able to Google your mental health information from Facebook the next time you go for a job interview?"

Dr. Dan Reidenberg, a nationally recognized suicide prevention expert who helped Facebook launch its suicide prevention program, acknowledged the risks of holding and creating such data, saying, "pick a company that hasn't had a data breach anymore."

But Reidenberg said the danger lies more in stigma against mental health issues. Reidenberg argues that discrimination against mental illness is barred by the Americans with Disabilities Act, making the worst potential outcomes addressable in court.

Who gets to see mental health information at Facebook

Once a post is flagged for potential suicide risk, it's sent to Facebook's team of content moderators. Facebook would not go into specifics about the training content moderators receive around suicide, but it insists that they are trained to accurately screen posts for potential suicide risk.


A 2017 Wall Street Journal review of Facebook's thousands of content moderators described them as mostly contract employees who experienced high turnover and received little training on how to cope with disturbing content. Facebook says that the initial content moderation team receives training, developed by suicide experts, on "content that is potentially admissive to Suicide, self-mutilation & eating disorders" and "identification of potential credible/imminent suicide threat."

Facebook said that during this initial stage of review, names are not attached to the posts that are reviewed, but Duarte said that de-identification of social media posts can be difficult to achieve.

"It's really hard effectively de-identify peoples' posts, there can be a lot of context in a message that people post on social media that reveals who there are even if their name isn't attached to it," he said.

If a post is flagged by an initial reviewer as containing information about a potential imminent risk, it is escalated to a team with more rapid response experience, according to Facebook, which said the specialized employees have backgrounds ranging from law enforcement to rape and suicide hotlines.

These more experienced employees have more access to information on the person whose post they're reviewing.


"I have encouraged Facebook to actually look at their profiles to look at a lot of different things around it to see if they can put it in context," Reidenberg said, insisting that adding context is one of the only ways to currently determine risk with accuracy at the moment. "The only way to get that is if we actually look at some of their history, and we look at some of their activities."

Sometimes police get involved

A communications officer works in a 911 dispatch center. Mike Groll/AP Photo

Once reviewed, two outreach actions can take place. Reviewers can either send the user suicide resource information or contact emergency responders.
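
Taken together, the process the company describes is a two-stage triage that ends in one of two outreach actions. The sketch below summarizes that flow; the stage names and outcome labels are illustrative assumptions, not Facebook's internals.

def route_flagged_post(initial_reviewer_flags_risk: bool, specialist_assessment: str) -> str:
    """Map the two review stages described above onto the two possible outreach actions."""
    if not initial_reviewer_flags_risk:
        return "no_action"
    if specialist_assessment == "imminent":
        return "contact_emergency_responders"  # wellness check via local first responders
    return "send_support_resources"            # e.g. suicide hotline information

print(route_flagged_post(True, "imminent"))    # contact_emergency_responders
print(route_flagged_post(True, "concerning"))  # send_support_resources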

"In the last year, we've helped first responders quickly reach around 3,500 people globally who needed help," wrote Facebook CEO Mark Zuckerberg in a post on the initiative.

Duarte says Facebook's surrender of user information to police represents the most critical privacy risk of the program.


"The biggest risk in my mind is a false positive that leads to unnecessary law enforcement contact," he said

Facebook has pointed out numerous successful interventions from its partnership with law enforcement, but in a recent report from The New York Times, one incident documented by police resulted in intervention with someone who said they weren't suicidal. The police took the person to a hospital for a mental health evaluation anyway. In another instance, police released to The New York Times personal information about a person flagged for suicide risk by Facebook.

Why Facebook's suicide algorithm is banned in the EU

Carl Court / Getty Images

Facebook uses the suicide algorithm to scan posts in English, Spanish, Portuguese, and Arabic, but it doesn't scan posts in the European Union.

Plans to use the algorithm in the EU were halted because of the region's special privacy protections under the General Data Protection Regulation (GDPR), which requires that users give websites specific consent to collect sensitive information, such as that pertaining to someone's mental health.
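
In code terms, the GDPR carve-out is effectively a gate in front of the scan: EU users are excluded unless they have given the specific consent the regulation requires, which Facebook does not collect. A sketch under those assumptions follows; the country and language sets are abbreviated placeholders.

SCANNED_LANGUAGES = {"en", "es", "pt", "ar"}  # English, Spanish, Portuguese, Arabic
EU_COUNTRY_CODES = {"DE", "FR", "IE", "IT"}   # abbreviated placeholder; the full EU list would apply

def should_scan(country_code: str, language: str, explicit_consent: bool = False) -> bool:
    """EU posts could only be scanned with the specific consent GDPR requires."""
    if country_code in EU_COUNTRY_CODES:
        return explicit_consent  # Facebook does not collect this, so EU posts go unscanned
    return language in SCANNED_LANGUAGES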


In the US, Facebook views its program as a matter of responsibility.

Reidenberg described the sacrifice of privacy as one that medical professionals routinely face.

"Health professionals make a critical professional decision if they're at risk and then they will initiate active rescue," Reidenberg said. "The technology companies, Facebook included, are no different than that they have to determine whether or not to activate law enforcement to save someone."

But Duarte said a critical difference exists between emergency professionals and tech companies.

"It's one of the big gaps that we have in privacy protections in the US, that sector by sector there's a lot of health information or pseudo health information that falls under the auspices of companies that aren't covered by HIPAA and there's also the issue information that is facially health information but is used to make inferences or health determinations that is currently not being treated with the sensitivity that we'd want for health information."


Privacy experts agreed that a better version of Facebook's program would require users to affirmatively opt in, or at least provide a way for users to opt out of the program, but currently neither of those options is available.

Emily Cain, a Facebook policy communications representative, told INSIDER, "By using Facebook, you are opting into having your posts, comments, and videos (including FB Live) scanned for possible suicide risk."

Experts agree that the suicide algorithm has potential for good

Most experts in privacy and public health spoken to for this story agreed that Facebook's algorithm has the potential for good.

According to the World Health Organization, nearly 800,000 people commit suicide every year, a toll that disproportionately affects teens and vulnerable populations like LGBT and indigenous peoples.

Facebook said that, by its calculation, the risk of invading users' privacy is worth it.


"When it comes to suicide prevention efforts, we strive to balance people's privacy and their safety," the company said in a statement. "While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible. And we understand this is a sensitive issue so we have a number of privacy protections in place."

Kyle McGregor, director of pediatric mental health ethics at the New York University School of Medicine, agreed with that calculation, saying, "Suicidality in teens especially is a fixable problem, and we as adults have every responsibility to make sure that kids can get over the hump of this prime developmental period and go on to live happy, healthy lives. If we have the possibility to prevent one or two more suicides accurately and effectively, that's worth it."

If you are having thoughts of suicide, call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255) or the National Hopeline Network at 1-800-SUICIDE (1-800-784-2433).

Have a tip? Email Benjamin Goggin at bgoggin@businessinsider.com or DM him on Twitter @BenjaminGoggin.
