The creepy secret behind online therapy

Apr 21, 2023, 23:03 IST
Business Insider
Online therapy apps like BetterHelp, Talkspace, and Cerebral have shared their patients' data with advertisers and third-party companies. Marianne Ayala/Insider
Mental-health apps are filling a critical need — but they have a dark side

How do you make money off people caught in a mental-health emergency?

Loris, a startup that helps companies improve their customer-service conversations, found a way. Founded in 2018, Loris used data generated from text conversations with people in distress to make "empathetic" customer-service software. The source for the data? Crisis Text Line, a nonprofit suicide-prevention hotline and, at the time, the parent company of Loris.

Crisis Text Line, now in its 10th year of operations, uses artificial intelligence to respond to people experiencing emotional abuse, self-harm, and suicidal thoughts. A Politico investigation last year explained how the help line created Loris to develop and sell AI software that could guide customer-service agents through live chats with customers using proven "de-escalation techniques, emotional-intelligence strategies, and training experience."

At the heart of the arrangement was what Crisis Text Line called the "largest mental health data set in the world" — 219 million messages from more than 6.7 million conversations over text, Facebook Messenger, and WhatsApp.

Exploiting data for profit is par for the course in the modern technology business — search engines, social-media platforms, and streaming apps all gather and monetize the data they gain from users. But commercializing the data from a crisis line is vastly different from mining data on the binge-watching or online-shopping habits of customers. The data used for Loris came from people who were at the lowest point of their lives, when they may not have been able to truly understand and consent to how it would then be used.

After the Politico story sparked outrage, the companies claimed that they used only anonymized data — though research has found that anonymization is not fool-proof. Eventually, the furor became so great that Crisis Text Line severed all ties with Loris and told the company to delete any previously transferred information. But the Loris mess was just one of a growing number of mental-health-data fiascos.

Despite being touted as the fix for the broken healthcare system, the mushrooming tech-based mental-health industry has a dark side. In the past year, a flurry of reports have found that some of the most recognizable names in the industry have repeatedly engaged in creepy and harmful data-sharing practices that treat people in need of help as prospective sources of profit instead of as patients. Taken together, the reports reveal a dangerous cocktail of tech solutionism, abuse of consumer trust, and regulatory failure that puts highly vulnerable people at risk. And they raise important questions about the future of mental-health care and the role of technology in it.

The case for online mental health

Depression is a leading cause of disability worldwide. One person dies by suicide every 40 seconds. Globally, an estimated 12 billion working days and $1 trillion are lost every year because of depression and anxiety. It's a massive challenge that's brought conventional mental-health care to its knees. Mental health gets less than 2% of national healthcare budgets on average, and there's a dire shortage of trained therapists and psychiatrists. Only one in five people in high-income countries and one in 27 people in low- and lower-middle-income countries get minimally adequate treatment for depression. Add to this mix the unprecedented spike in demand for mental-health care caused by the pandemic, and the situation becomes immeasurably worse.

Enter: Silicon Valley-esque upstarts that believe tech can solve the world's most complex problems. Venture-capital investments in mental-health startups grew from $2.3 billion in 2020 to a record $5.5 billion in 2021. Even as funding cooled in 2022 amid a broader tech-industry slowdown, mental health remained the highest-funded area in digital health.

Two types of companies have emerged in the frothy mental-health-startup scene. There are chatbot-based apps such as Woebot, which offer DIY "clinically tested tools and tactics" grounded in the principles of cognitive-behavioral therapy to help users deal with everyday stress and emotional problems. The main attraction of these apps is the promise of immediate, 24/7 support.

But given concerns about the limits of technology as a replacement for human therapists, much of the action is concentrated in the second model: companies that match users with the right therapists via a virtual marketplace. These fully online mental-health clinics even offer prescription-management services to help people get access to medication.

Online mental-health care is attractive because it allows users to seek support from the privacy and comfort of their personal space. More importantly, it promises radically better access compared with traditional brick-and-mortar therapy clinics — similar to e-commerce or ride-hailing companies. But while these companies can offer some upside for patients, the "move fast and break things" mindset that has come to define the tech industry is pushing them to chase growth and replicate the worst excesses of their Silicon Valley predecessors.

'The vast majority of mental-health apps are exceptionally creepy'

BetterHelp, a poster child of online therapy founded in 2013, calls itself "the world's largest therapy platform" and says it has over 2 million users. BetterHelp is also a poster child for the industry in another way: Last month, the US Federal Trade Commission reprimanded the company for "betraying consumers' most personal health information for profit."

The FTC said BetterHelp handed over sensitive user data — including emails, IP addresses, and replies to health questionnaires — to Facebook so the platform could use this information to run advertisements for BetterHelp that targeted similar users. The commission added that BetterHelp did this despite promising users their personal data would not be used or disclosed except for limited purposes, such as to provide counseling services. It also said that BetterHelp's Facebook campaign helped it acquire tens of thousands of new paying users and millions of dollars in revenue. The FTC banned BetterHelp from sharing users' private health data with third parties and ordered it to pay $7.8 million toward partial refunds to customers — the first time the commission has penalized a mental-health company for mishandling users' private data. BetterHelp decided to settle, but it did not accept any wrongdoing and defended its methods as "industry-standard."

It wasn't the first time BetterHelp had been called out for its questionable data practices. In a 2020 investigation, Jezebel found that the company shared with Facebook the metadata — but not the content — of every message sent by Jezebel's writers during a therapy session on the BetterHelp app. The metadata allowed Facebook to know what time of day users went for therapy, their approximate location, and how long they chatted on the app.
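To see how revealing "just metadata" can be, consider a hypothetical sketch in Python of the kind of event a tracking tool might log when a user sends a message. The field names and values here are invented for illustration (they are not BetterHelp's or Facebook's actual schema), but even without the message text, such a record shows when someone was in therapy, roughly where they were, and for how long.

```python
# Hypothetical illustration of "metadata-only" tracking. No message content is
# recorded, yet the remaining fields still describe a therapy session in detail.
# All field names and values are invented for this sketch.
from datetime import datetime, timezone

analytics_event = {
    "event": "message_sent",                  # what happened, not what was said
    "timestamp": datetime(2020, 2, 18, 23, 41, tzinfo=timezone.utc).isoformat(),
    "approx_location": {"city": "Brooklyn", "region": "NY"},  # coarse, e.g. IP-derived
    "session_duration_seconds": 1860,         # about half an hour spent chatting
    "device_id": "example-device-1234",       # a stable ID that links events to one person
}

# A recipient of this record learns that the user was in a late-night therapy
# session, approximately where, and for how long, without ever seeing the text.
print(analytics_event)
```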

BetterHelp's claim that its data-sharing practices were "industry-standard" is more of an indictment of the industry than exculpatory for the company. Days after the FTC order, news broke that Cerebral, another splashy online mental-health company, had admitted to sharing 3.1 million American patients' private health information, including mental-health assessments, with advertisers and social-media platforms. (The company says it has since removed the data-tracking code from its apps.)

The mental-health startups Talkspace and BetterHelp, as well as dozens of others, were also named in a damning 2022 report by the Mozilla Foundation, a nonprofit organization that describes its mission as working to keep the internet healthy. The report highlighted the unscrupulous security and privacy practices of mental-health apps, concluding that the vast majority of mental-health apps tracked, shared, and commercialized their users' most intimate and vulnerable thoughts and feelings. The Mozilla Foundation characterized these practices as "extremely creepy."

"Despite these apps dealing with incredibly sensitive issues — like depression, anxiety, suicidal thoughts, domestic violence, eating disorders, and PTSD — they routinely share data, allow weak passwords, target vulnerable users with personalized ads, and feature vague and poorly written privacy policies," the report said.

Earlier this year, a study by Joanne Kim at Duke University's Technology Policy Lab exposed the frightening extent to which mental-health data had been commoditized. While it isn't clear where brokers get their mental-health data because of a lack of regulation, the study found that these brokers traded the data in the "open market, with seemingly minimal vetting of customers and seemingly few controls on the use of purchased data."

Some data brokerage firms charged upward of $75,000 or $100,000 a year for access to data that they claimed included information on individuals' mental-health conditions. One broker charged $275 for 5,000 aggregated counts of Americans' mental-health records, the study found. Another "advertised highly sensitive mental health data to the author, including names and postal addresses of individuals with depression, bipolar disorder, anxiety issues, panic disorder, cancer, PTSD, OCD, and personality disorder, as well as individuals who have had strokes and data on those people's races and ethnicities," the report added.

There are enough health-focused startups that have failed to keep private health data safe that, at this point, it's clear the problem is industrywide. There's something deeply troubling about friendly-facing apps that promise to help people with their most sensitive problems betraying users' trust. The bigger challenge is how to hold these companies accountable for their actions.

Legal gray areas

Following the Mozilla report in the summer, a group of US senators sent a letter to Talkspace and BetterHelp asking them to explain their data policies. The senators expressed concerns about how these companies shared confidential user data with "third-party Big Tech firms and data brokers, who have shown remarkably little interest in protecting vulnerable consumers and users."

But the increased congressional scrutiny may not amount to much. While these activities sound illegal, they may not be. The Health Insurance Portability and Accountability Act, created in the 1990s, extends only to "covered entities" in the US, such as your doctor's office or hospital, and their "business associates." Digital-health tools occupy a gray area under HIPAA, a flaw that can be — and is — exploited by all manner of businesses, from virtual-therapy platforms to period trackers.

"As I understand it, HIPAA does not apply to direct-to-consumer healthcare products, which would include the vast majority of mental-health apps," Piers Gooding, a researcher at the Melbourne Law School and associate editor of the International Journal of Mental Health and Capacity Law, told me. "The FDA and FTC may play roles in evaluating these direct-to-consumer technologies and their claims," Gooding added. "But there remain gaps. For example, the FTC doesn't seem to cover data gathered by nonprofit organizations, which was a concern raised in the Crisis Text Line case."

When questioned about data violations, these companies often hide behind the excuse that they had secured their users' "legal consent" by requiring them to agree to the platforms' terms and conditions when they signed up. In the real world, however, legal consent rarely translates to meaningful consent.

"Nearly all consent forms we come across on the internet are dense texts filled with inaccessible legal jargon," Deepa Singh, an AI-ethics researcher at the University of Delhi, wrote after the Crisis Text Line controversy. On top of that, giving consent is usually a one-time event in the life cycle of a user using a particular service. Meaningful consent is more nuanced. It requires a clear understanding of what is being asked of the user, can change over time, and "does not hide behind obfuscation," Singh argued.

A 2020 survey found that only 9% of US consumers were willing to share their personal health information with a healthcare-tech company. Yet apps that betray users' health data remain popular. When Erica Camacho, Asher Cohen, and John Torous, psychiatry researchers at Boston's Beth Israel Deaconess Medical Center, analyzed 578 mental-health apps, they found no correlation between the apps' problematic privacy features and their popularity.

A willing accomplice to surveillance capitalism

The signs of cultural churn at the major online mental-health companies aren't limited to murky data dealings. Crisis Text Line, for instance, fired its founding CEO following accusations of racism from employees (this was before its data-sharing practices came to light). Cerebral fired its founder and CEO after it was accused of improperly prescribing controlled substances to its users. And now the company is downsizing — it laid off hundreds of employees at the end of February — as it struggles to cope with a tumultuous year of public scrutiny. Talkspace's cofounders left the company at the end of 2021, which the industry outlet Behavioral Health Business attributed to lackluster financial results, and its president and chief operating officer followed suit in the wake of an "internal review of his conduct."

To be fair, you could see the mess as the industry's growing pains. One of the first popular mental-health apps, PTSD Coach, was launched by the US Department of Veterans Affairs in 2011. The explosive growth in the number of apps since then — some 20,000 of them are in circulation, according to one estimate — makes it easy to forget that this is an industry barely in its teens.

However, the pattern of repeated deception and callous disregard for user safety cannot be entirely written off as missteps by a bunch of excitable startups. Predatory use of sensitive data is often baked into the business model of tech companies. But for mental-health companies these practices can undermine the very foundations of mental-health care: dignity, trust, and psychological safety.

As Crisis Text Line wrote on its website extolling its deal with Loris: "Why sell T-shirts when you can sell what your organization does best?"

Tanmoy Goswami is a user-survivor and creator of Sanity, India's first independent mental-health-storytelling platform. He is a graduating fellow at the Reuters Institute for the Study of Journalism, University of Oxford.

Correction: April 21, 2023 — An earlier version of this article misrepresented the current business relationship between Loris and Crisis Text Line. They are now separate entities.
