
Sam Altman says society may decide we need AI-client privilege similar to confidentiality with lawyers or doctors

Ana Altchek   

  • Sam Altman discussed the possibility of granting some confidentiality around what people tell AI.
  • As AI systems grow more popular, safeguarding sensitive info people share with them is key.

Should the sensitive information we share with AI be regulated under a confidentiality agreement similar to attorney-client privilege?

Sam Altman mulled the idea in a recent interview with The Atlantic, saying society may "decide there's some version of AI privilege."

"When you talk to a doctor or a lawyer, there's medical privileges, legal privileges," Altman said. "There's no current concept of that when you talk to an AI, but maybe there should be."

The topic came up during a conversation between the OpenAI CEO and the media mogul Arianna Huffington about their new AI health venture, Thrive AI Health. The company promises an AI health coach that tracks users' health data and provides personalized recommendations on things like sleep, movement, and nutrition.

As companies have increasingly implemented AI systems and products, regulating how data is stored and shared has become a hot topic.

Laws like HIPAA make it illegal for doctors to disclose a patient's sensitive health information without the patient's permission. That protection is designed to help people feel comfortable being honest with their doctors, which can lead to better treatment.

But some people still have trouble opening up to doctors or seeking medical attention — Altman said that's part of what motivated him to get involved with Thrive AI Health. In an op-ed article Altman and Huffington wrote for Time, they also cited healthcare costs and accessibility.

Altman said he was surprised by how many people were willing to share information with large language models, or LLMs, the AI systems that power chatbots like ChatGPT or Google's Gemini. He told The Atlantic he'd read Reddit threads about people who found success telling LLMs things they weren't comfortable sharing with others.

While Thrive AI is still figuring out what its product will look like, Huffington said she envisioned it being "available through every possible mode," including workplace platforms.

That raises concerns about data storage and regulation. Big tech companies have already faced lawsuits alleging they trained their AI models on content they didn't have a licensing agreement for. Health information is some of the most valuable and private data people have, and companies could use it to train LLMs.

Altman told The Atlantic it would be "super important to make it clear to people how data privacy works."

"But in our experience, people understand this pretty well," Altman added.

OpenAI's Startup Fund and Thrive Global announced the launch of Thrive AI Health last week, saying it sought to use AI "to democratize access to expert-level health coaching" and tackle "growing health inequities."


