
Content moderators who worked on ChatGPT say they were traumatized by reviewing graphic content: 'It has destroyed me completely.'

Beatrice Nolan   
  • Kenyan moderators who reviewed content for ChatGPT say they had to read pages of disturbing text.
  • A former Sama employee told The Guardian that he often read up to 700 text passages every day.

Kenyan moderators who reviewed content for OpenAI's ChatGPT say they were made to read graphic text passages, were poorly paid, and were offered limited support, according to a new report by The Guardian.

The moderators were employed by Sama, a California-based data-annotation company that had a contract with OpenAI and offered data-labeling services to Google and Microsoft. Sama ended its partnership with OpenAI in February 2022. Time reported that the California company cut ties with OpenAI because of concerns about working with potentially illegal content for AI training.

A moderator who reviewed content for OpenAI, Mophat Okinyi, told The Guardian that he often read up to 700 text passages daily. He said that many of these passages centered on sexual violence and that the work caused him to grow paranoid about those around him. He said this damaged his mental state and his relationship with his family.

Another former moderator, Alex Kairu, told the news outlet that what he saw on the job "destroyed me completely." He said that he became introverted and that his physical relationship with his wife deteriorated.

A spokesperson for OpenAI told Insider: "Human data annotation is one of the many streams of our work to collect human feedback and guide the models toward safer behavior in the real world. We believe this work needs to be done humanely and willingly, which is why we establish and share our own ethical and wellness standards for our data annotators.

"We recognize this is challenging work for our researchers and annotation workers in Kenya and around the world—their efforts to ensure the safety of AI systems has been immensely valuable."

The moderators told The Guardian that the content up for review often depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse. A Sama spokesperson told the news outlet that workers were paid from $1.46 to $3.74 an hour. Time previously reported that the data labelers were paid less than $2 an hour to review content for OpenAI.

Now, four of the moderators are calling on the Kenyan government to investigate the working conditions during the contract between OpenAI and Sama, per The Guardian.

The moderators said they weren't given adequate support for their work. Sama disputed this claim and told Insider that workers had access to group and individual wellness counseling via professionally trained and licensed mental health therapists. Medical benefits also included reimbursement for visits to professionals such as psychiatrists, the spokesperson said.

"We are in agreement with those who call for fair and just employment, as it aligns with our mission," a Sama spokesperson told The Guardian, "and believe that we would already be compliant with any legislation or requirements that may be enacted in this space."

Sama said it had made the decision to exit all natural-language processing and content moderation work to focus on computer vision data-annotation solutions. A representative told Insider the exit was complete as of March 2023.


