Content moderators who reviewed material for OpenAI’s ChatGPT say they were traumatized by the graphic content, with one stating that the work completely destroyed him.
Kenyan Moderators Speak Out About Disturbing Content and Poor Working Conditions at OpenAI
Moderators who reviewed content for OpenAI’s ChatGPT have recently come forward with shocking revelations about the nature of their work and the adverse conditions they faced. A recent report by The Guardian sheds light on the experiences of these Kenyan moderators, highlighting the graphic and disturbing text passages they were exposed to on a daily basis.
The moderators, employed by Sama, a California-based data annotation company, worked under a contract with OpenAI. Sama’s services have also been used by tech giants like Google and Microsoft. The company ended its partnership with OpenAI in February 2022, reportedly over concerns about handling potentially illegal content used for AI training.
Mophat Okinyi, one of the moderators assigned to review content for OpenAI, told The Guardian that he would often read up to 700 text passages per day, many of them depicting scenes of sexual violence that grew increasingly distressing over time. The toll on his mental health was substantial: he says the work made him paranoid and strained his relationships with loved ones.
Another former moderator, Alex Kairu, said the job “destroyed” him. The content he encountered took a profound toll on his well-being: he says he became withdrawn and that his physical relationship with his wife deteriorated. These testimonies underscore the immense mental and emotional toll that exposure to such disturbing content can take.
According to The Guardian report, the content reviewed by moderators often featured graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse. For this harrowing work, the moderators earned hourly wages of just $1.46 to $3.74, far below any reasonable standard of fair pay for work of this nature. Time previously reported that data labelers working for OpenAI were paid less than $2 an hour for content review.
Four of the moderators are now calling on the Kenyan government to investigate the working conditions during the contract period between OpenAI and Sama, hoping to shed light on what they endured and to seek redress for their treatment.
The moderators also claim they were not provided with adequate mental health support while carrying out their duties. Sama disputes this, stating that workers had round-the-clock access to therapists and received other medical benefits. The moderators maintain that the support was insufficient and that they deserve greater recognition and assistance in coping with the emotional toll of the work.
OpenAI has not yet responded to requests for comment on these allegations and did not provide The Guardian with a statement, leaving the matter unresolved and raising questions about its responsibility for the well-being of its contract workers.
As this story gains attention, it underscores the importance of ethical standards and decent working conditions in artificial intelligence. It raises serious questions about the extent to which AI development should expose human moderators to disturbing content, and about the ethical and legal implications of using such material to train AI models.
The experiences of these Kenyan moderators highlight the urgent need for more transparent and comprehensive guidelines to ensure the well-being and mental health of those involved in content moderation for AI systems. It is vital to prioritize the ethical treatment and fair compensation of individuals performing this essential but emotionally demanding work.