Hundreds of Facebook moderators complain: AI content moderation isn't working and we're paying for it

Human contractors battling COVID-19 stress and psychological trauma

Facebook’s AI algorithms aren’t effective enough to automatically screen for violent images or child abuse, leaving the job to human moderators who are complaining about having to come into an office to screen harmful content during the coronavirus pandemic.

In an open letter to the social media giant, more than 200 content moderators said the company’s technology was not up to the job. “It is important to explain that the reason you have chosen to risk our lives is that this year Facebook tried using ‘AI’ to moderate content—and failed,” it said.

As COVID-19 spread across the world, Facebook ramped up its efforts to use machine-learning algorithms to automatically remove toxic posts. The letter, backed by Foxglove, a tech-focused non-profit, said the technology was supposed to make life easier for human moderators: the worst content, such as graphic images of self-harm, violence, or child abuse, would be screened out beforehand, leaving them with less harmful work like removing hate speech or misinformation.

Initially there was some success, Cori Crider, director of Foxglove, told The Register. “During the at-home work period, at first, we did have reports of a decrease in people’s exposure to graphic content. But then, it appears from Facebook’s own transparency documents that this meant non-violating content got taken down and problematic stuff like self harm stayed up. This is the source of the drive to force these people back to the office.”

The moderators are kept six feet apart, but there have been numerous cases of staff being infected with COVID-19 across multiple offices. “Workers have asked Facebook leadership, and the leadership of your outsourcing firms like Accenture and CPL, to take urgent steps to protect us and value our work. You refused. We are publishing this letter because we are left with no choice,” the letter continued.

Now, the moderators are asking Facebook to let more of them work from home and to pay higher wages to those who must come into the office. They also want the company to offer health care and mental health services to help them deal with the psychological toll of content moderation.

A Facebook spokesperson told El Reg in a statement that the company already offers healthcare benefits and that most moderators have been working from home during the pandemic.

“We appreciate the valuable work content reviewers do and we prioritize their health and safety. While we believe in having an open internal dialogue, these discussions need to be honest," the spokesperson said.

"The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic. All of them have access to health care and confidential wellbeing resources from their first day of employment, and Facebook has exceeded health guidance on keeping facilities safe for any in-office work.”

Although the moderators receive some support, they don’t get the same benefits as full-time Facebook employees do. “It is time to reorganize Facebook’s moderation work on the basis of equality and justice. We are the core of Facebook’s business. We deserve the rights and benefits of full Facebook staff,” the moderators concluded. ®
