
Facebook to pay $52m to content moderators over PTSD

  • 13 May 2020
Image caption: Facebook moderators working at its offices in Austin, Texas (image copyright: Getty Images)

Facebook has agreed to pay $52m (£42m) to content moderators as compensation for mental health issues developed on the job.

The agreement settles a class-action lawsuit brought by the moderators, as first reported by The Verge.

Facebook said it uses both humans and artificial intelligence (AI) to detect posts that violate its policies.

The social media giant has increased its use of AI to remove harmful content during the coronavirus lockdown.

In 2018, a group of US moderators hired by third-party companies to review content sued Facebook for failing to create a safe work environment.

The moderators alleged that reviewing violent and graphic images - sometimes of rape and suicide - for the social network had led to them developing post-traumatic stress disorder (PTSD).

The agreement, filed in court in California on Friday, settles that lawsuit. A judge is expected to sign off on the deal later this year.

The agreement covers moderators who worked in California, Arizona, Texas and Florida from 2015 until now. Each moderator, both former and current, will receive a minimum of $1,000, as well as additional funds if they are diagnosed with PTSD or related conditions. Around 11,250 moderators are eligible for compensation.

Facebook also agreed to roll out new tools designed to reduce the impact of viewing the harmful content.

A spokesperson for Facebook said the company was "committed to providing [moderators] additional support through this settlement and in the future".

Moderating the lockdown

In January, Accenture, a third-party contractor that hires moderators for social media platforms including Facebook and YouTube, began asking workers to sign a form acknowledging they understood the job could lead to PTSD.

The agreement comes as Facebook looks for ways to bring more of its human reviewers back online after the coronavirus lockdown ends.

Image caption: Facebook has increased its use of AI to detect misleading information about the coronavirus outbreak (image copyright: NurPhoto)

The company said many human reviewers were working from home, but some types of content could not be safely reviewed in that setting. Moderators who have not been able to review content from home have been paid, but are not working.

To offset the loss of human reviewers, Facebook boosted its use of AI to moderate content instead.

In its fifth Community Standards Enforcement Report, released on Tuesday, the social media giant said AI helped it proactively detect 90% of the hate speech content it removed.

AI has also been crucial in detecting harmful posts about the coronavirus. Facebook said in April that it was able to put warning labels on around 50 million posts that contained misleading information on the pandemic.

However, the technology still struggles at times to recognise harmful content in images and video. Human moderators are often better at detecting nuance or wordplay in memes and video clips, allowing them to spot harmful content more easily.

Facebook says it is now developing a neural network called SimSearchNet that can detect nearly identical copies of images that contain false or misleading information.

According to the social media giant's chief technology officer, Mike Schroepfer, this will allow human reviewers to focus on "new instances of misinformation", rather than looking at "near-identical variations" of images they have already reviewed.
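
The article does not describe how SimSearchNet works internally, but the general idea of near-duplicate image detection can be sketched with a much simpler, classical technique: reduce each image to a compact fingerprint, then treat small differences between fingerprints as near-duplicates. The Python sketch below uses a basic perceptual "average hash" for illustration only; the file names, threshold value, and hashing method are assumptions for the example, not Facebook's actual implementation.

```python
# A minimal sketch of near-duplicate image detection via an "average hash".
# Illustrative only: SimSearchNet is a learned neural network, not this hash.
# Requires Pillow (pip install Pillow). File names below are placeholders.

from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Downscale to hash_size x hash_size grayscale, then set one bit per
    pixel depending on whether it is brighter than the image's mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Flag candidates that look like an already-reviewed image, so human
# reviewers only see genuinely new content.
THRESHOLD = 5  # max differing bits; would be tuned on labelled data

reviewed = average_hash("reviewed_misinfo.jpg")
candidate = average_hash("incoming_post.jpg")

if hamming_distance(reviewed, candidate) <= THRESHOLD:
    print("Near-duplicate of a reviewed image: apply the existing label")
else:
    print("New image: route to a human reviewer")
```

A simple hash like this breaks down under crops, overlaid text, or heavy re-encoding, which is presumably why a learned model is used at Facebook's scale; the overall shape of the pipeline (fingerprint, then distance threshold) stays the same.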



Source: BBC News - Technology
