Facebook’s Attempt To Stop Court Case Against It In Kenya Fails
The Employment and Labour Relations Court has ruled that Facebook's parent firm, Meta, has a case to answer and can be sued in Kenya.
In a ruling delivered on Monday morning, Justice Jacob Gakeri declined to strike out Meta Platforms Inc and Meta Platforms Ireland Ltd from a case filed by Daniel Motaung, a former Facebook moderator from South Africa who has sued the social technology company over a toxic work environment.
“My finding is that 2nd and 3rd respondent shall not be struck,” the judge said.
Meta wanted to be removed from the case, arguing that Kenyan courts do not have jurisdiction since the foreign corporations are not domiciled or trading in Kenya.
Mr Motaung filed the case last year, claiming that he and his colleagues suffered psychological injuries from repeated exposure to extremely disturbing, graphic violent content coupled with a toxic working environment.
Mr Motaung was employed as a moderator by Meta’s local outsourcing company, Samasource Kenya EPZ Ltd. He wants to be paid damages for his suffering during the six months he worked for the firm and for working extra hours without pay, Business Daily reported.
Cases of tech firms, especially social media companies, being sued by content moderators are not new. Giant platforms such as Facebook and Twitter employ moderators who are tasked with identifying and pulling down graphic and harmful content on the platforms.
In many cases, these moderators do not stay at their workplaces for long, and many end up suing their employers, citing psychological torture and trauma caused by their working environment. In the course of their duties, they are repeatedly exposed to extremely disturbing, graphic and violent content.
Just recently, a Time investigation revealed how OpenAI used outsourced Kenyan laborers earning less than $2 per hour.
To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, which had already shown it was possible to build AIs that could detect toxic language, such as hate speech, and help remove it from their platforms. The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between roughly $1.32 and $2 per hour, depending on seniority and performance.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man performing an indecent act with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.