Facebook admits it was 'too slow' to ban Myanmar regime

But, hey, it's not like it had been warned hundreds of times over several years...

Analysis Facebook has banned 20 organizations and individuals in Myanmar, including the country's commander-in-chief, following a United Nations report formally accusing the military regime of serious human rights abuses.

Despite having received years of complaints about how the authorities were using Facebook to spread hateful rhetoric about the country's Rohingya Muslim minority, the social media giant had failed to act effectively.

It finally did so in the wake of the UN report detailing allegations of murder, imprisonment and sexual violence against the Rohingya, and issued yet another apology in which it said it had been "too slow" to act against the "hate and misinformation" pushed through its service.

In a statement on Monday, Facebook cited the UN report as justification for banning the organizations as well as General Min Aung Hlaing, and noted it was the first time it had banned a state actor from its platform.

"We want to prevent them from using our service to further inflame ethnic and religious tensions," the company said, promising to keep an eye on things in future. But the company has long been warned about how its service was being used as a weapon and critics note is has done little to effectively tackle the problem.

Facebook was first warned back in 2013 that its service was being used to spread dangerously false and hateful messages in Myanmar - and did nothing. Then, in the middle of 2014, its service was used to spread false rumors that a Muslim man had raped a Buddhist woman. That sparked a series of riots that killed two people, injured many more, and caused the Myanmar government to call for a meeting with Facebook execs.

Yeah, could you email me a link?

Facebook reportedly told government representatives to email any future examples of dangerous information, promising to review them. That system was woefully inadequate, however, and may even have encouraged the authorities to push or post their own divisive messages.

One year later, in 2015, civil society groups started complaining loudly that the issue was getting worse and pointed to the fact that Facebook only had two Burmese-speaking moderators as evidence that it was not taking the issue seriously.

But it was only three years after that, when the US Congress raised concerns at a meeting with Facebook CEO Mark Zuckerberg, that the company started making serious efforts and blocked some of the many thousands of hate-filled messages on its site.

The Congressional comments also set off an investigation by Reuters, which issued a report earlier this month detailing over 1,000 posts and videos that viciously attacked the Rohingya, accusing them of being maggots, dogs, pigs, rapists and so on, often with explicit threats of physical violence.

Meanwhile, Facebook refused to say how many Burmese-speaking moderators it had, claiming that such a figure would be "misleading" because moderators don't need to speak a language to tackle issues like nudity.

Of course, what campaigners were concerned about was not nudity but posts like the review of a Rohingya restaurant that read: "We must fight them the way Hitler did the Jews, damn kalars!"

After the Reuters report was published, Facebook finally admitted in a blog post that it had 60 Burmese-speaking moderators and was planning to hire another 40. But in the same paragraph it stressed that it was working on "building artificial intelligence tools that help us identify abusive posts" and that its team was focused on "working with civil society and building digital literacy programs for people in Myanmar."

Palming the problem off

Previously, Facebook has suggested that the solution to hate speech lies in the hands of civil society groups reporting incidents to Facebook, rather than, say, active moderation on Facebook's part.

Those same civil society groups have complained that Facebook's AI approach is largely worthless because it works by searching for specific Burmese keywords rather than applying any genuinely intelligent reading of posts and their context.

Facebook has also declined to delete posts flagged as hateful, deciding only to "de-rank" them; only posts containing clear, explicit threats of violence are removed. Questions over the efficacy of this approach have repeatedly been met with Facebook PR-speak about identifying abuses, rounded off with touchy-feely nonsense about people enjoying "the benefits of connectivity."

UN reports are not known for their speediness, which makes it all the more worrying that a company working at internet speed would wait until the report's publication to take significant action.

In fact, Facebook's persistent failure to get ahead of the abuse on its platform, relying only on small specific actions saturated in public relations spiel once it has been thoroughly embarrassed, has focused attention on the company's failings.

Cultural failings

As an engineer-led company with a highly controlling central figure in CEO Zuckerberg, Facebook has many of the same characteristics as authoritarian regimes and totalitarian systems, academics have noted.

Facebook certainly has the money to fix many of its problems but stubbornly refuses to do so, relying instead on fake apologies, PR pushes around specific actions, misleading statements, and a stupidly stubborn insistence that you can code your way around any human problem.

In short, Facebook is a company that is completely incapable of handling any problems larger than a few thousand well-educated American students. While its platform's popularity has grown beyond any reasonable expectation, its founder – and the central figure in its evolution – has barely left the dorm room. ®
