Facebook’s ‘oversight’ body overturns four takedowns and issues a slew of policy suggestions

Facebook’s self-regulatory ‘Oversight Board’ (FOB) has delivered its first batch of decisions on contested content moderation decisions almost two months after picking its first cases.

A long time in the making, the FOB is part of Facebook’s crisis PR push to distance its business from the impact of controversial content moderation decisions — by creating a review body to handle a tiny fraction of the complaints its content takedowns attract. It started accepting submissions for review in October 2020 — and has faced criticism for being slow to get off the ground.

Announcing the first decisions today, the FOB reveals it has chosen to uphold just one of the content moderation decisions made earlier by Facebook, overturning four of the tech giant’s decisions.

Decisions on the cases were made by five-member panels that contained at least one member from the region in question and a mix of genders, per the FOB. A majority of the full Board then had to review each panel’s findings to approve the decision before it could be issued.

The sole case where the Board has upheld Facebook’s decision to remove content is case 2020-003-FB-UA — where Facebook had removed a post under its Community Standard on Hate Speech which had used the Russian word “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed have no history compared to Armenians.

In the four other cases the Board has overturned Facebook takedowns, rejecting earlier assessments made by the tech giant in relation to policies on hate speech, adult nudity, dangerous individuals/organizations, and violence and incitement. (You can read the outline of these cases on its website.)

Each decision relates to a specific piece of content but the board has also issued nine policy recommendations.

These include suggestions that Facebook [emphasis ours]:

  • Create a new Community Standard on health misinformation, consolidating and clarifying the existing rules in one place. This should define key terms such as “misinformation.”
  • Adopt less intrusive means of enforcing its health misinformation policies where the content does not reach Facebook’s threshold of imminent physical harm.
  • Increase transparency around how it moderates health misinformation, including publishing a transparency report on how the Community Standards have been enforced during the COVID-19 pandemic. This recommendation draws upon the public comments the Board received.
  • Ensure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing. (The Board made two identical policy recommendations on this front related to the cases it considered, also noting in relation to the second hate speech case that “Facebook’s lack of transparency left its decision open to the mistaken belief that the company removed the content because the user expressed a view it disagreed with”.)
  • Explain and provide examples of the application of key terms from the Dangerous Individuals and Organizations policy, including the meanings of “praise,” “support” and “representation.” The Community Standard should also better advise users on how to make their intent clear when discussing dangerous individuals or organizations.
  • Provide a public list of the organizations and individuals designated as ‘dangerous’ under the Dangerous Individuals and Organizations Community Standard or, at the very least, a list of examples.
  • Inform users when automated enforcement is used to moderate their content, ensure that users can appeal automated decisions to a human being in certain cases, and improve automated detection of images with text-overlay so that posts raising awareness of breast cancer symptoms are not wrongly flagged for review. Facebook should also improve its transparency reporting on its use of automated enforcement.
  • Revise Instagram’s Community Guidelines to specify that female nipples can be shown to raise breast cancer awareness and clarify that where there are inconsistencies between Instagram’s Community Guidelines and Facebook’s Community Standards, the latter take precedence.

Where it has overturned Facebook takedowns the board says it expects Facebook to restore the specific pieces of removed content within seven days.

In addition, the Board writes that Facebook will also “examine whether identical content with parallel context associated with the Board’s decisions should remain on its platform”, and says Facebook has 30 days to publicly respond to its policy recommendations.

So it will certainly be interesting to see how the tech giant responds to the laundry list of proposed policy tweaks — perhaps especially the recommendations for increased transparency (including the suggestion it inform users when content has been removed solely by its AIs) — and whether Facebook is happy to align entirely with the policy guidance issued by the self-regulatory vehicle (or not).

Facebook created the board’s structure and charter and appointed its members — but has encouraged the notion that it’s ‘independent’ from Facebook, even though it also funds the FOB (indirectly, via a foundation it set up to administer the body).

And while the Board claims its review decisions are binding on Facebook there is no such requirement for Facebook to follow its policy recommendations.

It’s also notable that the FOB’s review efforts are entirely focused on takedowns — rather than on things Facebook chooses to host on its platform.

Given all that it’s impossible to quantify how much influence Facebook exerts on the Facebook Oversight Board’s decisions. And even if Facebook swallows all the aforementioned policy recommendations — or more likely puts out a PR line welcoming the FOB’s ‘thoughtful’ contributions to a ‘complex area’ and says it will ‘take them into account as it moves forward’ — it’s doing so from a place where it has retained maximum control of content review by defining, shaping and funding the ‘oversight’ involved.

tl;dr: An actual supreme court this is not.

In the coming weeks, the FOB will likely be most closely watched over a case it accepted recently — related to Facebook’s indefinite suspension of former US president Donald Trump, after he incited a violent assault on the US Capitol earlier this month.

The board notes that it will be opening public comment on that case “shortly”.

“Recent events in the United States and around the world have highlighted the enormous impact that content decisions taken by internet services have on human rights and free expression,” it writes, going on to add that: “The challenges and limitations of the existing approaches to moderating content draw attention to the value of independent oversight of the most consequential decisions by companies such as Facebook.”

But of course this ‘Oversight Board’ is unable to be entirely independent of its founder, Facebook.

By TechCrunch
