Facebook offers the first estimate of the prevalence of hate speech on its platform
Facebook on Thursday released figures on the prevalence of hate speech on its platform for the first time, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech.
The world’s largest social media company, under scrutiny over its policing of abuse, particularly around the November presidential election in the United States, released the estimate in its quarterly content moderation report.
It said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95 percent of which were proactively identified, compared with 22.5 million in the prior quarter.
The company defines “taking action” as removing content, covering it with a warning, disabling accounts, or escalating it to outside agencies.
This summer, civil rights groups organized a widespread advertising boycott to try to pressure Facebook to act against hate speech.
The company agreed to disclose the hate speech metric, calculated by examining a representative sample of content viewed on Facebook, and to undergo an independent audit of its enforcement record.
In a call with reporters, Facebook’s head of safety and integrity, Guy Rosen, said the audit would be completed “in the course of 2021.”
The Anti-Defamation League, one of the groups behind the boycott, said Facebook’s new metric still lacked sufficient context for a full evaluation of its performance.
“We don’t yet know from this report exactly how many content items users are flagging on Facebook, whether action was taken or not,” said ADL spokesman Todd Gutnick. That information matters, he said, as “there are many forms of hate speech that are not eliminated, even after they are flagged.”
Rivals Twitter and YouTube, owned by Alphabet’s Google, do not disclose comparable prevalence metrics.
Facebook’s Rosen also said that from March 1 to the November 3 election, the company removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies.
In October, Facebook said it was updating its hate speech policy to ban content that denies or distorts the Holocaust, a reversal of public comments Facebook CEO Mark Zuckerberg had made about what should be allowed.
Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content.
Earlier this week, Zuckerberg and Twitter CEO Jack Dorsey were grilled by Congress about their companies’ content moderation practices, from Republican accusations of political bias to decisions about violent speech.
Last week, Reuters reported that Zuckerberg told an all-staff meeting that former Trump White House adviser Steve Bannon had not violated enough of the company’s policies to justify suspension when he urged the beheading of two U.S. officials.
The company has also come under fire in recent months for allowing large Facebook groups that share false election claims and violent rhetoric to gain traction.
Facebook said its rates of proactively detecting rule-breaking content before users reported it increased in most areas, thanks to improvements in its artificial intelligence tools and the expansion of its detection technologies to more languages.
In a blog post, Facebook said the COVID-19 pandemic continued to disrupt its content review workforce, although some enforcement metrics were returning to pre-pandemic levels.
An open letter from more than 200 Facebook content moderators, published Wednesday, accused the company of forcing these workers to return to the office and “unnecessarily” risking lives during the pandemic.