Facebook Inc. revealed figures on the prevalence of hate speech on its platform for the first time on Thursday, saying that 10 to 11 out of every 10,000 content views in the third quarter included hate speech.
The world’s largest social media company, which has been under scrutiny over its policing of abuses, particularly around the November US presidential election, published the estimates in its quarterly content moderation report.
Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which it identified proactively, compared with 22.5 million in the previous quarter.
The company defines “taking action” as removing content, covering it with a warning, disabling accounts or escalating it to outside agencies.
This summer, civil rights groups organized a widespread advertiser boycott in an attempt to pressure Facebook to act against hate speech.
The company agreed to disclose the hate speech metric, which is calculated by examining a representative sample of content viewed on Facebook, and to submit itself to an independent audit of its enforcement record.
In a call with reporters, Facebook’s head of safety and integrity, Guy Rosen, said the audit would be completed over the course of 2021.
The Anti-Defamation League, one of the groups behind the boycott, said Facebook’s new metric still lacked sufficient context to fully assess its performance.
“We still don’t know from this report how many pieces of content users are flagging to Facebook, or whether action was taken,” ADL spokesman Todd Gutnick said. That data matters, he said, because “there are many forms of hate speech that are not being removed, even after they are flagged.”
Rivals Twitter and YouTube, the latter owned by Alphabet Inc.’s Google, do not disclose comparable prevalence statistics.
Facebook’s Rosen also said that from March 1 to the November 3 election, the company removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies.
In October, Facebook said it was updating its hate speech policy to ban content that denies or distorts the Holocaust, a reversal from earlier public comments by Facebook CEO Mark Zuckerberg about what should be allowed on the platform.
Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content.
Earlier this week, Zuckerberg and Twitter Inc. CEO Jack Dorsey were questioned by Congress over their companies’ content moderation practices, on issues ranging from Republican allegations of political bias to decisions about violent speech.
Last week, Reuters reported that Zuckerberg told an all-staff meeting that former Trump White House adviser Steve Bannon had not violated enough of the company’s policies to justify suspension when he urged the beheading of two US officials.
The company has also come under fire in recent months for allowing large Facebook groups sharing false election claims and violent rhetoric to gain traction.
Facebook said its rates for finding rule-breaking content before users reported it were up in most areas, due to improvements in its artificial intelligence tools and the expansion of its detection technologies to more languages.
In a blog post, Facebook said the COVID-19 pandemic continued to disrupt its content review workforce, although some enforcement metrics have returned to pre-pandemic levels.
In an open letter posted Wednesday at https://www.foxglove.org.uk/news/open-letter-from-content-moderators-re-pandemic, more than 200 Facebook content moderators accused the company of forcing them back to the office and “needlessly risking” their lives during the pandemic.
“The facilities meet or exceed the guidance for a safe workspace,” Facebook’s Rosen said.
(Except for the title, this story was not edited by NDTV staff and posted from a syndicated feed.)