Facebook has published the latest version of its Community Standards Enforcement Report, which outlines all of the policy and rule violations that Facebook has taken action on, across both Facebook and Instagram, over the preceding three months.
The latest version of the report utilizes a new layout to present the data, which is essentially a large page of charts.
That layout is a little hard to follow, so we’ve broken out some of the key notes from the data below to highlight the latest shifts. It’s also worth noting that Facebook’s enforcement efforts in the last quarter were impacted by staffing shifts due to COVID-19.
Back in March, Facebook shut down its moderation centers due to the global lockdowns, which reduced its human intervention capacity by some 35,000 workers. Facebook has since re-opened most of its operating facilities, and is working to get back to full capacity, but for the majority of the preceding months, much of its moderation work was governed by its AI detection systems.
That shift has had varying impacts.
“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram. Despite these decreases, we prioritized and took action on the most harmful content within these categories. […] The number of appeals is also much lower in this report because we couldn’t always offer them.”
With that in mind, here are a couple of key notes from the latest Community Standards Enforcement report.
First off, Facebook took a lot more action on hate speech in the quarter – almost 10x more than in the same period back in 2018.
Facebook attributes that to improvements in its technology, and to expanding its detection to more languages.
“Our proactive detection rate for hate speech on Facebook increased 6 points from 89% to 95%. In turn, the amount of content we took action on increased from 9.6 million in Q1 to 22.5 million in Q2. This is because we expanded some of our automation technology in Spanish, Arabic and Indonesian and made improvements to our English detection technology in Q1.”
Facebook, as you may recall, was the subject of a massive advertiser boycott last month due to its perceived inaction in addressing hate speech. These numbers suggest that perception wasn’t entirely accurate – which would then indicate that the protest was more specifically about comments from US President Donald Trump, and Facebook’s decision not to remove them, than about hate speech more generally.
That’s still a significant concern – comments from President Trump have far more reach and influence than those of virtually anyone else. But it is worth noting, for context, that Facebook is taking increased action on hate speech, which would appear to be a positive sign.
Worth noting, also, that Facebook is considering possible restrictions on QAnon-linked groups, while it also took down a cluster of Pages linked to extremist groups in the US back in June.
On Instagram, Facebook’s systems are also getting better at detecting and removing nudity and adult content.
Its systems are also detecting more hate speech on that platform.
And content from dangerous organizations is also being removed more often on Instagram – which could also suggest that more of these groups are turning to the app to spread their messaging.
Or, again, it could simply be that Facebook is getting better at detecting it. Either way, an increase in removals is obviously a better outcome overall.
In terms of fake profiles, Facebook has reported little change in the amount of fakes it’s finding on its platforms, with fake accounts still making up around 5% of its overall userbase.
Facebook outlined improvements to its process for detecting fake accounts back in April, but as we noted then, it may never be able to get this number down to zero, as scammers continually update their tactics in line with Facebook’s system updates. 5% is probably the best-case scenario – which still means there are more than 125 million active fake profiles on the platform at any given time.
Most of the other numbers reported by Facebook are in-line with ongoing averages – though bullying and harassment concerns did see a rise on Instagram.
That may be due to limited avenues for appeal, which meant more of these claims were upheld during the quarter – another example of a data point skewed by COVID-related staffing shifts.
Overall, though, the data does suggest that Facebook is doing more to address some key areas of concern. And Facebook’s looking to take that further, commissioning an independent audit of its processes to ensure that its quarterly reports accurately reflect the right data.
Again, the impacts of the COVID-19 shutdown have influenced some of the data points here, but it’s worth noting where Facebook stands in these areas, and how it’s continually working to address issues.
You can read Facebook’s full Community Standards Enforcement Report here.