
Facebook harmful content figures ‘underplay’ scale of problem, says NSPCC

The social network has been accused of using ‘self-selective reporting’ on removed harmful content in its latest transparency report.

[Image: A woman on a smartphone in front of the Facebook logo]

Facebook has been accused of using “selective big numbers” to promote its latest transparency report.

On Wednesday, the social network published figures on the amount of harmful and abusive content it had taken down in recent months.

However, children’s charity the NSPCC said the social network’s statistics “underplayed” the experience of vulnerable young people who see such images.

Andy Burrows, the charity’s head of child safety online, said: “It is incredibly disappointing but unfortunately not surprising that Facebook has yet again used selective big numbers that don’t show us the full picture.

“The statistics we’ve heard on the number of self-harm images is likely to underplay the lived experience of a vulnerable young person on social media being served up these images by an algorithm.”

In its report, Facebook said it was now proactively removing more harmful and abusive content than ever – revealing it had detected and removed around 2.5 million pieces of content linked to self-harm between July and September this year.

The firm’s technology also proactively detected around 11.6 million pieces of content related to child exploitation and abuse, up from around 5.8 million in the first three months of the year.

The social network said 99% of the child exploitation content had been proactively detected by its technology and reported similar levels of detection for content related to terrorism, suicide and self-harm.

Facebook also published removal figures for Instagram for the first time as part of the report, which showed the photo-sharing platform does not yet proactively detect self-harm images as often as Facebook does.

Facebook’s vice president for integrity, Guy Rosen, said the company would continue to invest in detection technology and artificial intelligence (AI) in order to further increase the amount of content that is removed.

“Across the most harmful types of content we work to combat, we’ve continued to strengthen our efforts to enforce our policies and bring greater transparency to our work,” he said.

“In addition to suicide and self-injury content and terrorist propaganda, the metrics for child nudity and sexual exploitation of children, as well as regulated goods, demonstrate this progress.

“The investments we’ve made in AI over the last five years continue to be a key factor in tackling these issues.

“In fact, recent advancements in this technology have helped with the rate of detection and removal of violating content.”

However, Mr Burrows said social networks should no longer be able to self-report their content moderation.

“The time for this self-selective reporting must end now and reinforces why we need an independent regulator in place who can call social media networks to account,” he said.

A Government White Paper on Online Harms, published earlier this year, proposed introducing an independent regulator for social media companies, with greater penalties for those that breach a proposed statutory duty of care by exposing their users to harmful or abusive content.
