Twitter reports drop in account removals for terrorism content violations
The social media site said it was seeing a steady decrease in terrorist organisations using the platform.
The number of Twitter accounts removed for content violations related to terrorism and child sexual exploitation fell in the second half of 2018, according to the social media platform’s latest transparency report.
Twitter said it also saw a decrease in the amount of attempted platform manipulation carried out by spam and bot accounts.
The report, released by the site every six months, said that 166,513 accounts were removed for terrorism content, down 19% on the previous six months.
Social media companies have been repeatedly criticised for failing to act quickly enough to remove dangerous content and accounts from their platforms.
The UK government recently published a white paper on online harms, which called for a statutory duty of care for internet firms, to be enforced by a new independent regulator.
Twitter said 91% of the accounts removed for terrorism content were found by its internal technology tools, adding that it was seeing a steady decrease in terrorist organisations using the platform.
“This is due to zero tolerance policy enforcement that has allowed us to take swift action on ban evaders and other identified forms of behaviour used by terrorist entities and their affiliates,” Twitter’s legal, policy and trust and safety head Vijaya Gadde said.
“In the majority of cases, we take action at the account setup stage, before the account even tweets.
“We are encouraged by these metrics but will remain vigilant.
“Our goal is to stay one step ahead of emergent behaviours and new attempts to circumvent our robust approach.”
The number of accounts suspended for violations related to child sexual exploitation was 456,989, down 6% on the previous report, with 96% of those accounts identified using technology.
The social media site said it had challenged more than 194 million accounts for “spammy behaviour and platform manipulation” in the second half of 2018. Around 75% of those accounts were subsequently removed after failing the challenge process, which involves proving an account is being run legitimately.
The company said it saw roughly the same number of government requests for account information as the previous report, with the UK submitting the third largest number of requests – 881.
The UK also topped the list for the first time for emergency disclosure requests, which are submitted when it is believed the person linked to an account is in danger of death or serious injury.
“Transparency is a key guiding principle in our mission to serve the public conversation. For the past seven years, our biannual Twitter Transparency Report has highlighted trends in requests made to Twitter from around the globe,” Ms Gadde said of the report.
“We believe it is vital that the public see the demands we receive, and how we work to strike a balance between respecting local law, letting the tweets flow, and protecting people from harm.
“We will continue to update our report with new data and evolve our commitment in this space, particularly around the Twitter Rules and our own enforcement.”