SAN FRANCISCO | The pandemic forced YouTube to send its content moderators home, and the platform's heightened reliance on artificial intelligence led it to remove nearly twice as many videos in the second quarter as in the first.
In a statement released Tuesday, the Google subsidiary announced that it had deleted 11.4 million videos between April and June 2020, compared with 6.1 million in the previous quarter.
The company explains that, at the start of the health crisis, it had to choose between a broader or a narrower enforcement of its rules.
Usually, algorithms flag problematic content, which human teams then evaluate. If a video is removed from the site, its author can appeal the decision.
Knowing that its moderators could not handle the same workload under the new conditions, YouTube chose to remove too much rather than too little.
The platform used "the automated system to cast a wider net so that most content potentially dangerous to the community is quickly removed, knowing that (…) some videos would be removed" wrongly, the company explains.
The largest share of removals was for child-protection reasons (almost 34%), followed by scams (28%) and nudity and pornography (15%).
YouTube says that, out of "an abundance of caution", it cast a particularly wide net over sensitive subjects such as child pornography and extremist violence. Removals on these grounds tripled.
Appeals from video creators doubled, but they still represent only 3% of deletions. In the second quarter, half of these appeals led to the content being reinstated, up from 25% between January and March.
To limit the impact on content creators, YouTube did not issue "warnings" to authors whose videos were removed automatically, without human review.
The number of removed channels (nearly 2 million) remained stable.
Like other social networks, though to a lesser extent than Facebook, the global platform is regularly accused of not doing enough to combat problematic or dangerous content.
All have deployed an arsenal of measures to fight disinformation about the novel coronavirus and the US presidential election.