YouTube has been under scrutiny for the past few months for hosting offensive and extremist content on its platform, and Google has turned to artificial intelligence to monitor that content.
In a blog post published on Tuesday, Google announced that its use of AI to moderate videos has resulted in the identification of more than 75 percent of the offensive content on the site.
Google said: “With over 400 hours of content uploaded to YouTube every minute, finding and taking action on violent extremist content poses a significant challenge. But over the past month, our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism, as well as the rate at which we’ve taken this kind of content down.”
Moreover, the tech giant also said that its automated systems are more accurate than its human moderators.
The firm wrote: “While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.”
Google recently began developing and implementing cutting-edge machine learning technology designed to identify and remove violent extremism and terrorism-related content from YouTube.
In its blog post, the company said: “The accuracy of our systems has improved dramatically due to our machine learning technology.”
Google to increase workforce
However, even as AI at Google surpasses its human workforce, the company reassured the public that it is actually hiring more people. “We are also hiring more people to help review and enforce our policies, and will continue to invest in technical resources to keep pace with these issues and address them responsibly,” the blog said.
As fears loom large over whether jobs will be lost to machines, Google added that the use of automated systems to moderate video content should come as good news to brands such as Marks & Spencer and McDonald's, which pulled their adverts in March over Google's failure to remove extremist content.