Google pledges to clamp down on extremist YouTube videos and content in the aftermath of recent terror attacks
Following multiple terror attacks in recent months, Google has said it will scale up its efforts to reduce online extremism. It pledges to clamp down on extremist material by putting additional resources into identifying videos that spread hate on its YouTube platform.
There has been huge demand from advocacy groups and governments for media companies to do more to police their platforms.
In a blog post, Google said: “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.”
Google will tackle this type of content in several ways across its platforms.
First, it will increase its spending on AI (Artificial Intelligence) technology, which will help Google detect extremist content more accurately. At the same time, it will expand its use of third-party experts to flag and report videos containing terror-related material.
For videos and content that may not qualify as terrorist material but could heighten tensions between groups, Google will display warnings before users are able to view the content.
Google has lost millions in advertising revenue after ads from high-profile international companies and governments were shown alongside terrorist or extremist content. Johnson & Johnson, Verizon and the UK Government are among those that have suspended advertising with Google for the time being.
Google has also said it will instead promote videos and content that speak out against hate.
The company hopes that taking multiple actions together will have the greatest effect, and it plans to fine-tune its approach until it achieves the results it wants.