Every month, more than 1.9 billion users log in to YouTube.
Every day, users watch over 1 billion hours of video on YouTube.
Every minute, users upload an average of 300 hours of video.
All of those users, activities, and uploads generate an enormous amount of data, which YouTube puts to work with AI.
This article digs into YouTube's approach to AI and machine learning. From content filtering and video recommendation to video effects and depth prediction, YouTube's AI algorithms are everywhere.

Automatically removing harmful content
In the first quarter of 2019 alone, 8.3 million videos were removed from YouTube, and 76% of them were flagged automatically by AI classifiers. More than 70% of those were identified before receiving a single view. Although these algorithms are not foolproof, they sort through content far faster than human moderators could.
In some cases, however, the algorithms misclassify newsworthy videos as "violent extremism," which is one reason Google hired full-time human experts to work alongside AI on violent content.
YouTube's "top priority" is to keep users away from harmful content. To achieve this goal, YouTube has invested in both human experts and machine learning technology.
AI has greatly improved YouTube's ability to identify harmful content quickly. Before AI, only 8% of videos containing "violent extremism" were flagged and deleted before reaching ten views; with machine learning, that figure rose above 50%.
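The flagging pipeline described above can be sketched as a triage step: a trained classifier scores each upload, high-scoring videos are removed before they accumulate views, and borderline cases go to human reviewers. The classifier below is a toy keyword heuristic standing in for YouTube's actual model; all names and thresholds are illustrative.

```python
def classifier_score(metadata: str) -> float:
    """Toy stand-in for a trained content classifier (0.0 = benign, 1.0 = violating)."""
    flagged_terms = {"extremist", "violence", "terror"}
    hits = sum(1 for w in metadata.lower().split() if w in flagged_terms)
    return min(1.0, hits / 2)

def triage(videos, threshold=0.75):
    """Split uploads into auto-removed and needs-human-review buckets."""
    auto_removed, needs_review = [], []
    for vid, metadata in videos:
        score = classifier_score(metadata)
        if score >= threshold:
            auto_removed.append(vid)      # removed before any views
        elif score > 0.0:
            needs_review.append(vid)      # routed to human experts
    return auto_removed, needs_review

uploads = [
    ("v1", "cat compilation"),
    ("v2", "extremist violence recruitment"),
    ("v3", "news report on terror attack"),  # newsworthy but keyword-flagged
]
removed, review = triage(uploads)
```

Note how the newsworthy video lands in the human-review bucket rather than being auto-removed, which is exactly the failure mode the human experts are there to catch.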
YouTube pays so much attention to removing this content because of pressure from brands, institutions, and governments, and because of the backlash that follows when advertisements appear alongside illegal videos. When ads start showing up next to videos promoting racism or terrorism, advertisers may start pulling their budgets. So YouTube deployed advanced machine learning and partnered with third-party companies to improve brand safety for advertisers.

YouTube also runs a "junk video classifier" that scans the YouTube homepage and the "watch next" panel. It leans on feedback from the audience: viewers can report misleading titles and inappropriate or otherwise undesirable content.
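One simple way such viewer feedback could feed a junk filter is to aggregate reports into a per-video report rate and drop high-rate videos from homepage candidates. This is a hypothetical sketch; the field names and threshold are invented, not YouTube's.

```python
def report_rate(reports: int, impressions: int) -> float:
    """Fraction of impressions that led to a viewer report."""
    return reports / impressions if impressions else 0.0

def filter_homepage(candidates, max_rate=0.01):
    """Keep only videos whose report rate stays under the threshold."""
    return [v["id"] for v in candidates
            if report_rate(v["reports"], v["impressions"]) <= max_rate]

candidates = [
    {"id": "a", "reports": 2,   "impressions": 10_000},  # rate 0.0002 -> kept
    {"id": "b", "reports": 500, "impressions": 20_000},  # rate 0.025  -> filtered
]
kept = filter_homepage(candidates)
```

In practice a classifier would combine many such signals, but a report rate normalized by impressions already avoids punishing popular videos merely for receiving more total reports.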
Adding special effects to video
Google AI researchers trained a neural network to swap out a video's background in real time, without specialized equipment such as a green screen.
Previously, replacing a video background was a complex and time-consuming process. The researchers trained the network on precisely labeled images so it could learn the relevant patterns, yielding a segmentation system fast enough to render effects in real time.
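The compositing step behind this kind of background replacement is straightforward once a segmentation network has produced a per-pixel foreground mask. In the sketch below, the "network output" is a hand-made mask; only the blending math is shown.

```python
import numpy as np

def replace_background(frame, mask, background):
    """Alpha-blend: keep the foreground where mask is ~1, new background elsewhere."""
    mask = mask[..., np.newaxis]  # broadcast the mask over the RGB channels
    return (mask * frame + (1.0 - mask) * background).astype(frame.dtype)

h, w = 4, 4
frame = np.full((h, w, 3), 200, dtype=np.uint8)   # pretend these are "person" pixels
background = np.zeros((h, w, 3), dtype=np.uint8)  # the new backdrop
mask = np.zeros((h, w), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # pretend the network found a person in this region

out = replace_background(frame, mask, background)
```

The hard part, and the researchers' actual contribution, is producing that mask accurately at video frame rates; the blend itself is one vectorized line per frame.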
"Next Video" feature
If you have used YouTube's "next video" feature, you have already seen the platform's AI at work. Because users upload videos constantly, YouTube's dataset never stops changing, so the AI powering its recommendation engine has different requirements from Netflix's or Spotify's: it must make real-time recommendations while users continuously add new data.
YouTube's solution is a two-part system: the first part generates candidate videos by evaluating the user's viewing history, and the second part is a ranking system that scores each candidate.
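The two stages above can be sketched as follows: candidate generation retrieves a short list from the full catalog by embedding similarity to the user's watch history, and the ranker re-scores just that short list with richer per-video features. Everything here (random embeddings, a single "freshness" ranking feature) is fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
video_embeddings = rng.normal(size=(1000, 32))  # catalog of 1,000 videos

def candidate_generation(history_ids, k=50):
    """Stage 1: average watched-video embeddings, retrieve top-k by dot product."""
    user_vec = video_embeddings[history_ids].mean(axis=0)
    scores = video_embeddings @ user_vec
    return np.argsort(scores)[::-1][:k]

def rank(candidates, freshness):
    """Stage 2: order the short list by a per-video feature (here, just freshness)."""
    return sorted(candidates, key=lambda v: freshness[v], reverse=True)

history = [3, 17, 42]                 # videos this user already watched
freshness = rng.random(1000)          # stand-in ranking feature
candidates = candidate_generation(history)
top = rank(candidates, freshness)[:10]
```

Splitting retrieval from ranking is what makes real-time serving feasible: the cheap similarity search narrows a huge, constantly changing catalog to a few dozen items, and the expensive scoring model only ever touches that short list.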
Guillaume Chaslot, a former Google employee, founded an initiative to promote transparency called AlgoTransparency.
He says the metric the YouTube algorithm uses to judge a successful recommendation is watch time. That is good for the platform and for advertisers, but not necessarily for users. It can amplify bizarre videos: the more time people spend watching one, the more it gets recommended.
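The incentive Chaslot describes is easy to see in a toy calculation: if the ranking objective is expected watch time (click probability times expected minutes watched), a long, gripping video can beat a more clickable short one. The numbers below are invented purely to illustrate the trade-off.

```python
videos = {
    "short_news_clip":  {"p_click": 0.30, "avg_minutes": 2.0},
    "sensational_long": {"p_click": 0.20, "avg_minutes": 25.0},
}

def expected_watch_time(v):
    """Objective: probability of a click times expected minutes watched."""
    return v["p_click"] * v["avg_minutes"]

best = max(videos, key=lambda name: expected_watch_time(videos[name]))
```

Here the long video wins (0.20 × 25 = 5.0 expected minutes versus 0.30 × 2 = 0.6) despite being clicked less often, which is precisely the amplification effect critics point to.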
Depth prediction training
YouTube's vast trove of data provides fertile training ground for AI algorithms. Google AI researchers built a model that infers depth in video, trained on roughly 2,000 "Mannequin Challenge" videos posted on the platform.
In a Mannequin Challenge video, a group of people holds perfectly still, as if frozen, while the camera moves through the scene. Ultimately, this depth-prediction technology is expected to help advance augmented reality (AR) experiences.
Preventing violence
As the crisis of mass shootings continues to plague the United States, President Trump asked social media companies to "develop tools that can detect mass shooters before they strike." YouTube, Twitter, and Facebook had already begun removing terrorist content with the help of AI, and the president has now asked them to cooperate with the Department of Justice and law enforcement agencies.
However, building such a partnership raises many problems. It is far from clear whether social media companies can identify real terrorists before they act, or whether doing so will infringe on the civil liberties of innocent people.

It remains to be seen whether YouTube and other social media companies can use AI to stop terrorists without violating the rights of others.
This article was republished from the Machine Learning Research Society's WeChat subscription account.