There are many advantages to machine learning, particularly for the live streamed video world. Thanks to automation and new technology, video sites are not only safer for all audiences but also better at helping users discover content they would never have found on their own.
But what happens when the machine takes things too far? One community is accusing YouTube of selective bias through machine learning, forcing the platform to reconsider how it applies its algorithms to new content. Where YouTube goes next will likely shape how machine learning and live video sharing develop from here.
What Is Machine Learning?
Wikipedia defines machine learning as:
"a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to ‘learn’"
In other words, by programming a computer to recognize patterns and empowering it to improve over time, a program can “learn” to perform tasks otherwise handled by humans.
Applied to video streaming websites, machine learning has no shortage of uses. Machines can help categorize content based on keywords and subject matter, or improve stream reliability by analyzing which parts of a video people watch and why. The world’s biggest video platforms today rely on machine learning for everything from categorizing and restricting videos by viewer age to predicting which videos you want to see next.
For example: if a viewer watches three cat videos in a row, a machine learning algorithm will recommend more animal videos based on similar keywords and content. If that viewer switches to music videos from their favorite band, machine learning may suggest another video from that band, along with those from similar artists.
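To make that intuition concrete, here is a minimal sketch of keyword-based recommendation in Python. The catalog, keywords, and scoring rule are all invented for illustration and are not any platform’s real system; real recommenders are far more sophisticated, but the basic idea of matching a viewer’s history against content metadata is the same.

```python
# Toy keyword-overlap recommender (illustrative only; the catalog and scoring
# are invented for this example and are not any platform's real system).
from collections import Counter

# Hypothetical catalog: video title -> keywords describing its content
CATALOG = {
    "Funny cat compilation":      {"cat", "animal", "funny"},
    "Dog learns to skateboard":   {"dog", "animal", "funny"},
    "Band X live in Berlin":      {"band x", "music", "live"},
    "Band X acoustic session":    {"band x", "music", "acoustic"},
    "Cooking pasta from scratch": {"cooking", "food"},
}

def recommend(watch_history, top_n=2):
    """Score every unwatched video by how many keywords it shares
    with the viewer's recent watch history, weighted by frequency."""
    watched_keywords = Counter()
    for title in watch_history:
        watched_keywords.update(CATALOG.get(title, set()))

    scores = {
        title: sum(watched_keywords[kw] for kw in keywords)
        for title, keywords in CATALOG.items()
        if title not in watch_history
    }
    # Highest-scoring videos first
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A viewer who just watched a cat video gets other animal videos first.
print(recommend(["Funny cat compilation"]))
# A viewer who switched to Band X gets more Band X and music content.
print(recommend(["Band X live in Berlin"]))
```

Even this toy version shows why keywords matter so much: whatever tags describe a video end up steering who sees it next.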
Which Video Platforms Use Machine Learning?
Different platforms apply machine learning to streaming in different ways, but they all share one goal: making videos more accessible and keeping viewers on their platforms longer.
Video Streaming Industry: No viewer wants to wait for their video to buffer, especially during a live stream. Researchers at MIT developed a machine learning tool called Pensieve, which helps a site choose the best way to deliver each piece of video. The goal is a steady stream without dropped frames or buffering pauses.
YouTube: YouTube relies on machine learning to automate a number of processes, from pairing advertisers with channels to determining what content is appropriate for certain ages. Most importantly, YouTube uses a machine learning algorithm to flag violent or extreme content for human review before it is available for public consumption (a simplified sketch of this kind of flagging appears after this list).
Twitch: Because much of Twitch’s content is live, its approach to machine learning is different. Models measure signals such as custom emote usage to gauge how engaged viewers are with a content creator and whatever they are streaming.
Facebook Video: Much like YouTube, Facebook applies machine learning across its products – including video – to improve the user experience. This ranges from surfacing similar content from friends and creators to removing spam videos and content from the site. The platform also “reads” videos and pictures to blind users and “translates” up to 2 billion stories daily.
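The “flag for human review” pattern mentioned for YouTube above can be sketched in a few lines. The scoring function, keyword heuristic, and threshold below are made-up placeholders standing in for a trained model; they illustrate the general triage idea, not YouTube’s actual pipeline.

```python
# Simplified "flag for human review" triage (illustrative only; score_content()
# and REVIEW_THRESHOLD are made-up stand-ins for a trained moderation model).
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    description: str

REVIEW_THRESHOLD = 0.7  # hypothetical score above which a human must review

def score_content(video: Video) -> float:
    """Stand-in for a classifier that estimates how likely the video is to
    violate policy (0.0 = clearly fine, 1.0 = clearly violating)."""
    risky_terms = {"violence", "graphic", "gore"}  # toy keyword heuristic
    text = f"{video.title} {video.description}".lower()
    hits = sum(term in text for term in risky_terms)
    return min(1.0, 0.5 * hits)

def triage(video: Video) -> str:
    """Route high-scoring uploads to a human before they go public."""
    return "hold for human review" if score_content(video) >= REVIEW_THRESHOLD else "publish"

print(triage(Video("Graphic war footage", "warning: scenes of violence")))  # held for review
print(triage(Video("Cute kittens", "a relaxing cat video")))                # published
```

The controversy below is essentially about what happens when a model like this learns the wrong signals, which is exactly why the human-review step matters.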
Controversy Surrounding the YouTube Algorithm
Although machine learning sounds like the future of content categorization, it is far from perfect. Some content creators found this out when their videos were regularly flagged by YouTube’s algorithm for no apparent reason.
Content creators building videos for the LGBTQ community claim the platform inadvertently discriminated against them because of their content. Speaking to The Verge, regular LGBTQ creators said their channels were often targeted for additional scrutiny. This included age restrictions, channel restrictions, advertising that didn’t align with LGBTQ interests, and even demonetization. According to those affected, the harshest sanctions came when videos included keywords such as “gay” or “trans”.
At the end of June 2018 – coinciding with the end of Pride Month – YouTube admitted in a four-part Twitter thread that its machine learning had allowed “inappropriate ads”, while producers had expressed “concerns with how we’re enforcing our monetization policy. We’re sorry and we want to do better.”
Unfortunately, the mega-platform was short on answers about how to correct the problem. Instead, it reassured creators:
“It’s critical to us that the LGBTQ community feels safe, welcome, equal and supported on YouTube. Your work is incredibly powerful and we are committed to working with you to get this right.”
How Are Live Video Creators Impacted by Machine Learning?
While researchers refine machine learning and adapt it to the nuances of human behavior, creators must stay vigilant about how their content is categorized. When building any video project – especially one with sensitive subject matter – it’s important to consider video SEO, advertising options, and suggested videos.
Video SEO: Video SEO is an important tool for getting found online; however, it can also work against content creators whose keyword set gets flagged as sensitive. When managing video SEO, consider all of the options for discoverability and determine which combination works best (a simple illustration of how a blunt keyword filter can misfire follows this list).
Advertising & Monetization: Machine learning matches advertisers with interpreted content. This can sometimes result in inappropriate advertising on monetized content. By monitoring ads displayed on channels, creators can make sure their audience lines up with every video they see.
Suggested Videos: In most cases, suggested videos help a channel gain new followers or encourage more views, but the feature can also surface inappropriate content that has nothing to do with the channel. To avoid sensitive content altogether, it may be prudent to disable suggested videos; doing so can hurt visibility, but it also ensures viewers aren’t subjected to irrelevant content.
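As a sketch of how a blunt keyword filter can misfire, the example below checks a video’s tags against a naive blocklist. The blocklist and tags are invented for this illustration and are not YouTube’s actual rules; the point is simply that identity terms like those the creators above reported on can get benign videos swept up.

```python
# Illustrative only: a crude keyword blocklist sweeping up benign content.
# The blocklist and tags below are invented for this example, not YouTube's rules.
NAIVE_SENSITIVE_TERMS = {"gay", "trans", "graphic"}  # hypothetical blunt filter

def tags_at_risk(tags):
    """Return the tags a naive filter would flag, so a creator can anticipate
    miscategorization and be ready to request a manual review."""
    return sorted(tag for tag in tags if tag.lower() in NAIVE_SENSITIVE_TERMS)

coming_out_vlog = ["gay", "coming out", "vlog", "family"]
print(tags_at_risk(coming_out_vlog))  # ['gay'] -> a benign video, flagged anyway
```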
When machine learning goes awry, video creators should always be prepared to request a manual review. YouTube allows users to submit appeals over demonetization, ads that don’t fit the channel’s content, or problematic suggested videos. By pushing back when machine learning gets it wrong, creators help the algorithm get better – which creates a safer and better platform for everyone.