When looking into the hate speech posted on YouTube, it doesn’t take long to stumble upon the larger topic of extremism in general and how the YouTube algorithm factors into the revenue of the people who post it. Initially, all content posted to YouTube is treated equally. Channels with more subscribers (people who have signed up to be notified when a channel updates) receive a bigger influx of initial viewers, which can boost them in the search results, but YouTube’s “filtering system” doesn’t treat a video with a million views differently than one with just fifty. This is in large part due to YouTube’s viewer-powered alert system.
When watching a video that seems to include objectionable content or material that violates YouTube’s guidelines, any concerned viewer can “flag” the video by finding the flag icon under it, clicking on it, and describing what they are reporting the video for. The amount of time it takes YouTube to act on a complaint varies, but after a certain number of strikes a channel can be suspended or even deleted.
This sounds like a good idea in theory, but what happens when viewers stumble across a video that breaks these guidelines and don’t find the content objectionable? Dangerous misinformation gets spread, that’s what.
Since YouTube boosts a video based on views, it’s very common for a video to break the guidelines and for the YouTube algorithm to accidentally push it to the front page anyway because it has a lot of views. Genuinely objectionable, guideline-breaking content tends to be either extremely entertaining or scandalous to the populace at large, which means more people will watch. In the wake of a disaster or a piece of news, a content creator who makes this kind of video might cover the news through the lens of their own bias and poorly researched facts, then get boosted above videos from more reliable sources because their video attracts a fanatical crowd or a group of bemused onlookers.
False information spreads like wildfire. People already have a tendency to give a false impression or an inadequate summary of news they’ve heard. Add a source that isn’t reliable in the first place, and you have a riled audience regurgitating what they’ve heard to others. It doesn’t take long for the outrage to spread, and many of these instances have led to new outraged communities forming.
What’s worse is that the content creator who originally started the panic usually does not care. They’re either too deep in their own convictions to consider their content inaccurate, want panic and outrage to spread, or are doing everything for laughs. In all three cases, the content creator wants their message spread to as many people as possible. Having people repeat what they’ve heard to others is great. Having people form groups is better, especially for their bottom line.
YouTube videos carrying themes of extremism or hate can be monetized just like any other kind of video on YouTube. The more views a video gets, the more often the ads assigned to it run, and the bigger the slice of the pie the content creator receives. The more money they receive, the more videos they can pump out, putting the money toward equipment and living costs. These are the kinds of problems YouTube was trying to tackle when it updated its algorithm, but the effort fell through in more ways than roping in innocent creators and failing to take down extremists, who immediately began working around the algorithm. In trying to appear neutral toward sources few have been afraid to call out in the past, YouTube has mounted an ineffective rebuttal and created a toxic culture of neutrality.