YouTube has frequently come under scrutiny for the unclear and inconsistent policies it uses to police user uploads. While these guidelines can appear to be applied in a nonsensical manner, it is useful to break down each requirement and look at how the site’s moderators have applied it in the past.
Understanding the rules in this way will hopefully give the reader greater clarity about why a video was removed, and help ensure their future content stays within YouTube’s boundaries. Removal is the most extreme action the platform takes; more often, YouTube will demonetise or flag a video when a creator has breached certain terms. Nevertheless, the grounds for removal have expanded in recent years, with the platform deleting over 100,000 videos in 2019. All creators should therefore be aware of YouTube’s policies, and how they are applied, to avoid the risk of having a video removed.
Spam and Deceptive Practices
As creators have built careers on the platform, YouTube has tightened its regulations to ensure that its statistics reflect genuine human interactions. Under this policy, YouTube will remove content that it deems to promote deceptive practices.
For example, if a YouTuber participates in the promotion of a pay-per-view service, this could be judged the promotion of a deceptive practice, and therefore sufficient grounds for removing their video.
Another form of deceptive practice that YouTube will not condone is channel impersonation. If one channel intends to mislead viewers by impersonating another, YouTube will remove videos that mimic those of the original creator. However, this form of malpractice has a relatively high threshold compared with others, as YouTube recognises that creators draw inspiration from one another.
For example, a channel will be deemed an impersonation if its username and profile photo directly copy another’s, or if it posts videos intended to mimic another creator. It is therefore unlikely that YouTube would remove videos under this policy unless it is clear that their author is trying to trick the audience into believing they are somebody else.
Sensitive Content
Most of YouTube’s controversy stems from its management of sensitive content. With the platform failing to detect controversial material effectively, mainstream media has noted the emergence of uncensored sensitive content that could cause viewers and participants distress. For example, a 2017 video from Logan Paul showing the aftermath of an apparent suicide reached over 6.5 million views before the creator withdrew the footage.
As a result, YouTube closely monitors videos that may contain sensitive imagery. In its guidelines, YouTube lists a wide range of content in this category, including, but not limited to, videos that sexualise minors, videos promoting or engaging in cyberbullying and harassment, videos containing pornographic scenes, and content depicting suicide and self-harm.
However, the boundaries between these categories are often confusing: consider, for example, the difference between videos depicting harassment and videos showing mere pranks. The problem is compounded by YouTube’s use of machine-learning algorithms in place of human moderators. These algorithms are less precise, and the company itself has embraced their “lower levels of accuracy” as the trade-off for a safer platform. Because these broad categories are moderated by algorithms rather than humans, it may be necessary to appeal the decision if you believe your content has been mistakenly labelled as sensitive.
Hateful Content
Lastly, YouTube has recently expanded its policy on hateful content. Between June and September last year, the platform removed over 100,000 videos that allegedly contained hateful content, though it has not released a comprehensive analysis of what the category covers.
However, there has been a distinct increase in the removal of content propagating terrorist activities, advocating segregation or exclusion, or perpetuating hateful ideologies such as Nazism. YouTube attributes this increased moderation to its “responsibility to protect” its audience and to stop the platform “from being used to incite hatred, harassment and violence.”
In conclusion, YouTube’s video guidelines are complex and often inconsistently applied. Content creators whose videos are removed should re-evaluate whether they have fallen outside these guidelines, and appeal YouTube’s decision if they believe the algorithm has judged their content unfairly.