YouTube, the video streaming service owned by Google, has announced plans to hire more than 10,000 people in 2018 with the goal of “working to address content that might violate our policies.”
Once again, a cerebral prison encloses creative freedom and personal expression.
In a blog post released on Monday, YouTube CEO Susan Wojcicki stressed the need for more “human reviewers” at the company.
“I’ve seen how our open platform has been a force for creativity, learning, and access to information,” Wojcicki claims, ignoring the numerous incidents where conservatives have been demonetized or censored by having their videos placed on “restricted mode” lists. “I’ve seen how activists have used it to advocate for social change, mobilize protests, and document war crimes.”
“But I’ve also seen up-close that there can be another, more troubling, side of YouTube’s openness. I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” she adds. “In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats. We tightened our policies on what content can appear on our platform, or earn revenue for creators.”
Clearly, the “violent or extremist content” that gets flagged mostly encompasses conservative and libertarian voices, as Prager University can attest, having seen its videos demonetized and placed on the “restricted mode” list.
It isn’t just those on the right who are censored, however, as YouTube’s policies apparently deem satire and parody too extreme for the platform, especially when the parody is aimed at YouTube itself.
“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content,” Wojcicki wrote in her blog post. “Since June, our trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future.”
The number of videos reviewed since June is clearly not enough, as in late November the Times of London revealed that YouTube was allowing advertisements on videos containing sexualized imagery of children. The revelation prompted big brands such as Adidas, Deutsche Bank, Amazon, and Oreo maker Mondelez to pull their advertisements from the site.
Not only that, despite its claims of removing “over 150,000 videos for violent extremism,” YouTube refused to remove videos made by Anwar al-Awlaki, a senior al-Qaeda recruiter and Islamist hate preacher.
“We need an approach that does a better job determining which channels and videos should be eligible for advertising,” Wojcicki admitted.
Alas, this does not mean conservative voices will once again be allowed to make money from the video streaming service, as Wojcicki also admitted in her blog post that YouTube “has begun training machine-learning technology across other challenging content areas, including child safety and hate speech.”
“We understand that people want a clearer view of how we’re tackling problematic content,” the CEO adds. “Our Community Guidelines give users notice about what we do not allow on our platforms and we want to share more information about how these are enforced.”
Of course, linking to YouTube’s Community Guidelines doesn’t help, as the policy on “hateful content” is just as ill-defined as “hate speech.”
The community guideline on “Hateful Content” states:
Our products are platforms for free expression. But we don’t support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line.
Although this is incredibly opaque already, in the “Learn More” section, YouTube claims:
There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but if the primary purpose of the content is to incite hatred against a group of people solely based on their ethnicity, or if the content promotes violence based on any of these core attributes, like religion, it violates our policy.
This means that although it is “generally okay” to criticize a “nation-state,” sometimes it won’t be: if someone criticizes states that are primarily religious, for example Muslim-majority countries, that person could be committing “hate speech.”
Wojcicki also said the company would be taking “aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether.”
It seems that not even visitors of the site, let alone content creators themselves, are able to escape the ideological gulag.
Skeptical half-ginger Brit, former journalist at now defunct DANGEROUS. An advocate of free speech, always questioning the known.