YouTube is rapidly rehiring human moderators, the same ones the company decided to send on "holiday" during quarantine. The reason: its AI filters were banning too many videos without cause.

As the pandemic escalated, YouTube decided in March to hand more of the work to machine-learning systems to deal with content that violated YouTube's rules, such as misinformation. YouTube told the Financial Times that AI moderation had lost sight of its real purpose and had started removing videos that contained no policy violations.

Image credit: support.google.com

James Vincent lays out the numbers for The Verge, noting that YouTube's AI guards deleted around 11 million videos between April and June. Users who disagreed with the machine's decisions filed appeals, and as a result, almost half of the 320,000 appealed videos were reinstated.

Vincent also quotes Neal Mohan, YouTube's chief product officer: "One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn't be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down."

The motivation for the stricter measures was significant: giant social platforms like Twitter, Facebook, and YouTube are grappling with a growing volume of hateful and violent content on their sites. These companies share the same belief: algorithmic and automated filters are a way to fight the spread of violence and other harmful content.

Experts familiar with AI algorithms have raised concerns about such measures. Without human judgment in the loop, content filtering cannot be done reliably, because humans still play the role of the final judge.

Even on content where the machine seems unlikely to misjudge, it fails. In May, YouTube had an awkward scandal in which a machine-learning algorithm was automatically deleting comments containing a phrase that criticized the Chinese Communist Party.
