Can Facebook, YouTube, and the rest settle for AI moderation?

As COVID-19 pushes companies toward remote work, major social platforms face a test of their own: they have had little choice but to lean on AI moderation.

Facebook, YouTube, and Twitter recently decided to hand much of the work their human content moderators had been doing over to automated AI systems.

Content moderators cannot work remotely because of the nature of the job itself. Their primary role is to filter out hate speech, child sexual abuse material, and terrorism-related content on the platforms.

The problem is that much of the material they must review is deeply disturbing: sexual assault, torture, beheadings, and suicide are routine. Doing that work from home would risk exposing their families to it.

In fact, content moderators who worked for Facebook filed a class action against the company, claiming they developed post-traumatic stress disorder from reviewing such content.

■ The machines still get a lot wrong

According to Protocol, the accuracy of automated moderation systems has improved considerably as machine learning technology has evolved. On Facebook, more than 98% of terrorist propaganda content was taken down automatically before any user reported it.

By contrast, the automatic removal rate for hate speech hovered around 80%.

Why the gap? According to Protocol, the latter "needs to be interpreted according to point of view."

AI can easily filter out content that is obviously bad no matter who sees it. But when someone posts material to protest or to raise awareness, the right handling becomes subtle: the context must be read correctly.

Block such content while ignoring that context, and the platform invites charges of over-censorship. Some of the incidents that occurred after Facebook sent its content moderators home are typical examples.
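Protocol does not describe the platforms' internal pipelines, but the distinction the article draws maps naturally onto a confidence-threshold routing scheme: clear-cut categories can be removed automatically, while viewpoint-dependent ones are escalated to a person. Below is a minimal sketch in Python; every category name, function, and threshold is a hypothetical illustration, not any platform's actual system.

```python
# A minimal, hypothetical sketch of confidence-threshold routing in an
# automated moderation pipeline. Categories, names, and thresholds are
# illustrative assumptions, not any platform's actual system.
from dataclasses import dataclass

# Violations that are clear-cut no matter who sees them can be removed
# automatically; viewpoint-dependent ones usually need a human reader.
CLEAR_CUT = {"terrorist_propaganda", "child_sexual_abuse"}
CONTEXT_DEPENDENT = {"hate_speech"}

@dataclass
class Decision:
    action: str  # "remove", "human_review", or "allow"
    reason: str

def route(category: str, confidence: float) -> Decision:
    """Route flagged content given a classifier confidence in [0, 1]."""
    if category in CLEAR_CUT and confidence >= 0.98:
        # Obviously bad content: remove before anyone reports it.
        return Decision("remove", f"auto: {category} @ {confidence:.2f}")
    if category in CONTEXT_DEPENDENT:
        if confidence >= 0.95:
            # Remove automatically only when the model is nearly certain.
            return Decision("remove", f"auto: {category} @ {confidence:.2f}")
        if confidence >= 0.50:
            # Might be protest or awareness-raising; a person must read
            # the context to avoid over-censorship.
            return Decision("human_review", f"context needed: {category}")
    return Decision("allow", "below action thresholds")

if __name__ == "__main__":
    print(route("terrorist_propaganda", 0.99))  # -> remove
    print(route("hate_speech", 0.70))           # -> human_review
```

With moderators sent home, the human-review branch of a scheme like this loses its capacity, which is exactly where the over-censorship risk concentrates.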

■ "The COVID-19 emergency could trigger AI moderation innovation"

Against this backdrop, YouTube decided that for the duration of the COVID-19 period it would not impose sanctions except in cases of clear violations, and that it would pay special attention to live-streamed video.

At the same time, YouTube stopped permanently suspending accounts based solely on the judgment of its automated review system.

Twitter likewise added COVID-19 content to its "harmful" content category. In particular, content that directly contradicts the guidance of health authorities and related organizations around the world is treated as harmful.

Like YouTube, Twitter decided not to permanently suspend accounts based on the judgment of its automated review system alone.
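The shared rule both companies adopted, that automated judgments alone never escalate to permanent suspension, can be pictured as a small gate in the enforcement logic. The sketch below is purely illustrative; the function, names, and strike rules are assumptions, not either company's actual enforcement code.

```python
# A purely illustrative sketch of the shared rule: automated judgments
# alone never escalate to permanent suspension. Names and strike rules
# are assumptions, not either company's actual enforcement code.
from enum import Enum, auto

class Source(Enum):
    AUTOMATED = auto()  # flagged by the automatic evaluation system
    HUMAN = auto()      # confirmed by a human reviewer

def account_action(strikes: int, source: Source) -> str:
    """Pick an account-level action for a flagged policy violation."""
    if source is Source.AUTOMATED:
        # Automated calls can misread context, so they cap out at
        # reversible measures that a person can re-review on appeal.
        return "temporary_limit" if strikes >= 3 else "warning"
    # Only human-confirmed violations may end in a permanent ban.
    return "permanent_suspension" if strikes >= 3 else "strike"

# The same strike count hits different ceilings depending on the source:
assert account_action(5, Source.AUTOMATED) == "temporary_limit"
assert account_action(5, Source.HUMAN) == "permanent_suspension"
```

Capping automated decisions at reversible actions keeps the known weakness of the systems, misreading context, from producing irreversible harm to accounts.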

Despite these limitations, Protocol predicted that the COVID-19 crisis could also be a trigger for AI innovation, much as medicine and manufacturing advanced dramatically after World War II.

In the short term, the AI-based moderation systems rolled out at full scale because of COVID-19 are likely to reveal a variety of limitations. But because an emergency like this provides so many learning opportunities for AI platforms, Protocol's view is that it could become another engine of innovation in the long run.