Chinese-owned social media platform TikTok has outlined its new content moderation process for the US and Canada. According to the company, it will now use an automated system to identify and remove "violative content" in the two biggest North American markets. It claims the automated system has been tested in other markets and has a 95 percent accuracy rate.
TikTok's Head of US Safety, Eric Han, explains that under the existing system, content moderation rules are enforced by a US-based team: anything flagged by the community is reviewed by a human before further action is taken. While this process is effective, it is quite time-consuming. To make it more efficient, ByteDance's subsidiary plans to give its algorithms the authority to delete content immediately after it is uploaded. The system is expected to go live in the next few weeks.
According to the company, this machine-based deletion will be reserved for categories where the system has the highest degree of accuracy. These include "minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities & regulated goods".
For areas where context and nuance matter, ByteDance hopes to further improve its technology for better judgment. Until then, TikTokers will have to rely on the existing appeal mechanism to request human intervention. Apart from efficiency, the company hopes the move will spare its safety team from having to watch distressing TikTok videos.