ByteDance, the parent company of TikTok, has reportedly laid off hundreds of human content moderators globally as it shifts toward an AI-driven content moderation model.
According to Reuters, most of the approximately 500 positions eliminated were based in Malaysia. ByteDance employs over 110,000 people in total. A spokesperson for TikTok stated, “We’re making these changes as part of our ongoing efforts to enhance our global operating model for content moderation.”
The company currently uses a combination of human and AI moderators, with machines handling about 80% of the workload. ByteDance intends to invest around $2 billion in its trust and safety initiatives in 2024. The layoffs come as the company faces heightened regulatory scrutiny, particularly amid a rise in harmful content and misinformation on social media this year.
On a related note, Adam Mosseri, head of Instagram, revealed that recent issues involving locked accounts, down-ranked posts, and posts mislabeled as spam were caused by errors from human moderators rather than the AI system. He acknowledged that the moderators were "making decisions without the necessary context," which led to mistakes.
However, Mosseri clarified that not all fault lay with the human team. He admitted that “one of the tools we developed malfunctioned,” which left moderators lacking important context.
Over the past few days, users on Instagram and Threads reported that their accounts had been locked or disabled for alleged violations of the platforms' age restrictions, which bar children under 13 from creating accounts. Even after providing age verification, many found their accounts remained inaccessible, as noted by The Verge.
Interestingly, the company's public relations team took a different stance from Mosseri, telling TechCrunch that "not all user issues were tied to human moderators." They added that the investigation into the age verification problems is still underway.