TikTok removed more than 580,000 videos in Kenya between July and September 2025 for violating its content rules, according to the company’s latest enforcement data.
The numbers show the scale of moderation on one of the country’s most widely used social platforms, at a time when questions around privacy, consent, and online safety are growing louder.
The disclosure comes days after online outrage in Kenya over a Russian content creator accused of secretly recording encounters with women and uploading clips to social media platforms, including TikTok and YouTube.
The case renewed debate about how fast platforms detect harmful material and whether moderation systems can keep pace with new forms of exploitative content.
TikTok said 99.7% of videos removed in Kenya were taken down before users reported them, while 94.6% were removed within 24 hours of posting.
The platform also said it interrupted about 90,000 live sessions in Kenya during the quarter for breaking content rules, representing around 1% of livestreams.
Globally, TikTok removed 204.5 million videos in the same period, about 0.7% of all uploads. The company said 99.3% were removed proactively and nearly 95% within a day. Automated systems accounted for 91% of those removals, according to the report.
Worldwide, the company also removed more than 118 million fake accounts and over 22 million accounts suspected to belong to users under 13.
The enforcement report lands as social media platforms face growing scrutiny over covert recording technology. In the Kenyan case that dominated local conversations, online users speculated that smart glasses may have been used to record women in public spaces without clear consent. However, no official confirmation has been provided.
Smart glasses, such as those made by Meta, can capture photos and video hands-free. Meta says its glasses include an LED light that signals when recording is active and that its policies prohibit harassment and privacy violations, though privacy advocates say awareness of these indicators remains low.
Mike Ololokwe, a Kenyan lawyer, told TechCabal that consent to interaction does not equal consent to filming or publication, a concern that has fuelled calls for stricter moderation and clearer enforcement standards across platforms.
“Digital platforms need to treat hidden recording as a serious rights violation and policy breach, because harm spreads long after posting,” Ololokwe said.
TikTok said its moderation combines automated tools and human reviewers to address harmful content, including harassment and misinformation. The company also said it has expanded its wellbeing features to help users, especially teenagers, manage screen time and digital habits.