Meta has announced the rollout of advanced artificial intelligence systems to handle content enforcement across its platforms, as the company reduces reliance on third-party moderation vendors. The initiative focuses on detecting and removing harmful content, including material related to terrorism, child exploitation, fraud, scams, and illicit activities.
The company indicated that these AI systems will be expanded once they consistently outperform existing moderation processes. In early testing, the systems detected harmful content at higher rates and made significantly fewer errors than current processes, while also improving the identification of impersonation accounts and scam activity.
Meta stated that human reviewers will remain involved in high-risk and complex decisions, while AI systems take on large-scale, repetitive moderation tasks. The company also confirmed the launch of a Meta AI support assistant to provide continuous user assistance across its platforms.