China Publishes Draft of Regulations for Human-Like AI Tech

Insider Brief

  • The proposed rules target AI systems that simulate human personality and emotional engagement, citing risks including blurred human–machine boundaries, addiction, psychological manipulation, data misuse, and erosion of social trust, with strict content red lines tied to national security, misinformation, and ethical norms.
  • Providers would face full-lifecycle responsibility requirements, including mandatory AI identity disclosure, enhanced protections for minors and the elderly, tight controls on emotional and interaction data, and limits on using sensitive data for model training without explicit consent.
  • The framework combines tiered, risk-based supervision, security assessments, app-store enforcement, and newly introduced regulatory sandboxes, signaling Beijing’s intent to allow controlled experimentation while conditioning AI growth on demonstrable compliance and social responsibility.

China’s cyberspace regulator is moving to formalize oversight of human-like, emotionally interactive AI services, framing the effort as a push for “responsible innovation” that balances technological progress with social stability and individual rights, according to the draft published by the Cyberspace Administration of China.

The draft interim measures target what the Chinese government deems “anthropomorphic interactive services” — AI systems that simulate human personalities and emotional engagement — and identify risks such as blurred human–machine boundaries, user addiction, psychological manipulation, and erosion of social trust.

According to the proposed rules, the framework emphasizes lifecycle responsibility for providers, mandatory AI identity transparency, stricter protection of emotional and interaction data, industry self-regulation, and enhanced safeguards for vulnerable groups such as minors and the elderly.

For example, Article 6 “encourages providers to reasonably expand application scenarios, actively apply them in areas such as cultural dissemination and elderly companionship, and build an application ecosystem that conforms to the core socialist values, provided that safety and reliability are fully demonstrated.”

Prohibited Activities for Anthropomorphic AI Interactive Services include:

  • Content that threatens national security, national unity, or public order, or that spreads rumors or promotes illegal religious activity.
  • Obscene, gambling-related, violent, or crime-inciting content.
  • Defamatory or insulting content that infringes on individual or organizational rights.
  • False promises or deceptive interactions that distort user behavior or harm social relationships.
  • Content that encourages, glorifies, or implies suicide or self-harm, or that causes psychological harm through verbal abuse or emotional manipulation.
  • Algorithmic manipulation, misleading information, or emotional “traps” that push users toward irrational or harmful decisions.
  • Inducing, extracting, or exploiting classified or sensitive information.
  • Any other conduct that violates applicable laws, administrative regulations, or national rules.

The draft imposes strict data controls, limiting the use of interaction data and sensitive personal information for model training without explicit consent, mandating encryption and deletion options, and tightening safeguards around emotional and behavioral data. Providers crossing user thresholds or launching new anthropomorphic functions would need to conduct formal security assessments and file reports with regulators, while app stores would be tasked with enforcing compliance through listing reviews and removals, according to the draft.

Regulators are proposing tiered, risk-based supervision combined with full-chain governance spanning model design, training data, deployment, and ongoing operations, aiming to prevent dependency and misuse before products reach scale. The policy also reinforces content red lines tied to national security, misinformation, and ethical norms, while encouraging industry self-regulation alongside government oversight.

Notably, the measures introduce regulatory sandboxes at the departmental level, allowing controlled experimentation under supervision to preserve innovation momentum while managing risk, according to Lin Wei, President of Southwest University of Political Science and Law, Vice President of the China Law Society, in an official interpretation of the draft rules. The approach reflects Beijing’s broader strategy to shape consumer AI deployment proactively, positioning emotional and anthropomorphic AI as a governance priority rather than a purely technical issue, and signaling that future AI growth in China will be conditioned on demonstrable social responsibility and compliance.

“By positively incentivizing enterprises to direct their technological capabilities towards applications that truly benefit society, while by setting bottom-line constraints to prevent technological capabilities from being used to manipulate user psychology and exploit vulnerabilities for profit, the Measures leave sufficient room for industrial development while effectively preventing the risk of technological alienation,” wrote Zhang Zhen, Deputy Director and Senior Engineer, National Internet Emergency Center, in another official interpretation of the proposed rules.

Li Qiangzhi, deputy director of the Policy and Economics Research Institute at the China Academy of Information and Communications Technology, wrote in an official interpretation published by the state that outside China, regulators and courts are also tightening oversight of anthropomorphic and emotionally interactive AI as evidence of real-world harm mounts. He pointed to cases in the U.S. and Europe showing how AI companions can create risks around emotional manipulation, data misuse, and user safety.

In the U.S., platforms such as Character.AI have faced lawsuits alleging harmful psychological effects on teenagers, while the Federal Trade Commission has opened investigations into emotional companionship services, he noted. In Europe, regulators have fined companies such as Replika and ordered corrective measures, and the EU’s AI Act places stricter obligations on systems designed for emotional interaction. Together, Li said, these developments signal a global shift toward treating human-like AI as a high-risk category subject to closer regulation rather than a purely experimental consumer technology.

Greg Bock

Greg Bock is an award-winning investigative journalist with more than 25 years of experience in print, digital, and broadcast news. His reporting has spanned crime, politics, business, and technology, earning multiple Keystone Awards and Pennsylvania Association of Broadcasters honors. Through the Associated Press and Nexstar Media Group, his coverage has reached audiences across the United States.
