
On December 27, 2025, the Cyberspace Administration of China (CAC) released a draft regulation, the Interim Measures for the Administration of AI Anthropomorphic Interactive Services, for public comment. The consultation period runs through January 25, 2026.
The draft targets AI services available to users in China that engage people in sustained, human-like interactions. It applies to systems designed to emulate human communication styles or behavior through spoken or written language, visuals, sound, video, or other formats.
Under the draft, developers and operators would be responsible for establishing internal systems to manage both technical safety and the psychological and social effects of these AI services. Providers are expected to put in place processes that oversee algorithms, secure data and personal information, and provide emergency responses when interaction risks arise.
One key requirement is that users be clearly informed they are interacting with an artificial intelligence system rather than a real person. Notifications must be noticeable and presented at first contact, when a user logs in again after a break, and whenever patterns of heavy use are detected. Systems must also prompt users to consider taking a break if they interact without interruption for more than two hours.
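The draft does not prescribe how these reminders should be implemented. Purely as an illustration, the disclosure and break-prompt triggers could be tracked per session along the following lines; the two-hour threshold comes from the draft, while the session-gap value and all names are assumptions:

```python
import time
from dataclasses import dataclass, field

CONTINUOUS_USE_LIMIT_S = 2 * 60 * 60   # two-hour continuous-use rule in the draft
SESSION_GAP_S = 30 * 60                # assumed gap that counts as returning after a break

@dataclass
class InteractionSession:
    first_contact_done: bool = False
    session_start: float = field(default_factory=time.time)
    last_activity: float = field(default_factory=time.time)
    break_prompted: bool = False

    def notices_for_turn(self, now: float | None = None) -> list[str]:
        """Return the notices that should accompany this interaction turn."""
        now = now if now is not None else time.time()
        notices = []

        # Disclose the AI identity at first contact and again after a long gap.
        if not self.first_contact_done or now - self.last_activity > SESSION_GAP_S:
            notices.append("AI_IDENTITY_DISCLOSURE")
            self.first_contact_done = True
            self.session_start = now
            self.break_prompted = False

        # Prompt the user to consider a break after two hours of continuous use.
        if not self.break_prompted and now - self.session_start > CONTINUOUS_USE_LIMIT_S:
            notices.append("BREAK_REMINDER")
            self.break_prompted = True

        self.last_activity = now
        return notices
```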
The draft emphasizes the need to identify emotional risks. Providers should monitor for indications of emotional distress or dependence and, in such cases, take appropriate steps. When a user’s interaction suggests an immediate risk of self-harm or suicide, the system must trigger an escalation pathway that includes human review and possible outreach to a guardian or emergency contact.
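The measures describe the escalation only in outline. One way a provider might structure a human-in-the-loop pathway is sketched below; the risk labels and handler functions are hypothetical, not drawn from the text:

```python
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    DISTRESS = 1        # signs of emotional distress or dependence
    IMMINENT_HARM = 2   # indications of possible self-harm or suicide

# Placeholder handlers: the draft mandates the outcomes (human review,
# outreach to a guardian or emergency contact), not these interfaces.
def open_human_review(user_id: str) -> str:
    print(f"[review queue] urgent case opened for {user_id}")
    return f"case-{user_id}"

def notify_emergency_contact(user_id: str, case_id: str) -> None:
    print(f"[outreach] contacting guardian or emergency contact for {user_id} ({case_id})")

def show_support_resources(user_id: str) -> None:
    print(f"[in-app] surfacing support resources to {user_id}")

def handle_risk(level: RiskLevel, user_id: str) -> None:
    """Route a flagged interaction according to its assessed risk level."""
    if level is RiskLevel.IMMINENT_HARM:
        case_id = open_human_review(user_id)        # mandatory human review
        notify_emergency_contact(user_id, case_id)  # possible outreach per the draft
    elif level is RiskLevel.DISTRESS:
        show_support_resources(user_id)             # "appropriate steps" short of escalation
```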
Specific protections are outlined for minors and older adults. Services must include a designated mode for users under 18 that imposes limits, such as reminders to stay grounded in reality and controls on usage duration. When minors access services that include emotional companionship features, the provider must obtain consent from a parent or legal guardian. Guardians must be able to receive alerts about safety issues, see summaries of the minor’s activity, block certain roles or features, set usage limits, and deny in-app purchases.
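Here, too, the draft specifies outcomes rather than interfaces. A guardian-facing permission set for the minors' mode might, for illustration only, be modeled like this; every field name and default is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class GuardianControls:
    """Capabilities the draft says guardians must have; names and defaults are illustrative."""
    receive_safety_alerts: bool = True
    view_activity_summaries: bool = True
    blocked_roles: set[str] = field(default_factory=set)   # roles or features the guardian has blocked
    daily_limit_minutes: int = 60                           # the draft sets no specific number
    allow_in_app_purchases: bool = False

@dataclass
class MinorMode:
    guardian_consent_obtained: bool = False   # required before emotional-companionship features
    reality_reminders_enabled: bool = True    # periodic reminders to stay grounded in reality
    controls: GuardianControls = field(default_factory=GuardianControls)

    def may_use_companionship_features(self) -> bool:
        return self.guardian_consent_obtained
```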
The proposed measures also define categories of content and behavior that are prohibited. AI systems must not generate or distribute material involving explicit sexual content, gambling, violence, incitement to crime, rumors that disturb public order, or content deemed harmful to national security. Providers must not design systems to manipulate users emotionally, induce dependence, set psychological traps, or make self-harm appear appealing.
On data protection, the draft calls for safeguards over records of user interactions and restricts sharing that information with third parties unless legally required or expressly authorized by the user. Users must be given a way to delete their interaction records, and guardians may request deletion of data relating to minors. The draft also states that providers must not use interaction data or sensitive personal information to train AI models without separate consent, and that they must conduct annual audits of how minors' personal information is handled.
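In practice, these obligations amount to consent and deletion checks on stored interaction records. A minimal sketch, with hypothetical field names, might look like the following:

```python
from dataclasses import dataclass

@dataclass
class InteractionRecord:
    user_id: str
    is_minor: bool
    separate_training_consent: bool   # the separate consent the draft requires for model training

def eligible_for_training(rec: InteractionRecord) -> bool:
    """Interaction data may be used to train models only with separate consent."""
    return rec.separate_training_consent

def delete_records(records: list[InteractionRecord], user_id: str,
                   requested_by_guardian: bool = False) -> list[InteractionRecord]:
    """Honor a deletion request from the user, or from a guardian acting for a minor."""
    def keep(rec: InteractionRecord) -> bool:
        if rec.user_id != user_id:
            return True
        if requested_by_guardian:
            return not rec.is_minor   # a guardian's request removes only the minor's records
        return False                  # the user's own request removes their records
    return [r for r in records if keep(r)]
```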
The draft includes provisions requiring evaluations of security and social risk when new services are introduced, when major changes occur, or when systems reach a high number of users. In those situations, reports must be submitted to relevant cyberspace authorities. App distribution platforms are expected to check that these assessments and required regulatory filings are completed before making the services available.
Because it is still a draft, the regulation does not yet carry an effective date. A final version is expected sometime in 2026 after the comment period closes, at which point a formal implementation date will be announced.







