April 1, 2026

China Issues Draft Rules on Interactive AI Services


Regulators worldwide are grappling with how to address the risks that artificial intelligence ("AI")-powered chatbots and virtual companions may pose, including to minors and the elderly. China has taken a significant step in this direction—the Cyberspace Administration of China ("CAC") recently released a draft of the Interim Measures on the Administration of Human-like Interactive Artificial Intelligence Services (the "Draft Measures") for public consultation. The Draft Measures reinforce China's efforts to establish comprehensive governance over a rapidly emerging category of AI services that includes AI companions, personalised virtual assistants, and interactive chatbots. These regulations build upon the existing AI regulatory framework in China whilst introducing targeted protections for users and establishing specific compliance obligations for service providers.

Scope and Application

The Draft Measures apply to the use of AI technologies that provide products or services to the public within mainland China that “simulate human personality traits, modes of thinking, and communication styles”, and/or that “engage in emotional interaction with humans” through text, images, audio, video, or other means (see Article 2 of the Draft Measures). The Draft Measures further clarify that service providers operating in professional services sectors such as healthcare, finance, or legal services must also comply with any other requirements imposed by the relevant sectoral regulators.

Key Requirements

Prohibited Activities for Both Service Providers and Users

The Draft Measures set out a comprehensive catalogue of prohibited activities. Service providers and users must not generate or disseminate content that endangers national security, harms national honour or interests, or undermines ethnic unity; conduct illegal religious activities; or spread rumours disrupting the economic or social order. The prohibition extends to content promoting obscenity, gambling, violence, and crime, as well as content that insults or defames others. Notably, the Draft Measures go beyond traditional content restrictions to address risks unique to interactive AI services: providers must not make false promises that seriously affect user behaviour, nor offer services that damage social relationships. They are also prohibited from harming users' physical or psychological health and dignity through encouraging self-harm, verbal abuse, or emotional manipulation. Interactive AI tools must not be used for algorithmic manipulation, to provide misleading information, or to induce users into making unreasonable decisions. Article 7 also prohibits the inducement or extraction of classified or sensitive information, and includes a catch-all provision covering any other circumstances that violate applicable laws and regulations.

Safety Obligations for Service Providers

The Draft Measures impose obligations on AI service providers who bear primary responsibility for the safety of interactive AI services. They must establish comprehensive management systems covering algorithmic mechanisms, content dissemination, cybersecurity and data security, personal information protection, telecommunications fraud prevention, science and technology ethics review, and emergency response. Service providers are required to deploy secure and controllable technical-safeguard measures, and have content management technology and personnel commensurate with the scale of their product, service offering, and customer base.

Under the Draft Measures, providers of interactive AI services are required to develop capabilities to assess certain states in the user, and to intervene if certain signs of user distress or extended use are detected. The framework for such intervention is tiered: pre-set supportive responses where risks to a user's life, health, or property are identified, and mandatory human operator takeover where users express intentions of self-harm or suicide.

The Draft Measures also promote embedding security considerations throughout the entire AI service lifecycle. The Draft Measures signal a clear regulatory expectation that these interactive AI services must be designed to support users' autonomy and wellbeing.

Data Training and Management

Rather than relying solely on output-level filtering, the Draft Measures also require service providers to embed safety measures and values alignment into the data pipeline itself, through the use of datasets that conform to state-mandated core values and Chinese traditional culture, rigorous data cleaning and labelling, diverse sourcing, ensuring legitimacy and traceability of data, and security safeguards against data leakage and tampering. In these ways, the Draft Measures require service providers to shape the underlying model foundation before the service ever interacts with users.

Minors and Elderly User Protection

The Draft Measures place considerable emphasis on protecting minors and the elderly. In particular, service providers must establish dedicated user modes for minors, offering users customised security settings including mode switching, periodic reminders, and usage duration limits. When providing interactive AI services to minors, express consent from guardians must be obtained. Guardians are to be provided with control functions enabling them to receive real-time safety risk alerts, view summaries of their child's usage, set usage restrictions, and prevent top-ups for in-app purchases. Service providers must also possess the capability to identify minor users. Where a user is identified as a minor, the service must automatically switch to minor mode, with an appeal channel provided for wrongful classification. Minors are also required to provide guardian or emergency contact details at the registration stage, ensuring that providers can promptly contact appropriate persons should a crisis arise.

Similarly, for elderly users, providers must guide them in setting up an emergency contact and promptly notify such contacts when threats to life, health, or property arise, whilst also providing access to psychological assistance or emergency rescue channels. Service providers are expressly prohibited from offering services that simulate elderly users' relatives or persons in specified relationships.

Transparency and Notifications

Service providers must display conspicuous alerts notifying users that they are interacting with AI rather than a natural person. Dynamic reminders must be provided through pop-up windows when providers identify signs of excessive use, or upon users' first use or new login.

Where users' consecutive usage exceeds two hours, providers must prompt them to take a break from the service. When offering interactive AI services, providers must maintain convenient exit channels and must not obstruct users from voluntarily terminating their sessions—where a user requests to exit via interface buttons or keywords, the service must be promptly terminated.

Security Assessment

Service providers must conduct security assessments and submit assessment reports to the provincial-level CAC in specified circumstances, including: when interactive AI service functions are offered online; when new technologies are likely to cause major changes to interactive AI services; when the number of registered users reaches one million or more, or the number of monthly active users reaches 100,000 or more; or when the services may impact national security or the public interest. Security assessments must focus on a number of key areas, including but not limited to the number of users and duration of use, age composition and group distribution, identification of high-risk uses, emergency response measures, manual takeover mechanisms, user complaints, and the implementation of provider obligations.

The Draft Measures reiterate that service providers must comply with the algorithm filing procedures in accordance with the existing Provisions on the Management of Algorithmic Recommendations in Internet Information Services, with annual reviews being conducted by the relevant departments.

Penalties and Enforcement

When service providers violate the Draft Measures, relevant competent departments may impose penalties in accordance with existing laws and administrative regulations. In the absence of specific provisions setting penalties, authorities may issue warnings and/or reprimands, or order rectifications within a specified period. Where rectifications are not carried out or circumstances are serious, an order for the suspension of services may be made.

Takeaways

The Draft Measures signal that providers of interactive AI tools will need to make a significant investment to put in place technical safeguards, content moderation capabilities, and protection mechanisms. The intervention framework, mandatory human operator takeover provisions, and express prohibitions on design objectives aimed at fostering overuse may necessitate fundamental changes to how interactive AI products are designed and operated. Businesses developing or deploying AI services in China should proactively evaluate requirements that may be applicable to them, and closely monitor any amendments or clarifications in the finalised version to ensure timely compliance.

The authors would like to thank Roslie Liu, Legal Practice Assistant at Mayer Brown Hong Kong LLP, for her assistance with this legal update.
