2 October 2025

Artificial Intelligence: A Brave New World - China Formulates New AI Global Governance Action Plan and Issues Draft Ethics Rules and AI Labelling Rules


Artificial Intelligence ("AI") development and deployment have been at the forefront of the national agenda in China for quite some time, with the government identifying AI as a key driver for economic growth and technological advancement. Over the last couple of years, significant investment, research, and widespread industry adoption have positioned China at the forefront of global AI innovation.

The "AI Plus" initiative implementation guideline issued in August 2025 sets out ambitious goals for the country: a penetration rate of new-generation intelligent terminals and AI agents set to exceed 70% by 2027, and 90% by 2030. This dynamic growth has prompted the introduction of AI-related regulations aimed at ensuring the safe, ethical, and inclusive development and deployment of AI technologies nationwide. China has recently announced three significant developments for the  rapidly evolving AI regulatory landscape, including the release of (i) the Global AI Governance Action Plan (the "AI Action Plan"); (ii) the draft Administrative Measures for the Ethical Management of Artificial Intelligence Technology (Trial) (the "Draft Measures"); and (iii) the Measures for Labelling of AI-Generated Synthetic Content (the "Labelling Measures"). These initiatives reflect the ambitions the country has to shape and influence international AI governance while establishing robust domestic safeguards for AI research, development, and deployment.  We discuss these recent updates and their potential impact on businesses operating in or engaging with China across the AI value chain.

1. Global AI Governance Action Plan

On 26 July 2025, China issued its AI Action Plan at the World Artificial Intelligence Conference 2025. The AI Action Plan sets out intended action areas spanning innovation, infrastructure, open ecosystems, high-quality data, green AI, standards and multi-stakeholder governance. We highlight key elements particularly relevant for businesses:

  • Innovation and industry adoption: Innovation and experimentation, international collaboration, and the transformation of research outcomes into real-world applications are encouraged. Businesses are urged to participate in cross-border technological cooperation, adopt AI in various sectors (such as industrial manufacturing, healthcare, education and smart cities), and share best practices. This opens up opportunities for partnerships, technology transfer, and market expansion.
  • Open-source and data sharing: The AI Action Plan calls for the development of secure, cross-border open-source communities and platforms to promote the sharing of resources, lower barriers to innovation, and improve accessibility. It also highlights the importance of open-source compliance and technical safety guidelines, promotes the open sharing of non-sensitive development resources (such as technical documentation and API documentation), and encourages enhanced compatibility between upstream and downstream products to foster a more inclusive and efficient AI environment.
  • High-quality data and privacy: The AI Action Plan promotes the lawful, orderly, and free flow of data, supports the establishment of global data-sharing mechanisms and platforms, and encourages the creation of high-quality datasets. It places strong emphasis on safeguarding privacy and data security, while also prioritizing the diversification of data to help eliminate discrimination and bias in AI systems.
  • AI safety governance: The AI Action Plan prioritizes robust AI safety governance by calling for regular risk assessments, targeted prevention measures, and the development of a widely recognized safety framework. It advocates tiered management approaches, risk testing systems, and improved data security and personal information protection, and encourages stakeholders to explore traceability management systems to prevent misuse of AI technologies.
  • Capacity building and inclusion: The AI Action Plan calls for AI capacity building through initiatives such as infrastructure development, joint laboratories, safety assessment platforms, education and training programs, and the joint development of high-quality datasets. It also emphasizes improving public AI literacy and skills to help bridge the digital divide.

The AI Action Plan focuses on new opportunities for industry collaboration, innovation, and access to shared resources, while encouraging companies to adopt sustainable and inclusive AI development models. It also sets overarching standards for businesses to comply with in areas such as AI safety, governance, data protection, and ethical practices, with a view to fostering a business environment that prioritizes the responsible use of AI.

2. Draft Administrative Measures for the Ethical Management of AI Technology (Trial)

On 22 August 2025, China's Ministry of Industry and Information Technology ("MIIT"), together with nine other central regulators and two national associations, issued the Draft Measures for public comment. The Draft Measures focus on fostering responsible AI innovation, enhancing ethical oversight, and protecting the public interest in the development and use of AI.

Broad application

The Draft Measures apply to AI research, development and application within China that may pose ethical risks to life and health, human dignity, the environment, public order or sustainable development, as well as other AI activities subject to ethics review under Chinese laws.

Organizations involved in regulated AI activities, including tertiary education institutions, research institutes, medical institutions, and enterprises, are designated as responsible entities. Where feasible, these organizations shall establish independent AI technology ethics committees ("Ethics Committees") and ensure that such committees are adequately resourced and composed of experts in technology, ethics, and law. Local or sectoral authorities may establish specialized AI ethics service centers ("Ethics Service Centers"), which are responsible for offering ethics review, training, and advisory services.

Procedures and timeframe

AI projects governed by the Draft Measures will need to undertake an ethics review. This may be conducted either by the organization’s Ethics Committee or by qualified Ethics Service Centers.

To initiate an ethics review, an application must be submitted with a detailed activity plan (including research background and purpose, implementation plan, algorithmic mechanism, types of data involved, sources of the data, testing and evaluation methodology, intended outcome and products, and the intended use case and target users). Applicants are also required to submit an ethics risk assessment and risk mitigation plan regarding the intended use, details of potential risks of misuse or abuse of AI technologies, and a compliance undertaking.

Ethics Committees or Ethics Service Centers will determine whether to accept an application for ethics review and, if accepted, shall conduct the review in accordance with applicable procedures. A decision in an ethics review shall be issued within 30 days. Once approval is granted, the responsible person for an AI project is required to report promptly any changes in ethical risks to the Ethics Committee or Ethics Service Center. The Ethics Committee or Ethics Service Center will maintain ongoing oversight of approved AI projects, including follow-up reviews at intervals generally not exceeding 12 months, and may suspend or terminate AI projects if significant ethical risks arise.

The key focus areas for the AI ethics review include (1) fairness and non-discrimination, (2) robust and controllable system design, (3) transparency and explainability of the algorithms, and (4) clear accountability through traceable processes. The ethics review also examines the qualifications of project personnel, the scientific and social value of the research, the balance between risks and benefits, and the adequacy of risk controls and emergency response plans.

The Draft Measures introduce a “List of AI Technology Activities Requiring Expert Second Review,” which designates certain high-risk AI activities for mandatory expert re-examination following an initial review by the Ethics Committees or Ethics Service Centers. Currently, the list includes human–machine integration systems that significantly affect human behavior, emotions, or health, algorithmic applications with the capacity to mobilize public opinion or shape social consciousness, and highly autonomous decision-making systems deployed in high-risk scenarios, such as those involving human health and safety. This list may be updated as regulatory needs evolve.

Businesses involved in AI research, development, or services in China should proactively evaluate their activities for potential ethical risks and determine whether their projects fall within the scope of the Draft Measures. Companies contemplating AI projects subject to the Draft Measures should consider establishing Ethics Committees where feasible, compile thorough documentation for ethics review, implement robust risk assessment and mitigation strategies, and maintain ongoing oversight and reporting mechanisms to address ethical issues as they arise throughout the lifecycle of their AI initiatives.

3. AIGC Labelling Measures

The Labelling Measures, released by the Cyberspace Administration of China ("CAC") and other authorities on 14 March 2025, took effect on 1 September 2025. The technical standard on AI content labelling, i.e., Cybersecurity Technology – Labelling Method for Content Generated by Artificial Intelligence (GB 45438-2025) ("Labelling Standard"), also became effective on the same date. Together, the Labelling Measures and the Labelling Standard provide much-needed clarity on the content labelling requirements under the Interim Measures for the Administration of Generative Artificial Intelligence Services ("GenAI Interim Measures").

Scope of Application

The Labelling Measures apply to internet information service providers that use AI to generate text, images, audio, video, virtual scenes or other content. These service providers are already subject to the following existing regulations:

  • Internet Information Service Algorithmic Recommendation Management Provisions ("Algorithmic Provisions", in force March 2022) – which impose obligations on algorithm transparency, fairness, content moderation, and algorithm filing to the regulatory authority.
  • Internet Information Service Deep Synthesis Management Provisions ("Deep Synthesis Provisions", in force January 2023) – which regulate the use of “deep synthesis” technologies for internet information services.
  • The GenAI Interim Measures (in force August 2023) – which provide baseline obligations concerning training data legitimacy, personal information protection, algorithmic transparency, security assessments and model filing.

Key Requirements

The Labelling Measures require both explicit and implicit labelling of AI-generated Content ("AIGC"). Explicit labelling refers to labels that are added to AIGC or interactive scenario interfaces, and are presented in a manner (such as through text, audio, graphics, or other means) that can be clearly perceived by users. Service providers may refer to the Labelling Measures and the Labelling Standard for the specific operational and technical requirements for labelling each type of AIGC (text, audio, images, videos and virtual scenarios). Where AIGC can be downloaded, reproduced or exported, explicit labels must remain embedded within the file.

Implicit labelling refers to labels that are added to the data files of AIGC through technical means, and are not easily perceived by users. An implicit label should be added to the metadata of the AIGC file and should include key information such as content attributes, the name or code/identifier of the service provider, and a content reference number.
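
To illustrate what this could look like in practice, the minimal sketch below embeds an implicit label as a JSON payload in a PNG file's metadata using Pillow. The metadata key ("AIGC") and the field names (Label, ContentProducer, ProduceID) are illustrative assumptions for demonstration only, not the exact keys prescribed by the Labelling Standard.

```python
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative implicit-label payload. The field names below are
# assumptions chosen to mirror the categories described in the
# Labelling Measures (content attributes, provider name/code,
# content reference number), not the exact keys in GB 45438-2025.
implicit_label = {
    "Label": "1",                                 # content attribute: AI-generated
    "ContentProducer": "example-genai-service",   # service provider name or code
    "ProduceID": "a1b2c3d4-0001",                 # content reference number
}


def embed_implicit_label(src_path: str, dst_path: str) -> None:
    """Write the implicit label into a PNG file's metadata as a text chunk."""
    image = Image.open(src_path)
    metadata = PngInfo()
    # "AIGC" as the chunk key is our own placeholder, not a mandated name.
    metadata.add_text("AIGC", json.dumps(implicit_label, ensure_ascii=False))
    image.save(dst_path, pnginfo=metadata)


embed_implicit_label("generated.png", "generated_labelled.png")
```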

The Labelling Measures also outline the obligations of service providers that offer online content distribution services with respect to AIGC. Specifically, they require these providers to verify whether implicit labels are present in the file metadata. If implicit labels are detected, the provider must add prominent explicit labels around the published content to clearly inform the public that the content is AI-generated. If no implicit label is found but the user declares the content as AI-generated, the provider should still add an explicit label to alert the public that the content may be AI-generated. In cases where neither implicit labels nor user declarations are present, but the provider detects explicit labels or other signs of AI generation, the content should be identified as suspected AI-generated and labelled accordingly. For all such scenarios, the provider must also add relevant key information, such as content attributes, platform name or code, and content reference number, into the file metadata. Additionally, providers are required to offer necessary labelling functions and prompt users to declare whether their content includes AI-generated material.
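
The three verification scenarios above amount to a tiered classification rule for distribution platforms. The sketch below captures that logic in Python; the type names, fields, and status labels are our own illustrative assumptions rather than terms defined in the Labelling Measures.

```python
from dataclasses import dataclass
from enum import Enum


class ContentStatus(Enum):
    AI_GENERATED = "AI-generated"                      # implicit label detected
    POSSIBLY_AI_GENERATED = "possibly AI-generated"    # user declaration only
    SUSPECTED_AI_GENERATED = "suspected AI-generated"  # other signs of AI generation
    NO_LABEL_REQUIRED = "no label required"


@dataclass
class SubmittedContent:
    has_implicit_label: bool     # implicit label found in file metadata
    user_declared_aigc: bool     # user declared the content as AI-generated
    shows_other_ai_signs: bool   # e.g., explicit labels detected in the content


def classify_for_labelling(content: SubmittedContent) -> ContentStatus:
    """Mirror the tiered verification scenarios described in the Labelling Measures."""
    if content.has_implicit_label:
        return ContentStatus.AI_GENERATED
    if content.user_declared_aigc:
        return ContentStatus.POSSIBLY_AI_GENERATED
    if content.shows_other_ai_signs:
        return ContentStatus.SUSPECTED_AI_GENERATED
    return ContentStatus.NO_LABEL_REQUIRED


# Example: content with no implicit label but a user declaration
# would be published with a "possibly AI-generated" explicit label.
print(classify_for_labelling(SubmittedContent(False, True, False)).value)
```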

The Labelling Measures require all service providers to clearly specify in their user service agreements the methods, formats, and standards for labelling AIGC, and to remind users to carefully read and understand the relevant content labelling requirements. Where a user requests the provision of AIGC without explicit labelling, the service provider may provide such content only after clearly outlining the user’s labelling obligations and other responsibilities in the user service agreement, and must retain relevant logs and information about the recipients of such content for no less than six months (see Articles 8 and 9 of the Labelling Measures). Users who disseminate AIGC through online platforms shall declare and use the labelling functions provided by the service provider. The Labelling Measures also prohibit any organization or individual from maliciously deleting, altering, forging, or concealing the required labels, from providing tools or services to facilitate such actions, and from using improper labelling methods to infringe upon the lawful rights and interests of others.

Where the provisions of the Labelling Measures are violated, relevant regulatory departments such as the departments for internet information, telecommunications, public security, and broadcasting may address such violations in accordance with the relevant laws, administrative regulations, and departmental rules. In particular, overseas GenAI providers should be aware that the GenAI Interim Measures expressly empower the regulators to take "technical measures" (e.g., shutting down network access) against companies providing GenAI services to China from overseas that have violated Chinese laws and regulations.

Takeaways

The recent updates on AI governance and content labelling in China mark a significant step toward fostering responsible, transparent, and ethical AI development. Businesses developing or adopting AI technologies in China should proactively review and update their internal policies, technical processes, and product designs to prepare for compliance with new requirements on ethical risk management and content labelling. Establishing dedicated AI governance committees, investing in staff training, and integrating robust labelling and traceability mechanisms will be essential to mitigating risks and building trust with regulators and users. As the AI regulatory landscape in China continues to evolve, businesses should monitor policy and regulatory developments and proactively align their policies with emerging standards to effectively manage compliance risks.

The authors would like to thank Roslie Liu, Legal Practice Assistant at Mayer Brown Hong Kong LLP, for her assistance with this article.
