Amidst the ChatGPT frenzy and the emergence of similar generative artificial intelligence (Generative AI) services in the People’s Republic of China (PRC), the Cyberspace Administration of China (CAC) issued draft Measures for the Management of Generative Artificial Intelligence Services (Draft Measures) in April 2023. The Draft Measures set out proposed rules for regulating the use of Generative AI in the PRC, with the consultation period having ended on 10 May 2023.

The Draft Measures are issued pursuant to the PRC Cybersecurity Law, the Personal Information Protection Law (PIPL) and the Data Security Law, and come hot on the heels of other legislation that aims to regulate the use of AI – namely the “Internet Information Service Algorithmic Recommendation Management Provisions” (Algorithmic Recommendation Provisions) and the “Provisions on the Administration of Deep Synthesis Internet Information Services” which were enacted in 2022 and 2023 respectively.

The Draft Measures are intended to apply to organisations or individuals that utilise Generative AI to provide chat and content generation services (Service Providers)1 to the public within the PRC.2

Generative AI is defined as “technologies generating text, image, audio, video, code, or other content based on algorithms, models, or rules”.3 Accordingly, non-PRC providers of Generative AI services would also be considered Service Providers and subject to the Draft Measures, if such services are accessible to the public within the PRC. In other words, the Draft Measures, much like the PIPL, apply extraterritorially. It is, however, unclear how this is intended to be enforced against companies outside the CAC’s geographical reach.

The key provisions of the Draft Measures concerning Service Providers are highlighted below:

(a) Algorithm and System Transparency

Service Providers are required to comply with two filing requirements, namely the need to file:

  1. A security assessment with the CAC that complies with the “Provisions on the Security Assessment of Internet Information Services with Public Opinion Properties or Social Mobilization Capacity” (Security Assessment Provisions); and
  2. Their algorithm as required under the Algorithmic Recommendation Provisions.4

While these requirements have been in place since November 20185 and March 20226 respectively, the Draft Measures clarify that Generative AI services are also subject to them.

The Draft Measures further require Service Providers, upon request from the CAC and/or the relevant authorities, to provide “necessary information that [can] influence users’ trust or choices”, which includes, in particular, foundational algorithms and technical systems.7

Since such algorithms are the lifeblood of Generative AI services, non-PRC established Service Providers should consider this risk carefully before offering their Generative AI services to the PRC market, especially if they are considering establishing operations in the PRC that would be within the reach of the CAC.

(b) Quality of Training Data

The Draft Measures require Service Providers to be responsible for the legality of pre-training data and optimisation training data (Training Data), and prohibit the infringement of intellectual property rights or the non-consensual inclusion of personal information in such Training Data.8

Service Providers are required to ensure the “authenticity, accuracy, objectivity, and diversity” of the Training Data, and to comply with the requirements of the Cybersecurity Law of the PRC and other laws and regulations.9 In order to enhance the transparency of the training data used, Service Providers may be required to disclose necessary information on the “descriptions of the source, scale, type, quality etc.” of such Training Data if requested by the CAC and the relevant authorities.10

The regulations on Training Data present two practical issues – compliance and audit.

Firstly, Service Providers will have to grapple with striking a balance between promoting the development of Generative AI technology and ensuring the legality of its use (and the Service Provider’s compliance with the Draft Measures). This is especially difficult when the degree of sophistication of an AI model depends heavily on, among other things, the size and variety of the data it was trained on. Generative AIs ingest and process large amounts of data, which is then used to train them to create “new” works based on the ingested data.

Because of the sheer volume of data needed to train a Generative AI, much of this data – which may be subject to intellectual property (IP) protection (e.g., copyright) – would have been ingested without the permission of the original creators. Furthermore, it would be impossible for a Generative AI to determine whether the data it ingests is infringing, or whether it contains personal data collected without consent.

Having to use overly sanitised data sets may impact the development of the AI model, while strict censorship on the Training Data used to develop the AI models would prove to be very time-consuming and ultimately limit the usefulness of the training data.

Secondly, even if a Service Provider is satisfied that the Training Data used to train the AI model is compliant with the Draft Measures, the Service Provider must keep meticulous records to evidence that compliance in case it is “audited” by the CAC and requested to provide information on the “source, scale, type, quality etc.” of the Training Data.

It is unclear how Service Providers are expected to achieve this, given that training an AI model is an iterative process that also depends heavily on user input. A requirement to capture (and filter) all of this input in real time would slow down the Generative AI service and may prove to be extremely onerous, if not unachievable. This is made even trickier for Service Providers by the prohibitions against retaining information that may be used to identify users (see “(e) Service Provider Restrictions” below).

(c) Content Regulation

The Draft Measures also require that the AI-generated content “respect social virtue and public order customs” and among others, “reflect the socialist core values”; refrain from “subverting state power” or “disrupting economic or social order”; not be discriminatory; respect intellectual property rights; be truthful and accurate; and respect others’ lawful rights and interests.11

Service Providers may find these requirements infeasible. Generative AI is based on recognising patterns in training data, not on understanding the intrinsic meaning of those patterns or verifying that its output reflects reality. As has been widely reported, this often results in convincing but false answers, referred to as “hallucinations”. This is a key challenge for Service Providers globally – see our Legal Update “AI Governance – Specific Takeaways for Companies Regarding the US Senate Judiciary Hearings on May 16, 2023”.

Market players may also be concerned about the requirement under Article 5 of the Draft Measures, which requires Service Providers (including those who provide programmable interfaces or other means that support text, image, or audio etc. generation) to be responsible as the producer of the content generated by such services.12

This imposes an extremely onerous obligation on Service Providers, as it implies that they would have to bear responsibility for all content generated using their Generative AI services by any of their users. Given the unpredictable (and generative) nature of such AI services, Service Providers may have to consider interposing an additional content moderation filter (possibly manual) between the AI’s outputs and the users.

Service Providers are also held legally responsible as “personal information processors” (akin to the concept of a “data controller” under other commonly encountered data protection legislation) and required to comply with personal information protection obligations (i.e. the PIPL) if the AI-generated content involves personal information.13

Service Providers are also required to set up a complaint mechanism to timely process data subject requests for revision, deletion or masking of their personal information.14

The Draft Measures also contain a “whistle-blowing” provision that empowers users of Generative AI services to report any inappropriate AI-generated content to the CAC or the relevant authorities.15 Where the AI-generated content is found to be inappropriate, Service Providers have three months to re-train the Generative AI and ensure that such non-compliant content is not generated again.16

In view of the above, Service Providers should carefully examine the existing technical features of their AI models to determine whether they would be able to comply with such requirements in the event of an inappropriate content complaint, and at least consider preparatory steps necessary for compliance.

(d) User Guidance

The Draft Measures require Service Providers to define the appropriate user groups, occasions and purposes for the use of their Generative AI services, and to adopt suitable measures to prevent users’ excessive reliance on, and addiction to, the AI-generated content. Service Providers are also required to provide guidance to promote users’ scientific understanding and rational use of the AI-generated content, so that it is not put to improper use.17

As observed from recent incidents,18 even highly educated individuals may use AI-generated content improperly, so Service Providers may have to consider including eye-catching disclaimers in the user interfaces of their Generative AI services.

(e) Service Provider Restrictions

Service Providers are prohibited from retaining information that is input by their users and from which it may be possible to deduce the identity of a particular user.19 Service Providers are also prohibited from carrying out user profiling on the basis of users’ input information and usage details, and from providing this information to third parties.20

Generative AIs typically process and retain user interactions to “learn” and fine-tune their output. Accordingly, the imposition of such a broad prohibition – while well-meaning and intended to further the PRC’s recent focus on consumer protection – will be another stumbling block for Service Providers seeking to balance regulatory compliance with technological progress.


A Service Provider’s failure to comply with the Draft Measures is punishable with a fine of up to RMB100,000 (~USD14,200). While the quantum of the financial penalty is not significant, the CAC and other relevant authorities may, in the case of refusal to rectify or under “grave circumstances”, suspend or terminate the perpetrator’s use of Generative AI. The relevant perpetrator may also bear criminal liability where its actions infringe criminal provisions.21

Given the PRC government’s broad discretion to determine the “grave circumstances” or the type of conduct that would “violate the relevant laws and regulations”, Service Providers should be wary of the impact that the Draft Measures may have on their business in the PRC.


As one of the first sets of regulations on Generative AI, the Draft Measures are important, though the broad obligations imposed on Service Providers may need to be more carefully considered in order not to hamstring the competitiveness of Chinese Generative AI companies.

Service Providers and other Generative AI-related businesses should keep their eyes peeled for any future announcement regarding the Draft Measures as the CAC moves to finalise them.

The authors would like to thank Joanna Wong, Trainee Solicitor at Mayer Brown, for her assistance with this Legal Update.

1 Article 5, Draft Measures.

2 Article 2, Draft Measures.

3 Article 2, Draft Measures.

4 Article 6, Draft Measures.

5 Provisions on the Security Assessment of Internet Information Services with Public Opinion Properties or Social Mobilization Capacity came into effect on 30 November 2018.

6 Internet Information Service Algorithmic Recommendation Management Provisions came into effect on 1 March 2022.

7 Article 17, Draft Measures.

8 Article 7, Draft Measures.

9 ibid.

10 Article 17, Draft Measures.

11 Article 4, Draft Measures.

12 Article 5, Draft Measures.

13 ibid.

14 Article 13, Draft Measures.

15 Article 18, Draft Measures.

16 Article 15, Draft Measures.

17 Article 18, Draft Measures.

18 A US lawyer with over 30 years’ experience will face professional disciplinary proceedings on 8 June 2023 for using generative AI to conduct legal research and citing fake precedents in his submissions.

19 Article 11, Draft Measures.

20 ibid.

21 Article 20, Draft Measures.