AI Governance: Practical Guidance from Hong Kong Privacy Commissioner for Personal Data


Introduction

Artificial Intelligence ("AI") tools are now widely adopted across various industries in Hong Kong. The Office of the Privacy Commissioner for Personal Data ("PCPD") conducted a number of compliance checks in May 2025 and found that 80% of the organizations (48 out of 60) reported using AI in their daily operations (see our previous Legal Update Hong Kong Privacy Commissioner for Personal Data Completes Compliance Checks on the Use of AI and Data Privacy).

Given the wide adoption of AI in Hong Kong, the PCPD has issued new practical guidance on the adoption of AI, which encourages organizations to refer to the Checklist on Guidelines for the Use of Generative AI by Employees (the "Guidelines"), published earlier this year, to help them develop internal policies that address the unique risks and challenges posed by AI.

In particular, the PCPD recommends that organizations address the following key areas when developing internal AI policies:

Scope of permissible use: Organizations should clearly identify which Generative AI ("Gen AI") tools are approved for use and define the permitted use cases (such as drafting documents, preparing summaries, or creating content). The internal policy should also clarify to whom it applies, specifying whether it covers all employees or is limited to certain departments or roles.

Protection of personal data privacy: The internal AI policy must provide clear guidance on both the inputs and outputs of Gen AI tools. This includes specifying the types and amounts of information that may be inputted, outlining the permissible use cases for AI-generated outputs, and establishing rules for the storage and retention of such information to ensure compliance with data privacy requirements.

Lawful and ethical use and prevention of bias: The internal AI policy should prohibit the use of AI for unlawful or harmful activities. It should also require human review of all AI-generated outputs to verify accuracy and to identify and address any potential bias or discrimination, and should provide instructions on watermarking or labelling AI-generated materials.

Data Security: Organizations should define which categories of employees are authorized to use Gen AI tools and specify the types of devices on which these tools may be accessed. The use of strong user credentials and security settings should be mandatory. Employees must also be required to promptly report any AI-related incidents according to the organization’s incident response plan.

Violations of AI Policy: Organizations should clearly set out consequences for non-compliance with the internal AI policy. For broader AI governance, organizations can refer to the PCPD’s “Artificial Intelligence: Model Personal Data Protection Framework” issued in 2024 (see our previous Legal Update on the Model Framework).

Supporting Responsible Use: Practical Measures

The PCPD proposes practical support measures, including regular communication with employees on internal policies and updates, targeted training for employees, a designated support team within the organization, and a feedback mechanism to drive ongoing improvement.

Takeaways

To effectively meet the expectations set by the PCPD and mitigate AI-related risks, organizations should consider:

  • Conducting a comprehensive review of all AI tools and use cases within the organization, with particular attention to whether personal data is processed.
  • Clearly specifying which AI tools are approved, outlining permitted use cases and user groups, and requiring pre-approval for the adoption of new tools.
  • Prohibiting the entry of sensitive or confidential information into public AI tools, ensuring that any input of personal data into AI systems is carefully evaluated for compliance with data privacy laws and existing privacy policies, and establishing clear rules governing the storage, retention, and labeling of AI-generated outputs.
  • Assigning designated reviewers for high-risk use cases, and requiring thorough fact-checking and bias assessment before any AI-generated content is used externally.
  • Implementing strong authentication, encryption, and secure configuration standards, and restricting AI use to permitted devices.
  • Delivering role-specific training and providing accessible support channels for employees.
  • Conducting regular audits of AI use, collecting employee feedback and updating internal policies as the use cases or regulatory landscape evolve.
  • Ensuring that internal AI policies and practices comply with data privacy laws, including the Personal Data (Privacy) Ordinance (PDPO).

As AI continues to transform the business landscape in Hong Kong, the PCPD has called for organizations to adopt a proactive and structured approach to AI governance, and to establish comprehensive AI policies that promote lawful, ethical, and responsible use of AI technologies.

The guidelines and guidance on the adoption and use of AI issued by the PCPD to date signal the issues the regulator is likely to focus on in the event of an investigation. Organizations deploying AI tools should therefore act now to ensure that the privacy regulator's guidance is embedded in their practices.

The authors would like to thank Charmian Chan, Legal Practice Assistant at Mayer Brown Hong Kong LLP, for her assistance with this article.
