July 25, 2025

Hong Kong Privacy Commissioner for Personal Data Completes Compliance Checks on the Use of AI and Data Privacy


Introduction

Artificial intelligence (“AI”) has rapidly transitioned from experimental use to widespread adoption across Hong Kong. Organisations are now leveraging AI models to enhance customer service, improve risk management, and expedite research and development activities. Against this backdrop, the Office of the Privacy Commissioner for Personal Data (“PCPD”) carried out a round of compliance checks in February 2025, covering 60 local organisations from various sectors. The review offers first-hand insight into the state of AI governance, data protection, and AI risk management in Hong Kong. In this article we discuss the key findings, regulatory expectations, and practical implications for organisations deploying AI in Hong Kong.

Background and Scope of the Compliance Checks

After the last round of compliance checks, conducted between August 2023 and February 2024 and targeting 28 local organisations, the PCPD undertook a new round of compliance checks in 2025 covering 60 organisations from various sectors, including telecommunications, banking and finance, insurance, beauty services, retail, transportation, education, medical services, public utilities, social services and government departments. The purpose of the exercise was two-fold: first, to assess organisations' compliance with the Personal Data (Privacy) Ordinance (“PDPO”) when collecting, using and/or processing personal data with the aid of AI tools; and second, to examine organisations' implementation of the PCPD's “Artificial Intelligence: Model Personal Data Protection Framework” (“Model Framework”) (see our previous Legal Update on the Model Framework).

Key Findings

Use of AI and related data processing practices

Of the 60 organisations reviewed, 48 (80%) used AI in their day-to-day operations, five percentage points higher than in the previous round. Notably, 42 of these 48 organisations had been using AI for over a year, and more than half (26 of them) used three or more AI systems. The most common use cases included customer service, marketing, administrative support, compliance and risk management, and research and development.

Half (50%) of the organisations that used AI in their day-to-day operations collected and/or used personal data through AI systems. These entities formulated Privacy Policy Statements and Personal Information Collection Statements specifying the purposes for which the personal data would be used and the potential data transferees. Approximately 29% of these organisations provided Privacy Policy Statements that also covered the application of AI.

Of these, the majority retained the personal data collected through AI systems and specified retention periods for that data.

Data Security and Privacy Measures

All organisations handling personal data via AI systems implemented appropriate data security measures. These measures included access controls, penetration testing, data encryption, and data anonymisation. A subset (29%) went further by activating AI-specific security alerts and conducting red teaming drills.

As far as data minimisation is concerned, 67% of the organisations which collected and/or used personal data through AI systems used anonymised or pseudonymised data when using AI systems, and 29% implemented advanced privacy-enhancing technologies such as synthetic data and federated learning.

AI Governance, Risk Assessment and Incident Response

Among the 24 organisations collecting and/or using personal data through AI systems, 79% had established AI governance structures, such as AI governance committees and/or designated responsible personnel, to oversee the use of AI in their organisations. Furthermore, approximately 46% conducted internal audits and/or independent assessments regularly to ensure compliance with the organisation's AI strategies and/or policies.

96% of the organisations which collected and/or used personal data through AI systems conducted pre-implementation testing to ensure reliability, robustness, and fairness of the AI systems. Around 83% of them performed privacy impact assessments before implementation. All organisations conducted risk assessments in the procurement, use and management of AI systems. The risk assessments considered factors such as data security, legal requirements, data volume, quality and sensitivity, potential impact of the AI systems, and mitigating measures.

92% of the organisations had formulated data breach response plans, with around one third of them specifically addressing AI-related incidents.

Regulatory Expectations and Recommendations

The PCPD confirmed in its report that no contravention of the PDPO had been found during the 2025 compliance checks. The PCPD also provided guidance, set out its expectations, and recommended best practices for organisations to follow when adopting AI tools:

  • Continuous Monitoring: Regularly monitor and review AI systems and adopt measures to ensure compliance with the PDPO requirements.
  • AI Strategy and Governance: Establish clear AI strategies, AI governance structures, and provide appropriate employee training.
  • Comprehensive Risk Assessment: Identify, analyse and evaluate risks, and tailor risk management measures for each AI system’s risk profile.
  • Incident Response: Prepare AI-specific response plans to address and mitigate potential risks arising from AI system failures or other data breaches.
  • Regular Internal Audits and Independent Assessments: Conduct regular internal audits and independent assessments to ensure system security, data security, and compliance with the organisation’s data and AI policies.
  • Transparency and Engagement: Communicate and engage with stakeholders and respond to stakeholders' feedback.

Apart from the Model Framework, organisations should also refer to the PCPD “Checklist on Guidelines for the Use of Generative AI by Employees” which provides guidance on how to develop internal employee policies or guidelines for the use of generative AI at work.

Conclusion

The findings from the PCPD's compliance checks highlight the growing integration of AI across diverse sectors in Hong Kong and the critical importance of robust data protection and governance practices. As AI technologies and regulatory expectations evolve, regular assessments and a commitment to best practices will be essential to maintaining public trust and supporting the responsible development and deployment of AI in Hong Kong.

The authors would like to thank Charmian Chan, Trademark Assistant at Mayer Brown Hong Kong LLP, for her assistance with this article.

