The global data privacy and online safety landscape is undergoing a period of unprecedented regulatory transformation. Across every major economic region, lawmakers and regulators are moving aggressively to address the challenges posed by artificial intelligence (“AI”), biometric technologies, children’s online experiences, and cross-border data flows. What was once a fragmented patchwork of national approaches is rapidly converging toward a new era of comprehensive digital governance, though significant regional variations remain.

On AI specifically, the European Union's AI Act has entered its phased implementation, establishing the world's first comprehensive legal framework for AI and setting a regulatory benchmark that other jurisdictions are watching closely. The United Kingdom, while charting its own post-Brexit course, continues to develop AI safety frameworks and is aggressively enforcing its Online Safety Act. The United States, without a comprehensive federal AI statute, is taking a sectoral regulatory approach. China does not yet have a single, comprehensive AI law; instead, AI is governed through a suite of measures and regulations that sit alongside the country's overarching data laws.

Several cross-cutting themes define the current moment. First, regulators worldwide are focusing intensely on protecting children online, with multiple jurisdictions pursuing or considering outright bans on social media access for minors and imposing new age verification requirements. Second, enforcement activity is accelerating dramatically, with cumulative GDPR fines alone now exceeding €5.88 billion and regulators demonstrating willingness to impose penalties that fundamentally alter business models. Third, cross-border data transfers remain a flashpoint, as geopolitical tensions complicate the legal mechanisms companies rely upon to move data internationally. Fourth, AI governance has moved from theoretical discussion to binding legal requirements, with significant compliance deadlines approaching throughout 2026 and 2027.

This alert examines the most significant developments across these areas. Organizations operating internationally should assess their exposure across multiple regulatory regimes and prepare for a compliance environment that will only grow more demanding in the years ahead.

1. AI Regulation Goes Global

EU AI Act Enforcement

The EU AI Act entered into force on August 1, 2024, and represents the world’s first comprehensive legal framework for artificial intelligence. Implementation is occurring in phases, with the most immediate requirements now in effect and additional obligations coming into force throughout 2026 and 2027.

The prohibition on AI systems posing “unacceptable risks” became effective on February 2, 2025. These prohibitions target eight categories of AI practices deemed harmful and contrary to EU values, including AI systems that deploy subliminal manipulation or deceptive techniques, systems that exploit vulnerabilities of individuals due to age, disability, or socioeconomic circumstances, social scoring systems, and predictive criminal risk assessment based solely on profiling. Violations carry maximum penalties of €35 million or 7% of total worldwide annual turnover, whichever is higher.

The governance rules and obligations for general-purpose AI (“GPAI”) models became applicable on August 2, 2025. GPAI providers must now comply with transparency requirements, copyright-related rules, and model safety obligations. In July 2025, the European Commission (the “Commission”) published three key instruments to support compliance: guidelines on the scope of GPAI obligations, a voluntary GPAI Code of Practice, and a template for public summaries of training content. The Commission created an “AI Office” to enforce the GPAI obligations and empowered EU member states to enforce other EU AI Act provisions through national competent authorities.

Rules for high-risk AI systems, including those used in critical infrastructure, education, employment, access to essential services, law enforcement, immigration, and administration of justice, will come into effect August 2026. Providers of high-risk systems will be required to conduct adequate risk assessments, ensure high-quality training datasets, maintain activity logs and detailed documentation, provide clear user information, implement human oversight measures, and meet high standards of robustness and cybersecurity. An extended transition period until August 2027 applies to high-risk AI systems embedded in certain regulated products (e.g., medical devices).

In November 2025, the Commission adopted a Digital Package on Simplification proposing amendments to streamline EU AI Act implementation. The proposal would reinforce the AI Office’s powers, centralize oversight of AI systems built on GPAI models, extend certain simplifications granted to SMEs, and adjust the timeline for application of high-risk rules.

The intersection between privacy and AI is significant. Many data protection authorities in Europe have issued guidance in this area, focusing in particular on the legal bases for processing personal data and on disclosures to data subjects about AI training and the use of AI tools. One element of the EU Digital Omnibus proposal would amend the GDPR to clarify the legal bases for processing in the context of AI, and we predict that this intense interplay between privacy and AI will continue in 2026.

US Sectoral AI Rules

Without a comprehensive federal AI statute, the US continues a patchwork, sectoral regulatory approach. Colorado was the first state to enact a comprehensive AI law; the “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” law (“Colorado AI Act”) takes effect on June 30, 2026 (postponed from February 1, 2026).

US states have also advanced sectoral AI rules targeting prohibited AI uses. The Texas Responsible Artificial Intelligence Governance Act carries stiff penalties for specific prohibited AI practices. Effective as of January 1, 2026 and enforceable by the Texas Attorney General, the Texas law prohibits the following practices by AI systems: manipulation of human behavior, social scoring, capture of biometric data, infringement of rights under the US Constitution, unlawful discrimination, and certain sexually explicit content and child pornography. Under the Utah Artificial Intelligence Policy Act, which took effect May 1, 2024, businesses must, among other things, disclose when a generative AI tool interacts with an individual in connection with a consumer transaction. In addition to Utah, California, Maine, and New Jersey have chatbot laws that require businesses to disclose to users that they are interacting with AI rather than a human being; the Colorado AI Act imposes a similar requirement even for non-high-risk AI systems.

In the human resources context, Illinois (effective January 1, 2026), New York City, and the California Civil Rights Council have adopted AI laws and regulations to protect employees and job applicants from AI-related discrimination and to provide transparency when AI is used for employment decisions. Some states, such as California and New York, have also passed AI laws focused on developers of AI systems and models. These are only a few examples of the AI laws developing throughout the United States; states are also updating and applying existing laws of general applicability, including data privacy, right of publicity, and consumer protection laws, to address AI-related violations.

China’s Algorithmic Governance

China has embedded AI requirements into its broader data regulatory regime. It has issued a suite of AI-related regulations that apply broadly to organizations utilizing generative AI, deep synthesis technologies, or algorithmic recommendation technologies to provide internet information services or content generation services in China or to the public within China. These regulations include:

  • the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services (effective March 2022);
  • the Provisions on the Administration of Deep Synthesis of Internet Information Services (effective January 2023); and
  • the Interim Measures for the Administration of Generative Artificial Intelligence Services (effective August 2023).

Unlike the EU AI Act, which delineates the obligations of distinct actors (providers, deployers, importers, and distributors of AI systems), the Chinese regulations do not clearly define each role within the AI service supply chain. Service providers nonetheless face substantial compliance obligations under this framework. AI service providers must conduct security assessments and file their algorithms with the Cyberspace Administration of China ("CAC") prior to launch if their AI services can influence public opinion or cause social mobilization. Regarding training data, providers must ensure that data is obtained from legitimate sources and complies with intellectual property and personal data protection requirements. Regarding content management, if an AI service provider identifies illegal or harmful content (e.g., content that threatens national security, public order, or social morality), it must promptly suspend the generation and transmission of that content and remove it. AI service providers must also establish complaint-handling mechanisms and comply with the overarching data protection laws, including the Cybersecurity Law ("CSL"), the Data Security Law ("DSL"), and the Personal Information Protection Law ("PIPL"). In addition, China's mandatory AI labeling rules took effect September 1, 2025, requiring AI service providers to clearly mark AI-generated content (e.g., explicit labels on interactive interfaces such as chatbots and implicit labels embedded in content metadata).

UK AI Safety Frameworks

The United Kingdom is developing its own governance approach to AI that differs from the EU's risk-based regulatory model. The UK Government's "AI White Paper," published on March 29, 2023, and its response paper of February 6, 2024, set out five cross-sectoral principles for responsible AI use that existing regulators are expected to apply: (1) safety, security, and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.

The UK Government has emphasized a pro-innovation approach to AI regulation, relying initially on existing regulators (e.g., the Information Commissioner's Office ("ICO"), the Office of Communications ("Ofcom"), the Financial Conduct Authority, and the Competition and Markets Authority) to apply sector-specific rules rather than creating new overarching AI legislation. However, increasing public pressure and parliamentary debate about technologies like image generators may accelerate moves toward more comprehensive AI governance. Organizations deploying AI systems in the United Kingdom should monitor these developments closely, particularly where their systems involve biometric processing, automated decision-making, or applications affecting children.

Privacy and AI are also closely intertwined in the United Kingdom, and the ICO has issued guidance on AI and data protection.

Brazil’s AI Framework Advances Through Congress

Brazil is advancing toward comprehensive AI regulation with Bill 2338/2023 (PL 2338/2023), which was approved by the Senate in December 2024 and is currently under review by a Special Committee in the Chamber of Deputies (the “Chamber”). 

Inspired by the EU AI Act, the bill adopts a risk-based approach, classifying AI systems according to their potential impact on fundamental rights and prohibiting excessively risky applications such as autonomous weapons and subliminal manipulation technologies. 

The framework would create a National AI Regulation and Governance System ("SIA"), with the National Data Protection Authority ("ANPD") designated as the competent authority to regulate sectors currently lacking dedicated oversight. In December 2025, the federal government submitted a complementary bill to address a constitutional defect in the original legislation regarding executive branch prerogatives over ANPD's competencies, signaling strong political will to finalize the regulatory framework.

Key provisions include mandatory preliminary risk assessments for generative AI and general-purpose models, transparency and explainability obligations for high-risk systems, liability rules for developers and operators, and protections for copyright holders whose works are used to train AI models. If enacted, Brazil will join the growing list of major economies with dedicated AI legislation, though final passage and effective dates remain subject to ongoing legislative deliberations in the Chamber. 

Beyond the AI regulatory front, the ANPD has also issued guidance and initiated enforcement actions under existing privacy provisions, particularly with respect to the legal bases for personal data processing.

2. Youth Online Safety Crackdowns

Under-16 Social Media Bans

Following Australia’s world-first social media ban for users under 16, which took effect on December 10, 2025, European countries are actively pursuing similar restrictions. The momentum for youth social media bans reflects growing concerns about mental health impacts, cyberbullying, exposure to inappropriate content, and altered sleep patterns among young users.

In the United Kingdom, Prime Minister Keir Starmer has expressed openness to an Australia-style ban to better protect children from social media. On January 19, 2026, ministers launched a consultation exploring options including a social media age limit and related enforcement mechanisms, restrictions on technology companies' access to young users' data, and limits on addictive features such as infinite scrolling. The House of Lords is considering amendments to the Children's Wellbeing and Schools Bill that would enact a ban within a year of the bill passing. Digital rights advocates have warned that such a ban would require building a mass age-verification system for the entire internet, creating serious risks to privacy, data protection, and freedom of expression.

France intends to ban social media platforms for children under 15 from the start of the 2026 academic year. A draft bill backed by President Emmanuel Macron will be submitted for legal review, with the government targeting implementation by September 2026. The legislation cites the risks of excessive screen use by teenagers, including dangers of inappropriate content exposure, online bullying, and altered sleep patterns. France previously attempted to introduce a digital legal age of 15 in 2023, but that legislation conflicted with EU rules.

Denmark’s government announced in November 2025 that it is pursuing a ban on social media for minors under 15, likely with an exemption for ages 13-14 with parental consent, planning implementation as early as 2026. Other EU Member States exploring restrictions include: Spain, where draft legislation would require parental consent for children under 16 to access social networks and platforms incorporating generative AI; Italy, where a bill could impose restrictions on children under 15 and regulate “kidfluencers”; and Germany, which has ordered a committee to study the feasibility of restrictions, with a final report expected in autumn 2026.

At the EU level, the European Parliament has urged Brussels to set minimum ages for social media access to combat rising mental health problems among adolescents.

In the United States, several states have enacted or introduced laws aimed at restricting minors' ability to create social media accounts; regulating the use of targeted advertising for children; and governing the collection, use, and sale of children's personal information. Some states have gone further by prohibiting minors under the age of 13 from creating social media accounts altogether, while others require parental or guardian consent for account creation by minors between the ages of 13 and 18. These laws generally require social media platforms to perform age verification. Enforcement currently varies, as many of these social media laws face legal challenges, including pending litigation on First Amendment grounds. At the federal level, the Children's Online Privacy Protection Act ("COPPA") restricts data collection from children under 13 without parental consent. Recently proposed federal bills, including the Kids Off Social Media Act and the Protecting Kids on Social Media Act, aim to ban children under 13 from creating social media accounts, mandate parental consent for teens, and restrict addictive algorithms, though none has yet been enacted.

Age Assurance and Verification Laws

The question of how platforms verify users' ages has become central to regulatory debates across Europe. Platforms face a fundamental challenge: implementing age verification systems that are effective without creating unacceptable privacy risks or barriers to access.

A large social media platform announced in January 2026 that it will roll out new age-verification technology across the European Union. The system analyzes profile information, posted videos, and behavioral signals to predict whether an account may belong to a user under the age of 13. Accounts flagged by the system face review by specialist moderators and potential removal, with users able to appeal through facial age estimation, credit card authorization, or government identification.

The Commission has released a prototype of an EU-wide age verification app allowing users to prove they are over 18 when accessing restricted adult content. The app is currently being piloted by Denmark, France, Greece, Italy, and Spain. France’s SREN law requires robust age verification for pornographic content, leading several major platforms to voluntarily block or suspend access in France rather than implement the stringent requirements. However, challenges to the law are ongoing, and an Advocate General opinion in September 2025 concluded that France’s age verification obligation may be incompatible with EU law and the e-Commerce Directive’s country-of-origin principle.

In the United Kingdom, age assurance and verification is addressed through the Online Safety Act 2023, which has already prompted enforcement actions against platforms for inadequate age verification measures. For further discussion of the United Kingdom’s approach, see Section 4, “UK Online Safety Act Enforcement,” below.

In the United States, state laws requiring verification or estimation of users' ages have gained momentum across the country. As of January 2026, 25 states have enacted or introduced age-verification laws targeting minors' access to harmful content, requiring operators of online platforms to verify users' ages. Permitted methods vary and include matching a real-time photograph of the user, uploading a government-issued ID, biometric analysis, and parental consent.

While these laws have faced legal challenges, on June 27, 2025, the Supreme Court upheld Texas’s age verification law in Free Speech Coalition, Inc. v. Paxton. The Court held that Texas may require adult-content websites to verify that users are 18 or older before displaying sexual material harmful to minors. The Court’s ruling has further validated and accelerated state age assurance and verification laws.

Brazil’s Digital ECA Framework: A New Paradigm for Child Protection Online

In September 2025, Brazil enacted Law No. 15,211/2025, known as the Estatuto da Criança e do Adolescente Digital (“Digital ECA”), significantly updating the 1990 Child and Adolescent Statute to govern digital environments. The law applies not only to services expressly directed at children and adolescents but also to any online product or service “likely to be accessed” by minors, regardless of the provider’s location.

It establishes a high standard of data protection, requiring platforms to: ensure privacy and safety-by-default; implement robust age verification mechanisms (prohibiting self-declaration); and offer accessible parental controls in Portuguese. The statute also bans behavioral profiling, targeted advertising to minors, and the use of loot boxes in online games. It enters into force on March 17, 2026, and violations may result in sanctions of up to 10% of a company’s gross revenue in Brazil or BRL 50 million per infraction, including possible service suspension or permanent prohibition.

Following its enactment, ANPD revised its 2025–2026 Regulatory Agenda and enforcement priorities. In October 2025, the agency introduced three regulatory initiatives directly related to the implementation of the Digital ECA. These include detailed rules on age verification mechanisms, definitions regarding the scope and general obligations of technology providers, and revisions to enforcement and sanctioning procedures. The agency signaled that initial implementation would emphasize interpretive guidance, with enforcement actions expected once technical standards are finalized.

Early Enforcement in Brazil: Generative AI Liability

Even before the Digital ECA enters into force, Brazilian authorities have demonstrated a proactive enforcement stance. In January 2026, the ANPD, the Federal Prosecutor's Office ("MPF"), and the National Consumer Secretariat ("Senacon") jointly issued formal recommendations to a US social media platform regarding its AI image-generation tool. The authorities found that the tool was being used to generate synthetic sexualized content depicting real children, adolescents, and identifiable adults without consent. The coordinated action required the platform to immediately block the tool's capacity to generate such content, establish detection and removal mechanisms, and suspend accounts involved in its production. The authorities clarified that, in this case, the platform may be treated as a co-author of the content, which excludes the application of safe harbor protections under Article 19 of the Brazilian Civil Rights Framework for the Internet (Law No. 12,965/2014).

3. Biometric & Facial Recognition Restrictions

US Regulatory Landscape

Although biometric technologies are now widely used across sectors, the US still has no federal law governing the collection, use, or retention of biometric information. In the absence of a federal framework, states have had to fill the gaps. This fragmented landscape leaves companies navigating inconsistent obligations and, at times, creates uneven protections for consumers whose biometric identifiers are being used for authentication and surveillance purposes.

For years, Washington, Illinois, and Texas stood alone as the only three states with laws specifically targeting entities’ collection and use of biometric data. Last year, Colorado became the first state in more than a decade to take a meaningful step toward regulating biometric data.

While many comprehensive state privacy laws identify biometric data as a category of sensitive data and impose heightened obligations on controllers that collect it, Colorado went further. The Biometric Data Privacy Amendment to the Colorado Privacy Act imposes explicit, standalone requirements on businesses that collect biometric identifiers or biometric data from Colorado residents (including employees and job applicants). Specifically, these businesses must adopt a written policy that (1) establishes a retention schedule for biometric identifiers and biometric data; (2) includes a protocol for responding to data security incidents involving such data; and (3) sets out guidelines for the deletion of biometric identifiers. Processors of biometric data must also maintain a protocol for responding to data security incidents involving biometric identifiers or biometric data.

Texas also enhanced its laws last year with the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, which prohibits governmental entities from developing or deploying AI systems that use a person's biometric data to uniquely identify them without consent if doing so would infringe on their rights under federal or Texas law. This reflects a growing trend toward regulating biometric identification within the broader context of AI governance.

Beyond comprehensive privacy laws, a growing number of jurisdictions have enacted targeted facial recognition laws. As of December 2025, 13 states and 23 local jurisdictions have laws specifically addressing facial recognition technology. These laws vary widely in scope and stringency, contributing to the broader patchwork that companies must manage.

Despite the growing attention to biometric technologies, the pace of new biometric-specific legislation remains slow. A few factors may contribute to this lack of momentum. First, Illinois's Biometric Information Privacy Act has been the catalyst for extensive litigation and statutory damages exposure over the years, which may make some legislatures cautious about adopting similar frameworks. Second, many states may be prioritizing broader comprehensive privacy laws and children's privacy laws. Finally, there remains uncertainty about how to regulate biometric technologies (including facial recognition tools), which may lead some states to wait for guidance from the federal government or standard-setting bodies before moving forward.

As biometric and facial recognition technologies become more sophisticated and more deeply integrated into consumer products and public sector systems, lawmakers are increasingly focused on the privacy and security implications. We anticipate additional proposed biometric technology legislation this year as data privacy continues to be a major topic of discussion amidst rapid technological innovation.

European Biometric Limits

GDPR already imposes significant limitations on the processing of biometric data. In addition, the EU AI Act established restrictions on biometric technologies, prohibiting several applications outright while classifying others as high-risk subject to strict compliance requirements.

Under the prohibitions, the following biometric AI practices are banned:

  • Biometric categorization systems that categorize individuals based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for labelling or filtering of lawfully acquired biometric datasets in law enforcement.
  • Use of AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. (Untargeted scraping refers to the indiscriminate collection of as much data as possible without targeting specific individuals.)
  • AI systems for inferring emotions in workplace and educational settings are prohibited except where intended for medical or safety reasons. The EU AI Act distinguishes between emotion inference, which is regulated, and the detection of readily apparent expressions or physical states, which is not.
  • Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes face a general prohibition with narrow exceptions. Such systems may only be used for targeted searches for specific victims of abduction, trafficking, or sexual exploitation, as well as missing persons; prevention of specific, substantial, and imminent threats to life or terrorist attacks; and localization or identification of suspects of serious criminal offenses.

Retrospective facial recognition for law enforcement is classified as a high-risk AI system and, beginning August 2026, will be subject to additional conditions.

The United Kingdom currently lacks specific legislation governing facial recognition technologies. The ICO has published detailed guidance on biometric data and biometric recognition that provides the primary regulatory framework for organizations processing biometric data in the United Kingdom. The guidance clarifies the concept and properties of biometric data and provides practical considerations for organizations contemplating or using biometric recognition systems. Critically, the ICO guidance confirms that biometric data constitutes “special category personal data.” This means that any organization deploying a biometric recognition system is processing special category personal data and must identify both a lawful basis under Article 6 UK GDPR and a separate condition for processing special category data under Article 9.

Emerging Litigation and Regulatory Scrutiny Across LATAM

Latin America has become both a testing ground and an emerging market for biometric technologies, though regulatory and judicial pushback is intensifying. In Argentina, the Buenos Aires Facial Recognition System was deployed in 2019 across subway and railway stations to identify fugitives by matching live camera feeds against a national database of approximately 40,000 wanted individuals. The system was ruled unconstitutional in 2022 and remains suspended pending agreement on an audit framework, after courts found that authorities had conducted over 10 million biometric queries and improperly loaded data of politicians, journalists, and activists, resulting in approximately 140 wrongful detentions.

Meanwhile, an iris-scanning project has faced enforcement actions across the region. Colombia's Superintendency of Industry and Commerce permanently shut down the project's operations there in October 2025 for collecting biometrics from nearly two million users without adequate transparency, and Brazil's ANPD barred its operator from offering cryptocurrency in exchange for biometric data.

A US-based AI company settled a landmark class action in the United States in March 2025 and is now largely prohibited from selling its facial recognition database to American private companies. However, the company has been quietly expanding into Latin America, where it sees a more permissive regulatory environment. It now operates in Brazil, Colombia, Chile, the Dominican Republic, and Trinidad and Tobago, and has conducted training operations with law enforcement officials from at least 10 countries in the region. These developments underscore a growing regional tension: while governments pursue biometric solutions to address security challenges, courts and regulators are increasingly willing to intervene when deployments lack adequate safeguards, transparent data processing practices, or proper legal bases under national data protection regimes.

4. Biggest Enforcement Risks in 2026

GDPR Fines Targeting AI, Adtech, and Children’s Data

GDPR enforcement continues to intensify, with cumulative fines reaching approximately €5.88 billion since 2018. In the first half of 2025 alone, the five largest fines totaled over €3 billion. Recent enforcement demonstrates regulatory willingness to target business-critical practices involving AI, advertising technology, and children's data.

The largest GDPR fine in history, €1.2 billion, was issued by the Irish Data Protection Commission (“DPC”) against a large social media platform in May 2023 for transferring European user data to the United States without adequate protection mechanisms. In May 2025, another social media platform received a €530 million fine for infringing GDPR requirements regarding transfers of EU user data to China and transparency failures. The Irish DPC found that the platform failed to verify and guarantee that EEA user data received equivalent privacy protections within China and did not address concerns arising from potential access by Chinese authorities under domestic anti-terrorism and counter-espionage laws.

Enforcement targeting children’s data protection has been particularly active. A different social media platform was fined €405 million in September 2022 for failing to protect children’s personal data, including making contact information publicly visible when teenagers switched to business accounts and failing to provide information in age-appropriate language. Another platform faced a €345 million fine in September 2023 for child data protection failures, including inadequate age verification, inappropriate default privacy settings for children’s accounts, and privacy notices not written in language children could understand. In April 2023, that same platform also received a £12.7 million UK fine from the ICO for processing the data of approximately 1.4 million underage users without parental consent.

Advertising technology and behavioral targeting remain enforcement priorities. An online shopping company was fined €746 million in July 2021 for processing personal data for behavioral advertising without proper user consent. A professional networking platform received a €310 million fine in October 2024 for violations related to behavioral analysis and targeted advertising without adequate consent. Another large tech company faced €150 million in combined fines in December 2021 in relation to users' ability to reject cookies, with the authority finding that the company's interfaces were deliberately designed to favor cookie acceptance while rejection required multiple steps.

AI-specific enforcement has emerged as a significant new frontier for data protection authorities. In December 2024, Italy’s Garante fined a large AI company €15 million for GDPR violations related to its chatbot, finding that the company processed personal data to train its AI models “without having an adequate legal basis” and violated transparency and information obligations toward users. The Garante also cited the company’s failure to notify authorities of a security breach in March 2023 and inadequate age verification mechanisms that risked exposing children under 13 to inappropriate AI-generated content.

A social media platform is now under investigation by the Irish DPC regarding the use of EU users' personal data to train its AI chatbot. The DPC is examining whether the platform lawfully processed personal data contained in publicly accessible posts to train the AI's large language models, and whether its reliance on "legitimate interests" as a legal basis is valid for such processing or whether users' consent is instead required.

FTC Enforcement Priorities

The US Federal Trade Commission’s (“FTC”) 2026-2030 draft Strategic Plan signals that the FTC intends to prioritize deceptive design practices, health-related deceptions, the protection of children’s information, and AI-enabled harms.

Specifically, the FTC appears to underscore its continued reliance on Section 5 of the FTC Act to address unfair and deceptive practices online, with particular attention to manipulative interfaces, misleading consent flows, and design patterns that obscure material information or steer users into unintended choices (i.e., "dark patterns").

Health-related deception also appears to be a focus for the FTC in 2026, with the agency likely to continue taking action against misleading claims involving health products and services, as well as pursuing enforcement actions tied to sensitive health-related data flows. Companies operating in digital health, wellness, and adjacent sectors should expect enhanced scrutiny of their data handling practices, privacy notices, consent mechanisms, and marketing claims.

The FTC’s Strategic Plan also reiterates the agency’s focus on safeguarding children’s information, both through COPPA enforcement and the agency’s new authority under the Take It Down Act, a law that prohibits the nonconsensual online publication of sexually explicit images and videos and imposes notice and takedown obligations on covered online platforms. Together, these authorities reflect the FTC’s broader commitment to protecting minors’ privacy and safety across digital environments.

Finally, the FTC has indicated that its enforcement posture will be supported by a more robust technical infrastructure, including heavier use of the agency’s technologists and expanded deployment of AI and machine-learning tools to help spot patterns of deception and build cases. This expanded technical capacity aligns with the agency’s broader focus on AI-enabled harms and signals that companies should expect more technologically sophisticated investigations into unfair and deceptive practices, dark pattern deception, and sensitive health-related data mishandling.

China PIPL Penalties for Cross-Border Transfers

Although China's cross-border data transfer regime was relaxed in March 2024, with clearer exemptions now available, cross-border transfers remain a key focus of regulatory scrutiny. A landmark judgment issued by the Guangzhou Internet Court represents the first judicial ruling on cross-border personal data transfers under the PIPL, signaling heightened supervision by the authorities and increased risks for multinational organizations with operations in China.

The Guangzhou Internet Court's decision involved an international hotel group accused of unlawfully transferring the plaintiff's personal data to various overseas entities in the course of processing a hotel reservation. The Court ruled that the cross-border transfer of personal data to process the hotel reservation and to manage and operate the hotel group's central reservation system was necessary for contractual performance and therefore lawful. However, the transfer of personal data to other overseas business partners and marketing personnel for marketing purposes was not necessary for the performance of the relevant contract and, in the absence of separate consent from the plaintiff, was unlawful.

The Court also clarified the "separate consent" requirement under the PIPL, which must be distinguished from general, one-off consent obtained for multiple processing purposes: separate consent requires specific notification to data subjects regarding the particular purpose of processing and constitutes an express and specific authorization. Notably, the Court determined that merely clicking a checkbox to agree to a comprehensive privacy policy does not satisfy the threshold for separate consent to cross-border data transfers. Organizations relying solely on bundled consent mechanisms therefore face considerable compliance exposure.

Despite the relatively small penalty (written apology, damages of RMB 20,000 (approx. US$2,900), and deletion of the plaintiff’s personal data from all relevant recipients), the more significant consequence is the reputational damage caused in such cases. The decision also reflects growing consumer awareness of privacy rights in China and is a clear prompt for businesses to reassess their China-facing privacy practices.

Brazil ANPD Sanctions for Security Failures/LGPD Enforcement

In line with global trends of intensifying regulatory oversight over AI, children’s data, and biometric systems, Brazil’s ANPD has emerged as a key enforcement actor in 2026. ANPD has transitioned from a moderately active to a very active enforcement posture, with administrative sanctions and investigations accelerating throughout 2025 against both domestic entities and global technology companies, signaling that compliance with the Brazilian General Data Protection Law (“LGPD”) is no longer optional.

In addition to the high-profile enforcement actions previously discussed, the ANPD also issued a determination requiring a global social media platform based in China to enhance its age verification mechanisms. Brazilian courts upheld the measure, affirming that access to the platform must be restricted for users who are not properly registered or verified.

With the ANPD’s transformation into a full regulatory agency with enhanced autonomy and binding rulemaking authority, companies operating in Brazil should expect more aggressive enforcement, particularly regarding AI-driven data processing, children’s data, and security failures. Penalties under the LGPD can reach up to 2% of a company’s Brazilian revenue, capped at BRL 50 million per violation, in addition to non-monetary sanctions such as data deletion mandates and partial or total bans on processing activities.

Enforcement risks are further heightened by overlapping sector-specific regulations. In addition to the ANPD, agencies such as the Central Bank of Brazil ("BACEN"), the National Telecommunications Agency ("ANATEL"), and the Brazilian Securities and Exchange Commission ("CVM") have issued cybersecurity and data-related obligations, non-compliance with which can trigger independent sanctions under their respective mandates.

UK Online Safety Act Enforcement

The UK Online Safety Act 2023 (the “OSA”) continues its phased implementation, with Ofcom demonstrating increasing willingness to take enforcement action against non-compliant services. Non-compliance carries potential fines of up to £18 million or 10% of global turnover, plus court orders requiring internet service providers to block access to services in the United Kingdom.

Ofcom has already issued enforcement actions under the OSA. In August 2025, Ofcom fined an anonymous imageboard provider £20,000 plus daily penalties for failure to comply with information requests. In November 2025, a nudification site operator was fined £50,000 for failing to use sufficiently rigorous age verification. In December 2025, a Belize-based company operating adult websites received a £1 million fine for lack of age checks, plus £50,000 for not responding to information requests.

Ofcom is increasingly focusing on AI chatbots and generative AI services under the OSA framework. In January 2026, Ofcom launched a formal investigation into the same social media company investigated by the Irish DPC over the use of its chatbot to generate non-consensual sexualized images of adults and children. Ofcom is examining whether the company has complied with its obligations to assess the risk of people in the United Kingdom encountering illegal content, take appropriate steps to prevent users from seeing priority illegal content including non-consensual intimate images and child sexual abuse material, take down illegal content swiftly, protect users from breaches of privacy laws, compile children’s risk assessments, and use highly effective age assurance to protect children from encountering pornography.

Ofcom has also opened an investigation into another company, which provides an AI character companion chatbot service, in relation to its compliance with age check requirements under the OSA. This investigation reflects Ofcom’s broader position that generative AI tools, including chatbots and search assistants, fall within the scope of regulated services under the Act. Where a site or app includes a generative AI chatbot that enables users to share text, images, or videos generated by the chatbot with other users, it constitutes a user-to-user service subject to the full range of OSA duties.

Conclusion

The regulatory landscape for data privacy, AI, and online safety continues to evolve rapidly across the globe. Organizations operating in these markets should prioritize conducting comprehensive AI systems inventories and risk classifications, implementing robust age verification and children’s safety measures, reviewing cross-border data transfer mechanisms and safeguards, and establishing proactive compliance monitoring and documentation practices. Companies that address these requirements now will be better positioned to manage regulatory risk and avoid the significant financial and reputational consequences of non-compliance.
