June 27, 2025

Texas Passes Unique Artificial Intelligence Law Focused on Prohibited Practices

On June 22, 2025, Texas Governor Greg Abbott signed into law the Texas Responsible Artificial Intelligence (AI) Governance Act (Texas AI Act). The Texas AI Act adopts a unique approach to regulating AI that has not been observed under other AI laws in the United States. The law imposes requirements on both the public and private sectors, adopts a list of prohibited AI practices, and carries stiff penalties. In addition, unlike the Colorado Anti-Discrimination in AI Law (Colorado AI Act) and the EU AI Act, which require AI developers/providers and deployers to adhere to detailed party-specific obligations based on applicable risk, the Texas AI Act does not provide such an AI governance compliance roadmap. That said, it implicitly requires such parties to maintain detailed internal documentation in preparation for a potential response to a civil investigative demand. The bill enacting the Texas AI Act also updates the Texas Capture or Use of Biometric Identifier Act (CUBI). Below, we provide a breakdown of the Texas AI Act.

  • Who does the law apply to? The Texas AI Act applies to any person who: (1) promotes, advertises, or conducts business in Texas; (2) produces a product or service used by Texas residents; or (3) develops or deploys an AI system in Texas. The law also contains provisions that apply to Texas government agencies.
  • What does the law regulate? The Texas AI Act regulates AI systems, which, similar to other major AI laws (e.g., EU AI Act and Colorado AI Act), “means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
  • What are the law’s obligations?
    ◦ Transparency: A government agency must disclose to consumers that they are interacting with an AI system, even if it would be obvious to the consumer.
    ◦ Prohibited Use (Manipulation of human behavior): It is prohibited to develop or deploy an AI system with the intent to incite or encourage a person to: (1) commit physical self-harm, including suicide; (2) harm another person; or (3) engage in criminal activity.
    ◦ Prohibited Use (Social scoring): A government agency may not use or deploy an AI system for social scoring that may lead to certain detrimental or unfavorable treatment or infringement of any right guaranteed under the United States Constitution, the Texas Constitution, or state or federal law.
    ◦ Prohibited Use (Capture of biometric data): A government entity is prohibited from developing or deploying an AI system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law.
    ◦ Prohibited Use (Constitutional protection): It is prohibited to develop or deploy an AI system with the sole intent for the AI system to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution.
    ◦ Prohibited Use (Unlawful discrimination): It is prohibited to develop or deploy an AI system with the intent to unlawfully discriminate against a protected class in violation of state or federal law. The law notes, however, that a disparate impact is not sufficient by itself to demonstrate an intent to discriminate. Insurance entities and federally insured financial institutions may be exempt from this prohibition.
    ◦ Prohibited Use (Certain sexually explicit content and child pornography): It is prohibited to develop or distribute an AI system with the sole intent of producing, assisting or aiding in producing, or distributing unlawful visual material or deepfake videos or images. It is also prohibited to develop or distribute an AI system that engages in text-based conversations that simulate or describe sexual conduct while impersonating or imitating a child under the age of 18.
  • When does the law go into effect? The Texas AI Act goes into effect January 1, 2026.
  • How is the law enforced? The Texas Attorney General has authority to enforce the Texas AI Act. As part of its powers, the Texas Attorney General may issue a civil investigative demand requiring a party associated with a complaint to provide certain details regarding the AI system, such as: (1) a high-level description of the purpose, intended use, deployment context, and associated benefits of the AI system with which the person is affiliated; (2) a description of the type of data used to program or train the AI system; (3) a high-level description of the categories of data processed as inputs for the AI system; (4) a high-level description of the outputs produced by the AI system; (5) any metrics the person uses to evaluate the performance of the AI system; (6) any known limitations of the AI system; (7) a high-level description of the post-deployment monitoring and user safeguards the person uses for the AI system, including, if the person is a deployer, the oversight, use, and learning process established by the person to address issues arising from the system’s deployment; or (8) any other relevant documentation reasonably necessary for the Texas Attorney General to conduct an investigation under the Texas AI Act. Before bringing an action, the Texas Attorney General must provide notice to the person alleged to have violated the law and give the person 60 days to cure the alleged violation. In addition, a state agency may sanction an individual licensed, registered, or certified by that agency.
  • Are there affirmative defenses? Yes. A defendant is not liable under the Texas AI Act if another person uses the defendant’s AI system in a manner prohibited by the Texas AI Act, or if the defendant discovers a violation of the Texas AI Act through: (1) feedback from a developer, deployer, or other person who believes a violation has occurred; (2) testing, including adversarial testing or red-team testing; or (3) following guidelines set by applicable state agencies. A defendant also has a defense if it substantially complies with the most recent version of the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI RMF) published by the National Institute of Standards and Technology or another nationally or internationally recognized risk-management framework for AI systems. In addition, the Texas Attorney General may not pursue a defendant to collect civil penalties if the AI system has not been deployed.
  • What are the penalties for noncompliance? The Texas Attorney General may bring an action for injunction, recover reasonable attorneys’ fees and expenses, and assess civil penalties. The civil penalties range from $10,000 to $12,000 per curable violation, and $80,000 to $200,000 per uncurable violation. In addition, a person can face daily fines of $2,000 to $40,000 for each day the violation continues. A state agency may also sanction an individual licensed, registered, or certified by that agency if the person has been found in violation of the Texas AI Act and the Texas Attorney General has recommended additional enforcement by the applicable agency. The penalties may include: (1) suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and (2) a monetary penalty up to $100,000. The Texas AI Act, however, notes that the law should not be construed to authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance.
  • What steps can companies take to comply? Companies should incorporate the Texas AI Act’s prohibitions and requirements as part of their broader AI governance program. This involves: (1) forming an AI governance team that will evaluate AI systems to confirm that they do not trigger the prohibited practices; (2) implementing appropriate data governance techniques to ensure that AI systems are properly trained with high-quality data to avoid discriminatory outcomes; (3) adopting a risk management framework (e.g., NIST AI RMF) and preparing an AI impact assessment that evaluates risks, the mitigation measures implemented to reduce the risks, and documents the information described above that the Texas Attorney General may request as part of a civil investigative demand; (4) conducting a legal gap analysis to validate that the company’s development or deployment of AI systems complies with the Texas AI Act; (5) implementing mitigation measures, including, as relevant to the Texas AI Act, bias assessments, testing for model drift that may cause the AI system to manipulate human behavior, ongoing monitoring to ensure that the AI system is not used in connection with sexually explicit content and child pornography, setting up a communication mechanism for a feedback loop, and red-teaming; and (6) preparing policies to demonstrate accountability, including AI developer and deployer policies that clearly state that AI systems may not be developed or used for the prohibited scenarios under the Texas AI Act.
  • Updates to CUBI. In addition to enacting the above AI regulations, the Texas AI Act also updates CUBI, Texas’s biometric privacy law, which requires a person to, among other things, obtain informed consent before capturing an individual’s biometric identifier for a commercial purpose. The first amendment clarifies that an individual is not considered to have provided informed consent simply because their image or other media containing the biometric identifiers is available on the Internet or another publicly available source, unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate. Next, two new exemptions are added to CUBI. CUBI now also does not apply to: (1) the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering AI models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; and (2) the development or deployment of an AI model or system for certain activities related to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or other illegal activity. However, CUBI’s provisions and penalties may still apply if a biometric identifier captured for training an AI system is later used for a commercial purpose not covered by the exemptions.

