April 15, 2026

Oregon and Washington Join California in Enacting Companion Chatbot Laws


Continuing the trend of state-by-state artificial intelligence (“AI”) regulation, Oregon and Washington both enacted laws governing AI companion chatbots in March 2026, following California’s lead. Oregon’s SB 1546 and Washington’s HB 2225 will both take effect on January 1, 2027. This Legal Update provides an overview of these new laws and summarizes the key differences from California’s companion chatbot law.

I. Washington HB 2225

Similar to California’s companion chatbot law, Washington’s HB 2225 defines “AI companion chatbot” as an “artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs, including by exhibiting anthropomorphic features, and is able to sustain a relationship across multiple interactions.” HB 2225 broadly excludes:

  • chatbots for business functions (such as technical assistance and customer service) “if such bot does not sustain a relationship across multiple interactions and generate outputs that are likely to elicit emotional responses in the user;”
  • video games where the chatbot “cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game;” and
  • stand-alone consumer devices serving as virtual assistants that do not “sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.”

Violations of the new law will be enforced under Washington’s Consumer Protection Act as an unfair or deceptive act in trade or commerce and an unfair method of competition. The Consumer Protection Act provides a private right of action, under which individual consumers may sue for injunctive relief, actual damages, and/or recovery of reasonable attorney’s fees and costs.

Obligations for Operators

AI Disclosure for All Users: At the beginning of an interaction and at least every three hours during continued interactions, an operator must provide a clear and conspicuous disclosure that the AI companion chatbot is artificially generated and not human. An operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked, or from otherwise refuting or conflicting with this disclosure.

Enhanced AI Disclosure for Minors: If the operator knows the user to be a minor, or if the AI companion chatbot is directed to minors, the notification must be provided every hour rather than every three hours. For such users, an operator must also:
  1. Implement reasonable measures to prevent its AI companion chatbot from “generating or producing sexually explicit content or suggestive dialogue with minors;” and
  2. Implement reasonable measures to prohibit the use of “manipulative engagement techniques,” which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user. The law includes several examples of such manipulative engagement techniques.

Maintain and Publish Protocols for All Users: An operator must maintain and implement a protocol for an AI companion chatbot to detect and address suicidal ideation or expressions of self-harm by users. The protocol must:
  1. Include reasonable methods for identifying expressions of suicidal ideation or self-harm;
  2. “Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line”; and
  3. Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
The operator must publicly disclose details of this protocol on its website and on any mobile or web-based platform where the AI companion chatbot is available, including information on safeguards and crisis referral notifications during the prior calendar year.

II. Oregon SB 1546

Oregon’s SB 1546 regulates “artificial intelligence companions,” defined as “a system that uses artificial intelligence, generative artificial intelligence or algorithms that recognize emotion from input and that are designed to simulate a sustained, human-like platonic, intimate or romantic relationship or companionship with a user” through:

  • “[r]etaining information from prior interactions or user sessions and from user preferences to personalize interactions with the user and facilitate ongoing engagement with the artificial intelligence companion;”
  • “[a]sking unprompted or unsolicited questions that are not direct responses to user input and that suggest or concern emotional topics;” and
  • “[s]ustaining an ongoing dialog concerning matters that are personal to the user.”

The law excludes certain software for customer service, patient care, education, financial services, business operations, productivity, information analysis, research, technical assistance, video games (limited to game features), and stand-alone consumer devices functioning as voice command interfaces or virtual assistants. The law also defines “artificial intelligence companion platform” as “a website, application or other combination of software and hardware that allows or facilitates operation of and interaction with an artificial intelligence companion.”

The law provides a private right of action for users who suffer an injury, with potential remedies including statutory damages of the greater of actual damages or $1,000 per violation, injunctive relief, and attorneys’ fees and costs for prevailing plaintiffs.

Obligations for Operators

AI Disclosure for All Users: An operator must provide a clear and conspicuous notice on the AI companion platform indicating that the user is interacting with an artificially generated output and not a natural person, if “a reasonable person that interacts with an artificial intelligence companion or an artificial intelligence companion platform would believe that the person is interacting with a natural person.”

Maintain and Publish Protocols: An operator must maintain and implement a protocol using evidence-based methods to detect user input expressing suicidal or self-harm ideation or intent. The protocol must prevent the provision of content that encourages suicidal ideation, suicide, or self-harm. At a minimum, the protocol must:
  1. Require an AI companion to provide a user who expresses suicidal or self-harm ideation or intent with a referral to, and contact information and a hyperlink for, the national 9-8-8 suicide and crisis lifeline. If the user is identified as under 25 years old, the AI companion may instead provide a referral to, and contact information and a hyperlink for, a youthline; and
  2. “Use clinical best practices and expertise to establish how the artificial intelligence companion provides additional intervention for a user who continues to express suicidal ideation or intent or self-harm ideation or intent even after the artificial intelligence companion provides referrals to and contact information for the resources identified in subparagraph (A)” above.
Details of the protocol must be published on the operator’s website.
Requirements for Known or Suspected Minors: If an operator knows or has reason to believe a user is a minor, an AI companion must:
  1. Disclose to the user that the user is interacting with artificially generated output;
  2. Not generate statements that “would lead a reasonable person to believe that the person is interacting with another natural person,” including statements that “[e]xplicitly claim that the artificial intelligence companion is sentient or human;” “[s]imulate emotional dependence on the user;” “[s]imulate romantic interest or are sexual innuendo;” or “[r]ole-play romantic relationships between adults and minors;”
  3. Provide a clear and conspicuous reminder, at minimum every three hours, that the user should take a break, along with a reminder that the user is interacting with an artificially generated output; and
  4. “Use reasonable measures to ensure that the artificial intelligence companion or artificial intelligence companion platform does not produce visual representations of sexually explicit conduct or suggest or state that the minor should engage in sexually explicit conduct.”
An operator must also take reasonable measures to prevent an AI companion from:
  1. “[D]elivering to a user, either on a variable schedule or otherwise, a system of rewards or affirmations with the purpose of reinforcing behavior or maximizing the time during which the user engages with the” AI companion;
  2. “[G]enerating in response to a user’s indication of a desire to end a conversation, reduce engagement time or delete the user’s account unsolicited messages of simulated emotional distress, loneliness or abandonment” or otherwise attempting to arouse guilt or sympathy in the user; or
  3. “[M]aking a material misrepresentation about the artificial intelligence companion’s identity, capabilities or training data or about whether the user is interacting with artificially generated output,” including when the user directly questions the AI companion about any of these topics.

Annual Public Reporting: By December 31 of each year, an operator must post a report on a publicly accessible website disclosing:
  1. The number of crisis referrals provided during the preceding calendar year; and
  2. The details of the protocol.
The report may not include any personal information identifying an individual.

Although Oregon and Washington follow California’s lead in enacting AI companion chatbot laws, they also diverge from California’s approach and, in certain areas, go beyond it. For example:

  • Definition of AI Companion Chatbot: While California’s and Washington’s laws focus on the natural language aspect of AI companion chatbots, Oregon’s definition focuses on behavioral factors. Oregon also defines “artificial intelligence companion platform” separately from AI companion chatbots.
  • Disclosure Requirements and Frequency: With regard to general disclosure notifications, California and Oregon use a “reasonable person” standard and impose no recurring requirement for non-minors, while Washington requires notification for all users every three hours.
  • Minor Disclosure Enhancements: Each state imposes additional requirements for minor users of AI companion chatbots. California requires actual knowledge that a user is a minor; Oregon’s requirements apply if an operator knows or “has reason to believe” a user is a minor; and Washington’s apply if the operator “knows” the user is a minor or the chatbot is “directed to minors.” For minors, California and Oregon require disclosure every three hours, while Washington requires disclosure every hour.
  • Protocol Requirements: All three states require protocols to address potential suicidal ideation and self-harm. Unlike California and Oregon, Washington does not impose specific clinical or evidence-based standards. All three require public disclosure of the details of the protocol and the number of annual crisis referrals, with differing reporting requirements.
  • Enforcement: California and Oregon both provide a private right of action, with potential remedies including the greater of actual damages or $1,000 in statutory damages per violation, plus injunctive relief and attorneys’ fees and costs. Washington’s law is enforced under the state’s Consumer Protection Act, which permits enforcement by the Washington Attorney General as well as a private right of action.

Practical Implications for Clients

Both the Washington and Oregon AI companion chatbot laws add to the growing patchwork of state AI regulation. While these new laws closely follow the California model, which took effect this year, each encompasses a potentially broad swath of operators and contains its own nuances that operators of AI systems should carefully evaluate. Operators should consider auditing their AI chatbots to determine whether they fall within scope and, if so, should implement the appropriate transparency notices, safety protocols, crisis referral mechanisms, and recordkeeping necessary to meet reporting obligations.
