Although US regulation specific to the use of artificial intelligence (AI) and machine learning (ML) by insurance carriers has been limited so far, state insurance regulators have discussed the topic extensively. Recently, some state insurance departments have issued guidance, and some states have enacted laws, addressing the use of AI/ML by insurance carriers.

NYDFS Guidance on Use of External Data Sources in Underwriting

The New York State Department of Financial Services (NYDFS) was the first state insurance department to issue guidance (via a Circular Letter) regarding the use of AI in the underwriting of life insurance. Among other things, this guidance requires licensed life insurance carriers to:

  • Determine that their use of external data sources, algorithms or predictive models in underwriting or rating:
      • Does not collect or use prohibited criteria.
      • Is not unfairly discriminatory.
      • Is based on sound actuarial principles, with a valid explanation or rationale for any claimed correlation or causal connection.
  • Employ staff who are capable of making this determination.
  • Require third-party vendors to disclose sufficient information about their underwriting models so that the insurer can make this determination.
  • Provide appropriate disclosures to consumers.

NAIC Principles on Artificial Intelligence

In August 2020, the National Association of Insurance Commissioners (NAIC) adopted its Principles on Artificial Intelligence, which set forth guiding principles for AI actors. The principles provide that the use of AI should be:

  • Fair and ethical.
  • Accountable.
  • Compliant.
  • Transparent.
  • Secure, safe and robust.

Emerging Legislation and Guidance in Other States

We have recently seen legislative and regulatory activity in certain states seeking to prohibit discrimination against protected classes in the use of AI/ML platforms, including requirements that insurance carriers test whether their use of data, algorithms and models may result in unfair discrimination. Industry participants should carefully monitor this emerging area of insurance regulation.

Key Considerations

While adopted guidance on the use of AI/ML in the insurance industry remains limited, state insurance regulators appear to be particularly concerned about:

  • Non-Discrimination: AI/ML should not be used in an unfairly discriminatory manner and should be based on sound actuarial principles. An insurance carrier using a third-party vendor for AI/ML services cannot simply rely on the vendor's assertion that its AI/ML model is non-discriminatory. Instead, the insurance carrier must require the vendor to disclose information about its model so that the insurance carrier can independently make this determination.
  • Validity Testing: There should be a process to test that an insurance carrier’s use of AI/ML is not unfairly discriminatory and is based on sound actuarial principles.
  • Data Integrity: The integrity of the data used to develop and implement AI/ML systems should be evaluated. Further, the data supporting the final output of an AI/ML system should be retained in accordance with applicable insurance laws.
  • Transparency: Consumers should be provided with appropriate disclosures regarding an insurance carrier's use of AI/ML. In particular, consumers should have a way to inquire about, review and seek recourse concerning insurance decisions based on the use of AI/ML.