Summary of Key Points
- The National Association of Insurance Commissioners (“NAIC”) has a committee and working groups considering the use of big data and artificial intelligence (“AI”) in the industry and evaluating existing regulatory frameworks for their use. In addition, the NAIC has a forum for ongoing discussion among insurance industry stakeholders around these issues.
- These NAIC initiatives could lead to the development of or modifications to model laws, regulations, handbooks, and regulatory guidance.
- Some state insurance regulators, such as the New York Department of Financial Services, the California Department of Insurance, and the Connecticut Insurance Department, have issued circular letters and bulletins highlighting their concerns about bias and discrimination resulting from the use of AI and machine learning (“ML”) in insurance.
- Colorado has enacted a statute that requires its insurance commissioner to adopt rules prohibiting insurers from using algorithms or predictive models that use external consumer data and information sources in a way that unfairly discriminates. Other states have, or have had, similar legislation pending.
As technological innovation has gathered speed in the insurance industry over the past decade, state insurance regulators have tried to enable the implementation of insurance technology while balancing consumer protection concerns. While recognizing that advancements in insurtech enable insurers to deliver a broader range of products through streamlined underwriting and to pay claims more efficiently through more effective data analytics, state insurance regulators remain concerned with ensuring that consumers understand the insurance products they are buying, that products are accessible and fairly priced without reference to criteria that could be regarded as discriminatory, and that individual consumer data is adequately protected and kept private. Among the key areas on which state insurance regulators have focused their attention with respect to innovation and technology in insurance is the use of artificial intelligence (“AI”), including machine learning (“ML”), in the insurance industry.
In the US, state insurance regulators’ efforts with respect to studying, assessing and potentially regulating the use of AI have been led principally by the National Association of Insurance Commissioners (“NAIC”), the association of US insurance regulators from all 50 states, DC and the territories. In addition, certain US states have taken the lead individually in assessing the potential regulatory considerations with respect to the use of AI in insurance.
Among the NAIC’s 2022 priorities is analyzing AI advancements to assess whether current state laws and regulatory tools sufficiently protect consumers. This work is centralized within the NAIC’s Innovation, Cybersecurity, and Technology (H) Committee (the “ICT Committee”). Although this committee has a broad mandate with respect to innovation, cybersecurity, privacy, e-commerce and technology in insurance, one of its key working groups is the Big Data and Artificial Intelligence (H) Working Group (the “BD/AI Working Group”).
The BD/AI Working Group is tasked, among other things, to research the “use of big data and [AI] including [ML] in the business of insurance and evaluate existing regulatory frameworks for overseeing and monitoring their use”; “[r]eview current audit and certification programs and/or frameworks that could be used to oversee insurers’ use of consumer and non-insurance data, and models using intelligent algorithms, including AI”; and “[a]ssess data and regulatory tools needed for state insurance regulators to appropriately monitor the marketplace, and evaluate the use of big data, algorithms, and machine learning, including AI/ML in underwriting, rating, claims and marketing practices”.
The BD/AI Working Group met on August 10, 2022 at the NAIC Summer 2022 National Meeting. At the meeting, the working group received an analysis of the results of an AI/ML survey for the private passenger auto line of business. An AI/ML survey for the home line of business is in the final stages of development; once the NAIC programs the survey into its systems, 10 states will formally issue the market conduct data call to insurers. Finally, an AI/ML survey for the life line of business is in the development phase.
In addition, the BD/AI Working Group has a “Third-Party Data and Model Vendors workstream”. The workstream is considering several potential initial steps for enhanced regulatory oversight of third-party data and model vendors, including requiring contracting insurers to certify that the models that are being used comply with certain standards and developing a library of third-party vendors.
At the NAIC Summer 2022 National Meeting, the ICT Committee held a meeting of the Collaboration Forum on Algorithmic Bias, which was established by the NAIC earlier in 2022 as a platform for multiple NAIC committees to work together to identify and address foundational issues and develop a common framework that can inform the specific workstreams in each group. Rather than being a single event, the Collaboration Forum is intended to promote ongoing discussion among insurance industry stakeholders during regularly hosted events and presentations. The Collaboration Forum on Algorithmic Bias was designed to cover issues such as what kinds of algorithms raise concerns for insurance regulators, how bias might arise in algorithms, which tools might be effective in minimizing and detecting bias, and what regulatory frameworks could address algorithmic bias.
The presentations made during the Collaboration Forum at the Summer 2022 National Meeting covered the following topics: Perspectives on AI Risk Management and Governance; Bias Detection Methods and Tools; Ethical and Responsible Use of Data and Predictive Models; Today’s Approaches to Algorithmic Bias; and Risk of Biased AI. Some of the key themes explored during these presentations were the following:
- Risk Management Approach to AI: Several presenters discussed that, in the absence of more specific guidance from insurance regulators on the use of AI/ML, the industry should treat its use of AI/ML as part of regular risk management. That is, a comprehensive AI/ML risk management and governance framework should include the following components: development and communication of written policies and procedures (including assignment of responsibility and accountability with respect to such policies and procedures), training and monitoring with regard to the policies and procedures, and taking corrective action (and documenting that action) when the policies and procedures are not followed.
- Ethical Use of Data and Predictive Models: Several presenters discussed the principles that they believe should guide the industry’s use of AI/ML, including fairness, safety, transparency and accountability. There was significant discussion of how the industry, guided by these principles, could avoid bias in all stages of AI/ML model development, including during the pre-design, design and development, testing and evaluation, and deployment stages.
- The Need for Testing: Several presenters emphasized the need for testing as a critical tool for identifying unintended discrimination. There are several forms of testing available that could be used to identify bias, including the Control Variable Test, the Interaction Test, the Nonparametric Matching (Matched Pairs) Test, and the Double Lift Chart. According to the presenters, the appropriate test for any particular model will vary based on the model type, the intended use, the output, the volume of data available, and the granularity of protected class data available.
- Access to Protected Class Data: Participants noted several times during the discussion that insurers currently do not have systematic data about policyholders’ membership in protected classes, and that the lack of this data could make testing for bias more difficult.
- The Need for Diversity: Several presenters highlighted the importance of diversity in combating algorithmic bias. They explained that, to prevent bias in the development stage, models should be established with diverse users in mind, and a diverse and inclusive workforce is critical for the oversight or monitoring of AI/ML use because diverse perspectives can help identify bias.
- Model Explainability: Several presenters emphasized the importance of transparency and model explainability. In furtherance of this guiding principle, a proposal was made to develop model cards, which would present certain basic information about an AI/ML model (e.g., a description of the model goals, limitations of the model, trade-offs with respect to the use of the model and performance of the model). This proposal was described as being the equivalent of nutrition labels for AI/ML models.
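To make the matched-pairs idea discussed above concrete, the following is a minimal, hypothetical sketch of a nonparametric matched-pairs comparison. The field names, toy data, and exact-match pairing logic are illustrative assumptions, not an NAIC-endorsed or presenter-specified method; real applications would use statistical matching on many rating variables and formal significance testing.

```python
# Hypothetical sketch of a nonparametric matched-pairs bias check.
# Assumes each record carries risk features, a protected-class flag,
# and a model score; all names and data here are illustrative.
from statistics import mean

def matched_pairs_test(records, features, flag="protected", score="score"):
    """Pair each protected-class record with a record outside the class
    that has identical values for the listed risk features, then compare
    model scores within pairs. A persistent score gap between otherwise
    identical risks is a signal worth investigating for unfair bias."""
    in_class = [r for r in records if r[flag]]
    out_class = [r for r in records if not r[flag]]
    diffs = []
    for p in in_class:
        match = next((o for o in out_class
                      if all(o[f] == p[f] for f in features)), None)
        if match is not None:
            diffs.append(p[score] - match[score])
    return mean(diffs) if diffs else None

# Toy data: two pairs with identical risk profiles but differing scores.
data = [
    {"age": 40, "territory": "A", "protected": True,  "score": 1.10},
    {"age": 40, "territory": "A", "protected": False, "score": 1.00},
    {"age": 55, "territory": "B", "protected": True,  "score": 1.30},
    {"age": 55, "territory": "B", "protected": False, "score": 1.25},
]
gap = matched_pairs_test(data, ["age", "territory"])
print(round(gap, 3))  # → 0.075 mean within-pair score gap
```

As the presenters noted, which test is appropriate depends on the model type, output, data volume, and the granularity of protected-class data available; this sketch only illustrates the within-pair comparison at the heart of the matched-pairs approach.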
The insights shared at the Collaboration Forum will be used by the ICT Committee and its BD/AI Working Group to evaluate existing regulatory frameworks for overseeing and monitoring the use of big data, algorithms, and machine learning—including AI/ML in underwriting, rating, claims, and marketing practices of insurers—potentially leading to the development of or modifications to model laws, regulations, handbooks and regulatory guidance.
In addition to the work being done on the use of AI in insurance at the NAIC, several states have also issued guidance to the insurance industry with respect to the use of AI, including big data and ML. For example, the New York Department of Financial Services (“NY DFS”) issued its Circular Letter No. 1 (January 18, 2019), which resulted from an investigation into New York life insurers’ underwriting guidelines and practices. To address concerns about potential unlawful discrimination, the Circular Letter set forth two guiding principles for New York insurers that use external data in underwriting: (i) insurers using external data sources must independently confirm that the data sources do not collect or use prohibited criteria; and (ii) insurers should not use external data unless they can establish that it is not “unfairly discriminatory” in violation of applicable law; that is, insurers should use external data only if they are confident that the use of the data is demonstrably predictive of mortality risk and they can explain how and why this is the case. The Circular Letter highlighted that NY DFS, like other regulators, continues to be concerned about unlawful discrimination and transparency in the use of data as well as AI and ML in insurance.
Based on similar concerns, the California Department of Insurance (“CDI”) recently issued its Bulletin 2022-5 on June 30, 2022. The focus of the bulletin was to address allegations of racial bias and discrimination in marketing, rating, underwriting, and claims practices by insurance companies and other licensees. CDI, like NY DFS, also highlighted concerns about transparency, and noted that the “greater use by the insurance industry of artificial intelligence, algorithms, and other data collection models have resulted in an increase in consumer complaints relating to unfair discrimination in California and elsewhere” and that the “use of these models and data often lack a sufficient actuarial nexus to the risk of loss and have the potential to have an unfairly discriminatory impact on consumers”. CDI emphasized in the bulletin that insurers and other licensees must “avoid both conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data’ (i.e., extremely large data sets analyzed to reveal patterns and trends) when marketing, rating, underwriting, processing claims, or investigating suspected fraud relating to any insurance transaction that impacts California residents, businesses, and policyholders”. Further, the bulletin provided that “before utilizing any data collection method, fraud algorithm, rating/underwriting or marketing tool, insurers and licensees must conduct their own due diligence to ensure full compliance with all applicable laws”.
Similarly, the Connecticut Insurance Department (“CID”) issued a bulletin on April 20, 2022 regarding The Usage of Big Data and Avoidance of Discriminatory Practices (which updated and amended a bulletin issued on April 8, 2021). CID highlighted similar themes as its counterparts in New York and California: that insurance companies and other licensees must use technology and data in full compliance with anti-discrimination laws. CID also began requiring a “data certification” that insurance licensees’ use of data complies with CID’s bulletin and applicable laws; the first certification was due on September 1, 2022.
Some states are taking more robust action and introducing legislation to specifically prohibit discrimination in the insurance industry’s use of AI. In July 2021, Colorado enacted a new statute that requires the Colorado Insurance Commissioner to adopt rules prohibiting insurers from using any external consumer data and information sources, or any algorithms or predictive models that use such data and information sources, in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression. The Colorado Division of Insurance has conducted several stakeholder meetings to discuss related issues before the Division proceeds with adopting rules on how insurers should test, and demonstrate to the Division, that their use of big data does not unfairly discriminate against consumers. Other states have, or have had, similar legislation pending.
As the insurance industry’s use of AI, including the data that feeds into AI and ML models, continues to grow, developments at both the NAIC and the state level are expected to continue to evolve as well.