August 22, 2022

US NAIC Summer 2022 National Meeting Highlights: Collaboration Forum on Algorithmic Bias


At the Summer 2022 National Meeting of the National Association of Insurance Commissioners (“NAIC”), the Innovation, Cybersecurity, and Technology (H) Committee and its Big Data and Artificial Intelligence (H) Working Group held their first Collaboration Forum session on the topic of algorithmic bias. The Collaboration Forum was established at the Spring National Meeting as a platform for multiple NAIC committees to work together to identify and address foundational issues and develop a common framework that can inform the specific workstreams in each group.  

Rather than being a single event, the Collaboration Forum is intended to promote ongoing discussion among insurance industry stakeholders through regularly hosted events and presentations. The Collaboration Forum on Algorithmic Bias was designed to cover issues such as what kinds of algorithms raise concerns for insurance regulators, how bias might arise in algorithms, which tools might be effective in detecting and minimizing bias, and what regulatory frameworks could address algorithmic bias.

The presentations made during the Collaboration Forum at the Summer 2022 National Meeting covered the following topics:

  • Perspectives on Artificial Intelligence (“AI”) Risk Management and Governance
  • Bias Detection Methods and Tools 
  • Ethical and Responsible Use of Data and Predictive Models
  • Today’s Approaches to Algorithmic Bias
  • The Risk of Biased AI 

Some of the key themes explored during these presentations were the following:

  • Risk Management Approach to AI: Several presenters suggested that, in the absence of more specific guidance from insurance regulators on the use of AI/machine learning (“ML”), the industry should treat its use of AI/ML as part of regular risk management. That is, a comprehensive AI/ML risk management and governance framework should include the following components: development and communication of written policies and procedures (including assignment of responsibility and accountability for those policies and procedures); training and monitoring with respect to the policies and procedures; and corrective action (with documentation of that action) when the policies and procedures are not followed.
  • Ethical Use of Data and Predictive Models: Several presenters discussed the principles that they believe should guide the industry’s use of AI/ML, including fairness, safety, transparency and accountability. There was significant discussion of how the industry, guided by these principles, could avoid bias in all stages of AI/ML model development, including during the pre-design, design and development, testing and evaluation, and deployment stages.
  • The Need for Testing: Several presenters emphasized testing as a critical tool for identifying unintended discrimination. Several forms of testing could be used to identify bias, including the Control Variable Test, the Interaction Test, the Nonparametric Matching (Matched Pairs) Test, and the Double Lift Chart; an illustrative matched-pairs sketch follows this list. According to the presenters, the appropriate test for any particular model will vary based on the model type, the intended use, the output, the volume of data available, and the granularity of protected class data available.
  • Access to Protected Class Data: Participants noted several times during the discussion that insurers currently do not have systematic data about policyholders’ membership in protected classes, and that the lack of this data could make testing for bias more difficult.
  • The Need for Diversity: Several presenters highlighted the importance of diversity in combating algorithmic bias. They explained that, to prevent bias in the development stage, models should be built with diverse users in mind, and that a diverse and inclusive workforce is critical to the oversight and monitoring of AI/ML use because diverse perspectives can help identify bias.
  • Model Explainability: Several presenters emphasized the importance of transparency and model explainability. In furtherance of this guiding principle, a proposal was made to develop model cards, which would present certain basic information about an AI/ML model (e.g., a description of the model’s goals, the model’s limitations, trade-offs with respect to the use of the model, and the model’s performance); a simplified illustration also appears after this list. This proposal was described as the equivalent of a nutrition label for AI/ML models.
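
The matched-pairs concept referenced above can be pictured with a short sketch. This is not a method prescribed by the NAIC or the presenters; it is a minimal, hypothetical example that assumes a pandas DataFrame of policy records, numeric permitted rating variables, a fitted model exposed through a hypothetical predict function, and a protected-class indicator that is available for testing but excluded from the model’s inputs.

    # Minimal, hypothetical sketch of a nonparametric matched-pairs check.
    # Assumptions (not from the NAIC materials): numeric permitted rating
    # variables, a fitted model exposed as predict(features), and a
    # protected-class indicator used only for testing, never for rating.
    import numpy as np
    import pandas as pd
    from sklearn.neighbors import NearestNeighbors

    def matched_pairs_gap(df, rating_cols, group_col, predict):
        """Pair each protected-class record with its closest comparison-group
        record on the permitted rating variables, then compare model outputs."""
        protected = df[df[group_col] == 1]
        comparison = df[df[group_col] == 0]

        # Match only on permitted rating variables; the protected attribute
        # is never passed to the model itself.
        nn = NearestNeighbors(n_neighbors=1).fit(comparison[rating_cols].to_numpy())
        _, idx = nn.kneighbors(protected[rating_cols].to_numpy())
        matched = comparison.iloc[idx.ravel()]

        # Per-pair difference in model output; large differences for otherwise
        # similar risks may warrant further actuarial and legal review.
        gap = np.asarray(predict(protected[rating_cols])) - np.asarray(predict(matched[rating_cols]))
        return pd.Series(gap).describe()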
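
Similarly, the model card proposal can be thought of as a small, structured summary that travels with a model. The sketch below is hypothetical; the field names and values are illustrative only and do not reflect any NAIC or industry template.

    # Hypothetical, simplified model card; field names and values are
    # illustrative only and are not an NAIC or industry standard.
    model_card = {
        "model_name": "personal_auto_gbm_v2",  # hypothetical model
        "goals": "Improve policy-level loss-cost accuracy",
        "intended_use": "Supplement to the filed personal auto rating plan",
        "inputs": ["vehicle_age", "annual_mileage", "prior_claims"],
        "limitations": "Trained on 2018-2021 data; untested in newly entered territories",
        "trade_offs": "Modest accuracy gain versus reduced explainability relative to a GLM",
        "performance": "Validated with double lift charts against the current rating plan",
        "bias_testing": "Control variable and matched-pairs tests run before deployment",
    }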

The insights shared at the Collaboration Forum will be used by the Innovation, Cybersecurity, and Technology (H) Committee and its Big Data and Artificial Intelligence (H) Working Group to evaluate existing regulatory frameworks for overseeing and monitoring the use of big data, algorithms, and machine learning—including AI/ML in underwriting, rating, claims and marketing practices of insurers—potentially leading to the development of or modifications to model laws, regulations, handbooks and regulatory guidance.

To view additional updates from the US NAIC Summer 2022 National Meeting, visit our meeting highlights page.
