April 2026

US NAIC Spring 2026 National Meeting Highlights: Innovation, Cybersecurity and Technology (H) Committee Update

The Innovation, Cybersecurity, and Technology (H) Committee (“H Committee”) of the US National Association of Insurance Commissioners (“NAIC”) and certain of its working groups met at the NAIC’s Spring 2026 National Meeting in San Diego, California.

Adoption of Working Group and Subgroup Reports

The H Committee received and adopted reports on the recent activities of its working groups. Highlights of those meetings and reports are included below.

Big Data and Artificial Intelligence (H) Working Group

The Big Data and Artificial Intelligence (H) Working Group (the “BDAI WG”) met on March 24, 2026. Most of the meeting was devoted to discussing how to operationalize the NAIC’s Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (the “Model Bulletin”) and to a panel discussion on trends in artificial intelligence (“AI”) governance.

The BDAI WG also provided a brief update on the AI System Evaluation Tool pilot process (the “Pilot Process”). The AI System Evaluation Tool (“Tool”) was developed by the BDAI WG to help regulators understand how insurers use AI and machine learning models and systems (“AI Systems”) across business operations and to assess insurers’ governance practices regarding such AI Systems. The Tool provides a structured way for state insurance regulators to review AI Systems, promote transparency, and identify where additional oversight, training, or improvements may be needed.

The Pilot Process involves a small group of 12 state insurance regulators using the Tool with a small pilot group of insurance companies. States participating in the Pilot Process plan to use the Tool in support of a mix of regulatory functions, including market conduct exams, financial exams, and financial analyses, as well as more general regulatory inquiries. The Pilot Process started in March 2026, and participating state insurance regulators have sent, or are in the process of sending, inquiries to certain insurance companies domiciled in their states that were selected by those regulators for the Pilot Process. These insurance companies include a range of companies across main product lines, with an emphasis on property and casualty insurance and life insurance. The goals of the Pilot Process are to:

  • Determine whether the Tool helps insurers clearly explain their AI governance systems to regulators;
  • Determine whether the Tool helps regulators better understand how companies use AI systems and how those companies apply standard governance practices;
  • Support the ongoing improvement and development of the Tool;
  • Help create long-term recommendations for market conduct and financial risk assessment review processes; and
  • Identify what additional regulator training may be needed in the future.

The Pilot Process will run from March to September 2026. The BDAI WG plans to provide frequent public updates as the Pilot Process continues. Once the Pilot Process concludes, the BDAI WG plans to update the Tool based on feedback from the Pilot Process, with the goal of presenting an updated Tool for adoption at the Fall 2026 National Meeting.

Operationalizing the Model Bulletin

The BDAI WG heard a presentation on suggested approaches for operationalizing the Model Bulletin. The Model Bulletin was adopted by the Executive (EX) Committee and Plenary on December 4, 2023, as a model for state insurance regulators to use in issuing guidance, in the form of a bulletin, describing regulators’ expectations for how insurers should govern the development, acquisition, and use of certain AI technologies. Its goal is to protect consumers from harm caused by AI applications, with a special focus on unfair discrimination. The Model Bulletin broadly describes the type of information and documentation a regulator may examine during an investigation or examination of any insurer licensed to do business in the regulator’s jurisdiction regarding the insurer’s use of AI and machine learning technologies and AI systems. Since the Model Bulletin’s adoption by the NAIC in 2023, 24 states and the District of Columbia have adopted it, and four states have adopted insurance-specific regulation or guidance regarding AI.

AI System Risk Levels

NAIC staff gave a presentation on operationalizing the Model Bulletin. As part of that presentation, NAIC staff presented a sample taxonomy for consideration that categorized AI risk of harm into four risk levels: unacceptable, high, medium, and low risk. NAIC staff emphasized that creating a taxonomy of risk is important for identifying which systems need the most regulatory attention.

  • Unacceptable Risk: The highest level of risk, which includes AI systems using methods such as subliminal manipulation or general social scoring.
  • High Risk: The category that NAIC staff asserted most regulated AI systems would fall into. These are AI systems that have the potential to cause significant harm if they fail or are misused.
  • Medium Risk: NAIC staff described this category as including AI systems with a risk of manipulation or deceit, such as chatbots or emotion recognition systems. NAIC staff asserted that people must be informed about their interaction with such AI systems.
  • Low Risk: All other AI systems which can be deployed without additional restrictions, such as spam filters.

Model Compliance Report Structure

NAIC staff then shared a model compliance report structure (the “Compliance Report”) for consideration by the BDAI WG. Under the proposal, regulated insurance companies would complete the Compliance Report to demonstrate compliance with the Model Bulletin. The Compliance Report would consist of the following components:

  • Executive Summary
  • Introduction – Purpose
  • Report Authors – Titles & Credentials
  • Senior Management – BOD Oversight
  • Models & Data Sources – Internal & External
  • Risk Assessment Framework
  • Scope of Models & Model Cards – Inventory
  • Corporate Governance Structure
  • Model Drift & Validation Techniques
  • Protected Class Inference & Bias Testing
  • Consumer Complaint Process

In presenting this model Compliance Report, NAIC staff emphasized the following points:

  • Internal and External Data: NAIC staff asserted that two types of data should be addressed in the Compliance Report – internal and external. For internal data, reporting should focus on how the data was constructed, to address potential issues with selection bias. For external data, reporting should address design constraints that are not disclosed to users and may create bias in models. NAIC staff highlighted that confidentiality agreements that do not allow companies to share external data with regulators pose another obstacle that companies should be aware of.
  • AI Model Cards: NAIC staff also promoted the use of AI model cards, a standardized reporting tool for insurance companies designed to provide the basic information a regulator needs to accurately assess an AI model’s risk level. These cards cover, among other things, training data, evaluation data, AI model details, intended use, metrics, and ethical considerations.
  • Model Drift & Validation Techniques: NAIC staff highlighted that model drift, where a model’s intended use is degraded due to evolving data patterns and changes in data relationships, can create a risk to policyholders. In order to mitigate this risk, NAIC staff urged companies to describe their methods for model drift testing in this section of the Compliance Report, and regulators were encouraged to ask for further information and additional metrics.
  • Protected Class Inference and Bias Testing: NAIC staff noted that bias testing garners significant attention in the AI space, and some lines of insurance have better success than others with bias testing. In some instances, a sociotechnical analysis is necessary for bias testing when mathematical data cannot tell the full story. The Compliance Report provides companies with an opportunity to provide narratives on the weaknesses in their testing and results to provide further context for a regulator’s review of bias testing.

AI Governance Trends

The BDAI WG then hosted a panel discussion on AI governance trends that addressed best practices and potential issues in AI governance, particularly within the Pilot Process.

  • Best Practices: The panelists emphasized the importance of a cross-functional approach to AI governance, as it can overcome an organizational alignment challenge inherent in overseeing the varied uses of AI across business operations. They also noted a positive change, particularly within the Pilot Process: insurers are beginning to put more pressure on vendors to provide AI governance information. As a result, vendors appear to be more forthcoming in explaining their AI models and processes, which will help insurers assess potential risks, consumer harm, and the types of safeguards that might need to be implemented. The panelists also emphasized the need for companies to be thoughtful about the specific operational purposes or goals for using AI tools.
  • Scope of Governance Review: Particularly in the early phases of AI governance adoption, companies are struggling with accurately determining the scope of their review. When the scope of review is too broad, the targeted review of every possible use case of AI becomes overwhelming and it may result in the application of a robust review and approval process even in cases where the proposed AI use is low risk. The panelists stressed the need for guidance on which AI use cases are material and should be given more attention in governance.
  • Risk Management: The panelists emphasized that a company’s risk management process should be designed to mitigate risks that can develop throughout the life cycle of an AI project. In the beginning stages, companies may not be aware of new issues with heightened risk levels that can arise as a project develops, so it is critical for a company to have a risk mitigation strategy that addresses issues as more information about an AI tool emerges through use. The panelists also recommended streamlined AI use intake forms as a starting point for governance committees, which encourages the proponents of AI usage to think more critically about the types of use cases and information they are asking the governance committee to review.
  • Burden on Resources: While AI governance trends have not seemed to differ across industries, the panelists warned that there could be a divide between larger and smaller organizations implementing AI governance programs due to different access to resources. A successful governance program requires significant resources, and the burden of sourcing those resources may be greater for smaller organizations. Smaller organizations simply may not have the human capital to create a dedicated governance committee, and thus may move slower in their governance efforts.
  • Continued Training: The panelists warned of a potential future information gap where companies rely too heavily on automated processes and lose the knowledge of how to conduct these processes manually. Even if human oversight over some of these technologies may be reduced in the future—especially for smaller companies with fewer resources—they recommended continuing training on the manual processes.

Looking Forward

The meeting concluded with two developing points for further discussion.

  • Federal AI Framework: The BDAI WG received an update on federal developments in connection with AI. The White House recently released a national policy framework on AI that includes policy guidance to help guide Congress in developing a national approach to AI regulation. The framework focused on the following areas:
    • Protecting Children and Empowering Parents
    • Safeguarding and Strengthening American Communities
    • Respecting Intellectual Property Rights and Supporting Creators
    • Preventing Censorship and Protecting Free Speech
    • Enabling Innovation and Ensuring American AI Dominance
    • Educating Americans and Developing an AI-Ready Workforce
    • Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws

The framework is consistent with prior policy statements made by the White House regarding AI that have asserted that the proliferation of state AI laws has created barriers to innovation, and recommended congressional action to create a national standard to ease this issue.

  • Claim Handling: The BDAI WG, as well as a consumer representative, noted that the use of AI in claim handling is an area that should be further reviewed and discussed. The consumer representative emphasized that the lack of transparency in some cases and the potential for bias in AI models could create a risk of improperly decided claim settlements that violate states’ unfair claim settlement laws.

Cybersecurity (H) Working Group

The Cybersecurity (H) Working Group (“CWG”) met on March 24, 2026. The CWG voted to adopt its March 13, 2026 and 2025 Fall National Meeting minutes. At the 2025 Fall National Meeting, the CWG discussed, and received comments from interested parties on, the Cybersecurity Event Notification Portal Project, a project to develop a centralized portal for reporting cybersecurity events to insurance regulators. After receiving interested party feedback on the project’s intake form, the CWG met on March 13, 2026 to discuss revisions to the form and subsequently voted to adopt the revised Centralized Cybersecurity Event Notification Portal Project document as exposed, to facilitate commencement of the project.

Presentation on Cyber Threats and Trends

The CWG then heard a presentation on cybercrime risks and trends from William Altman, Director of Cyber Threat Intelligence Services at CyberCube. While both criminal and nation-state threats were addressed in the presentation, the primary focus was on criminal threats, particularly ransomware, given its broader applicability to most organizations.

Mr. Altman highlighted that the United States experiences more ransomware attacks than any other country, though ransomware is increasingly becoming a global challenge as digitization spreads. Generative AI and large language models have enabled threat actors to scale, localize, and personalize attacks more effectively, including in emerging economies that have not traditionally been ransomware hotspots. In particular, more ransomware activity is occurring in and around Japan, Bhutan, and Nepal, with indications that some of this activity may be linked to threat actors based in China. To address potential AI-related vulnerabilities, Mr. Altman encouraged organizations to prioritize recovery speed and overall resilience against cyberattacks. Mr. Altman also identified coverage of the use of AI agents as a potential new coverage area for cyber (re)insurers to consider.

Third-Party Data and Models (H) Working Group

The Third-Party Data and Models (H) Working Group (the “TPDM WG”) met on March 23, 2026. The TPDM WG primarily discussed potential revisions to its Risk-Based Regulatory Framework for Third-Party Data and Model Vendors (the “Framework”). The Framework is a proposed regulatory framework for oversight by state insurance regulators of third-party data and model vendors engaged by insurers for certain insurance functions that have direct consumer impact. Under the Framework, third parties that meet the definition of a third-party data and model vendor would be required to register with the state insurance department before their data or models could be used by insurers. The registration process would focus on a governance review of the third-party datasets and models, which would include a review of the following:

  • Purpose, assumptions, inputs, limitations, performance metrics, and validation processes for third-party models; and
  • Accuracy, completeness, timeliness, representativeness, auditable data lineage, and quality controls for third-party data.

The goal of the registration process would be to enable state insurance regulators to verify consistent national governance standards across third-party data and model vendors, thereby strengthening consumer protection.

At this meeting, the TPDM WG continued discussions from its February 26, 2026 virtual meeting regarding the registration requirement. The TPDM WG clarified that the registration process is better described as a “registry,” intended to provide regulators with the information they need from third parties without requiring each party to undergo an extensive licensure process. The registry was described as an initial step to enable regulators to understand the landscape of entities operating in this space and verify consistent national governance standards across third parties, thereby strengthening consumer protection. The TPDM WG envisioned the registry being a centralized database maintained by the NAIC, comparing it to the NAIC Quarterly Listing of Alien Insurers. However, the TPDM WG noted various contours and considerations that still need to be determined before the registry can be fully implemented, including: whether registration should be mandatory or voluntary; whether this guidance should continue to be provided in the form of a framework, or whether a model law should be developed for states to consider and adopt; and whether any such NAIC registry would supplant registration requirements in states that currently require registration, or be in addition to such requirements. The TPDM WG determined that the drafting group would consider these various decision points further and prepare guidance on the pros, cons, and its recommendation for each, which could be discussed at the next meeting.

Next, the TPDM WG considered the scope of activities deemed to have a consumer impact for purposes of the Framework. In doing so, the TPDM WG reached consensus to narrow the Framework's initial focus from the full range of insurance functions with direct consumer impact—which originally encompassed pricing, underwriting, claims, utilization reviews, marketing, and fraud detection—to pricing and underwriting. The TPDM WG emphasized that this process would continue to be iterative, with potential expansion to additional functions in the future. The Working Group plans to revise the Framework to reflect this narrowed focus. 

As the TPDM WG continues to progress the Framework, third-party data and model vendors operating in the insurance space—particularly those involved in pricing and underwriting functions—should closely monitor the Working Group's continued development of the registry framework and the resolution of outstanding questions regarding its structure and implementation.

Privacy Protections (H) Working Group

Director Elizabeth Dwyer of Rhode Island, Chair of the Privacy Protections (H) Working Group (“PPWG”), reported that the PPWG has concluded its consideration of comments on proposed revisions to Article VI (Exceptions to Limits on Disclosures of Nonpublic Personal Information) of the Privacy of Consumer Financial and Health Information Regulation (#672) (“Model 672”) and published the revised proposed changes to that Article. The PPWG is now considering comments on proposed revisions to Article VII (Rules for Health Information), which were exposed for a 30-day public comment period that concluded on March 11, 2026. After the PPWG has finished considering comments on proposed changes to Article VII, it will collect and consider comments on Article VIII (Additional Provisions) and Article I (General Provisions). The PPWG anticipates exposing the full revised draft of Model 672 for public comment at the end of 2026.

SupTech/GovTech (H) Subgroup & Data Call Study Group

Deputy Director Lori Dreaver Munn of Arizona reported on the work of the SupTech/GovTech (H) Subgroup, which heard presentations from three state insurance departments on their work to create and foster data and analytics teams and their experiences interacting with insurance companies on their use of AI. The subgroup continues to discuss and evaluate educational opportunities for 2026 and to look at technology topics that would help regulators improve the efficiency of their oversight capabilities.

Colton Schulz of North Dakota reported on the work of the Data Call Study Group, which is tasked with studying how to enhance regulator access to high-quality and timely data. The Data Call Study Group has shifted its efforts from developing a manual inventory of all NAIC data elements to the more targeted goal of focusing on market regulation data elements. An initial inventory has now been produced that includes Market Conduct Annual Statement (MCAS) data, complaints data, and data from the homeowners data call. The Data Call Study Group will next work to collect information about ad hoc data calls being conducted by states.

Cybersecurity Event Notification Portal

The H Committee then received an update on the ongoing Cybersecurity Event Notification Portal Project, which, as mentioned above, aims to create a centralized, uniform method for regulators to receive cybersecurity event notifications. The goal of the project is to address inefficiencies and delays created by fragmented reporting of cybersecurity events across states. The project was characterized as the third step in a three-step process to achieve alignment across the states and reduce unnecessary marginal regulatory costs in their implementation of the Insurance Data Security Model Law (#668). The first two steps, already completed, were the CWG’s development of (i) the Cybersecurity Event Response Plan and (ii) the ISDM Compliance & Enforcement Guide. Key updates included adoption of a draft standard intake form, clarification that licensees will not be charged to use the portal, and refinements to the System and Organization Controls (SOC) 3 report language in response to industry feedback.

Presentation on Insurance Artificial Intelligence (AI) Trends, Including Agentic AI Applications

Finally, the H Committee heard a presentation from PwC on insurance AI trends, including agentic AI applications. PwC gave an overview of general insurance technology development and identified that many insurers are in a transitional phase when it comes to adoption of AI technologies due to factors such as legacy systems and data readiness issues. PwC then gave an overview of what AI agents are as compared to generative AI. AI agents make decisions, manage workflows and take action to complete tasks. PwC emphasized that current implementations of AI generally preserve human-in-the-loop controls for higher-risk decisions, particularly in underwriting and claims handling.

The presentation then highlighted that change management is important for the implementation of new technologies and presented an AI Operating Model for AI enablement focused on the following components: AI Council/Steering Committee, AI Governance, AI Delivery, Tech Stack, Architecture, and Upskilling & Training. PwC then provided an overview of three types of such operating models – a centralized model, a federated model, and a balanced (hybrid) model. PwC highlighted that fully centralized models can struggle to scale and move quickly enough to meet business demands, while ad hoc oversight can create risks of inconsistent governance and fragmented oversight. As a result, many insurers are now experimenting with hybrid models.

PwC then discussed the evolving risk landscape of agentic AI systems and provided an overview of both potential value and potential risk. In the context of this landscape, PwC highlighted the importance of AI governance, but also the need to develop frameworks that expedite responsible AI use rather than create unnecessary impediments. Effective governance enables faster and more robust adoption of AI tools by providing clarity around risk tolerance, roles, decision authority, and controls.

To view additional updates from the US NAIC Spring 2026 National Meeting, visit our meeting highlights page.
