
Francesca Butler, a HeplerBroom summer associate and third-year law student at St. Louis University School of Law, provided invaluable assistance in drafting this article.
The Takeaway
Proactive steps insurance providers can take to minimize risks when implementing AI in coverage decisions:
- Maintain human oversight during all stages of developing and implementing AI systems.
- Create C-level responsibility for AI oversight.
- Invest in understanding how AI works.
- Document how an AI system was developed, tested, and validated.
- Utilize ongoing internal and independent audits of the outcomes of AI-assisted claims decisions.
- Monitor state and federal regulations and legislation governing AI use in claims decisions.
New Technology, Same Accountability, Enhanced Scrutiny
Artificial intelligence systems are rapidly being implemented across the insurance industry. These systems affect underwriting decisions, claims handling, and coverage determinations. The industry is excited about these developments because improved processes can deliver better outcomes for both insurers and insureds.
At the same time, legislators and regulators are sending a strong message: Legal and ethical obligations that have regulated the industry for decades must remain, and regulators will check for compliance.
Much of current state legislation focuses on preventing healthcare insurers from making coverage determinations based solely on AI. The legislation requires some form of human oversight, among other accountability and transparency requirements. But all insurers—property, casualty, life, disability, etc.—should take proactive steps to align their policies and procedures with emerging governance standards.
Below is a brief overview of recent legislative activity, along with proactive steps insurers can take to more safely adopt the use of AI.
Specific Prohibitions and Expanded Regulatory Authority
Legislative trends and a model insurance AI system provide strong indicators of the pitfalls insurers may face when using AI in coverage decisions. They also offer guidance on best practices insurers should adopt.
Illinois
The Illinois legislature is currently considering one of the most comprehensive AI bills to date. It details prohibitions, sets clear guardrails, and expands regulatory authority over insurers implementing AI into decision-making processes. House Bill 35[1] (HB 35, “Artificial Intelligence Systems Use in Health Insurance”) passed the House in April 2025 and is now in committee in the Illinois Senate.
HB 35 prohibits insurers from issuing an adverse outcome on an insured’s health insurance claim based solely on AI or a predictive model. “Adverse outcomes” include “denial, reduction, or termination of insurance plans or benefits.” The Bill also requires that an individual with authority to override the AI system’s determinations meaningfully review adverse decisions.
The Bill also empowers the Department of Insurance to investigate insurers’ development, implementation, and use of AI and predictive models, as well as the outcomes from the use of those systems. Aside from inquiring about insurers’ use and application of specific AI models or systems, the Department may delve into an array of related investigations, such as an insurer’s:
- governance
- risk management
- use protocols
- due diligence before acquiring or using AI systems
- monitoring of usage and outcomes
- auditing of data or systems developed by a third party (e.g., a vendor)
- documentation related to adherence to the insurer’s implementation and compliance program
California
California enacted a law in late 2024 that provides specific prohibitions and guidance for healthcare and disability insurers implementing AI for “utilization review or utilization management functions.”[2]
The law prohibits relying solely on group data sets instead of considering the individual circumstances of an “enrollee” in the plan, the clinical circumstances presented by a healthcare provider, and other information specific to an enrollee’s medical or clinical record. Additionally, the biased application of an AI system is forbidden.
Insurers’ AI systems must be open to inspection for audits and compliance reviews. Insurers must periodically review and revise an AI tool’s performance, use, and outcomes. And usage must comply with state and federal laws protecting the privacy of healthcare information.
Other States
Other states have less comprehensive legislation under consideration.
- Connecticut - Senate Bill 447[3] (proposed in January 2025) would prohibit health carriers from using AI in the evaluation and determination of patient care. It aims to safeguard patient access to testing, medications, and procedures.
- Indiana - Currently under review by the Committee on Public Health, House Bill 1620[4] would require both providers and accident/sickness insurers to disclose AI use in two specific situations: (1) when AI is used to make or inform decisions about a patient’s health care, or (2) in communications with the patient regarding their care.
- Montana[5], Iowa[6], Massachusetts[7], and Tennessee[8] - These states have relied on task forces and advisory committees to study evolving AI models and to draft legislation that can adapt as the technology changes while still protecting an insurer’s freedom to use AI.[9], [10], [11]
A Model AI System for Insurers
Many of the states working on AI legislation have adopted the National Association of Insurance Commissioners (NAIC) Model Bulletin on the Use of Artificial Intelligence Systems by Insurers.[12] As of May 2, 2025, 24 jurisdictions have adopted the Bulletin, which was originally published in 2023. Doubtless, insurers considering implementing AI systems are familiar with the Bulletin. It outlines best practices for using AI in the insurance industry and introduces a model Artificial Intelligence System (AIS) Program designed to help insurers and protect consumers by mitigating associated risks. The program emphasizes sound risk management and appeals to insurers operating across multiple states, as many of its policies align with existing state legislation.
Practical Guidance for Adopting and Implementing AI into Insurance Decision-Making Processes
Drafting a corporate strategy that complies with a patchwork of legal frameworks can be daunting. However, insurers should act proactively to prevent AI misuse and regulatory/legislative infractions. They should establish policies and processes around these core themes: human oversight and accountability, transparency and disclosure, and protections against algorithmic bias.
Legislators and regulators favor using humans—particularly those with industry expertise—to validate or override AI recommendations. Additionally, documenting regular audits of AI operations, procedures, and recommendations provides a record that verifies compliance with government mandates designed to protect insureds against AI making decisions without appropriate safeguards.
By establishing clear oversight, insurers can demonstrate that AI accountability is as critical to them as financial integrity or cybersecurity vigilance. Where feasible, insurers should consider either creating a Chief AI Officer role or integrating AI-focused responsibilities into another senior-level leader’s role. Additional responsibilities may include:
- scheduling bias audits, with formal reporting requirements to the C-suite or board
- developing company-wide education and training for the appropriate use of AI, ensuring it reflects the company’s values and core mission
- establishing a plan to test AI outcomes versus human decision makers (see the sketch after this list)
- developing plans to incorporate the results of that testing into improving how the AI makes decisions
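To make AI-versus-human testing concrete, here is a minimal Python sketch of how override rates might be tracked. Everything in it is a hypothetical assumption for illustration: the ClaimReview fields, the decision labels, and the sample data are not drawn from any statute or regulation discussed above.

```python
from dataclasses import dataclass

@dataclass
class ClaimReview:
    claim_id: str
    ai_recommendation: str  # e.g., "approve" or "deny" (hypothetical labels)
    human_decision: str     # the reviewer's final decision

def override_rate(reviews: list[ClaimReview]) -> float:
    """Share of claims where the human reviewer overrode the AI recommendation."""
    if not reviews:
        return 0.0
    overrides = sum(1 for r in reviews if r.human_decision != r.ai_recommendation)
    return overrides / len(reviews)

# Hypothetical sample: two agreements, one override.
sample = [
    ClaimReview("C-001", "approve", "approve"),
    ClaimReview("C-002", "deny", "approve"),  # human overrode the AI denial
    ClaimReview("C-003", "deny", "deny"),
]
print(f"Override rate: {override_rate(sample):.0%}")  # -> Override rate: 33%
```

A persistently high or rising override rate can signal that a model needs retraining, or tighter guardrails, before its recommendations carry more weight.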
Insurers must also invest in understanding the AI systems they use. Ongoing development of auditing and interpretability tools will further strengthen these capabilities.
Explaining how AI decisions are made supports transparency, disclosure, and, ultimately, legal defensibility. This is a tall order, considering that even AI experts struggle to explain exactly how AI works. Because AI is not entirely understood, using it in decision making related to insureds introduces an additional element of risk. As part of their implementation plans, insurers need to assess their acceptable level of risk. That assessment should include reviewing the legislation and regulation that govern disclosure of AI usage to insureds as insurers develop their AI governance.
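One simple, model-agnostic interpretability check that auditors sometimes use is permutation importance: shuffle a single input feature and measure how much the model’s accuracy drops. The Python sketch below implements that idea from scratch; the toy model, features, and data are illustrative assumptions, not any insurer’s actual system.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows the model classifies correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Average accuracy drop when each feature column is shuffled.
    A larger drop suggests the model leans harder on that feature."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(baseline - accuracy(model, X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Hypothetical toy model: approve when the claimed amount (feature 0) is under 10,000.
toy_model = lambda row: "approve" if row[0] < 10_000 else "deny"
X = [[5_000, 3], [12_000, 1], [8_000, 7], [15_000, 2]]
y = ["approve", "deny", "approve", "deny"]
print(permutation_importance(toy_model, X, y))
# Feature 0 shows a clear drop; feature 1 shows ~0 because the model ignores it.
```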
Documentation should inform the company’s future decision makers as well as insureds, and it must be designed to withstand future regulatory and litigation scrutiny. Insurers should prioritize maintaining detailed model development files that document how AI systems were trained, tested for fairness, and validated. Each time a human reviews an AI recommendation, that review should be documented, outlining how and why a decision was adopted or altered. Documentation of leadership actions, such as employee training initiatives and governance measures taken to address AI risks, should also be maintained.
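As one illustration of what per-review documentation could look like, the following Python sketch appends a structured record each time a human reviews an AI recommendation. The file name, field names, and example values are hypothetical.

```python
import json
from datetime import datetime, timezone

def log_human_review(path, claim_id, ai_recommendation, final_decision,
                     reviewer_id, rationale):
    """Append one structured record documenting a human review of an AI recommendation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "claim_id": claim_id,
        "ai_recommendation": ai_recommendation,
        "final_decision": final_decision,
        "overridden": final_decision != ai_recommendation,
        "reviewer_id": reviewer_id,
        "rationale": rationale,  # how and why the decision was adopted or altered
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage; all values are illustrative.
log_human_review(
    "ai_review_log.jsonl",
    claim_id="C-002",
    ai_recommendation="deny",
    final_decision="approve",
    reviewer_id="adjuster-17",
    rationale="Clinical records support coverage; the model lacked recent lab results.",
)
```

An append-only record of this kind gives future reviewers a clear trail showing how and why each AI recommendation was adopted or altered.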
Insurers should commission independent audits focused on disparate impacts and bias, study the results, and document any remedial actions taken.
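One common screen in disparate-impact audits is the ratio of each group’s approval rate to a reference group’s rate, with ratios below 0.8 (the “four-fifths rule” borrowed from U.S. employment law) flagged for closer review. The Python sketch below computes that ratio; the group labels, data, and the 0.8 threshold are illustrative, and a low ratio is a red flag to investigate, not a legal conclusion.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {"approve", "deny"}."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome == "approve":
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, reference_group):
    """Each group's approval rate divided by the reference group's rate."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical outcomes: group labels and counts are illustrative only.
data = ([("A", "approve")] * 80 + [("A", "deny")] * 20
        + [("B", "approve")] * 55 + [("B", "deny")] * 45)
print(disparate_impact_ratios(data, reference_group="A"))
# {'A': 1.0, 'B': 0.6875}  -> group B falls below the common 0.8 threshold
```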
Finally, insurers should provide clear, plain-language disclosures that explain how and why they use AI in their coverage decisions. Given the likelihood of lawsuits, these disclosures will be closely scrutinized by regulators and the judiciary.
Did We Mention Humans Remain Key to Successful AI Implementation?
Advancements in AI are rapidly reshaping how insurers evaluate risk, manage claims, and determine coverage. Yet, the foundational principles of the insurance industry—fairness, transparency, and good faith—remain unchanged. Ironically, by investing in human leadership and oversight during adoption and implementation of AI, insurers are well-positioned to comply with existing and future legislation and regulation.
Adding transparency, disclosure, and robust documentation policies and practices into an AI adoption framework allows insurers to leverage AI’s benefits while remaining within evolving legislative and regulatory boundaries.
[1] https://legiscan.com/IL/drafts/HB0035/2025
[2] https://legiscan.com/CA/text/SB1120/2023
[3] https://legiscan.com/CT/text/SB00447/id/3046620
[4] https://legiscan.com/IN/bill/HB1620/2025
[5] https://legiscan.com/MT/text/SB212/2025
[6] https://iid.iowa.gov/media/5108/download?inline
[7] https://www.mass.gov/news/governor-healey-signs-executive-order-establishing-artificial-intelligence-ai-strategic-task-force
[8] https://www.capitol.tn.gov/Bills/114/Bill/SB1261.pdf
[9] https://governor.iowa.gov/press-release/2025-02-10/gov-reynolds-signs-executive-order-establishing-iowa-doge-task-force
[10] https://www.tn.gov/finance/ai-council.html
[11] https://www.ncsl.org/in-dc/task-forces/task-force-on-artificial-intelligence-cybersecurity-and-privacy
[12] https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf