Regulators probe AI oversight in insurance pilot

With artificial intelligence increasingly embedded in insurance decisions, the National Association of Insurance Commissioners has launched a pilot of its AI Systems Evaluation Tool across 12 states, including Colorado.

“What regulators most want is to get smart about how insurers are using AI,” said Scott Kosnoff, a partner at Faegre Drinker Biddle & Reath LLP and co-leader of the firm’s AI-X team.

“The reality is that insurers are using it for underwriting purposes. They’re using it for pricing purposes. They’re using it for claims evaluation purposes and for fraud detection,” he said.

The pilot is primarily assessing the potential for consumer harm, including inaccuracy, discrimination, lack of transparency and privacy risks.

The NAIC is also asking about the risks AI may pose to insurers themselves. “The company’s financial condition could be weakened as a result of its use of AI,” Kosnoff explained, citing impacts on operations and reporting.

Colorado’s Distinct Approach

Unlike most states in the pilot, Colorado did not adopt the NAIC’s 2023 model bulletin, which set out guidelines for insurers’ use of AI. Instead, the state “is focusing on the use of data and less on the use of AI,” Kosnoff said.

Colorado regulates insurers’ use of external consumer data and information sources under legislation passed in 2021.

Even so, participation in the pilot means Colorado insurers can expect scrutiny. Regulators are “going to be asking insurers domiciled in Colorado some of the questions that are spelled out in the pilot,” he said.

A ‘Good Story to Tell’

To demonstrate responsible decision-making, insurers “need to have a good story to tell,” Kosnoff said. “A credible story allows them to show regulators, courts and the public, ‘We appreciate the concerns that have been expressed about AI, we’re taking those concerns seriously and we’re acting with reasonable care to try to avoid negative outcomes.’”

Essential to the story is critical thinking about potential harms, he added. Kosnoff encouraged insurance companies to ask: “What could go wrong, what would be harmed, how bad would it be, how likely is it to occur and what can we do to put guardrails around our use of that AI?”

The Limits of ‘Human in the Loop’

Keeping a human involved in AI-driven decisions, a common safeguard suggested by regulators, comes with its own risks. “Everyone always assumes that having a human in the loop is like a cure,” Kosnoff said. “Having a human in the loop is helpful, but it’s got to be the right human.”

The best fit for the job, in Kosnoff’s view, is someone who is trained, attentive and aware of the system’s limitations, not someone who trusts the technology uncritically. “There is a tendency for the people to get complacent, cut corners, to let their guard down,” he said.

He warned, too, about “model drift,” in which an AI system’s performance degrades over time as real-world conditions diverge from the data it was trained on.

Preparing for What’s Next

Although the pilot is exploratory, it is already shaping expectations.

“If you are an insurance company, you ought to be taking a look at the AI Systems Evaluation Tool and asking yourself the same questions that regulators would,” Kosnoff said.

Compliance is not a one-time exercise: “There’s no finish line,” he said. “The technology’s going to continue to evolve, and the regulatory and legal expectations will continue to evolve.”

The pilot is expected to run through September, after which regulators will determine next steps.
