
The Use of Artificial Intelligence in Insurance Claims Handling

The Matrix is one of my favorite movies of all time. There is a scene in which Morpheus explains to Neo that the birth of AI (Artificial Intelligence) led to a war between man and machine, and that humans are nearly extinct as a result.

Despite the recent hysteria, the concept of machine learning has been around for decades. It involves supplying large amounts of data to a system and setting parameters for how you want the system to use that data. In 1957, an IBM engineer named Alex Bernstein created one of the world's first fully automated chess programs. In 1963, Stanford University opened its Stanford Artificial Intelligence Laboratory. By 1997, IBM's Deep Blue, a supercomputer, had defeated then-World Chess Champion Garry Kasparov.

Though AI has technically been around for a while, technological breakthroughs such as "deep learning" and "large language models" have put AI on everyone's radar. If you haven't yet checked out what ChatGPT can do, you should try it. It is truly astounding.

But how would you feel if your insurance company relied on AI to determine whether or not it has paid you what is owed on your claim? For many, the answer is: not good. Cigna, one of the largest health insurance companies in the country, used its "PxDx tool" to automatically deny patient claims. The PxDx tool was an algorithm that would review patient files and make determinations based on criteria set by Cigna. As reported in a ProPublica article, the PxDx tool allowed Cigna medical directors to automatically deny a claim purportedly on medical grounds without making a medical-necessity determination or even opening the patient file. Internal documents also indicated Cigna believed 95% of people would never challenge the denial. One of Cigna's former officials was quoted as saying, "We thought it might fall into a legal gray zone... We sent the idea to legal, and they sent it back saying it was OK." In other words, Cigna knew the practice was potentially wrong, but it went forward anyway because the business risk was worth it. Other insurance companies implement similar practices as well.

The old saying "garbage in, garbage out" will always hold true for AI (at least until it actually becomes sentient). By that I mean AI will only be as good as the rules (algorithms) its creator gives it. For example, if an AI program an insurance company decides to use is "taught" incorrectly about whether property is damaged or whether something should be covered, it is flawed, and that flaw will lead to many wrongly denied claims. Even more concerning is that insurance companies may deploy such tools anyway, just as Cigna did, because most people won't challenge their claim denial. According to one survey, more than 60% of insurance companies' senior executives indicated their company was investing at least $10 million per year in AI. It seems we are going to find out one way or another, and soon.

The department of insurance in each state needs to keep a close eye on how insurance companies are using, and intend to use, AI, particularly in the handling and adjustment of claims.

“There Is A Difference Between Knowing The Path And Walking The Path.” - Morpheus

Aaron J. Arenas