By Max Dorfman, Research Writer
A couple of articles crossed our desk recently that discussed the benefits and pitfalls of algorithms and artificial intelligence (AI). Neither discussed insurance, but they offered important lessons for the industry.
An algorithm is a simple set of instructions for a computer. Artificial intelligence is a group of algorithms that can modify and create new algorithms as it processes data. Broadly, these smart technologies can drive untold change for the industry.
As the Financial Times wrote earlier this year, “Insurance claims are, by their nature, painful processes. They happen only when something has gone wrong and they can take months to resolve.”
Chinese insurer Ping An uses AI to accelerate decision making, and New York-based insurance start-up Lemonade employs algorithms and AI to help pay clients more quickly. Other insurers use smart technologies for fraud detection, risk management, marketing, and other functions.
What could go wrong?
Algorithms and AI can work quickly, but they aren’t perfect. A recent article by Osonde A. Osoba, an information scientist and professor with the RAND Corporation, details what data scientists call an “algorithm audit” – a review and test of an algorithm and its underlying data to detect biases or blind spots that skew results.
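To make the idea concrete, here is a minimal sketch of one common audit check: comparing a model’s approval rates across demographic groups and flagging large gaps. The data, function names, and the “four-fifths” threshold are illustrative assumptions, not Osoba’s methodology.

```python
def approval_rate(decisions):
    """Fraction of applicants approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_by_group):
    """Ratio of the lowest group approval rate to the highest.

    Values below 0.8 are a conventional red flag in fairness audits
    (the so-called 'four-fifths' rule)."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return min(rates) / max(rates)

# Hypothetical audit data: model decisions recorded for two groups.
decisions = {
    "group_a": [True, True, True, False, True],    # 80% approved
    "group_b": [True, False, False, True, False],  # 40% approved
}

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("flag: review the training data and features for bias")
```

An audit like this doesn’t explain *why* the gap exists; it only tells the auditor where to start digging into the underlying data.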
In the case Osoba discusses, Apple Card was assailed on Twitter by tech executive David Heinemeier Hansson for giving him a credit limit 20 times larger than his wife’s, even though the couple shares all of their assets. Hansson concluded that the algorithm was sexist – causing a furor on the platform among those who vehemently agreed with him and those who vehemently disagreed.
Apple said it doesn’t collect information about applicants’ gender or marital status. Yet no one from Apple could explain why Hansson received a significantly higher credit limit. The response: “Credit limits are determined by an algorithm.”
Still, these algorithms and AI are informed by something – perhaps the implicit biases of the programmers. For example, systems using facial recognition software have yielded decisions that appear biased against darker-skinned women.
Are algorithms easier to fix than people?
An article in The New York Times by Sendhil Mullainathan, a professor of computation and behavioral science at the University of Chicago, discusses human and algorithmic biases. He cites a study in which he and his co-authors examined an algorithm commonly used to determine who requires extra levels of health care services – an algorithm that has affected approximately 100 million people in the U.S. The study found that black patients were routinely rated as lower risk. But the algorithm was inherently flawed: it used past health care spending as a proxy for health need.
Black patients already spend less money on health care than white patients with the same chronic conditions, so the algorithm only served to reinforce this bias. Indeed, without the algorithmic bias, the study estimated that the number of black patients receiving extra care would more than double. Yet Mullainathan believes that the algorithm can be fixed fairly easily.
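The proxy problem described above can be sketched in a few lines: if a model ranks patients by past spending rather than by actual health need, a group that spends less for the same conditions is systematically under-selected for extra care. The patient records and numbers below are invented purely for illustration; they are not from the study.

```python
patients = [
    # (group, chronic_conditions, annual_spending) -- hypothetical records
    ("white", 3, 9000),
    ("white", 2, 7000),
    ("black", 3, 6000),   # same conditions as the first patient, lower spending
    ("black", 2, 4500),
]

def top_half(records, key_index):
    """Select the half of patients ranked highest on the given column."""
    ranked = sorted(records, key=lambda p: p[key_index], reverse=True)
    return ranked[: len(ranked) // 2]

by_spending = top_half(patients, key_index=2)  # the flawed proxy
by_need = top_half(patients, key_index=1)      # what the model should target

print([p[0] for p in by_spending])  # ['white', 'white'] -> only white patients selected
print([p[0] for p in by_need])      # ['white', 'black'] -> selection tracks need
```

The fix Mullainathan has in mind is of exactly this kind: swap the proxy label (spending) for a measure closer to the real target (need), and the disparity in who gets selected shrinks.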
Contrast this with a 2004 study Mullainathan conducted. He and his co-author responded to job listings with fabricated resumes: half the time they sent resumes with distinctively black names, and the other half with distinctively white names. Resumes with black names received far fewer responses than those with white names.
This bias was verifiably human and, therefore, much harder to pin down and fix.
“Humans are inscrutable in a way that algorithms are not,” Mullainathan says. “Our explanations for our behavior are shifting and constructed after the fact.”
Don’t write algorithms off
As RAND’s Osoba writes, algorithms and AI “help speed up complex decisions, enable wider access to services, and in many cases make better decisions than humans.” It’s the last point that deserves particular attention: while algorithms can reproduce and intensify the biases of their programmers, they don’t possess inherent prejudices the way people do.
As Mullainathan puts it, “Changing algorithms is easier than changing people: software on computers can be updated; the ‘wetware’ in our brains has so far proven much less pliable.”