
Insurers turn to artificial intelligence in war on fraud


Machine learning is helping the insurance industry flag suspicious claims–and even crawl through social media accounts to find fraud.

By Steven Melendez  | June 26, 2018

From bogus claims to shady brokers, insurance fraud costs companies and their customers more than $40 billion a year, the FBI estimates. And that’s excluding medical insurance fraud, which is estimated by industry groups to cost tens of billions more.

The staggering level of criminality costs us all, adding $400 to $700 a year to the premiums we pay for our homes, cars, and healthcare, the feds say. There are simply not enough investigators to put a significant dent in the problem, so the industry is turning to the machines.

Using artificial intelligence to pick out inconsistencies and unusual patterns has quickly become standard for insurance companies, whether they’re looking for sophisticated rings of fraudsters rigging auto accidents or just individuals embellishing how much their damaged property was worth.

“You can’t not do it–this is kind of part and parcel of modern insurance,” says Jim Guszcza, U.S. chief data scientist at Deloitte Consulting. “You can’t not have machine learning and predictive analytics for claims.”
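At its simplest, the claim triage Guszcza describes amounts to outlier scoring: claims whose attributes sit far outside the norm get routed to a human investigator. A minimal, illustrative sketch in Python (the claim data, features, and threshold here are invented for the example; real insurers use far richer models than a z-score):

```python
from statistics import mean, stdev

def flag_suspicious(claims, threshold=1.5):
    """Flag claims whose amount is an outlier versus the book.

    A toy z-score triage: anything more than `threshold` standard
    deviations above the mean payout is routed to a human reviewer.
    (Production systems would use robust statistics and many more
    features than the dollar amount alone.)
    """
    amounts = [c["amount"] for c in claims]
    mu, sigma = mean(amounts), stdev(amounts)
    flagged = []
    for c in claims:
        z = (c["amount"] - mu) / sigma if sigma else 0.0
        if z > threshold:
            flagged.append(c["id"])
    return flagged

claims = [
    {"id": "C1", "amount": 1200},
    {"id": "C2", "amount": 1350},
    {"id": "C3", "amount": 1100},
    {"id": "C4", "amount": 980},
    {"id": "C5", "amount": 24000},  # an embellished property claim?
]
print(flag_suspicious(claims))  # → ['C5']
```

The uncontroversial claims never reach a human, which is how a "no-touch" pipeline can pay out in seconds while still surfacing the odd one for review.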

Among the companies harnessing the power of data is Lemonade, a New York home and renters’ insurance startup founded in 2015 by two tech veterans. CEO Daniel Schreiber says the data-driven approach often lets Lemonade evaluate and pay out claims substantially quicker than many traditional insurers.

In about one third of cases, claims can be approved and paid out essentially instantly by the company’s algorithms, he says. “Even if a human is involved, it’s radically quicker.”

Humans can also still review claims after they’ve been paid, checking up on and improving the automated processes, he says. That way, they can teach the algorithms what to be suspicious of in a claim–just as the machines can highlight suspicious factors they might miss.

“We’re finding that our claims department is responding much faster because it’s now competing with an algorithm,” he says.

For many insurance companies, it’s not so much a competition but a way to more effectively triage claims to let humans dive deep into the ones that need more examination.

“Part of the zeitgeist among insurers today is low-touch or no-touch claims processing,” says James Quiggle, director of communications at the Coalition Against Insurance Fraud, an industry group. “More and more insurance companies are looking toward machines to help deal with often basic scams, thereby freeing up investigators for more complicated aspects of investigations that only humans can handle.”

Sophisticated AI tools can also spot complex patterns of fraud–like groups of connected people filing similar claims, perhaps with overlapping networks of doctors or lawyers, about injuries from deliberately staged car accidents.

“Data crunching can sift through a hugely confusing pile of information, make sense of it, piece it all together in a way that investigators can clearly see and thus create an aha moment where the entire ring is outlined in graphic terms on a computer screen,” says Quiggle. “The whole scam–the whole organization–is minutely outlined in graphic terms that might have taken months to analyze with humans acting alone.”
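The link analysis Quiggle describes boils down to graph clustering: treat claims as nodes, link any two claims that share a doctor or lawyer, and look for suspiciously large connected components. A hypothetical sketch (the claim IDs, provider names, and size cutoff are invented for illustration):

```python
from collections import defaultdict

def find_rings(claims, min_size=3):
    """Group claims into connected components via shared providers.

    `claims` maps a claim id to the set of doctors/lawyers on it.
    Two claims are linked if they share a provider; components with
    `min_size` or more claims are candidate fraud rings.
    """
    # Map each provider to the claims that used them.
    by_provider = defaultdict(set)
    for claim_id, providers in claims.items():
        for p in providers:
            by_provider[p].add(claim_id)

    # Union-find over claim ids to merge linked claims.
    parent = {c: c for c in claims}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for linked in by_provider.values():
        linked = sorted(linked)
        for other in linked[1:]:
            union(linked[0], other)

    components = defaultdict(set)
    for c in claims:
        components[find(c)].add(c)
    return [sorted(group) for group in components.values()
            if len(group) >= min_size]

claims = {
    "A1": {"Dr. Smith", "Atty. Jones"},
    "A2": {"Dr. Smith"},
    "A3": {"Atty. Jones", "Dr. Lee"},
    "B1": {"Dr. Patel"},  # an unrelated claim
}
print(find_rings(claims))  # → [['A1', 'A2', 'A3']]
```

Individually, none of the three A-claims looks remarkable; it is only the shared doctor and lawyer, made visible as one component, that outlines the ring.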

BIG BROTHER IS WATCHING

Part of the equation is that ever more data is now available to investigators and their digital assistants, from public social media posts (people with purported severe injuries posting pictures from the softball field) to license plate readings and even Fitbit records.

Hanzo, a web archiving and analysis firm with offices in the U.S. and U.K., develops software that insurers can use to pull and sift through data from social media, marketplace sites like eBay and Craigslist, and elsewhere on the web in researching claims.

“Anything you can see in a browser we can effectively collect,” says Keith Laska, the company’s chief commercial officer. “The web-crawling technology then spiders down and tries to sift through thousands of pages of content to find the relevant information.”

That could be evidence that an insurance customer claiming items were stolen from a car listed similar items on a classified site or checked in on social media to locations far from the scene of the crime when the burglary was said to take place.
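A crude version of that cross-referencing is fuzzy string matching between the items a customer claims were stolen and listings scraped from a classified site. An illustrative sketch using only Python's standard library (the listings data and the similarity cutoff are invented, not Hanzo's actual method):

```python
from difflib import SequenceMatcher

def suspicious_listings(stolen_items, listings, cutoff=0.6):
    """Pair claimed-stolen items with similar marketplace listings.

    Returns (item, listing, score) tuples where a listing title
    resembles a reportedly stolen item closely enough to warrant
    a human look. Scores come from difflib's ratio(), which ranges
    from 0.0 (no overlap) to 1.0 (identical strings).
    """
    matches = []
    for item in stolen_items:
        for title in listings:
            score = SequenceMatcher(None, item.lower(), title.lower()).ratio()
            if score >= cutoff:
                matches.append((item, title, round(score, 2)))
    return matches

stolen = ["Canon EOS 80D camera", "MacBook Pro 15-inch"]
listings = ["canon eos 80d camera, barely used", "vintage bicycle"]
for m in suspicious_listings(stolen, listings):
    print(m)
```

A match is not proof of fraud, of course; it is a lead, which is why these systems hand flagged pairs to a human investigator rather than denying the claim outright.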

Inevitably, experts say, increased use of machine learning and disparate sources of data will raise questions about privacy that could lead to the development of industry standards or regulation.

For instance, Quiggle says, people might look askance at a worker’s compensation insurer flying a drone over an injury victim’s backyard, hoping to find evidence that the person is in better shape than claimed.

“You think this person is doing fun stuff when he or she should be flat on his or her back,” he says. “Where can you go and where can’t you go to try to pierce that person’s privacy veil? Can you fly over that person’s backyard?”
