Only 17% of consumers say that they’d be comfortable with the idea of their home, renters, or auto insurance claims being reviewed exclusively by AI. That’s according to a new survey commissioned by Policygenius, which also found that 60% of consumers would rather switch insurance companies than let AI review their claims.
The results point to a general reluctance to trust AI systems — particularly “black box” systems that lack an explainability component. For example, just 12% of respondents to a recent AAA survey said they’d trust riding in an autonomous car. High-profile failures in recent years haven’t instilled much confidence, either, with AI-powered recruitment tools showing bias against women, algorithms unfairly downgrading students’ grades, and facial recognition leading to false arrests.
In the insurance domain, the survey suggests that people, particularly drivers and homeowners, are wary of sacrificing privacy even if doing so nets them policy discounts. More than half (58%) of auto insurance customers told Policygenius that no amount of savings would make it worth using an app that collects data about their driving behavior and location. And just one in three (32%) respondents said that they’d be willing to install a smart home device that collects personal data, such as a doorbell camera, water sensor, or smart thermostat.
“We’re seeing home and auto insurers integrate various data collection and analysis technology into policy distribution, pricing, and claims, but it’s clear consumers aren’t readily willing to trade personal data, or give up the human touch for marginal savings,” Policygenius property and casualty insurance expert Pat Howard said in a press release.
Importance of explainability
In a recent report, McKinsey predicted that insurance will shift from its current state of “detect and repair” to “predict and prevent,” transforming every aspect of the industry in the process. As AI becomes more deeply integrated in the industry, carriers must position themselves to respond to the changing business landscape, the firm wrote, while insurance executives must understand the factors that will contribute to this change.
Policygenius has a horse in this race — it’s an online insurance marketplace. But its survey is salient in light of efforts by the European Commission’s High-Level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” Explainability continues to present a major hurdle for companies adopting AI: according to FICO, 65% of employees can’t explain how AI model decisions or predictions are made.
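To make the idea concrete, one common approach to explainability is feature attribution, which breaks a single model decision down into per-feature contributions. The sketch below is purely illustrative: it trains a toy claims classifier on synthetic data and explains one decision with the open-source SHAP library. The feature names, model, and data are hypothetical, not drawn from any insurer’s actual system.

```python
# Illustrative only: a toy claims model explained with SHAP.
# Feature names and data are hypothetical, not any insurer's.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["claim_amount", "policy_age_years", "prior_claims", "days_to_file"]

# Synthetic data standing in for historical claims.
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes the model's output to each input feature,
# turning an opaque yes/no decision into a per-feature breakdown.
explainer = shap.TreeExplainer(model)
claim = X[:1]  # explain the first (synthetic) claim
attributions = explainer.shap_values(claim)[0]

for name, value in sorted(zip(feature_names, attributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.3f}")
```

In a claims workflow, a per-feature breakdown like this could accompany each automated decision, giving an adjuster or a policyholder something more informative than a bare yes or no.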
Not every expert is convinced that AI can become truly “trustworthy.” But researchers like Manoj Saxena, who chairs the Responsible AI Institute, a consultancy firm, assert that “checks” can ensure awareness of the contexts in which AI will be used and where it could create biased outcomes. By engaging product owners, risk assessors, and users (for example, insurance policyholders) in conversations about AI’s potential flaws, organizations can create processes that expose, test, and fix these flaws.
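One such check can be quite simple. The hypothetical sketch below illustrates the kind of test Saxena describes rather than any specific Responsible AI Institute methodology: it compares a model’s claim-approval rates across policyholder groups and flags large gaps for human review. The group labels, predictions, and 5% threshold are all assumptions for illustration.

```python
# Hypothetical bias check: compare claim-approval rates across groups.
import numpy as np

def approval_rate_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Illustrative model outputs (1 = claim approved) and group labels.
rng = np.random.default_rng(1)
preds = rng.integers(0, 2, size=500)
groups = rng.choice(["A", "B", "C"], size=500)

gap = approval_rate_gap(preds, groups)
print(f"approval-rate gap: {gap:.3f}")
if gap > 0.05:  # threshold is an assumed policy choice, set by risk assessors
    print("flag for review: outcomes differ materially across groups")
```

A check like this only surfaces a disparity; deciding whether the gap reflects bias, and what to do about it, is exactly where the human conversations Saxena describes come in.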
For the insurance market specifically, the Dutch Association of Insurers (DAI) offers a possible model for adopting AI responsibly. The organization’s Ethical Framework for the Application of AI in the Insurance Sector, which became binding in January, requires companies to consider how best to explain outcomes from AI or other data-driven apps to customers before those apps are deployed.
“Human governance is hugely important; there can’t be total reliance on technology and algorithms. Human involvement is essential to continuous learning and responding to questions and dilemmas that will inevitably occur,” DAI general director Richard Weurding told KPMG, which worked with DAI on an educational campaign around the framework’s rollout. “Companies want to use technology to build trust with customers, and human involvement is critical to achieving that.”
Responsible AI practices can also deliver major business value. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them. For companies that don’t approach the issue thoughtfully, there’s both a reputational risk and a direct impact on the bottom line, according to Saxena.
“[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact,” Saxena told VentureBeat in a recent interview. “[They also need to] invest more to ensure members who are designing the systems are diverse.”