Generative AI: A Balance Between Innovation and Privacy

In this blog, we'll discuss the balance between innovation and privacy guardrails when applying generative AI to insurance fraud detection.

By Carla Rodriguez | Jun. 20, 2024 | 5 min. read


Generative AI has long been praised as a tool for extending creativity and improving the efficiency of how people do their jobs. However, many high-value, vulnerable sectors are concerned about the potential for misuse of such advancements.

One example of generative AI that raises these concerns is OpenAI’s Voice Engine. According to Forbes, OpenAI revealed the tool on Friday: an AI generator that uses a text sample and a 15-second audio sample to create “natural-sounding speech that closely resembles the original speaker.” Still, the company said it will not be publicly available yet due to the “potential for synthetic voice misuse,” as it was bombarded with concerns about deepfakes being used in the upcoming election. We’ll get into more detail about this later.


The Pros and Cons

First, let’s talk about the good stuff. Generative AI can transform the way claims are processed. Think about the time saved by automating routine tasks like document analysis, data entry, and even preliminary assessments. Adjusters can then focus on more complex aspects of claims, where human judgment is crucial.

Moreover, AI can help identify patterns and anomalies that might be missed by the human eye, potentially flagging fraudulent claims before they become a problem. This proactive approach can save you money and resources.


Enhanced Data Utilization:

Generative AI can unlock value from large amounts of unstructured data across marketing, underwriting, claims, and control functions. This allows insurers to derive new insights, create customer segments, and develop hyper-personalized offerings, ultimately transforming consumer relationships.

Process Automation:

By automating routine tasks such as data entry, document classification, and claims processing, gen AI can cut in half the time it takes to write new business or submit a claim. The less time spent on mundane tasks, the more time and profit is freed up.
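As a rough illustration of the document-classification piece, here is a minimal rules-based sketch. The queue names and keywords are assumptions for the example, not any insurer's real taxonomy, and a real deployment would use a trained model or an LLM rather than keyword matching:

```python
# Illustrative routing rules -- queues and keywords are assumptions
# for this sketch; a production system would use a trained classifier.
ROUTING_RULES = {
    "auto": ["collision", "vehicle", "windshield"],
    "property": ["roof", "flood", "fire damage"],
    "health": ["hospital", "diagnosis", "treatment"],
}

def classify_claim_document(text):
    """Route an incoming claim document to a work queue by keyword match."""
    lowered = text.lower()
    for queue, keywords in ROUTING_RULES.items():
        if any(kw in lowered for kw in keywords):
            return queue
    return "manual_review"  # no match: fall back to a human adjuster
```

For example, a document mentioning a vehicle collision would land in the auto queue automatically, while anything unrecognized stays with a human adjuster.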

Fraud Detection and Risk Assessment:

Generative AI excels at identifying patterns and anomalies in data, making it a powerful tool for detecting fraud and assessing risks. This enables insurers to make better underwriting decisions, prevent fraudulent claims, and reduce claims leakage. It can one day be as easy as uploading a claim and getting recommendations on the next course of action.
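A toy sketch of the anomaly-detection side of this: flagging claims whose amounts are statistical outliers using a z-score check. The field names and threshold are assumptions for illustration; production fraud systems use far richer features and models:

```python
from statistics import mean, stdev

def flag_anomalous_claims(claims, threshold=3.0):
    """Return IDs of claims whose amount is more than `threshold`
    standard deviations from the mean -- a simple z-score outlier check."""
    amounts = [c["amount"] for c in claims]
    mu, sigma = mean(amounts), stdev(amounts)
    flagged = []
    for c in claims:
        z = (c["amount"] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append(c["claim_id"])
    return flagged
```

On a book of mostly routine $1,000 claims with a single $50,000 claim, only the outlier would be flagged for an adjuster's review.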



The Role of Consumers in AI


Incorporating AI to combat deepfakes is not just about staying ahead of fraudsters; it’s about embracing the future of technology in a way that enhances the future of automated claims processing.

“AI tools give us the ability to do anything.” – Mira Murati, CTO of OpenAI

According to WSJ, consumers in the insurance industry yearn for the digital experiences offered by other sectors such as e-commerce.

But, there are some important things to consider. Insurers need to be cautious about relying too much on AI, especially with regulatory concerns and questions about how reliable AI-generated decisions really are.

Insurers deal with a lot of sensitive data, so it’s crucial to make sure it’s protected properly. If not, it could end up causing more problems than it solves.

Also, the strength of insurers’ AI and analytics depends heavily on the quality of the data they’re using. In the case of Insurtechs, they’re building AI models based on years of consumer and company data. But, because this data is so sensitive, there’s a bit of hesitation.

So, it’s all about finding the right balance between AI and human judgment, especially when it comes to overseeing the quality of the data being used.

Watch Joanna Stern from the Wall Street Journal ask OpenAI’s CTO all the hard questions:


AI’s Biggest Obstacle in Insurance

The European Union seems to be paving the way for regulators around the world. The EU is implementing a system that assigns rules according to the level of risk an AI system poses.

Unacceptable-risk AI systems are those considered a threat to people and will be banned.

They include:

  • Cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children.
  • Social scoring, or classifying people based on behavior, socioeconomic status, or personal characteristics.
  • Both real-time and remote biometric identification and categorization of people such as facial recognition.

AI systems that negatively affect safety or fundamental rights will be considered high-risk and divided into two categories:

  • AI systems used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices, and lifts.
  • AI systems falling into specific areas that will have to be registered in an EU database:

  1. Management and operation of critical infrastructure
  2. Education and vocational training
  3. Employment, worker management and access to self-employment
  4. Access to and enjoyment of essential private services and public services and benefits
  5. Law enforcement
  6. Migration, asylum and border control management
  7. Assistance in legal interpretation and application of the law.

Generative AI models such as ChatGPT, though not classified as high-risk, must adhere to transparency requirements and EU copyright law. This includes disclosing AI-generated content, designing models to prevent illegal content generation, and publishing summaries of copyrighted data used for training.


What is the main concern?

The main concern is “unfair discrimination”: insurers and states worry about bias creeping into AI-driven insurance algorithms and underwriting. Colorado is taking the lead in creating AI regulations, acknowledging that states have been the main regulators of insurers since the 19th century.

Interested to learn more about the effects of AI in insurance? Check out one of our favorite learning center blogs!