AI vs. AI: Deepfake Creators vs. Investigators

This blog explores how deepfake technology is being used to commit insurance fraud and the challenges it creates for the industry. It also highlights AI tools and practical steps insurers can use to detect and prevent fraudulent claims.

By Caroline Caranante | Jun. 16, 2025 | 5 min. read

Artificial intelligence is reshaping the way we live, work, and investigate fraud. While AI tools have helped insurers improve claims processing and detect suspicious behavior, the same technology is being used in a dangerous way: to create deepfake content.

These fake videos, images, and voice recordings are so realistic that they can pass as real evidence, unless you know what to look for. That’s why insurers and investigators are now fighting back with AI detection tools.

This is the double-edged reality of modern fraud: AI is both a weapon and a shield.

The Deepfake Trend Has Turned into a Real Threat

Deepfake technology has moved beyond viral videos; it is now a growing risk for insurers facing sophisticated evidence tampering. According to a 2024 survey by the Identity Theft Resource Center, 63% of US fraud investigators consider deepfakes one of the top three threats in digital evidence manipulation.

Example:

In 2023, a US man filed for long-term disability benefits after claiming a severe spinal injury. As part of his evidence, he submitted a video of a telehealth consultation showing him in visible pain. Forensic investigators uncovered that the entire video was AI-generated: a deepfake featuring synthetic versions of both the claimant and the doctor. The man had no real injury; he had fabricated the footage to support a fraudulent claim.

How Deepfake Technology Works

Deepfakes are generated using AI algorithms that study real videos, images, and audio to learn how things look and sound. They can then produce realistic but fake content that mimics people, events, or environments, often misleading viewers into believing something false actually happened.

What used to take hours of editing can now be done in minutes using free or inexpensive tools. Anyone with a smartphone and basic tech skills can create a convincing fake.
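For a concrete picture of the approach popularized by face-swap tools, here is a heavily simplified PyTorch sketch of the shared-encoder, per-identity-decoder idea: one encoder learns faces in general, and a separate decoder is trained for each person. The model sizes, random stand-in data, and training loop are illustrative assumptions, not any real tool’s pipeline.

    # Simplified sketch of the face-swap autoencoder idea. Illustrative only:
    # real pipelines add face detection, alignment, adversarial losses, and
    # far larger models.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64px -> 32px
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32px -> 16px
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
        def forward(self, z):
            return self.net(z)

    encoder = Encoder()                          # one encoder for faces in general
    decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
    params = (list(encoder.parameters()) + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.MSELoss()

    faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for aligned face crops
    faces_b = torch.rand(8, 3, 64, 64)

    for step in range(100):
        # Each decoder learns to reconstruct its own person's face.
        loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
                + loss_fn(decoder_b(encoder(faces_b)), faces_b))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The swap: person A's pose and expression, rendered with person B's face.
    swapped = decoder_b(encoder(faces_a))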

Popular tools include:

  • DeepFaceLab: a free, high-quality tool that’s commonly used to swap faces
  • Synthesia: a platform that lets users turn written scripts into videos featuring AI-generated people who speak the words on screen
  • HeyGen: a tool that creates realistic videos of people delivering scripted messages, using AI to match facial expressions and lip movements
  • Zao: a mobile face-swap app that lets users insert their faces into movie scenes in seconds

The quality of these tools has improved so quickly that many deepfakes are nearly impossible to spot with the naked eye.

Turning the Tables with AI

To counter deepfakes, investigators are using AI-powered detection tools—essentially having machines catch what other machines create.

Here are some key AI tools helping investigators spot deepfakes:

  • Sensity AI: monitors online content to detect deepfakes and flags anything that looks suspicious
  • Reality Defender: works with insurers, government agencies, and businesses to catch fake videos, photos, and audio by scanning them in real time
  • Hive Moderation: reviews videos and images to find signs they’ve been altered, helping companies filter out manipulated content before it spreads

These tools are trained on thousands of labeled real and fake examples, which teaches them what to look for: unnatural blinking, inconsistent reflections, or lip movements that don’t match the spoken words.
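Under the hood, most of these detectors follow the same basic pattern: a binary classifier trained on labeled media. The commercial tools above are proprietary, so the PyTorch sketch below is a generic illustration; the model, the random stand-in data, and the 0.9 flagging threshold are all assumptions.

    # Generic sketch of deepfake-detector training: fit a classifier on
    # labeled real/fake frames, then flag high-scoring submissions.
    import torch
    import torch.nn as nn

    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),  # one logit: how "fake" the frame looks
    )
    opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    # Stand-ins for a labeled training set: frames with 0 = real, 1 = fake.
    frames = torch.rand(16, 3, 64, 64)
    labels = torch.randint(0, 2, (16, 1)).float()

    for step in range(100):
        loss = loss_fn(detector(frames), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # At review time, flag any submitted frame that scores above a threshold.
    prob_fake = torch.sigmoid(detector(frames))
    flagged = prob_fake > 0.9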

Impact on Insurance Claims

The shift to digital claims has made things faster but also riskier. Today, claimants can submit photos, videos, and even recorded statements directly from their phones, without meeting with an adjuster.

That opens the door for deepfake abuse in:

  • Auto claims: fake dashcam footage or damage photos
  • Workers’ compensation: AI-generated injury reenactments or fraudulent video testimony
  • Property damage: doctored surveillance footage or drone images

A Capgemini report found that 62% of insurers believe that AI and machine learning can reduce fraud and improve claim accuracy. These tools help flag inconsistencies that human reviewers might miss, especially when it comes to manipulated images or video.

Seeing Isn’t Believing

Traditionally, claims professionals have relied on their experience to flag suspicious activity, but deepfakes don’t always show obvious signs. They’re built to look exactly like the real thing.

This is especially risky when adjusters are working quickly or handling high claim volumes. A fake video might pass through the system and lead to an unnecessary payout.

Steps You Can Take Now to Reduce Risk

The good news is that there are practical steps your team can take to prepare for this new wave of fraud.

  1. Use AI detection tools to automatically scan submitted evidence, saving time and reducing risk.
  2. Train adjusters to spot deepfake red flags like unnatural blinking, odd lighting, or mismatched audio in videos.
  3. Always verify the source by requesting original files and checking metadata for signs of tampering (see the sketch after this list).
  4. Establish a clear digital evidence protocol to track how photos and videos are received, reviewed, and verified.
  5. For high-stakes claims, partner with forensic experts who can provide detailed reports and expert testimony if needed.
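As a starting point for the metadata check in step 3, the sketch below uses the Pillow library to read EXIF data from a submitted image and flag two common warning signs: metadata stripped entirely, and editing software recorded in the file. The filename and red-flag heuristics are hypothetical examples, not a complete forensic test.

    # Pull EXIF metadata from an image and report simple red flags.
    # Requires Pillow (pip install Pillow). Heuristics are illustrative only.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def metadata_red_flags(path: str) -> list[str]:
        flags = []
        exif = Image.open(path).getexif()
        if not exif:
            # Many AI generators and messaging apps strip EXIF entirely.
            return ["no EXIF metadata at all"]
        tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
        software = str(tags.get("Software", ""))
        if any(name in software for name in ("Photoshop", "GIMP")):
            flags.append(f"editing software recorded: {software}")
        if "DateTime" not in tags:
            flags.append("no capture timestamp")
        return flags

    print(metadata_red_flags("claim_photo.jpg"))  # hypothetical file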

 

The use of deepfakes in fraud isn’t a future concern; it’s happening today. In fact, experts predict deepfake-related fraud attempts could double by 2026, as these tools become easier to use and harder to detect.

As a senior fraud analyst at Sensity put it:

“This isn’t just someone swapping a face on TikTok. These are people trying to fabricate evidence and manipulate investigations. And they’re getting better at it.”

The battle between deepfake creators and investigators will only intensify. But with the right combination of technology, training, and process, investigators can stay ahead.

In today’s world, your best defense isn’t just common sense; it’s leveraging AI tools to outsmart fraudsters.

 

Want to stay ahead of deepfake claims? Connect with our experts today. 

 

Check out our sources:

Capgemini Research Institute. Insurance Leaders Optimistic about AI’s Impact on Underwriting Quality and Fraud Reduction but Underwriter Confidence Lags. Capgemini, 17 Apr. 2024, www.capgemini.com/news/press-releases/insurance-leaders-optimistic-about-ais-impact-on-underwriting-quality-and-fraud-reduction-but-underwriter-confidence-lags/

Coalition Against Insurance Fraud. Annual Fraud Report. Coalition Against Insurance Fraud, 2024, www.insurancefraud.org/

Identity Theft Resource Center. 2024 Annual Data Breach Report: Identity Theft Resource Center’s 2024 Annual Data Breach Report Reveals Near‑Record Number of Compromises and Victim Notices. Identity Theft Resource Center, 28 Jan. 2025, www.idtheftcenter.org/post/2024-annual-data-breach-report-near-record-compromises/

National Institute of Standards and Technology. Open Media Forensics Challenge. NIST, Open Media Forensics Challenge, www.nist.gov/itl/iad/mig/open-media-forensics-challenge

Sensity AI. Threat Intelligence Brief: Deepfake Threat Landscape 2023. Sensity AI, 2023, sensity.ai/reports/

 
