Who’s Liable in Self-Driving Car Accidents?
By Caroline Caranante | Jun. 10, 2025 | 6 min. read
What you will find below:
- Who’s Responsible When Self-Driving Cars Crash
- How State Laws and Product Liability Affect Claims
- Challenges with Vehicle Data Access and Privacy
- New Risks of Fraud Involving Autonomous Vehicles
As self-driving cars become more common on the roads, they’re bringing major changes to the insurance world. When a crash happens and no one is physically behind the wheel, the question becomes: who is responsible?
This has major implications. It affects how claims are handled, how policies are written, and how liability is assigned. This blog breaks down the challenges self-driving cars are creating for insurance adjusters, and what to keep an eye on as this technology advances.
Who’s At Fault When No One’s Driving?
Normally, when two cars crash, someone behind the wheel is to blame. Maybe they were speeding, distracted, or ran a red light. However, with self-driving vehicles, the question of liability gets more complicated. Responsibility could fall on the vehicle owner, the manufacturer, the software developer, or even the companies that supply the sensors and hardware.
This creates a whole new way of thinking about liability. Here are some key questions that insurance adjusters may need to ask:
- Was the car in self-driving mode when the crash happened?
- If there was a human driver present, should they have stepped in to take control?
- Did the car’s technology make a mistake?
- Was there a flaw in the way the system was designed?
Example:
If a Tesla operating in Full Self-Driving mode rear-ends another vehicle, the crash might be caused by a software glitch, misread road sign, or delayed response from the driver. Insurance adjusters may need to determine whether the driver had any control or if the car was supposed to handle the situation on its own.
Adding to the complexity, self-driving vehicles tend to be involved in more crashes per mile, on average, than human-driven cars. According to data through 2021, autonomous vehicles experience 9.1 crashes per million miles, compared to 4.1 crashes per million miles for traditional vehicles. That's more than double the rate, roughly 2.2 times as many.
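As a quick sanity check on that comparison, here is a minimal, purely illustrative calculation using the rates cited above (the helper function and names are our own, not from any insurance tool):

```python
# Reported crash rates (crashes per million miles), per the 2021 data cited above.
AUTONOMOUS_RATE = 9.1
TRADITIONAL_RATE = 4.1

def crash_rate_ratio(av_rate: float, human_rate: float) -> float:
    """Return how many times higher the autonomous crash rate is per mile."""
    return av_rate / human_rate

ratio = crash_rate_ratio(AUTONOMOUS_RATE, TRADITIONAL_RATE)
print(f"Autonomous vehicles crash about {ratio:.1f}x as often per mile.")
# → Autonomous vehicles crash about 2.2x as often per mile.
```

That ratio, not the raw counts, is what matters for pricing risk, since it normalizes for how many miles each kind of vehicle actually drives.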
So, even though these cars are marketed as safer, they may still pose higher risks from an insurance standpoint.
Navigating the Legal Grey Area of Self-Driving Crashes
One challenge that insurance companies are facing is that the legal system hasn’t fully caught up with self-driving technology. Right now, there are no national laws in the United States that clearly explain who’s responsible when a self-driving car causes a crash. Instead, the laws vary widely by state.
For example:
- California requires companies testing self-driving cars to report crashes, but it still considers the human driver responsible if they’re behind the wheel, even if the car is driving itself.
- Arizona has allowed fully self-driving vehicles to operate with fewer restrictions, and in many cases, the fault may lie with the manufacturer or technology provider.
- Florida permits self-driving cars to operate with no one in the vehicle but holds the technology itself accountable for obeying traffic laws, which makes fault harder to determine when things go wrong.
Because laws vary from state to state, where a crash happens matters just as much as how it happens. Adjusters should understand the local legal environment to determine who’s responsible.
Additionally, product liability laws can come into play. This means a company is responsible if a defective product causes harm, such as a faulty airbag or brake failure. With self-driving cars, software glitches and poor design choices can fall into this category.
If a crash happens because the car’s software misreads a road sign or fails to avoid a clear obstacle, it might not be the driver’s fault at all. It could be a design flaw, opening the door to product liability claims against the automaker or technology provider. That matters to insurers, since it could change how they design policies and set premiums for drivers of autonomous cars.
Privacy and Data Access
Both traditional and self-driving cars record a lot of driving information through event data recorders—like speed, braking, steering, and more. In self-driving cars, this data often tells the whole story, since the driver may not have been in control when the crash happened.
Under the federal Driver Privacy Act of 2015, that data belongs to the vehicle’s owner or lessee. But in practice, it’s not easy to access:
- Automakers often decline to release data unless legally compelled.
- If no one requests the data quickly, it may be overwritten before it can be retrieved.
- Even when available, the data is often in a format that only the manufacturer can easily read.
Without timely and reliable access to this data, it becomes difficult for adjusters to reconstruct accidents, determine fault, or spot fraud. Until new regulations set clear standards for data access and sharing, adjusters may be working with only part of the picture.
Can Self-Driving Cars Be Used to Stage Accidents?
As quickly as technology evolves, so does the sophistication of fraud. Every new system presents fresh opportunities for exploitation, and self-driving cars are no exception. Researchers have shown that semi-autonomous systems, like Tesla’s Autopilot, can be tricked by manipulated objects. While there aren’t many public real-world examples yet, the potential is there.
Example:
Imagine a driver claims their self-driving car suddenly malfunctioned and rear-ended another car. Later, investigators discover that someone tampered with a traffic sign to confuse the car’s sensors. In more advanced scenarios, someone might even manipulate the vehicle’s software or data to fake a malfunction.
These aren’t typical staged accidents. Insurance adjusters may need to look for digital clues, such as unusual driver behavior, inconsistent logs, or patterns that don’t match up with physical damage.
Technology is evolving, and so are the tactics. As more of the driving is done by computers, adjusters will need to start thinking like digital detectives.
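To make that “digital detective” work a bit more concrete, here is a minimal, hypothetical sketch of the kind of consistency check an adjuster’s tooling might run: comparing a claimant’s reported impact speed against the speed the vehicle itself logged around the moment of the crash. The log format, field names, and tolerance are invented for illustration, not drawn from any real vehicle’s data recorder.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    """One hypothetical telemetry sample from the vehicle's event data recorder."""
    t: float          # seconds before impact (0.0 = moment of impact)
    speed_mph: float  # logged vehicle speed at that moment

def flag_speed_mismatch(claimed_speed_mph: float,
                        log: list[LogEntry],
                        tolerance_mph: float = 5.0) -> bool:
    """Return True if the logged speed at impact differs from the claimed
    speed by more than the tolerance, a possible sign of a staged or
    misreported crash worth a closer look."""
    at_impact = min(log, key=lambda e: abs(e.t))  # sample closest to impact
    return abs(at_impact.speed_mph - claimed_speed_mph) > tolerance_mph

# The claimant says the car "suddenly accelerated" to 45 mph, but the
# log shows it was doing 12 mph at impact.
log = [LogEntry(t=2.0, speed_mph=14.0),
       LogEntry(t=1.0, speed_mph=13.0),
       LogEntry(t=0.0, speed_mph=12.0)]
print(flag_speed_mismatch(45.0, log))  # → True
```

Real investigations are far messier than this, but the principle is the same: the vehicle’s own records either corroborate the human story or they don’t.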
Preparing for the Road Ahead
Self-driving cars are transforming how we live, drive, and think about insurance. The question of “who’s liable when no one’s driving?” doesn’t have one answer. It has many, and they’re buried in lines of code, data logs, and AI decision trees.
For insurance adjusters, the key is to stay informed. The way we handle claims, liability, and fraud is shifting rapidly. The future of risk is already here—and it doesn’t have its hands on the wheel.
Curious about the legal implications of self-driving car accidents? Talk to our claims management experts today.
Check out our sources:
ConsumerShield. Self-Driving Car Accidents: Trends, Statistics, and Risks. ConsumerShield, https://www.consumershield.com/articles/self-driving-car-accidents-trends.
National Highway Traffic Safety Administration. Event Data Recorders. U.S. Department of Transportation, Aug. 2006, https://www.nhtsa.gov/sites/nhtsa.gov/files/edrfinalrule_aug2006.pdf.
“Standing General Order on Crash Reporting.” National Highway Traffic Safety Administration, https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting.
“Tesla Autopilot Accidents: Legal Rights & Liability Explained.” Team Justice, 2025, https://teamjustice.com/tesla-autopilot-accidents-legal-rights-liability/.
Wood Smith Henning & Berman LLP. Navigating Liability in the Age of Autonomous Vehicles: Implications for Insurance Carriers. WSHB, https://www.wshblaw.com/experience-navigating-liability-in-the-age-of-autonomous-vehicle-implications-for-insurance-carriers.