Tesla boss Elon Musk claims the company's Full Self-Driving system is up to 10 times safer than human drivers, but a closer look at the data suggests that claim doesn't stack up.
According to reporting from Electrek, there's currently no publicly available dataset from Tesla that actually supports the "10x safer" figure. Analysts and researchers point out that, at best, the safety advantage remains unproven when you account for how the numbers are being calculated.
A big issue is how Tesla defines a crash. The company typically counts only more severe incidents - like those involving airbag deployment - while official benchmarks include a much broader range of accidents. That means Tesla is effectively comparing two very different datasets, which can make its system appear safer than it might actually be.
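The arithmetic behind that mismatch is easy to sketch. The following toy calculation (all figures invented purely for illustration) shows how counting only severe crashes on one side of a miles-per-crash comparison can inflate the apparent safety multiple:

```python
# Hypothetical illustration: comparing a severe-only crash count against a
# broad benchmark can inflate an apparent safety multiple.
# Every number here is made up for the sketch.

fsd_miles = 100_000_000
fsd_all_crashes = 200        # all incidents, minor ones included
fsd_airbag_crashes = 25      # only airbag-deployment crashes

human_miles = 100_000_000
human_all_crashes = 250      # broad benchmark counts all accidents

def miles_per_crash(miles, crashes):
    return miles / crashes

# Apples-to-apples: all crashes vs all crashes
fair_ratio = (miles_per_crash(fsd_miles, fsd_all_crashes)
              / miles_per_crash(human_miles, human_all_crashes))

# Apples-to-oranges: severe-only count vs broad benchmark
skewed_ratio = (miles_per_crash(fsd_miles, fsd_airbag_crashes)
                / miles_per_crash(human_miles, human_all_crashes))

print(f"fair comparison:   {fair_ratio:.2f}x safer")    # → 1.25x
print(f"skewed comparison: {skewed_ratio:.2f}x safer")  # → 10.00x
```

Same underlying driving, very different headline, purely because of which crashes get counted.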
There are other variables muddying the waters too. Tesla's systems are often used on highways, which are statistically safer than city streets, and the cars themselves are newer and packed with modern safety tech. Strip those factors out, and the gap between FSD and human drivers shrinks significantly.
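The road-mix effect alone can manufacture a large headline number. This sketch (again with invented figures) shows how a system that is exactly as safe as human drivers on every road type can still look roughly twice as safe overall, simply because its miles skew toward highways:

```python
# Hypothetical illustration of confounding by road mix (a Simpson's-
# paradox-style effect). All figures are invented for the sketch.

# Crashes per million miles, by road type
human_rate = {"highway": 1.0, "city": 4.0}
fsd_rate = {"highway": 1.0, "city": 4.0}  # identical on each road type

# ...but the mileage mix differs: FSD miles concentrate on highways
fsd_mix = {"highway": 0.9, "city": 0.1}
human_mix = {"highway": 0.4, "city": 0.6}

def overall_rate(rate, mix):
    """Mileage-weighted average crash rate across road types."""
    return sum(rate[road] * mix[road] for road in rate)

fsd_overall = overall_rate(fsd_rate, fsd_mix)        # 1.3
human_overall = overall_rate(human_rate, human_mix)  # 2.8

print(f"FSD appears {human_overall / fsd_overall:.2f}x safer")  # → 2.15x
# ...despite being no safer on any individual road type.
```

Stratifying by road type, as safety researchers do, removes exactly this kind of artifact.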
Crucially, Tesla doesn't release the detailed data that would allow independent verification: disengagement rates, crash severity breakdowns, or road-type usage. That lack of transparency makes it difficult to properly evaluate how safe the system really is, especially compared to rivals that publish more rigorous, peer-reviewed analyses.
There's also the legal angle. Several lawsuits in the US argue that Tesla's driver-assistance systems may have contributed to crashes, either through system errors or by encouraging over-reliance due to the "Full Self-Driving" branding. That complicates the narrative that the tech simply reduces accidents across the board.