Tesla Model Y first to pass NHTSA ADAS safety tests while agency investigates 3.2M Teslas for FSD crashes



TL;DR

The Trump administration announced the Tesla Model Y is the first car to pass NHTSA’s new driver assistance safety tests. The same agency is investigating 3.2 million Teslas for crashing while using the company’s more advanced system.

 

The Trump administration announced on Wednesday that the Tesla Model Y is the first vehicle to pass NHTSA’s new advanced driver assistance safety tests. The same agency is simultaneously investigating 3.2 million Tesla vehicles for crashing while using the company’s more advanced self-driving system. The announcement celebrates Tesla for passing a test that measures whether a car can detect a pedestrian. The investigation examines whether Tesla’s cars can detect a pedestrian.

The distinction between the two is the distance between what the tests measure and what the technology attempts. The ADAS benchmark evaluates features that are standard equipment on dozens of vehicles from Toyota, Honda, Hyundai, BMW, and others. The investigation covers Tesla’s Full Self-Driving software, which operates at a level of autonomy that the ADAS tests do not assess. The press release and the probe exist in the same agency, issued weeks apart, about the same company.

The tests

The 2026 Model Y passed eight evaluations under NHTSA’s updated New Car Assessment Program. Four are legacy criteria that have been part of the programme for years: forward collision warning, crash imminent braking, dynamic brake support, and lane departure warning. Four are newly added: pedestrian automatic emergency braking, lane keeping assistance, blind spot warning, and blind spot intervention.

The new tests are pass-fail assessments of features that the automotive industry has been shipping as standard or optional equipment for years. Blind spot warning has been available on mainstream vehicles since the mid-2010s. Pedestrian automatic emergency braking is standard on most new cars sold in the United States. Lane keeping assistance is a feature that a 25,000 dollar Honda Civic includes at no additional cost.

The tests do not evaluate Tesla’s Autopilot or Full Self-Driving capabilities. They do not measure how the vehicle performs when operating autonomously. They measure whether the vehicle’s basic safety systems, the features that activate when a human is driving, function correctly. Passing them is necessary. It is not exceptional.

The timing

NHTSA finalised the updated NCAP criteria in late 2024 for implementation in model year 2026. In September 2025, the Trump administration delayed the requirement by one year to model year 2027, after the Alliance for Automotive Innovation, the industry’s main lobbying group, requested more time. Tesla, Rivian, and Lucid are not members of the alliance.

The delay means that most automakers have not yet submitted vehicles for the new tests, not because their cars cannot pass, but because the deadline has been pushed to 2027. Tesla submitted the Model Y voluntarily, ahead of the delayed timeline. It was the only manufacturer to do so. The result is a press release from the Department of Transportation announcing that Tesla is the “first vehicle” to pass tests that other manufacturers were told they did not yet need to take.

The announcement was titled “Trump’s Transportation Department Announces Tesla Model Y Is the First Vehicle to Pass NHTSA’s New ‘Advanced Driver Assistance System’ Tests.” The relationship between the Trump administration and Tesla’s regulatory environment is not incidental to the framing. The department delayed the tests, creating a window in which Tesla could be the only company to submit, then announced the result with the president’s name in the headline.

The investigation

While NHTSA was certifying the Model Y’s basic safety features, its Office of Defects Investigation was escalating a probe into 3.2 million Tesla vehicles equipped with Full Self-Driving software. The engineering analysis, opened in March 2026, covers crashes in which FSD failed to detect common roadway conditions that impaired camera visibility, including glare, fog, and airborne debris.

The agency documented incidents in which vehicles running FSD crossed into opposing lanes, ran red lights, and struck pedestrians. Tesla’s robotaxi service in Austin has been involved in 14 crashes since launching, a rate Electrek calculated at roughly four times that of human drivers. NHTSA said the system “did not detect common roadway conditions that impaired camera visibility and/or provide alerts when camera performance had deteriorated until immediately before the crash occurred.”
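As an aside, the arithmetic behind a crash-rate comparison like Electrek’s is simple to show. The sketch below uses the article’s 14-crash figure, but the mileage and the human baseline are made-up placeholders chosen only to illustrate the shape of the calculation; they are not Electrek’s actual inputs.

```python
# Hypothetical crashes-per-mile comparison. The 14 crashes come from
# the article; the mileage figures below are invented placeholders,
# NOT the real data behind Electrek's estimate.
robotaxi_crashes = 14
robotaxi_miles = 250_000             # placeholder fleet mileage
human_rate = 14 / 1_000_000          # placeholder: human crashes per mile

robotaxi_rate = robotaxi_crashes / robotaxi_miles
ratio = robotaxi_rate / human_rate   # how many times worse the fleet is
print(f"robotaxi crash rate is {ratio:.0f}x the human baseline")
```

With these placeholder inputs the ratio works out to four, matching the multiple the article reports; the real comparison depends entirely on the actual miles driven.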

The engineering analysis is a required step before a potential recall. Tesla has asked for, and received, multiple extensions to submit crash data to the agency. The investigation covers the software that Tesla charges up to 8,000 dollars for and markets under the name “Full Self-Driving,” a name that NHTSA itself has noted does not accurately describe the system’s capabilities.

The levels

The automotive and technology industries classify driver assistance on a scale from Level 0, no automation, to Level 5, full automation with no human oversight required. The ADAS tests that the Model Y passed evaluate Level 1 and Level 2 features: systems that assist the driver but require the driver to remain in control at all times.
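For readers unfamiliar with the scale, the levels the article describes come from SAE J3016, and they can be sketched as a simple lookup. This is an illustrative paraphrase only; the enum names and the supervision cutoff summarize the standard and are not code from any official source.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, paraphrased."""
    NO_AUTOMATION = 0           # human does all driving
    DRIVER_ASSISTANCE = 1       # one assist feature, e.g. adaptive cruise
    PARTIAL_AUTOMATION = 2      # steering + speed assist; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # car drives; human must take over on request
    HIGH_AUTOMATION = 4         # no human needed within a defined domain
    FULL_AUTOMATION = 5         # no human needed anywhere

def requires_constant_supervision(level: SAELevel) -> bool:
    # At Levels 0-2 a human is always driving or supervising;
    # from Level 3 up, the system handles the driving task itself.
    return level <= SAELevel.PARTIAL_AUTOMATION
```

The ADAS benchmark the Model Y passed sits at Levels 1 and 2 on this scale, where `requires_constant_supervision` is true; the Level 4 systems the article mentions are the ones where it is not.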

Tesla’s Full Self-Driving software, the subject of the NHTSA investigation, is classified as Level 2 but attempts driving tasks associated with higher levels of autonomy. Companies like Wayve are targeting Level 4 autonomy, which means the vehicle can operate without human intervention in defined conditions. Wayve raised 1.2 billion dollars to develop autonomous driving systems that do not require a human safety driver.

The gap between Level 2, where a human must always be ready to take over, and Level 4, where the car handles defined conditions independently, is the gap between the ADAS benchmark the Model Y just passed and the Full Self-Driving system that NHTSA is investigating. Uber relaunched Motional’s robotaxi service in Las Vegas with a target of fully driverless operation by the end of 2026, using a system designed from the ground up for Level 4. Tesla is attempting to reach the same destination using cameras, consumer vehicles, and software updates.

The gap

Tesla reclaimed the global quarterly EV sales crown from BYD in the first quarter of 2026, selling 358,000 battery electric vehicles. The company’s market position depends on the perception that its technology leads the industry. The ADAS benchmark contributes to that perception. The FSD investigation complicates it.

The Model Y passing eight safety tests is a data point about a car that can detect a pedestrian in a controlled scenario. The FSD investigation is a data point about the same company’s software failing to detect pedestrians, red lights, and oncoming traffic in the real world. The tests and the investigation measure different things. But they measure the same company’s claim to be the leader in vehicle safety and autonomy.

NHTSA now occupies the position of simultaneously certifying Tesla’s basic safety features and investigating whether its advanced features are safe enough to remain on the road. The press release says Tesla is first. The investigation says Tesla may be defective. Both are true. Neither tells the whole story. The distance between a passed benchmark and an open investigation is the distance between what a car can do when the test is defined and what it does when the road is not.




Recent Reviews


Vibe coding has taken the development world by storm—and it truly is a modern marvel to behold. The problem is, the vibe coding rush is going to leave a lot of apps broken in its wake once people move on to the next craze. At the end of the day, many of us are going to be left with apps that are broken with no fixes in sight.

A lot of vibe “coders” are really just prompt typers

And they’ve never touched a line of code


Vibe coding made development available to the masses like never before. You can simply take an AI tool, type a prompt into a text box, and out pops an app. It probably needs some refinement, but version one is typically still functional.

The problem comes from “developers” who have never written a line of code. They’re just using vibe coding because it’s cool or they think they can make a quick buck, but they really have no knowledge of development—or any desire to learn proper development.

Think of those types of vibe coders as people who realize they can use a calculator and online tools to solve math problems for them, so they try to build a rocket. They might be able to make something work in some way, but they’ll never reach the moon, even though they think they can.

Anyone can vibe code a prototype

But you really need to know what you’re doing to build for the long haul

Even for those who don’t know what they’re doing, vibe coding is a fantastic way to build a prototype. I’ve vibe coded several projects so far, and out of everything I’ve done, I’ve realized one thing: vibe coding is only as good as the person behind the keyboard. I have spent more time debugging the fruits of my vibe coding than actually vibe coding.

Each project that I’ve built with vibe coding could have easily been “viable” within an hour or two, sometimes even less time than that. But, to make something of actual quality, it has always taken many, many hours.

Vibe coding is definitely faster than traditional coding if you’re a one-man team, but it’s by no means fast if you’re after a quality product. The same goes for continued updates.

I’ve spent the better part of three months building a weather app for iPhone. It’s a simple app, but it also has quite a lot of complex things going on in the background.

It recently got released in the App Store—no small feat at all. But, I still get a few crash reports a week, and I’m constantly squashing bugs and working on new features for the app. This is because I’m planning on supporting the app for a long time, not just the weekend I released it, and that takes a lot more work.

Vibe coders often jump from app to app without thinking of longevity

The app was a weekend project, after all


I’ve seen it far too often: a vibe coder touting that they built a “complex app” in 48 hours, as if that were something to celebrate. Sure, it’s cool that a working version of an app was up and running in two days, but how well does it work? How many bugs are still in it? Are there race conditions that cause a random crash?

My weather app has a weird race condition I’m tracking down right now. On occasion, it crashes when opened from Spotlight on an iPhone. Not every time, just sometimes.

If a vibe coder’s only goal is to build apps in short amounts of time so they can brag about how fast they built the app, they likely aren’t going to take the time to fix little things like that.

I don’t vibe code my apps that way, and I know many other vibe coders who aren’t that way either—but we all started with actual coding, not typing a prompt.


Anyone can be a vibe coder, but not all vibe coders are developers

“And when everyone’s super… no one will be.” – Syndrome, The Incredibles. It might be from a kids’ movie, but it rings true in the era of vibe coding. When everyone thinks they can build an app in a weekend, everyone thinks they’re a developer.

But not every vibe coder is actually a developer, and that’s the problem. It’s hard to know whether the app you’re using was built by someone who plans to support it long-term, and that’s why there are going to be a lot of broken apps in the future.

I can see it now: the apps that people built in a weekend as a challenge will simply go without updates. The app might work just fine for the first few weeks or months, until an API update comes along and breaks it. That’s when we’ll see who was vibe coding to build an app versus who was vibe coding for online clout, and the sad part is that, more often than not, consumers will be the ones left with broken apps.


