I remember the optimism of 2015-2016. Every major tech company and car manufacturer seemed to promise self-driving cars within a few years. "Fully autonomous by 2020!" they said.
It's 2025, and I'm still driving my own car. Let me explain what's actually happened and where we really are.
First, some terminology. The industry uses SAE levels 0-5:

Level 0: No automation. The human does everything.
Level 1: Driver assistance. The car handles one task at a time, like adaptive cruise control.
Level 2: Partial automation. The car steers and controls speed together, but the driver must supervise every second.
Level 3: Conditional automation. The car drives itself in limited conditions; the driver must take over when asked.
Level 4: High automation. No driver needed, but only within a defined operating domain.
Level 5: Full automation. Any road, any conditions, no human driver ever.
Most current systems are Level 2 (like Tesla Autopilot, GM Super Cruise). Level 3 exists in limited scenarios. Level 4 is emerging in specific markets. Level 5? Still science fiction.
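To make the distinction concrete, here's a minimal Python sketch (the level descriptions are my paraphrases, not official SAE J3016 text) of the one question that matters most at each level: who is responsible for watching the road.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (paraphrased)."""
    NO_AUTOMATION = 0           # Human does everything
    DRIVER_ASSISTANCE = 1       # One assist at a time (e.g., adaptive cruise)
    PARTIAL_AUTOMATION = 2      # Steering + speed, human supervises constantly
    CONDITIONAL_AUTOMATION = 3  # Car drives; human must take over on request
    HIGH_AUTOMATION = 4         # No driver needed inside a defined domain
    FULL_AUTOMATION = 5         # No driver needed anywhere, ever

def human_must_watch_road(level: SAELevel) -> bool:
    """At Levels 0-2 the human is the fallback every second.
    At Level 3 the human is the fallback only when prompted.
    At Levels 4-5 there is no human fallback at all."""
    return level <= SAELevel.PARTIAL_AUTOMATION

# Tesla FSD and GM Super Cruise are Level 2, so:
assert human_must_watch_road(SAELevel.PARTIAL_AUTOMATION)  # True
```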
Waymo (formerly Google's self-driving project) operates robotaxi services in Phoenix, Arizona and parts of San Francisco. These are Level 4 vehicles—you can ride in them without a safety driver in designated areas.
But they operate under real constraints: good weather only, painstakingly mapped streets, modest speeds, geo-fenced service areas.
GM's Cruise ran a similar robotaxi service in San Francisco until California regulators suspended its permits in late 2023 after a serious pedestrian incident; GM has since wound the program down. The reality: operating robotaxis is harder than expected.
Tesla's "Full Self-Driving" (FSD) is Level 2. The driver must monitor at all times. It's impressive in many situations but can make dangerous mistakes. Several fatalities have occurred involving Tesla's Autopilot system.
Baidu in China, Mobileye (Intel), and various startups are working on robotaxis and autonomous trucking. Progress is being made, but slowly.
Here's what makes self-driving so difficult:
Most driving situations are easy—straight roads, clear weather. But the rare edge cases are what cause accidents. A child chasing a ball into the road. An unusual construction zone. An unexpected obstacle.
These rare situations are hard to anticipate and train for. The "long tail" of edge cases is the challenge.
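A rough back-of-the-envelope calculation shows why the long tail is so punishing. All the numbers below are illustrative assumptions, not real data, but the shape of the problem is right:

```python
# Back-of-the-envelope: how far must a fleet drive to observe
# a rare edge case enough times to train and test against it?
# All numbers are illustrative assumptions, not real data.

def miles_to_observe(event_rate_per_mile: float, needed_examples: int) -> float:
    """Expected miles driven to see `needed_examples` occurrences
    of an event that happens at `event_rate_per_mile`."""
    return needed_examples / event_rate_per_mile

# Suppose one specific edge case (a child darting out from between
# parked cars, say) occurs once per 10 million miles, and we want
# 1,000 examples for training and validation.
rate = 1 / 10_000_000
examples = 1_000

miles = miles_to_observe(rate, examples)
print(f"Expected fleet miles needed: {miles:,.0f}")  # 10,000,000,000

# A hypothetical 1,000-car fleet at 100,000 miles/year each:
years = miles / (1_000 * 100_000)
print(f"That's ~{years:.0f} years of driving for that one edge case")  # ~100
```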
Computer vision isn't perfect. Rain, snow, fog, bright sunlight: these all degrade performance. LiDAR helps but has its own limitations, including cost and degraded returns in heavy precipitation.
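This is why cars carry multiple sensor types: when one modality degrades, another can compensate. Here's a deliberately simplified late-fusion sketch; the weights and weather penalties are invented for illustration and don't come from any real stack:

```python
# Toy late fusion: combine detection confidences from camera and
# LiDAR, downweighting each sensor in conditions that degrade it.
# All weights and penalties are invented for illustration.

DEGRADATION = {
    # condition: (camera_weight, lidar_weight)
    "clear": (1.0, 1.0),
    "heavy_rain": (0.5, 0.6),     # both suffer; LiDAR scatters off droplets
    "fog": (0.3, 0.7),
    "low_sun_glare": (0.4, 1.0),  # glare blinds cameras, not LiDAR
}

def fused_confidence(camera_conf: float, lidar_conf: float,
                     condition: str) -> float:
    """Weighted average of per-sensor confidences, with weights
    reduced by how much the current conditions degrade each sensor."""
    cam_w, lidar_w = DEGRADATION[condition]
    return (camera_conf * cam_w + lidar_conf * lidar_w) / (cam_w + lidar_w)

print(fused_confidence(0.9, 0.8, "clear"))          # 0.85
print(fused_confidence(0.9, 0.8, "low_sun_glare"))  # ~0.83, leaning on LiDAR
```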
Other drivers, pedestrians, and cyclists behave unpredictably. Aggressive drivers, jaywalkers, hand signals—humans are complex.
When an accident is unavoidable, what should the car do? This raises ethical questions that are hard to solve.
Who's liable when an autonomous car crashes? How should these cars be certified? Regulations are still being developed.
Don't get me wrong: progress has been real. What's available now is impressive:

Adaptive cruise control that handles stop-and-go traffic.
Lane centering that holds the car's position on the highway.
Automatic emergency braking that reacts faster than a human can.
Blind-spot monitoring and cross-traffic alerts.
Hands-free highway driving in systems like GM's Super Cruise and Ford's BlueCruise.
These systems already save lives. ADAS (Advanced Driver Assistance Systems) features have measurably reduced crashes; automatic emergency braking alone has been shown to cut rear-end collisions substantially.
Why did everyone get it so wrong? Several reasons:
Seeing the world (perception) is different from understanding it (reasoning). Early researchers underestimated this gap.
99% accuracy sounds great until you scale it up: 1% of a billion miles is ten million miles of mistakes.
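The arithmetic is worth doing explicitly. A quick sketch, assuming (purely for illustration) one safety-relevant decision per mile driven:

```python
# Why "99% accurate" is nowhere near good enough.
# Simplifying assumption for illustration: one safety-relevant
# decision per mile driven.

us_annual_vehicle_miles = 3_000_000_000_000  # ~3 trillion, rough US figure
accuracy = 0.99

error_miles = us_annual_vehicle_miles * (1 - accuracy)
print(f"Error-miles per year at 99% accuracy: {error_miles:,.0f}")
# 30,000,000,000 -- thirty billion error-miles per year.

# Even on "just" a billion fleet miles:
fleet_miles = 1_000_000_000
print(f"Errors in a billion miles: {fleet_miles * (1 - accuracy):,.0f}")
# 10,000,000 -- ten million mistakes.
```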
A million miles of testing reveals the common patterns. Billions of miles of real-world driving reveal the rare ones. You can't test for everything.
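There's also a statistical version of this problem: proving a system is safe takes far more miles than building it. A sketch using the standard "rule of three" from statistics, taking the widely cited rough US baseline of about one traffic fatality per 100 million miles:

```python
import math

# "Rule of three": if you observe zero failures in n independent
# trials, the 95% upper confidence bound on the failure rate is
# roughly 3/n. So to *demonstrate* a failure rate below p, you
# need about 3/p failure-free trials (here, miles).

human_fatality_rate = 1 / 100_000_000  # ~1 per 100M miles, rough US figure

miles_needed = 3 / human_fatality_rate
print(f"Failure-free miles to show parity with humans: {miles_needed:,.0f}")
# 300,000,000 -- and that only shows parity, with zero observed fatalities.

# Exact version: solve (1 - p)^n = 0.05 for n
n_exact = math.log(0.05) / math.log(1 - human_fatality_rate)
print(f"Exact: {n_exact:,.0f} miles")  # roughly 300 million
```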
People overestimate their attention. Level 2 systems create a false sense of security. Drivers stop paying attention, leading to crashes.
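This is exactly why Level 2 systems nag the driver. Here's a toy sketch of the escalation logic; the timings and actions are invented, and real systems use steering-torque sensors, cabin cameras, and manufacturer-specific thresholds:

```python
# Toy driver-monitoring escalation for a Level 2 system.
# Timings and actions are invented for illustration only.

ESCALATION = [
    (10, "visual reminder: 'Keep hands on wheel'"),
    (20, "audible chime"),
    (30, "loud alarm, begin reducing speed"),
    (45, "disengage assist, hazard lights, coast to a stop"),
]

def respond(seconds_without_driver_input: float) -> str:
    """Return the strongest warning whose threshold has been crossed."""
    action = "no warning"
    for threshold, step in ESCALATION:
        if seconds_without_driver_input >= threshold:
            action = step
    return action

for t in (5, 12, 25, 50):
    print(f"{t:>3}s hands-off -> {respond(t)}")
```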
My realistic assessment: Level 2 and 3 features will keep spreading across consumer cars, and Level 4 robotaxis will expand city by city. But Level 5 (anywhere, any conditions) is likely decades away, if it ever happens. The complexity is enormous.
Self-driving cars are real and getting better. They're not the revolution that was promised, but they're happening.
My advice: enjoy the assistance features that are available. They're genuinely helpful. But keep your hands on the wheel and eyes on the road. The future is coming, but it's coming more slowly than anyone predicted.
I've learned something from watching this field: progress in AI is often non-linear. We overestimate what we can do in the short term but underestimate what we can do in the long term.
Self-driving cars are a marathon, not a sprint. The companies still in the race are the ones playing the long game. And while the promises of 2020 didn't materialize, the progress that has been made is real and valuable.