
7 Myths and Misconceptions About Self-Driving Cars

On March 18, 2018, an autonomous test vehicle in Tempe, Arizona, struck and killed a pedestrian, a jolt that pushed self-driving cars from lab curiosity into a headline-making public policy issue.

That crash changed the conversation. Public trust, investment decisions, and regulation all move based on what people believe about these systems — and a lot of those beliefs are rooted in myths rather than engineering, law, or economics. This piece debunks seven common myths about self-driving cars and replaces each with a concise reality check grounded in evidence and real incidents.

Below: seven numbered reality checks covering technical safety, legal and ethical questions, and adoption impacts.

Safety and Technical Reality

[Image: Autonomous vehicle sensors and safety systems, including lidar and cameras]

Safety is the hottest topic when people discuss autonomous vehicles, but “safe” means different things in different contexts. Engineers use SAE International’s Levels 0–5 to describe automation, and no true Level 5 consumer vehicle exists as of mid‑2024. Many deployments are domain‑restricted: geofenced robotaxis in a few neighborhoods, low‑speed shuttles on campuses, or highway assistance systems. Incidents such as the March 18, 2018 Tempe fatality, along with multiple NHTSA inquiries into driver‑assist crashes, show that failures tend to be layered: sensor limitations, edge‑case scenarios, software design, and human factors all play a role. The result: safety gains are real in some areas, but universal “complete safety” remains aspirational.
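
To keep the taxonomy straight, here is a minimal reference sketch in Python; the level summaries are paraphrased from SAE J3016’s commonly cited definitions, not official SAE wording.

```python
# Illustrative summary of SAE J3016 automation levels (paraphrased,
# not official SAE wording). A quick reference when reading claims
# about "self-driving" systems.
SAE_LEVELS = {
    0: "No automation: human does all driving; system may warn only.",
    1: "Driver assistance: steering OR speed support (e.g., adaptive cruise).",
    2: "Partial automation: steering AND speed support; human must supervise.",
    3: "Conditional automation: system drives in limited conditions; human must take over on request.",
    4: "High automation: no human fallback needed, but only within a defined domain (e.g., a geofenced city).",
    5: "Full automation: drives anywhere a human could; no such consumer vehicle exists today.",
}

def describe(level: int) -> str:
    """Return a one-line summary for an SAE level, or raise for invalid input."""
    try:
        return f"Level {level}: {SAE_LEVELS[level]}"
    except KeyError:
        raise ValueError(f"SAE levels run 0-5, got {level}") from None

if __name__ == "__main__":
    for lvl in range(6):
        print(describe(lvl))
```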

1. Self-driving cars are already completely safe

Flatly: no. Some AV systems improve safety in narrow settings, but blanket claims of complete safety ignore limits. The March 18, 2018 Uber pedestrian fatality in Tempe and subsequent investigations showed how sensor detection, software interpretation, and operational oversight can fail together.

Manufacturers have adopted staged rollouts for a reason. Waymo began as a Google project in 2009 and launched Waymo One in 2018 after years of simulated and limited real‑world testing. That phased approach reflects a recognition that performance varies by operating domain — city center versus suburban highway — and by environmental conditions.

Advanced driver assistance systems (ADAS) like Tesla Autopilot reduce some risks but still require human attention; NHTSA has opened several reviews into crashes involving driver‑assist use. Practical takeaway: expect incremental safety gains, targeted deployments, and continued need for driver or operator engagement for the foreseeable future.

2. Autonomy means no human oversight is needed

Not today. Most real‑world tests and many commercial pilots still rely on human oversight in one form or another: in‑vehicle safety drivers ready to intervene, remote operators who can guide vehicles out of trouble, or fallback procedures that hand control back to humans.

Those human roles matter because edge cases and sensor failures remain hard to predict. Companies such as Waymo used safety drivers in early phases before allowing no‑driver robotaxis in narrowly defined zones. Cruise and other fleets run supervisory teams and well‑practiced takeover protocols during testing.

Regulators often require human‑in‑the‑loop capabilities during pilots, which affects liability and insurance. Expect human oversight to persist through a long transition to any mass Level 4 or Level 5 operation.
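
As an illustration of what “human‑in‑the‑loop” can mean in software terms, here is a hypothetical sketch of a supervision state machine; the mode names, inputs, and policy are invented for this example and do not describe any company’s actual system.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()          # system is driving within its approved domain
    TAKEOVER_REQUESTED = auto()  # system has asked a human or remote operator to intervene
    HUMAN_CONTROL = auto()       # safety driver or remote operator is in control
    MINIMAL_RISK = auto()        # fallback: pull over and stop safely

def next_mode(mode: Mode, confident: bool, operator_ready: bool) -> Mode:
    """Hypothetical supervision policy: hand control to a human when the
    system loses confidence, or execute a minimal-risk maneuver if no one
    responds in time."""
    if mode is Mode.AUTONOMOUS and not confident:
        return Mode.TAKEOVER_REQUESTED
    if mode is Mode.TAKEOVER_REQUESTED:
        return Mode.HUMAN_CONTROL if operator_ready else Mode.MINIMAL_RISK
    return mode

# Example: confidence drops and no operator responds -> pull over safely.
m = next_mode(Mode.AUTONOMOUS, confident=False, operator_ready=False)
m = next_mode(m, confident=False, operator_ready=False)
print(m)  # Mode.MINIMAL_RISK
```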

3. Cameras alone are enough — lidar is unnecessary

There’s a real technical debate here. One camp favors vision‑first systems that rely on cameras and neural networks; another insists on sensor fusion combining cameras, radar, and lidar for redundancy and reliable depth sensing.

Vision‑only approaches (Tesla being the most visible example) benefit from lower hardware cost and rapid progress in computer vision. But lidar gives precise distance measurements independent of lighting, which helps in complex scenes and when redundancy matters. Waymo and Cruise, for instance, use lidar as part of multimodal sensor suites.

The honest answer: neither approach is definitively superior in every setting. Trade‑offs depend on operating domain, cost targets, and how much redundancy a system designer wants to build in.
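
To make the redundancy argument concrete, the toy sketch below cross-checks a camera detection against a lidar return before committing to a braking decision; the thresholds and the fusion rule itself are invented for illustration, not drawn from any production stack.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    distance_m: float   # estimated range to the object
    confidence: float   # detector confidence, 0.0-1.0

def fused_brake_decision(camera: Detection, lidar: Optional[Detection],
                         brake_distance_m: float = 30.0) -> bool:
    """Toy fusion rule (not any vendor's actual algorithm): lidar measures
    range directly, so a lidar return within braking distance needs only weak
    camera agreement; a camera-only decision demands high confidence because
    monocular depth estimates degrade in glare, darkness, and unusual scenes."""
    if lidar is not None:
        ranges_agree = abs(camera.distance_m - lidar.distance_m) < 5.0
        return lidar.distance_m < brake_distance_m and (ranges_agree or camera.confidence > 0.3)
    return camera.distance_m < brake_distance_m and camera.confidence > 0.9

# Night scene: the camera is unsure; lidar still returns a direct range.
cam = Detection(distance_m=28.0, confidence=0.4)
print(fused_brake_decision(cam, lidar=None))             # False: camera alone is too uncertain
print(fused_brake_decision(cam, Detection(26.5, 0.95)))  # True: lidar corroborates the hazard
```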

Regulation, Liability, and Ethics

[Image: Legal and ethical framework for autonomous vehicle testing and deployment]

Law and policy lag technology. Regulators write the rules that permit testing, set safety expectations, and allocate liability — and those rules differ a lot across jurisdictions. In the U.S., federal agencies like NHTSA issue guidance while states control vehicle registration and on‑road testing permits. Europe is moving on its own timetable, and international rulemaking is gradual. The patchwork means companies often pick testing locations with favorable rules, shaping who sees services first and under what conditions.

4. There’s a clear legal framework for driverless cars

Not yet. Regulatory frameworks are uneven and evolving. In the U.S., NHTSA provides safety guidance, but more than 30 states as of 2024 have statutes or executive orders that allow some form of autonomous‑vehicle testing or deployment — and the details vary widely.

States such as California require testing permits and disclosure; Arizona went early and permissive, attracting many pilots. Internationally, the EU and individual countries are moving at different speeds, and standards bodies like SAE supply a useful technical taxonomy (Levels 0–5) without dictating law.

Practical effect: companies choose favorable jurisdictions for pilots, which produces uneven geographic access and complicates large‑scale rollouts until rules harmonize more broadly.

5. Insurance will automatically cover all autonomous vehicle crashes

Insurance for autonomous vehicles is complicated and unsettled. Liability can shift depending on what failed: a human driver, fleet operator, software bug, or a supplier component. That matters because product liability and commercial fleet coverages differ from personal auto insurance.

For example, a crash involving a company‑operated robotaxi may trigger product‑liability or operator liability claims against the service provider or OEM, while a crash tied to misuse of ADAS features could leave responsibility with the human driver and the driver’s insurer.

Insurers are already piloting new products and partnerships with manufacturers, but consumers and businesses should expect case‑by‑case settlements and transitional uncertainty for several years.

Adoption, Cost, and Public Perception

Widespread adoption faces three big hurdles: cost, trust, and use case fit. High‑end sensor and compute stacks remain costly, public opinion surveys show mixed willingness to ride without a human, and different use cases will mature at different speeds — think robotaxis and last‑mile delivery before mass private ownership of Level 4/5 cars. Workforce impacts and urban planning consequences add further complexity.

6. Self-driving cars will make human drivers obsolete within a few years

Predictions of rapid, universal replacement are overoptimistic. Technical edge cases, regulatory hurdles, infrastructure needs, and economic incentives slow the pace. Waymo’s trajectory illustrates this: the project began in 2009, launched Waymo One in 2018, and remains geographically limited years later.

What’s more likely is domain‑specific uptake. Geofenced robotaxis, campus shuttles, and constrained delivery robots will expand first because their operating environments are predictable. Private ownership of full Level 4/5 vehicles that work everywhere is much harder and will take longer in most markets — often a decade or more.

So: human drivers won’t disappear overnight, but certain driving jobs and services will transform sooner than others.

7. Autonomous vehicles will instantly eliminate congestion and job losses won’t be a problem

Neither claim holds up under scrutiny. The effect of AVs on congestion is ambiguous. Automated fleets can reduce accidents and improve traffic flow, but they can also induce more travel and introduce empty‑vehicle miles as companies reposition cars between fares.
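
A back-of-the-envelope sketch (all numbers assumed for illustration) shows how empty repositioning miles can outweigh flow improvements:

```python
# Toy illustration with invented numbers: even if automation smooths traffic,
# deadheading (empty repositioning miles) can push the total load on roads up.
passenger_miles = 100_000     # miles driven with a rider aboard
deadhead_fraction = 0.40      # assumed empty miles per passenger mile
flow_efficiency_gain = 0.15   # assumed congestion relief from smoother driving

total_miles = passenger_miles * (1 + deadhead_fraction)
effective_load = total_miles * (1 - flow_efficiency_gain)

print(f"Total vehicle-miles: {total_miles:,.0f}")          # 140,000
print(f"Congestion-adjusted load: {effective_load:,.0f}")  # 119,000 > 100,000
```

Under these assumptions, net congestion load still rises: the 15% flow gain does not offset the 40% of miles driven empty.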

Employment impacts are real. In the U.S. alone, heavy and tractor‑trailer truck drivers account for roughly 1.7–1.8 million jobs; add taxi, rideshare, and delivery drivers, and millions more workers worldwide could be affected. The timing and scale of displacement depend on business models, regulatory choices, and the pace of technical adoption.

Policy responses matter: targeted retraining programs, phased deployment rules, and social supports can ease transitions. Absent those, worker disruption will be significant even if it unfolds over years rather than months.

Summary

  • Safety is nuanced: staged rollouts, SAE Levels 0–5, and incidents like the March 18, 2018 Tempe fatality show that gains are context‑dependent, not instantaneous.
  • Human oversight and diverse sensor strategies (vision‑first vs. sensor fusion) remain central to real deployments and design trade‑offs.
  • Regulation and liability are a patchwork; insurance and legal responsibility will be sorted case‑by‑case during a long transition.
  • Expect domain‑specific adoption first (robotaxis, delivery), gradual workforce impacts (millions of driving jobs at risk), and mixed effects on congestion that depend on fleet behavior and policy.
  • Keep questioning the myths around self-driving cars, support sensible regulation, and back programs that retrain workers and test technologies carefully.
