
August 19, 2021

How we’ve built the World’s Most Experienced Urban Driver

  • Technology
A Waymo Vehicle navigates around a pedestrian and double parked car in San Francisco

Here’s our autonomous driving system—the Waymo Driver—in San Francisco earlier this year. It’s the kind of journey we’ve made tens of thousands of times since we first started driving autonomously in the city in 2009. As the Waymo Driver navigates dozens of vehicles and pedestrians, it’s met with a huge variety of other road users—from double-parked vehicles whose riders can hop out at any second, to scooters cutting across traffic even when they have a red light.

Any fully autonomous driving system needs to know where it is and where it’s going, see what’s happening around it, understand the intentions and predict the movement of other road users, plan what to do, and drive the vehicle safely. All this can be particularly complicated in cities. Any driver knows that narrow streets, unusual road geometries, frequent occlusions, intricate intersections, constantly evolving layouts, and close social interactions with cities’ many drivers, pedestrians, cyclists, and other road users, can make navigating dense urban environments a challenge.

With over a decade’s experience driving in major cities across the United States, Waymo has designed its technology to handle this complexity. That’s been especially important as we’ve expanded our testing in San Francisco, where we’re currently driving more than 100,000 miles per week. Four parts of our strategy have been key to helping us make progress—and have informed our technological decisions for urban driving since day one:

1) Our advanced sensors. Higher-quality data enables more advanced driving AI and software that can reason better and drive better. So we’ve built the most advanced sensors and perception systems, informed by more than 20 million autonomously driven miles and five generations of development, to provide that data for the Waymo Driver. The fifth-generation Waymo Driver is our most capable and advanced system yet, designed especially to navigate complex environments. Composed of complementary sensors, including radar, lidar, and cameras, it can see 360 degrees around the vehicle, day and night, even in tough weather conditions such as rain or fog.

A lidar visualization of people gathered around Dragon's Gate in San Francisco

As one of the Waymo Driver’s most powerful sensors, our lidars generate a dense, highly detailed 3D view of the vehicle’s surroundings, as shown in this image (visualizing lidar data only).

The powerful combination of these sensors allows us to seamlessly and accurately track the many objects around us while navigating city streets, such as in the example below, where we’re making an unprotected left turn onto 16th Street. And because of the long range and high density our sensors provide, the Waymo Driver can detect small objects and movement at a distance with high accuracy and reason about them intelligently, such as spotting a truck door slowly opening in the flow of traffic as a person gets ready to hop out and deliver some packages.

A fully autonomous Waymo Vehicle navigates around pedestrians out of crosswalks, cyclists, and parklets in San Francisco

Crucially, data from a suite of sensors enables our fully autonomous Driver to safely handle a wide range of situations. Camera-only systems can struggle in adverse weather and poor lighting conditions, for example.

That’s where our radar systems excel. Unlike traditional automotive radar, our fifth-generation imaging radar can detect both the stationary and the moving objects that are commonplace in urban driving, like rapidly detecting pedestrians stepping out from behind parked vehicles into the street. It also means we can continue to drive safely even when visibility is poor, such as in fog. Our radar data also helps the Waymo Driver work out that, for example, the steam rising from the utility holes in Civic Center Plaza is something we can drive through, not an obstacle.

And because our software fuses the outputs of our radar, lidar, and camera sensors together, our machine learning models can reason more intelligently about the world. If our cameras detect a stop sign, our lidar can help the Waymo Driver work out whether it’s a reflection in a West Portal storefront.
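As a rough illustration of this kind of cross-sensor check, here is a hypothetical sketch (not Waymo's actual software; all names and thresholds are invented for illustration) in which a camera-detected stop sign is accepted only if lidar returns confirm a physical surface along the same bearing. A reflection in a storefront window produces a convincing camera image but no matching lidar surface:

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    """A stop-sign candidate from a camera pipeline (hypothetical type)."""
    bearing_deg: float   # direction of the detection from the vehicle
    confidence: float    # classifier score in [0, 1]

@dataclass
class LidarReturn:
    """A single lidar point, simplified to bearing and range."""
    bearing_deg: float
    range_m: float

def confirm_with_lidar(det: CameraDetection, points: list[LidarReturn],
                       max_bearing_err_deg: float = 2.0) -> bool:
    """Accept the camera detection only if several lidar points land near
    the same bearing, i.e. a real physical surface exists there."""
    nearby = [p for p in points
              if abs(p.bearing_deg - det.bearing_deg) <= max_bearing_err_deg]
    return det.confidence > 0.5 and len(nearby) >= 3

# A real sign has supporting lidar points along its bearing...
sign_points = [LidarReturn(10.1, 22.0), LidarReturn(9.8, 22.1), LidarReturn(10.3, 21.9)]
print(confirm_with_lidar(CameraDetection(10.0, 0.9), sign_points))   # True
# ...while a reflection at a different bearing has none.
print(confirm_with_lidar(CameraDetection(-45.0, 0.9), sign_points))  # False
```

Real fusion stacks combine far richer signals (full 3D geometry, tracking over time, learned models), but the principle is the same: each modality vetoes the failure modes of the others.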

Our powerful onboard compute platform allows the Waymo Driver to process vast amounts of data and run real-time inference on large machine learning (ML) models to react immediately without additional human input, like when an emergency vehicle is approaching, or when a dog runs out into the street. Our vehicles also don’t need a cell signal to operate—the Waymo Driver knows what to do even if it’s in the Broadway Tunnel.

2) Our powerful ML-based driving software and robust training & evaluation infrastructure. Every major part of our software, whether it’s perception, semantic understanding, behavior prediction, or planning, leverages advanced machine learning models that benefit from our unparalleled driving experience and the richness of the data our sensors gather. Our driving software is based on years of AI research, and we’re contributing to the research community through our Waymo Open Dataset initiative, which we are constantly expanding with new data and new challenges in key areas of research, from perception to prediction to domain adaptation. We’re continuously innovating, pushing the boundaries of state-of-the-art AI research and bringing those advances into our production stack, which has allowed us to build the industry’s most advanced ML models for the complexities of urban driving.

Take our planning system, for instance. With more than 20 million miles autonomously driven on public roads and over 20 billion miles in simulation under its belt, the Waymo Driver develops an incredibly nuanced understanding of a city’s roads and its drivers’ behavior to adapt to the local driving character.

For example, our Driver knows that the preferred staging position to make a turn differs from intersection to intersection, so it adjusts its starting point based on past experience of the roadway and the many examples of observed behaviors of other drivers.

Similarly, our machine learning models have observed and learned countless other small nuances that help us drive like locals. Through our experience of driving in San Francisco, for example, the Waymo Driver has learned that residents often drive slightly slower while traveling up steep slopes. Therefore, based on its experience and depending on the speed and flow of traffic, the Waymo Driver does, too, to provide San Franciscans with a familiar and comfortable experience navigating the city’s many hills.
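This kind of locally adapted speed choice can be sketched as a simple heuristic. The function below is hypothetical (the blend rule, thresholds, and parameter names are all invented for illustration, not Waymo's planner): it matches slower observed local traffic, never exceeds the posted limit, and eases off further on steep uphill grades:

```python
from typing import Optional

def adapt_target_speed(posted_limit_mps: float, grade_percent: float,
                       observed_mean_mps: Optional[float] = None) -> float:
    """Pick a target speed that blends the posted limit with locally
    observed driving behavior (hypothetical heuristic).

    grade_percent: road grade, positive uphill (e.g. 15.0 for a 15% hill).
    observed_mean_mps: mean speed of local traffic on this segment, if known.
    """
    target = posted_limit_mps
    if observed_mean_mps is not None:
        # Match slower local traffic, but never exceed the posted limit.
        target = min(target, observed_mean_mps)
    if grade_percent > 10.0:
        # Locals tend to climb steep hills a bit slower; do the same.
        target *= 0.9
    return round(target, 2)

# Flat street: drive with local traffic, capped at the limit.
print(adapt_target_speed(11.0, grade_percent=0.0, observed_mean_mps=10.0))   # 10.0
# Steep hill: ease off ~10% relative to the observed flow.
print(adapt_target_speed(11.0, grade_percent=15.0, observed_mean_mps=10.0))  # 9.0
```

In practice this adaptation comes from learned models over millions of observed trajectories rather than a fixed multiplier, but the sketch shows the shape of the decision: local observation bounds and modulates the nominal target.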

Our system is learning many nuanced behaviors that are intuitive to other road users, which is important to building local residents’ trust in our technology. The Waymo Driver might slow down as it approaches an occluded crosswalk; it can work out that pedestrians are likely to be near vehicles with open doors; and it drives more cautiously in areas with lower visibility, like intersections at the top of hills.
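The "drive more cautiously where you can see less" behavior has a simple kinematic core: never go faster than you could stop within the distance you can actually see. The sketch below is a back-of-the-envelope illustration of that principle (the function, defaults, and constants are invented for illustration; a production planner reasons far more richly about occlusions):

```python
import math

def speed_cap_for_visibility(sight_distance_m: float,
                             comfortable_decel_mps2: float = 3.0,
                             reaction_time_s: float = 0.5) -> float:
    """Largest speed v (m/s) such that the vehicle can stop within its
    visible sight distance, e.g. approaching an occluded crosswalk or a
    hillcrest. Solves for v in:

        sight_distance >= v * t_react + v^2 / (2 * decel)
    """
    a = comfortable_decel_mps2
    t = reaction_time_s
    # Quadratic in v: v^2/(2a) + t*v - d = 0; take the positive root.
    return round(a * (-t + math.sqrt(t * t + 2.0 * sight_distance_m / a)), 2)

# With only 10 m of visible road, keep speed low enough to stop in time.
print(speed_cap_for_visibility(10.0))
# With a 50 m clear sightline, a higher speed is still safe.
print(speed_cap_for_visibility(50.0))
```

The cap tightens smoothly as the sightline shrinks, which is exactly the "slow down near occlusions" behavior other road users intuitively expect.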

Many of these nuances might feel relatively minor when taken in isolation, but combined they lead to a much more natural, comfortable, and predictable experience for both our riders and others with whom we share the road.

The software and machine learning models that power our technology require very large volumes of properly curated data and massive training and evaluation cloud-compute infrastructure. That’s why we invest heavily in infrastructure and frameworks to rapidly train and evaluate our ML models, as well as metrics, tools, simulation environments (such as our Simulation City) to evaluate the performance of the Waymo Driver as a whole.

3) Compounding our experience with our shared tech stack. Because we operate in multiple environments, from cities to freeways, we pool our experience to create a robust, generalizable tech stack. In particular, the experience we’ve developed in Arizona running a publicly available ride-hailing service with no human drivers carries over to new cities. Since first opening up our fully autonomous service to the public in October 2020, we have safely served tens of thousands of rides and gathered valuable field data to support our safety evaluation and readiness frameworks.

Because we’re building a single driver—the Waymo Driver—our vehicles in San Francisco draw on our years of experience, which gives us a significant lead when tackling various driving challenges. While the specifics of each city and driving environment are somewhat different, the fundamentals of building, evaluating, deploying, and operating our Driver carry over between domains. As a result, future rollouts become much more efficient and scalable.

4) Our laser focus on full autonomy. We’re building systems that operate without any reliance on human drivers. Either the system can complete the entire trip or it cannot. There is never any confusion about who is in control and responsible for the safety of driving. So we focus on the hardest parts of the challenge, which is critical to reliably scaling vehicles with no human drivers.

And as the only company that is operating a fully autonomous public ride-hailing service in the U.S., we know exactly what it takes to reach full autonomy, including being able to handle the long tail of rare events, and design our roadmap accordingly as we test in other cities.

These are just a fraction of the ways we’ve optimized our Driver for cities, and they illustrate how our teams are designing the Waymo Driver to handle the roads safely and effectively as we scale to new places. As we continue to build new capabilities into the Waymo Driver, we’re expanding opportunities to scale autonomous driving in cities across the country.


As we continue to build the hardware and software to power the Waymo Driver, we’re looking for people to join our growing team. At Waymo, our teams motivate and inspire one another, see their research implemented in tangible ways, and together make real steps toward a positive impact on the world of mobility. Whether you’re an engineer, researcher, or a curious and critical thinker driven to make the roads safer for everyone, we’re looking for people to help us tackle real-world problems. Learn more at