Scalability in Perception for Autonomous Driving: Waymo Open Dataset

Authors

  • Pei Sun
  • Henrik Kretzschmar
  • Xerxes Dotiwalla
  • Aurelien Chouard
  • Vijaysai Patnaik
  • Paul Tsui
  • James Guo
  • Yin Zhou
  • Yuning Chai
  • Benjamin Caine
  • Vijay Vasudevan
  • Wei Han
  • Jiquan Ngiam
  • Hang Zhao
  • Aleksei Timofeev
  • Scott Ettinger
  • Maxim Krivokon
  • Amy Gao
  • Aditya Joshi
  • Sheng Zhao
  • Shuyang Cheng
  • Yu Zhang
  • Jonathon Shlens
  • Zhifeng Chen
  • Dragomir Anguelov

Abstract

The research community shows increasing interest in autonomous driving, despite the resource intensity of obtaining representative real-world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community’s contributions with real-world self-driving problems, we introduce a new large-scale, high-quality, diverse dataset. Our new dataset consists of 1150 scenes, each spanning 20 seconds of well-synchronized and calibrated high-quality LiDAR and camera data captured across a range of urban and suburban geographies. By our proposed diversity metric, it is 15x more diverse than the largest camera+LiDAR dataset available. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for both 2D and 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code, and more up-to-date information at http://www.waymo.com/open.
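
For concreteness, the sketch below shows how a single scene might be read with the published waymo-open-dataset tooling, assuming each scene is stored as a TFRecord file of serialized Frame protos; the segment filename is a hypothetical placeholder, not an actual file in the release.

    import tensorflow as tf
    from waymo_open_dataset import dataset_pb2 as open_dataset

    # Hypothetical placeholder path; real segment files follow a similar naming scheme.
    FILENAME = "segment-XXXX_with_camera_labels.tfrecord"

    dataset = tf.data.TFRecordDataset(FILENAME, compression_type="")
    for data in dataset:
        # Each record is one frame of a 20-second scene.
        frame = open_dataset.Frame()
        frame.ParseFromString(bytes(data.numpy()))
        # A frame bundles synchronized, calibrated sensor data with its annotations.
        print(frame.context.name,         # unique scene identifier
              len(frame.lasers),          # LiDAR returns for this frame
              len(frame.images),          # camera images for this frame
              len(frame.laser_labels))    # 3D bounding boxes with consistent track IDs

Iterating over all frames of a scene this way exposes both the 3D LiDAR labels (frame.laser_labels) and the 2D camera labels (frame.camera_labels), whose identifiers stay consistent across frames for tracking.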