Large Scale Interactive Motion Forecasting for Autonomous Driving: The Waymo Open Motion Dataset

Authors

  • Scott Ettinger

  • Shuyang Cheng

  • Benjamin Caine

  • Chenxi Liu

  • Hang Zhao

  • Sabeek Pradhan

  • Yuning Chai

  • Ben Sapp

  • Charles Qi

  • Yin Zhou

  • Zoey Yang

  • Aurelien Chouard

  • Pei Sun

  • Jiquan Ngiam

  • Vijay Vasudevan

  • Alexander McCauley

  • Jonathon Shlens

  • Dragomir Anguelov

Abstract

As autonomous driving systems mature, motion forecasting has received increasing attention as a critical requirement for planning. Of particular importance are interactive situations such as merges and unprotected turns, where predicting individual object motion is not sufficient; joint predictions of multiple objects are required for effective route planning. There has been a critical need for high-quality motion data that is rich in both interactions and annotations to develop motion planning models. In this work, we introduce the most diverse interactive motion dataset to our knowledge, and provide specific labels for interacting objects suitable for developing joint prediction models. With over 100,000 scenes, each 20 seconds long at 10 Hz, our new dataset contains more than 570 hours of unique data over 1750 km of roadways. It was collected by mining for interesting interactions between vehicles, pedestrians, and cyclists across six cities within the United States. We use a high-accuracy 3D auto-labeling system to generate high-quality 3D bounding boxes for each road agent, and provide corresponding high-definition 3D maps for each scene. Furthermore, we introduce a new set of metrics that provides a comprehensive evaluation of both single-agent and joint-agent interaction motion forecasting models. Finally, we provide strong baseline models for individual-agent prediction and joint prediction. We hope that this new large-scale interactive motion dataset will provide new opportunities for advancing motion forecasting models.
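The headline statistics in the abstract can be sanity-checked with a quick back-of-the-envelope calculation. The exact scene count below is an assumption for illustration (the abstract only says "over 100,000 scenes"); the other numbers come directly from the text:

```python
# Sanity check of the dataset statistics quoted in the abstract.
num_scenes = 103_000      # assumed; the abstract only states "over 100,000 scenes"
scene_seconds = 20        # each scene is 20 seconds long
sample_rate_hz = 10       # sampled at 10 Hz

total_hours = num_scenes * scene_seconds / 3600
frames_per_scene = scene_seconds * sample_rate_hz

print(f"total hours ~= {total_hours:.0f}")       # ~572, consistent with ">570 hours"
print(f"frames per scene = {frames_per_scene}")  # 200 timesteps per scene
```

At 10 Hz over 20 seconds, each scene contributes 200 timesteps, and roughly 103,000 such scenes already exceed the 570 hours of unique data reported.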