
WOMD-LiDAR: Raw Sensor Dataset Benchmark for Motion Forecasting

  • Kan Chen

  • Runzhou Ge

  • Hang Qiu

  • Rami Al-Rfou

  • Charles R. Qi

  • Xuanyu Zhou

  • Zoey Yang

  • Scott Ettinger

  • Pei Sun

  • Zhaoqi Leng

  • Mustafa Baniodeh

  • Ivan Bogun

  • Weiyue Wang

  • Mingxing Tan

  • Dragomir Anguelov

Abstract

Widely adopted motion forecasting datasets substitute the observed sensor inputs with higher-level abstractions such as 3D boxes and polylines. These sparse shapes are inferred by annotating the original scenes with perception systems' predictions. Such intermediate representations tie the quality of motion forecasting models to the performance of computer vision models. Moreover, the human-designed explicit interfaces between perception and motion forecasting typically pass only a subset of the semantic information present in the original sensor input. To study the effect of these modular approaches, design new paradigms that mitigate these limitations, and accelerate the development of end-to-end motion forecasting models, we augment the Waymo Open Motion Dataset (WOMD) with large-scale, high-quality, diverse LiDAR data for the motion forecasting task. We augment over 100K of the original WOMD scenes, each spanning 20 seconds, with well-synchronized and calibrated high-quality LiDAR point clouds captured across a range of urban and suburban geographies (https://waymo.com/open/data/motion/). Furthermore, we integrate the LiDAR data into motion forecasting model training and provide a strong baseline. Experiments show that the LiDAR data improves performance on the motion forecasting task. We hope that WOMD-LiDAR will provide new opportunities for the motion forecasting research community.

Dataset

The WOMD-LiDAR dataset is available at https://waymo.com/open/data/motion/. The 1.2.0 release of the Motion Dataset adds LiDAR points for the first 1 second of each of the 9-second windows.

The LiDAR data is stored in a compressed format. After decompression, it has the same format as the Perception Dataset LiDAR data. Check out the tutorial for decompressing and using the LiDAR data in the Motion Dataset.
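The official tutorial's utilities handle the actual decompression, so real pipelines should use them directly. As a conceptual illustration only, the toy sketch below shows a delta-encode + zlib round trip on an integer array, the general style of codec used for compressing range-image channels; the function names and the specific scheme here are assumptions for illustration, not the released format.

```python
# Toy sketch: delta-encode + zlib compress/decompress of integer values.
# This is NOT the WOMD-LiDAR codec; use the official tutorial utilities
# to decompress the released data.
import zlib
import numpy as np

def compress(values: np.ndarray) -> bytes:
    """Delta-encode an int32 array, then zlib-compress the delta bytes."""
    deltas = np.diff(values, prepend=0)  # first delta keeps the first value
    return zlib.compress(deltas.astype(np.int32).tobytes())

def decompress(blob: bytes, n: int) -> np.ndarray:
    """Invert compress(): zlib-decompress, then cumulative-sum the deltas."""
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32, count=n)
    return np.cumsum(deltas)

# Nearby range measurements compress well once delta-encoded.
ranges = np.array([1500, 1502, 1501, 1499, 1505], dtype=np.int32)
restored = decompress(compress(ranges), len(ranges))
assert np.array_equal(restored, ranges)
```

Delta encoding exploits the spatial smoothness of range images: neighboring returns have similar ranges, so the deltas are small and highly compressible.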

  • Simulation of vehicles in a parking lot, viewed from above, where a Waymo vehicle uses LiDAR to find a spot.
  • Simulation of vehicles in the street, viewed from above, where a Waymo vehicle uses LiDAR to navigate a normal road.
  • Simulation of vehicles in the street, viewed from above, where a Waymo vehicle uses LiDAR to navigate a busy road.

    Figure 1: GIFs of WOMD-LiDAR data

Modeling

We provide a baseline model based on the WOMD-LiDAR dataset. An overview of the model is shown below. The WayFormer-based model with LiDAR inputs achieves competitive performance on the WOMD validation set, and additional visualization results are provided below.
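The page does not detail how the baseline consumes LiDAR, so the sketch below shows one plausible early-fusion pattern: pool raw points into a few fixed-size feature tokens and append them to the agent-history tokens that a WayFormer-style attention encoder would consume. Everything here (the projection `w`, the token counts, the pooling) is a hypothetical stand-in, not the released architecture.

```python
# Illustrative sketch (not the released baseline): fuse LiDAR points into a
# WayFormer-style scene context by pooling point features into tokens.
import numpy as np

rng = np.random.default_rng(0)

def lidar_tokens(points: np.ndarray, w: np.ndarray, num_tokens: int = 4) -> np.ndarray:
    """Project raw points (N, 3) to D-dim features, then max-pool into tokens.

    `w` stands in for a learned projection; a real model would use a
    point-cloud encoder (e.g. a small PointNet-like MLP)."""
    feats = np.maximum(points @ w, 0.0)            # (N, D) ReLU features
    chunks = np.array_split(feats, num_tokens)     # crude grouping of points
    return np.stack([c.max(axis=0) for c in chunks])  # (num_tokens, D)

D = 16
points = rng.normal(size=(1000, 3))                # one LiDAR sweep
agent_hist = rng.normal(size=(8, D))               # 8 agent-history tokens
ctx = np.concatenate([agent_hist, lidar_tokens(points, rng.normal(size=(3, D)))])
assert ctx.shape == (12, D)                        # joint context for attention
```

Appending LiDAR tokens to the existing context lets the attention layers weigh raw-sensor evidence against the abstracted agent history without changing the rest of the encoder.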

  • Model structure of WayFormer using LiDAR inputs.

    Figure 2. Model architecture of the WayFormer-LiDAR model

  • Visualization of WayFormer-LiDAR model's inference on WOMD examples.

    Figure 3. Prediction result comparison between WayFormer (sub-figures on the left) and WayFormer with LiDAR inputs (sub-figures on the right). Legend: yellow and blue trajectories are predictions for different agents, with blue marking the highlighted agents; red dotted lines are the labeled ground-truth trajectories for agents in the scene.

Links

Publication

ICRA 2024