Multi-modal 3D Human Pose Estimation with 2D Weak Supervision in Autonomous Driving

Authors

  • Jingxiao Zheng

  • Xinwei Shi

  • Alexander Gorban

  • Junhua Mao

  • Yang Song

  • Charles R. Qi

  • Ting Liu

  • Visesh Chari

  • Andre Cornman

  • Yin Zhou

  • Congcong Li

  • Dragomir Anguelov

Abstract

3D human pose estimation (HPE) in autonomous vehicles (AV) differs from other use cases in several important respects, including the resolution and range of 3D data, the absence of dense depth maps, LiDAR failure modes, the relative placement of the camera and LiDAR, and a high bar for estimation accuracy. Data collected for other use cases (such as virtual reality, gaming, and animation) may therefore not be usable for AV applications. This necessitates the collection and annotation of a large amount of 3D data for HPE in AV, which is time-consuming and expensive. In this paper, we propose one of the first approaches to alleviate this problem in the AV setting. Specifically, we propose a multi-modal approach that uses 2D labels on RGB images as weak supervision to perform 3D HPE. The proposed multi-modal architecture incorporates LiDAR and camera inputs with an auxiliary segmentation branch. On the Waymo Open Dataset, our approach achieves a 22% relative improvement over a camera-only 2D HPE baseline and a 6% improvement over a LiDAR-only model. Finally, careful ablation studies and parts-based analysis illustrate the advantages of each of our contributions.
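To make the 2D weak-supervision idea concrete, the sketch below shows one common way such a loss can be formed: predicted 3D joints are projected into the image with the camera intrinsics and penalized against annotated 2D keypoints. This is a minimal, hypothetical illustration; the function names, loss form, and tensor shapes are assumptions for exposition and do not reproduce the paper's actual architecture or training objective.

```python
# Hypothetical sketch of a 2D weak-supervision loss for 3D keypoints.
# Assumes predicted joints are expressed in the camera frame.
import torch


def project_to_image(joints_3d: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """Pinhole projection of (B, J, 3) camera-frame joints using intrinsics K (3, 3)."""
    uvw = joints_3d @ K.T                                  # (B, J, 3): [u*z, v*z, z]
    return uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)     # (B, J, 2) pixel coordinates


def weak_2d_loss(joints_3d, keypoints_2d, visibility, K):
    """L1 reprojection error against 2D labels, masked by per-keypoint visibility."""
    uv_pred = project_to_image(joints_3d, K)               # (B, J, 2)
    err = (uv_pred - keypoints_2d).abs().sum(dim=-1)       # (B, J)
    return (err * visibility).sum() / visibility.sum().clamp(min=1.0)


# Toy usage with random tensors: a batch of 2 people with 13 joints each.
B, J = 2, 13
K = torch.tensor([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 640.0],
                  [0.0, 0.0, 1.0]])
pred_3d = torch.rand(B, J, 3) + torch.tensor([0.0, 0.0, 5.0])  # keep depth positive
labels_2d = torch.rand(B, J, 2) * 100.0
vis = torch.ones(B, J)
print(weak_2d_loss(pred_3d, labels_2d, vis, K))
```

In practice such a term would be combined with whatever 3D supervision is available, so the 2D labels only constrain the projection of the predicted pose rather than its full 3D configuration.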