CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for Robust 3D Object Detection
Jyh-Jing Hwang
Henrik Kretzschmar
Joshua Manela
Sean Rafferty
Nicholas Armstrong-Crews
Tiffany Chen
Dragomir Anguelov
Abstract
Robust 3D object detection is critical for safe autonomous driving. Camera and radar sensors are synergistic as they capture complementary information and work well under different environmental conditions. Fusing camera and radar data is challenging, however, as each of the sensors lacks information along a perpendicular axis, that is, depth is unknown to camera and elevation is unknown to radar. We propose the camera-radar matching network CramNet, an efficient approach to fuse the sensor readings from camera and radar in a joint 3D space. To leverage radar range measurements for better camera depth predictions, we propose a novel ray-constrained cross-attention mechanism that resolves the ambiguity in the geometric correspondences between camera features and radar features. Our method supports training with sensor modality dropout, which leads to robust 3D object detection, even when a camera or radar sensor suddenly malfunctions on a vehicle. We demonstrate the effectiveness of our fusion approach through extensive experiments on the RADIATE dataset, one of the few large-scale datasets that provide radar radio frequency imagery. A camera-only variant of our method achieves competitive performance in monocular 3D object detection on the Waymo Open Dataset.
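The abstract notes that training with sensor modality dropout makes detection robust to a sudden camera or radar failure. As a rough illustration only (not the paper's implementation), the idea can be sketched as randomly zeroing out one modality's features during training so the network learns not to depend on either sensor alone; the function name, dropout rate, and flat feature representation below are assumptions:

```python
import random

def apply_modality_dropout(camera_feat, radar_feat, p_drop=0.2, rng=None):
    """Randomly zero one sensor modality during training (illustrative sketch).

    With probability p_drop the camera features are zeroed, with probability
    p_drop the radar features are zeroed, and otherwise both pass through.
    At most one modality is dropped per call, so the fused input never
    becomes entirely empty.
    """
    rng = rng or random.Random()
    r = rng.random()
    if r < p_drop:
        camera_feat = [0.0] * len(camera_feat)  # simulate camera failure
    elif r < 2 * p_drop:
        radar_feat = [0.0] * len(radar_feat)    # simulate radar failure
    return camera_feat, radar_feat
```

At inference time no dropout is applied; if a sensor actually fails, its features arrive as zeros, matching a condition the network has already seen during training.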
Publication: ECCV 2022
Topics: Perception