DisorientLiDAR: Physical Attacks on LiDAR-based Localization

Yizhen Lao1,2,#

Yu Zhang1,#

Ziting Wang1

Chengbo Wang2

Yifei Xue2

Tie Ji2

Wanpeng Shao1


1College of Information Science and Engineering, Hunan University, 2 Lushan South Road, Yuelu District, Changsha, Hunan Province, 410082, China
2School of Design, Hunan University, Pailou Road, South Campus, Taozihu, Changsha, Hunan Province, 410082, China
#These authors contributed equally to this work.

Abstract

Deep learning models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations. Although this poses a serious security challenge for the localization of self-driving cars, attacks on localization remain largely unexplored, as most adversarial attacks have targeted 3D perception. In this work, we propose DisorientLiDAR, a novel adversarial attack framework targeting LiDAR-based localization. By reverse-engineering localization models (e.g., feature extraction networks), adversaries can identify critical keypoints and strategically remove them, thereby disrupting LiDAR-based localization. We first evaluate our proposal on three state-of-the-art point-cloud registration models (HRegNet, D3Feat, and GeoTransformer) using the KITTI dataset. Experimental results demonstrate that removing regions containing the Top-K keypoints significantly degrades their registration accuracy. We further validate the attack's impact on the Autoware autonomous driving platform, where hiding merely a few critical regions induces noticeable localization drift. Finally, we extend our attack to the physical world by concealing critical regions with near-infrared-absorptive materials, successfully replicating the attack effects observed on KITTI data. This step brings us closer to realistic physical-world attacks and demonstrates the effectiveness and generality of our proposal.