Multi-sensor Fusion for Autonomous Driving

Although sensor fusion is an essential prerequisite for autonomous driving, it entails a number of challenges and potential risks. For example, the commonly used deep fusion networks lack interpretability and robustness. To address these fundamental issues, this book explains the mechanism of deep fusion models from the perspective of uncertainty and models the underlying risks in order to build a robust fusion architecture.

The book reviews the multi-sensor data fusion methods applied in autonomous driving, and its main body is divided into three parts: Basic, Method, and Advance. Starting from the mechanism of data fusion, it traces the development of perception and data fusion technology for autonomous driving and surveys the perception tasks built on multimodal data fusion. It then proposes a series of novel algorithms for these perception tasks that improve the accuracy and robustness of autonomous driving systems and offer ways to overcome the remaining challenges of multi-sensor fusion. Finally, to bridge the gap between technical research and intelligent connected applications, it explores practical fusion datasets, vehicle-road collaboration, and fusion mechanisms.

In contrast to the existing literature on data fusion and autonomous driving, this book concentrates on deep fusion methods for perception-related tasks, emphasizes the theoretical explanation of the fusion methods, and takes the relevant engineering scenarios fully into account. Helping readers acquire an in-depth understanding of fusion methods and theories in autonomous driving, it can be used as a textbook for graduate students and scholars in related fields or as a reference guide for engineers who wish to apply deep fusion methods.
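To give a flavor of what "fusion from the perspective of uncertainty" can mean in practice, the sketch below shows a generic precision-weighted (inverse-variance) fusion of camera and LiDAR features: each modality branch predicts a feature together with a log-variance, and the noisier modality is down-weighted at fusion time. This is a minimal illustration of the general idea only, not the book's RI-Fusion or M2-Fusion architectures; the class name, dimensions, and heads are hypothetical.

```python
# Minimal sketch of uncertainty-weighted multimodal fusion (illustrative only;
# NOT the book's method). Each branch predicts a feature and a log-variance,
# and features are combined by precision (inverse-variance) weighting.
import torch
import torch.nn as nn


class UncertaintyWeightedFusion(nn.Module):
    """Fuse camera and LiDAR features, weighting each modality by the
    precision (inverse variance) its own head predicts."""

    def __init__(self, dim: int = 128):
        super().__init__()
        # Each head maps a modality feature to (feature, log-variance).
        self.cam_head = nn.Linear(dim, dim + 1)
        self.lidar_head = nn.Linear(dim, dim + 1)

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor):
        c = self.cam_head(cam_feat)        # (..., dim + 1)
        l = self.lidar_head(lidar_feat)    # (..., dim + 1)
        c_mu, c_logvar = c[..., :-1], c[..., -1:]
        l_mu, l_logvar = l[..., :-1], l[..., -1:]
        # Precision = exp(-log variance); a noisier modality gets less weight.
        c_prec = torch.exp(-c_logvar)
        l_prec = torch.exp(-l_logvar)
        fused = (c_prec * c_mu + l_prec * l_mu) / (c_prec + l_prec)
        return fused, (c_logvar, l_logvar)


if __name__ == "__main__":
    fusion = UncertaintyWeightedFusion(dim=128)
    cam = torch.randn(4, 128)    # batch of camera features
    lidar = torch.randn(4, 128)  # batch of LiDAR features
    fused, _ = fusion(cam, lidar)
    print(fused.shape)           # torch.Size([4, 128])
```

The design choice illustrated here is that fusion weights are learned per sample rather than fixed, so the network can, for instance, discount the camera branch in low light; the book develops this theme far more rigorously in Part III (Chapter 9, Information Quality in Data Fusion).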

Author(s): Xinyu Zhang, Jun Li, Zhiwei Li, Huaping Liu, Mo Zhou, Li Wang, Zhenhong Zou
Publisher: Springer
Year: 2023

Language: English
Pages: 247

Foreword
Preface
Acknowledgments
Contents
Part I Basic
1 Introduction
1.1 Autonomous Driving
1.2 Sensors
1.3 Perception
1.4 Multi-Sensor Fusion
1.5 Public Datasets
1.6 Challenges
1.7 Summary
References
2 Overview of Data Fusion in Autonomous Driving Perception
2.1 A Brief Review of Deep Learning
2.2 Fusion in Depth Completion
2.3 Fusion in Dynamic Object Detection
2.4 Fusion in Stationary Road Object Detection
2.5 Fusion in Semantic Segmentation
2.6 Fusion in Object Tracking
2.7 Summary
References
Part II Method
3 Multi-Sensor Calibration
3.1 Introduction
3.2 Line-Based Multi-Sensor Calibration
3.2.1 Methodology
3.2.2 Experiment
3.3 Challenges and Prospect
3.4 Summary
References
4 Multi-Sensor Object Detection
4.1 Introduction
4.2 LiDAR-Image Fusion Object Detection
4.2.1 RI-Fusion Framework
4.2.1.1 Data Preprocessing
4.2.1.2 RI-Attention Network
4.2.1.3 Point Cloud Recovery
4.2.2 Experiment
4.2.2.1 Dataset and Evaluation Metrics
4.2.2.2 Implementation Details
4.2.2.3 Results
4.2.2.4 Ablation Studies
4.3 RaDAR-LiDAR Fusion Object Detection
4.3.1 Preprocessing of 4D RaDAR Point Clouds
4.3.2 Interaction-Based Multimodal Fusion (IMMF)
4.3.3 Center-Based Multi-Scale Fusion (CMSF)
4.3.4 Experiments
4.3.4.1 Dataset
4.3.4.2 Implementation Details
4.3.4.3 Training
4.3.4.4 3D Object Detection on Astyx HiRes 2019 Dataset
4.3.4.5 Ablation Studies with M2-Fusion
4.3.4.6 Accuracy Comparison Experiments at Different Ranges
4.3.4.7 Parameter Comparison Experiment
4.3.4.8 Visualization Experiments
4.4 Challenges and Prospect
4.5 Summary
References
5 Multi-Sensor Scene Segmentation
5.1 Background
5.2 Introduction
5.3 Attention in Multimodal Fusion Segmentation
5.3.1 Network Architectures
5.3.2 Experiments and Discussion
5.4 Adaptive Strategies in Multimodal Fusion Segmentation
5.4.1 MIMF Network
5.4.2 Experiment
5.5 Video Multimodal Fusion Segmentation
5.5.1 Method
5.5.2 Experiments
5.6 Summary
5.7 Challenges and Prospect
References
6 Multi-Sensor Fusion Localization
6.1 Introduction
6.2 GF-SLAM
6.2.1 Methodology
6.2.2 Experiment
6.3 Lifelong Localization in Semi-Dynamic Environment
6.3.1 Methodology
6.3.2 Experiment
6.4 Challenges and Prospect
6.5 Summary
References
Part III Advance
7 OpenMPD: An Open Multimodal Perception Dataset
7.1 Introduction
7.2 Automated Driving-Related Datasets
7.2.1 Comprehensive Datasets
7.2.2 Characteristic Datasets
7.2.3 Our Dataset
7.3 OpenMPD
7.3.1 Platform Setup
7.3.2 Calibration
7.3.3 Collecting Route
7.3.4 Combine Annotation
7.4 Data Analysis
7.4.1 Complexity
7.4.2 Occlusion
7.4.3 Scale
7.4.4 Position
7.5 Experiment
7.5.1 Object Detection
7.5.2 Semantic Segmentation
7.6 Summary
References
8 Vehicle-Road Multi-View Interactive Data Fusion
8.1 Introduction
8.2 Methodology
8.3 Experiment
8.4 Summary
References
9 Information Quality in Data Fusion
9.1 Introduction
9.2 Uncertainty in Data Fusion
9.2.1 Methodology
9.2.2 Experiment
9.2.3 Detection Model Degradation Under Noise
9.3 Information in Data Fusion
9.3.1 Multimodal Fusion Within the Context of Information Theory
9.3.2 Multimodal Models
9.3.3 Experiment
9.4 Summary
References
10 Conclusions