Self-driving vehicles are a rapidly growing area of research and expertise. Theories and Practices of Self-Driving Vehicles presents a comprehensive introduction to the technology of self-driving vehicles across the three domains of perception, planning, and control. The book systematically introduces vehicle systems from principles to practice, covering the fundamentals of ROS programming, machine learning and deep learning, and core modules such as environmental perception and sensor fusion. It also introduces advanced control algorithms and important areas of new research. The title offers engineers, technicians, and students an accessible handbook to the entire technology stack of a self-driving vehicle.
Theories and Practices of Self-Driving Vehicles presents an introduction to self-driving vehicle technology from principles to practice. Ten chapters cover the full technology stack of a driverless vehicle. Written by authors experienced in both industry and research, the book offers an accessible and systematic treatment of the subject.
Author(s): Qingguo Zhou, Zebang Shen, Binbin Yong, Rui Zhao, Peng Zhi
Publisher: Elsevier
Year: 2022
Language: English
Pages: 342
City: Amsterdam
Front Cover
THEORIES AND PRACTICES OF SELF-DRIVING VEHICLES
Copyright
Contents
Contributors
1 - First acquaintance with unmanned vehicles
1.1 What are unmanned vehicles?
1.1.1 Classification standards for unmanned vehicles
1.1.2 How difficult is the implementation of unmanned vehicles?
1.2 Why do we need unmanned vehicles?
1.2.1 Improvement of road traffic safety
1.2.2 Alleviation of urban traffic congestion
1.2.3 Improvement of travel efficiency
1.2.4 Lowering the barrier to entry for drivers
1.3 Basic framework of the unmanned vehicle system
1.3.1 Environmental perception
1.3.2 Localization
1.3.3 Mission planning
1.3.4 Behavior planning
1.3.5 Motion planning
1.3.6 Control system
1.3.7 Summary
1.4 Development environment configuration
1.4.1 Simple environment installation
1.4.2 Install the robot operating system (ROS)
1.4.3 Install OpenCV
References
2 - Introduction to robot operating system
2.1 ROS introduction
2.1.1 Brief introduction to ROS
2.1.1.1 What is ROS?
2.1.1.2 History of ROS
2.1.1.3 Features of ROS
2.1.2 Concept of ROS
2.1.2.1 Master
2.1.2.2 Node
2.1.2.3 Topic
2.1.2.4 Message
2.1.3 Catkin build system
2.1.4 Project organization structure in ROS
2.1.4.1 CMakeLists.txt
2.1.5 Practice based on husky simulator
2.1.6 Basic ROS programming
2.1.6.1 ROS C++ client library (roscpp)
2.1.6.1.1 Node handle
2.1.6.1.2 ROS logging method
2.1.6.2 Write simple publish and subscribe code
2.1.6.2.1 Object-oriented node coding
2.1.6.3 Parameter services in ROS
2.1.6.4 Small case based on husky robot
2.1.7 ROS services
2.1.8 ROS action
2.1.9 Common tools in ROS
2.1.9.1 Rviz
2.1.9.2 rqt
2.1.9.3 TF coordinate conversion system
2.1.9.4 URDF and SDF
References
3 - Localization for unmanned vehicle
3.1 Principle of achieving localization
3.2 ICP algorithm
3.3 Normal distribution transform
3.3.1 Introduction to NDT algorithm
3.3.2 Basic steps of NDT algorithm
3.3.3 Advantages of NDT algorithm
3.3.4 Algorithm example
3.4 Localization system based on global positioning system (GPS) + inertial navigation system (INS)
3.4.1 Localization principle
3.4.2 Localization fusion of different sensors
3.5 SLAM-based localization system
3.5.1 SLAM localization principle
3.5.2 SLAM applications
References
4 - State estimation and sensor fusion
4.1 Kalman filter and state estimation
4.1.1 What is the Kalman filter?
4.1.2 Kalman filter
4.1.2.1 State prediction
4.1.2.2 Calculation of prediction error
4.1.2.3 Measurement error
4.1.2.4 Calculation of Kalman gain
4.1.2.5 Calculation of optimal estimate
4.1.2.6 Calculation of the error of the optimal estimate
4.1.3 Kalman filter in autonomous vehicle sensing module
4.1.3.1 Sensors for autonomous vehicle perception module
4.1.3.2 Kalman filter-based pedestrian localization estimation
4.1.3.3 Kalman filter pedestrian state estimation in Python
4.2 Advanced motion modeling and EKF
4.2.1 Advanced motion models for vehicle tracking
4.2.2 EKF
4.2.2.1 Jacobian matrix
4.2.2.2 Process noise
4.2.2.3 Measurement
4.2.2.4 Python implementation
4.3 UKF
4.3.1 Movement model
4.3.2 Nonlinear processing/measurement models
4.3.3 Lossless transformation
4.3.4 Prediction
4.3.4.1 Prediction of sigma point
4.3.4.2 Predicted mean and variance
4.3.5 Measurement updates
4.3.5.1 Update status
4.3.6 Summary
References
5 - Introduction of machine learning and neural networks
5.1 Basic concepts of machine learning
5.2 Supervised learning
5.2.1 Empirical risk minimization
5.2.2 Overfitting and underfitting
5.2.3 "Certain algorithm"-gradient descent algorithm
5.2.4 Summary
5.3 Fundamentals of neural network
5.3.1 Basic structure of the neural network
5.3.2 Unlimited capacity-fitting arbitrary functions
5.3.3 Forward transmission
5.3.4 Stochastic gradient descent
5.4 Using Keras to implement the neural network
5.4.1 Data preparation
5.4.2 A small change in three-layer neural network-deep feedforward neural network
5.4.3 Summary
References
6 - Deep learning and visual perception
6.1 Deep feedforward neural networks-why do they need to be deep?
6.1.1 Efficiency of model training under big data
6.1.2 Representation learning
6.2 Regularization technology applied to deep neural networks
6.2.1 Data augmentation
6.2.2 Early stopping
6.2.3 Parameter normalization penalties
6.2.4 Dropout
6.3 Hands-on practice-traffic sign recognition
6.3.1 Belgium traffic sign dataset
6.3.2 Data preprocessing
6.3.3 Leverage Keras to construct and train a deep feedforward network
6.4 Introduction to convolutional neural networks
6.4.1 What is convolution and the motivation for convolution
6.4.2 Sparse interactions
6.4.3 Parameter sharing
6.4.4 Equivariant representations
6.4.5 Convolutional neural network
6.4.6 Some details of convolution
6.5 Vehicle detection based on YOLO2
6.5.1 Pretrained classification network
6.5.2 Train the detection network
6.5.3 Loss function of YOLO
6.5.4 Test
6.5.5 Vehicle and pedestrian detection based on YOLO
References
7 - Transfer learning and end-to-end self-driving
7.1 Transfer learning
7.2 End-to-end self-driving
7.3 End-to-end self-driving simulation
7.3.1 Selection of the simulator
7.3.2 Data collection and processing
7.3.3 Construction of the deep neural network model
7.3.3.1 LeNet deep self-driving model
7.3.3.2 NVIDIA deep self-driving model
7.4 Summary of this chapter
References
8 - Getting started with self-driving planning
8.1 A∗ algorithm
8.1.1 Directed graph
8.1.2 Breadth-first search (BFS) algorithm
8.1.3 Data structure
8.1.4 How to generate a route
8.1.5 Directional search (heuristic)
8.1.6 Dijkstra algorithm
8.1.7 A∗ algorithm
8.2 Hierarchical finite state machine (HFSM) and autonomous vehicle behavior planning
8.2.1 Design criteria for decision-making plan system of autonomous vehicles
8.2.2 FSM
8.2.3 Hierarchical FSM
8.2.4 Use of state machines in behavior planning
8.3 Autonomous vehicle route generation based on free boundary cubic spline interpolation
8.3.1 Cubic spline interpolation
8.3.2 Cubic spline interpolation algorithm
8.3.3 Using Python to implement cubic spline interpolation for path generation
8.4 Motion planning method of the autonomous vehicle based on Frenet optimization trajectory
8.4.1 Why use the Frenet coordinate system
8.4.2 Jerk minimization and the fifth-degree polynomial trajectory solution
8.4.3 Collision avoidance
8.4.4 Example of motion planning for autonomous vehicles based on Frenet optimization trajectory
References
9 - Vehicle model and advanced control
9.1 Kinematic bicycle model and dynamic bicycle model
9.1.1 Bicycle model
9.1.2 Kinematic bicycle model
9.1.3 Dynamic bicycle model
9.2 Rudiments of autonomous vehicle control
9.2.1 Need for control theory
9.2.2 PID control algorithm
9.2.2.1 Proportional control
9.2.2.2 Proportional and derivative control
9.3 MPC based on kinematic model
9.3.1 Applying PID controller forwards to steering control
9.3.2 Predictive model
9.3.3 Online optimal loop based on time series
9.3.4 Feedback correction
9.4 Trajectory tracking
References
10 - Deep reinforcement learning and application in self-driving
10.1 Overview of reinforcement learning
10.2 Reinforcement learning
10.2.1 Markov decision process
10.2.2 Constituent elements
10.2.2.1 Policy
10.2.2.2 Reward
10.2.3 Value function
10.3 Approximate value function
10.4 Deep Q network algorithm
10.4.1 Q learning algorithm
10.4.2 DQN algorithm
10.4.2.1 Reward function
10.4.2.2 Objective function
10.5 Policy gradient
10.6 Deep deterministic policy gradient and TORCS game control
10.6.1 About TORCS game
10.6.2 TORCS game environment installation
10.6.3 Deep deterministic policy gradient algorithm
10.6.3.1 Theory
10.6.3.2 Reward function setting
10.6.3.3 Running program
10.7 Summary
References
Index
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
R
S
T
U
V
W
Y
Back Cover