Artificial Intelligence (AI) and Machine Learning (ML) are set to revolutionize all industries, and the Intelligent Transportation Systems (ITS) field is no exception. While ML models, especially deep learning models, achieve strong performance in terms of accuracy, their outcomes are not amenable to human scrutiny and can hardly be explained. This is especially problematic for safety-critical systems such as transportation systems. Explainable AI (XAI) methods have been proposed to tackle this issue by producing human-interpretable representations of machine learning models while maintaining performance. These methods hold the potential to increase public acceptance of and trust in AI-based ITS.
Artificial Intelligence (AI), particularly Machine and Deep Learning, has been significantly advancing Intelligent Transportation Systems (ITS) research and industry. Thanks to their ability to recognize and classify patterns in large datasets, AI algorithms have been successfully applied to the major problems and challenges of traffic management and autonomous driving, e.g., sensing, perception, prediction, detection, and decision-making. However, in their current incarnation, AI models, especially Deep Neural Networks (DNNs), suffer from a lack of interpretability. Indeed, the inherent structure of a DNN does not readily provide insight into its internal workings. This hinders the use and acceptance of these “black-box” models in safety-critical systems like ITS. Transportation often involves life-or-death decisions; entrusting such decisions to a system that cannot explain or justify itself presents obvious dangers. Hence, explainability and ethical AI are coming under increasing scrutiny in the context of intelligent transportation.
Explainable Artificial Intelligence (XAI) is an emerging research field that aims to make AI models’ results more human-interpretable without sacrificing performance. XAI is regarded as a key enabler of ethical and sustainable AI adoption in transportation. In contrast with “black-box” systems, explainable and trustworthy intelligent transportation systems will lend themselves to easy assessment and control by system designers and regulators. This would pave the way for continual improvement, leading to enhanced performance and security as well as increased public trust.
Given its societal and technical implications, we believe that the field of XAI needs in-depth investigation in the realm of ITS, especially in a post-pandemic era. This book aims to compile into a coherent structure the state-of-the-art research and development of explainable models for ITS applications.
Features:
Provides the necessary background for newcomers to the field (both academics and interested practitioners)
Presents a timely snapshot of explainable and interpretable models in ITS applications
Discusses ethical, societal, and legal implications of adopting XAI in the context of ITS
Identifies future research directions and open problems
Author(s): Amina Adadi, Afaf Bouhoute
Publisher: CRC Press
Year: 2023
Language: English
Pages: 286
Cover
Half Title
Title Page
Copyright Page
Contents
Preface
Contributors
SECTION I: Toward Explainable ITS
CHAPTER 1: Explainable Artificial Intelligence for Intelligent Transportation Systems: Are We There Yet?
SECTION II: Interpretable Methods for ITS Applications
CHAPTER 2: Towards Safe, Explainable, and Regulated Autonomous Driving
CHAPTER 3: Explainable Machine Learning Method for Predicting Road-Traffic Accident Injury Severity in Addis Ababa City Based on a New Graph Feature Selection Technique
CHAPTER 4: COVID-19 Pandemic Effects on Traffic Crash Patterns and Injuries in Barcelona, Spain: An Interpretable Approach
CHAPTER 5: Advances in Explainable Reinforcement Learning: An Intelligent Transportation Systems Perspective
CHAPTER 6: Road-Traffic Data Collection: Handling Missing Data
CHAPTER 7: Explainability of Surrogate Models for Traffic Signal Control
CHAPTER 8: Intelligent Techniques and Explainable Artificial Intelligence for Vessel Traffic Service: A Survey
CHAPTER 9: An Explainable Model for Detection and Recognition of Traffic Road Signs
CHAPTER 10: An Interpretable Detection of Transportation Mode Considering GPS, Spatial, and Contextual Data Based on Ensemble Machine Learning
CHAPTER 11: Blockchain and Explainable AI for Trustworthy Autonomous Vehicles
SECTION III: Ethical, Social, and Legal Implications of XAI in ITS
CHAPTER 12: Ethical Decision-Making under Different Perspective-Taking Scenarios and Demographic Characteristics: The Case of Autonomous Vehicles
Index