Video Based Machine Learning for Traffic Intersections describes the development of computer vision and machine learning-based applications for Intelligent Transportation Systems (ITS) and the challenges encountered during their deployment. The book presents several novel approaches, including a two-stream convolutional network architecture for vehicle detection, tracking, and near-miss detection; an unsupervised approach to detecting near-misses in fisheye intersection videos using a deep learning model combined with a camera calibration and spline-based mapping method; and algorithms that use video analysis together with signal timing data to accurately detect and categorize events by the phase and type of conflict in pedestrian-vehicle and vehicle-vehicle interactions.
The book uses a real-time trajectory prediction approach, combined with aligned Google Maps information, to estimate vehicle travel time across multiple intersections. Novel visualization software, designed by the authors for traffic practitioners, is used to analyze the efficiency and safety of intersections. The software offers two modes, streaming and historical, both of which help traffic engineers quickly analyze trajectories and better understand traffic behavior at an intersection.
Overall, this book presents a comprehensive overview of the application of computer vision and machine learning to solve transportation-related problems. Video Based Machine Learning for Traffic Intersections demonstrates how these techniques can be used to improve safety, efficiency, and traffic flow, as well as identify potential conflicts and issues before they occur. The range of novel approaches and techniques presented offers a glimpse of the exciting possibilities that lie ahead for ITS research and development.
Key Features
Describes the development of Intelligent Transportation Systems (ITS) applications and the challenges encountered during their deployment
Provides novel visualization software designed to serve traffic practitioners in analyzing the efficiency and safety of an intersection
Demonstrates how to proactively identify potential conflict situations and develop an early warning system for real-time vehicle-vehicle and pedestrian-vehicle conflicts
Author(s): Tania Banerjee, Xiaohui Huang, Aotian Wu, Ke Chen, Anand Rangarajan, Sanjay Ranka
Publisher: CRC Press
Year: 2023
Language: English
Pages: 194
Cover
Half Title
Title Page
Copyright Page
Dedication
Contents
Disclaimer
List of Figures
List of Tables
Authors
Chapter 1: Introduction
1.1. Motivation
1.2. Data Sources
1.2.1. Intersection Controller Logs
1.2.2. Video Data
1.3. Chapter Organization
Chapter 2: Detection, Tracking, and Classification
2.1. Introduction
2.2. Computer Vision Approaches
2.2.1. Convolutional Neural Networks
2.2.2. YOLO Object Detection
2.2.3. Simple Online and Realtime Tracking (SORT)
2.2.4. DeepSORT
2.3. Two-Stream Architecture for Near-Miss Detection
2.3.1. Object Detection and Classification
2.3.2. Multiple Object Tracking
2.3.3. Metric Learning for Vehicle Reidentification
2.3.4. Image Segmentation to Determine Object Gaps
2.3.5. Near-Accident Detection
2.4. Experiments
2.4.1. A Traffic Near-Accident Dataset (TNAD)
2.4.2. Fisheye and Multi-camera Video
2.4.3. Model Training
2.4.4. Qualitative Results
2.4.5. Quantitative Results
2.4.5.1. Speed Performance
2.4.5.2. Cosine Metric Learning
2.4.5.3. Object Segmentation
2.5. Discussion
Chapter 3: Near-miss Detection
3.1. Introduction
3.2. Trajectory Generation
3.3. Signaling Status
3.4. Comparing Trajectories
3.5. Clustering
3.5.1. Distance Measure
3.5.1.1. Similarity Matrix
3.5.2. Hierarchical Clustering
3.5.2.1. Partitioning Trajectories Based on Movement Phase
3.5.2.2. Clustering Trajectories in a Partition
3.5.2.3. Finding Representative Trajectories
3.6. Anomalous Behavior
3.6.1. Signal Timing Violations
3.6.2. Trajectory Shape Violation
3.7. Near-miss Detection Framework
3.7.1. Fisheye to Cartesian Mapping
3.7.1.1. Calibration and Perspective Correction
3.7.1.2. Thin-plate Spline Mapping
3.7.2. Trajectory and Speed Computation
3.7.3. Near-Miss Detection
3.8. Experiments
3.8.1. Fisheye Video Data
3.8.2. Qualitative Performance
3.8.2.1. Fisheye to Cartesian Mapping
3.8.2.2. Trajectory and Near-miss Detection
3.8.3. Quantitative Evaluation
3.8.3.1. Computational Requirements
3.8.3.2. Trajectory and Near-miss Detection
3.9. Discussion
Chapter 4: Severe Events
4.1. Introduction
4.2. Related Work
4.3. Methodology
4.3.1. Generating Features From Trajectories
4.3.2. Categorization of Severe Events
4.3.3. Event Filtering
4.3.4. Event Modeling
4.4. Experiments
4.4.1. Pedestrian–vehicle Conflict Analysis
4.4.2. Vehicle–vehicle Conflict Analysis
4.5. Discussion
Chapter 5: Performance–Safety Trade-offs
5.1. Introduction
5.2. Related Work
5.2.1. Surrogate Safety Measures
5.2.2. Intersection Sensors
5.2.3. Intersection Safety Analysis Using Video Cameras
5.3. Background
5.3.1. Video Analysis
5.3.2. High-resolution Controller Log Analysis
5.3.3. Feature Computation
5.3.4. Categorization of Severe Events
5.4. Methodology
5.4.1. Evaluation Engine Modules
5.4.1.1. Volume Hotspot Detection Module
5.4.1.2. Conflict Hotspot Detection Module
5.4.1.3. Intersection Performance Evaluation Module
5.4.1.4. Scenario Comparison Module
5.5. Experiments
5.5.1. Intersection 1
5.5.1.1. Pedestrian Volume
5.5.1.2. P2V Conflicts
5.5.1.3. V2V Conflicts
5.5.1.4. Suggested Countermeasures
5.5.1.5. Countermeasure Evaluation: EPP
5.6. Discussion
Chapter 6: Trajectory Prediction
6.1. Introduction
6.2. Related Work
6.2.1. Prototype-based Trajectory Prediction
6.2.2. A Recurrent Neural Network (RNN) for Trajectory Prediction
6.3. System Overview and Pipeline
6.3.1. Offline Phase
6.3.2. Online Phase
6.4. Trajectory Clustering and Prototype Trajectories
6.4.1. Historical Trajectory Clustering
6.4.1.1. Clustering by Motion Direction
6.4.1.2. Clustering by Graph Spectral Clustering
6.4.2. Prototype Trajectory Generation
6.4.2.1. Complete Trajectory Determination
6.4.2.2. Averaging Complete Trajectories
6.5. Trajectory Prediction Model
6.5.1. Problem Definition
6.5.2. Curvilinear Coordinate System
6.5.2.1. ICS to CCS Transformation
6.5.2.2. CCS to ICS Transformation
6.5.3. LSTM Encoder–Decoder Model
6.5.3.1. Network Architecture
6.5.3.2. Training
6.5.3.3. Inference
6.5.3.4. Implementation Detail
6.6. Experiments
6.6.1. Data Collection and Preprocessing
6.6.2. Evaluation
6.6.3. Evaluation of Trajectory Prediction Pipeline
6.7. Conclusion
Chapter 7: Vehicle Tracking across Multiple Intersections
7.1. Introduction
7.2. Methodology
7.2.1. Multi-Object Single-Camera Tracking
7.2.2. Pairwise Signature ReID
7.2.2.1. Overall Network
7.2.2.2. Classification Loss
7.2.2.3. Verification Loss
7.2.2.4. Losses
7.2.2.5. Training and Optimization
7.2.3. Multi-Camera Vehicle Tracking
7.2.4. Travel Time Estimation
7.3. Experiments
7.3.1. Experimental Setup and Parameter Setting
7.3.2. Dataset
7.3.3. Qualitative Results
7.3.4. Quantitative Results
7.4. Discussion
Chapter 8: User Interface
8.1. Introduction
8.2. Related Work
8.3. Background
8.3.1. Video Analysis
8.3.2. Trajectory Database
8.3.3. Trajectory Processing
8.3.4. Fusion with Signal Data
8.3.5. Clustering Trajectories
8.4. Visualization
8.4.1. Streaming Visualization
8.4.1.1. Playing Video
8.4.1.2. Displaying Phases
8.4.1.3. Displaying Signal Data
8.4.1.4. Displaying Statistics
8.4.1.5. Displaying Multi-camera Views
8.4.1.6. Displaying Near-miss Events
8.4.1.7. Displaying Track Information
8.4.1.8. Other Settings
8.4.2. Historical Analysis
8.4.2.1. Selecting a Time Window for Trajectories
8.4.2.2. Selecting Phases and Clusters
8.4.2.3. Cluster Centers
8.4.2.4. Trajectories by Object Class
8.4.2.5. Heatmap for Near-misses
8.4.2.6. Individual Tracks
8.4.2.7. Anomalous Tracks
8.4.2.8. Video Play for Selected Track
8.5. Case Study: Traffic Trend Analysis
8.6. Discussion
Chapter 9: Conclusion
Appendix A: Acknowledgments for Materials
References
Index