Visual Object Tracking using Deep Learning


The text comprehensively discusses tracking architectures under stochastic and deterministic frameworks, presenting experimental results for each framework with qualitative and quantitative analysis. It covers deep learning techniques for feature extraction, template matching, and network training in tracking algorithms.

Author(s): Ashish Kumar
Publisher: CRC Press
Year: 2023

Language: English
Pages: 216

Cover
Half Title
Title Page
Copyright Page
Table of Contents
Preface
Author bio
Chapter 1: Introduction to visual tracking in video sequences
1.1 Overview of visual tracking in video sequences
1.2 Motivation and challenges
1.3 Real-time applications of visual tracking
1.4 Emergence from conventional to deep learning approaches
1.5 Performance evaluation criteria
1.6 Summary
References
Chapter 2: Research orientation for visual tracking models: Standards and models
2.1 Background and preliminaries
2.2 Conventional tracking methods
2.2.1 Stochastic approach
2.2.2 Deterministic approach
2.2.3 Generative approach
2.2.4 Discriminative approach
2.2.5 Multi-stage approach
2.2.6 Collaborative approach
2.3 Deep learning-based methods
2.3.1 Typical deep learning-based visual tracking methods
2.3.2 Hierarchical-feature-based visual tracking methods
2.4 Correlation filter-based visual trackers
2.4.1 Correlation filter-based trackers with context-aware strategy
2.4.2 Correlation filter-based trackers with deep features
2.5 Summary
References
Chapter 3: Saliency feature extraction for visual tracking
3.1 Feature extraction for appearance model
3.2 Handcrafted features
3.2.1 Feature extraction from vision sensors
3.2.1.1 Color feature
3.2.1.2 Texture feature
3.2.1.3 Gradient feature
3.2.1.4 Motion feature
3.2.2 Feature extraction from specialized sensors
3.2.2.1 Depth feature
3.2.2.2 Thermal feature
3.2.2.3 Audio feature
3.3 Deep learning for feature extraction
3.3.1 Deep feature extraction
3.3.2 Hierarchical feature extraction
3.4 Multi-feature fusion for efficient tracking
3.5 Summary
References
Chapter 4: Performance metrics for visual tracking: A qualitative and quantitative analysis
4.1 Introduction
4.2 Performance metrics for tracker evaluation
4.3 Performance metrics without ground truth
4.4 Performance metrics with ground truth
4.4.1 Center location error (CLE)
4.4.2 F-measure
4.4.3 Distance precision, overlap precision, and area under the curve
4.4.4 Expected average overlap, robustness, and accuracy
4.4.5 Performance plots
4.5 Summary
References
Chapter 5: Visual tracking data sets: Benchmark for evaluation
5.1 Introduction
5.2 Problems with the self-generated data sets
5.3 Salient features of visual tracking public data sets
5.3.1 Data sets for short-term traditional tracking
5.3.2 Multi-modal data sets for multi-modal tracking
5.4 Large data sets for long-term tracking
5.5 Strengths and limitations of public tracking data sets
5.6 Summary
References
Chapter 6: Conventional framework for visual tracking: Challenges and solutions
6.1 Introduction
6.2 Deterministic tracking approach
6.2.1 Mean shift and its variant-based trackers
6.2.2 Multi-modal deterministic approach
6.3 Generative tracking approach
6.3.1 Subspace learning-based trackers
6.3.2 Sparse representation-based trackers
6.3.3 Multi-modal generative approach for visual tracking
6.4 Discriminative tracking approach
6.4.1 Tracking by detection
6.4.2 Graph-based trackers
6.5 Summary
References
Chapter 7: Stochastic framework for visual tracking: Challenges and solutions
7.1 Introduction
7.2 Particle filter for visual tracking
7.2.1 State estimation using particle filter
7.2.2 Benefits and limitations of particle filter for visual tracking
7.3 Framework and procedure
7.4 Fusion of multi-features and state estimation
7.4.1 Outlier detection mechanism
7.4.2 Optimum resampling approach
7.4.3 State estimation and reliability calculation
7.5 Experimental validation of the particle filter-based tracker
7.5.1 Attribute-based performance
7.5.1.1 Illumination variation and deformation
7.5.1.2 Fast motion and motion blur
7.5.1.3 Scale variations
7.5.1.4 Partial occlusion or full occlusion
7.5.1.5 Background clutter and low resolution
7.5.1.6 Rotational variations
7.5.2 Overall performance evaluation
7.6 Discussion on PF-variants-based tracking
7.7 Summary
References
Chapter 8: Multi-stage and collaborative tracking model
8.1 Introduction
8.2 Multi-stage tracking algorithms
8.2.1 Conventional multi-stage tracking algorithms
8.2.2 Deep learning-based multi-stage tracking algorithms
8.3 Framework and procedure
8.3.1 Feature extraction and fusion strategy
8.3.1.1 Multi-feature fusion and state estimation
8.3.2 Experimental validation
8.3.2.1 Illumination variation and deformation
8.3.2.2 Fast motion and motion blur
8.3.2.3 Scale variations
8.3.2.4 Partial occlusion or full occlusion
8.3.2.5 Background clutter and low resolution
8.3.2.6 Rotational variations
8.3.2.7 Overall performance comparison
8.4 Collaborative tracking algorithms
8.5 Summary
References
Chapter 9: Deep learning-based visual tracking model: A paradigm shift
9.1 Introduction
9.2 Deep learning-based tracking framework
9.2.1 Probabilistic deep convolutional tracking
9.2.2 Tracking by detection deep convolutional tracker
9.3 Hyper-feature-based deep learning networks
9.3.1 Siamese network-based trackers
9.3.2 Specialized deep network-based trackers
9.4 Multi-modal-based deep learning trackers
9.5 Summary
References
Chapter 10: Correlation filter-based visual tracking model: Emergence and upgradation
10.1 Introduction
10.2 Correlation filter-based tracking framework
10.2.1 Context-aware correlation filter-based trackers
10.2.2 Part-based correlation filter trackers
10.2.3 Spatial regularization-based correlation filter trackers
10.3 Deep correlation filter-based trackers
10.4 Fusion-based correlation filter trackers
10.4.1 Single-model-based correlation filter trackers
10.4.2 Multi-modal-based correlation filter trackers
10.5 Discussion on correlation filter-based trackers
10.6 Summary
References
Chapter 11: Future prospects of visual tracking: Application-specific analysis
11.1 Introduction
11.2 Pruning for deep neural architecture
11.2.1 Types of network pruning
11.2.2 Benefits of pruning
11.3 Explainable AI
11.3.1 Importance of generalizability for deep neural networks
11.4 Application-specific visual tracking
11.4.1 Pedestrian tracking
11.4.2 Human activity tracking
11.4.3 Autonomous vehicle path tracking
11.5 Summary
References
Chapter 12: Deep learning-based multi-object tracking: Advancement for intelligent video analysis
12.1 Introduction
12.2 Multi-object tracking algorithms
12.2.1 Tracking by detection
12.2.2 Deep learning-based multi-object trackers (DL-MOT)
12.3 Evaluation metrics for performance analysis
12.4 Benchmark for performance evaluation
12.5 Application of MOT algorithms
12.6 Limitations of existing MOT algorithms
12.7 Summary
References
Index