Smart Computer Vision

This book addresses and disseminates research and development in the application of intelligent techniques to computer vision, the field concerned with enabling computers to see, identify, and process images as human vision does and to produce appropriate output. Its contributions include theory, case studies, and intelligent techniques pertaining to computer vision applications, and they help readers grasp the essence of recent advances in this complex field. The audience includes researchers, professionals, practitioners, and students from academia and industry who work in this interdisciplinary area. The authors aim to inspire future research, from both theoretical and practical viewpoints, that will spur further advances in the field.

Author(s): B. Vinoth Kumar, P. Sivakumar, B. Surendiran, Junhua Ding
Series: EAI/Springer Innovations in Communication and Computing
Publisher: Springer
Year: 2023

Language: English
Pages: 358
City: Cham

Preface
Contents
A Systematic Review on Machine Learning-Based Sports Video Summarization Techniques
1 Introduction
2 Two Decades of Research in Sports Video Summarization
2.1 Feature-Based Approaches
2.2 Cluster-Based Approaches
2.3 Excitement-Based Approaches
2.4 Key Event-Based Approaches
2.5 Object Detection
2.6 Performance Metrics
2.6.1 Objective Metrics
2.6.2 Subjective Metrics Based on User Experience
3 Evolution of Ideas, Algorithms, and Methods for Sports Video Summarization
4 Scope for Future Research in Video Summarization
4.1 Common Weaknesses of Existing Methods
4.1.1 Audio-Based Methods
4.1.2 Shot and Boundary Detection
4.1.3 Resolution and Samples
4.1.4 Events Detection
4.2 Scope for Further Research
5 Conclusion
References
Shot Boundary Detection from Lecture Video Sequences Using Histogram of Oriented Gradients and Radiometric Correlation
1 Introduction
2 Shot Boundary Detection and Key Frame Extraction
2.1 Feature Extraction
2.2 Radiometric Correlation for Interframe Similarity Measure
2.3 Entropic Measure for Distinguishing Shot Transitions
2.4 Key Frame Extraction
3 Results and Discussions
3.1 Analysis of Results
3.2 Discussions and Future Works
4 Conclusions
References
Detection of Road Potholes Using Computer Vision and Machine Learning Approaches to Assist the Visually Challenged
1 Introduction
2 Related Works
3 Methodologies
3.1 Pothole Detection Using Machine Learning and Computer Vision
3.2 Pothole Detection Using Deep Learning Model
4 Implementation
5 Result Analysis
6 Conclusion
References
Shape Feature Extraction Techniques for Computer Vision Applications
1 Introduction
2 Feature Extraction
3 Various Techniques in Feature Extraction
3.1 Histograms of Edge Directions
3.2 Harris Corner Detector
3.3 Scale-Invariant Feature Transform
3.4 Eigenvector Approaches
3.5 Angular Radial Partitioning
3.6 Edge Pixel Neighborhood Information
3.7 Color Histograms
3.8 Edge Histogram Descriptor
3.9 Shape Descriptor
4 Shape Signature
4.1 Centroid Distance Function
4.2 Chord Length Function
4.3 Area Function
5 Real-Time Applications of Shape Feature Extraction and Object Recognition
5.1 Fruit Recognition
5.2 Leaf Recognition
5.3 Object Recognition
6 Recent Works
7 Summary and Conclusion
References
GLCM Feature-Based Texture Image Classification Using Machine Learning Algorithms
1 Introduction
2 GLCM
2.1 Computation of GLCM Matrix
2.2 GLCM Features
2.2.1 Energy
2.2.2 Entropy
2.2.3 Sum Entropy
2.2.4 Difference Entropy
2.2.5 Contrast
2.2.6 Variance
2.2.7 Sum Variance
2.2.8 Difference Variance
2.2.9 Local Homogeneity or Inverse Difference Moment (IDM)
2.2.10 Local Homogeneity or Inverse Difference Moment (IDM)
2.2.11 RMS Contrast
2.2.12 Cluster Shade
2.2.13 Cluster Prominence
3 Machine Learning Algorithms
4 Dataset Description
5 Experiment Results
5.1 Performance Metrics
5.1.1 Sensitivity
5.1.2 Specificity
5.1.3 False Positive Rate (FPR)
5.1.4 False Negative Rate (FNR)
6 Conclusion
References
Progress in Multimodal Affective Computing: From Machine Learning to Deep Learning
1 Introduction
2 Available Datasets
2.1 DEAP Dataset
2.2 AMIGOS Dataset
2.3 CHEVAD 2.0 Dataset
2.4 RECOLA Dataset
2.5 IEMOCAP Dataset
2.6 CMU-MOSEI Dataset
2.7 SEED IV Dataset
2.8 AVEC 2014 Dataset
2.9 SEWA Dataset
2.10 AVEC 2018 Dataset
2.11 DAIC-WOZ Dataset
2.12 UVA Toddler Dataset
2.13 MET Dataset
3 Features for Affect Recognition
3.1 Audio Modality
3.2 Visual Modality
3.3 Textual Modality
3.4 Facial Expression
3.5 Biological Signals
4 Various Fusion Techniques
4.1 Decision-Level or Late Fusion
4.2 Hierarchical Fusion
4.3 Score-Level Fusion
4.4 Model-Level Fusion
5 Multimodal Affective Computing Techniques
5.1 Machine Learning-Based Techniques
5.2 Deep Learning-Based Techniques
6 Discussion
7 Conclusion
References
Content-Based Image Retrieval Using Deep Features and Hamming Distance
1 Introduction
1.1 Content-Based Image Retrieval: Review
2 Background: Basics of CNN
3 Proposed Model
3.1 Transfer Learning Using Pretrained Weights
3.2 Feature Vector Extraction
3.3 Clustering
3.4 Retrieval Using Distance Metrics
3.4.1 Euclidean Distance
3.4.2 Hamming Distance
4 Dataset Used
5 Results and Discussions
5.1 Retrieval Using Euclidean Distance
5.1.1 Retrieving 40 Images
5.1.2 Retrieving 50 Images
5.1.3 Retrieving 60 Images
5.1.4 Retrieving 70 Images
5.2 Retrieval Using Hamming Distance
5.2.1 Retrieving 40 Images
5.2.2 Retrieving 50 Images
5.2.3 Retrieving 60 Images
5.2.4 Retrieving 70 Images
5.3 Retrieval Analysis Between Euclidean Distance and Hamming Distance
5.4 Comparison with State-of-the-Art Models
6 Conclusion
7 Future Works
References
Bioinspired CNN Approach for Diagnosing COVID-19 Using Images of Chest X-Ray
1 Introduction
2 Related Work
3 Approaches and Tools
3.1 CIFAR Dataset of Chest X-Ray Image
3.2 Image Scaling in Preprocessing
3.3 Training and Validation Steps
3.4 Deep Learning Model
4 Cuckoo-Based Hash Function
5 Research Data and Model Settings
5.1 Estimates of the Proposed Model's Accuracy
6 Conclusion
References
Initial Stage Identification of COVID-19 Using Capsule Networks
1 Introduction
2 Literature Review
3 Dataset Description
4 Methodology
4.1 Overview of Layers Present in Convolutional Neural Networks
4.1.1 Convolutional Layer
4.1.2 Stride (S)
4.1.3 Pooling Layer
4.1.4 ReLU Activation Functions
4.1.5 Generalized Supervised Deep Learning Flowchart
5 Proposed Work
5.1 Capsule Networks
5.2 Proposed Architecture
5.3 Metrics for Evaluation
5.3.1 Accuracy
5.3.2 Precision
5.3.3 Recall
5.3.4 F1-Score
5.3.5 False Positive Rate (FPR)
6 Conclusion
References
Deep Learning in Autoencoder Framework and Shape Prior for Hand Gesture Recognition
1 Introduction
2 State-of-the-Art Techniques
3 Proposed Gesture Recognition Scheme
3.1 Preprocessing
3.1.1 Color Space Conversion
3.1.2 Background Removal
3.1.3 Bounding Box and Resizing
3.2 Feature Extraction
3.3 Classification
4 Simulation Results and Discussions
5 Conclusions and Future Works
References
Hierarchical-Based Semantic Segmentation of 3D Point Cloud Using Deep Learning
1 Introduction
2 Related Work
3 NN-Based Point Cloud Segmentation Using Octrees
3.1 Box Search by Octrees
3.2 Feature Hierarchy
3.3 Permutation Invariance
3.4 Size Invariance
3.5 Architecture Details
4 Experiments
4.1 Implementation and Dataset Details
4.2 List of Experiments
4.3 Learning Curves
4.4 Qualitative Results
4.4.1 ShapeNet Dataset
5 Conclusions and Future Work
References
Convolution Neural Network and Auto-encoder Hybrid Scheme for Automatic Colorization of Grayscale Images
1 Introduction
2 Basics of Convolution Neural Network
2.1 Convolutional Layer
2.2 Pooling Layer
2.3 Fully Connected Layer
2.4 Overfitting or Dropout
2.5 Activation Functions
3 Auto-encoder and Decoder Model
4 Proposed Research Methodology
4.1 Data Description and Design Approaches
5 Experimental Analysis and Result
5.1 Classification and Validation Process
5.2 Prediction
6 Conclusion
References
Deep Learning-Based Open Set Domain Hyperspectral Image Classification Using Dimension-Reduced Spectral Features
1 Introduction
2 Methodology
2.1 Dataset
2.2 Salinas
2.3 Salinas A
2.4 Pavia U
3 Experiment Results
3.1 Dimensionality Reduction Based on Dynamic Mode Decomposition
3.1.1 Salinas Dataset
3.1.2 Salinas A Dataset
3.1.3 Pavia University Dataset
3.2 Dimension Reduction Using Chebyshev Polynomial Approximation
3.2.1 Salinas Dataset
3.2.2 Salinas A Dataset
3.2.3 Pavia U Dataset
4 Conclusion
References
An Effective Diabetic Retinopathy Detection Using Hybrid Convolutional Neural Network Models
1 Introduction
2 Related Work
3 Methodology
3.1 Research Objectives
3.2 Feature Selection
3.3 Proposed Models
3.3.1 CNN Model
3.3.2 CNN with SVM Classifier
3.3.3 CNN with RF Classifier
4 Experimental Results and Analysis
5 Conclusion and Future Work
References
Modified Discrete Differential Evolution with Neighborhood Approach for Grayscale Image Enhancement
1 Introduction
2 Related Works
3 Differential Evolution
3.1 Classical Differential Evolution
4 Proposed Approach
4.1 Best Neighborhood Differential Evolution (BNDE) Mapping
5 Phase I – Performance Comparison
5.1 Design of Experiments – Phase I
5.2 Results and Discussions – Phase I
6 Phase II – Image Processing Application
6.1 Design of Experiments – Phase II
6.2 Results and Discussions – Phase II
7 Conclusions
References
Swarm-Based Methods Applied to Computer Vision
Abbreviations
1 Introduction
2 Brief Description of Swarm-Based Methods
3 Some Advantages of Swarm-Based Methods
4 Swarm-Based Methods and Computer Vision
4.1 Feature Extraction
4.2 Image Segmentation
4.3 Image Classification
4.4 Object Detection
4.5 Face Recognition
4.6 Gesture Recognition
4.7 Medical Image Processing
References
Index