Deep Learning: Handbook of Statistics

Deep Learning, Volume 48 in the Handbook of Statistics series, highlights new advances in the field, presenting chapters on a variety of timely topics, including Generative Adversarial Networks for Biometric Synthesis, Data Science and Pattern Recognition, Facial Data Analysis, Deep Learning in Electronics, Pattern Recognition, Computer Vision and Image Processing, Mechanical Systems, Crop Technology and Weather, Manipulating Faces for Identity Theft via Morphing and Deepfake, Biomedical Engineering, and more.

- Provides the authority and expertise of leading contributors from an international board of authors
- Presents the latest release in the Handbook of Statistics series
- Includes the latest information on Deep Learning

Author(s): C.R. Rao, Arni S.R. Srinivasa Rao, Venu Govindaraju
Publisher: Elsevier
Year: 2023

Language: English
Commentary: Handbook of Statistics
Pages: 270

Front Cover
Deep Learning
Copyright
Contents
Contributors
Preface
Chapter 1: Exact deep learning machines
1. Introduction
2. EDLM constructions
3. Conclusions
References
Chapter 2: Multiscale representation learning for biomedical analysis
1. Introduction
2. Representation learning: Background
3. Multiscale embedding motivation
4. Theoretical framework
4.1. Local context embedding
4.2. Wide context embedding
4.3. Multiscale embedding
4.4. Postprocessing and inference for word similarity task
4.5. Evaluation scheme
5. Experiments, results, and discussion
5.1. Datasets
5.1.1. Training stage: Datasets and preprocessing
5.1.2. Testing stage: Datasets
5.2. Wide context embedding (context2vec)
5.3. Quantitative evaluation
5.3.1. Term similarity task
5.3.2. Downstream application task
5.3.3. Drug rediscovery test
5.4. Qualitative analysis
5.5. Error analysis
6. Conclusion and future work
References
Chapter 3: Adversarial attacks and robust defenses in deep learning
1. Introduction
2. Adversarial attacks
2.1. Fast gradient sign method
2.2. Projected gradient descent
2.3. DeepFool
2.4. Carlini and Wagner attack
2.5. Adversarial patch
2.6. Elastic
2.7. Fog
2.8. Snow
2.9. Gabor
2.10. JPEG
3. On-manifold robustness
3.1. Defense-GAN
3.2. Dual manifold adversarial training (DMAT)
3.2.1. On-manifold ImageNet
3.2.2. On-manifold AT cannot defend standard attacks and vice versa
3.2.3. Proposed method: Dual manifold adversarial training
3.2.4. DMAT improves generalization and robustness
3.2.5. DMAT improves robustness to unseen attacks
3.2.6. TRADES for DMAT
4. Knowledge distillation-based defenses
5. Defenses for object detector
6. Reverse engineering of deceptions via residual learning
6.1. Adversarial perturbation estimation
6.1.1. Image reconstruction
6.1.2. Feature reconstruction
6.1.3. Image classification
6.1.4. Residual recognition
6.1.5. End-to-end training
6.2. Experimental evaluation
Acknowledgments
References
Chapter 4: Deep metric learning for computer vision: A brief overview
1. Introduction
2. Background
3. Pair-based formulation
3.1. Contrastive loss
3.2. Triplet loss
3.3. N-pair loss
3.4. Multi-Similarity loss
4. Proxy-based methods
4.1. Proxy-NCA and Proxy-NCA++
4.2. Proxy Anchor loss
4.3. ProxyGML loss
5. Regularizations
5.1. Language guidance
5.2. Direction regularization
6. Conclusion
References
Chapter 5: Source distribution weighted multisource domain adaptation without access to source data
1. Introduction
1.1. Main contributions
2. Related works
2.1. Unsupervised domain adaptation
2.2. Hypothesis transfer learning
2.3. Multisource domain adaptation
2.4. Source-free multisource UDA
3. Problem setting
4. Practical motivation
5. Overall framework of DECISION (Ahmed et al., 2021)—A review
5.1. Weighted information maximization
5.2. Weighted pseudo-labeling
5.3. Optimization
6. Theoretical insights
6.1. Theoretical motivation behind DECISION
7. Source distribution dependent weights (DECISION-mlp)
8. Proof of Lemma 1
9. Experiments
9.1. Experiments on DECISION
9.1.1. Datasets
9.1.2. Baseline methods
9.2. Implementation details
9.2.1. Network architecture
9.2.2. Source model training
9.2.3. Hyper-parameters
9.3. Object recognition
9.3.1. Office
9.3.2. Office–Home
9.4. Ablation study
9.4.1. Contribution of each loss
9.4.2. Analysis on the learned weights
9.4.3. Distillation into a single model
9.5. Results and analyses of DECISION-mlp
10. Conclusions and future work
References
Chapter 6: Deep learning methods for scientific and industrial research
1. Introduction
2. Data and methods
2.1. Different types of data for deep learning
2.1.1. Numerical data
2.1.2. COVID-19 data
2.1.3. Meteorological data
2.1.3.1. Gridded meteorological data
2.1.3.2. Station-level meteorological data
2.1.3.3. Crop production data
2.1.4. Image data
2.2. Methodology
2.2.1. Transfer learning
2.2.2. Federated learning
2.2.3. Long short-term memory (LSTM)
2.2.3.1. Time division LSTM
2.2.3.2. Multivariate LSTM model for COVID-19 prediction
2.2.4. SNN and CNN
3. Applications of DL techniques for multi-disciplinary studies
3.1. Applications of DL models in tumor diagnosis
3.1.1. Performance evaluations of all models trained by numerical data sets for tumor diagnosis
3.1.2. Performance evaluations of all models trained by image data sets for tumor diagnosis
3.2. Application of DL model for classifying molecular subtypes of glioma tissues
3.3. Application of the deep learning model for the prognosis of glioma patients
3.4. Applications of DL model for predicting driver gene mutations in glioma
3.5. Application of Time Division LSTM for short-term prediction of wind speed
3.5.1. Performance evaluations of Time Division LSTM for short-term wind speed prediction
3.6. Application of LSTM for the estimation of crop production
3.6.1. Automated model for selection of optimal input data set for designing crop prediction model
3.6.2. Design and performance evaluation of crop prediction model
3.7. Classification of tea leaves
3.8. Weather integrated deep learning techniques to predict the COVID-19 cases over states in India
4. Discussion and future prospects
Acknowledgments
References
Chapter 7: On bias and fairness in deep learning-based facial analysis
1. Introduction
2. Tasks in facial analysis
2.1. Face detection and recognition
2.1.1. Face detection
2.1.2. Face verification and identification
2.2. Attribute prediction
3. Facial analysis databases for bias study
4. Evaluation metrics
4.1. Classification parity-based metrics
4.1.1. Statistical parity
4.1.2. Disparate impact (DI)
4.1.3. Equalized odds and equality of opportunity
4.1.4. Predictive parity
4.2. Score-based metrics
4.2.1. Calibration
4.2.2. Balance for positive/negative class
4.3. Facial analysis-specific metrics
4.3.1. Fairness discrepancy rate (FDR)
4.3.2. Inequity rate (IR)
4.3.3. Degree of bias
4.3.4. Precise subgroup equivalence (PSE)
5. Fairness estimation and analysis
5.1. Fairness in face detection and recognition
5.1.1. Discovery
5.1.2. Disparate impact
5.1.3. Incorporation of demographic information during model training
5.1.4. Dataset distribution during model training
5.1.5. Role of latent factors during model training
5.2. Fairness in attribute prediction
5.2.1. Discovery
5.2.2. Disparate impact
5.2.3. Counterfactual analysis
5.2.4. Role of latent factors during model training
6. Fair algorithms and bias mitigation
6.1. Face detection and recognition
6.1.1. Adversarial learning approaches
6.1.2. Pre-trained and black box approaches
6.1.3. Generative approaches
6.1.4. Bias-aware deep learning approaches
6.2. Attribute prediction
6.2.1. Adversarial approaches
6.2.2. Pre-trained and black-box approaches
6.2.3. Generative approaches
6.2.4. Bias-aware deep learning approaches
7. Meta-analysis of algorithms
8. Topography of commercial systems and patents
9. Open challenges
9.1. Fairness in presence of occlusion
9.2. Fairness across intersectional subgroups
9.3. Trade-off between fairness and model performance
9.4. Lack of benchmark databases
9.5. Variation in evaluation protocols
9.6. Unavailability of complete information
9.7. Identification of bias in models
9.8. Quantification of fairness in datasets
10. Discussion
Acknowledgment
References
Chapter 8: Manipulating faces for identity theft via morphing and deepfake: Digital privacy
1. Introduction
2. Identity manipulation techniques
3. Identity manipulation datasets
4. Identity attack detection algorithms
5. Open challenges
6. Conclusion
References
Index
Back Cover