Normalization Techniques in Deep Learning

This book presents and surveys normalization techniques with a deep analysis of their role in training deep neural networks. In addition, the author provides technical details for designing new normalization methods and network architectures tailored to specific tasks. Normalization methods can improve the training stability, optimization efficiency, and generalization ability of deep neural networks (DNNs) and have become basic components in most state-of-the-art DNN architectures. The author provides guidelines for elaborating, understanding, and applying normalization methods. This book is ideal for readers working on the development of novel deep learning algorithms and/or their applications to solve practical problems in computer vision and machine learning tasks. It also serves as a resource for researchers, engineers, and students who are new to the field and need to understand and train DNNs.
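As a quick illustration of the book's subject, below is a minimal sketch of batch normalization (the technique covered in Chapter 3) in plain NumPy. The function name, tensor shapes, and epsilon value are illustrative assumptions, not drawn from the book.

    # Minimal batch normalization sketch (illustrative, not the book's code).
    import numpy as np

    def batch_norm(x, gamma, beta, eps=1e-5):
        """Standardize a mini-batch along the batch axis, then rescale.

        x:     activations, shape (batch_size, num_features)
        gamma: learnable scale, shape (num_features,)
        beta:  learnable shift, shape (num_features,)
        """
        mean = x.mean(axis=0)                    # per-feature batch mean
        var = x.var(axis=0)                      # per-feature batch variance
        x_hat = (x - mean) / np.sqrt(var + eps)  # standardized activations
        return gamma * x_hat + beta              # recover representation

    # Example: normalize a random batch of 4 samples with 3 features.
    rng = np.random.default_rng(0)
    x = rng.normal(loc=5.0, scale=2.0, size=(4, 3))
    out = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
    print(out.mean(axis=0))  # ~0 per feature
    print(out.std(axis=0))   # ~1 per feature

The learnable scale and shift (gamma, beta) correspond to what the book calls normalization representation recovery (Section 4.3): they let the network undo the standardization if that is optimal.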

Author(s): Lei Huang
Series: Synthesis Lectures on Computer Vision
Publisher: Springer
Year: 2022

Language: English
Pages: 116
City: Cham

Preface
Reference
Acknowledgements
References
Contents
About the Author
1 Introduction
1.1 Denotations and Definitions
1.1.1 Optimization Objective
1.1.2 Neural Networks
1.1.3 Training DNNs
1.1.4 Normalization
2 Motivation and Overview of Normalization in DNNs
2.1 Theory of Normalizing Input
2.2 Towards Normalizing Activations
2.2.1 Proximal Back-Propagation Framework
2.2.2 K-FAC Approximation
2.2.3 Highlights of Motivation
3 A General View of Normalizing Activations
3.1 Normalizing Activations by Population Statistics
3.2 Local Statistics in a Sample
3.3 Batch Normalization
4 A Framework for Normalizing Activations as Functions
4.1 Normalization Area Partitioning
4.2 Normalization Operation
4.2.1 Beyond Standardization Towards Whitening
4.2.2 Variations of Standardization
4.2.3 Reduced Standardization
4.3 Normalization Representation Recovery
5 Multi-mode and Combinational Normalization
5.1 Multiple Modes
5.2 Combination
6 BN for More Robust Estimation
6.1 Normalization as Functions Combining Population Statistics
6.2 Robust Inference Methods for BN
7 Normalizing Weights
7.1 Constraints on Weights
7.2 Training with Constraints
8 Normalizing Gradients
9 Analysis of Normalization
9.1 Scale Invariance in Stabilizing Training
9.1.1 Auto-Tuning on Learning Rate
9.2 Improved Conditioning in Optimization
9.3 Stochasticity for Generalization
9.3.1 Theoretical Model for Stochasticity
9.3.2 Empirical Analyses for Stochasticity
9.4 Effects on Representation
9.4.1 Constraint on Feature Representation
9.4.2 Effect on Representational Capacity of Model
10 Normalization in Task-Specific Applications
10.1 Domain Adaptation
10.1.1 Domain Generalization
10.1.2 Robust Deep Learning Under Covariate Shift
10.1.3 Learning Universal Representations
10.2 Image Style Transfer
10.2.1 Image Translation
10.3 Training GANs
10.4 Efficient Deep Models
11 Summary and Discussion
A Appendix
A.1 Back-Propagation Through Eigenvalue Decomposition
A.2 Derivation of Constraint Number of Normalization Methods
A.3 Proofs of Theorems