Harmonic and Applied Analysis: From Radon Transforms to Machine Learning

Deep connections exist between harmonic and applied analysis and the diverse yet related fields of machine learning, data analysis, and imaging science. This volume explores these rapidly growing areas and features contributions presented at the second and third editions of the Summer Schools on Applied Harmonic Analysis, held at the University of Genova in 2017 and 2019. Each chapter introduces essential material and then demonstrates connections to more advanced research, aiming to provide an accessible entry point for students and researchers. Topics covered include ill-posed problems; concentration inequalities; regularization and large-scale machine learning; unitarization of the horocyclic Radon transform on symmetric spaces; and proximal gradient methods for machine learning and imaging.
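
The volume develops these methods in full; as a purely illustrative aside, the sketch below shows the basic proximal gradient (ISTA) iteration for the lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁, one instance of the proximal gradient methods mentioned above. The function names, step-size choice, and data are illustrative assumptions, not taken from the book.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                # gradient of the smooth (least-squares) part
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 penalty
    return x

# Tiny usage example on synthetic sparse-recovery data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
print(ista(A, b, lam=0.1)[:10])
```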

Author(s): Filippo De Mari, Ernesto De Vito (Editors)
Series: Applied and Numerical Harmonic Analysis
Edition: 1
Publisher: Birkhäuser
Year: 2021

Language: English
Pages: 302
Tags: Applied Harmonic Analysis, Concentration Inequalities, Ill-posed Problems, Machine Learning, Imaging Science, Radon Transform

ANHA Series Preface
Preface
Contents
Contributors
Unitarization of the Horocyclic Radon Transform on Symmetric Spaces
1 Introduction
2 Preliminaries
3 Symmetric Spaces
3.1 Riemannian Globally Symmetric Spaces
3.2 Types of Symmetric Spaces
3.3 Boundary of a Symmetric Space
3.4 Changing the Reference Point
3.5 Horocycles
4 Analysis on Symmetric Spaces
4.1 Measures
4.2 The Helgason–Fourier Transform
4.3 The Horocyclic Radon Transform
5 Unitarization and Intertwining
References
Entropy and Concentration
1 Introduction
2 The Entropy Method
2.1 Markov's Inequality and Exponential Moment Method
2.2 Entropy and Concentration
2.3 Entropy and Energy Fluctuations
2.4 Product Spaces and Conditional Operations
2.5 The Subadditivity of Entropy
2.6 Summary of Results
3 First Applications of the Entropy Method
3.1 The Efron–Stein Inequality
3.2 The Bounded Difference Inequality
3.3 Vector-Valued Concentration
3.4 Rademacher Complexities and Generalization
3.5 The Bennett and Bernstein Inequalities
3.6 Vector-Valued Concentration Revisited
4 Inequalities for Lipschitz Functions and Dimension Free Bounds
4.1 Gaussian Concentration
4.2 Exponential Efron–Stein Inequalities
4.3 Convex Lipschitz Functions
4.4 The Operator Norm of a Random Matrix
5 Beyond Uniform Bounds
5.1 Self-boundedness
5.2 Convex Lipschitz Functions Revisited
5.3 Decoupling
5.4 Quadratic Forms
5.5 The Supremum of an Empirical Process
5.6 Another Version of Bernstein's Inequality
6 Appendix I. Table of Notation
References
Ill-Posed Problems: From Linear to Nonlinear and Beyond
1 Introduction
2 Linear Inverse Problems
2.1 The Moore–Penrose Generalized Inverse
2.2 Compact Operators
2.3 General Bounded Linear Transforms
3 Limited Data Computerized Tomography
3.1 Truncated Projections
4 Regularization
4.1 Miller's Theory
4.2 Regularization for the Truncated Hilbert Transform
5 Nonlinear Inverse Problems
6 Phase Retrieval
7 Instabilities in Image Classification
References
Proximal Gradient Methods for Machine Learning and Imaging
1 Introduction
2 Preliminaries on Convex Analysis
2.1 Basic Notations
2.2 Convex Sets and Functions
2.3 Differentiability and Convexity
2.4 Calculus for Nonsmooth Convex Functions
2.5 The Legendre–Fenchel Transform
2.6 The Fenchel–Rockafellar Duality
2.7 Bibliographical Notes
3 The Proximal Gradient Method
3.1 Nonexpansive and Averaged Operators
3.2 The Proximity Operator
3.3 Worst Case Convergence Analysis
3.4 Convergence Analysis Under Strong Convexity Assumptions
3.5 Convergence Analysis Under Geometric Assumptions
3.6 Accelerations
3.7 Bibliographical Notes
4 Stochastic Minimization Algorithms
4.1 The Stochastic Subgradient Method
4.2 Stochastic Proximal Gradient Method
4.3 Randomized Block-Coordinate Descent
4.4 Bibliographical Notes
5 Dual Algorithms
5.1 A Framework for Dual Algorithms
5.2 Dual Proximal Gradient Algorithms
5.3 Bibliographical Notes
6 Applications
6.1 Sparse Recovery
6.2 Image Denoising
6.3 Machine Learning
6.4 Bibliographical Notes
References
Regularization: From Inverse Problems to Large-Scale Machine Learning
1 Introduction
2 Learning as an Inverse Problem
2.1 Inverse Problems
2.2 Statistical Learning Theory
2.3 Learning as an Inverse Problem
2.4 Linear Inverse Problem Associated to Finite Data
3 Reproducing Kernel Hilbert Spaces and Related Operators
3.1 Reproducing Kernels
3.2 The Operators Defined by the Kernel
3.3 The Linear Kernel Case and Compressed Sensing
4 Tikhonov Regularization
4.1 Numerical Aspects
4.2 Error Analysis for Tikhonov Regularization
4.3 Error Decomposition
4.4 Approximation Error
4.5 Sample Error
4.6 Proof of Theorem 18
4.7 Optimization Enters the Game: Statistical and Computational Trade-Offs
5 Iterative Regularization
5.1 Landweber Iteration
5.2 A Regularization View on Gradient Descent
5.3 Landweber Iteration and Iterative Regularization
5.4 Proof Sketch
5.5 A Regularization View on Optimization
5.6 Accelerated Iterative Regularization
5.7 Error Bounds and the Effect of Acceleration
5.8 Incremental and Stochastic Iterative Regularization
5.9 Error Bounds
6 Regularization with Stochastic Projections
6.1 Projection Regularization
6.2 Nyström Approximations
6.3 Error Bounds
6.4 Regularization by Subsampling
6.5 Random Features
7 Conclusions
References
Appendix Applied and Numerical Harmonic Analysis (104 Volumes)