Theoretical foundations and numerical methods for sparse recovery


This collection of four lecture notes is the first contribution of its kind in the field of sparse recovery. Compressed sensing, one of the important facets of the broader concept presented in the book, has by now made connections with other branches such as mathematical imaging, inverse problems, numerical analysis, and simulation. This unique collection will be of value to a broad community and may serve as a textbook for graduate courses.

Author(s): Fornasier M. (ed.)
Series: Radon Series on Computational and Applied Mathematics
Publisher: De Gruyter
Year: 2010

Language: English
Pages: 357
Tags: Mathematics; Computational Mathematics

Cover......Page 1
Theoretical Foundations and Numerical Methods for Sparse Recovery......Page 4
Preface......Page 6
Table of Contents......Page 8
Compressive Sensing and Structured Random Matrices......Page 12
1 Introduction......Page 13
2.1 Preliminaries and Notation......Page 15
2.2 Sparse Recovery......Page 17
2.3 Null Space Property and Restricted Isometry Property......Page 18
2.4 Recovery of Individual Vectors......Page 22
2.5 Coherence......Page 24
2.6 Restricted Isometry Property of Gaussian and Bernoulli Random Matrices......Page 26
3 Structured Random Matrices......Page 27
4 Random Sampling in Bounded Orthonormal Systems......Page 29
4.1 Bounded Orthonormal Systems......Page 30
4.2 Nonuniform Recovery......Page 36
4.3 Uniform Recovery......Page 37
5 Partial Random Circulant Matrices......Page 39
6 Tools from Probability Theory......Page 40
6.1 Basics on Probability......Page 41
6.2 Moments and Tails......Page 43
6.3 Rademacher Sums and Symmetrization......Page 44
6.4 Scalar Khintchine Inequalities......Page 45
6.5 Noncommutative Khintchine Inequalities......Page 51
6.6 Rudelson’s Lemma......Page 57
6.7 Decoupling......Page 58
6.8 Noncommutative Khintchine Inequalities for Decoupled Rademacher Chaos......Page 60
6.9 Dudley’s Inequality......Page 63
6.10 Deviation Inequalities for Suprema of Empirical Processes......Page 70
7 Proof of Nonuniform Recovery Result for Bounded Orthonormal Systems......Page 71
7.1 Nonuniform Recovery with Coefficients of Random Signs......Page 72
7.2 Condition Number Estimate for Column Submatrices......Page 73
7.3 Finishing the proof......Page 77
8.1 Start of Proof......Page 78
8.2 The Crucial Lemma......Page 79
8.3 Covering Number Estimate......Page 81
8.4 Finishing the Proof of the Crucial Lemma......Page 84
8.5 Completing the Proof of Theorem 8.1......Page 86
8.6 Strengthening the Probability Estimate......Page 87
9 Proof of Recovery Theorem for Partial Circulant Matrices......Page 89
9.1 Coherence......Page 90
9.2 Conditioning of Submatrices......Page 91
9.3 Completing the Proof......Page 95
10.1 Covering Numbers for the Unit Ball......Page 96
10.2 Integral Estimates......Page 97
1 Introduction......Page 107
1.1 Notations......Page 109
2.1.1 Adaptive and Compressed Acquisition......Page 110
2.1.3 Optimal Performances of Encoder/Decoder Pairs......Page 111
2.2 Survey on Mathematical Analysis of Compressed Sensing......Page 114
2.2.1 An Intuition Why l1-Minimization Works Well......Page 115
2.2.2 Restricted Isometry Property and Null Space Property......Page 117
2.2.3 Performances of l1-Minimization as an Optimal Decoder......Page 118
2.2.4 Random Matrices and Optimal RIP......Page 120
3 Numerical Methods for Compressed Sensing......Page 121
3.1.1 The Homotopy Method......Page 122
3.1.2 Iteratively Reweighted Least Squares......Page 125
3.1.3 Extensions to the Minimization of Functionals with Total Variation Terms......Page 139
3.1.4 Iterative Hard Thresholding......Page 145
4 Numerical Methods for Sparse Recovery......Page 150
4.1 Iterative Soft-Thresholding in Hilbert Spaces......Page 152
4.1.1 The Surrogate Functional......Page 153
4.1.2 The Algorithm and Preliminary Convergence Properties......Page 154
4.1.3 Weak Convergence of the Algorithm......Page 155
4.1.4 Strong Convergence of the Algorithm......Page 157
4.2 Principles of Acceleration......Page 159
4.2.1 From the Projected Gradient Method to Iterative Soft-Thresholding with Decreasing Thresholding Parameter......Page 160
4.2.2 Sample of Analysis of Acceleration Methods......Page 164
5.1 Domain Decomposition Methods for l1-Minimization......Page 170
5.1.1 Weak Convergence of the Sequential Algorithm......Page 172
5.1.2 Strong Convergence of the Sequential Algorithm......Page 177
5.1.3 A Parallel Domain Decomposition Method......Page 183
5.2 Domain Decomposition Methods for Total Variation Minimization......Page 184
5.2.1 The Overlapping Domain Decomposition Algorithm......Page 186
5.2.2 Local Minimization by Lagrange Multipliers......Page 188
5.2.3 Convergence of the Sequential Alternating Subspace Minimization......Page 192
5.2.4 A Parallel Algorithm......Page 205
5.2.5 Applications and Numerics......Page 206
Sparse Recovery in Inverse Problems......Page 217
1.1 Road map of the chapter......Page 218
1.2 Remarks on sparse recovery algorithms......Page 219
2 Classical Inverse Problems......Page 220
2.1 Preliminaries......Page 221
2.2 Regularization Theory......Page 225
3.1 Landweber Iteration and Its Discretization......Page 228
3.2 Regularization Theory for A-Priori Parameter Rules......Page 231
3.3 Regularization Theory by A-Posteriori Parameter Rules......Page 232
4 Tikhonov Regularization with Sparsity Constraints......Page 237
4.1 Regularization Result for A-Priori Parameter Rules......Page 238
4.2 Convergence Rates for A-Priori Parameter Rules......Page 239
4.3 Regularization Result for A-Posteriori Parameter Rules......Page 242
4.4 Convergence Rates for A-Posteriori Parameter Rules......Page 245
5 Iterated Shrinkage for Nonlinear Ill-Posed Problems......Page 246
5.1 Properties of the Surrogate Functional......Page 247
5.2 Minimization of the Surrogate Functionals......Page 248
5.3 Convergence Properties......Page 250
5.4 Application of Sparse Recovery to SPECT......Page 252
6 Projected Accelerated Steepest Descent for Nonlinear Ill-Posed Problems......Page 256
6.1 Preliminaries......Page 258
6.2 Projected Steepest Descent and Convergence......Page 259
6.3 Some Algorithmic Aspects......Page 267
6.4 Numerical Experiment: A Nonlinear Sensing Problem......Page 269
1.1.1 The Bayesian Approach to Image Reconstruction......Page 280
1.1.2 Variational Models in the Continuous Setting......Page 282
1.1.3 A Convex, Yet Edge-preserving Approach......Page 285
1.2 Some Theoretical Facts: Definitions, Properties......Page 287
1.2.2 An Equivalent Definition......Page 288
1.2.3 Main Properties of the Total Variation......Page 289
1.2.4 Functions with Bounded Variation......Page 290
1.3.1 Definition, and an Inequality......Page 293
1.3.2 The Reduced Boundary, and a Generalization of Green’s Formula......Page 294
1.4 The Co-area Formula......Page 296
1.5 The Derivative of a BV Function......Page 297
2.1 Perimeter Minimization......Page 300
2.2.1 The Euler–Lagrange Equation......Page 301
2.2.2 The Problem Solved by the Level Sets......Page 304
2.2.3 A Few Explicit Solutions......Page 308
2.2.4 The Discontinuity Set......Page 310
2.2.5 Regularity......Page 313
3.1 Discrete Problem......Page 314
3.2.1 Convex Functions – Legendre–Fenchel Conjugate......Page 316
3.2.2 Subgradient......Page 319
3.2.3 The Dual of (ROF)......Page 320
3.2.4 “Proximal” Operator......Page 321
3.3 Gradient Descent......Page 322
3.3.1 Splitting, and Projected Gradient Descent......Page 323
3.3.2 Improvements: Optimal First-order Methods......Page 326
3.5 Primal-dual Approaches......Page 327
3.6 Graph-cut Techniques......Page 330
3.7 Comparisons of the Numerical Algorithms......Page 331
4.1 Total Variation Based Image Deblurring and Zooming......Page 334
4.2 Total Variation with L1 Data Fidelity Term......Page 335
4.3.1 Convex Representation......Page 336
4.4 The Minimal Partition Problem......Page 344