Author(s): Kutyniok, Gitta; Eldar, Yonina C.
Publisher: Cambridge University Press
Year: 2012
Language: English
Pages: 558
Tags: Instrumentation; Signal processing
Cover......Page 1
Compressed Sensing......Page 3
Title......Page 5
Copyright......Page 6
Contents......Page 7
Contributors......Page 9
Preface......Page 11
1.1 Introduction......Page 15
1.2.1 Normed vector spaces......Page 18
1.2.2 Bases and frames......Page 20
1.3 Low-dimensional signal models......Page 21
1.3.1 Sparse models......Page 22
1.3.2 Finite unions of subspaces......Page 25
1.3.3 Unions of subspaces for analog signal models......Page 27
1.3.4 Low-rank matrix models......Page 28
1.4 Sensing matrices......Page 29
1.4.1 Null space conditions......Page 30
1.4.2 The restricted isometry property......Page 33
1.4.3 Coherence......Page 38
1.4.4 Sensing matrix constructions......Page 39
1.5 Signal recovery via ℓ1 minimization......Page 41
1.5.1 Noise-free signal recovery......Page 42
1.5.2 Signal recovery in noise......Page 43
1.5.3 Instance-optimal guarantees revisited......Page 49
1.5.4 The cross-polytope and phase transitions......Page 51
1.6 Signal recovery algorithms......Page 52
1.7 Multiple measurement vectors......Page 56
A.1 Proof of Theorem 1.4......Page 59
A.2 Proof of Lemma 1.3......Page 61
A.3 Proof of Lemma 1.6......Page 63
A.4 Proof of Theorem 1.13......Page 65
References......Page 66
2.1 Introduction......Page 79
2.2 Image restoration and inverse problems......Page 81
2.2.1 Traditional sparse modeling......Page 82
2.2.2 Structured sparse modeling......Page 84
2.2.3 Experimental results......Page 88
2.3 Source identification and separation via structured and collaborative models......Page 90
2.3.1 The Group Lasso......Page 92
2.3.2 The Hierarchical Lasso......Page 93
2.3.3 Collaborative Hierarchical Lasso......Page 94
2.3.4 Experimental results......Page 95
References......Page 99
3.1 Introduction......Page 102
3.2 From subspaces to unions......Page 105
3.3.1 Union of subspaces......Page 108
3.3.2 Architecture......Page 110
3.4.1 Sampling in shift-invariant subspaces......Page 112
3.4.2 Sparse union of SI subspaces......Page 113
3.4.3 Infinite measurement model and continuous to finite......Page 115
3.5.1 Signal model and sparse-SI formulation......Page 118
3.5.2 Analog compressed sensing via nonuniform sampling......Page 119
3.5.3 Modeling practical ADC devices......Page 121
3.5.4 Modulated wideband converter......Page 122
3.5.5 Hardware design......Page 126
3.5.6 Sub-Nyquist signal processing......Page 129
3.6.1 Analog signal model......Page 131
3.6.2 Compressive signal acquisition......Page 132
3.6.3 Recovery algorithms......Page 135
3.7.2 Compressive signal acquisition......Page 137
3.7.3 Recovery algorithms......Page 138
3.7.4 Applications......Page 140
3.8 Union modeling vs. finite discretization......Page 141
3.8.1 Random demodulator......Page 142
3.8.2 Finite-model sensitivity......Page 143
3.8.3 Hardware complexity......Page 145
3.8.4 Computational loads......Page 148
3.8.5 Analog vs. discrete CS radar......Page 150
3.9.1 Extending CS to analog signals......Page 152
3.9.2 Is CS a universal sampling scheme?......Page 155
Acknowledgements......Page 156
References......Page 157
4.1 Introduction......Page 162
4.1.1 The sampling scheme......Page 163
4.1.2 History of FRI......Page 165
4.1.4 Notation and conventions......Page 167
4.2.1 Definition of signals with FRI......Page 168
4.2.2 Examples of FRI signals......Page 169
4.3.1 Sampling using the sinc kernel......Page 173
4.3.2 Sampling using the sum of sincs kernel......Page 176
4.3.3 Sampling using exponential reproducing kernels......Page 180
4.3.4 Multichannel sampling......Page 183
4.4 The effect of noise on FRI recovery......Page 190
4.4.1 Performance bounds under continuous-time noise......Page 191
4.4.2 Performance bounds under sampling noise......Page 194
4.4.3 FRI techniques improving robustness to sampling noise......Page 197
4.5.1 Sampling and reconstruction in the noiseless setting......Page 199
4.5.2 Sampling and reconstruction in the presence of noise......Page 200
4.5.3 Periodic vs. semi-periodic FRI signals......Page 206
4.6.1 Sampling piecewise sinusoidal signals......Page 208
4.6.2 Signal compression......Page 209
4.6.4 Ultrasound imaging......Page 211
4.6.5 Multipath medium identification......Page 213
4.6.6 Super-resolution radar......Page 214
Cramér–Rao bounds for the sinc kernel......Page 216
Cramér–Rao bounds for the SoS kernel......Page 218
Cramér–Rao bounds for B-splines......Page 219
Cramér–Rao bounds for E-splines......Page 220
References......Page 221
5.1 Introduction......Page 224
5.2.1 Matrices and their singular values......Page 228
5.2.2 Nets......Page 229
5.2.3 Sub-gaussian random variables......Page 231
5.2.4 Sub-exponential random variables......Page 235
5.2.5 Isotropic random vectors......Page 237
5.2.6 Sums of independent random matrices......Page 241
5.3 Random matrices with independent entries......Page 242
5.3.1 Limit laws and Gaussian matrices......Page 243
5.4 Random matrices with independent rows......Page 245
5.4.1 Sub-gaussian rows......Page 246
5.4.2 Heavy-tailed rows......Page 248
5.4.3 Applications to estimating covariance matrices......Page 254
5.4.4 Applications to random sub-matrices and sub-frames......Page 256
5.5 Random matrices with independent columns......Page 258
5.5.1 Sub-gaussian columns......Page 259
5.5.2 Heavy-tailed columns......Page 263
5.6 Restricted isometries......Page 266
5.6.1 Sub-gaussian restricted isometries......Page 268
5.6.2 Heavy-tailed restricted isometries......Page 270
5.7 Notes......Page 274
References......Page 278
6.1 Introduction......Page 283
6.1.1 Denoising......Page 284
6.1.2 Inverse problems......Page 285
6.1.3 A Bayesian perspective......Page 287
6.1.4 Structured sparsity......Page 288
6.2 Bayesian adaptive sensing......Page 289
6.2.1.1 Single component generative model......Page 292
6.2.1.2 Measurement adaptation......Page 293
6.2.2.1 Multi-component generative model......Page 296
6.2.2.2 Measurement adaptation......Page 299
6.3 Quasi-Bayesian adaptive sensing......Page 300
6.3.1 Denoising using non-adaptive measurements......Page 301
6.3.2 Distilled sensing......Page 303
6.3.2.1 Analysis of distilled sensing......Page 305
6.3.3 Distillation in compressed sensing......Page 310
6.4 Related work and suggestions for further reading......Page 315
References......Page 316
7.1 Introduction......Page 319
7.1.1 Threshold bounds for ℓ1 minimization robustness......Page 322
7.1.3 Comparisons with other threshold bounds......Page 324
7.1.4 Some concepts in high-dimensional geometry......Page 327
7.2 The null space characterization......Page 328
7.3 The Grassmann angle framework for the null space characterization......Page 331
7.4 Evaluating the threshold bound ζ......Page 336
7.5 Computing the internal angle exponent......Page 338
7.6 Computing the external angle exponent......Page 340
7.7 Existence and scaling of ρ_N(δ, C)......Page 342
7.8 “Weak,” “sectional,” and “strong” robustness......Page 343
7.9 Numerical computations on the bounds of ζ......Page 346
7.10 Recovery thresholds for weighted ℓ1 minimization......Page 348
7.11 Approximate support recovery and iterative reweighted ℓ1......Page 352
7.12 Conclusion......Page 353
7.13.1 Derivation of the internal angles......Page 354
7.13.2 Derivation of the external angles......Page 356
7.13.3 Proof of Lemma 7.7......Page 357
7.13.4 Proof of Lemma 7.8......Page 358
References......Page 359
8.1 Greed, a flexible alternative to convexification......Page 362
8.2.1 General framework......Page 363
8.2.2 Variations in coefficient updates......Page 366
8.2.3 Variations in element selection......Page 369
8.2.5 Performance guarantees......Page 372
8.2.6 Empirical comparisons......Page 374
8.3.1 Iterative Hard Thresholding......Page 376
8.3.2 Compressive Sampling Matching Pursuit and Subspace Pursuit......Page 381
8.3.4 Recovery proof......Page 384
8.4 Generalizations of greedy algorithms to structured models......Page 387
8.4.1 The union of subspaces model......Page 388
8.4.2 Sampling and reconstructing union of subspaces signals......Page 390
8.4.3 Performance guarantees......Page 393
8.4.4 When do the recovery conditions hold?......Page 397
8.4.6 Rank structure in the MMV problem......Page 399
8.5 Conclusions......Page 403
References......Page 404
9.1 Introduction......Page 408
9.1.1 Some useful notation......Page 410
9.2 The basic model and its graph structure......Page 411
9.3 Revisiting the scalar case......Page 413
9.4.1 The min-sum algorithm......Page 418
9.4.2 Simplifying min-sum by quadratic approximation......Page 419
9.5 Approximate message passing......Page 421
9.5.1 The AMP algorithm, some of its properties, …......Page 422
9.5.2 … and its derivation......Page 423
9.6.1 Some numerical experiments with AMP......Page 426
9.6.2 State evolution......Page 428
9.6.3 The risk of the LASSO......Page 431
9.6.5 A heuristic derivation of state evolution......Page 435
9.6.6 The noise sensitivity phase transition......Page 437
9.6.7 On universality......Page 439
9.6.8 Comparison with other analysis approaches......Page 441
9.7.1 Structured priors…......Page 442
9.7.2 Sparse sensing matrices......Page 445
9.7.3 Matrix completion......Page 447
Acknowledgements......Page 448
References......Page 449
10.1 Introduction......Page 453
10.2.1 Notation......Page 455
10.3 Support Vector Machines......Page 456
10.4 Near-isometric projections......Page 459
10.5 Proof of Theorem 10.3......Page 461
10.6 Distance-Preserving via Johnson–Lindenstrauss Property......Page 465
10.7.1 Johnson–Lindenstrauss and random sensing......Page 470
10.7.2 Experimental results......Page 472
10.8.2 Average-case compressed learning......Page 474
10.8.3 Two fundamental measures of coherence and their role in compressed learning......Page 475
10.8.4.1 Construction of the Delsarte–Goethals frames......Page 478
10.8.4.2 Compressed sensing via the Delsarte–Goethals frames......Page 481
10.8.4.3 Compressed learning via the Delsarte–Goethals frames......Page 485
10.9.1 Proof of Lemma 10.5......Page 488
10.9.2 Proof of Theorem 10.6......Page 491
References......Page 495
11.1 Introduction......Page 499
11.1.2 Separation algorithms......Page 500
11.1.4 Design of sparse dictionaries......Page 501
11.2.1 Relation with underdetermined problems......Page 502
11.2.2 General separation estimates......Page 504
11.2.3 Clustered sparsity as a novel viewpoint......Page 507
11.2.4 Relation with uncertainty principles......Page 513
11.3.1 Separation of sinusoids and spikes......Page 514
11.3.2 Further variations......Page 517
11.4.1 Empirical results......Page 518
11.4.2 Theoretical results......Page 521
References......Page 526
12.1 Introduction......Page 529
12.2 Problem formulation: sparse representation-based classification......Page 532
12.3 Dimensionality reduction......Page 534
12.4 Recognition with corruption and occlusion......Page 536
12.5 Face alignment......Page 538
12.6 Fast ℓ1 minimization algorithms......Page 542
12.7 Building a complete face recognition system......Page 544
12.8 Overall system evaluation......Page 547
References......Page 550
Index......Page 554