Complex-valued random signals are embedded in the very fabric of science and engineering, yet the usual assumptions made about their statistical behavior are often a poor representation of the underlying physics. This book deals with improper and noncircular complex signals, which do not conform to classical assumptions, and it demonstrates how correct treatment of these signals can have significant payoffs. The book begins with detailed coverage of the fundamental theory and presents a variety of tools and algorithms for dealing with improper and noncircular signals. It provides a comprehensive account of the main applications, covering detection, estimation, and signal analysis of stationary, nonstationary, and cyclostationary processes. A systematic development, from the origins of complex signals to their probabilistic description, makes the theory accessible to newcomers. This book is ideal for graduate students and researchers working with complex data in research areas ranging from communications to oceanography.
Author(s): Peter J. Schreier, Louis L. Scharf
Edition: 1
Publisher: Cambridge University Press
Year: 2010
Language: English
Pages: 331
Half-title......Page 3
Title......Page 5
Copyright......Page 6
Contents......Page 7
Preface......Page 15
Outline of this book......Page 16
Acknowledgments......Page 17
Functions......Page 19
Commonly used symbols and operators......Page 20
Part I Introduction......Page 23
1 The origins and uses of complex signals......Page 25
1.1 Cartesian, polar, and complex representations of two-dimensional signals......Page 26
1.2 Simple harmonic oscillator and phasors......Page 27
1.3 Lissajous figures, ellipses, and electromagnetic polarization......Page 28
Jones calculus......Page 29
1.4 Complex modulation, the Hilbert transform, and complex analytic signals......Page 30
1.4.1 Complex modulation using the complex envelope......Page 31
1.4.2 The Hilbert transform, phase splitter, and analytic signal......Page 33
1.4.3 Complex demodulation......Page 35
1.4.5 Instantaneous amplitude, frequency, and phase......Page 36
1.4.7 Passband filtering at baseband......Page 37
1.5 Complex signals for the efficient use of the FFT......Page 39
1.5.2 Twofer: two real DFTs from one complex DFT......Page 40
1.6 The bivariate Gaussian distribution and its complex representation......Page 41
1.6.1 Bivariate Gaussian distribution......Page 42
1.6.2 Complex representation of the bivariate Gaussian distribution......Page 43
1.7 Second-order analysis of the polarization ellipse......Page 45
1.8 Mathematical framework......Page 47
1.9 A brief survey of applications......Page 49
2 Introduction to complex random vectors and processes......Page 52
2.1.1 Widely linear transformations......Page 53
2.1.2 Inner products and quadratic forms......Page 55
2.2 Second-order statistical properties......Page 56
2.2.1 Extending definitions from the real to the complex domain......Page 57
2.2.2 Characterization of augmented covariance matrices......Page 58
2.2.3 Power and entropy......Page 59
2.3 Probability distributions and densities......Page 60
2.3.1 Complex Gaussian distribution......Page 61
2.3.2 Conditional complex Gaussian distribution......Page 63
2.3.3 Scalar complex Gaussian distribution......Page 64
2.3.4 Complex elliptical distribution......Page 66
2.4 Sufficient statistics and ML estimators for covariances: complex Wishart distribution......Page 69
Complex Wishart distribution......Page 70
2.5 Characteristic function and higher-order statistical description......Page 71
2.5.2 Higher-order moments......Page 72
2.5.3 Cumulant-generating function......Page 74
2.5.4 Circularity......Page 75
2.6 Complex random processes......Page 76
2.6.1 Wide-sense stationary processes......Page 77
Notes......Page 79
Part II Complex random vectors......Page 81
3 Second-order description of complex random vectors......Page 83
3.1 Eigenvalue decomposition......Page 84
3.1.1 Principal components......Page 85
3.1.2 Rank reduction and transform coding......Page 86
3.2 Circularity coefficients......Page 87
3.2.2 Strong uncorrelating transform (SUT)......Page 89
3.2.3 Characterization of complementary covariance matrices......Page 91
3.3 Degree of impropriety......Page 92
3.3.1 Upper and lower bounds......Page 94
Least improper analog......Page 97
3.3.3 Maximally improper vectors......Page 98
3.4 Testing for impropriety......Page 99
3.5 Independent component analysis......Page 103
Notes......Page 106
4 Correlation analysis......Page 107
4.1 Foundations for measuring multivariate association between two complex random vectors......Page 108
4.1.1 Rotational, reflectional, and total correlations for complex scalars......Page 109
4.1.2 Principle of multivariate correlation analysis......Page 113
4.1.3 Rotational, reflectional, and total correlations for complex vectors......Page 116
4.1.4 Transformations into latent variables......Page 117
4.2.1 Canonical correlations......Page 119
4.2.2 Multivariate linear regression (half-canonical correlations)......Page 122
4.2.3 Partial least squares......Page 123
4.3 Correlation coefficients for complex vectors......Page 124
4.3.1 Canonical correlations......Page 125
4.3.2 Multivariate linear regression (half-canonical correlations)......Page 128
4.4 Correlation spread......Page 130
4.5 Testing for correlation structure......Page 132
4.5.2 Independence within one data set......Page 134
4.5.3 Independence between two data sets......Page 135
Notes......Page 136
5 Estimation......Page 138
5.1 Hilbert-space geometry of second-order random variables......Page 139
5.2 Minimum mean-squared error estimation......Page 141
5.3 Linear MMSE estimation......Page 143
5.3.1 The signal-plus-noise channel model......Page 144
5.3.2 The measurement-plus-error channel model......Page 145
5.3.3 Filtering models......Page 147
5.3.5 Concentration ellipsoids......Page 149
The Gaussian case......Page 150
5.4 Widely linear MMSE estimation......Page 151
5.4.1 Special cases......Page 152
5.4.2 Performance comparison between LMMSE and WLMMSE estimation......Page 153
5.5 Reduced-rank widely linear estimation......Page 154
5.5.1 Minimize mean-squared error (min-trace problem)......Page 155
5.5.2 Maximize mutual information (min-det problem)......Page 157
5.6 Linear and widely linear minimum-variance distortionless response estimators......Page 159
Relation to LMMSE estimator......Page 160
5.6.2 Generalized sidelobe canceler......Page 161
5.6.3 Multi-rank LMVDR receiver......Page 163
5.6.4 Subspace identification for beamforming and spectrum analysis......Page 164
5.6.5 Extension to WLMVDR receiver......Page 165
5.7 Widely linear-quadratic estimation......Page 166
5.7.1 Connection between real and complex quadratic forms......Page 167
5.7.2 WLQMMSE estimation......Page 168
Notes......Page 171
6 Performance bounds for parameter estimation......Page 173
6.1 Frequentists and Bayesians......Page 174
6.1.1 Bias, error covariance, and mean-squared error......Page 176
6.1.2 Connection between frequentist and Bayesian approaches......Page 177
6.2.1 The virtual two-channel experiment and the quadratic frequentist bound......Page 179
6.2.2 Projection-operator and integral-operator representations of quadratic frequentist bounds......Page 181
6.2.3 Extension of the quadratic frequentist bound to improper errors and scores......Page 183
6.3 Fisher score and the Cramer-Rao bound......Page 184
6.3.2 The Cramer-Rao bound in the proper multivariate Gaussian model......Page 186
6.3.3 The separable linear statistical model and the geometry of the Cramer-Rao bound......Page 187
6.3.4 Extension of Fisher score and the Cramer-Rao bound to improper errors and scores......Page 189
6.3.5 The Cramer-Rao bound in the improper multivariate Gaussian model......Page 190
6.3.6 Fisher score and Cramer-Rao bounds for functions of parameters......Page 191
6.4 Quadratic Bayesian bounds......Page 192
6.5 Fisher-Bayes score and Fisher-Bayes bound......Page 193
6.5.1 Fisher-Bayes score and information......Page 194
6.5.2 Fisher-Bayes bound......Page 195
6.6 Connections and orderings among bounds......Page 196
Notes......Page 197
7 Detection......Page 199
7.1 Binary hypothesis testing......Page 200
7.1.1 The Neyman-Pearson lemma......Page 201
7.2 Sufficiency and invariance......Page 202
7.3 Receiver operating characteristic......Page 203
7.4.1 Uncommon means and common covariance......Page 205
7.4.2 Common mean and uncommon covariances......Page 207
7.4.3 Comparison between linear and widely linear detection......Page 208
7.5 Composite hypothesis testing and the Karlin-Rubin theorem......Page 210
7.6 Invariance in hypothesis testing......Page 211
7.6.1 Matched subspace detector......Page 212
7.6.2 CFAR matched subspace detector......Page 215
Notes......Page 216
Part III Complex random processes......Page 217
8.1 Spectral representation and power spectral density......Page 219
8.2 Filtering......Page 222
8.2.1 Analytic and complex baseband signals......Page 223
8.2.2 Noncausal Wiener filter......Page 224
8.3.1 Spectral factorization......Page 225
8.4 Rotary-component and polarization analysis......Page 227
8.4.1 Rotary components......Page 228
8.4.2 Rotary components of random signals......Page 230
Interpretation of the random ellipse......Page 231
Statistical properties of the random ellipse......Page 232
8.4.3 Polarization and coherence......Page 233
8.4.4 Stokes and Jones vectors......Page 235
8.4.5 Joint analysis of two signals......Page 237
8.5 Higher-order spectra......Page 238
8.5.1 Moment spectra and principal domains......Page 239
8.5.2 Analytic signals......Page 240
Notes......Page 243
9 Nonstationary processes......Page 245
9.1 Karhunen-Loeve expansion......Page 246
9.1.1 Estimation......Page 249
9.2 Cramer-Loeve spectral representation......Page 252
9.2.1 Four-corners diagram......Page 253
Wide-sense stationary signals......Page 255
Nonstationary signals......Page 256
9.2.3 Analytic signals......Page 257
9.2.4 Discrete-time signals......Page 258
9.3 Rihaczek time-frequency representation......Page 259
9.3.1 Interpretation......Page 260
9.3.2 Kernel estimators......Page 262
Statistical properties......Page 263
9.4 Rotary-component and polarization analysis......Page 264
9.4.1 Ellipse properties......Page 266
9.4.2 Analytic signals......Page 267
9.5 Higher-order statistics......Page 269
Notes......Page 270
10 Cyclostationary processes......Page 272
10.1.1 Cyclic power spectral density......Page 273
10.1.2 Cyclic spectral coherence......Page 275
10.1.3 Estimating the cyclic power spectral density......Page 276
10.2.1 Symbol-rate-related cyclostationarity......Page 277
10.2.2 Carrier-frequency-related cyclostationarity......Page 280
10.2.3 Cyclostationarity as frequency diversity......Page 281
10.3 Cyclic Wiener filter......Page 282
10.4.1 Connection between scalar CS and vector WSS processes......Page 284
10.4.2 Sliding-window filter bank......Page 286
10.4.3 Equivalence to FRESH filtering......Page 287
10.4.4 Causal approximation......Page 289
Notes......Page 290
A1.1.2 Eigenvalue decomposition......Page 292
A1.1.3 Singular value decomposition......Page 293
A1.2.2 Updating the Cholesky factors of a Grammian matrix......Page 294
A1.2.3 Partial ordering......Page 295
A1.3.1 Partitioned matrices......Page 296
A1.3.2 Moore-Penrose pseudo-inverse......Page 297
A1.3.3 Projections......Page 298
Appendix 2: Complex differential calculus (Wirtinger calculus)......Page 299
A2.1 Complex gradients......Page 300
A2.1.1 Holomorphic functions......Page 301
A2.1.2 Complex gradients and Jacobians......Page 302
A2.1.3 Properties of Wirtinger derivatives......Page 303
A2.2 Special cases......Page 304
A2.3 Complex Hessians......Page 305
A2.3.2 Extension to complex-valued functions......Page 307
Appendix 3: Introduction to majorization......Page 309
A3.1.1 Majorization......Page 310
A3.1.2 Schur-convex functions......Page 311
A3.2 Tests for Schur-convexity......Page 312
A3.2.1 Specialized tests......Page 313
A3.2.2 Functions defined on D......Page 314
A3.3.1 Diagonal elements and eigenvalues......Page 315
A3.3.2 Diagonal elements and singular values......Page 316
A3.3.3 Partitioned matrices......Page 317
References......Page 318
Index......Page 327