The use of neural networks is permeating every area of signal processing. They provide powerful means for solving many problems, especially in nonlinear, real-time, adaptive, and blind signal processing. The Handbook of Neural Network Signal Processing brings together applications that were previously scattered among various publications to provide an up-to-date, detailed treatment of the subject from an engineering point of view. The authors cover basic principles, modeling, algorithms, architectures, implementation procedures, and well-designed simulation examples of audio, video, speech, communication, geophysical, sonar, radar, medical, and many other signals. The field of neural networks and their application to signal processing is constantly evolving, making a handy reference to current applications essential. The Handbook of Neural Network Signal Processing provides this much-needed resource for all engineers and scientists in the field.
Editor(s): Yu Hen Hu, Jenq-Neng Hwang
Series: Electrical Engineering & Applied Signal Processing Series
Publisher: CRC Press
Year: 2002
Language: English
Pages: 384
Handbook of Neural Network Signal Processing......Page 1
Book Series......Page 3
Copyright......Page 5
Preface......Page 6
Editors......Page 8
Contributors......Page 10
Contents......Page 11
1.1 Introduction......Page 12
1.2.1.1 McCulloch and Pitts’ Neuron Model......Page 13
1.2.1.2 Neural Network Topology......Page 14
1.2.2.1 Perceptron Model......Page 15
1.2.2.1.1 Applications of the Perceptron Neuron Model......Page 16
1.2.2.3 Error Back-Propagation Training of MLP......Page 17
1.2.2.3.1 Finding the Weights of a Single Neuron MLP......Page 18
1.2.2.3.2 Error Back-Propagation in a Multiple Layer Perceptron......Page 20
1.2.2.3.4 Implementation of the Back-Propagation Learning Algorithm......Page 21
1.2.3 Radial Basis Networks......Page 23
1.2.3.1 Type I Radial Basis Network......Page 24
1.2.3.2 Type II Radial Basis Network......Page 25
1.2.4.1 Orthogonal Linear Networks......Page 27
1.2.4.2.1 Basic Formulation of Self-Organizing Maps (SOMs)......Page 28
1.2.5.2 Mixture of Experts (MoE) Network......Page 29
1.2.6 Support Vector Machines (SVMs)......Page 31
1.3.1 Digital Signal Processing......Page 33
1.3.1.1 A Taxonomy of Digital Signal Processing (DSP) Algorithms......Page 34
1.3.1.3 Linear Transformations......Page 35
1.3.1.4 Pattern Classification......Page 36
1.3.1.6 Time Series Modeling......Page 37
1.3.1.7.1 Function Approximation......Page 38
1.4 Overview of the Handbook......Page 39
References......Page 41
2.1 Introduction......Page 42
2.2.1 Structure and Operation of the MLP......Page 43
2.2.2 Training the MLP Using OWO-HWO......Page 45
2.3.1 Bounding MLP Performance......Page 46
2.3.1.2 Discussion of the Shape of the MSE vs. Nh Curve......Page 47
2.3.1.3 Convexity of the MSE vs. Nh Curve......Page 49
2.3.1.4 Finding the Shape of the Average MSE vs. Nh Curve......Page 50
2.3.2 Estimating PLN Performance......Page 51
2.3.2.1 Convergent PLN Training Algorithm......Page 52
2.3.3 Sizing Algorithm......Page 53
2.3.4 Numerical Results......Page 54
2.4 Bounding MLP Testing Errors from Training Data......Page 56
2.4.1 Bounds on Estimation Error......Page 57
2.4.2.1 Signal Modeling......Page 58
2.4.2.2 Basic Approach......Page 59
2.4.3 Convergence of the Method......Page 61
2.5.1 Description of Data Files......Page 62
2.5.2 CRMAP Bounds and Sizing of FLS Neural Nets......Page 63
2.6 Conclusions......Page 65
Appendix: Simplified Error Expression for a Linear Network Trained with the LMS Algorithm......Page 67
References......Page 68
3.1 Introduction......Page 71
3.2.1 Overview......Page 73
3.2.2 Basis Functions......Page 74
3.2.3 Gaussian RBF Network......Page 77
3.2.4 Example of How an RBF Network Works......Page 78
3.3.2 Universal Approximation......Page 79
3.4.2.1 All Input Data......Page 80
3.4.2.3 Subset Selection......Page 81
3.4.2.4 k-means Clustering......Page 82
3.4.2.6 Supervised Learning......Page 83
3.4.2.7 Support Vector Machines......Page 84
3.4.3 Selecting the Number of Basis Functions......Page 85
3.4.3.1 Orthogonalization and Error Variance Minimization......Page 86
3.5.1 Time Series Modeling......Page 87
3.5.2 Option Pricing in Financial Markets......Page 88
3.5.4 Channel Equalization......Page 90
References......Page 91
4.1 Introduction......Page 94
4.2 Learning to Classify – Some Theoretical Background......Page 95
4.2.2 Margins and VC Dimension......Page 98
4.3 Nonlinear Algorithms in Kernel Feature Spaces......Page 99
4.3.1 Wrapping Up......Page 101
4.4.1 Support Vector Machines......Page 102
4.4.1.2 ν-SVMs......Page 104
4.4.1.5 Optimization Techniques for SVMs......Page 105
4.4.1.5.2 Decomposition Methods......Page 106
4.4.2 Kernel Fisher Discriminant......Page 107
4.4.2.1 Optimization......Page 109
4.4.3 Connection between Boosting and Kernel Methods......Page 110
4.5 Unsupervised Learning......Page 111
4.5.1 Kernel PCA......Page 112
4.5.2 Single-Class Classification......Page 114
4.6 Model Selection......Page 117
4.7.1.1 OCR......Page 119
4.7.1.2 Analyzing DNA Data......Page 120
4.7.2 Benchmarks......Page 122
4.7.3.1.3 Interpretation......Page 124
References......Page 126
5.1 Introduction......Page 134
5.2.1 Introduction......Page 135
5.2.2 Simple Averaging and Simple Voting......Page 137
5.2.3 Bagging......Page 138
5.2.4 Boosting......Page 139
5.3.1 Mixtures of Experts......Page 141
5.3.2.2 Alternative Training Procedures......Page 143
5.4 A Bayesian Committee Machine......Page 144
5.4.1 Theoretical Foundations......Page 145
5.4.2 The BCM......Page 146
5.4.3 Experiments......Page 147
5.5 Conclusions......Page 148
Acknowledgments......Page 149
References......Page 150
6.1 Introduction......Page 152
6.2.1 Function Approximation......Page 153
6.2.2 Regression and Classification......Page 155
6.2.2.1 Regression......Page 156
6.2.2.2 Classification......Page 157
6.2.3 Optimal Linear Filtering......Page 158
6.2.4 Dynamic Modeling......Page 159
6.3 Topological Approximation with Static Nonlinear Combinations of Linear Finite Memory Operators......Page 162
6.3.1 The Concept of Approximately Finite Memory (Myopic)......Page 163
6.3.2 Topological Approximation Using the Stone–Weierstrass Theorem......Page 165
6.4.1 Delay Operators in Optimal Filtering......Page 168
6.4.2 The Gamma Delay Operator......Page 169
6.4.3 Kautz Models......Page 173
6.5 Conclusions......Page 175
References......Page 176
7.1.1 What is Blind Signal Separation?......Page 180
7.1.2 What is Blind Deconvolution?......Page 181
7.2.1 Problem Formulation......Page 182
7.2.2.2 Signal Separation Using Temporal Correlation......Page 184
7.2.3.1 Density Matching BSS Using Natural Gradient Adaptation......Page 185
7.2.3.2 Contrast Function Optimization for BSS Using Constrained Adaptation......Page 187
7.2.4 BSS Algorithms Using Temporal Correlation......Page 189
7.3.1 Problem Formulation......Page 191
7.3.2 Relationships between Blind Deconvolution and BSS......Page 192
7.3.2.1 Density Matching Blind Deconvolution Using Natural Gradient Adaptation......Page 193
7.4 Spatio-Temporal Extensions......Page 194
7.4.1 Common Problem Formulation......Page 195
7.4.2.2 Algorithms for Multichannel Blind Deconvolution......Page 196
7.4.3.1 Assumptions and Goals......Page 197
7.5.1 BSS for Instantaneous Mixtures......Page 198
7.5.2 Blind Deconvolution......Page 201
7.5.3 BSS for Convolutive Mixtures......Page 205
7.6 Conclusions and Open Issues......Page 206
References......Page 208
8.1 Introduction......Page 211
8.2 Principal Component Analysis......Page 212
8.3 Hebb’s Learning Rule......Page 217
8.4.1 Unconstrained Hebbian Learning......Page 219
8.4.2.2 Linearized Normalization (Oja’s Single Unit Rule)......Page 222
8.4.2.3 The Generalized Hebbian Algorithm (GHA)......Page 224
8.4.2.3.1 Original GHA......Page 225
8.4.2.3.3 The Deflation Transform......Page 226
8.4.2.4 The APEX Learning Rule......Page 227
8.4.2.5.1 Földiák’s Model [31]......Page 229
8.4.2.5.3 The Model of Rubner [34]......Page 230
8.4.2.6 Assessment of Hebbian PCA Models......Page 231
8.4.2.7 Multilayer Perceptrons and PCA......Page 232
8.4.3 Application: Image Compression......Page 233
8.4.4 PCA and Blind Source Separation......Page 235
8.5.1 Nonlinear PCA: A Functional Approach......Page 239
8.5.1.1 Kramer’s Neural Model......Page 240
8.5.2 Application: Ischemia Detection......Page 242
8.5.3 Nonlinear PCA: A Hebbian Approach......Page 243
8.5.4 Application: Blind Image Separation......Page 244
References......Page 246
9.1 Introduction......Page 249
9.2 Time Series Prediction......Page 250
9.2.2 Traditional Approaches to Time Series Prediction......Page 251
9.3.1.3 Recurrent Neural Network......Page 252
9.3.2.1 Average Sensitivity Measures......Page 253
9.3.2.2 Sensitivities for Individual Exemplars......Page 254
9.3.3 Committees of Predictors......Page 255
9.3.4 Regularizer for Recurrent Learning......Page 256
9.4.1 Task, Data, and Performance Measure......Page 257
9.4.2 Applying the Input Feature Grouping Committee Technique......Page 258
9.4.3 Applying the Regularized Recurrent Learning Technique......Page 260
A.2 Time Series Prediction Competitions......Page 262
References......Page 263
10.1 Introduction......Page 267
10.2.1.2 Nature of Speech Signals......Page 269
10.2.1.4 Modular Recognition Process......Page 270
10.2.2 Early Stage ANN Applications to Speech Recognition......Page 272
10.3.1.2 Functional Form Embodiment of the Entire Process......Page 273
10.3.1.3.1 Probabilistic Descent Theorem......Page 274
10.3.2 Minimum Recognition Error Learning......Page 275
10.3.3 Links with Others......Page 276
10.3.4.2 GPD for Open-Vocabulary Recognition......Page 277
10.3.4.3 GPD for Speaker Recognition......Page 278
10.4.1 Overview......Page 279
10.4.3 Bidirectional Network......Page 280
10.5.1 Fundamentals......Page 282
10.5.2.1 SVM-Based Phoneme Detection......Page 283
10.6.2 Blind Separation......Page 284
10.6.3.1 Separation Using Codebook Projection......Page 286
10.6.3.2 Separation Using a Speech Production Model......Page 287
References......Page 288
11.1 Introduction......Page 291
11.1.1 Relevance Feedback Module......Page 292
11.1.2 Feature Extraction Module......Page 293
11.1.3 Adoption of Neural Network Techniques......Page 294
11.2.2 Training and Searching Algorithm......Page 295
11.2.2.4 Weighted Searching......Page 296
11.2.3.1 Unfavorable Relevance Feedback Situation......Page 297
11.2.4.2 Summary of Comparison......Page 300
11.2.5 Application to Compressed Domain Image Retrieval......Page 301
11.3.1 Network Architecture......Page 306
11.3.1.1 Input Transformation......Page 307
11.3.1.3 Functions of Neurons under Each Subnetwork......Page 308
11.3.1.5 Edge Configurations......Page 309
11.3.2 Network Training Stage......Page 310
11.3.3 Recognition Stage......Page 311
11.3.3.2 Detection of Secondary Edge Points......Page 312
11.3.4 Experimental Results......Page 313
11.4 Conclusion......Page 316
References......Page 318
12.1 Introduction......Page 320
12.2.1 Pixel Modeling......Page 321
12.2.1.1 Parameter Estimation......Page 322
12.2.1.2 Model Order Selection......Page 324
12.2.2 Context Modeling and Segmentation......Page 325
12.2.3 Application Examples......Page 326
12.3 CAD System Design......Page 332
12.3.1.1 Feature Extraction......Page 334
12.3.1.2 Database Mapping......Page 336
12.3.1.3 Data Classification via Supervised Learning......Page 337
12.3.1.4 Application Example......Page 338
12.3.2.1 General Architecture of the CNN......Page 340
12.3.2.2 Supervised Training of the CNN......Page 342
12.3.2.3 Application Example......Page 343
References......Page 344
13.1 Introduction......Page 348
13.2.1 Modules and Hierarchical Levels......Page 350
13.2.2 Decision-Based Neural Networks......Page 352
13.2.2.2 Globally Supervised Learning Rules......Page 353
13.2.2.2.1 Reinforced–Anti-Reinforced Learning Rules......Page 354
13.2.3 Mixture of Experts......Page 355
13.2.4 Sugeno’s Fuzzy Inference Systems......Page 356
13.2.5.1 FIS and MOE Networks......Page 357
13.2.5.3 Hierarchical Fuzzy Neural Networks......Page 358
13.3.1.1 EM for Fuzzy Neural Networks......Page 359
13.3.1.1.2 EM vs. k-means......Page 360
13.3.2.1 Motion-Based Video Segmentation......Page 361
13.3.2.2 Texture Classification via Intraclass EM Clustering......Page 362
13.4.1.1 Experts-in-Class Hierarchical Structures......Page 363
13.4.1.2 Classes-in-Expert Hierarchical Structures......Page 364
13.4.2.2 Face Recognition and Content-Based Indexing for Video Browsing......Page 366
13.4.2.4.1 Medical Image Quantification......Page 368
13.4.2.4.2 Computer Aided Diagnosis......Page 369
13.5.1 Neuro-Fuzzy Classifiers with Adjustable Rule Importance......Page 371
13.5.1.1 Architecture of NEFCAR......Page 372
13.5.1.2 Training Strategy......Page 374
13.5.1.3 Updating Formula......Page 375
13.5.2.1.1 Skin Color......Page 376
13.5.2.1.2 Motion Information......Page 377
13.5.2.2.3 Results of Face Detection......Page 378
13.5.2.3 Face Localization and Recognition......Page 380
References......Page 381