Handbook of Neural Network Signal Processing

The use of neural networks is permeating every area of signal processing. They provide powerful tools for solving many problems, especially in nonlinear, real-time, adaptive, and blind signal processing. The Handbook of Neural Network Signal Processing brings together applications that were previously scattered among various publications to provide an up-to-date, detailed treatment of the subject from an engineering point of view. The authors cover basic principles, modeling, algorithms, architectures, implementation procedures, and well-designed simulation examples of audio, video, speech, communication, geophysical, sonar, radar, medical, and many other signals. The application of neural networks to signal processing is advancing rapidly, and engineers and scientists need a handy reference that keeps them informed of current developments in the field. The Handbook of Neural Network Signal Processing provides this much needed service.

Author(s): Yu Hen Hu, Jenq-Neng Hwang
Series: Electrical Engineering and Applied Signal Processing Series
Edition: 1
Publisher: CRC Press
Year: 2002

Language: English
Pages: 383
City: Boca Raton

Handbook of Neural Network Signal Processing
Preface
Editors
Contributors
Table of Contents
1.1 Introduction
1.2.1.1 McCulloch and Pitts’ Neuron Model
1.2.1.2 Neural Network Topology
1.2.2.1 Perceptron Model
1.2.2.1.1 Applications of the Perceptron Neuron Model
1.2.2.3 Error Back-Propagation Training of MLP
1.2.2.3.1 Finding the Weights of a Single Neuron MLP
1.2.2.3.2 Error Back-Propagation in a Multiple Layer Perceptron
1.2.2.3.4 Implementation of the Back-Propagation Learning Algorithm
1.2.3 Radial Basis Networks
1.2.3.1 Type I Radial Basis Network
1.2.3.2 Type II Radial Basis Network
1.2.4.1 Orthogonal Linear Networks
1.2.4.2.1 Basic Formulation of Self-Organizing Maps (SOMs)
1.2.5.2 Mixture of Expert (MoE) Network
1.2.6 Support Vector Machines (SVMs)
1.3.1 Digital Signal Processing
1.3.1.1 A Taxonomy of Digital Signal Processing (DSP) Algorithms
1.3.1.3 Linear Transformations
1.3.1.4 Pattern Classification
1.3.1.6 Time Series Modeling
1.3.1.7.1 Function Approximation
1.4 Overview of the Handbook
References
2.1 Introduction
2.2.1 Structure and Operation of the MLP
2.2.2 Training the MLP Using OWO-HWO
2.3.1 Bounding MLP Performance
2.3.1.2 Discussion of the Shape of the MSE vs. Nh Curve
2.3.1.3 Convexity of the MSE vs. Nh Curve
2.3.1.4 Finding the Shape of the Average MSE vs. Nh Curve
2.3.2 Estimating PLN Performance
2.3.2.1 Convergent PLN Training Algorithm
2.3.3 Sizing Algorithm
2.3.4 Numerical Results
2.4 Bounding MLP Testing Errors from Training Data
2.4.1 Bounds on Estimation Error
2.4.2.1 Signal Modeling
2.4.2.2 Basic Approach
2.4.3 Convergence of the Method
2.5.1 Description of Data Files
2.5.2 CRMAP Bounds and Sizing of FLS Neural Nets
2.6 Conclusions
Appendix: Simplified Error Expression for a Linear Network Trained with LMS Algorithm
References
3.1 Introduction
3.2.1 Overview
3.2.2 Basis Functions
3.2.3 Gaussian RBF Network
3.2.4 Example of How an RBF Network Works
3.3.2 Universal Approximation
3.4.2.1 All Input Data
3.4.2.3 Subset Selection
3.4.2.4 k-means Clustering
3.4.2.6 Supervised Learning
3.4.2.7 Support Vector Machines
3.4.3 Selecting the Number of Basis Functions
3.4.3.1 Orthogonalization and Error Variance Minimization
3.5.1 Time Series Modeling
3.5.2 Option Pricing in Financial Markets
3.5.4 Channel Equalization
References
4.1 Introduction
4.2 Learning to Classify – Some Theoretical Background
4.2.2 Margins and VC Dimension
4.3 Nonlinear Algorithms in Kernel Feature Spaces
4.3.1 Wrapping Up
4.4.1 Support Vector Machines
4.4.1.2 ν-SVMs
4.4.1.5 Optimization Techniques for SVMs
4.4.1.5.2 Decomposition Methods
4.4.2 Kernel Fisher Discriminant
4.4.2.1 Optimization
4.4.3 Connection between Boosting and Kernel Methods
4.5 Unsupervised Learning
4.5.1 Kernel PCA
4.5.2 Single-Class Classification
4.6 Model Selection
4.7.1.1 OCR
4.7.1.2 Analyzing DNA Data
4.7.2 Benchmarks
4.7.3.1.3 Interpretation
References
5.1 Introduction
5.2.1 Introduction
5.2.2 Simple Averaging and Simple Voting
5.2.3 Bagging
5.2.4 Boosting
5.3.1 Mixtures of Experts
5.3.2.2 Alternative Training Procedures
5.4 A Bayesian Committee Machine
5.4.1 Theoretical Foundations
5.4.2 The BCM
5.4.3 Experiments
5.5 Conclusions
Acknowledgments
References
6.1 Introduction
6.2.1 Function Approximation
6.2.2 Regression and Classification
6.2.2.1 Regression
6.2.2.2 Classification
6.2.3 Optimal Linear Filtering
6.2.4 Dynamic Modeling
6.3 Topological Approximation with Static Nonlinear Combinations of Linear Finite Memory Operators
6.3.1 The Concept of Approximately Finite Memory (Myopic)
6.3.2 Topological Approximation Using the Stone–Weierstrass Theorem
6.4.1 Delay Operators in Optimal Filtering
6.4.2 The Gamma Delay Operator
6.4.3 Kautz Models
6.5 Conclusions
References
7.1.1 What is Blind Signal Separation?
7.1.2 What is Blind Deconvolution?
7.2.1 Problem Formulation
7.2.2.2 Signal Separation Using Temporal Correlation
7.2.3.1 Density Matching BSS Using Natural Gradient Adaptation
7.2.3.2 Contrast Function Optimization for BSS Using Constrained Adaptation
7.2.4 BSS Algorithms Using Temporal Correlation
7.3.1 Problem Formulation
7.3.2 Relationships between Blind Deconvolution and BSS
7.3.2.1 Density Matching Blind Deconvolution Using Natural Gradient Adaptation
7.4 Spatio-Temporal Extensions
7.4.1 Common Problem Formulation
7.4.2.2 Algorithms for Multichannel Blind Deconvolution
7.4.3.1 Assumptions and Goals
7.5.1 BSS for Instantaneous Mixtures
7.5.2 Blind Deconvolution
7.5.3 BSS for Convolutive Mixtures
7.6 Conclusions and Open Issues
References
8.1 Introduction
8.2 Principal Component Analysis
8.3 Hebb’s Learning Rule
8.4.1 Unconstrained Hebbian Learning
8.4.2.2 Linearized Normalization (Oja’s Single Unit Rule)
8.4.2.3 The Generalized Hebbian Algorithm (GHA)
8.4.2.3.1 Original GHA
8.4.2.3.3 The Deflation Transform
8.4.2.4 The APEX Learning Rule
8.4.2.5.1 Földiák’s Model [31]
8.4.2.5.3 The Model of Rubner [34]
8.4.2.6 Assessment of Hebbian PCA Models
8.4.2.7 Multilayer Perceptrons and PCA
8.4.3 Application: Image Compression
8.4.4 PCA and Blind Source Separation
8.5.1 Nonlinear PCA: A Functional Approach
8.5.1.1 Kramer’s Neural Model
8.5.2 Application: Ischemia Detection
8.5.3 Nonlinear PCA: A Hebbian Approach
8.5.4 Application: Blind Image Separation
References
9.1 Introduction
9.2 Time Series Prediction
9.2.2 Traditional Approaches to Time Series Prediction
9.3.1.3 Recurrent Neural Network
9.3.2.1 Average Sensitivity Measures
9.3.2.2 Sensitivities for Individual Exemplars
9.3.3 Committees of Predictors
9.3.4 Regularizer for Recurrent Learning
9.4.1 Task, Data, and Performance Measure
9.4.2 Applying the Input Feature Grouping Committee Technique
9.4.3 Applying the Regularized Recurrent Learning Technique
A.2 Time Series Prediction Competitions
References
10.1 Introduction
10.2.1.2 Nature of Speech Signals
10.2.1.4 Modular Recognition Process
10.2.2 Early Stage ANN Applications to Speech Recognition
10.3.1.2 Functional Form Embodiment of the Entire Process
10.3.1.3.1 Probabilistic Descent Theorem
10.3.2 Minimum Recognition Error Learning
10.3.3 Links with Others
10.3.4.2 GPD for Open-Vocabulary Recognition
10.3.4.3 GPD for Speaker Recognition
10.4.1 Overview
10.4.3 Bidirectional Network
10.5.1 Fundamentals
10.5.2.1 SVM-Based Phoneme Detection
10.6.2 Blind Separation
10.6.3.1 Separation Using Codebook Projection
10.6.3.2 Separation Using a Speech Production Model
References
11.1 Introduction
11.1.1 Relevance Feedback Module
11.1.2 Feature Extraction Module
11.1.3 Adoption of Neural Network Techniques
11.2.2 Training and Searching Algorithm
11.2.2.4 Weighted Searching
11.2.3.1 Unfavorable Relevance Feedback Situation
11.2.4.2 Summary of Comparison
11.2.5 Application to Compressed Domain Image Retrieval
11.3.1 Network Architecture
11.3.1.1 Input Transformation
11.3.1.3 Functions of Neurons under Each Subnetwork
11.3.1.5 Edge Configurations
11.3.2 Network Training Stage
11.3.3 Recognition Stage
11.3.3.2 Detection of Secondary Edge Points
11.3.4 Experimental Results
11.4 Conclusion
References
12.1 Introduction
12.2.1 Pixel Modeling
12.2.1.1 Parameter Estimation
12.2.1.2 Model Order Selection
12.2.2 Context Modeling and Segmentation
12.2.3 Application Examples
12.3 CAD System Design
12.3.1.1 Feature Extraction
12.3.1.2 Database Mapping
12.3.1.3 Data Classification via Supervised Learning
12.3.1.4 Application Example
12.3.2.1 General Architecture of the CNN
12.3.2.2 Supervised Training of the CNN
12.3.2.3 Application Example
References
13.1 Introduction
13.2.1 Modules and Hierarchical Levels
13.2.2 Decision-Based Neural Networks
13.2.2.2 Globally Supervised Learning Rules
13.2.2.2.1 Reinforced–Anti-Reinforced Learning Rules
13.2.3 Mixture of Experts
13.2.4 Sugeno’s Fuzzy Inference Systems
13.2.5.1 FIS and MOE Networks
13.2.5.3 Hierarchical Fuzzy Neural Networks
13.3.1 Expectation–Maximization (EM) Fuzzy Classifier
13.3.1.1.1 EM Algorithm
13.3.2.1 Motion-Based Video Segmentation
13.3.2.2 Texture Classification via Intraclass EM Clustering
13.4.1.1 Experts-in-Class Hierarchical Structures
13.4.1.2 Classes-in-Expert Hierarchical Structures
13.4.2.2 Face Recognition and Content-Based Indexing for Video Browsing
13.4.2.4.1 Medical Image Quantification
13.4.2.4.2 Computer Aided Diagnosis
13.5.1 Neuro-Fuzzy Classifiers with Adjustable Rule Importance
13.5.1.1 Architecture of NEFCAR
13.5.1.2 Training Strategy
13.5.1.3 Updating Formula
13.5.2.1.1 Skin Color
13.5.2.1.2 Motion Information
13.5.2.2.2 Feature Vector
13.5.2.2.3 Results of Face Detection
13.5.2.3 Face Localization and Recognition
References