Principles of artificial neural networks

The book is intended as a text for a graduate or advanced undergraduate course on neural networks in engineering and computer science departments, and as a self-study resource for engineers and computer scientists in industry. Covering the major neural network approaches and architectures together with their underlying theory, it presents a detailed case study for each approach, accompanied by complete computer code and the corresponding computed results. The case studies are designed to allow easy comparison of network performance, illustrating the strengths and weaknesses of the different networks.

Author(s): Daniel Graupe
Series: Advanced Series in Circuits and Systems, Vol. 6
Edition: 2nd
Publisher: World Scientific
Year: 2007

Language: English
Pages: 320
City: Singapore; Hackensack, N.J.

Acknowledgments......Page 8
Preface to the First Edition......Page 10
Preface to the Second Edition......Page 12
Contents......Page 14
Chapter 1. Introduction and Role of Artificial Neural Networks......Page 18
Chapter 2. Fundamentals of Biological Neural Networks......Page 22
3.1. Basic Principles of ANN Design......Page 26
3.2. Basic Network Structures......Page 27
3.3. The Perceptron's Input-Output Principles......Page 28
3.4.1. LMS training of ALC......Page 29
3.4.2. Steepest descent training of ALC......Page 31
4.1. The Basic Structure......Page 34
4.1.1. Perceptron's activation functions......Page 35
4.2. The Single-Layer Representation Problem......Page 39
4.3. The Limitations of the Single-Layer Perceptron......Page 40
4.4. Many-Layer Perceptrons......Page 41
4.A. Perceptron Case Study: Identifying Autoregressive Parameters of a Signal (AR Time Series Identification)......Page 42
5.1. Madaline Training......Page 54
5.A.1. Problem statement......Page 56
5.A.3. Training of the network......Page 58
5.A.4. Results......Page 60
5.A.5. Conclusions and observations......Page 62
5.A.6. MATLAB code for implementing MADALINE network......Page 63
6.2. Derivation of the BP Algorithm......Page 76
6.3.1. Introduction of bias into NN......Page 80
6.3.3. Other modifications concerning convergence......Page 81
6.A.2. Network design......Page 82
6.A.3. Results......Page 84
6.A.4. Discussion and conclusions......Page 85
6.A.5. Program Code (C++)......Page 86
6.B. Back Propagation Case Study: The Exclusive-OR (XOR) Problem (2-Layer BP)......Page 93
6.C. Back Propagation Case Study: The XOR Problem (3-Layer BP Network)......Page 111
7.2. Binary Hopfield Networks......Page 130
7.3. Setting of Weights in Hopfield Nets — Bidirectional Associative Memory (BAM) Principle......Page 131
7.4. Walsh Functions......Page 134
7.5. Network Stability......Page 135
7.6. Summary of the Procedure for Implementing the Hopfield Network......Page 138
7.7. Continuous Hopfield Models......Page 139
7.8. The Continuous Energy (Lyapunov) Function......Page 140
7.A.2. Network design......Page 142
7.A.3. Setting of weights......Page 143
7.A.5. Results and conclusions......Page 144
7.A.6. MATLAB codes......Page 145
7.B.1. Introduction......Page 153
7.B.2. Hopfield neural network design......Page 154
7.B.3. Input selection......Page 158
7.B.4. Implementation details......Page 159
7.B.5. Output results......Page 160
7.B.6. Concluding discussion......Page 163
8.2. Kohonen Self-Organizing Map (SOM) Layer......Page 178
8.4. Training of the Kohonen Layer......Page 179
8.4.1. Preprocessing of Kohonen layer's inputs......Page 180
8.4.2. Initializing the weights of the Kohonen layer......Page 181
8.6. The Combined Counter Propagation Network......Page 182
8.A.2. Network structure......Page 183
8.A.3. Network training......Page 184
8.A.5. Results and conclusions......Page 186
8.A.6. Source codes (MATLAB)......Page 187
9.2. The ART Network Structure......Page 196
9.3. Setting-Up of the ART Network......Page 200
9.4. Network Operation......Page 201
9.6. Discussion and General Comments on ART-I and ART-II......Page 203
9.A.2. The data set......Page 204
9.A.3. Network design......Page 206
9.A.4. Performance results and conclusions......Page 210
9.A.5. Code for ART neural network (Java)......Page 211
9.B.2. Simulation programs set-up......Page 218
9.B.3. Computer simulation of ART program (C-language)......Page 221
9.B.4. Simulation results......Page 224
10.3. Network Operation......Page 226
10.4. Cognitron's Network Training......Page 228
10.5. The Neocognitron......Page 230
11.1. Fundamental Philosophy......Page 232
11.3. Simulated Annealing by Boltzmann Training of Weights......Page 233
11.6. Cauchy Training of Neural Network......Page 234
11.A.2. Problem statement......Page 236
11.A.4. Computed results......Page 237
11.B.1. Problem set-up......Page 239
11.B.2. Program printout (written in MATLAB — see also Sec. 6.D)......Page 243
11.B.3. Estimated parameter set at each iteration (using stochastic training)......Page 245
12.1. Recurrent/Discrete Time Networks......Page 250
12.2. Fully Recurrent Networks......Page 251
12.3. Continuously Recurrent Back Propagation Networks......Page 252
12.A.2. Design of neural network......Page 253
12.A.3. Results......Page 255
12.A.4. Discussion and conclusions......Page 256
12.A.5. Source code (C++)......Page 257
13.1. Basic Principles of the LAMSTAR Neural Network......Page 266
13.2.1. Basic structural elements......Page 268
13.2.2. Setting of storage weights and determination of winning neurons......Page 270
13.2.4. Links between SOM modules and from SOM modules to output modules......Page 271
13.2.5. Determination of winning decision via link weights......Page 272
13.2.7. Initialization and local minima......Page 273
13.3. Forgetting Feature......Page 274
13.4.1. INPUT WORD for training and for information retrieval......Page 275
13.5.1. Feature extraction and reduction in the LAMSTAR NN......Page 276
13.6.1. Correlation feature......Page 277
13.7. Concluding Comments and Discussion of Applicability......Page 279
13.A.2. Design of the network......Page 282
13.A.3. Fundamental principles......Page 283
13.A.4. Training algorithm......Page 285
13.A.5. Testing procedure......Page 287
13.A.6. Results and their analysis......Page 288
13.A.7. Summary and concluding observations......Page 289
13.A.8. LAMSTAR code (MATLAB)......Page 290
13.B. Application to Medical Diagnosis Problems......Page 297
Problems......Page 302
References......Page 308
Author Index......Page 316
Subject Index......Page 318