Neural Networks and Computing. Learning Algorithms and Applications


Publisher: Imperial College Press, 2007. 322 pp.
The area of neural computing that we shall discuss in this book combines techniques from classical optimization, statistics, and information theory. Neural networks were once widely called artificial neural networks, a name that reflected how the emerging technology was related to artificial intelligence. The topic captivated the interest of computer scientists, engineers, and mathematicians alike: the charm of an adaptive system, or universal function approximator, holds compelling appeal for most researchers and engineers, and "Backpropagation" was once among the most popular keywords at engineering conferences. The area has an interesting history dating back to the late fifties, which saw the advent of the Mark I Perceptron. The truly intriguing part, however, began in the sixties, when Minsky and Papert's book Perceptrons discredited the early neural research work. The late eighties are well remembered by all neural researchers, because that is when neural network research was reinstated and repositioned. The period from the nineties into the new millennium saw the topic flourish, with applications stretching from rigorous mathematical proofs to the physical sciences and even business. Now that the theoretical background is better understood, researchers tend to use the term neural networks instead of artificial neural networks. Volumes of research literature have been published on new developments in neural theory and applications, and there have been many attempts to treat the topic either very mathematically or very practically. To most users, however, including students and engineers, the main issues remain how to employ an appropriate neural network learning algorithm and how to select a model for a given physical problem.
This book, written from a more applied perspective, provides thorough discussions of neural network learning algorithms and their related issues. We strive to balance the coverage of the major topics in neurocomputing, from learning theory, learning algorithms, and network architecture to applications. We start the book from the fundamental building block, the neuron, and the earliest neural network model, the McCulloch and Pitts model. In the beginning, we treat the learning concept through the well-known regression problem, which shows how the idea of data fitting can explain the fundamental concept of neural learning, and we employ a convex error surface to illustrate the optimization view of a learning algorithm. This is important because it shows readers that a neural learning algorithm is nothing more than a high-dimensional optimization problem. One of the beauties of neural networks is that, as a soft computing approach, the selection of a model structure and the initial settings may not have a noticeable effect on the final solution. The neural learning process, however, also suffers from being slow and from becoming stuck in local minima, especially when it is required to handle a rather complex problem. These are the two main issues addressed in the later chapters of this book. We study the neural learning problem from a new perspective and offer several modified algorithms to enhance learning speed and convergence ability. We also show that the initialization of a network has a significant effect on learning performance; different initialization methods are then discussed and elaborated.
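The optimization view described above can be made concrete with a toy regression fit. The sketch below is not taken from the book's MATLAB code; it is a minimal Python illustration, under the assumption of a one-dimensional linear model y = w·x + b, showing that minimizing a mean-squared error by gradient descent is exactly a (here two-dimensional, convex) optimization problem:

```python
import numpy as np

# Synthetic 1-D regression data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(50)

# Model y_hat = w*x + b. The mean-squared error is a convex
# (bowl-shaped) surface in (w, b), so plain gradient descent
# slides down to the single global minimum -- the same picture
# the book uses to explain neural learning as optimization.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(200):
    err = (w * x + b) - y
    w -= lr * np.mean(err * x)   # dE/dw (up to a constant factor)
    b -= lr * np.mean(err)       # dE/db

# w and b should end up close to the true slope 2 and intercept 1.
```

A multilayer network replaces the two parameters (w, b) with thousands of weights, and the error surface is then generally non-convex, which is precisely why learning speed and local minima become the issues the later chapters address.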
Later chapters of the book deal with basis function networks, the Self-Organizing Map, and feature selection. These are interesting and useful topics for most engineering and science researchers. The Self-Organizing Map (SOM) is the most widely used unsupervised neural network; it is useful for clustering, dimensionality reduction, and classification. The SOM differs substantially from the feedforward neural network in both architecture and learning algorithm. In this book, we provide thorough discussions and newly developed extended algorithms for readers to use. Classification and feature selection are discussed in Chapter 6. We include this topic in the book because bioinformatics has recently become a very important research area: gene selection using computational methods and computational cancer classification have become signature research problems of the 21st century. This book provides a detailed discussion of feature selection and of how different methods can be applied to gene selection and cancer classification. We hope this book will provide useful and inspiring information to readers. A number of software algorithms written in MATLAB are available for readers to use. Although the authors have gone through the book a few times checking for typos and errors, we would appreciate readers notifying us about any that remain.
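The SOM's competitive, neighbourhood-based update, which distinguishes it from feedforward learning, can be sketched briefly. This is not the book's MATLAB implementation but a minimal Python version of the classical Kohonen rule, assuming a tiny 1-D map of 4 units clustering two artificial 2-D clusters; the decay schedules for the learning rate and neighbourhood width are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 2-D clusters around (0,0) and (1,1).
data = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                  rng.normal(1.0, 0.1, (100, 2))])

# A 1-D map of 4 units; weight vectors start random in the unit square.
weights = rng.uniform(0, 1, (4, 2))

sigma0, lr0, epochs = 2.0, 0.5, 30
for t in range(epochs):
    lr = lr0 * (1 - t / epochs)                   # decaying learning rate
    sigma = max(sigma0 * (1 - t / epochs), 0.5)   # shrinking neighbourhood
    for sample in rng.permutation(data):
        # Competition: find the best-matching unit (BMU).
        bmu = np.argmin(np.linalg.norm(weights - sample, axis=1))
        # Cooperation: Gaussian neighbourhood on the 1-D map grid.
        d = np.arange(4) - bmu
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))
        # Adaptation: pull the BMU and its neighbours toward the sample.
        weights += lr * h[:, None] * (sample - weights)

# After training, the two ends of the map should sit near the
# two cluster centres, giving a topology-preserving clustering.
```

Note that there is no error backpropagation here: learning is driven purely by competition on the input side and cooperation on the map grid, which is the architectural contrast the chapter develops.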
Introduction
Learning Performance and Enhancement
Generalization and Performance Enhancement
Basis Function Networks for Classification
Self-organizing Maps
Classification and Feature Selection
Engineering Applications

Author(s): Chow T.W.S., Cho S.-Y.

Language: English
Tags: Computer Science and Computer Engineering; Artificial Intelligence; Neural Networks