This volume includes some of the key research papers in machine learning produced at MIT and Siemens during a three-year joint research effort. It covers many different styles of machine learning, organized into three parts. Part I, theory, includes three papers on theoretical aspects of machine learning: the first two use the theory of computational complexity to derive fundamental limits on what is efficiently learnable, and the third provides an efficient algorithm for identifying finite automata. Part II, artificial intelligence and symbolic learning methods, includes five papers giving an overview of the state of the art and future developments in machine learning, a subfield of artificial intelligence dealing with automated knowledge acquisition and knowledge revision. Part III, neural and collective computation, includes five papers sampling the theoretical diversity and trends in the vigorous new research field of neural networks: massively parallel symbolic induction, task decomposition through competition, phoneme discrimination, behavior-based learning, and self-repairing neural networks.
Author(s): Stephen José Hanson, Werner Remmele (auth.), Stephen José Hanson, Werner Remmele, Ronald L. Rivest (eds.)
Series: Lecture Notes in Computer Science 661
Edition: 1
Publisher: Springer-Verlag Berlin Heidelberg
Year: 1993
Language: English
Pages: 276
Tags: Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Processor Architectures
Strategic directions in machine learning....Pages 1-4
Introduction....Pages 5-7
Training a 3-node neural network is NP-complete....Pages 9-28
Cryptographic limitations on learning Boolean formulae and finite automata....Pages 29-49
Inference of finite automata using homing sequences....Pages 51-73
Introduction....Pages 75-77
Adaptive search by learning from incomplete explanations of failures....Pages 79-92
Learning of rules for fault diagnosis in power supply networks....Pages 93-105
Cross references are features....Pages 107-123
The schema mechanism....Pages 125-138
L-ATMS: A tight integration of EBL and the ATMS....Pages 139-152
Introduction....Pages 153-156
Massively parallel symbolic induction of protein structure/function relationships....Pages 157-173
Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks....Pages 175-202
Phoneme discrimination using connectionist networks....Pages 203-227
Behavior-based learning to control IR oven heating: Preliminary investigations....Pages 229-240
Trellis codes, receptive fields, and fault tolerant, self-repairing neural networks....Pages 241-268