A theoretical framework for Back-Propagation

Proceedings of the 1988 Connectionist Models Summer School, pages 21-28, CMU, Pittsburgh, PA, 1988
Abstract:
Among all the supervised learning algorithms, back-propagation (BP) is probably the most widely used. Although numerous experiments have demonstrated its capabilities, a deeper theoretical understanding of the algorithm is definitely needed. We present a mathematical framework for studying back-propagation based on the Lagrangian formalism. In this framework, inspired by optimal control theory, back-propagation is formulated as an optimization problem with nonlinear constraints. The Lagrange function is the sum of the output objective function and a constraint term which describes the network dynamics.
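As a concrete sketch of that formulation (the notation here is assumed for illustration, not quoted from the paper): for a layered network whose states obey x_k = F(W_k, x_{k-1}) for k = 1..N, the Lagrangian reads

    L(W, x, b) = C(x_N) + \sum_{k=1}^{N} b_k^\top \left( x_k - F(W_k, x_{k-1}) \right)

where C is the output objective and the b_k are Lagrange multipliers. Setting \partial L / \partial b_k = 0 recovers the forward pass, setting \partial L / \partial x_k = 0 yields the backward recursion b_k = (\partial F(W_{k+1}, x_k) / \partial x_k)^\top b_{k+1}, with b_N = -\partial C / \partial x_N under this sign convention, and \partial L / \partial W_k supplies the weight gradient.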
This approach suggests many natural extensions to the basic algorithm.
It also provides an extremely simple formulation (and derivation) of continuous recurrent network equations as described by Pineda.
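A hedged sketch of how such recurrent equations arise (the particular dynamics below follow Pineda's recurrent back-propagation and are my assumption, not a quotation from this paper): take relaxation dynamics

    dx/dt = -x + f(Wx) + I

whose fixed point satisfies x^* = f(Wx^*) + I, with u^* = Wx^*. Treating this fixed-point equation as the constraint term of the Lagrangian produces an adjoint system of the same relaxation form,

    dz/dt = -z + W^\top \left( f'(u^*) \odot z \right) + e,    where e is the output error at x^*,

and at its fixed point z^* the weight gradient is \partial C / \partial w_{ij} \propto z_i^* f'(u_i^*) x_j^*.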
Other easily described variations involve either additional terms in the error function, additional constraints on the set of solutions, or transformations of the parameter space. An interesting kind of constraint is an equality constraint among the weights, which can be implemented with little overhead. It is shown that this sort of constraint provides a way of putting a priori knowledge into the network while reducing the number of free parameters.
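A minimal sketch of that equality constraint (a hypothetical two-layer scalar network, not taken from the paper): both layers read one shared parameter, and its gradient is simply the sum of the gradients computed at each position where the weight appears.

    def forward(w, x):
        h = w * x          # first layer, constrained to use the shared w
        y = w * h          # second layer, same shared w, so y = w**2 * x
        return h, y

    def grad_shared(w, x, target):
        h, y = forward(w, x)
        e = y - target     # dC/dy for the cost C = 0.5*(y - target)**2
        g2 = e * h         # contribution through the second occurrence of w
        g1 = e * w * x     # contribution through the first occurrence of w
        return g1 + g2     # equality constraint: accumulate both terms

    # Check against the closed form d/dw 0.5*(w**2 * x - t)**2 = (w**2*x - t)*2*w*x
    w, x, t = 0.5, 2.0, 1.0
    assert abs(grad_shared(w, x, t) - (w*w*x - t) * 2*w*x) < 1e-12

This accumulation step is the only bookkeeping the constraint requires, which is why it can be implemented with little overhead while cutting the number of free parameters.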

Author(s): Le Cun Y.

Language: English
Tags: Computer science and computer engineering; Artificial intelligence; Neural networks