Institute for Theoretical Computer Science
Graz University of Technology
The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even "found" physical objects, for meaningful computations, (iii) that it provides a theoretical context in which heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, and (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.
The Liquid State Machine (LSM) was proposed as a computational model that is more adequate for modelling computations in cortical microcircuits than traditional models, such as Turing machines or attractor-based models in dynamical systems. In contrast to these other models, the LSM is a model for real-time computations on continuous streams of data (such as spike trains, i.e., sequences of action potentials of neurons that provide external inputs to a cortical microcircuit). In other words, both the inputs and the outputs of an LSM are streams of data in continuous time.
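The stream-in, stream-out character of the model can be illustrated with a minimal sketch. The following is not the spiking cortical-microcircuit model itself but a simplified discrete-time analogue (in the style of reservoir computing): a randomly connected "liquid" maps an input stream to a stream of high-dimensional states, and a linear readout, fitted here by least squares, produces an output stream from those states. All sizes, scaling constants, and the delayed-input target task are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                         # liquid (reservoir) size -- illustrative
W_in = rng.normal(scale=0.5, size=N)            # random input weights
W = rng.normal(scale=1.0, size=(N, N))          # random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # rescale so the dynamics have fading memory

def run_liquid(u):
    """Drive the liquid with the input stream u; return the state trajectory."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)         # liquid state update at each time step
        states.append(x.copy())
    return np.array(states)

# Input stream: the liquid turns it into a stream of N-dimensional states.
u = np.sin(np.linspace(0, 4 * np.pi, 200))
X = run_liquid(u)

# Readout: a linear map from liquid states to the target output stream
# (here, a delayed copy of the input -- a hypothetical example task).
target = np.roll(u, 5)
w_out, *_ = np.linalg.lstsq(X, target, rcond=None)
y = X @ w_out                                   # the output stream
```

The key point the sketch conveys is that the liquid itself is fixed and random; only the readout is trained, and several readouts could be fitted on the same state stream `X` to multiplex different computations on a common input.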