Real-Time Multi-Chip Neural Network for Cognitive Systems

Simulation of brain neurons in real time using biophysically meaningful models is a prerequisite for a comprehensive understanding of how neurons process information and communicate with each other, and efficiently complements in-vivo experiments. In spiking neural networks (SNNs), propagated information is encoded not only by the firing rate of each neuron in the network, as in artificial neural networks (ANNs), but also by the amplitude, the spike-train patterns, and the transfer rate. The high level of realism of SNNs and their correspondingly higher computational and analytical requirements in comparison with ANNs, however, limit the size of the networks that can be realized. Consequently, the main challenge in building complex, biophysically accurate SNNs is largely posed by the high computational and data-transfer demands.

Real-Time Multi-Chip Neural Network for Cognitive Systems presents a novel real-time, reconfigurable, multi-chip SNN system architecture based on localized communication, which effectively reduces the communication cost to linear growth. The system uses double-precision floating-point arithmetic for the most biologically accurate simulation of cell behavior, and is flexible enough to allow easy implementation of various neuron-network topologies and cell-communication schemes, as well as different models and kinds of cells. The system offers high run-time configurability, which reduces the need to resynthesize it. In addition, the simulator features configurable on- and off-chip communication latencies as well as neuron-calculation latencies. All parts of the system are generated automatically from the neuron interconnection scheme in use. The simulator allows exploration of different system configurations, e.g. the interconnection scheme between the neurons and the intracellular concentrations of different chemical compounds (ions), which affect how action potentials are initiated and propagate.
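
To give a rough sense of the per-cell, per-time-step arithmetic such a simulator must evaluate in double precision, the sketch below integrates the classic (non-extended) Hodgkin-Huxley membrane equations with forward Euler. This is an illustrative approximation only: the constants are standard textbook values for the squid giant axon (voltage relative to rest), the step size is chosen arbitrarily, and none of it is taken from the book's extended Hodgkin-Huxley or inferior-olive implementations.

```cpp
// Minimal sketch (illustrative only): forward-Euler integration of the
// classic Hodgkin-Huxley membrane equations in double precision.
// Constants are textbook squid-axon values, not parameters from the book.
#include <cmath>
#include <cstdio>

struct HHState { double v, m, h, n; };  // membrane voltage [mV] and gate variables

// Voltage-dependent opening/closing rates of the m, h and n gates [1/ms].
static double am(double v) { return 0.1 * (25.0 - v) / (std::exp((25.0 - v) / 10.0) - 1.0); }
static double bm(double v) { return 4.0 * std::exp(-v / 18.0); }
static double ah(double v) { return 0.07 * std::exp(-v / 20.0); }
static double bh(double v) { return 1.0 / (std::exp((30.0 - v) / 10.0) + 1.0); }
static double an(double v) { return 0.01 * (10.0 - v) / (std::exp((10.0 - v) / 10.0) - 1.0); }
static double bn(double v) { return 0.125 * std::exp(-v / 80.0); }

// One simulation step: i_ext is the injected current [uA/cm^2], dt the step [ms].
static void hh_step(HHState& s, double i_ext, double dt) {
    const double C   = 1.0;                  // membrane capacitance [uF/cm^2]
    const double gNa = 120.0, eNa = 115.0;   // Na+ conductance / reversal potential
    const double gK  = 36.0,  eK  = -12.0;   // K+ conductance / reversal potential
    const double gL  = 0.3,   eL  = 10.6;    // leak conductance / reversal potential

    // Ionic currents through the Na+, K+ and leak channels (old state).
    const double iNa = gNa * s.m * s.m * s.m * s.h * (s.v - eNa);
    const double iK  = gK  * s.n * s.n * s.n * s.n * (s.v - eK);
    const double iL  = gL  * (s.v - eL);

    // Gate kinetics and membrane equation, advanced with forward Euler.
    s.m += dt * (am(s.v) * (1.0 - s.m) - bm(s.v) * s.m);
    s.h += dt * (ah(s.v) * (1.0 - s.h) - bh(s.v) * s.h);
    s.n += dt * (an(s.v) * (1.0 - s.n) - bn(s.v) * s.n);
    s.v += dt * (i_ext - iNa - iK - iL) / C;
}

int main() {
    HHState s{0.0, 0.05, 0.6, 0.32};          // near-resting initial state
    const double dt = 0.01;                   // [ms]
    for (int step = 0; step < 5000; ++step) { // 50 ms with constant 10 uA/cm^2 drive
        hh_step(s, 10.0, dt);
        if (step % 500 == 0)
            std::printf("t = %5.2f ms  V = %7.3f mV\n", step * dt, s.v);
    }
    return 0;
}
```

Even this simplified single-compartment form requires several exponentials and dozens of floating-point operations per neuron per step; the book's multi-compartment, multi-chip architecture addresses how to sustain such workloads in real time while keeping inter-neuron communication local.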

Author(s): Amir Zjajo, Rene van Leuken
Series: River Publishers Series in Circuits and Systems
Publisher: River Publishers
Year: 2019

Language: English
Pages: 268
City: Gistrup

Front Cover
Half Title
RIVER PUBLISHERS SERIES IN CIRCUITS AND SYSTEMS
Title Page - Real-Time Multi-Chip Neural Network for Cognitive Systems
Copyright page
Dedication
Contents
Preface
List of Contributors
List of Figures
List of Tables
List of Abbreviations
Chapter 1 - Introduction
1.1 A Real-Time Reconfigurable Multi-Chip Architecture for Large-Scale Biophysically Accurate Neuron Simulation
1.2 The Inferior Olivary Nucleus Cell
1.2.1 Abstract Model Description
1.2.2 The ION Cell Design Configuration
1.2.3 The ION Cell Cluster Controller
1.3 Multi-Chip Dataflow Architecture
1.4 Organization of the Book
References
Chapter 2 - Multi-Chip Dataflow Architecture for Massive Scale Biophysically Accurate Neuron Simulation
2.1 Introduction
2.2 System Design Configuration
2.2.1 Requirements
2.2.2 Zero Communication Time: The Optimal Approach
2.2.3 Localising Communication: How to Speed Up the Common Case
2.2.4 Network-on-Chips
2.2.5 Localise Communication between Clusters
2.2.6 Synchronisation between the Clusters
2.2.7 Adjustments to the Network to Scale over Multiple FPGAs
2.2.8 Interfacing the Outside World: Inputs and Outputs
2.2.9 Adding Flexibility: Run-Time Configuration
2.2.10 Parameters of the System
2.2.11 Connectivity and Structure Generation
2.3 System Implementation
2.3.1 Exploiting Locality: Clusters
2.3.2 Connecting Clusters: Routers
2.3.3 Tracking Time: Iteration Controller
2.3.4 Inputs and Outputs
2.3.5 The Control Bus for Run-Time Configuration
2.3.6 Automatic Structure Generation and Connectivity Generation
2.4 Experimental Results
2.5 Conclusions
References
Chapter 3 - A Real-Time Hybrid Neuron Network for Highly Parallel Cognitive Systems
3.1 Introduction
3.2 The Calculation Architecture
3.2.1 The Physical Cell Overview
3.2.2 Initialising the Physical Cells
3.2.3 Axon Hillock + Soma Hardware
3.2.3.1 Exponent operand schedule
3.2.3.2 Axon hillock and soma compartment controller
3.2.4 Dendrite Hardware
3.2.4.1 Dendrite network operation
3.2.4.2 Dendrite combine operation
3.2.4.3 Dendrite compartmental latency
3.2.5 Calculation Architecture Latency
3.2.6 Exponent Architecture
3.3 The Communication Architecture
3.3.1 Communication Architecture Overview
3.3.2 Cluster Controller
3.3.3 Routing Network
3.3.3.1 Routing method
3.3.3.2 Design specification
3.3.4 Interface Bridge
3.4 Experimental Results
3.4.1 Evaluation Method
3.4.1.1 Building a test set
3.4.1.2 Design simulation
3.4.1.3 SystemC synthesis
3.4.1.4 Post-synthesis simulation
3.4.1.5 VHDL implementation
3.4.2 Evaluation Results
3.4.2.1 Accuracy results
3.4.2.2 Latency results
3.4.2.3 Resource usage
3.4.3 Model Configuration
3.5 Conclusions
References
Chapter 4 - Digital Neuron Cells for Highly Parallel Cognitive Systems
4.1 Introduction
4.2 System Design Configuration
4.2.1 Requirements
4.2.2 Input and Output
4.2.3 Parameters
4.2.4 Scalability of Network
4.2.5 Neuron Models Implementations
4.2.6 Synthesis
4.3 System Design Implementation
4.3.1 Interface
4.3.1.1 Inputs and outputs
4.3.1.2 Locality of data
4.3.1.2.1 Localization of inputs
4.3.1.2.2 Localization of outputs
4.3.2 Implementation of the Neuron Models
4.3.2.1 The extended Hodgkin–Huxley model
4.3.2.1.1 Neuron cell
4.3.2.1.2 Physical cell
4.3.2.1.3 Cluster
4.3.2.2 Integrate-and-fire model
4.3.2.3 Izhikevich model
4.3.2.3.1 Axonal conduction delay
4.3.2.3.2 STDP
4.3.2.3.3 Spike generation
4.3.3 High-level Synthesis
4.3.3.1 Optimization with directives
4.3.3.2 Adjustments of system for HLS
4.3.3.2.1 Hodgkin–Huxley model
4.3.3.2.2 Integrate-and-fire model
4.3.3.2.3 Izhikevich model
4.4 Performance Evaluation
4.4.1 Model Configuration
4.4.2 Experimental Results
4.5 Conclusions
References
Chapter 5 - Energy-Efficient Multipath Ring Network for Heterogeneous Clustered Neuronal Arrays
5.1 Introduction
5.2 State-of-the-Art and Background Concepts
5.2.1 Neuron Models
5.2.2 Simulation Platforms
5.2.3 Communication Network Considerations
5.3 Neural Network Communication Schemes and System Structure
5.3.1 Physical System Structure
5.3.2 Extraction, Insertion, and Configuration Layer
5.3.3 Topological Layer
5.3.3.1 Multipath ring routing scheme
5.3.3.2 Traffic model
5.4 Energy-Delay Product
5.4.1 Mathematical Derivation
5.4.2 Energy-Delay Product Estimation
5.5 Conclusions
References
Chapter 6 - A Hierarchical Dataflow Architecture for Large-Scale Multi-FPGA Biophysically Accurate Neuron Simulation
6.1 Introduction
6.2 The System Overview
6.2.1 Mesh Topology
6.2.2 The Routers
6.2.3 The Clusters
6.2.4 Hodgkin–Huxley Cells
6.3 The Communication Architecture
6.4 Experimental Results
6.5 Conclusions
References
Chapter 7 - Single-Lead Neuromorphic ECG Classification System
7.1 Introduction
7.1.1 ECG Signals and Arrhythmia
7.1.2 Feature Detection
7.1.2.1 Methods and algorithms
7.1.2.1.1 QRS detection
7.1.2.1.2 P and T wave detection
7.1.3 Feature Selection
7.1.3.1 Feature selection choices
7.1.3.2 Methods and algorithms
7.1.4 Classification Methods
7.2 Feature Extraction Implementation
7.2.1 Feature Detection
7.2.1.1 QRS detection
7.2.1.2 P and T wave detection
7.2.2 Feature Selection
7.2.2.1 Feature set
7.2.2.2 Correlation matrix
7.3 Network Configuration and Results
7.3.1 Approach
7.3.2 Silhouette Coefficients
7.3.3 Clustering Methods for the Output
7.3.4 Results
7.4 Conclusion
References
Chapter 8 - Multi-Compartment Synaptic Circuit in Neuromorphic Structures
8.1 Introduction
8.1.1 Synapse
8.1.1.1 Synaptic plasticity
8.1.1.2 Synaptic receptors
8.1.1.2.1 AMPA receptor
8.1.1.2.2 NMDA receptor
8.1.1.2.3 GABA receptor
8.2 Model Extraction
8.2.1 Model of the Synapse
8.2.2 Learning Rules
8.2.2.1 Pair-based STDP
8.2.2.2 Triplet-based STDP
8.3 Component Implementations
8.3.1 Learning Rule 1: Classic STDP
8.3.2 Learning Rule 2: Advanced STDP
8.3.3 Learning Rule 3: Triplet-Based STDP
8.3.4 Synaptic Receptors
8.3.4.1 AMPA receptor
8.3.4.2 NMDA receptor
8.3.4.3 GABA receptors
8.4 Component Characterizations
8.4.1 Learning Rule 1: Classic STDP
8.4.2 Learning Rule 2: Advanced STDP
8.4.3 Learning Rule 3: Triplet-based STDP
8.4.4 Synaptic Receptors
8.4.4.1 Environment settings
8.4.4.2 Results
8.5 Neural Network with Multi-Receptor Synapses
8.5.1 Synchrony Detection Tool: Cross-Correlograms
8.5.2 Environment Settings
8.5.3 Input Patterns
8.5.4 Synchrony Detection
8.6 Conclusions
References
Chapter 9 - Conclusion and Future Work
9.1 Summary of the Results
9.2 Recommendations and Future Work
Index
About the Editors
Back Cover