Computational neuroscience is the theoretical study of the brain, aimed at uncovering the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new area, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the field and its increasing importance in advancing our understanding of how the brain works, there has long been a need for an introductory text on what is often assumed to be an impenetrable topic.
The new edition of Fundamentals of Computational Neuroscience builds on the success and strengths of the previous editions. It introduces the theoretical foundations of neuroscience with a focus on the nature of information processing in the brain. The book introduces and motivates simplified models of neurons that are suitable for exploring information processing in large brain-like networks. It also presents several fundamental network architectures, discusses their relevance for information processing in the brain, and gives examples of models of higher-order cognitive functions to demonstrate the advanced insight that can be gained with such studies.
Each chapter starts by introducing its topic with experimental facts and conceptual questions related to the study of brain function. An additional feature is the inclusion of simple Python programs that can be used to explore many of the mechanisms explained in the book (this edition uses Python throughout; see Chapter 2, "Scientific programming with Python"). An accompanying webpage includes the programs for download. The book will be the essential text for anyone in the brain sciences who wants to get to grips with this topic.
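To give a flavour of the style of such programming examples, below is a minimal sketch of a leaky integrate-and-fire neuron simulation in Python, the kind of model treated in Chapter 5. This sketch is illustrative only and is not taken from the book; all parameter values are arbitrary choices for demonstration.

    # Minimal sketch of a leaky integrate-and-fire (LIF) neuron.
    # Illustrative only; parameters are arbitrary demonstration values.
    import numpy as np
    import matplotlib.pyplot as plt

    dt = 0.1            # integration time step (ms)
    tau = 10.0          # membrane time constant (ms)
    v_rest = -65.0      # resting potential (mV)
    v_thresh = -50.0    # spike threshold (mV)
    v_reset = -65.0     # reset potential after a spike (mV)
    RI = 20.0           # effective input drive R*I (mV)

    t = np.arange(0, 100, dt)
    v = np.full_like(t, v_rest)
    spike_times = []

    for i in range(1, len(t)):
        # Euler step of tau * dv/dt = -(v - v_rest) + R*I
        v[i] = v[i-1] + dt * (-(v[i-1] - v_rest) + RI) / tau
        if v[i] >= v_thresh:          # threshold crossing: spike and reset
            spike_times.append(t[i])
            v[i] = v_reset

    plt.plot(t, v)
    plt.xlabel('time (ms)')
    plt.ylabel('membrane potential (mV)')
    plt.show()

Running this produces the characteristic sawtooth trace of regular spiking under constant suprathreshold input; lowering RI below the threshold distance (15 mV here) silences the neuron.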
Author(s): Thomas Trappenberg
Edition: 3
Publisher: Oxford University Press
Year: 2023
Language: English
Pages: 410
City: Oxford
Cover
Fundamentals of Computational Neuroscience - Third Edition
Copyright
Preface
Mathematical formulas
Programming examples
References
Acknowledgements
Contents
I Background
1 Introduction and outlook
1.1 What is computational neuroscience?
1.1.1 Embedding within neuroscience
1.2 Organization in the brain
1.2.1 Levels of organization in the brain
1.2.2 Large-scale brain anatomy
1.2.3 Hierarchical organization of cortex
1.2.4 Rapid data transmission in the brain
1.2.5 The layered structure of neocortex
1.2.6 Columnar organization and cortical modules
1.2.7 Connectivity between neocortical layers
1.2.8 Cortical parameters
1.3 What is a model?
1.3.1 Phenomenological and explanatory models
1.3.2 Models in computational neuroscience
1.4 Is there a brain theory?
1.4.1 Emergence and adaptation
1.4.2 Levels of analysis
1.5 A computational theory of the brain
1.5.1 Why do we have brains?
1.5.2 The anticipating brain
1.5.3 Deep sparse predictive coding and the uncertain brain
2 Scientific programming with Python
2.1 The Python programming environment
2.2 Basic language elements
2.2.1 Basic data types and arrays
2.2.2 Control flow
2.2.3 Functions
2.2.4 Plotting
2.2.5 Timing the program
2.3 Code efficiency and vectorization
3 Math and Stats
3.1 Vector and matrix notations
3.2 Distance measures
3.3 The δ-function
3.4 Numerical calculus
3.4.1 Differences and sums
3.4.2 Numerical integration of an initial value problem
3.4.3 Euler method
3.4.4 Higher-order methods
3.5 Basic probability theory
3.5.1 Random numbers and their probability (density) function
3.5.2 Moments: mean, variance, etc.
3.5.3 Examples of probability (density) functions
3.5.3.1 Uniform distribution
3.5.3.2 Normal (Gaussian) distribution
3.5.3.3 Bernoulli distribution
3.5.3.4 Binomial distribution
3.5.3.5 Multinomial distribution
3.5.3.6 Poisson distribution
3.5.4 Cumulative probability (density) function and the Gaussian error function
3.5.5 Functions of random variables and the central limit theorem
3.5.6 Measuring the difference between distributions
3.5.7 Marginal, joint, and conditional distributions
II Neurons
4 Neurons and conductance-based models
4.1 Biological background
4.1.1 Structural properties
4.1.2 Information-processing mechanisms
4.1.3 Membrane potential
4.1.4 Ion channels
4.2 Synaptic mechanisms and dendritic processing
4.2.1 Chemical synapses and neurotransmitters
4.2.2 Excitatory/inhibitory synapses
4.2.3 Modelling synaptic responses
Simulation
4.2.4 Different levels of modelling
4.3 The generation of action potentials: Hodgkin–Huxley
4.3.1 The minimal mechanisms
4.3.2 Ion pumps
4.3.3 Hodgkin–Huxley equations
4.3.4 Propagation of action potentials
4.3.5 Above and beyond the Hodgkin–Huxley neuron: the Wilson model
4.4 FitzHugh–Nagumo model
4.5 Neuronal morphologies: compartmental models
4.5.1 Cable theory
4.5.2 Physical shape of neurons
4.5.3 Neuron simulators
5 Integrate-and-fire neurons and population models
5.1 The leaky integrate-and-fire models
5.1.1 Response of IF neurons to very short and constant input currents
5.1.2 Rate gain function
5.1.3 The spike-response model
5.1.4 The generalized LIF model
5.1.5 The McCulloch–Pitts neuron
5.2 Spike-time variability
5.2.1 Biological irregularities
5.2.2 Noise models for IF neurons
5.2.3 Simulating the variability of real neurons
5.2.4 The activation function depends on input
5.3 Advanced integrate-and-fire models
5.3.1 The Izhikevich neuron
5.4 The neural code and the firing rate hypothesis
5.4.1 Correlation codes and coincidence detectors
5.4.2 How accurate is spike timing?
5.5 Population dynamics: modelling the average behaviour of neurons
5.5.1 Firing rates and population averages
5.5.2 Population dynamics for slowly varying input
5.5.3 Motivations for population dynamics
5.5.4 Rapid response of populations
5.5.5 Common activation functions
5.6 Networks with non-classical synapses
5.6.1 Logical AND and sigma–pi nodes
5.6.2 Divisive inhibition
5.6.3 Further sources of modulatory effects between synaptic inputs
6 Associators and synaptic plasticity
6.1 Associative memory and Hebbian learning
6.1.1 Hebbian learning
6.1.2 Associations
6.1.3 Hebbian learning in the conditioning framework
6.1.4 Features of associators and Hebbian learning
Pattern completion and generalization
Prototypes and extraction of central tendencies
Graceful degradation
6.2 The physiology and biophysics of synaptic plasticity
6.2.1 Typical plasticity experiments
6.2.2 Spike timing dependent plasticity
6.2.3 The calcium hypothesis and modelling chemical pathways
6.3 Mathematical formulation of Hebbian plasticity
6.3.1 Spike timing dependent plasticity rules
6.3.2 Hebbian learning in population and rate models
Simulation
6.3.3 Negative weights and crossing synapses
6.4 Synaptic scaling and weight distributions
6.4.1 Examples of STDP with spiking neurons
6.4.2 Weight distributions in rate models
6.4.3 Competitive synaptic scaling and weight decay
6.4.4 Oja’s rule and principal component analysis
6.5 Plasticity with pre- and postsynaptic dynamics
III Networks
7 Feed-forward mapping networks
7.1 Deep representational learning
7.2 The perceptron
7.2.1 The simple perceptron as boolean function
7.2.2 Multilayer perceptron (MLP)
7.2.3 MNIST with MLP
7.2.4 MLP with Keras
7.2.5 Some remarks on gradient learning and biological plausibility of MLPs
7.3 Convolutional neural networks (CNNs)
7.3.1 Invariant object recognition
7.3.2 Image processing and convolution filters
7.3.3 CNN and MNIST
7.4 Probabilistic interpretation of MLPs
7.4.1 Probabilistic regression
7.4.2 Probabilistic classification
7.4.3 Maximum a posteriori (MAP) and regularization with priors
7.4.4 Mapping networks with context units
7.5 The anticipating brain
7.5.1 The brain as anticipatory system in a probabilistic framework
7.5.2 Variational free energy principle
7.5.3 Deep sparse predictive coding
7.5.4 Predictive coding of MNIST
8 Feature maps and competitive population coding
8.1 Competitive feature representations in cortical tissue
8.2 Self-organizing maps
8.2.1 The basic cortical map model
8.2.2 The Kohonen model
8.2.3 Ongoing refinements of cortical maps
8.3 Dynamic neural field theory
8.3.1 The centre-surround interaction kernel
8.3.2 Asymptotic states and the dynamics of neural fields
8.3.3 Examples of competitive representations in the brain
8.3.4 Formal analysis of attractor states
8.4 ‘Path’ integration and the Hebbian trace rule
8.4.1 Path integration with asymmetrical weight kernels
8.4.2 Self-organization of a rotation network
8.4.3 Updating the network after learning
8.5 Distributed representation and population coding
8.5.1 Sparseness
8.5.2 Probabilistic population coding
8.5.3 Optimal decoding with tuning curves
8.5.4 Implementations of decoding mechanisms
9 Recurrent associative networks and episodic memory
9.1 The auto-associative network and the hippocampus
9.1.1 Different memory types
9.1.2 The hippocampus and episodic memory
9.1.3 Learning and retrieval phase
9.2 Point-attractor neural networks (ANN)
9.2.1 Network dynamics and training
9.2.2 Signal-to-noise analysis
9.2.3 The phase diagram
9.2.4 Spurious states and the advantage of noise
9.2.5 Noisy weights and diluted attractor networks
9.3 Sparse attractor networks and correlated patterns
9.3.1 Sparse patterns and expansion recoding
9.3.2 Control of sparseness in attractor networks
9.4 Chaotic networks: a dynamic systems view
9.4.1 Attractors
9.4.2 Lyapunov functions
9.4.3 The Cohen–Grossberg theorem
9.4.4 Asymmetrical networks
9.4.5 Non-monotonic networks
9.5 The Boltzmann machine
9.5.1 ANN with hidden nodes
9.5.2 The restricted Boltzmann machine and contrastive Hebbian learning
9.5.3 Example of a basic RBM on MNIST data
9.6 Re-entry and gated recurrent networks
9.6.1 Sequence processing
9.6.2 Basic sequence processing with multilayer perceptrons and recurrent neural networks in Keras
9.6.3 Long short-term memory (LSTM) and sentiment analysis
9.6.4 Other gated architectures and attention
IV System-level models
10 Modular networks and complementary systems
10.1 Modular mapping networks
10.1.1 Mixture of experts
10.1.2 The ‘what-and-where’ task
10.1.3 Product of experts
10.2 Coupled attractor networks
10.2.1 Imprinted and composite patterns
10.2.2 Signal-to-noise analysis
10.3 Sequence learning
10.4 Complementary memory systems
10.4.1 Distributed model of working memory
10.4.2 Limited capacity of working memory
10.4.3 The spurious synchronization hypothesis
10.4.4 The interacting-reverberating-memory hypothesis
11 Motor control and reinforcement learning
11.1 Motor learning and control
11.1.1 Feedback controller
11.1.2 Forward and inverse model controller
11.1.3 The actor–critic scheme
11.2 Classical conditioning and reinforcement learning
11.3 Formalization of reinforcement learning
11.3.1 The environmental setting of a Markov decision process
11.3.2 Model-based reinforcement learning
11.3.2.1 The basic Bellman equation
11.3.2.2 Policy iteration
11.3.2.3 Bellman function for optimal policy and value (Q) iteration
11.3.3 Model-free reinforcement learning
11.3.3.1 Temporal difference method for value iteration
11.3.3.2 TD(λ)
11.4 Deep reinforcement learning
11.4.1 Value-function approximation with ANN
11.4.2 Deep Q-learning
11.4.3 Actors and policy search
11.4.4 Actor-critic schemes
11.4.5 Reinforcement learning in the brain
11.4.6 The cerebellum and motor control
11.4.7 Neural implementations of TD learning
11.4.8 Basal ganglia
12 The cognitive brain
12.1 Attentive vision
12.1.1 Attentive vision
12.1.2 Attentional bias in visual search and object recognition
12.2 An interconnecting workspace hypothesis
12.2.1 The global workspace
12.2.2 Demonstration of the global workspace in the Stroop task
12.3 Complementary decision systems
12.4 Probabilistic reasoning: causal models and Bayesian networks
12.4.1 Graphical models
12.4.2 The Pearl example
12.4.3 Probabilistic reasoning in Python using LEA
12.4.4 Expectation maximization
12.5 Structural causal models and learning causality
12.5.1 Out-of-distribution generalization
12.5.2 Structural causal models
12.5.3 Learning causality and explainable AI
12.5.4 The way forward
Index