Evolutionary Deep Learning: Genetic algorithms and neural networks


Discover one-of-a-kind AI strategies never before seen outside of academic papers! Learn how the principles of evolutionary computation overcome deep learning's common pitfalls and deliver adaptable model upgrades without constant manual adjustment.

In Evolutionary Deep Learning you will learn how to:

• Solve complex design and analysis problems with evolutionary computation
• Tune deep learning hyperparameters with evolutionary computation (EC), genetic algorithms, and particle swarm optimization
• Use unsupervised learning with a deep learning autoencoder to regenerate sample data
• Understand the basics of reinforcement learning and the Q-learning equation
• Apply Q-learning to deep learning to produce deep reinforcement learning
• Optimize the loss function and network architecture of unsupervised autoencoders
• Make an evolutionary agent that can play an OpenAI Gym game

Evolutionary Deep Learning is a guide to improving your deep learning models with AutoML enhancements based on the principles of biological evolution. This approach uses lesser-known AI techniques to boost performance without hours of data annotation or manual hyperparameter tuning. In this one-of-a-kind guide, you'll discover tools for optimizing everything from data collection to your network architecture.

About the technology
Deep learning meets evolutionary biology in this incredible book. Explore how biology-inspired algorithms and intuitions amplify the power of neural networks to solve tricky search, optimization, and control problems. Relevant, practical, and extremely interesting examples demonstrate how ancient lessons from the natural world are shaping the cutting edge of data science.

About the book
Evolutionary Deep Learning introduces evolutionary computation (EC) and gives you a toolbox of techniques you can apply throughout the deep learning pipeline. Discover genetic algorithms and EC approaches to network topology, generative modeling, reinforcement learning, and more! Interactive Colab notebooks give you an opportunity to experiment as you explore.

What's inside
• Solve complex design and analysis problems with evolutionary computation
• Tune deep learning hyperparameters
• Apply Q-learning to deep learning to produce deep reinforcement learning
• Optimize the loss function and network architecture of unsupervised autoencoders
• Make an evolutionary agent that can play an OpenAI Gym game

About the reader
For data scientists who know Python.

About the author
Micheal Lanham is a proven software and tech innovator with over 20 years of experience.
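To give a flavor of what the book covers: the "one max" problem (chapters 2 and 3 of the contents below) is the classic hello-world of genetic algorithms — evolve a bit string toward all ones using selection, crossover, and mutation. The following pure-Python sketch illustrates that loop; the names, parameters, and structure here are illustrative, not taken from the book's own listings (which use DEAP).

```python
import random

random.seed(42)
GENES, POP, GENS = 20, 50, 60

def fitness(ind):
    # "One max" fitness: count the 1-bits; all ones (20) is optimal.
    return sum(ind)

def tournament(pop, k=3):
    # Tournament selection: the fittest of k randomly chosen individuals.
    return max(random.sample(pop, k), key=fitness)

# Random initial population of bit strings.
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, GENES)                       # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [g ^ (random.random() < 0.02) for g in child]  # per-bit flip mutation
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
```

After a few dozen generations the best individual is at or near the all-ones optimum. The book builds this loop from scratch in chapter 2, then rebuilds it with the DEAP library in chapter 3 and applies the same machinery to hyperparameter tuning and network architecture search.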

Author(s): Micheal Lanham
Edition: 1
Publisher: Manning
Year: 2023

Language: English
Commentary: Publisher's PDF
Pages: 360
City: Shelter Island, NY
Tags: Machine Learning; Genetic Algorithms; Deep Learning; Python; Convolutional Neural Networks; Autoencoders; Generative Adversarial Networks; Hyperparameter Tuning; Game of Life; Evolutionary Algorithms; Simulations; DEAP; NEAT; Evolutionary Learning

Evolutionary Deep Learning
brief contents
contents
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A road map
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1: Getting started
Chapter 1: Introducing evolutionary deep learning
1.1 What is evolutionary deep learning?
1.1.1 Introducing evolutionary computation
1.2 The why and where of evolutionary deep learning
1.3 The need for deep learning optimization
1.3.1 Optimizing the network architecture
1.4 Automating optimization with automated machine learning
1.4.1 What is automated machine learning?
1.5 Applications of evolutionary deep learning
1.5.1 Model selection: Weight search
1.5.2 Model architecture: Architecture optimization
1.5.3 Hyperparameter tuning/optimization
1.5.4 Validation and loss function optimization
1.5.5 Neuroevolution of augmenting topologies
1.5.6 Goals
Chapter 2: Introducing evolutionary computation
2.1 Conway’s Game of Life on Google Colaboratory
2.2 Simulating life with Python
2.2.1 Learning exercises
2.3 Life simulation as optimization
2.3.1 Learning exercises
2.4 Adding evolution to the life simulation
2.4.1 Simulating evolution
2.4.2 Learning exercises
2.4.3 Some background on Darwin and evolution
2.4.4 Natural selection and survival of the fittest
2.5 Genetic algorithms in Python
2.5.1 Understanding genetics and meiosis
2.5.2 Coding genetic algorithms
2.5.3 Constructing the population
2.5.4 Evaluating fitness
2.5.5 Selecting for reproduction (crossover)
2.5.6 Applying crossover: Reproduction
2.5.7 Applying mutation and variation
2.5.8 Putting it all together
2.5.9 Understanding genetic algorithm hyperparameters
2.5.10 Learning exercises
Chapter 3: Introducing genetic algorithms with DEAP
3.1 Genetic algorithms in DEAP
3.1.1 One max with DEAP
3.1.2 Learning exercises
3.2 Solving the Queen’s Gambit
3.2.1 Learning exercises
3.3 Helping a traveling salesman
3.3.1 Building the TSP solver
3.3.2 Learning exercises
3.4 Selecting genetic operators for improved evolution
3.4.1 Learning exercises
3.5 Painting with the EvoLisa
3.5.1 Learning exercises
Chapter 4: More evolutionary computation with DEAP
4.1 Genetic programming with DEAP
4.1.1 Solving regression with genetic programming
4.1.2 Learning exercises
4.2 Particle swarm optimization with DEAP
4.2.1 Solving equations with PSO
4.2.2 Learning exercises
4.3 Coevolving solutions with DEAP
4.3.1 Coevolving genetic programming with genetic algorithms
4.4 Evolutionary strategies with DEAP
4.4.1 Applying evolutionary strategies to function approximation
4.4.2 Revisiting the EvoLisa
4.4.3 Learning exercises
4.5 Differential evolution with DEAP
4.5.1 Approximating complex and discontinuous functions with DE
4.5.2 Learning exercises
Part 2: Optimizing deep learning
Chapter 5: Automating hyperparameter optimization
5.1 Option selection and hyperparameter tuning
5.1.1 Tuning hyperparameter strategies
5.1.2 Selecting model options
5.2 Automating HPO with random search
5.2.1 Applying random search to HPO
5.3 Grid search and HPO
5.3.1 Using grid search for automatic HPO
5.4 Evolutionary computation for HPO
5.4.1 Particle swarm optimization for HPO
5.4.2 Adding EC and DEAP to automatic HPO
5.5 Genetic algorithms and evolutionary strategies for HPO
5.5.1 Applying evolutionary strategies to HPO
5.5.2 Expanding dimensions with principal component analysis
5.6 Differential evolution for HPO
5.6.1 Differential search for evolving HPO
Chapter 6: Neuroevolution optimization
6.1 Multilayered perceptron in NumPy
6.1.1 Learning exercises
6.2 Genetic algorithms as deep learning optimizers
6.2.1 Learning exercises
6.3 Other evolutionary methods for neurooptimization
6.3.1 Learning exercises
6.4 Applying neuroevolution optimization to Keras
6.4.1 Learning exercises
6.5 Understanding the limits of evolutionary optimization
6.5.1 Learning exercises
Chapter 7: Evolutionary convolutional neural networks
7.1 Reviewing convolutional neural networks in Keras
7.1.1 Understanding CNN layer problems
7.1.2 Learning exercises
7.2 Encoding a network architecture in genes
7.2.1 Learning exercises
7.3 Creating the mating crossover operation
7.4 Developing a custom mutation operator
7.5 Evolving convolutional network architecture
7.5.1 Learning exercises
Part 3: Advanced applications
Chapter 8: Evolving autoencoders
8.1 The convolution autoencoder
8.1.1 Introducing autoencoders
8.1.2 Building a convolutional autoencoder
8.1.3 Learning exercises
8.1.4 Generalizing a convolutional AE
8.1.5 Improving the autoencoder
8.2 Evolutionary AE optimization
8.2.1 Building the AE gene sequence
8.2.2 Learning exercises
8.3 Mating and mutating the autoencoder gene sequence
8.4 Evolving an autoencoder
8.4.1 Learning exercises
8.5 Building variational autoencoders
8.5.1 Variational autoencoders: A review
8.5.2 Implementing a VAE
8.5.3 Learning exercises
Chapter 9: Generative deep learning and evolution
9.1 Generative adversarial networks
9.1.1 Introducing GANs
9.1.2 Building a convolutional generative adversarial network in Keras
9.1.3 Learning exercises
9.2 The challenges of training a GAN
9.2.1 The GAN optimization problem
9.2.2 Observing vanishing gradients
9.2.3 Observing mode collapse in GANs
9.2.4 Observing convergence failures in GANs
9.2.5 Learning exercises
9.3 Fixing GAN problems with Wasserstein loss
9.3.1 Understanding Wasserstein loss
9.3.2 Improving the DCGAN with Wasserstein loss
9.4 Encoding the Wasserstein DCGAN for evolution
9.4.1 Learning exercises
9.5 Optimizing the DCGAN with genetic algorithms
9.5.1 Learning exercises
Chapter 10: NEAT: NeuroEvolution of Augmenting Topologies
10.1 Exploring NEAT with NEAT-Python
10.1.1 Learning exercises
10.2 Visualizing an evolved NEAT network
10.3 Exercising the capabilities of NEAT
10.3.1 Learning exercises
10.4 Exercising NEAT to classify images
10.4.1 Learning exercises
10.5 Uncovering the role of speciation in evolving topologies
10.5.1 Tuning NEAT speciation
10.5.2 Learning exercises
Chapter 11: Evolutionary learning with NEAT
11.1 Introducing reinforcement learning
11.1.1 Q-learning agent on the frozen lake
11.1.2 Learning exercises
11.2 Exploring complex problems from the OpenAI Gym
11.2.1 Learning exercises
11.3 Solving reinforcement learning problems with NEAT
11.3.1 Learning exercises
11.4 Solving Gym’s lunar lander problem with NEAT agents
11.4.1 Learning exercises
11.5 Solving Gym’s lunar lander problem with a deep Q-network
Chapter 12: Evolutionary machine learning and beyond
12.1 Evolution and machine learning with gene expression programming
12.1.1 Learning exercises
12.2 Revisiting reinforcement learning with Geppy
12.2.1 Learning exercises
12.3 Introducing instinctual learning
12.3.1 The basics of instinctual learning
12.3.2 Developing generalized instincts
12.3.3 Evolving generalized solutions without instincts
12.3.4 Learning exercises
12.4 Generalized learning with genetic programming
12.4.1 Learning exercises
12.5 The future of evolutionary machine learning
12.5.1 Is evolution broken?
12.5.2 Evolutionary plasticity
12.5.3 Improving evolution with plasticity
12.5.4 Computation and evolutionary search
12.6 Generalization with instinctual deep and deep reinforcement learning
appendix
A.1 Accessing the source code
A.2 Running code on other platforms
index