Probabilistic Deep Learning with Python

Probabilistic Deep Learning: With Python, Keras and TensorFlow Probability teaches the increasingly popular probabilistic approach to deep learning, which lets you refine your results more quickly and accurately without as much trial-and-error testing. Emphasizing practical techniques that use the Python-based TensorFlow Probability framework, you'll learn to build high-performance deep learning applications that can reliably handle the noise and uncertainty of real-world data.

About the technology
The world is a noisy and uncertain place. Probabilistic deep learning models capture that noise and uncertainty and carry it into real-world scenarios. Crucial for self-driving cars and scientific testing, these techniques help deep learning engineers assess the accuracy of their results, spot errors, and improve their understanding of how algorithms work.

About the book
Probabilistic Deep Learning is a hands-on guide to the principles that support neural networks. Learn to improve network performance with the right distribution for different data types, and discover Bayesian variants that can state their own uncertainty to increase accuracy. The book provides easy-to-apply code and uses popular frameworks to keep you focused on practical applications.
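To make the approach concrete, here is a minimal sketch (not taken from the book) of the kind of model covered in part 2: a Keras regression network whose output layer is a TensorFlow Probability distribution with a learned mean and a learned, input-dependent standard deviation, trained by minimizing the negative log-likelihood. The layer sizes, optimizer settings, and the x_train/y_train names are illustrative assumptions.

```python
# Minimal sketch: a probabilistic regression model in Keras + TensorFlow Probability.
# The network outputs a Normal distribution rather than a point estimate, so it can
# express both its prediction and its uncertainty. Illustrative only, not book code.
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def negloglik(y, dist):
    # Negative log-likelihood of the observed y under the predicted distribution
    return -dist.log_prob(y)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(2),  # one output for the mean, one for the (raw) scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss=negloglik)
# model.fit(x_train, y_train, epochs=200)  # x_train, y_train: your own data (hypothetical names)
```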

Author(s): Oliver Duerr, Beate Sick, Elvis Murina
Edition: 1
Publisher: Manning Publications
Year: 2020

Language: English
Commentary: Vector PDF
City: Shelter Island, NY
Tags: Probabilistic Models; Neural Networks; Deep Learning; Bayesian Networks; Regression; Python; Convolutional Neural Networks; Bayesian Inference; Classification; Image Classification; Curve Fitting; Loss Functions; Fully Connected Neural Networks; TensorFlow Probability; Normalizing Flow

Probabilistic Deep Learning
brief contents
contents
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A roadmap
About the code
liveBook discussion forum
about the authors
about the cover illustration
Part 1—Basics of deep learning
1 Introduction to probabilistic deep learning
1.1 A first look at probabilistic models
1.2 A first brief look at deep learning (DL)
1.2.1 A success story
1.3 Classification
1.3.1 Traditional approach to image classification
1.3.2 Deep learning approach to image classification
1.3.3 Non-probabilistic classification
1.3.4 Probabilistic classification
1.3.5 Bayesian probabilistic classification
1.4 Curve fitting
1.4.1 Non-probabilistic curve fitting
1.4.2 Probabilistic curve fitting
1.4.3 Bayesian probabilistic curve fitting
1.5 When to use and when not to use DL?
1.5.1 When not to use DL
1.5.2 When to use DL
1.5.3 When to use and when not to use probabilistic models?
1.6 What you’ll learn in this book
Summary
2 Neural network architectures
2.1 Fully connected neural networks (fcNNs)
2.1.1 The biology that inspired the design of artificial NNs
2.1.2 Getting started with implementing an NN
2.1.3 Using a fully connected NN (fcNN) to classify images
2.2 Convolutional NNs for image-like data
2.2.1 Main ideas in a CNN architecture
2.2.2 A minimal CNN for edge lovers
2.2.3 Biological inspiration for a CNN architecture
2.2.4 Building and understanding a CNN
2.3 One-dimensional CNNs for ordered data
2.3.1 Format of time-ordered data
2.3.2 What’s special about ordered data?
2.3.3 Architectures for time-ordered data
Summary
3 Principles of curve fitting
3.1 “Hello world” in curve fitting
3.1.1 Fitting a linear regression model based on a loss function
3.2 Gradient descent method
3.2.1 Loss with one free model parameter
3.2.2 Loss with two free model parameters
3.3 Special DL sauce
3.3.1 Mini-batch gradient descent
3.3.2 Using SGD variants to speed up the learning
3.3.3 Automatic differentiation
3.4 Backpropagation in DL frameworks
3.4.1 Static graph frameworks
3.4.2 Dynamic graph frameworks
Summary
Part 2—Maximum likelihood approaches for probabilistic DL models
4 Building loss functions with the likelihood approach
4.1 Introduction to the MaxLike principle: The mother of all loss functions
4.2 Deriving a loss function for a classification problem
4.2.1 Binary classification problem
4.2.2 Classification problems with more than two classes
4.2.3 Relationship between NLL, cross entropy, and Kullback-Leibler divergence
4.3 Deriving a loss function for regression problems
4.3.1 Using an NN without hidden layers and one output neuron for modeling a linear relationship between input and output
4.3.2 Using a NN with hidden layers to model non-linear relationships between input and output
4.3.3 Using an NN with additional output for regression tasks with nonconstant variance
Summary
5 Probabilistic deep learning models with TensorFlow Probability
5.1 Evaluating and comparing different probabilistic prediction models
5.2 Introducing TensorFlow Probability (TFP)
5.3 Modeling continuous data with TFP
5.3.1 Fitting and evaluating a linear regression model with constant variance
5.3.2 Fitting and evaluating a linear regression model with a nonconstant standard deviation
5.4 Modeling count data with TensorFlow Probability
5.4.1 The Poisson distribution for count data
5.4.2 Extending the Poisson distribution to a zero-inflated Poisson (ZIP) distribution
Summary
6 Probabilistic deep learning models in the wild
6.1 Flexible probability distributions in state-of-the-art DL models
6.1.1 Multinomial distribution as a flexible distribution
6.1.2 Making sense of discretized logistic mixture
6.2 Case study: Bavarian roadkills
6.3 Go with the flow: Introduction to normalizing flows (NFs)
6.3.1 The principle idea of NFs
6.3.2 The change of variable technique for probabilities
6.3.3 Fitting an NF to data
6.3.4 Going deeper by chaining flows
6.3.5 Transformation between higher dimensional spaces*
6.3.6 Using networks to control flows
6.3.7 Fun with flows: Sampling faces
Summary
Part 3—Bayesian approaches for probabilistic DL models
7 Bayesian learning
7.1 What’s wrong with non-Bayesian DL: The elephant in the room
7.2 The first encounter with a Bayesian approach
7.2.1 Bayesian model: The hacker’s way
7.2.2 What did we just do?
7.3 The Bayesian approach for probabilistic models
7.3.1 Training and prediction with a Bayesian model
7.3.2 A coin toss as a Hello World example for Bayesian models
7.3.3 Revisiting the Bayesian linear regression model
Summary
8 Bayesian neural networks
8.1 Bayesian neural networks (BNNs)
8.2 Variational inference (VI) as an approximative Bayes approach
8.2.1 Looking under the hood of VI*
8.2.2 Applying VI to the toy problem*
8.3 Variational inference with TensorFlow Probability
8.4 MC dropout as an approximate Bayes approach
8.4.1 Classical dropout used during training
8.4.2 MC dropout used during train and test times
8.5 Case studies
8.5.1 Regression case study on extrapolation
8.5.2 Classification case study with novel classes
Summary
Glossary of terms and abbreviations
index