Updated with new code, new projects, and new chapters, Machine Learning with TensorFlow, Second Edition gives readers a solid foundation in machine-learning concepts and the TensorFlow library. Written by NASA JPL Deputy CTO and Principal Data Scientist Chris Mattmann, the book accompanies every example with a downloadable Jupyter Notebook for a hands-on experience coding TensorFlow with Python. New and revised content expands coverage of core machine-learning algorithms and of advances in neural networks, such as VGG-Face facial-identification classifiers and deep-speech classifiers.
About the technology
Supercharge your data analysis with machine learning! ML algorithms automatically improve as they process data, so results get better over time. You don’t have to be a mathematician to use ML: Tools like Google’s TensorFlow library help with complex calculations so you can focus on getting the answers you need.
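As a taste of what that looks like in practice, here is a minimal sketch (not taken from the book) of TensorFlow 2.x fitting a one-parameter line by gradient descent, with the library computing the derivative for you:

import tensorflow as tf

# Toy data: y is roughly 3 * x (values invented for illustration).
xs = tf.constant([1.0, 2.0, 3.0, 4.0])
ys = tf.constant([3.1, 5.9, 9.2, 11.8])

w = tf.Variable(0.0)  # the single parameter the model learns

for step in range(100):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * xs - ys))  # mean squared error
    grad = tape.gradient(loss, w)  # TensorFlow computes d(loss)/dw for you
    w.assign_sub(0.05 * grad)      # plain gradient-descent update

print(w.numpy())  # converges near 3.0

No hand-derived calculus is needed: the GradientTape records the computation and differentiates it automatically.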
About the book
Machine Learning with TensorFlow, Second Edition is a fully revised guide to building machine learning models using Python and TensorFlow. You’ll apply core ML concepts to real-world challenges, such as sentiment analysis, text classification, and image recognition. Hands-on examples illustrate neural-network techniques for deep speech processing, facial identification, and autoencoding with CIFAR-10.
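For a flavor of the sentiment-analysis material, here is a minimal sketch, not the book's code, of logistic regression over a bag of words using TensorFlow's Keras API (the TextVectorization layer used here assumes TF 2.6 or later; the toy reviews and labels are invented):

import tensorflow as tf

reviews = ["great movie", "terrible plot", "loved it", "boring and bad"]
labels = tf.constant([[1.0], [0.0], [1.0], [0.0]])  # 1 = positive, 0 = negative

# Turn the raw text into fixed-length bag-of-words count vectors.
vectorizer = tf.keras.layers.TextVectorization(output_mode="count")
vectorizer.adapt(reviews)
x = vectorizer(tf.constant(reviews))

# A single sigmoid unit over the word counts is exactly logistic regression.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="binary_crossentropy")
model.fit(x, labels, epochs=100, verbose=0)

print(model.predict(vectorizer(tf.constant(["loved the movie"]))))  # near 1.0

The book builds the same idea out at full scale on the Large Movie Review dataset in chapter 6.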
What's inside
• Machine-learning fundamentals in TensorFlow
• Choosing the best ML approaches
• Visualizing algorithms with TensorBoard
• Sharing results with collaborators
• Running models in Docker
About the reader
Requires intermediate Python skills and knowledge of general algebraic concepts like vectors and matrices. Examples use the super-stable 1.15.x branch of TensorFlow, and TensorFlow 2.x is covered as well.
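If you are unsure which branch you are running, a quick check like this sketch (an assumption, not taken from the book's appendix) helps; under TensorFlow 2.x the tf.compat.v1 shim lets 1.15.x-style session code run unchanged:

import tensorflow as tf

print(tf.__version__)  # e.g. "1.15.5" or "2.8.0"

if tf.__version__.startswith("2"):
    # Run 1.x-style graph/session code under TensorFlow 2.x.
    tf1 = tf.compat.v1
    tf1.disable_eager_execution()
    a = tf1.placeholder(tf.float32)
    with tf1.Session() as sess:
        print(sess.run(a + 1.0, feed_dict={a: 41.0}))  # prints 42.0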
About the author
Chris Mattmann is the Division Manager of the Artificial Intelligence, Analytics, and Innovation Organization at the NASA Jet Propulsion Laboratory. The first edition of this book was written by Nishant Shukla with Kenneth Fricklas.
Author(s): Chris A. Mattmann
Edition: 2
Publisher: Manning Publications
Year: 2021
Language: English
Commentary: Vector PDF
Pages: 456
City: Shelter Island, NY
Tags: Machine Learning; Neural Networks; Deep Learning; Natural Language Processing; Reinforcement Learning; Python; Convolutional Neural Networks; Recurrent Neural Networks; Autoencoders; Classification; Clustering; Sentiment Analysis; TensorFlow; Linear Regression; Logistic Regression; Jupyter; Long Short-Term Memory; Markov Models; TensorBoard; Hidden Markov Models
Machine Learning with TensorFlow, Second Edition
brief contents
contents
foreword
preface
acknowledgments
about this book
How this book is organized: A roadmap
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1: Your machine-learning rig
Chapter 1: A machine-learning odyssey
1.1 Machine-learning fundamentals
1.1.1 Parameters
1.1.2 Learning and inference
1.2 Data representation and features
1.3 Distance metrics
1.4 Types of learning
1.4.1 Supervised learning
1.4.2 Unsupervised learning
1.4.3 Reinforcement learning
1.4.4 Meta-learning
1.5 TensorFlow
1.6 Overview of future chapters
Chapter 2: TensorFlow essentials
2.1 Ensuring that TensorFlow works
2.2 Representing tensors
2.3 Creating operators
2.4 Executing operators within sessions
2.5 Understanding code as a graph
2.5.1 Setting session configurations
2.6 Writing code in Jupyter
2.7 Using variables
2.8 Saving and loading variables
2.9 Visualizing data using TensorBoard
2.9.1 Implementing a moving average
2.9.2 Visualizing the moving average
2.10 Putting it all together: The TensorFlow system architecture and API
Part 2: Core learning algorithms
Chapter 3: Linear regression and beyond
3.1 Formal notation
3.1.1 How do you know the regression algorithm is working?
3.2 Linear regression
3.3 Polynomial model
3.4 Regularization
3.5 Application of linear regression
Chapter 4: Using regression for call-center volume prediction
4.1 What is 311?
4.2 Cleaning the data for regression
4.3 What’s in a bell curve? Predicting Gaussian distributions
4.4 Training your call prediction regressor
4.5 Visualizing the results and plotting the error
4.6 Regularization and training test splits
Chapter 5: A gentle introduction to classification
5.1 Formal notation
5.2 Measuring performance
5.2.1 Accuracy
5.2.2 Precision and recall
5.2.3 Receiver operating characteristic curve
5.3 Using linear regression for classification
5.4 Using logistic regression
5.4.1 Solving 1D logistic regression
5.4.2 Solving 2D logistic regression
5.5 Multiclass classifier
5.5.1 One-versus-all
5.5.2 One-versus-one
5.5.3 Softmax regression
5.6 Application of classification
Chapter 6: Sentiment classification: Large movie-review dataset
6.1 Using the Bag of Words model
6.1.1 Applying the Bag of Words model to movie reviews
6.1.2 Cleaning all the movie reviews
6.1.3 Exploratory data analysis on your Bag of Words
6.2 Building a sentiment classifier using logistic regression
6.2.1 Setting up the training for your model
6.2.2 Performing the training for your model
6.3 Making predictions using your sentiment classifier
6.4 Measuring the effectiveness of your classifier
6.5 Creating the softmax-regression sentiment classifier
6.6 Submitting your results to Kaggle
Chapter 7: Automatically clustering data
7.1 Traversing files in TensorFlow
7.2 Extracting features from audio
7.3 Using k-means clustering
7.4 Segmenting audio
7.5 Clustering with a self-organizing map
7.6 Applying clustering
Chapter 8: Inferring user activity from Android accelerometer data
8.1 The User Activity from Walking dataset
8.1.1 Creating the dataset
8.1.2 Computing jerk and extracting the feature vector
8.2 Clustering similar participants based on jerk magnitudes
8.3 Different classes of user activity for a single participant
Chapter 9: Hidden Markov models
9.1 Example of a not-so-interpretable model
9.2 Markov model
9.3 Hidden Markov model
9.4 Forward algorithm
9.5 Viterbi decoding
9.6 Uses of HMMs
9.6.1 Modeling a video
9.6.2 Modeling DNA
9.6.3 Modeling an image
9.7 Application of HMMs
Chapter 10: Part-of-speech tagging and word-sense disambiguation
10.1 Review of HMM example: Rainy or Sunny
10.2 PoS tagging
10.2.1 The big picture: Training and predicting PoS with HMMs
10.2.2 Generating the ambiguity PoS tagged dataset
10.3 Algorithms for building the HMM for PoS disambiguation
10.3.1 Generating the emission probabilities
10.4 Running the HMM and evaluating its output
10.5 Getting more training data from the Brown Corpus
10.6 Defining error bars and metrics for PoS tagging
Part 3: The neural network paradigm
Chapter 11: A peek into autoencoders
11.1 Neural networks
11.2 Autoencoders
11.3 Batch training
11.4 Working with images
11.5 Application of autoencoders
Chapter 12: Applying autoencoders: The CIFAR-10 image dataset
12.1 What is CIFAR-10?
12.1.1 Evaluating your CIFAR-10 autoencoder
12.2 Autoencoders as classifiers
12.2.1 Using the autoencoder as a classifier via loss
12.3 Denoising autoencoders
12.4 Stacked deep autoencoders
Chapter 13: Reinforcement learning
13.1 Formal notions
13.1.1 Policy
13.1.2 Utility
13.2 Applying reinforcement learning
13.3 Implementing reinforcement learning
13.4 Exploring other applications of reinforcement learning
Chapter 14: Convolutional neural networks
14.1 Drawback of neural networks
14.2 Convolutional neural networks
14.3 Preparing the image
14.3.1 Generating filters
14.3.2 Convolving using filters
14.3.3 Max pooling
14.4 Implementing a CNN in TensorFlow
14.4.1 Measuring performance
14.4.2 Training the classifier
14.5 Tips and tricks to improve performance
14.6 Application of CNNs
Chapter 15: Building a real-world CNN: VGG-Face and VGG-Face Lite
15.1 Making a real-world CNN architecture for CIFAR-10
15.1.1 Loading and preparing the CIFAR-10 image data
15.1.2 Performing data augmentation
15.2 Building a deeper CNN architecture for CIFAR-10
15.2.1 CNN optimizations for increasing learned parameter resilience
15.3 Training and applying a better CIFAR-10 CNN
15.4 Testing and evaluating your CNN for CIFAR-10
15.4.1 CIFAR-10 accuracy results and ROC curves
15.4.2 Evaluating the softmax predictions per class
15.5 Building VGG-Face for facial recognition
15.5.1 Picking a subset of VGG-Face for training VGG-Face Lite
15.5.2 TensorFlow’s Dataset API and data augmentation
15.5.3 Creating a TensorFlow dataset
15.5.4 Training using TensorFlow datasets
15.5.5 VGG-Face Lite model and training
15.5.6 Training and evaluating VGG-Face Lite
15.5.7 Evaluating and predicting with VGG-Face Lite
Chapter 16: Recurrent neural networks
16.1 Introduction to RNNs
16.2 Implementing a recurrent neural network
16.3 Using a predictive model for time-series data
16.4 Applying RNNs
Chapter 17: LSTMs and automatic speech recognition
17.1 Preparing the LibriSpeech corpus
17.1.1 Downloading, cleaning, and preparing LibriSpeech OpenSLR data
17.1.2 Converting the audio
17.1.3 Generating per-audio transcripts
17.1.4 Aggregating audio and transcripts
17.2 Using the deep-speech model
17.2.1 Preparing the input audio data for deep speech
17.2.2 Preparing the text transcripts as character-level numerical data
17.2.3 The deep-speech model in TensorFlow
17.2.4 Connectionist temporal classification in TensorFlow
17.3 Training and evaluating deep speech
Chapter 18: Sequence-to-sequence models for chatbots
18.1 Building on classification and RNNs
18.2 Understanding seq2seq architecture
18.3 Vector representation of symbols
18.4 Putting it all together
18.5 Gathering dialogue data
Chapter 19: Utility landscape
19.1 Preference model
19.2 Image embedding
19.3 Ranking images
What’s next
appendix: Installation instructions
A.1 Installing the book’s code with Docker
A.1.1 Installing Docker in Windows
A.1.2 Installing Docker in Linux
A.1.3 Installing Docker in macOS
A.1.4 Using Docker
A.2 Getting the data and storing models
A.3 Necessary libraries
A.4 Converting the call-center example to TensorFlow 2
A.4.1 The call-center example with TF2
index