Bayesian Optimization : Theory and Practice Using Python

This book covers the essential theory and implementation of popular Bayesian optimization techniques in an intuitive and well-illustrated manner. The techniques covered will enable you to better tune the hyperparameters of your machine learning models and learn sample-efficient approaches to global optimization. The book begins by introducing different Bayesian optimization (BO) techniques, covering both commonly used tools and advanced topics. It follows a "develop from scratch" approach using Python, gradually building up to more advanced libraries such as BoTorch, an open-source project recently introduced by Facebook. Along the way, you'll see practical implementations of this important discipline together with thorough coverage and straightforward explanations of the essential theory. The book aims to bridge the gap between researchers and practitioners, providing both with a comprehensive, easy-to-digest, and useful reference guide. After completing this book, you will have a firm grasp of Bayesian optimization techniques, which you'll be able to put into practice in your own machine learning models.

What You Will Learn
Apply Bayesian optimization to build better machine learning models
Understand and research existing and new Bayesian optimization techniques
Leverage high-performance libraries such as BoTorch, which let you dig into and edit their inner workings
Dig into the inner workings of common optimization algorithms used to guide the search process in Bayesian optimization

Who This Book Is For
Beginner- to intermediate-level professionals in machine learning, analytics, or other data science-related roles.

Author(s): Peng Liu
Publisher: Apress
Year: 2023

Language: English
Pages: 243

Chapter 1: Bayesian Optimization Overview
Global Optimization
The Objective Function
The Observation Model
Bayesian Statistics
Bayesian Inference
Frequentist vs. Bayesian Approach
Joint, Conditional, and Marginal Probabilities
Independence
Prior and Posterior Predictive Distributions
Bayesian Inference: An Example
Bayesian Optimization Workflow
Gaussian Process
Acquisition Function
The Full Bayesian Optimization Loop
Summary
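
Chapter 1 closes with the full Bayesian optimization loop: fit a Gaussian process surrogate to the data gathered so far, maximize an acquisition function to choose the next query point, evaluate the objective there, and repeat. A minimal sketch of that loop on a 1-D toy problem, using scikit-learn's GP and a simple upper-confidence-bound acquisition (the book builds these components from scratch instead):

```python
# A minimal Bayesian optimization loop on a 1-D toy objective: fit a GP
# surrogate, maximize an acquisition function over a candidate grid,
# evaluate the objective at the winner, repeat. Sketch only; it uses
# scikit-learn's GP and an upper-confidence-bound acquisition, whereas
# the book builds these components from scratch.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Toy objective to maximize.
    return -(x - 0.6) ** 2 + 0.1 * np.sin(20 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))            # initial design
y = f(X).ravel()
grid = np.linspace(0, 1, 500).reshape(-1, 1)  # candidate locations

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.1)).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sigma                    # trade off exploitation vs. exploration
    x_next = grid[[np.argmax(ucb)]]           # next query point, shape (1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best x:", X[np.argmax(y)], "best y:", y.max())
```
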
Chapter 2: Gaussian Processes
Reviewing the Gaussian Basics
Understanding the Covariance Matrix
Marginal and Conditional Distribution of Multivariate Gaussian
Sampling from a Gaussian Distribution
Gaussian Process Regression
The Kernel Function
Extending to Other Variables
Learning from Noisy Observations
Gaussian Process in Practice
Drawing from GP Prior
Obtaining GP Posterior with Noise-Free Observations
Working with Noisy Observations
Experimenting with Different Kernel Parameters
Hyperparameter Tuning
Summary
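
Chapter 2 derives the GP posterior in closed form. Under a zero-mean prior with an RBF kernel and Gaussian observation noise, the posterior mean and covariance follow directly from the conditional distribution of a multivariate Gaussian; a from-scratch sketch of that computation (not the book's exact code):

```python
# From-scratch GP regression with an RBF kernel and noisy observations,
# mirroring the chapter's derivation of the posterior mean and covariance.
# A sketch under the usual zero-mean prior assumption.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # k(a, b) = variance * exp(-|a - b|^2 / (2 * length_scale^2))
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise_var=0.1):
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    # np.linalg.solve is fine for small demos; use a Cholesky factor at scale.
    mu = K_s.T @ np.linalg.solve(K, y_train)          # posterior mean
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)      # posterior covariance
    return mu, cov

X_train = np.array([[-2.0], [0.0], [1.5]])
y_train = np.sin(X_train).ravel()
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
mu, cov = gp_posterior(X_train, y_train, X_test)
print(mu[:5], np.sqrt(np.diag(cov))[:5])
```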

Chapter 3: Bayesian Decision Theory and Expected Improvement
Optimization via Sequential Decision-Making
Seeking the Optimal Policy
Utility-Driven Optimization
Multi-step Lookahead Policy
Bellman’s Principle of Optimality
Expected Improvement
Deriving the Closed-Form Expression
Implementing the Expected Improvement
Using Bayesian Optimization Libraries
Summary
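
Chapter 3 derives the closed-form expected improvement. For maximization with posterior mean μ(x), standard deviation σ(x), incumbent best value f*, and exploration margin ξ, the formula is EI(x) = (μ − f* − ξ)Φ(z) + σφ(z) with z = (μ − f* − ξ)/σ. A sketch of that formula (not the book's exact code):

```python
# Closed-form expected improvement for maximization, as derived in the
# chapter: EI(x) = (mu - f_best - xi) * Phi(z) + sigma * phi(z), where
# z = (mu - f_best - xi) / sigma. A sketch, not the book's exact code.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    sigma = np.maximum(sigma, 1e-9)   # guard against zero predictive std
    improvement = mu - f_best - xi
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)

# Example: score candidates given a GP posterior mean and standard deviation.
mu = np.array([0.2, 0.5, 0.9])
sigma = np.array([0.3, 0.1, 0.05])
print(expected_improvement(mu, sigma, f_best=0.8))
```
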
Chapter 4: Gaussian Process Regression with GPyTorch
Introducing GPyTorch
The Basics of PyTorch
Revisiting GP Regression
Building a GP Regression Model
Fine-Tuning the Length Scale of the Kernel Function
Fine-Tuning the Noise Variance
Delving into Kernel Functions
Combining Kernel Functions
Predicting Airline Passenger Counts
Summary
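
Chapter 4 works with GPyTorch's exact GP regression pattern: a model class that combines a mean module and a kernel module, trained by maximizing the exact marginal log likelihood. A sketch following GPyTorch's standard public API:

```python
# A standard GPyTorch exact GP regression model (constant mean plus a
# scaled RBF kernel), the pattern this chapter builds on. Sketch based
# on GPyTorch's public API.
import torch
import gpytorch

class GPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

train_x = torch.linspace(0, 1, 20)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(20)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPModel(train_x, train_y, likelihood)

# Fit hyperparameters (length scale, output scale, noise variance) by
# maximizing the exact marginal log likelihood.
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()
```
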
Chapter 5: Monte Carlo Acquisition Function with Sobol Sequences and Random Restart
Analytic Expected Improvement Using BoTorch
Introducing Hartmann Function
GP Surrogate with Optimized Hyperparameters
Introducing the Analytic EI
Optimization Using Analytic EI
Grokking the Inner Optimization Routine
Using MC Acquisition Function
Using Monte Carlo Expected Improvement
Summary
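
Chapter 5's running example is analytic EI on the (negated) six-dimensional Hartmann function, with the inner optimization seeded by Sobol samples and random restarts. A sketch using BoTorch's public API (the names of the fitting helpers vary slightly across BoTorch versions):

```python
# Analytic expected improvement on the six-dimensional Hartmann function
# with BoTorch, roughly the chapter's setting. Sketch based on BoTorch's
# public API.
import torch
from botorch.test_functions import Hartmann
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

hartmann = Hartmann(negate=True)          # negate so we maximize
train_x = torch.rand(10, 6, dtype=torch.double)
train_y = hartmann(train_x).unsqueeze(-1)

model = SingleTaskGP(train_x, train_y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

ei = ExpectedImprovement(model, best_f=train_y.max())
bounds = torch.stack([torch.zeros(6), torch.ones(6)]).double()
candidate, value = optimize_acqf(
    ei, bounds=bounds, q=1,
    num_restarts=20,     # random restarts for the inner optimizer
    raw_samples=512,     # Sobol candidates used to seed the restarts
)
print(candidate, value)
```
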
Chapter 6: Knowledge Gradient: Nested Optimization vs. One-Shot Learning
Introducing Knowledge Gradient
Monte Carlo Estimation
Optimizing Using Knowledge Gradient
One-Shot Knowledge Gradient
Sample Average Approximation
One-Shot Formulation of KG Using SAA
One-Shot KG in Practice
Optimizing the OKG Acquisition Function
Summary
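
Chapter 6 contrasts the nested formulation of the knowledge gradient with the one-shot formulation, which uses sample average approximation (SAA) to fold the inner "fantasy" optimization into a single joint problem. BoTorch's qKnowledgeGradient implements the one-shot version; a sketch (model setup repeated for self-containedness):

```python
# One-shot knowledge gradient with BoTorch's qKnowledgeGradient, which
# replaces the nested optimization over fantasy models with a single SAA
# problem. Sketch based on BoTorch's public API.
import torch
from botorch.test_functions import Hartmann
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

hartmann = Hartmann(negate=True)
train_x = torch.rand(10, 6, dtype=torch.double)
train_y = hartmann(train_x).unsqueeze(-1)
model = SingleTaskGP(train_x, train_y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

qkg = qKnowledgeGradient(model, num_fantasies=64)  # fantasy samples form the SAA
bounds = torch.stack([torch.zeros(6), torch.ones(6)]).double()
candidate, value = optimize_acqf(
    qkg, bounds=bounds, q=1,
    num_restarts=10,   # optimize_acqf handles the one-shot augmentation internally
    raw_samples=256,
)
print(candidate, value)
```
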
Chapter 7: Case Study: Tuning CNN Learning Rate with BoTorch
Seeking Global Optimum of Hartmann
Generating Initial Conditions
Updating GP Posterior
Creating a Monte Carlo Acquisition Function
The Full BO Loop
Hyperparameter Optimization for Convolutional Neural Network
Using MNIST
Defining CNN Architecture
Training CNN
Optimizing the Learning Rate
Entering the Full BO Loop
Summary
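
Chapter 7 applies the full BO loop to tune a CNN's learning rate on MNIST. In outline, the loop looks like the sketch below; train_and_evaluate is a hypothetical placeholder for the chapter's MNIST training routine, not code from the book:

```python
# Tuning a CNN learning rate with a BoTorch-style BO loop, the chapter's
# case study in outline. `train_and_evaluate` is a hypothetical placeholder
# for the MNIST training routine the chapter builds; the rest follows the
# BoTorch API used in the earlier sketches.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def train_and_evaluate(log_lr: float) -> float:
    """Hypothetical: train the CNN at lr = 10**log_lr, return validation accuracy."""
    raise NotImplementedError

bounds = torch.tensor([[-5.0], [-1.0]], dtype=torch.double)  # log10(lr) range
train_x = torch.rand(4, 1, dtype=torch.double) * (bounds[1] - bounds[0]) + bounds[0]
train_y = torch.tensor([[train_and_evaluate(x.item())] for x in train_x],
                       dtype=torch.double)

for _ in range(10):
    model = SingleTaskGP(train_x, train_y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
    acqf = qExpectedImprovement(model, best_f=train_y.max())
    candidate, _ = optimize_acqf(acqf, bounds=bounds, q=1,
                                 num_restarts=10, raw_samples=256)
    new_y = torch.tensor([[train_and_evaluate(candidate.item())]],
                         dtype=torch.double)
    train_x = torch.cat([train_x, candidate])
    train_y = torch.cat([train_y, new_y])

print("best log10(lr):", train_x[train_y.argmax()].item())
```
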
Index