Bayesian Optimization in Action (MEAP v10)


Apply advanced techniques to optimize machine learning processes. Bayesian optimization helps pinpoint the best configuration for your machine learning models with speed and accuracy. Bayesian Optimization in Action teaches you how to build Bayesian optimization systems from the ground up. The book transforms state-of-the-art research into usable techniques that you can easily put into practice, all fully illustrated with useful code samples. Hone your understanding of Bayesian optimization through engaging examples: from forecasting the weather, to finding the optimal amount of sugar for coffee, to deciding whether someone is psychic! Along the way, you'll explore scenarios where there are multiple objectives, where each decision has its own cost, and where feedback comes in the form of pairwise comparisons. With this collection of techniques, you'll be ready to find the optimal solution for everything from transport and logistics to cancer treatment.
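To give a flavor of what the book covers (Gaussian process modeling in chapter 2, bandit-style policies in chapter 5), here is a minimal sketch of a Bayesian optimization loop in plain NumPy: a Gaussian process surrogate models a toy "expensive" black box function, and an Upper Confidence Bound policy picks the next point to evaluate. The kernel, length scale, exploration weight, and toy objective below are illustrative choices, not code from the book (which uses GPyTorch and BoTorch).

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.5):
    # Squared-exponential covariance between two sets of 1-D points
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression: posterior mean and variance at x_test
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.maximum(var, 0.0)

def objective(x):
    # Toy "expensive black box"; the optimizer only sees queried values
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

# Bayesian optimization loop with an Upper Confidence Bound policy
grid = np.linspace(-1.0, 2.0, 200)
x_train = np.array([-0.5, 1.5])
y_train = objective(x_train)
for _ in range(10):
    mean, var = gp_posterior(x_train, y_train, grid)
    ucb = mean + 2.0 * np.sqrt(var)   # optimism under uncertainty
    x_next = grid[np.argmax(ucb)]     # most promising point to query next
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

best = x_train[np.argmax(y_train)]
print(f"best x found: {best:.3f}")
```

Each iteration refits the surrogate, scores every candidate by posterior mean plus an uncertainty bonus, and queries the winner; this mean-plus-bonus trade-off is the exploration-versus-exploitation balance the book returns to throughout.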

Author(s): Quan Nguyen
Publisher: Manning Publications
Year: 2023

Language: English
Pages: 372

Bayesian Optimization in Action MEAP V10
Copyright
Welcome
Brief contents
Chapter 1: Introduction to Bayesian optimization
1.1 Finding the optimum of an expensive black box function
1.1.1 Hyperparameter tuning as an example of an expensive black box optimization problem
1.1.2 The problem of expensive black box optimization
1.1.3 Other real-world examples of expensive black box optimization problems
1.2 Introducing Bayesian optimization
1.2.1 Modeling with a Gaussian process
1.2.2 Making decisions with a Bayesian optimization policy
1.2.3 Combining the Gaussian process and the optimization policy to form the optimization loop
1.2.4 Bayesian optimization in action
1.3 What will you learn in this book?
1.4 Summary
Chapter 2: Gaussian processes as distributions over functions
2.1 How to sell your house the Bayesian way
2.2 Modeling correlations with multivariate Gaussian distributions and Bayesian updates
2.2.1 Using multivariate Gaussian distributions to jointly model multiple variables
2.2.2 Updating multivariate Gaussian distributions
2.2.3 Modeling many variables with high-dimensional Gaussian distributions
2.3 Going from a finite to an infinite Gaussian
2.4 Implementing Gaussian processes in Python
2.4.1 Setting up the training data
2.4.2 Implementing a Gaussian process class
2.4.3 Making predictions with a Gaussian process
2.4.4 Visualizing predictions of a Gaussian process
2.4.5 Going beyond one-dimensional objective functions
2.5 Summary
2.6 Exercise
Chapter 3: Customizing a Gaussian process with the mean and covariance functions
3.1 The importance of priors in Bayesian models
3.2 Incorporating what you already know into a Gaussian process
3.3 Defining the functional behavior with the mean function
3.3.1 Using the zero mean function as the base strategy
3.3.2 Using the constant function with gradient descent
3.3.3 Using the linear function with gradient descent
3.3.4 Using the quadratic function by implementing a custom mean function
3.4 Defining variability and smoothness with the covariance function
3.4.1 Setting the scales of the covariance function
3.4.2 Controlling smoothness with different covariance functions
3.4.3 Modeling different levels of variability with multiple length scales
3.5 Exercise
3.6 Summary
Chapter 4: Refining the best result with improvement-based policies
4.1 Navigating the search space in Bayesian optimization
4.1.1 The Bayesian optimization loop and policies
4.1.2 Balancing exploration and exploitation
4.2 Finding improvement in Bayesian optimization
4.2.1 Measuring improvement with a Gaussian process
4.2.2 Computing the probability of improvement
4.2.3 Diagnosing the probability of improvement policy
4.2.4 Exercise 1: Encouraging exploration with probability of improvement
4.3 Optimizing the expected value of improvement
4.4 Exercise 2: Bayesian optimization for hyperparameter tuning
4.5 Summary
Chapter 5: Exploring the search space with bandit-style policies
5.1 Introduction to the multi-armed bandit problem
5.1.1 Finding the best slot machine at a casino
5.1.2 From multi-armed bandit to Bayesian optimization
5.2 Being optimistic under uncertainty with the Upper Confidence Bound policy
5.2.1 Optimism under uncertainty
5.2.2 Balancing exploration and exploitation
5.2.3 Implementation with BoTorch
5.3 Smart sampling with the Thompson sampling policy
5.3.1 One sample to represent the unknown
5.3.2 Implementation with BoTorch
5.4 Exercises
5.4.1 Exercise 1: Setting an exploration schedule for the UCB
5.4.2 Exercise 2: Bayesian optimization for hyperparameter tuning
5.5 Summary
Chapter 6: Leveraging information theory with entropy-based policies
6.1 Measuring knowledge with information theory
6.1.1 Measuring uncertainty with entropy
6.1.2 Looking for a remote control using entropy
6.1.3 Binary search using entropy
6.2 Entropy search in Bayesian optimization
6.2.1 Searching for the optimum using information theory
6.2.2 Implementing entropy search with BoTorch
6.3 Summary
6.4 Exercise
6.4.1 Incorporating prior knowledge into entropy search
6.4.2 Bayesian optimization for hyperparameter tuning
Chapter 7: Maximizing throughput with batch optimization
7.1 Making multiple function evaluations simultaneously
7.1.1 Making use of all available resources in parallel
7.1.2 Why can’t we use regular Bayesian optimization policies in the batch setting?
7.2 Computing the improvement and the upper confidence bound of a batch of points
7.2.1 Extending optimization heuristics to the batch setting
7.2.2 Implementing batch improvement and upper confidence bound policies
7.3 Exercise 1: Extending Thompson sampling to the batch setting via resampling
7.4 Computing the value of a batch of points using information theory
7.4.1 Finding the most informative batch of points with cyclic refinement
7.4.2 Implementing batch entropy search with BoTorch
7.5 Summary
7.6 Exercise 2: Optimizing airplane designs
Chapter 8: Satisfying extra constraints with constrained optimization
8.1 Accounting for constraints in a constrained optimization problem
8.1.1 Constraints can change the solution of an optimization problem
8.1.2 The constraint-aware Bayesian optimization framework
8.2 Constraint-aware decision making in Bayesian optimization
8.3 Exercise 1: Manual computation of constrained Expected Improvement
8.4 Implementing constrained Expected Improvement with BoTorch
8.5 Summary
8.6 Exercise 2: Constrained optimization of airplane design
Chapter 9: Balancing utility and cost with multi-fidelity optimization
9.1 Using low-fidelity approximations to study an expensive phenomenon
9.2 Multi-fidelity modeling with Gaussian processes
9.2.1 Formatting a multi-fidelity data set
9.2.2 Training a multi-fidelity Gaussian process
9.3 Balancing information and cost in multi-fidelity optimization
9.3.1 Modeling the costs of querying different fidelities
9.3.2 Optimizing the amount of information per dollar to guide optimization
9.4 Measuring performance in multi-fidelity optimization
9.5 Summary
9.6 Exercise 1: Visualizing average performance in multi-fidelity optimization
9.7 Exercise 2: Multi-fidelity optimization with multiple low-fidelity approximations
Chapter 10: Learning from pairwise comparisons with preference optimization
10.1 Black-box optimization with pairwise comparisons
10.2 Formulating a preference optimization problem and formatting pairwise comparison data
10.3 Training a preference-based Gaussian process
10.4 Preference optimization by playing king of the hill
10.5 Summary
Chapter 11: Optimizing multiple objectives at the same time
11.1 Balancing multiple optimization objectives with Bayesian optimization
11.2 Finding the boundary of the most optimal data points
11.3 Seeking to improve the optimal data boundary
11.4 Summary
11.5 Exercise: Multi-objective optimization of airplane design
Appendix: Solutions to the exercises
A.1 Chapter 2: Gaussian processes as distributions over functions
A.2 Chapter 3: Customizing a Gaussian process with the mean and covariance functions
A.3 Chapter 4: Refining the best result with improvement-based policies
A.3.1 Encouraging exploration with Probability of Improvement
A.3.2 Bayesian optimization for hyperparameter tuning
A.4 Chapter 5: Exploring the search space with bandit-style policies
A.4.1 Setting an exploration schedule for Upper Confidence Bound
A.4.2 Bayesian optimization for hyperparameter tuning
A.5 Chapter 6: Leveraging information theory with entropy-based policies
A.5.1 Incorporating prior knowledge into entropy search
A.5.2 Bayesian optimization for hyperparameter tuning
A.6 Chapter 7: Maximizing throughput with batch optimization
A.6.1 Extending Thompson sampling to the batch setting via resampling
A.6.2 Optimizing airplane designs
A.7 Chapter 8: Satisfying extra constraints with constrained optimization
A.7.1 Manual computation of constrained Expected Improvement
A.7.2 Constrained optimization of airplane design
A.8 Chapter 9: Balancing utility and cost with multi-fidelity optimization
A.8.1 Visualizing average performance in multi-fidelity optimization
A.8.2 Multi-fidelity optimization with multiple low-fidelity approximations
A.9 Chapter 11: Optimizing multiple objectives at the same time
A.10 Chapter 12: Scaling Gaussian processes to large data sets