Practical patterns for scaling machine learning from your laptop to a distributed cluster.
In Distributed Machine Learning Patterns you will learn how to:
• Apply distributed systems patterns to build scalable and reliable machine learning projects
• Construct machine learning pipelines with data ingestion, distributed training, model serving, and more
• Automate machine learning tasks with Kubernetes, TensorFlow, Kubeflow, and Argo Workflows
• Make trade-offs between different patterns and approaches
• Manage and monitor machine learning workloads at scale
Distributed Machine Learning Patterns teaches you how to scale machine learning models from your laptop to large distributed clusters. In it, you’ll learn how to apply established distributed systems patterns to machine learning projects, and explore new ML-specific patterns as well. Firmly rooted in the real world, this book demonstrates how to apply patterns using examples built on TensorFlow, Kubernetes, Kubeflow, and Argo Workflows. Real-world scenarios, hands-on projects, and clear, practical DevOps techniques let you easily launch, manage, and monitor cloud-native distributed machine learning pipelines.
About the technology
Scaling up models from standalone devices to large distributed clusters is one of the biggest challenges faced by modern machine learning practitioners. Distributed machine learning systems allow developers to handle extremely large datasets across multiple clusters, take advantage of automation tools, and benefit from hardware acceleration. In this book, Kubeflow co-chair Yuan Tang shares patterns, techniques, and experience gained from years spent building and managing cutting-edge distributed machine learning infrastructure.
About the book
Distributed Machine Learning Patterns is filled with practical patterns for running machine learning systems on distributed Kubernetes clusters in the cloud. Each pattern is designed to help solve common challenges faced when building distributed machine learning systems, including supporting distributed model training, handling unexpected failures, and serving models under dynamic traffic. Real-world scenarios provide clear examples of how to apply each pattern, alongside the potential trade-offs of each approach. Once you’ve mastered these cutting-edge techniques, you’ll put them all into practice and finish up by building a comprehensive distributed machine learning system.
About the reader
For data analysts, data scientists, and software engineers who know the basics of machine learning algorithms and have experience running machine learning in production. Readers should be familiar with the basics of Bash, Python, and Docker.
About the author
Yuan Tang is currently a founding engineer at Akuity. Previously, he was a senior software engineer at Alibaba Group, building AI infrastructure and AutoML platforms on Kubernetes. Yuan is a co-chair of Kubeflow and a maintainer of Argo, TensorFlow, XGBoost, and Apache MXNet. He is the co-author of TensorFlow in Practice and the author of the TensorFlow implementation of Dive into Deep Learning.
Author(s): Yuan Tang
Edition: 1
Publisher: Manning Publications
Year: 2024
Language: English
Commentary: Publisher's PDF | Published: November 2023
Pages: 248
City: Shelter Island, NY
Tags: Machine Learning; TensorFlow; Distributed Systems; Sharding; Kubernetes; Batch Learning; Workflows; Data Ingestion; Model Training; Kubeflow; Distributed Training; Model Serving; Argo Workflows; Python
Distributed Machine Learning Patterns
brief contents
contents
preface
acknowledgments
about this book
Who should read this book?
How this book is organized: A roadmap
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1—Basic concepts and background
1 Introduction to distributed machine learning systems
1.1 Large-scale machine learning
1.1.1 The growing scale
1.1.2 What can we do?
1.2 Distributed systems
1.2.1 What is a distributed system?
1.2.2 The complexity and patterns
1.3 Distributed machine learning systems
1.3.1 What is a distributed machine learning system?
1.3.2 Are there similar patterns?
1.3.3 When should we use a distributed machine learning system?
1.3.4 When should we not use a distributed machine learning system?
1.4 What we will learn in this book
Summary
Part 2—Patterns of distributed machine learning systems
2 Data ingestion patterns
2.1 What is data ingestion?
2.2 The Fashion-MNIST dataset
2.3 Batching pattern
2.3.1 The problem: Performing expensive operations for the Fashion-MNIST dataset with limited memory
2.3.2 The solution
2.3.3 Discussion
2.3.4 Exercises
2.4 Sharding pattern: Splitting extremely large datasets among multiple machines
2.4.1 The problem
2.4.2 The solution
2.4.3 Discussion
2.4.4 Exercises
2.5 Caching pattern
2.5.1 The problem: Re-accessing previously used data for efficient multi-epoch model training
2.5.2 The solution
2.5.3 Discussion
2.5.4 Exercises
2.6 Answers to exercises
Section 2.3.4
Section 2.4.4
Section 2.5.4
Summary
3 Distributed training patterns
3.1 What is distributed training?
3.2 Parameter server pattern: Tagging entities in 8 million YouTube videos
3.2.1 The problem
3.2.2 The solution
3.2.3 Discussion
3.2.4 Exercises
3.3 Collective communication pattern
3.3.1 The problem: Improving performance when parameter servers become a bottleneck
3.3.2 The solution
3.3.3 Discussion
3.3.4 Exercises
3.4 Elasticity and fault-tolerance pattern
3.4.1 The problem: Handling unexpected failures when training with limited computational resources
3.4.2 The solution
3.4.3 Discussion
3.4.4 Exercises
3.5 Answers to exercises
Section 3.2.4
Section 3.3.4
Section 3.4.4
Summary
4 Model serving patterns
4.1 What is model serving?
4.2 Replicated services pattern: Handling the growing number of serving requests
4.2.1 The problem
4.2.2 The solution
4.2.3 Discussion
4.2.4 Exercises
4.3 Sharded services pattern
4.3.1 The problem: Processing large model serving requests with high-resolution videos
4.3.2 The solution
4.3.3 Discussion
4.3.4 Exercises
4.4 Event-driven processing pattern
4.4.1 The problem: Responding to model serving requests based on events
4.4.2 The solution
4.4.3 Discussion
4.4.4 Exercises
4.5 Answers to exercises
Section 4.2
Section 4.3
Section 4.4
Summary
5 Workflow patterns
5.1 What is workflow?
5.2 Fan-in and fan-out patterns: Composing complex machine learning workflows
5.2.1 The problem
5.2.2 The solution
5.2.3 Discussion
5.2.4 Exercises
5.3 Synchronous and asynchronous patterns: Accelerating workflows with concurrency
5.3.1 The problem
5.3.2 The solution
5.3.3 Discussion
5.3.4 Exercises
5.4 Step memoization pattern: Skipping redundant workloads via memoized steps
5.4.1 The problem
5.4.2 The solution
5.4.3 Discussion
5.4.4 Exercises
5.5 Answers to exercises
Section 5.2
Section 5.3
Section 5.4
Summary
6 Operation patterns
6.1 What are operations in machine learning systems?
6.2 Scheduling patterns: Assigning resources effectively in a shared cluster
6.2.1 The problem
6.2.2 The solution
6.2.3 Discussion
6.2.4 Exercises
6.3 Metadata pattern: Handling failures appropriately to minimize their negative effect on users
6.3.1 The problem
6.3.2 The solution
6.3.3 Discussion
6.3.4 Exercises
6.4 Answers to exercises
Section 6.2
Section 6.3
Summary
Part 3—Building a distributed machine learning workflow
7 Project overview and system architecture
7.1 Project overview
7.1.1 Project background
7.1.2 System components
7.2 Data ingestion
7.2.1 The problem
7.2.2 The solution
7.2.3 Exercises
7.3 Model training
7.3.1 The problem
7.3.2 The solution
7.3.3 Exercises
7.4 Model serving
7.4.1 The problem
7.4.2 The solution
7.4.3 Exercises
7.5 End-to-end workflow
7.5.1 The problems
7.5.2 The solutions
7.5.3 Exercises
7.6 Answers to exercises
Section 7.2
Section 7.3
Section 7.4
Section 7.5
Summary
8 Overview of relevant technologies
8.1 TensorFlow: The machine learning framework
8.1.1 The basics
8.1.2 Exercises
8.2 Kubernetes: The distributed container orchestration system
8.2.1 The basics
8.2.2 Exercises
8.3 Kubeflow: Machine learning workloads on Kubernetes
8.3.1 The basics
8.3.2 Exercises
8.4 Argo Workflows: Container-native workflow engine
8.4.1 The basics
8.4.2 Exercises
8.5 Answers to exercises
Section 8.1
Section 8.2
Section 8.3
Section 8.4
Summary
9 A complete implementation
9.1 Data ingestion
9.1.1 Single-node data pipeline
9.1.2 Distributed data pipeline
9.2 Model training
9.2.1 Model definition and single-node training
9.2.2 Distributed model training
9.2.3 Model selection
9.3 Model serving
9.3.1 Single-server model inference
9.3.2 Replicated model servers
9.4 The end-to-end workflow
9.4.1 Sequential steps
9.4.2 Step memoization
Summary
index