Modern systems contain multi-core CPUs and GPUs that have the potential for parallel computing, but many scientific Python tools were not designed to take advantage of this parallelism. With this short but thorough resource, data scientists and Python programmers will learn how Dask, an open source library for parallel computing, provides APIs that make it easy to parallelize PyData libraries including NumPy, pandas, and scikit-learn.
Authors Holden Karau and Mika Kimmins show you how to run Dask computations on a local system and then scale them to the cloud for heavier workloads. This practical book explains why Dask is popular among industry experts and academics alike and is used by organizations including Walmart, Capital One, Harvard Medical School, and NASA.
With this book, you'll learn:
• What Dask is, where you can use it, and how it compares with other tools
• How to use Dask for batch data parallel processing
• Key distributed system concepts for working with Dask
• Methods for using Dask with higher-level APIs and building blocks
• How to work with integrated libraries such as scikit-learn, pandas, and PyTorch
• How to use Dask with GPUs
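To give a flavor of the API style the chapters below cover, here is a minimal, self-contained sketch (an illustration for this listing, not an excerpt from the book; it assumes a local install of dask[dataframe] and pandas, and the inc/add helpers are hypothetical). dask.delayed turns ordinary Python functions into lazy tasks that can run in parallel, and Dask DataFrame mirrors the familiar pandas API over partitioned data:

    import dask
    import dask.dataframe as dd
    import pandas as pd

    # dask.delayed wraps plain Python functions into lazy tasks; nothing
    # executes until .compute() is called, so Dask can schedule the work
    # in parallel across cores (or a cluster).
    @dask.delayed
    def inc(x):  # hypothetical helper, for illustration only
        return x + 1

    @dask.delayed
    def add(x, y):  # hypothetical helper, for illustration only
        return x + y

    total = add(inc(1), inc(2))
    print(total.compute())  # 5; the two inc() calls may run in parallel

    # Dask DataFrame partitions a pandas DataFrame and exposes the same API.
    pdf = pd.DataFrame({"key": ["a", "b", "a", "b"], "value": [1, 2, 3, 4]})
    ddf = dd.from_pandas(pdf, npartitions=2)
    print(ddf.groupby("key").value.sum().compute())

Swapping pandas calls for their dask.dataframe equivalents and finishing with .compute() is the core pattern the book's DataFrame chapters develop in depth.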
Author(s): Holden Karau, Mika Kimmins
Edition: 1
Publisher: O'Reilly Media
Year: 2023
Language: English
Commentary: Publisher's PDF; 2023-07-19: First Release
Pages: 223
City: Sebastopol, CA
Tags: Machine Learning; Data Science; Python; Big Data; Parallel Programming; Distributed Systems; Partitioning; Batch Processing; Serialization; scikit-learn; NumPy; pandas; GPU Programming; High Performance Computing; Dask; Task Scheduling; Dask-ML
Copyright
Table of Contents
Preface
A Note on Responsibility
Conventions Used in This Book
Online Figures
License
Using Code Examples
O’Reilly Online Learning
How to Contact Us
Acknowledgments
Chapter 1. What Is Dask?
Why Do You Need Dask?
Where Does Dask Fit in the Ecosystem?
Big Data
Data Science
Parallel to Distributed Python
Dask Community Libraries
What Dask Is Not
Conclusion
Chapter 2. Getting Started with Dask
Installing Dask Locally
Hello Worlds
Task Hello World
Distributed Collections
Dask DataFrame (Pandas/What People Wish Big Data Was)
Conclusion
Chapter 3. How Dask Works: The Basics
Execution Backends
Local Backends
Distributed (Dask Client and Scheduler)
Dask’s Diagnostics User Interface
Serialization and Pickling
Partitioning/Chunking Collections
Dask Arrays
Dask Bags
Dask DataFrames
Shuffles
Partitions During Load
Tasks, Graphs, and Lazy Evaluation
Lazy Evaluation
Task Dependencies
visualize
Intermediate Task Results
Task Sizing
When Task Graphs Get Too Large
Combining Computation
Persist, Caching, and Memoization
Fault Tolerance
Conclusion
Chapter 4. Dask DataFrame
How Dask DataFrames Are Built
Loading and Writing
Formats
Filesystems
Indexing
Shuffles
Rolling Windows and map_overlap
Aggregations
Full Shuffles and Partitioning
Embarrassingly Parallel Operations
Working with Multiple DataFrames
Multi-DataFrame Internals
Missing Functionality
What Does Not Work
What’s Slower
Handling Recursive Algorithms
Re-computed Data
How Other Functions Are Different
Data Science with Dask DataFrame: Putting It Together
Deciding to Use Dask
Exploratory Data Analysis with Dask
Loading Data
Plotting Data
Inspecting Data
Conclusion
Chapter 5. Dask’s Collections
Dask Arrays
Common Use Cases
When Not to Use Dask Arrays
Loading/Saving
What’s Missing
Special Dask Functions
Dask Bags
Common Use Cases
Loading and Saving Dask Bags
Loading Messy Data with a Dask Bag
Limitations
Conclusion
Chapter 6. Advanced Task Scheduling: Futures and Friends
Lazy and Eager Evaluation Revisited
Use Cases for Futures
Launching Futures
Future Life Cycle
Fire-and-Forget
Retrieving Results
Nested Futures
Conclusion
Chapter 7. Adding Changeable/Mutable State with Dask Actors
What Is the Actor Model?
Dask Actors
Your First Actor (It’s a Bank Account)
Scaling Dask Actors
Limitations
When to Use Dask Actors
Conclusion
Chapter 8. How to Evaluate Dask’s Components and Libraries
Qualitative Considerations for Project Evaluation
Project Priorities
Community
Dask-Specific Best Practices
Up-to-Date Dependencies
Documentation
Openness to Contributions
Extensibility
Quantitative Metrics for Open Source Project Evaluation
Release History
Commit Frequency (and Volume)
Library Usage
Code and Best Practices
Conclusion
Chapter 9. Migrating Existing Analytic Engineering
Why Dask?
Limitations of Dask
Migration Road Map
Types of Clusters
Development: Considerations
Deployment Monitoring
Conclusion
Chapter 10. Dask with GPUs and Other Special Resources
Transparent Versus Non-transparent Accelerators
Understanding Whether GPUs or TPUs Can Help
Making Dask Resource-Aware
Installing the Libraries
Using Custom Resources Inside Your Dask Tasks
Decorators (Including Numba)
GPUs
GPU Acceleration Built on Top of Dask
cuDF
BlazingSQL
cuStreamz
Freeing Accelerator Resources
Design Patterns: CPU Fallback
Conclusion
Chapter 11. Machine Learning with Dask
Parallelizing ML
When to Use Dask-ML
Getting Started with Dask-ML and XGBoost
Feature Engineering
Model Selection and Training
When There Is No Dask-ML Equivalent
Use with Dask’s joblib
XGBoost with Dask
ML Models with Dask-SQL
Inference and Deployment
Distributing Data and Models Manually
Large-Scale Inferences with Dask
Conclusion
Chapter 12. Productionizing Dask: Notebooks, Deployment, Tuning, and Monitoring
Factors to Consider in a Deployment Option
Building Dask on a Kubernetes Deployment
Dask on Ray
Dask on YARN
Dask on High-Performance Computing
Setting Up Dask in a Remote Cluster
Connecting a Local Machine to an HPC Cluster
Dask JupyterLab Extension and Magics
Installing JupyterLab Extensions
Launching Clusters
UI
Watching Progress
Understanding Dask Performance
Metrics in Distributed Computing
The Dask Dashboard
Saving and Sharing Dask Metrics/Performance Logs
Advanced Diagnostics
Scaling and Debugging Best Practices
Manual Scaling
Adaptive/Auto-scaling
Persist and Delete Costly Data
Dask Nanny
Worker Memory Management
Cluster Sizing
Chunking, Revisited
Avoid Rechunking
Scheduled Jobs
Deployment Monitoring
Conclusion
Appendix A. Key System Concepts for Dask Users
Testing
Manual Testing
Unit Testing
Integration Testing
Test-Driven Development
Property Testing
Working with Notebooks
Out-of-Notebook Testing
In-Notebook Testing: In-Line Assertions
Data and Output Validation
Peer-to-Peer Versus Centralized Distributed
Methods of Parallelism
Task Parallelism
Data Parallelism
Load Balancing
Network Fault Tolerance and CAP Theorem
Recursion (Tail and Otherwise)
Versioning and Branching: Code and Data
Isolation and Noisy Neighbors
Machine Fault Tolerance
Scalability (Up and Down)
Cache, Memory, Disk, and Networking: How the Performance Changes
Hashing
Data Locality
Exactly Once Versus At Least Once
Conclusion
Appendix B. Scalable DataFrames: A Comparison and Some History
Tools
One Machine Only
Distributed
Conclusion
Appendix C. Debugging Dask
Using Debuggers
General Debugging Tips with Dask
Native Errors
Some Notes on Official Advice for Handling Bad Records
Dask Diagnostics
Conclusion
Appendix D. Streaming with Streamz and Dask
Getting Started with Streamz on Dask
Streaming Data Sources and Sinks
Word Count
GPU Pipelines on Dask Streaming
Limitations, Challenges, and Workarounds
Conclusion
Index
About the Authors
Colophon