Modern systems contain multi-core CPUs and GPUs with enormous potential for parallel computing, but many scientific Python tools were not designed to leverage this parallelism. With this short but thorough resource, data scientists and Python programmers will learn how Dask, an open source library for parallel computing, provides APIs that make it easy to parallelize PyData libraries including NumPy, pandas, and scikit-learn.
We wrote this book for data scientists and data engineers familiar with Python and pandas who are looking to handle larger-scale problems than their current tooling allows. Current PySpark users will find that some of this material overlaps with their existing knowledge of PySpark, but we hope they still find it helpful, and not just for getting away from the Java Virtual Machine (JVM).
Authors Holden Karau and Mika Kimmins show you how to run Dask computations on local systems and then scale to the cloud for heavier workloads. This practical book explains why Dask is popular among industry experts and academics and is used by organizations including Walmart, Capital One, Harvard Medical School, and NASA.
This book is primarily focused on data science and related tasks because, in our opinion, that is where Dask excels most. If you have a more general problem for which Dask does not seem to be quite the right fit, we would (with a bit of bias) encourage you to check out Scaling Python with Ray (O'Reilly), which has less of a data science focus.
Dask is a framework for parallelized computing with Python that scales from multiple cores on one machine to data centers with thousands of machines. It has both low-level task APIs and higher-level data-focused APIs. The low-level task APIs power Dask's integration with a wide variety of Python libraries, and having public APIs has allowed an ecosystem of tools to grow around Dask for various use cases.

Continuum Analytics, now known as Anaconda Inc., started the open source, DARPA-funded Blaze project, which has evolved into Dask. Continuum has participated in developing many essential libraries and even conferences in the Python data analytics space. Dask remains an open source project, with much of its development now supported by Coiled. Dask is unique in the distributed computing ecosystem because it integrates popular data science, parallel, and scientific computing libraries. This integration allows developers to reuse much of their existing knowledge at scale, and frequently to reuse some of their code with minimal changes.
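To make the low-level task API concrete, here is a minimal sketch using `dask.delayed`, which wraps ordinary Python functions so that calling them builds a task graph instead of executing immediately; the graph only runs when you call `.compute()`. The functions `inc` and `add` are illustrative examples, not part of Dask itself.

```python
import dask

# dask.delayed defers execution: calling these functions records a task
# in a graph rather than running the body right away.
@dask.delayed
def inc(x):
    return x + 1

@dask.delayed
def add(a, b):
    return a + b

# Builds a three-node task graph; nothing has executed yet.
total = add(inc(1), inc(2))

# Triggers execution of the graph, potentially in parallel: (1+1) + (2+1) = 5
print(total.compute())
```

Because the two `inc` calls have no dependency on each other, Dask's scheduler is free to run them in parallel before combining their results in `add`.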
Dask simplifies scaling analytics, ML, and other code written in Python, allowing you to handle larger and more complex data and problems. Dask aims to fill the space where your existing tools, like pandas DataFrames or scikit-learn machine learning pipelines, start to become too slow (or fail outright).
With this book, you'll learn:
What Dask is, where you can use it, and how it compares with other tools
How to use Dask for batch data parallel processing
Key distributed system concepts for working with Dask
Methods for using Dask with higher-level APIs and building blocks
How to work with integrated libraries such as scikit-learn, pandas, and PyTorch
How to use Dask with GPUs
Authors: Holden Karau, Mika Kimmins
Publisher: O'Reilly Media, Inc.
Year: 2023
Language: English
Pages: 223
Preface
A Note on Responsibility
Conventions Used in This Book
Online Figures
License
Using Code Examples
O’Reilly Online Learning
How to Contact Us
Acknowledgments
1. What Is Dask?
Why Do You Need Dask?
Where Does Dask Fit in the Ecosystem?
Big Data
Data Science
Parallel to Distributed Python
Dask Community Libraries
What Dask Is Not
Conclusion
2. Getting Started with Dask
Installing Dask Locally
Hello Worlds
Task Hello World
Distributed Collections
Dask DataFrame (Pandas/What People Wish Big Data Was)
Conclusion
3. How Dask Works: The Basics
Execution Backends
Local Backends
Distributed (Dask Client and Scheduler)
Dask’s Diagnostics User Interface
Serialization and Pickling
Partitioning/Chunking Collections
Dask Arrays
Dask Bags
Dask DataFrames
Shuffles
Partitions During Load
Tasks, Graphs, and Lazy Evaluation
Lazy Evaluation
Task Dependencies
visualize
Intermediate Task Results
Task Sizing
When Task Graphs Get Too Large
Combining Computation
Persist, Caching, and Memoization
Fault Tolerance
Conclusion
4. Dask DataFrame
How Dask DataFrames Are Built
Loading and Writing
Formats
Filesystems
Indexing
Shuffles
Rolling Windows and map_overlap
Aggregations
Full Shuffles and Partitioning
Embarrassingly Parallel Operations
Working with Multiple DataFrames
Multi-DataFrame Internals
Missing Functionality
What Does Not Work
What’s Slower
Handling Recursive Algorithms
Re-computed Data
How Other Functions Are Different
Data Science with Dask DataFrame: Putting It Together
Deciding to Use Dask
Exploratory Data Analysis with Dask
Loading Data
Plotting Data
Inspecting Data
Conclusion
5. Dask’s Collections
Dask Arrays
Common Use Cases
When Not to Use Dask Arrays
Loading/Saving
What’s Missing
Special Dask Functions
Dask Bags
Common Use Cases
Loading and Saving Dask Bags
Loading Messy Data with a Dask Bag
Limitations
Conclusion
6. Advanced Task Scheduling: Futures and Friends
Lazy and Eager Evaluation Revisited
Use Cases for Futures
Launching Futures
Future Life Cycle
Fire-and-Forget
Retrieving Results
Nested Futures
Conclusion
7. Adding Changeable/Mutable State with Dask Actors
What Is the Actor Model?
Dask Actors
Your First Actor (It’s a Bank Account)
Scaling Dask Actors
Limitations
When to Use Dask Actors
Conclusion
8. How to Evaluate Dask’s Components and Libraries
Qualitative Considerations for Project Evaluation
Project Priorities
Community
Dask-Specific Best Practices
Up-to-Date Dependencies
Documentation
Openness to Contributions
Extensibility
Quantitative Metrics for Open Source Project Evaluation
Release History
Commit Frequency (and Volume)
Library Usage
Code and Best Practices
Conclusion
9. Migrating Existing Analytic Engineering
Why Dask?
Limitations of Dask
Migration Road Map
Types of Clusters
Development: Considerations
Deployment Monitoring
Conclusion
10. Dask with GPUs and Other Special Resources
Transparent Versus Non-transparent Accelerators
Understanding Whether GPUs or TPUs Can Help
Making Dask Resource-Aware
Installing the Libraries
Using Custom Resources Inside Your Dask Tasks
Decorators (Including Numba)
GPUs
GPU Acceleration Built on Top of Dask
cuDF
BlazingSQL
cuStreamz
Freeing Accelerator Resources
Design Patterns: CPU Fallback
Conclusion
11. Machine Learning with Dask
Parallelizing ML
When to Use Dask-ML
Getting Started with Dask-ML and XGBoost
Feature Engineering
Model Selection and Training
When There Is No Dask-ML Equivalent
Use with Dask’s joblib
XGBoost with Dask
ML Models with Dask-SQL
Inference and Deployment
Distributing Data and Models Manually
Large-Scale Inferences with Dask
Conclusion
12. Productionizing Dask: Notebooks, Deployment, Tuning, and Monitoring
Factors to Consider in a Deployment Option
Building Dask on a Kubernetes Deployment
Dask on Ray
Dask on YARN
Dask on High-Performance Computing
Setting Up Dask in a Remote Cluster
Connecting a Local Machine to an HPC Cluster
Dask JupyterLab Extension and Magics
Installing JupyterLab Extensions
Launching Clusters
UI
Watching Progress
Understanding Dask Performance
Metrics in Distributed Computing
The Dask Dashboard
Saving and Sharing Dask Metrics/Performance Logs
Advanced Diagnostics
Scaling and Debugging Best Practices
Manual Scaling
Adaptive/Auto-scaling
Persist and Delete Costly Data
Dask Nanny
Worker Memory Management
Cluster Sizing
Chunking, Revisited
Avoid Rechunking
Scheduled Jobs
Deployment Monitoring
Conclusion
A. Key System Concepts for Dask Users
Testing
Manual Testing
Unit Testing
Integration Testing
Test-Driven Development
Property Testing
Working with Notebooks
Out-of-Notebook Testing
In-Notebook Testing: In-Line Assertions
Data and Output Validation
Peer-to-Peer Versus Centralized Distributed
Methods of Parallelism
Task Parallelism
Data Parallelism
Load Balancing
Network Fault Tolerance and CAP Theorem
Recursion (Tail and Otherwise)
Versioning and Branching: Code and Data
Isolation and Noisy Neighbors
Machine Fault Tolerance
Scalability (Up and Down)
Cache, Memory, Disk, and Networking: How the Performance Changes
Hashing
Data Locality
Exactly Once Versus At Least Once
Conclusion
B. Scalable DataFrames: A Comparison and Some History
Tools
One Machine Only
Distributed
Conclusion
C. Debugging Dask
Using Debuggers
General Debugging Tips with Dask
Native Errors
Some Notes on Official Advice for Handling Bad Records
Dask Diagnostics
Conclusion
D. Streaming with Streamz and Dask
Getting Started with Streamz on Dask
Streaming Data Sources and Sinks
Word Count
GPU Pipelines on Dask Streaming
Limitations, Challenges, and Workarounds
Conclusion
Index
About the Authors