If you have a working knowledge of Haskell, this hands-on book shows you how to use the language’s many APIs and frameworks for writing both parallel and concurrent programs. You’ll learn how parallelism exploits multicore processors to speed up computation-heavy programs, and how concurrency lets you write programs that use threads to handle multiple simultaneous interactions.
Author Simon Marlow walks you through the process with lots of code examples that you can run, experiment with, and extend. Divided into two parts, Parallel Haskell and Concurrent Haskell, the book also includes exercises to help you become familiar with the concepts presented:
Express parallelism in Haskell with the Eval monad and Evaluation Strategies
Parallelize ordinary Haskell code with the Par monad
Build parallel array-based computations, using the Repa library
Use the Accelerate library to run computations directly on the GPU
Work with basic interfaces for writing concurrent code
Build trees of threads for larger and more complex programs
Learn how to build high-speed concurrent network servers
Write distributed programs that run on multiple machines in a network
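To give a flavor of the first item, the Eval monad with rpar and rseq (covered in Chapter 2), here is a minimal sketch using Control.Parallel.Strategies from the parallel package; the helper name parPair is illustrative, not from the book:

```haskell
import Control.Parallel.Strategies (rpar, rseq, runEval)

-- Spark (f x) for evaluation in parallel while evaluating (f y)
-- on the current thread, then wait for both results (to WHNF).
parPair :: (a -> b) -> a -> a -> (b, b)
parPair f x y = runEval $ do
  a <- rpar (f x)   -- "my argument could be evaluated in parallel"
  b <- rseq (f y)   -- "evaluate my argument now, and wait"
  _ <- rseq a       -- also wait for the sparked computation
  return (a, b)
```

Compile with `-threaded` and run with `+RTS -N` to use multiple cores; Chapter 2 develops exactly this pattern in the Sudoku solver example.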
Author: Simon Marlow
Publisher: O'Reilly Media
Year: 2013
Language: English
Pages: 322
Table of Contents
Preface
Audience
How to Read This Book
Conventions Used in This Book
Using Sample Code
Safari® Books Online
How to Contact Us
Acknowledgments
Chapter 1. Introduction
Terminology: Parallelism and Concurrency
Tools and Resources
Sample Code
Part I. Parallel Haskell
Chapter 2. Basic Parallelism: The Eval Monad
Lazy Evaluation and Weak Head Normal Form
The Eval Monad, rpar, and rseq
Example: Parallelizing a Sudoku Solver
Deepseq
Chapter 3. Evaluation Strategies
Parameterized Strategies
A Strategy for Evaluating a List in Parallel
Example: The K-Means Problem
Parallelizing K-Means
Performance and Analysis
Visualizing Spark Activity
Granularity
GC’d Sparks and Speculative Parallelism
Parallelizing Lazy Streams with parBuffer
Chunking Strategies
The Identity Property
Chapter 4. Dataflow Parallelism: The Par Monad
Example: Shortest Paths in a Graph
Pipeline Parallelism
Rate-Limiting the Producer
Limitations of Pipeline Parallelism
Example: A Conference Timetable
Adding Parallelism
Example: A Parallel Type Inferencer
Using Different Schedulers
The Par Monad Compared to Strategies
Chapter 5. Data Parallel Programming with Repa
Arrays, Shapes, and Indices
Operations on Arrays
Example: Computing Shortest Paths
Parallelizing the Program
Folding and Shape-Polymorphism
Example: Image Rotation
Summary
Chapter 6. GPU Programming with Accelerate
Overview
Arrays and Indices
Running a Simple Accelerate Computation
Scalar Arrays
Indexing Arrays
Creating Arrays Inside Acc
Zipping Two Arrays
Constants
Example: Shortest Paths
Running on the GPU
Debugging the CUDA Backend
Example: A Mandelbrot Set Generator
Part II. Concurrent Haskell
Chapter 7. Basic Concurrency: Threads and MVars
A Simple Example: Reminders
Communication: MVars
MVar as a Simple Channel: A Logging Service
MVar as a Container for Shared State
MVar as a Building Block: Unbounded Channels
Fairness
Chapter 8. Overlapping Input/Output
Exceptions in Haskell
Error Handling with Async
Merging
Chapter 9. Cancellation and Timeouts
Asynchronous Exceptions
Masking Asynchronous Exceptions
The bracket Operation
Asynchronous Exception Safety for Channels
Timeouts
Catching Asynchronous Exceptions
mask and forkIO
Asynchronous Exceptions: Discussion
Chapter 10. Software Transactional Memory
Running Example: Managing Windows
Blocking
Blocking Until Something Changes
Merging with STM
Async Revisited
Implementing Channels with STM
More Operations Are Possible
Composition of Blocking Operations
Asynchronous Exception Safety
An Alternative Channel Implementation
Bounded Channels
What Can We Not Do with STM?
Performance
Summary
Chapter 11. Higher-Level Concurrency Abstractions
Avoiding Thread Leakage
Symmetric Concurrency Combinators
Timeouts Using race
Adding a Functor Instance
Summary: The Async API
Chapter 12. Concurrent Network Servers
A Trivial Server
Extending the Simple Server with State
Design One: One Giant Lock
Design Two: One Chan Per Server Thread
Design Three: Use a Broadcast Chan
Design Four: Use STM
The Implementation
A Chat Server
Architecture
Client Data
Server Data
The Server
Setting Up a New Client
Running the Client
Recap
Chapter 13. Parallel Programming Using Threads
How to Achieve Parallelism with Concurrency
Example: Searching for Files
Sequential Version
Parallel Version
Performance and Scaling
Limiting the Number of Threads with a Semaphore
The ParIO monad
Chapter 14. Distributed Programming
The distributed-process Family of Packages
Distributed Concurrency or Parallelism?
A First Example: Pings
Processes and the Process Monad
Defining a Message Type
The Ping Server Process
The Master Process
The main Function
Summing Up the Ping Example
Multi-Node Ping
Running with Multiple Nodes on One Machine
Running on Multiple Machines
Typed Channels
Merging Channels
Handling Failure
The Philosophy of Distributed Failure
A Distributed Chat Server
Data Types
Sending Messages
Broadcasting
Distribution
Testing the Server
Failure and Adding/Removing Nodes
Exercise: A Distributed Key-Value Store
Chapter 15. Debugging, Tuning, and Interfacing with Foreign Code
Debugging Concurrent Programs
Inspecting the Status of a Thread
Event Logging and ThreadScope
Detecting Deadlock
Tuning Concurrent (and Parallel) Programs
Thread Creation and MVar Operations
Shared Concurrent Data Structures
RTS Options to Tweak
Concurrency and the Foreign Function Interface
Threads and Foreign Out-Calls
Asynchronous Exceptions and Foreign Calls
Threads and Foreign In-Calls
Index