Parallel Programming with Microsoft Visual C++: Design Patterns for Decomposition and Coordination on Multicore Architectures


This guide shows Visual C++ programmers how to take full advantage of the multicore capabilities of modern PCs using the Microsoft platform.

Author(s): Colin Campbell; Ade Miller
Publisher: Microsoft Press
Year: 2011

Language: English
Pages: 172

Contents
Foreword
Preface
Why This Book Is Pertinent Now
What You Need to Use the Code
How to Use This Book
Introduction
Parallelism with Control Dependencies Only
Parallelism with Control and Data Dependencies
Dynamic Task Parallelism and Pipelines
Supporting Material
What Is Not Covered
Goals
Acknowledgments
Chapter 1. Introduction
The Importance of Potential Parallelism
Decomposition, Coordination, and Scalable Sharing
Understanding Tasks
Coordinating Tasks
Scalable Sharing of Data
Design Approaches
Selecting the Right Pattern
A Word about Terminology
The Limits of Parallelism
A Few Tips
Exercises
For More Information
Chapter 2. Parallel Loops
The Basics
Parallel for Loops
parallel_for_each
What to Expect
An Example
Sequential Credit Review Example
Credit Review Example Using parallel_for_each
Performance Comparison
Variations
Breaking out of Loops Early
Exception Handling
Special Handling of Small Loop Bodies
Controlling the Degree of Parallelism
Anti-Patterns
Hidden Loop Body Dependencies
Small Loop Bodies with Few Iterations
Duplicates in the Input Enumeration
Scheduling Interactions with Cooperative Blocking
Related Patterns
Exercises
Further Reading
Chapter 3. Parallel Tasks
The Basics
An Example
Variations
Coordinating Tasks with Cooperative Blocking
Canceling a Task Group
Handling Exceptions
Speculative Execution
Anti-Patterns
Variables Captured by Closures
Unintended Propagation of Cancellation Requests
The Cost of Synchronization
Design Notes
Task Group Calling Conventions
Tasks and Threads
How Tasks Are Scheduled
Structured Task Groups and Task Handles
Lightweight Tasks
Exercises
Further Reading
Chapter 4. Parallel Aggregation
The Basics
An Example
Variations
Considerations for Small Loop Bodies
Other Uses for Combinable Objects
Design Notes
Related Patterns
Exercises
Further Reading
Chapter 5. Futures
The Basics
Futures
Example: The Adatum Financial Dashboard
The Business Objects
The Analysis Engine
Variations
Canceling Futures
Removing Bottlenecks
Modifying the Graph at Run Time
Design Notes
Decomposition into Futures
Functional Style
Related Patterns
Pipeline Pattern
Master/Worker Pattern
Dynamic Task Parallelism Pattern
Discrete Event Pattern
Exercises
Chapter 6. Dynamic Task Parallelism
An Example
Variations
Parallel While-Not-Empty
Adding Tasks to a Pending Wait Context
Exercises
Further Reading
Chapter 7. Pipelines
Types of Messaging Blocks
The Basics
An Example
Sequential Image Processing
The Image Pipeline
Performance Characteristics
Variations
Asynchronous Pipelines
Canceling a Pipeline
Handling Pipeline Exceptions
Load Balancing Using Multiple Producers
Pipelines and Streams
Anti-Patterns
Copying Large Amounts of Data between Pipeline Stages
Pipeline Stages that Are Too Small
Forgetting to Use Message Passing for Isolation
Infinite Waits
Unbounded Queue Growth
More Information
Design Notes
Related Patterns
Exercises
Further Reading
Appendix A. The Task Scheduler and Resource Manager
Resource Manager
Why It’s Needed
How Resource Management Works
Dynamic Resource Management
Oversubscribing Cores
Querying the Environment
Kinds of Tasks
Lightweight Tasks
Tasks Created Using PPL
Task Schedulers
Managing Task Schedulers
Creating and Attaching a Task Scheduler
Detaching a Task Scheduler
Destroying a Task Scheduler
Scenarios for Using Multiple Task Schedulers
Implementing a Custom Scheduling Component
The Scheduling Algorithm
Schedule Groups
Adding Tasks
Running Tasks
Enhanced Locality Mode
Forward Progress Mode
Task Execution Order
Tasks That Are Run Inline
Using Contexts to Communicate with the Scheduler
Debugging Information
Querying for Cancellation
Interface to Cooperative Blocking
Waiting
The Caching Suballocator
Long-Running I/O Tasks
Setting Scheduler Policy
Anti-Patterns
Multiple Resource Managers
Resource Management Overhead
Unintentional Oversubscription from Inlined Tasks
Deadlock from Thread Starvation
Ignored Process Affinity Mask
References
Appendix B. Debugging and Profiling Parallel Applications
The Parallel Tasks and Parallel Stacks Windows
Breakpoints and Memory Allocation
The Concurrency Visualizer
Scenario Markers
Visual Patterns
Oversubscription
Lock Contention and Serialization
Load Imbalance
Further Reading
Appendix C. Technology Overview
Further Reading
Glossary
Index