Book description
Introduction to Parallel Computing is a complete end-to-end
source of information on almost all aspects of parallel computing,
from introduction to architectures to programming paradigms to
algorithms to programming standards. It is the only book to offer
complete coverage of traditional computer science algorithms
(sorting, graph and matrix algorithms), scientific computing
algorithms (FFT, sparse matrix computations, N-body
methods), and data-intensive algorithms (search, dynamic
programming, data mining).
Table of contents
- Copyright
- Pearson Education
- Preface
- Acknowledgments
- 1. Introduction to Parallel Computing
- 2. Parallel Programming Platforms
- 2.1. Implicit Parallelism: Trends in Microprocessor Architectures*
- 2.2. Limitations of Memory System Performance*
- 2.3. Dichotomy of Parallel Computing Platforms
- 2.4. Physical Organization of Parallel Platforms
- 2.5. Communication Costs in Parallel Machines
- 2.6. Routing Mechanisms for Interconnection Networks
- 2.7. Impact of Process-Processor Mapping and Mapping Techniques
- 2.8. Bibliographic Remarks
- Problems
- 3. Principles of Parallel Algorithm Design
- 3.1. Preliminaries
- 3.2. Decomposition Techniques
- 3.3. Characteristics of Tasks and Interactions
- 3.4. Mapping Techniques for Load Balancing
- 3.5. Methods for Containing Interaction Overheads
- 3.6. Parallel Algorithm Models
- 3.7. Bibliographic Remarks
- Problems
- 4. Basic Communication Operations
- 4.1. One-to-All Broadcast and All-to-One Reduction
- 4.2. All-to-All Broadcast and Reduction
- 4.3. All-Reduce and Prefix-Sum Operations
- 4.4. Scatter and Gather
- 4.5. All-to-All Personalized Communication
- 4.6. Circular Shift
- 4.7. Improving the Speed of Some Communication Operations
- 4.8. Summary
- 4.9. Bibliographic Remarks
- Problems
- 5. Analytical Modeling of Parallel Programs
- 5.1. Sources of Overhead in Parallel Programs
- 5.2. Performance Metrics for Parallel Systems
- 5.3. The Effect of Granularity on Performance
- 5.4. Scalability of Parallel Systems
- 5.5. Minimum Execution Time and Minimum Cost-Optimal Execution Time
- 5.6. Asymptotic Analysis of Parallel Programs
- 5.7. Other Scalability Metrics
- 5.8. Bibliographic Remarks
- Problems
- 6. Programming Using the Message-Passing Paradigm
- 6.1. Principles of Message-Passing Programming
- 6.2. The Building Blocks: Send and Receive Operations
- 6.3. MPI: the Message Passing Interface
- 6.4. Topologies and Embedding
- 6.5. Overlapping Communication with Computation
- 6.6. Collective Communication and Computation Operations
- 6.7. Groups and Communicators
- 6.8. Bibliographic Remarks
- Problems
- 7. Programming Shared Address Space Platforms
- 7.1. Thread Basics
- 7.2. Why Threads?
- 7.3. The POSIX Thread API
- 7.4. Thread Basics: Creation and Termination
- 7.5. Synchronization Primitives in Pthreads
- 7.6. Controlling Thread and Synchronization Attributes
- 7.7. Thread Cancellation
- 7.8. Composite Synchronization Constructs
- 7.9. Tips for Designing Asynchronous Programs
- 7.10. OpenMP: a Standard for Directive Based Parallel Programming
- 7.11. Bibliographic Remarks
- Problems
- 8. Dense Matrix Algorithms
- 8.1. Matrix-Vector Multiplication
- 8.2. Matrix-Matrix Multiplication
- 8.3. Solving a System of Linear Equations
- 8.4. Bibliographic Remarks
- Problems
- 9. Sorting
- 10. Graph Algorithms
- 11. Search Algorithms for Discrete Optimization Problems
- 11.1. Definitions and Examples
- 11.2. Sequential Search Algorithms
- 11.3. Search Overhead Factor
- 11.4. Parallel Depth-First Search
- 11.4.1. Important Parameters of Parallel DFS
- 11.4.2. A General Framework for Analysis of Parallel DFS
- 11.4.3. Analysis of Load-Balancing Schemes
- 11.4.4. Termination Detection
- 11.4.5. Experimental Results
- 11.4.6. Parallel Formulations of Depth-First Branch-and-Bound Search
- 11.4.7. Parallel Formulations of IDA*
- 11.5. Parallel Best-First Search
- 11.6. Speedup Anomalies in Parallel Search Algorithms
- 11.7. Bibliographic Remarks
- Problems
- 12. Dynamic Programming
- 13. Fast Fourier Transform
- A. Complexity of Functions and Order Analysis
- Bibliography
Product information
- Title: Introduction to Parallel Computing, Second Edition
- Author(s):
- Release date: January 2003
- Publisher(s): Addison-Wesley Professional
- ISBN: None