Introduction to Parallel Computing
Introduction to Parallel Computing is a complete, end-to-end source on almost all aspects of parallel computing, from introductory material through architectures, programming paradigms, algorithms, and programming standards. It is the only book to offer complete coverage of traditional computer-science algorithms (sorting, graph and matrix algorithms), scientific computing algorithms (FFT, sparse matrix computations, N-body methods), and data-intensive algorithms (search, dynamic programming, data mining).
Why Read This Book
You should read this book to learn principled techniques for designing, analyzing and reasoning about parallel algorithms and their performance across shared- and distributed-memory machines. It teaches the algorithmic foundations and performance models you'll need to map compute- and data-intensive kernels (FFT, matrix ops, graph algorithms, N-body) efficiently onto parallel hardware, including when you later target FPGAs or other accelerators.
Who Will Benefit
Advanced students, researchers, and engineers who implement high-performance, parallel numerical and data-processing kernels and need rigorous methods to analyze scalability and communication costs.
Level: Advanced — Prerequisites: Good familiarity with sequential algorithms and data structures, discrete math and complexity analysis, linear algebra (for numerical sections), and comfort with basic programming (C or similar).
Key Takeaways
- Analyze the computational and communication complexity of parallel algorithms using formal performance models
- Design scalable parallel algorithms for sorting, graph processing, dense and sparse linear algebra, FFT and N-body simulations
- Apply common parallel programming paradigms and abstractions (message-passing, shared-memory) to real problems
- Evaluate and model load balance, synchronization, and communication trade-offs to guide mapping to target hardware
- Use algorithmic techniques (partitioning, decomposition, recursion, reductions) to reduce communication and improve locality
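As a taste of the performance-model analysis these takeaways describe, here is a minimal sketch in Python. It models a parallel sum of n numbers on p processors as a local phase of n/p additions followed by a log2(p)-step reduction, with an assumed per-message cost of a startup term t_s plus a per-word term t_w. The specific cost values and function names are illustrative assumptions, not material from the book.

```python
import math

def parallel_sum_time(n, p, t_s=1.0, t_w=1.0):
    # Local phase: each processor adds its n/p elements.
    # Reduction phase: log2(p) steps; each step sends one word
    # (startup cost t_s plus per-word cost t_w) and does one addition.
    # Cost parameters t_s and t_w are assumed, illustrative values.
    return n / p + math.log2(p) * (t_s + t_w + 1.0)

def speedup(n, p, **costs):
    # Serial time is n additions; log2(1) == 0 removes the comm term.
    return parallel_sum_time(n, 1, **costs) / parallel_sum_time(n, p, **costs)

def efficiency(n, p, **costs):
    # Efficiency = speedup per processor.
    return speedup(n, p, **costs) / p
```

Sweeping p in this model shows efficiency falling as the logarithmic reduction term comes to dominate the shrinking local work, which is the kind of scalability trend the book's formal analysis quantifies.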
Topics Covered
- Introduction and motivations for parallel computing
- Models of parallel computation and performance measures
- Parallel programming paradigms: shared-memory and message-passing
- Design techniques for parallel algorithms (partitioning, decomposition, pipelining)
- Sorting and selection in parallel
- Graph algorithms and parallel graph traversals
- Dense matrix algorithms and parallel linear algebra
- Sparse matrix computations and storage schemes
- Fast Fourier Transform and related spectral algorithms
- N-body methods and particle simulations
- Dynamic programming and data-intensive algorithms
- Load balancing, scheduling and scalability analysis
- Case studies, implementation issues and empirical performance
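As one concrete example of the kind of algorithm the sorting chapter analyzes, here is a small sequential simulation (a sketch, not code from the book) of odd-even transposition sort. Within each of the n phases the compare-exchange pairs are disjoint, so they could execute concurrently on a linear array of processors.

```python
def odd_even_transposition_sort(a):
    """Simulate odd-even transposition sort: n phases; in each phase,
    disjoint neighbor pairs compare-exchange, so every phase could run
    in parallel, one element (or block) per processor."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        # Even phases pair (0,1), (2,3), ...; odd phases pair (1,2), (3,4), ...
        start = phase % 2
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

With one element per processor this gives an O(n)-phase parallel sort on a linear array; the book's analysis chapters work out when such mappings are cost-optimal.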
Languages, Platforms & Tools
Largely language-agnostic: algorithms are presented in pseudocode, and the programming chapters cover the message-passing and shared-memory paradigms through standards such as MPI, POSIX threads, and OpenMP.
How It Compares
In algorithmic breadth and theoretical analysis it is comparable to Quinn's 'Parallel Programming' textbooks, but it is more algorithm- and analysis-focused than architecture- or tool-focused; for accelerator/GPU-centric programming, Kirk & Hwu's 'Programming Massively Parallel Processors' is a more practical alternative.