
Introduction to Parallel Computing

Grama, Ananth; Gupta, Anshul; Karypis, George; Kumar, Vipin, 2003

Introduction to Parallel Computing is a complete end-to-end source of information on almost all aspects of parallel computing, from introductory concepts and architectures to programming paradigms, algorithms, and programming standards. It is the only book to offer complete coverage of traditional computer-science algorithms (sorting, graph and matrix algorithms), scientific computing algorithms (FFT, sparse matrix computations, N-body methods), and data-intensive algorithms (search, dynamic programming, data mining).


Why Read This Book

You should read this book to learn principled techniques for designing, analyzing and reasoning about parallel algorithms and their performance across shared- and distributed-memory machines. It teaches the algorithmic foundations and performance models you'll need to map compute- and data-intensive kernels (FFT, matrix ops, graph algorithms, N-body) efficiently onto parallel hardware, including when you later target FPGAs or other accelerators.

Who Will Benefit

Advanced students, researchers, and engineers who implement high-performance, parallel numerical and data-processing kernels and need rigorous methods to analyze scalability and communication costs.

Level: Advanced — Prerequisites: Good familiarity with sequential algorithms and data structures, discrete math and complexity analysis, linear algebra (for numerical sections), and comfort with basic programming (C or similar).


Key Takeaways

  • Analyze the computational and communication complexity of parallel algorithms using formal performance models
  • Design scalable parallel algorithms for sorting, graph processing, dense and sparse linear algebra, FFT and N-body simulations
  • Apply common parallel programming paradigms and abstractions (message-passing, shared-memory) to real problems (see the sketch after this list)
  • Evaluate and model load balance, synchronization, and communication trade-offs to guide mapping to target hardware
  • Use algorithmic techniques (partitioning, decomposition, recursion, reductions) to reduce communication and improve locality
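
To make the message-passing paradigm concrete, here is a minimal sketch in C with MPI; it is not taken from the book, and the slice size and values are invented for illustration. Each process sums its own slice of the data, then MPI_Reduce combines the partial sums on rank 0.

/* Hedged sketch (not from the book): a message-passing reduction in C + MPI.
 * Each rank computes a partial sum over its own slice, then MPI_Reduce
 * combines the partial sums on rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Illustrative local work: each rank owns local_n elements (values
     * synthesized here) and computes a partial sum independently. */
    const int local_n = 1000;              /* assumed per-rank slice size */
    double local_sum = 0.0;
    for (int i = 0; i < local_n; i++)
        local_sum += (double)(rank * local_n + i);

    /* Message-passing reduction: combine partial sums on rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}

The pattern of local work followed by a combining step is the reduction idiom the last takeaway refers to; MPI_Reduce is typically implemented as a tree-structured combine, so its communication cost grows with log p rather than p.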

Topics Covered

  1. Introduction and motivations for parallel computing
  2. Models of parallel computation and performance measures (a worked example follows this list)
  3. Parallel programming paradigms: shared-memory and message-passing
  4. Design techniques for parallel algorithms (partitioning, decomposition, pipelining)
  5. Sorting and selection in parallel
  6. Graph algorithms and parallel graph traversals
  7. Dense matrix algorithms and parallel linear algebra
  8. Sparse matrix computations and storage schemes
  9. Fast Fourier Transform and related spectral algorithms
  10. N-body methods and particle simulations
  11. Dynamic programming and data-intensive algorithms
  12. Load balancing, scheduling and scalability analysis
  13. Case studies, implementation issues and empirical performance
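
As a hedged illustration of the performance measures under topic 2 (speedup, efficiency, total overhead), the short C snippet below computes them from hypothetical timings; the numbers are made up for the example and do not come from the book.

/* Hedged sketch: basic parallel performance measures computed from
 * hypothetical timings (the values below are illustrative only). */
#include <stdio.h>

int main(void)
{
    double t_serial   = 64.0;   /* assumed best sequential time, seconds */
    double t_parallel = 4.5;    /* assumed parallel time on p processes  */
    int    p          = 16;

    double speedup    = t_serial / t_parallel;        /* S = T_s / T_p      */
    double efficiency = speedup / p;                  /* E = S / p          */
    double overhead   = p * t_parallel - t_serial;    /* T_o = p*T_p - T_s  */

    printf("speedup=%.2f efficiency=%.2f overhead=%.2fs\n",
           speedup, efficiency, overhead);
    return 0;
}

These measures are the building blocks for the scalability and isoefficiency analysis the book uses to study how problem size must grow with the number of processes to hold efficiency constant.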

Languages, Platforms & Tools

  • C
  • MPI
  • OpenMP
  • Shared-memory multiprocessors
  • Distributed-memory clusters
  • PRAM and abstract parallel models
  • No vendor-specific tools (the book discusses general message-passing and shared-memory programming paradigms rather than vendor tooling)

How It Compares

It offers algorithmic breadth and theoretical analysis comparable to Quinn's 'Parallel Programming' textbooks, but is more algorithm- and analysis-focused than architecture- or tool-focused; for accelerator/GPU-centric programming, Kirk & Hwu's 'Programming Massively Parallel Processors' is a more practical alternative.
