Key Features

A five-chapter tutorial introduction to the MPI library. A carefully crafted series of example programs in Chapters 4, 5, 6, 8, and 9 gradually introduces 27 key MPI functions. Collective communication functions are presented before point-to-point message passing, making it easier for inexperienced parallel programmers to write correct parallel code.
A tutorial introduction to OpenMP. A progressively more complicated series of code segments, functions, and programs allows each OpenMP directive or function to be introduced 'just in time' to meet a need.
Do your students struggle with OpenMP?
Introduction to hybrid parallel programming using both MPI and OpenMP. This is often the most effective way to program clusters built from symmetric multiprocessors.
An emphasis on design, analysis, implementation, and benchmarking. Chapter 3 introduces a rigorous parallel algorithm design process, which is used throughout the rest of the book to develop parallel algorithms for a wide variety of applications. The book repeatedly demonstrates how benchmarking a sequential program and carefully analyzing a parallel design can lead to accurate predictions of the performance of a parallel program.
An exceptional chapter on performance analysis. The book takes a single, generic speedup formula and derives from it Amdahl's Law, Gustafson-Barsis's Law, the Karp-Flatt metric, and the isoefficiency metric. Readers will learn the purpose of each formula and how they relate to each other.
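The two best-known of these formulas are easy to illustrate. As a minimal sketch (not taken from the book), the function names and the 10% serial fraction below are illustrative assumptions: Amdahl's Law predicts speedup for a fixed problem size from the serial fraction f, while Gustafson-Barsis's Law predicts scaled speedup when the problem grows with the processor count p.

```python
def amdahl_speedup(f, p):
    """Amdahl's Law: speedup on p processors when a fraction f
    of the work is inherently serial (fixed problem size)."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson_speedup(s, p):
    """Gustafson-Barsis's Law: scaled speedup when s is the serial
    fraction of the time observed on the parallel machine."""
    return p - (p - 1) * s

# With 10% serial code on 8 processors:
print(round(amdahl_speedup(0.1, 8), 2))     # fixed-size speedup: 4.71
print(round(gustafson_speedup(0.1, 8), 2))  # scaled speedup: 7.3
```

The gap between the two predictions (4.71 vs. 7.3 for the same inputs) is exactly the kind of relationship among the formulas the chapter explores.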
Parallel algorithms for a wide variety of applications. The book considers parallel implementations of Floyd's algorithm, matrix-vector multiplication, matrix multiplication, Gaussian elimination, the conjugate gradient method, finite difference methods, sorting, the fast Fourier transform, backtrack search, branch-and-bound, and more.
Thorough treatment of Monte Carlo algorithms. A full chapter on this often-neglected topic introduces problems associated with parallel random number generation and introduces random walks, simulated annealing, the Metropolis algorithm, and much more.
Do you cover Monte Carlo algorithms? If so, are you happy with the coverage in your current text?
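The flavor of the Monte Carlo chapter can be suggested with the classic pi-estimation example. This is a minimal sequential sketch (not from the book, and the function name is illustrative); the parallel versions the book develops must additionally confront the correlated-stream pitfalls of parallel random number generation.

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi by sampling n random points in the unit square
    and counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n

# The estimate converges slowly, at a rate of O(1/sqrt(n)):
print(monte_carlo_pi(100_000))
```

Distributing the n samples across processors is trivially parallel; ensuring each processor draws from an independent random stream is the hard part the chapter addresses.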
A complete set of solutions and lecture slides, password-protected for instructor use only, is available through the book's listing at http://www.mhhe.com/quinn.
What ancillaries are important to you?
Table of Contents
Chapter 1 Motivation and History
Chapter 2 Parallel Architectures
Chapter 3 Parallel Algorithm Design
Chapter 4 Message-Passing Programming
Chapter 5 The Sieve of Eratosthenes
Chapter 6 Floyd's Algorithm
Chapter 7 Performance Analysis
Chapter 8 Matrix-Vector Multiplication
Chapter 9 Document Classification
Chapter 10 Monte Carlo Methods
Chapter 11 Matrix Multiplication
Chapter 12 Solving Linear Systems
Chapter 13 Finite Difference Methods
Chapter 14 Sorting
Chapter 15 The Fast Fourier Transform
Chapter 16 Combinatorial Search
Chapter 17 Shared-memory Programming
Chapter 18 Combining MPI and OpenMP
Appendix A MPI Functions
Appendix B Utility Functions
Appendix C Debugging MPI Programs
Appendix D Review of Complex Numbers
Appendix E OpenMP Functions
Bibliography
Author Index
Subject Index