Programming Models for Parallel Computing
Scientific and Engineering Computation
Edited by Pavan Balaji, with contributions by William Gropp, Rajeev Thakur, and Paul H. Hargrove

Format: Paperback, 488 pages
Published: United States, 1 November 2015

An overview of the most prominent contemporary parallel processing programming models, written in a unique tutorial style.

With the coming of the parallel computing era, computer scientists have turned their attention to designing programming models that are suited for high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today.

The chapters describe the programming models in a unique tutorial style rather than using the formal approach taken in the research literature. The aim is to cover a wide range of parallel programming models, enabling the reader to understand what each has to offer. The book begins with a description of the Message Passing Interface (MPI), the most common parallel programming model for distributed memory computing. It goes on to cover one-sided communication models, ranging from low-level runtime libraries (GASNet, OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-oriented programming models (Charm++, ADLB, Scioto, Swift, CnC) that allow users to describe their computation and data units as tasks so that the runtime system can manage computation and data movement as necessary; and parallel programming models intended for on-node parallelism in the context of multicore architecture or attached accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will be a valuable resource for graduate students, researchers, and any scientist who works with data sets and large computations.
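
To give a sense of the distributed-memory style with which the book opens, below is a minimal sketch of MPI point-to-point message passing. It is an illustrative example only, not drawn from the book, and assumes a standard MPI installation (compile with mpicc, launch with mpiexec -n 2).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */

    if (rank == 0) {
        value = 42;                         /* rank 0 sends one integer to rank 1 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();                         /* shut the runtime down */
    return 0;
}

Each process runs the same program and communication is explicit; managing that complexity is exactly what the higher-level models surveyed later in the book (one-sided, task-oriented, and on-node approaches) aim to simplify.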

Contributors
Timothy Armstrong, Michael G. Burke, Ralph Butler, Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman, Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale, David Kirk, Kath Knobe, Sriram Krishnamoorthy, Jeffery A. Kuehn, Alexey Kukanov, Charles E. Leiserson, Jonathan Lifflander, Ewing Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W. Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng

Product Details
EAN: 9780262528818
ISBN: 0262528819
Publisher: The MIT Press
Other Information: 90 b&w illustrations; 180 illustrations, unspecified
Dimensions: 22.9 x 20.1 x 2.3 centimeters (0.87 kg)

About the Author

Pavan Balaji holds appointments as Computer Scientist and Group Lead at Argonne National Laboratory, Institute Fellow of the Northwestern-Argonne Institute of Science and Engineering at Northwestern University, and Research Fellow at the Computation Institute at the University of Chicago.

William Gropp is Director of the Parallel Computing Institute and Thomas M. Siebel Chair in Computer Science at the University of Illinois Urbana-Champaign.

Rajeev Thakur is Deputy Director in the Mathematics and Computer Science Division at Argonne National Laboratory.

Ewing Lusk is Argonne Distinguished Fellow Emeritus at Argonne National Laboratory.

Ian Foster is the Arthur Holly Compton Distinguished Service Professor of Computer Science at the University of Chicago and Distinguished Fellow at Argonne National Laboratory.

Barbara Chapman is Professor of Computer Science at the University of Houston.

Charles E. Leiserson is Professor of Computer Science and Engineering at the Massachusetts Institute of Technology.

Timothy G. Mattson is Senior Principal Engineer at Intel Corporation.
