Message passing in computer clusters

Technicians working on a cluster consisting of many computers working together by sending messages over a network

Message passing is an inherent element of all computer clusters. All computer clusters, ranging from homemade Beowulfs to some of the fastest supercomputers in the world, rely on message passing to coordinate the activities of the many nodes they encompass.[1][2] Message passing in computer clusters built with commodity servers and switches is used by virtually every internet service.[1]

Recently, the use of computer clusters with more than one thousand nodes has been spreading. As the number of nodes in a cluster increases, the rapid growth in the complexity of the communication subsystem makes message passing delays over the interconnect a serious performance issue in the execution of parallel programs.[3]

Specific tools may be used to simulate, visualize and understand the performance of message passing on computer clusters. Before a large computer cluster is assembled, a trace-based simulator can use a small number of nodes to help predict the performance of message passing on larger configurations. Following test runs on a small number of nodes, the simulator reads the execution and message transfer log files and simulates the performance of the messaging subsystem when many more messages are exchanged between a much larger number of nodes.[4][5]

Messages and computations

Approaches to message passing

Historically, the two typical approaches to communication between cluster nodes have been PVM, the Parallel Virtual Machine, and MPI, the Message Passing Interface.[6] However, MPI has now emerged as the de facto standard for message passing on computer clusters.[7]

PVM predates MPI and was developed at the Oak Ridge National Laboratory around 1989. It provides a set of software libraries that allow a collection of computing nodes to act as a single "parallel virtual machine". It provides a run-time environment for message passing, task and resource management, and fault notification, and must be installed directly on every cluster node. PVM can be used by user programs written in C, C++, or Fortran.[6][8]

Unlike PVM, which has a concrete implementation, MPI is a specification rather than a specific set of libraries. The specification emerged in the early 1990s out of discussions among some 40 organizations, the initial effort having been supported by ARPA and the National Science Foundation. The design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations, which typically use TCP/IP and socket connections.[6] MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, and Python.[8] The MPI specification has been implemented in systems such as MPICH and Open MPI.[8][9]
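As a concrete illustration of the programming model, the following is a minimal sketch of an MPI program in C in which one process sends a single message to another. It uses only standard MPI calls and is not tied to any particular implementation or system discussed here.

```c
/* Minimal MPI point-to-point example: rank 0 sends one integer to rank 1.
   Compile with an MPI wrapper compiler (e.g. mpicc) and launch with two
   processes; the exact commands depend on the MPI implementation in use. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?   */

    if (rank == 0) {
        int payload = 42;
        /* Send one integer to rank 1 with message tag 0. */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        /* Receive one integer from rank 0. */
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", payload);
    }

    MPI_Finalize();                         /* shut the runtime down */
    return 0;
}
```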

Testing, evaluation and optimization

The Virginia Tech Xserve cluster of Apple computers

Computer clusters use a number of strategies for dealing with the distribution of processing over multiple nodes and the resulting communication overhead. Some computer clusters such as Tianhe-I use different processors for message passing than those used for performing computations. Tianhe-I uses over two thousand FeiTeng-1000 processors to enhance the operation of its proprietary message passing system, while computations are performed by Xeon and Nvidia Tesla processors.[10][11]

One approach to reducing communication overhead is the use of local neighborhoods (also called locales) for specific tasks. Here computational tasks are assigned to specific "neighborhoods" in the cluster, to increase efficiency by using processors that are closer to each other.[3] However, given that in many cases the actual topology of the cluster nodes and their interconnections may not be known to application developers, attempting to fine-tune performance at the application program level is quite difficult.[3]

Given that MPI has now emerged as the de facto standard on computer clusters, the increase in the number of cluster nodes has resulted in continued research to improve the efficiency and scalability of MPI libraries. These efforts have included research to reduce the memory footprint of MPI libraries.[7]

From the earliest days, MPI has provided facilities for performance profiling via the PMPI "profiling system".[12] The use of the PMPI prefix allows for the observation of the entry and exit points of message-passing calls. However, given the high-level nature of this profile, this type of information only provides a glimpse of the real behavior of the communication system. The need for more information resulted in the development of the MPI-Peruse system. Peruse provides a more detailed profile by enabling applications to gain access to state changes within the MPI library. This is achieved by registering callbacks with Peruse, which are then invoked as triggers as message events take place.[13]

Peruse can work with the PARAVER visualization system. PARAVER has two components: a trace component and a visual component for analyzing the traces, the statistics related to specific events, etc.[14] PARAVER may use trace formats from other systems, or perform its own tracing. It operates at the task level, thread level, and in a hybrid format. Traces often include so much information that they can be overwhelming, so PARAVER summarizes them to allow users to visualize and analyze them.[13][14][15]
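The PMPI mechanism relies on name shifting: every MPI routine is also reachable under a PMPI-prefixed name, so a profiling layer can define its own version of a routine, record the entry and exit, and forward the call to the underlying library. The following is a minimal sketch of such a wrapper for MPI_Send; the timing output is illustrative and not the format of any particular tool.

```c
/* Sketch of a PMPI wrapper: a profiling library defines its own MPI_Send,
   records the entry and exit of the call, and forwards the arguments to
   PMPI_Send, the name-shifted entry point provided by the MPI library. */
#include <mpi.h>
#include <stdio.h>

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    double t0 = MPI_Wtime();                       /* entry timestamp */
    int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
    double t1 = MPI_Wtime();                       /* exit timestamp  */

    /* A real tool would write these records to a trace file instead of
       printing them to standard error. */
    fprintf(stderr, "MPI_Send to rank %d, %d elements, %.6f s\n",
            dest, count, t1 - t0);
    return rc;
}
```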

Performance analysis

When a large-scale, often supercomputer-level, parallel system is being developed, it is essential to be able to experiment with multiple configurations and simulate performance. There are a number of approaches to modeling message passing efficiency in this scenario, ranging from analytical models to trace-based simulation, and some approaches rely on the use of test environments based on "artificial communications" to perform synthetic tests of message passing performance.[3] Systems such as BIGSIM provide these facilities by allowing the simulation of performance on various node topologies, message passing and scheduling strategies.[4]

Analytical approaches

At the analytical level, it is necessary to model the communication time T in terms of a set of subcomponents such as the startup latency, the asymptotic bandwidth and the number of processors. A well-known model is Hockney's model, which simply relies on point-to-point communication, using T = L + (M / R), where M is the message size, L is the startup latency and R is the asymptotic bandwidth in MB/s.[16]

Xu and Hwang generalized Hockney's model to include the number of processors, so that both the latency and the asymptotic bandwidth are functions of the number of processors.[16][17] Gunawan and Cai then generalized this further by introducing cache size and separating messages based on their sizes, obtaining two separate models: one for messages below the cache size, and one for those above.[16]
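The following sketch expresses these models as simple functions: the first is Hockney's formula as given above, and the second is a piecewise, cache-aware variant in the spirit of Gunawan and Cai's extension. The exact form of their model is given in the cited literature, and every parameter here is a placeholder, not a measured value.

```c
/* Illustrative communication-time models (parameters are placeholders,
   not measurements of any real machine). */

/* Hockney's model: T = L + M / R, with startup latency L (seconds),
   message size M (bytes) and asymptotic bandwidth R (bytes/second). */
double hockney_time(double M, double L, double R)
{
    return L + M / R;
}

/* A piecewise variant in the spirit of Gunawan and Cai: messages below
   the cache size C use one (latency, bandwidth) pair, larger messages
   another, reflecting the drop in effective bandwidth once a message
   no longer fits in cache. This is only a sketch of the idea. */
double cache_aware_time(double M, double C,
                        double L_small, double R_small,
                        double L_large, double R_large)
{
    return (M < C) ? L_small + M / R_small
                   : L_large + M / R_large;
}
```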

Performance simulation

The IBM Roadrunner cluster supercomputer

Specific tools may be used to simulate and understand the performance of message passing on computer clusters. For instance, ClusterSim uses a Java-based visual environment for discrete-event simulation. In this approach, computing nodes and the network topology are visually modeled. Jobs, together with their duration and complexity, are represented with specific probability distributions, allowing various parallel job scheduling algorithms to be proposed and experimented with. The communication overhead for MPI message passing can thus be simulated and better understood in the context of large-scale parallel job execution.[18]
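As a rough illustration of the general idea, rather than of ClusterSim itself, the following sketch draws job durations from a probability distribution and schedules the jobs onto a fixed set of nodes, reporting the average waiting time. The distributions, the scheduler and all numerical parameters are hypothetical.

```c
/* Toy discrete-event sketch (not ClusterSim): jobs with randomly drawn
   durations are scheduled onto a fixed number of nodes, and the average
   waiting time is reported. Illustrates simulating parallel job
   scheduling with probability distributions. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NODES 8
#define JOBS  1000

/* Draw from an exponential distribution with the given mean. */
static double exp_sample(double mean)
{
    double u = (rand() + 1.0) / (RAND_MAX + 2.0);  /* avoid log(0) */
    return -mean * log(u);
}

int main(void)
{
    double node_free[NODES] = {0};  /* time at which each node becomes free */
    double arrival = 0.0, total_wait = 0.0;

    for (int j = 0; j < JOBS; j++) {
        arrival += exp_sample(1.0);          /* job inter-arrival times  */
        double duration = exp_sample(5.0);   /* job service requirement  */

        /* Simple scheduler: pick the node that becomes free earliest. */
        int best = 0;
        for (int n = 1; n < NODES; n++)
            if (node_free[n] < node_free[best]) best = n;

        double start = arrival > node_free[best] ? arrival : node_free[best];
        total_wait += start - arrival;
        node_free[best] = start + duration;
    }

    printf("average wait over %d jobs on %d nodes: %.2f time units\n",
           JOBS, NODES, total_wait / JOBS);
    return 0;
}
```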

Other simulation tools include MPI-Sim and BIGSIM.[19] MPI-Sim is an execution-driven simulator that requires C or C++ programs to operate.[18][19] ClusterSim, on the other hand, uses a hybrid higher-level modeling system that is independent of the programming language used for program execution.[18]

Unlike MPI-Sim, BIGSIM is a trace-driven system that simulates based on the logs of executions saved in files by a separate emulator program.[5][19] BIGSIM includes an emulator and a simulator. The emulator executes applications on a small number of nodes and stores the results, so the simulator can use them to simulate activities on a much larger number of nodes.[5] The emulator stores information about sequential execution blocks (SEBs) for multiple processors in log files, with each SEB recording the messages sent, their sources and destinations, dependencies, timings, etc. The simulator reads the log files and simulates them, and may generate additional messages, which are then also stored as SEBs.[4][5] The simulator can thus provide a view of the performance of very large applications, based on the execution traces provided by the emulator on a much smaller number of nodes, before the entire machine is available or configured.[5]
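Based on the description above, one such log record might conceptually contain fields like the following; the layout and field names are hypothetical and do not reflect BIGSIM's actual trace format.

```c
/* Illustrative layout of a sequential-execution-block (SEB) trace record,
   based on the description above; the field names and types are
   hypothetical, not BIGSIM's actual on-disk format. */
typedef struct {
    int     source_rank;      /* processor that executed the block        */
    double  start_time;       /* timestamp when the block began           */
    double  end_time;         /* timestamp when the block finished        */
    int     num_messages;     /* messages sent while the block executed   */
    int    *dest_ranks;       /* destination processor of each message    */
    int    *message_sizes;    /* size in bytes of each message            */
    int    *dependencies;     /* indices of SEBs this block depends on    */
} seb_record;
```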


References

  1. ^ a b Computer Organization and Design by David A. Patterson and John L. Hennessy 2011 ISBN 0123747503 page 641 [1]
  2. ^ Beowulf Cluster Computing With Windows by Thomas Lawrence Sterling 2001 ISBN 0262692759 MIT Press pages 7–9
  3. ^ a b c d Recent Advances in the Message Passing Interface by Yiannis Cotronis, Anthony Danalis, Dimitris Nikolopoulos and Jack Dongarra 2011 ISBN 3642244483 pages 160–162
  4. ^ a b c Petascale Computing: Algorithms and Applications by David A. Bader 2007 ISBN 1584889098 pages 435–435
  5. ^ a b c d e Languages and Compilers for Parallel Computing edited by Keith Cooper, John Mellor-Crummey and Vivek Sarkar 2011 ISBN 3642195946 pages 202–203
  6. ^ a b c Distributed services with OpenAFS: for enterprise and education by Franco Milicchio, Wolfgang Alexander Gehrke 2007 pages 339–341
  7. ^ a b Recent Advances in Parallel Virtual Machine and Message Passing Interface by Matti Ropo, Jan Westerholm and Jack Dongarra 2009 ISBN 3642037690 page 231
  8. ^ a b c Grid and Cluster Computing by J. Prabhu 2008 ISBN 8120334280 pages 109–112
  9. ^ Gropp, William; Lusk, Ewing; Skjellum, Anthony (1996). "A High-Performance, Portable Implementation of the MPI Message Passing Interface". Parallel Computing. doi:10.1016/0167-8191(96)00024-5.
  10. ^ The TianHe-1A Supercomputer: Its Hardware and Software by Xue-Jun Yang, Xiang-Ke Liao, et al in the Journal of Computer Science and Technology, Volume 26, Number 3, May 2011, pages 344–351. Archived from the original on 2011-06-21. Retrieved 2012-02-08.
  11. ^ U.S. says China building 'entirely indigenous' supercomputer, by Patrick Thibodeau Computerworld, November 4, 2010 [2]
  12. ^ What is PMPI?
  13. ^ a b Recent Advances in Parallel Virtual Machine and Message Passing Interface by Bernd Mohr, Jesper Larsson Träff, Joachim Worringen and Jack Dongarra 2006 ISBN 354039110X page 347
  14. ^ a b PARAVER: A Tool to Visualize and Analyze Parallel Code by Vincent Pillet et al, Proceedings of the conference on Transputer and Occam Developments, 1995, pages 17–31
  15. ^ Computational Science-- Iccs 2003 edited by Peter M.A. Sloot, David Abramson, Alexander V. Bogdanov and Jack J. Dongarra ISBN 3540401970 page 183
  16. ^ a b c Modeling Message Passing Overhead by C.Y Chou et al. in Advances in Grid and Pervasive Computing: First International Conference, GPC 2006 edited by Yeh-Ching Chung and José E. Moreira ISBN 3540338098 pages 299–307
  17. ^ High-Performance Computing and Networking edited by Peter Sloot, Marian Bubak and Bob Hertzberge 1998 ISBN 3540644431 page 935
  18. ^ a b c High Performance Computational Science and Engineering edited by Michael K. Ng, Andrei Doncescu, Laurence T. Yang and Tau Leng, 2005 ISBN 0387240489 pages 59–63
  19. ^ a b c Advances in Computer Science, Environment, Ecoinformatics, and Education edited by Song Lin and Xiong Huang 2011 ISBN 3642233236 page 16