
Page 1: Chapter 4 Message-Passing Programming. The Message-Passing Model

Chapter 4

Message-Passing Programming

Page 2: Chapter 4 Message-Passing Programming. The Message-Passing Model

The Message-Passing Model

Page 3: Chapter 4 Message-Passing Programming. The Message-Passing Model

What is MPI?

MPI (Message Passing Interface) is a library specification for message passing.

MPI was designed for high performance on both massively parallel machines and on workstation clusters.

Page 4: Chapter 4 Message-Passing Programming. The Message-Passing Model

MPI History

In 1992, the Supercomputing conference agreed to develop and then implement a common standard for message passing.

The first MPI standard, called MPI-1, was completed in May 1994.

 The second MPI standard, MPI-2, was completed in 1998.

Page 5: Chapter 4 Message-Passing Programming. The Message-Passing Model

MPI History

Many implementations of MPI appeared, the most popular being Argonne's MPICH (based on the P4 package and Chameleon, hence the "CH" suffix).

Many supercomputer vendors and other groups also released their own implementations of MPI (e.g., LAM/MPI, Intel MPI, etc.).

Page 6: Chapter 4 Message-Passing Programming. The Message-Passing Model

What's in MPI-2

Dynamic process management
◦ Adding processes to a running MPI computation (see the sketch after this list)
Parallel I/O
C++ and Fortran 90 bindings
Misc
◦ Interaction with threads
◦ Interoperability between languages
◦ Extensions/enhancements to MPI-1
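As a hedged illustration of the first item only: MPI-2's MPI_Comm_spawn lets a running program start additional processes. The sketch below assumes a hypothetical ./worker executable and an arbitrary count of four; it is not part of the original slides.

#include <mpi.h>

/* Sketch: a parent program asking MPI-2 to spawn four extra processes
 * running a hypothetical ./worker executable. */
int main(int argc, char *argv[])
{
    MPI_Comm workers;   /* intercommunicator connecting parent and spawned group */

    MPI_Init(&argc, &argv);

    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);

    /* ... communicate with the workers through the intercommunicator ... */

    MPI_Finalize();
    return 0;
}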

Page 7: Chapter 4 Message-Passing Programming. The Message-Passing Model

Why MPI?

Portability: MPI has been implemented for almost every distributed-memory architecture.

Speed: Each implementation is in principle optimized for the hardware upon which it runs.

Most MPI implementations are directly callable from Fortran, C and C++, and from any language capable of interfacing with such libraries (such as C#, Java or Python).

Page 8: Chapter 4 Message-Passing Programming. The Message-Passing Model

Message Passing

A paradigm of communication where messages are sent from a sender to one or more recipients.

Message-passing systems may satisfy the following conditions:
1. Transferred reliably.
2. Guaranteed to be delivered in order.
3. Synchronous or asynchronous (see the sketch after this list).
4. Passed one-to-one, one-to-many or many-to-one.
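A small sketch of items 3 and 4, assuming at least two processes: a non-blocking (asynchronous) point-to-point send is paired with a one-to-many broadcast. The tag 0 and the value 7 are arbitrary choices made only for this example.

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 7;
        /* Asynchronous: MPI_Isend returns immediately; MPI_Wait completes it. */
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        /* Matching blocking receive on the destination process. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* One-to-many: rank 0 broadcasts value to every process in the communicator. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}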

Page 9: Chapter 4 Message-Passing Programming. The Message-Passing Model

Programming Model (1/2)

MPI lends itself to virtually any distributed-memory parallel programming model.

As shared-memory systems became more popular, particularly SMP/NUMA architectures, MPI implementations for these platforms appeared.

MPI is now used on just about any common parallel architecture, including massively parallel machines, SMP clusters, workstation clusters and heterogeneous networks.

Page 10: Chapter 4 Message-Passing Programming. The Message-Passing Model

Programming Model (2/2)

The number of tasks dedicated to running a parallel program is static; new tasks cannot be dynamically spawned during run time (an MPI-1 restriction, relaxed by MPI-2's dynamic process management).

All parallelism is explicit:
◦ The programmer is responsible for correctly identifying parallelism and implementing parallel algorithms using MPI constructs.

Page 11: Chapter 4 Message-Passing Programming. The Message-Passing Model

Error Handling

By default, an error causes all processes to abort.

The user can cause routines to return (with an error code) instead (see the sketch below).
◦ In C++, exceptions are thrown (MPI-2).

A user can also write and install custom error handlers.
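A minimal sketch of the second point, assuming an MPI-2 implementation: the code asks MPI to return error codes instead of aborting, then deliberately triggers a failure by sending to an invalid rank. The destination 999999 is an arbitrary out-of-range rank chosen for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rc, dummy = 0, len;
    char msg[MPI_MAX_ERROR_STRING];

    MPI_Init(&argc, &argv);

    /* Return error codes rather than aborting all processes (the default). */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* Deliberately invalid destination rank, so the call fails and returns. */
    rc = MPI_Send(&dummy, 1, MPI_INT, 999999, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        MPI_Error_string(rc, msg, &len);
        printf("MPI_Send failed: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}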

Page 12: Chapter 4 Message-Passing Programming. The Message-Passing Model

The Process

A process is a program in execution. A program is not a process; a program is a passive entity, whereas a process is an active entity.

Page 13: Chapter 4 Message-Passing Programming. The Message-Passing Model

SPMD Coding Model

Single Program, Multiple Data: every process runs the same program, but each operates on its own data.
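A small sketch of the SPMD idea, with an assumed problem size of 1000 used only for illustration: every process executes the same program, and its rank decides which part of the work it performs. The circuit-satisfiability program later in this chapter uses exactly this cyclic decomposition (for (i = id; i < 65536; i += p)).

#include <mpi.h>
#include <stdio.h>

#define N 1000   /* assumed problem size, for illustration only */

int main(int argc, char *argv[])
{
    int rank, size, i;
    long local_sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Same code on every process; the slice of work differs by rank. */
    for (i = rank; i < N; i += size)
        local_sum += i;

    printf("process %d summed its share: %ld\n", rank, local_sum);

    MPI_Finalize();
    return 0;
}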

Page 14: Chapter 4 Message-Passing Programming. The Message-Passing Model

Communicators and Groups

Communicator objects connect groups of processes in the MPI session.

Each communicator gives each contained process an independent identifier and arranges its contained processes in an ordered topology.

MPI supports single-group intracommunicator operations and bilateral (two-group) intercommunicator communication.
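As an illustrative sketch only: the code below splits MPI_COMM_WORLD into two smaller intracommunicators by even and odd rank. The even/odd grouping rule is an arbitrary assumption, chosen just to show how a new communicator assigns its own independent ranks to its processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* color selects the group; key orders ranks inside the new group. */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);

    printf("world rank %d has rank %d in its sub-communicator\n",
           world_rank, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}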

Page 15: Chapter 4 Message-Passing Programming. The Message-Passing Model

Rank

Within a communicator, every process has its own unique integer identifier (i.e., Rank).

A rank is also called a "task ID".

Ranks are contiguous and begin at zero.

Ranks are used by the programmer to specify the source and destination of messages (see the sketch below).
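A hedged sketch of this use of ranks: every non-zero rank sends its rank number to rank 0, and rank 0 names each source explicitly when receiving. The tag value 0 is arbitrary; this is not code from the original slides.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, i, incoming;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        for (i = 1; i < size; i++) {
            /* source = i: receive specifically from rank i */
            MPI_Recv(&incoming, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
            printf("rank 0 received %d from rank %d\n", incoming, i);
        }
    } else {
        /* destination = 0: send this process's rank to rank 0 */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}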

Page 16: Chapter 4 Message-Passing Programming. The Message-Passing Model

MPI Program Architecture

The general structure of an MPI program (a skeleton sketch follows the list):

MPI include file
Declarations, prototypes, etc. Program begins. (Serial code)
Initialize MPI environment (parallel code begins)
Do work and make message-passing calls
Terminate MPI environment (parallel code ends)
(Other serial code)
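A minimal C skeleton, offered only as a sketch of the stages listed above; the comments map each line to one of those stages.

#include <mpi.h>                          /* MPI include file */

int main(int argc, char *argv[])          /* declarations, prototypes, etc.; program begins */
{
    int rank;                             /* (serial code) */

    MPI_Init(&argc, &argv);               /* initialize MPI environment: parallel code begins */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* do work and make message-passing calls here */

    MPI_Finalize();                       /* terminate MPI environment: parallel code ends */

    return 0;                             /* (other serial code) */
}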

Page 17: Chapter 4 Message-Passing Programming. The Message-Passing Model

Booting MPI on a Cluster (Administrator)

The Intel MPI Library uses a Multi-Purpose Daemon (MPD) job startup mechanism. To run programs compiled with mpicc (or a related command), you must first set up the MPD daemons.

As administrator (root), start the MPD daemon and, when finished, shut it down with mpdallexit:

pn1:~ # mpd &
[1] 5380
pn1:~ # mpdallexit

Page 18: Chapter 4 Message-Passing Programming. The Message-Passing Model

Booting MPI on a Cluster (Users)

Before running an MPI program, you need to boot the machines with the mpdboot command:

user@pn1:~> mpdboot -n 4

Use the mpdtrace command to list the booted nodes (computers):

user@pn1:~> mpdtrace
pn1
pn4
pn3
pn2

Page 19: Chapter 4 Message-Passing Programming. The Message-Passing Model

Compiling MPI Programs

In general, starting an MPI program depends on the MPI implementation you are using, and might require various scripts, program arguments, and/or environment variables.

In C:
◦ Using the GNU compiler (gcc):

user@pn1:~> mpicc -o a.out sample.c

◦ Using the Intel compiler:

user@pn1:~> mpiicc -o a.out sample.c

Page 20: Chapter 4 Message-Passing Programming. The Message-Passing Model

Running a Program on a Cluster

To run an MPI program, the command is as follows, where -n is the number of processes to run:

user@pn1:~> mpiexec -n 8 ./a.out
Process 5 of 8 is on pn4
Process 6 of 8 is on pn4
Process 0 of 8 is on pn1
Process 2 of 8 is on pn1
Process 1 of 8 is on pn1
Process 3 of 8 is on pn1
Process 7 of 8 is on pn4
Process 4 of 8 is on pn4
pi is approximately 3.1415926535896137, Error is 0.0000000000001794
wall clock time = 0.231002

Page 21: Chapter 4 Message-Passing Programming. The Message-Passing Model

A Minimal MPI Program in C

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int id, t_process;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &t_process);
    printf("Hello, world! This is process %d of %d\n", id, t_process);
    MPI_Finalize();
    return 0;
}

Page 22: Chapter 4 Message-Passing Programming. The Message-Passing Model

Sample Result

The sample result is as follows:

user@pn1:~> mpiicc -o a.out sample.c
user@pn1:~> mpiexec -n 8 ./a.out
Hello, world! This is process 0 of 8
Hello, world! This is process 6 of 8
Hello, world! This is process 4 of 8
Hello, world! This is process 5 of 8
Hello, world! This is process 7 of 8
Hello, world! This is process 2 of 8
Hello, world! This is process 1 of 8
Hello, world! This is process 3 of 8

Page 23: Chapter 4 Message-Passing Programming. The Message-Passing Model

Sample Result

[Diagram: three processes executing the program in parallel. After START, each process (Process 0, Process 1, Process 2) calls MPI_Init, then MPI_Comm_rank (id = 0, 1, 2 respectively) and MPI_Comm_size (t_process = 3), prints "Hello, world! This is process <id> of 3", calls MPI_Finalize, and the run reaches STOP.]

Page 24: Chapter 4 Message-Passing Programming. The Message-Passing Model

MPI Initial Function

MPI_Init must be called:
◦ before any other MPI function,
◦ in every MPI program,
◦ only once in an MPI program.

Note: All MPI identifiers, including function identifiers, begin with the prefix MPI_, followed by a capital letter and a series of lowercase letters and underscores.

Page 25: Chapter 4 Message-Passing Programming. The Message-Passing Model

MPI_Comm_rank and MPI_Comm_size

MPI_COMM_WORLD is the default communicator that you get "for free".

A process calls MPI_Comm_rank to determine its rank within a communicator.

MPI_Comm_size determines the total number of processes in a communicator.

Page 26: Chapter 4 Message-Passing Programming. The Message-Passing Model

MPI Finalize Function

MPI_Finalize terminates the MPI execution environment.

It should be the last MPI routine called in every MPI program.

It allows the system to free resources (such as memory) that have been allocated to MPI.

Page 27: Chapter 4 Message-Passing Programming. The Message-Passing Model

Circuit Satisfiability

Page 28: Chapter 4 Message-Passing Programming. The Message-Passing Model
Page 29: Chapter 4 Message-Passing Programming. The Message-Passing Model

#include <mpi.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int i;
    int id;
    int p;
    void check_circuit (int, int);

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &id);
    MPI_Comm_size (MPI_COMM_WORLD, &p);

    /* Cyclic decomposition: each process checks every p-th input, starting at its rank. */
    for (i = id; i < 65536; i += p)
        check_circuit (id, i);

    printf ("process %d is done\n", id);
    fflush (stdout);
    MPI_Finalize ();
    return 0;
}

Page 30: Chapter 4 Message-Passing Programming. The Message-Passing Model

#define EXTRACT_BIT(n,i) ((n & (1 << i)) ? 1 : 0)

void check_circuit (int id, int z)
{
    int v[16];   /* v[i] holds bit i of the candidate input z */
    int i;

    for (i = 0; i < 16; i++)
        v[i] = EXTRACT_BIT (z, i);

    if ((v[0] || v[1]) && (!v[1] || !v[3]) && (v[2] || v[3]) && /* ... remaining clauses elided in the original ... */) {
        printf ("%d) %d%d...%d", id, v[0], v[1], /* ... */ v[15]);
        fflush (stdout);
    }
}

Page 31: Chapter 4 Message-Passing Programming. The Message-Passing Model

Output (1/3)

Page 32: Chapter 4 Message-Passing Programming. The Message-Passing Model

Output (2/3)

Page 33: Chapter 4 Message-Passing Programming. The Message-Passing Model

Output (3/3)