PP Lab
MPI programming VI
Program 1
Break up a long vector into subvectors of equal length. Distribute the subvectors to processes. Let them compute the partial sums. Collect the partial sums from the processes and add them to deliver the final sum to the user.
Functions to be used
• MPI_Reduce: Reduces values on all processes to a single value
• Synopsis:
#include "mpi.h"
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
• Input Parameters:
– sendbuf: address of send buffer
– count: number of elements in send buffer
– datatype: data type of elements of send buffer
– op: reduce operation
– root: rank of root process
– comm: communicator
• Output Parameter:
– recvbuf: address of receive buffer (significant only at root)
Reduction Operations
Operation Handle    Operation
MPI_MAX Maximum
MPI_MIN Minimum
MPI_PROD Product
MPI_SUM Sum
MPI_LAND Logical AND
MPI_LOR Logical OR
MPI_LXOR Logical Exclusive OR
MPI_BAND Bitwise AND
MPI_BOR Bitwise OR
MPI_BXOR Bitwise Exclusive OR
MPI_MAXLOC Maximum value and location
MPI_MINLOC Minimum value and location
Functions to be used
• MPI_Scatter: Sends data from one task to all other tasks in a group.
• Synopsis:
#include "mpi.h"
int MPI_Scatter(void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
• Input Parameters:
– sendbuf: address of send buffer
– sendcount: number of elements sent to each process
– sendtype: data type of send buffer elements
(the above three arguments are significant only at root)
– recvcount: number of elements in receive buffer
– recvtype: data type of receive buffer elements
– root: rank of sending process
– comm: communicator
• Output Parameter:
– recvbuf: address of receive buffer
Code
#include <mpi.h>
#include <stdio.h>

/* Run with 3 processes (9 elements, 3 per process),
   e.g. mpirun -np 3 ./a.out */
int main(int argc, char *argv[])
{
    int i, rank, b[3], psum, tsum;
    int a[9] = {1,2,3,4,5,6,7,8,9};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Scatter(a, 3, MPI_INT, b, 3, MPI_INT, 0, MPI_COMM_WORLD);
    psum = 0;                       /* partial sum on each process */
    for (i = 0; i < 3; i++)
        psum += b[i];
    MPI_Reduce(&psum, &tsum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of the vector is %d\n", tsum);
    MPI_Finalize();
    return 0;
}
Program 2
Execute the prefix sum problem on 8 processes.
s0 = x0
s1 = x0 + x1 = s0 + x1
s2 = x0 + x1 + x2 = s1 + x2
s3 = x0 + x1 + x2 + x3 = s2 + x3
…
s7 = x0 + x1 + x2 + … + x7 = s6 + x7
Function to be used
• MPI_Scan: Computes the scan (partial reductions) of data on a collection of processes
• Synopsis:
#include "mpi.h"
int MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
• Input Parameters:
– sendbuf: starting address of send buffer
– count: number of elements in input buffer
– datatype: data type of elements of input buffer
– op: operation
– comm: communicator
• Output Parameter:
– recvbuf: starting address of receive buffer
[Figure: MPI_Scan with MPI_SUM on 8 processes. Process i (ranks 0–7) contributes the value i; after the scan, the inputs 0 1 2 3 4 5 6 7 yield the prefix sums 0 1 3 6 10 15 21 28 on the respective ranks.]
Code
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int id, a, b;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    a = id;                         /* each process contributes its rank */
    MPI_Scan(&a, &b, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("process=%d\tprefix sum=%d\n", id, b);
    MPI_Finalize();
    return 0;
}
Assignment
• Write and explain the argument lists of the following and say how they differ from the two functions you have seen:
– MPI_Allreduce
– MPI_Reduce_scatter