
Communicators

Self Test with solution


Self Test

1. MPI_Comm_group may be used to:

a) create a new group.

b) determine group handle of a communicator.

c) create a new communicator.


2. MPI_Group_incl is used to select processes of old_group to form new_group. If the selection includes one or more processes not in old_group, it causes the MPI job:

a) to print error messages and abort the job.

b) to print error messages but execution continues.

c) to continue with warning messages.


3. Assuming that a calling process belongs to the new_group of MPI_Group_excl(old_group, count, nonmembers, new_group, ierr), if the order of the nonmembers array were altered, then

a) the corresponding rank of the calling process in the new_group would change.

b) the corresponding rank of the calling process in the new_group would remain unchanged.

c) the corresponding rank of the calling process in the new_group might or might not change.


4. In MPI_Group_excl(old_group, count, nonmembers, new_group, ierr), if count = 0, then

a) new_group is identical to old_group.

b) new_group has no members.

c) error results.


5. In MPI_Group_excl(old_group, count, nonmembers, new_group, ierr), if the nonmembers array is not unique (i.e., one or more entries of nonmembers point to the same rank in old_group), then

a) MPI ignores the repetition.

b) error results.

c) it returns MPI_GROUP_EMPTY.


6. MPI_Group_rank is used to query the calling process's rank in a group. If the calling process does not belong to the group, then

a) error results.

b) the returned group rank has a value of -1, indicating that the calling process is not a member of the group.

c) the returned group rank is MPI_UNDEFINED.


7. In MPI_Comm_split, if two processes of the same color are assigned the same key, then

a) error results.

b) their rank numbers in the new communicator are ordered according to their relative rank order in the old communicator.

c) they both share the same rank in the new communicator.


8. MPI_Comm_split(old_comm, color, key, new_comm) is equivalent to MPI_Comm_create(old_comm, group, new_comm) when

a) color=Iam, key=0 when the calling process Iam belongs to group; color=MPI_UNDEFINED for all other processes in old_comm.

b) color=0, key=Iam when the calling process Iam belongs to group; color=MPI_UNDEFINED for all other processes in old_comm.

c) color=0, key=0.


Answers

1. B

2. A

3. B

4. A

5. B

6. C

7. B

8. B
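
To make these answers concrete, here is a minimal sketch (not part of the original self test) that exercises the routines the questions cover. It assumes the program is run with at least two processes; the variable names are made up for this illustration, and the question numbers in the comments refer to the self test above.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int i, rank, size, grp_rank, n_even = 0;
    int members[64];                  /* assumes at most 128 processes */
    MPI_Group world_group, even_group;
    MPI_Comm even_comm, split_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Q1: MPI_Comm_group determines the group handle of a communicator */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    /* Q2: every rank passed to MPI_Group_incl must exist in world_group,
       or the job prints error messages and aborts */
    for (i = 0; i < size; i += 2)
        members[n_even++] = i;
    MPI_Group_incl(world_group, n_even, members, &even_group);

    /* Q6: a caller that is not in the group gets MPI_UNDEFINED back */
    MPI_Group_rank(even_group, &grp_rank);
    if (grp_rank == MPI_UNDEFINED)
        printf("P %d is not a member of even_group\n", rank);

    /* Q8: MPI_Comm_create over even_group, and the equivalent MPI_Comm_split:
       color=0, key=rank for members, color=MPI_UNDEFINED for everyone else */
    MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);
    MPI_Comm_split(MPI_COMM_WORLD, (rank % 2 == 0) ? 0 : MPI_UNDEFINED,
                   rank, &split_comm);

    if (even_comm != MPI_COMM_NULL) {     /* non-members got MPI_COMM_NULL */
        int r1, r2;
        MPI_Comm_rank(even_comm, &r1);
        MPI_Comm_rank(split_comm, &r2);
        printf("P %d: rank %d / %d in the two new communicators\n", rank, r1, r2);
        MPI_Comm_free(&even_comm);
        MPI_Comm_free(&split_comm);
    }

    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}

With four processes, the even ranks report the same rank in both new communicators, which is the equivalence question 8 describes.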


Course Problem

• For this chapter, a new version of the Course Problem is presented in which the average value each processor calculates when a target location is found is computed in a different manner.

• Specifically, the average will be calculated from the "neighbor" values of the target.

• This is a classic style of programming (called calculations with a stencil) used at important array locations. Stencil calculations are used in many applications, including numerical solutions of differential equations and image processing, to name two; a generic sketch follows this list.

• This new Course Problem will also entail more message passing between the searching processors because in order to calculate the average they will have to get values of the global array they do not have in their subarray.
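
As a generic illustration of the stencil idea mentioned above (a sketch only; the function and array names are made up and are not part of the course code), a three-point average stencil over a 1-D array looks like this:

/* Three-point average stencil applied across the interior of a 1-D array */
void three_point_average(const float *in, float *out, int n)
{
    int i;
    for (i = 1; i < n - 1; i++)
        out[i] = (in[i-1] + in[i] + in[i+1]) / 3.0f;
}

The Course Problem applies the same three-point average, but only at the target locations, with the ring neighbors supplying the values a processor's subarray does not hold.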


Description

• Our new problem still implements a parallel search of an integer array. The program should find all occurrences of a certain integer, which will be called the target. When a processor of a certain rank finds a target location, it should then calculate the average of:

– the target value;

– an element from the processor with rank one higher (the "right" processor), which should send the first element from its local array;

– an element from the processor with rank one less (the "left" processor), which should send the first element from its local array.


• For example, if processor 1 finds the target at index 33 in its local array, it should get from processors 0 (left) and 2 (right) the first element of their local arrays. These three numbers should then be averaged.

• In terms of right and left neighbors, you should visualize the four processors connected in a ring. That is, the left neighbor for P0 should be P3, and the right neighbor for P3 should be P0 (a compact way to compute this is sketched after this list).

• Both the target location and the average should be written to an output file. As usual, the program should read both the target value and all the array elements from an input file.
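
A compact way to compute those ring neighbors (a sketch; the Chapter solution below uses explicit if/else branches instead) is modular arithmetic:

/* Ring neighbors for any number of processes:
   P0's left neighbor is P(size-1), and P(size-1)'s right neighbor is P0 */
left  = (rank + size - 1) % size;
right = (rank + 1) % size;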


Exercise

• Solve this new version of the Course Problem by modifying your code from Chapter 6. Specifically, change the code to perform the new method of calculating the average value at each target location.


Solution

• Note:

– The section of code shown in blue is new code in which each processor calculates its left and right neighbors.

– The sections of code in red are the new data transfers each processor performs to get the first elements of its left and right processors' local arrays. Also shown in red is the new calculation of the average.


#include <stdio.h>
#include <mpi.h>

#define N 300

int main(int argc, char **argv)
{
    int i, target;               /* local variables */
    int b[N], a[N/4];            /* a is the name of the array each slave searches */
    int rank, size, err;
    MPI_Status status;
    int end_cnt;
    int gi;                      /* global index */
    float ave;                   /* average */
    FILE *sourceFile;
    FILE *destinationFile;

    int left, right;             /* the left and right processes */
    int lx, rx;                  /* store the left and right elements */

    int blocklengths[2] = {1, 1};                 /* initialize blocklengths array */
    MPI_Datatype types[2] = {MPI_INT, MPI_FLOAT}; /* initialize types array */
    MPI_Aint displacements[2];
    MPI_Datatype MPI_Pair;


    err = MPI_Init(&argc, &argv);
    err = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    err = MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Initialize the displacements array with memory addresses */
    err = MPI_Address(&gi, &displacements[0]);
    err = MPI_Address(&ave, &displacements[1]);
    /* This routine creates the new data type MPI_Pair */
    err = MPI_Type_struct(2, blocklengths, displacements, types, &MPI_Pair);
    /* This routine allows it to be used in communication */
    err = MPI_Type_commit(&MPI_Pair);

    if (size != 4) {
        printf("Error: You must use 4 processes to run this program.\n");
        return 1;
    }


    if (rank == 0) {
        /* File b.data has the target value on the first line. */
        /* The remaining 300 lines of b.data have the values for the b array. */
        sourceFile = fopen("b.data", "r");

        /* File found.data will contain the indices of b where the target is. */
        destinationFile = fopen("found.data", "w");

        if (sourceFile == NULL) {
            printf("Error: can't access b.data.\n");
            return 1;
        } else if (destinationFile == NULL) {
            printf("Error: can't create file for writing.\n");
            return 1;
        } else {
            /* Read in the target */
            fscanf(sourceFile, "%d", &target);
        }
    }


    /* Notice the broadcast is outside of the if; all processors must call it */
    err = MPI_Bcast(&target, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Read in the b array */
        for (i = 0; i < N; i++) {
            fscanf(sourceFile, "%d", &b[i]);
        }
    }

    /* Again, the scatter is outside of the if; all processors must call it */
    err = MPI_Scatter(b, N/size, MPI_INT, a, N/size, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {                  /* Handle the special case of processor 0 */
        left = size - 1;
        right = rank + 1;
    } else if (rank == size - 1) {    /* Handle the special case of the last processor */
        left = rank - 1;
        right = 0;
    } else {                          /* The "normal" calculation of the neighbor processors */
        left = rank - 1;
        right = rank + 1;
    }


    if (rank == 0) {
        /* P0 sends the first element of its subarray a to its neighbors */
        err = MPI_Send(&a[0], 1, MPI_INT, left, 33, MPI_COMM_WORLD);
        err = MPI_Send(&a[0], 1, MPI_INT, right, 33, MPI_COMM_WORLD);

        /* P0 gets the first elements of its left and right processors' arrays */
        err = MPI_Recv(&lx, 1, MPI_INT, left, 33, MPI_COMM_WORLD, &status);
        err = MPI_Recv(&rx, 1, MPI_INT, right, 33, MPI_COMM_WORLD, &status);

        /* The master now searches the first fourth of the array for the target */
        for (i = 0; i < N/size; i++) {
            if (a[i] == target) {
                gi = rank*N/size + i + 1;
                ave = (target + lx + rx)/3.0;
                fprintf(destinationFile, "P %d, %d %f\n", rank, gi, ave);
            }
        }

        end_cnt = 0;
        while (end_cnt != 3) {
            /* Because MPI_Pair was built from the absolute addresses of gi and ave,
               receiving into MPI_BOTTOM deposits the values directly into gi and ave */
            err = MPI_Recv(MPI_BOTTOM, 1, MPI_Pair, MPI_ANY_SOURCE, MPI_ANY_TAG,
                           MPI_COMM_WORLD, &status);
            if (status.MPI_TAG == 52)
                end_cnt++;            /* Tag 52 marks a slave's "I am done" message */
            else
                fprintf(destinationFile, "P %d, %d %f\n", status.MPI_SOURCE, gi, ave);
        }

        fclose(sourceFile);
        fclose(destinationFile);
    }


    else {
        /* Each slave sends the first element of its subarray a to its neighbors */
        err = MPI_Send(&a[0], 1, MPI_INT, left, 33, MPI_COMM_WORLD);
        err = MPI_Send(&a[0], 1, MPI_INT, right, 33, MPI_COMM_WORLD);

        /* Each slave gets the first elements of its left and right processors' arrays */
        err = MPI_Recv(&lx, 1, MPI_INT, left, 33, MPI_COMM_WORLD, &status);
        err = MPI_Recv(&rx, 1, MPI_INT, right, 33, MPI_COMM_WORLD, &status);

        /* Search the a array and send the target locations to the master */
        for (i = 0; i < N/size; i++) {
            if (a[i] == target) {
                gi = rank*N/size + i + 1;    /* Convert the local index to a global index */
                ave = (target + lx + rx)/3.0;
                err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 19, MPI_COMM_WORLD);
            }
        }

        gi = target;    /* Both are fake values */
        ave = 3.45;     /* The point of this send is the "end" tag 52 (see Chapter 4) */
        err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 52, MPI_COMM_WORLD);
    }

    err = MPI_Type_free(&MPI_Pair);
    err = MPI_Finalize();
    return 0;
}
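
A portability note, not part of the original solution: MPI_Address and MPI_Type_struct were deprecated in MPI-2 and removed in MPI-3, so on a current MPI library the derived-type setup near the top of the program would use their direct replacements:

/* MPI-3 replacements for the deprecated address/struct calls used above */
err = MPI_Get_address(&gi, &displacements[0]);
err = MPI_Get_address(&ave, &displacements[1]);
err = MPI_Type_create_struct(2, blocklengths, displacements, types, &MPI_Pair);
err = MPI_Type_commit(&MPI_Pair);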


• The results obtained from running this code are in the file "found.data", which contains the following:

P 0, 62, -7.666667
P 2, 183, -7.666667
P 3, 271, 19.666666
P 3, 291, 19.666666
P 3, 296, 19.666666

• Notice that in this new parallel version the average values have changed due to the stencil calculation, but, as expected, the target locations are the same.

• If you want to confirm that these results are correct, run the parallel code shown above using the input file "b.data" from Chapter 2.