BİL 542 Parallel Computing
Message Passing, Chapter 2


Page 1: BİL 542 Parallel Computing

Page 2: Message Passing, Chapter 2

Page 3: Message-Passing Computing

– Programming a message-passing computer can be done by:

1. Using a special parallel programming language

2. Extending an existing language

3. Using a high-level language and providing a message-passing library

– Here, the third option is employed

Page 4: Message-Passing Programming Using User-Level Message-Passing Libraries

Two primary mechanisms needed:

1. A method of creating separate processes for execution on different computers

• Static process creation: the number of processes is fixed before execution begins

• Dynamic process creation: processes can be created during execution, at runtime

2. A method of sending and receiving messages

Page 5: Programming Models: 1. Multiple Program, Multiple Data (MPMD) Model

[Figure: MPMD model. Each source file is compiled to suit its processor, giving a separate executable for each of Processor 0 through Processor p - 1.]

Page 6: Programming Models: 2. Single Program, Multiple Data (SPMD) Model

[Figure: SPMD model, the basic MPI way. A single source file is compiled to suit each processor, producing the executables run on Processor 0 through Processor p - 1.]

The different processes are merged into one program. Control statements select different parts for each processor to execute. All executables are started together (static process creation), as in the sketch below.
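
For instance, a minimal SPMD program in C with MPI (an illustrative sketch, not code from these slides) uses the rank returned by MPI_Comm_rank to select the master or worker part:

/* Every process runs this same program; the rank decides what each one does. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id: 0 .. p-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes p */

    if (rank == 0)
        printf("master part runs here (%d processes in total)\n", size);
    else
        printf("worker part runs here (rank %d)\n", rank);

    MPI_Finalize();
    return 0;
}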

Page 7: Multiple Program, Multiple Data (MPMD) Model

[Figure: Process 1 calls spawn(), which starts execution of Process 2 at a later point in time.]

There are separate programs for each processor. One processor executes the master process; the other processes are started from within the master process with a spawn() call (dynamic process creation), as in the sketch below.
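
MPI itself provides dynamic process creation through MPI_Comm_spawn (an MPI-2 feature). The sketch below is illustrative; the worker executable name "worker" is a placeholder, not something defined in these slides.

/* Master program: starts 4 copies of a separate "worker" executable at runtime. */
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Comm workers;   /* intercommunicator connecting the master to the spawned processes */
    MPI_Init(&argc, &argv);

    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &workers, MPI_ERRCODES_IGNORE);

    MPI_Finalize();
    return 0;
}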

Page 8: Basic “Point-to-Point” Send and Receive Routines

[Figure: Process 1 executes send(&x, 2); Process 2 executes recv(&y, 1); the data in x moves to y.]

Generic syntax (actual formats will be explained later).

Passing a message between processes uses send() and recv() library calls, as shown above.
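
In MPI, the generic send()/recv() pair corresponds to MPI_Send and MPI_Recv. A minimal sketch (illustrative; assumes the program is started with at least three processes so that ranks 1 and 2 exist):

#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 42, y = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)        /* process 1 sends one int to process 2 */
        MPI_Send(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
    else if (rank == 2)   /* process 2 receives it from process 1 into y */
        MPI_Recv(&y, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}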

Page 9: Synchronous Message Passing

Routines that return only when the message transfer has completed.

Synchronous send routine
– Waits until the complete message can be accepted by the receiving process before sending the message.

Synchronous receive routine
– Waits until the message it is expecting arrives.
– No buffer storage is needed.

Synchronous routines perform two actions: They transfer data and they synchronize processes.
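
In MPI, a synchronous send is provided by MPI_Ssend. A minimal sketch (illustrative; assumes two processes):

#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 7, y = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        /* does not complete until the matching receive has started */
        MPI_Ssend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&y, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}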

Page 10: Synchronous send() and recv() Using a Three-Way Protocol

[Figure: Process 1 calls send() and issues a request to send, then suspends; when Process 2 calls recv(), an acknowledgment is returned, the message is transferred, and both processes continue.]

Page 11: Asynchronous Message Passing

• Routines that do not wait for actions to complete before returning. Usually require local storage for messages.

• In general, they do not synchronize processes but allow processes to move forward sooner. Must be used with care.

Page 12: Message Passing: Blocking and Non-Blocking

• Blocking: A blocking message occurs when one of the processors performs a send operation and does not continue (i.e. does not execute any following instruction) until it is sure that the message buffer can be reclaimed.

Page 13: Message Passing: Blocking and Non-Blocking (continued)

• Non-blocking - A non-blocking message is the opposite of a blocking message where a processor performs a send or a receive operation and immediately continues (to the next instruction in the code) without caring whether the message has been received or not. 
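
In MPI, non-blocking transfers use MPI_Isend/MPI_Irecv together with MPI_Wait. A minimal sketch (illustrative; assumes two processes):

#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 3, y = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Isend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);   /* returns immediately */
        /* ... other useful work can overlap with the transfer here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* x may only be reused after this */
    } else if (rank == 1) {
        MPI_Irecv(&y, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);   /* returns immediately */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* y is only valid after this */
    }

    MPI_Finalize();
    return 0;
}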

Page 14: How Message-Passing Routines Continue Before the Message Transfer Has Completed

[Figure: Process 1 calls send(), places the message in a message buffer, and continues; Process 2 later calls recv() and reads the message from the buffer.]

A message buffer is needed between the source and destination to hold the message, as in the figure above.

Page 15: Asynchronous (Blocking) Routines Changing to Synchronous Routines

• Once local actions have completed and the message is safely on its way, the sending process can continue with subsequent work.

• Buffers are only of finite length, and if all available buffer space is exhausted, the send routine will be held up.

• The send routine then waits until storage becomes available again.

Page 16: Message Tag

• Used to differentiate between different types of messages being sent.

• Message tag is carried within message.

• If special type matching is not required, a wild card message tag is used, so that the recv() will match with any send().

Page 17: Message Tag Example

[Figure: Process 1 executes send(&x, 2, 5); Process 2 executes recv(&y, 1, 5); the data in x moves to y.]

Waits for a message from process 1 with a tag of 5

To send a message, x, with message tag 5 from source process 1 to destination process 2 and assign it to y:
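
The same exchange written with real MPI calls might look as follows (an illustrative sketch; MPI_ANY_TAG would act as the wild-card tag, and at least three processes are assumed):

#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, x = 99, y = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)
        MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);   /* tag 5 */
    else if (rank == 2)
        /* matches only a message carrying tag 5 from process 1 */
        MPI_Recv(&y, 1, MPI_INT, 1, 5, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}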

Page 18: “Group” Message-Passing Routines

There are routines that send message(s) to a group of processes or receive message(s) from a group of processes.

They give higher efficiency than separate point-to-point routines.

Page 19: Scatter

[Figure: every process (Process 0 through Process p - 1) calls scatter(); the root's buf array is distributed so that each process receives one block of data.]

Sending each element of an array in the root process to a separate process: the contents of the i-th location of the array are sent to the i-th process.
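
A sketch of the MPI form, MPI_Scatter, sending one int to each process (illustrative, not the slides' exact code):

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, myval;
    int *buf = NULL;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                          /* only the root fills the full array */
        buf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            buf[i] = i * i;
    }

    /* process i receives buf[i] from the root into myval */
    MPI_Scatter(buf, 1, MPI_INT, &myval, 1, MPI_INT, 0, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}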

Page 20: Gather

[Figure: every process (Process 0 through Process p - 1) calls gather(); the data items from all processes are collected into the buf array on the root process.]

Having one process collect individual values from a set of processes.
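
A sketch of the MPI form, MPI_Gather, collecting one int from each process into the root (illustrative):

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, myval;
    int *buf = NULL;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    myval = rank * rank;                      /* each process's local value */
    if (rank == 0)
        buf = malloc(size * sizeof(int));     /* only the root needs the full array */

    /* after the call, buf[i] on the root holds myval from process i */
    MPI_Gather(&myval, 1, MPI_INT, buf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}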


Page 21: Reduce

[Figure: every process (Process 0 through Process p - 1) calls reduce(); their data items are combined with an operation such as + and the result is placed in buf on the root process.]

A gather operation combined with a specified arithmetic/logical operation.

For example, let’s say we have a list of numbers [1, 2, 3, 4, 5]. Reducing this list of numbers with the sum function would produce sum([1, 2, 3, 4, 5]) = 15. 
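
A sketch of the MPI form, MPI_Reduce, computing that kind of sum across processes (illustrative; with 5 processes contributing 1..5 the root prints 15):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, myval, total = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    myval = rank + 1;   /* e.g. ranks 0..4 contribute 1, 2, 3, 4, 5 */
    MPI_Reduce(&myval, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d\n", total);

    MPI_Finalize();
    return 0;
}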

Page 22: AllGather & AllReduce

• So far, we have covered MPI routines that perform many-to-one or one-to-many communication patterns, which simply means that many processes send to, or receive from, one process.

• Oftentimes it is useful to be able to send many elements to many processes (i.e. a many-to-many communication pattern).

• MPI_Allgather and MPI_Allreduce have this characteristic, as in the sketch below.
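
A combined sketch of both routines (illustrative, not from the slides): after MPI_Allgather every process holds all contributions, and after MPI_Allreduce every process holds the summed result, with no designated root.

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, myval, total;
    int *all;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    myval = rank + 1;
    all = malloc(size * sizeof(int));

    /* every process receives every process's value, in rank order */
    MPI_Allgather(&myval, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

    /* every process receives the sum of all values */
    MPI_Allreduce(&myval, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    free(all);
    MPI_Finalize();
    return 0;
}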

Page 23: Barrier

Barrier: a synchronization point

• One of the things to remember about collective communication is that it implies a synchronization point among processes. This means that all processes must reach a point in their code before they can all begin executing again.
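
An explicit synchronization point is also available as MPI_Barrier; a minimal sketch (illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("rank %d: before the barrier\n", rank);
    MPI_Barrier(MPI_COMM_WORLD);                   /* no process passes until all arrive */
    printf("rank %d: after the barrier\n", rank);

    MPI_Finalize();
    return 0;
}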