TRANSCRIPT
1
BİL 542 Parallel Computing
2
Message Passing
Chapter 2
2.3
Message-Passing Computing
– Programming a message-passing computer
1. Using a special parallel programming language
2. Extending an existing language
3. Using a high-level language and providing a message-passing library
– Here, the third option is employed
2.4
Message-Passing Programming using User-level Message-Passing Libraries
Two primary mechanisms needed:
1. A method of creating separate processes for execution on different computers
• Static process creation: the number of processes is fixed before execution
• Dynamic process creation: processes can be created at runtime
2. A method of sending and receiving messages
2.5
Programming Models: 1. Multiple program, multiple data (MPMD) model
[Figure: separate source files, each compiled to suit its processor; distinct executables run on Processor 0 … Processor p - 1.]
2.6
Programming Models: 2. Single Program, Multiple Data (SPMD) model

[Figure: a single source file compiled to suit each processor; identical executables run on Processor 0 … Processor p - 1.]
Basic MPI way
Different processes merged into one program. Control statements select different parts for each processor to execute. All executables started together - static process creation
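As an illustration (a minimal sketch, not part of the original slides), an SPMD program in C with MPI selects the part each process executes by testing its rank:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* all copies start together */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        if (rank == 0)
            printf("master part, process 0 of %d\n", size);  /* master code */
        else
            printf("worker part, process %d\n", rank);       /* worker code */
        MPI_Finalize();
        return 0;
    }

The sketches later in this chapter assume this same setup (rank obtained via MPI_Comm_rank) and omit the boilerplate.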
2.7
Multiple Program Multiple Data (MPMD) Model
[Figure: Process 1 calls spawn(); after a delay, process 2 starts execution.]
Separate programs for each processor. One processor executes master process. Other processes started from within master process - dynamic process creation.
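MPI-2 provides dynamic process creation through MPI_Comm_spawn. A minimal sketch; the executable name "worker" is a hypothetical example:

    MPI_Comm intercomm;
    /* Master program starts 4 copies of the separate program "worker"
       at runtime ("worker" is a hypothetical executable name). */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);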
2.8
Basic “point-to-point” Send and Receive Routines
Passing a message between processes using send() and recv() library calls (generic syntax; the actual formats will be explained later):

Process 1: send(&x, 2);
Process 2: recv(&y, 1);

[Figure: the data moves from x in process 1 to y in process 2.]
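In MPI form (a sketch assuming the SPMD setup above), the generic send(&x, 2) / recv(&y, 1) become:

    int x = 42, y;
    if (rank == 1)
        MPI_Send(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);  /* send x to process 2 */
    else if (rank == 2)
        MPI_Recv(&y, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                     /* receive into y from process 1 */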
2.9
Synchronous Message Passing
Routines that return only when the message transfer has completed.
Synchronous send routine – waits until the complete message can be accepted by the receiving process before sending the message.
Synchronous receive routine – waits until the message it is expecting arrives; no need for buffer storage.
Synchronous routines perform two actions: They transfer data and they synchronize processes.
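MPI's synchronous send is MPI_Ssend, which completes only once the matching receive has started. A sketch under the same assumptions as the earlier fragment:

    if (rank == 1)
        MPI_Ssend(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD); /* returns only after process 2's receive has begun */
    else if (rank == 2)
        MPI_Recv(&y, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);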
2.10
Synchronous send() and recv() using 3-way protocol
[Figure: Process 1 reaches send(), issues a request to send, and suspends; when Process 2 reaches recv() it returns an acknowledgment; the message is then transferred and both processes continue.]
2.11
Asynchronous Message Passing
• Routines that do not wait for actions to complete before returning. Usually require local storage for messages.
• In general, they do not synchronize processes but allow processes to move forward sooner. Must be used with care.
2.12
Message passing: Blocking and Non-Blocking
• Blocking: A blocking message occurs when one of the processors performs a send operation and does not continue (i.e. does not execute any following instruction) until it is sure that the message buffer can be reclaimed.
2.13
Message passing: Blocking and Non-Blocking
• Non-blocking – A non-blocking message is the opposite of a blocking message: a processor performs a send or a receive operation and immediately continues (to the next instruction in the code) without caring whether the message has been received or not.
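In MPI the non-blocking routines are MPI_Isend and MPI_Irecv, each returning an MPI_Request that is later completed with MPI_Wait (or tested with MPI_Test). A sketch, same assumptions as before:

    MPI_Request req;
    if (rank == 1) {
        MPI_Isend(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, &req); /* returns immediately */
        /* unrelated work can go here, but x must not be modified yet */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* after this, x may safely be reused */
    } else if (rank == 2) {
        MPI_Irecv(&y, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* after this, y holds the message */
    }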
2.14
How message-passing routines continue before message transfer completed
Message buffer needed between source and destination to hold the message:

[Figure: Process 1's send() deposits the message into a message buffer and the process continues; Process 2's recv() later reads the message from the buffer.]
2.15
Asynchronous (blocking) routines changing to synchronous routines
• Once local actions completed and message is safely on its way, sending process can continue with subsequent work.
• Buffers are only of finite length, and if all available buffer space is exhausted, the send routine will be held up.
• The send routine then waits until storage becomes available again.
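MPI's buffered send, MPI_Bsend, makes this buffer explicit: the programmer attaches the storage, and the send completes locally as long as that storage suffices. A sketch (requires <stdlib.h> for malloc; same x as before):

    int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
    void *buffer = malloc(bufsize);
    MPI_Buffer_attach(buffer, bufsize);              /* supply explicit buffer space */
    MPI_Bsend(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD); /* completes once x is copied into the buffer */
    MPI_Buffer_detach(&buffer, &bufsize);            /* waits until buffered messages are delivered */
    free(buffer);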
2.16
Message Tag
• Used to differentiate between different types of messages being sent.
• Message tag is carried within message.
• If special type matching is not required, a wild card message tag is used, so that the recv() will match with any send().
2.17
Message Tag Example
To send a message, x, with message tag 5 from source process 1 to destination process 2, and assign it to y:

Process 1: send(&x, 2, 5);
Process 2: recv(&y, 1, 5);   (waits for a message from process 1 with a tag of 5)

[Figure: the data moves from x in process 1 to y in process 2.]
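In MPI the tag is an explicit argument of MPI_Send/MPI_Recv, and the wild card is MPI_ANY_TAG. A sketch:

    if (rank == 1)
        MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);  /* tag 5 */
    else if (rank == 2)
        /* using MPI_ANY_TAG instead of 5 here would match a send with any tag */
        MPI_Recv(&y, 1, MPI_INT, 1, 5, MPI_COMM_WORLD, MPI_STATUS_IGNORE);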
2.18
“Group” message passing routines
Have routines that send message(s) to a group of processes or receive message(s) from a group of processes
Higher efficiency than separate point-to-point routines.
2.19
Scatter
Sending each element of an array in the root process to a separate process. Contents of the i-th location of the array are sent to the i-th process.

[Figure: the root calls scatter(); element i of buf on the root arrives as data on Process i, for Process 0 … Process p - 1.]
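The MPI form is MPI_Scatter. A sketch distributing one int per process (MAX_P is a hypothetical constant at least as large as the number of processes):

    int data[MAX_P];  /* send array, significant only on the root (process 0) */
    int myval;
    /* element i of data on the root arrives as myval on process i */
    MPI_Scatter(data, 1, MPI_INT, &myval, 1, MPI_INT, 0, MPI_COMM_WORLD);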
2.20
Gather
Having one process collect individual values from a set of processes.

[Figure: each process calls gather(); the data item from each of Process 0 … Process p - 1 is collected into buf on the root process.]
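The MPI form is MPI_Gather, the inverse of the scatter sketch above:

    int myval = rank;      /* each process contributes one value */
    int collected[MAX_P];  /* receive array, significant only on the root */
    /* myval from process i lands in collected[i] on the root (process 0) */
    MPI_Gather(&myval, 1, MPI_INT, collected, 1, MPI_INT, 0, MPI_COMM_WORLD);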
2.21
Reduce
Gather operation combined with a specified arithmetic/logical operation.

[Figure: each process calls reduce(); the data items from Process 0 … Process p - 1 are combined (here with +) into buf on the root process.]
For example, let’s say we have a list of numbers [1, 2, 3, 4, 5]. Reducing this list of numbers with the sum function would produce sum([1, 2, 3, 4, 5]) = 15.
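The MPI form is MPI_Reduce with an operation such as MPI_SUM. A sketch reproducing that example when run with 5 processes:

    int myval = rank + 1;  /* process i contributes i + 1, giving the values 1..5 */
    int sum;               /* result is significant only on the root */
    MPI_Reduce(&myval, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    /* on process 0: sum == 15 with 5 processes */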
2.22
AllGather & AllReduce
• So far, we have covered MPI routines that perform many-to-one or one-to-many communication patterns, which simply means that many processes send/receive to one process.
• Oftentimes it is useful to be able to send many elements to many processes (i.e. a many-to-many communication pattern).
• MPI_Allgather and MPI_Allreduce have this characteristic.
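A sketch of MPI_Allreduce, which works like MPI_Reduce except that every process receives the result (MPI_Allgather relates to MPI_Gather in the same way):

    int myval = rank + 1;
    int sum;
    /* every process, not just a root, ends up with the combined result */
    MPI_Allreduce(&myval, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);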
2.23
Barrier: synchronization point
• One of the things to remember about collective communication is that it implies a synchronization point among processes. This means that all processes must reach a point in their code before they can all begin executing again.
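MPI also exposes this synchronization point directly as MPI_Barrier: no process returns from the call until every process in the communicator has entered it.

    /* no process continues past this line until all processes reach it */
    MPI_Barrier(MPI_COMM_WORLD);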