
Parallelism Algorithm Design


Parallelism

Less fish …

More fish!

Parallelism means doing multiple things at the same time: you can get more work done in the same time.


The Jigsaw Puzzle Analogy

Serial Computing

Suppose you want to do a jigsaw puzzle that has, say, a thousand pieces.

We can imagine that it'll take you a certain amount of time. Let's say that you can put the puzzle together in an hour.

Shared Memory Parallelism

If Scott sits across the table from you, then he can work on his half of the puzzle and you can work on yours. Once in a while, you'll both reach into the pile of pieces at the same time (you'll contend for the same resource), which will cause a little bit of slowdown. And from time to time you'll have to work together (communicate) at the interface between his half and yours. The speedup will be nearly 2-to-1: y'all might take 35 minutes instead of an hour.

The More the Merrier?

Now let's put Paul and Charlie on the other two sides of the table. Each of you can work on a part of the puzzle, but there'll be a lot more contention for the shared resource (the pile of puzzle pieces) and a lot more communication at the interfaces. So y'all will get noticeably less than a 4-to-1 speedup, but you'll still have an improvement, maybe something like 3-to-1: the four of you can get it done in 20 minutes instead of an hour.

Diminishing Returns

If we now put Dave and Tom and Kate and Brandon on the corners of the table, there's going to be a whole lot of contention for the shared resource, and a lot of communication at the many interfaces. So the speedup y'all get will be much less than we'd like; you'll be lucky to get 5-to-1.

So we can see that adding more and more workers onto a shared resource is eventually going to have a diminishing return.


Distributed Parallelism

Now let’s try something a little different. Let’s set up two tables, and let’s put you at one of them and Scott at the other. Let’s put half of the puzzle pieces on your table and the other half of the pieces on Scott’s. Now y’all can work completely independently, without any contention for a shared resource. BUT, the cost per communication is MUCH higher (you have to scootch your tables together), and you need the ability to split up (decompose) the puzzle pieces reasonably evenly, which may be tricky to do for some puzzles.

More Distributed Processors

It's a lot easier to add more processors in distributed parallelism. But you always have to be aware of the need to decompose the problem and to communicate among the processors. Also, as you add more processors, it may be harder to load balance the amount of work that each processor gets.


Load Balancing

Load balancing means ensuring that everyone completes their workload at roughly the same time.

For example, if the jigsaw puzzle is half grass and half sky, then you can do the grass and Scott can do the sky, and then y’all only have to communicate at the horizon – and the amount of work that each of you does on your own is roughly equal. So you’ll get pretty good speedup.


Load Balancing

Load balancing can be easy, if the problem splits up into chunks of roughly equal size, with one chunk per processor. Or load balancing can be very hard.

(Figures: an example decomposition where load balancing is EASY and one where it is HARD.)

Task/Channel Model

Parallel computation = set of tasks

Task
◦ Program
◦ Local memory
◦ Collection of I/O ports

Tasks interact by sending messages through channels.

(Figure: tasks connected by channels.)
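A minimal sketch of the model (assuming an MPI implementation purely for illustration; the task/channel model itself is independent of any particular library): each rank is a "task" with its own local memory, and a matched send/receive pair acts as a one-way "channel". Compile with mpicc and run with at least two processes, e.g. mpirun -np 2.

/* Task/channel sketch: task 0 produces a value in its local memory and
 * sends it over a "channel" (a matched send/receive) to task 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                       /* producing task */
        int value = 42;                    /* lives in task 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {                /* consuming task */
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("task 1 received %d over its input channel\n", value);
    }

    MPI_Finalize();
    return 0;
}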

Foster's Design Methodology

Partitioning
◦ Dividing the problem into tasks

Communication
◦ Determine what needs to be communicated between the tasks over channels

Agglomeration
◦ Group or consolidate tasks to improve efficiency or simplify the programming solution

Mapping
◦ Assign tasks to the computer processors

(Figure: the problem passes through Partitioning, Communication, Agglomeration, and Mapping.)

Step 1: Partitioning
Divide computation & data into pieces.

Domain/Data Decomposition – data-centric approach
◦ Divide up the most frequently used data
◦ Associate the computations with the divided data

Functional/Task Decomposition – computation-centric approach
◦ Divide up the computation
◦ Associate the data with the divided computations

Primitive tasks: the pieces resulting from either decomposition
◦ The goal is to have as many of these as possible
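A small sketch of domain decomposition (the block formulas and the sizes below are common conventions chosen for illustration, not taken from the slides): n data items are split into p contiguous blocks, one block of primitive tasks per owner.

/* Block decomposition: which contiguous range of items each owner gets. */
#include <stdio.h>

int block_low(int id, int p, int n)  { return (id * n) / p; }
int block_high(int id, int p, int n) { return ((id + 1) * n) / p - 1; }

int main(void) {
    int n = 10, p = 4;                       /* 10 items, 4 owners (assumed) */
    for (int id = 0; id < p; id++)
        printf("owner %d gets items %d..%d\n",
               id, block_low(id, p, n), block_high(id, p, n));
    return 0;
}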

Task Decomposition

• Decompose a problem by the functions it performs

• Gardening analogy
  – Need to mow and weed
  – Two gardeners
    • One mows
    • One weeds
  – Need to synchronize a bit so we don't weed the spot in the yard that is currently being mowed

Data Decomposition

• Decompose a problem by the data worked on

• Gardening analogy
  – Need to mow and weed
  – Two gardeners
    • Each mows and weeds ½ the yard
  – Each gets its own part of the yard, so less synchronization between mowing/weeding
  – However
    • Gardeners can't be specialized
    • Contention for resources (single mower)

Scaling

• Task decomposition
  – Adding 8 more gardeners is only beneficial if:
    • there are 8 more tasks (raking, blowing, etc.)

• Data decomposition
  – Adding 8 more gardeners is only beneficial if:
    • there are enough mowers for everyone
    • the yard is big enough that the time it takes to get the mower out is worth it for the size mowed

Hybrid Approaches

• Can combine both approaches
  – Gardener 1 can mow/weed ½ the yard
  – Gardener 2 can mow/weed ½ the yard
  – Gardener 3 can rake/blow ½ the yard
  – Gardener 4 can rake/blow ½ the yard

Example Domain Decompositions

Example Functional Decomposition

Partitioning Checklist

Lots of tasks
◦ e.g., at least 10x more primitive tasks than processors in the target computer

Minimize redundant computations and data

Load balancing
◦ Primitive tasks roughly the same size

Scalable
◦ Number of tasks an increasing function of problem size

Step 2: Communication
Determine communication patterns between primitive tasks.

Local communication
◦ When tasks need data from a small number of other tasks
◦ A channel from the producing task to the consuming task is created

Global communication
◦ When a task needs data from many or all other tasks
◦ Channels for this type of communication are not created during this step
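A minimal sketch of the local-communication case (the MPI calls and the one-value-per-task setup are assumptions for illustration): each task exchanges a boundary value with its immediate neighbours, which is the same pattern the heated-rod example later relies on.

/* Local communication: swap one boundary value with the left and right
 * neighbours; MPI_PROC_NULL makes the exchange a no-op at the chain ends. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int id, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int left  = (id == 0)     ? MPI_PROC_NULL : id - 1;
    int right = (id == p - 1) ? MPI_PROC_NULL : id + 1;

    double my_edge = (double)id;            /* stand-in for a boundary value */
    double from_left = 0.0, from_right = 0.0;

    /* Swap with the right neighbour, then with the left neighbour. */
    MPI_Sendrecv(&my_edge, 1, MPI_DOUBLE, right, 0,
                 &from_right, 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&my_edge, 1, MPI_DOUBLE, left, 0,
                 &from_left, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("task %d received %.0f from the left and %.0f from the right\n",
           id, from_left, from_right);
    MPI_Finalize();
    return 0;
}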

Communication Checklist

Balanced
◦ Communication operations balanced among tasks

Small degree
◦ Each task communicates with only a small group of neighbors

Concurrency
◦ Tasks can perform communications concurrently
◦ Tasks can perform computations concurrently

Step 3: Agglomeration
Group tasks to improve efficiency or simplify programming.

Increase locality
◦ Remove communication by agglomerating tasks that communicate with one another
◦ Combine groups of sending & receiving tasks
◦ Send fewer, larger messages rather than many short messages, which incur more message latency

Maintain scalability of the parallel design
◦ Be careful not to agglomerate tasks so much that moving to a machine with more processors will not be possible

Reduce software engineering costs
◦ Leveraging existing sequential code can reduce the expense of engineering a parallel algorithm
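A small sketch of the "fewer, larger messages" point (the message size and ranks are illustrative assumptions): sending one block of K values pays the per-message latency once, while K single-value sends would pay it K times.

/* Agglomerated communication: one send of K values instead of K sends. */
#include <mpi.h>

#define K 1000   /* illustrative block size */

int main(int argc, char *argv[]) {
    int rank;
    double block[K] = {0};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Agglomerated: one message, one latency cost. */
        MPI_Send(block, K, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        /* Un-agglomerated alternative (avoid): K messages, K latency costs.
         * for (int i = 0; i < K; i++)
         *     MPI_Send(&block[i], 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
         */
    } else if (rank == 1) {
        MPI_Recv(block, K, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}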

Agglomeration Can Improve Performance

Eliminate communication between primitive tasks agglomerated into a consolidated task

Combine groups of sending and receiving tasks

Agglomeration Checklist

Locality of the parallel algorithm has increased

Tradeoff between agglomeration and code modification costs is reasonable

Agglomerated tasks have similar computational and communication costs

Number of tasks increases with problem size

Number of tasks suitable for likely target systems

Step 4: Mapping
Assigning tasks to processors.

Maximize processor utilization
◦ Ensure computation is evenly balanced across all processors

Minimize interprocess communication

Optimal Mapping

Finding the optimal mapping is NP-hard

Must rely on heuristics
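Two simple static heuristics, sketched below with illustrative sizes (common conventions, not prescribed by the slides): block mapping keeps neighbouring tasks on the same processor, which helps when neighbours communicate; cyclic mapping deals tasks out round-robin, which often balances load better when task costs vary.

/* Compare block and cyclic static mappings of t tasks onto p processors. */
#include <stdio.h>

int main(void) {
    int t = 12, p = 4;                          /* assumed sizes */
    for (int task = 0; task < t; task++) {
        int block_owner  = (task * p) / t;      /* contiguous chunks */
        int cyclic_owner = task % p;            /* round robin */
        printf("task %2d -> block: proc %d, cyclic: proc %d\n",
               task, block_owner, cyclic_owner);
    }
    return 0;
}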

Decision Tree for Parallel Algorithm Design
(Figure: decision tree for choosing a mapping strategy.)

Mapping Goals

Mappings based on one task per processor and on multiple tasks per processor have been considered

Both static and dynamic allocation of tasks to processors have been evaluated

If dynamic allocation of tasks to processors is chosen, the task allocator is not a bottleneck

If static allocation of tasks to processors is chosen, the ratio of tasks to processors is at least 10 to 1

Case Studies
◦ Boundary value problem
◦ Finding the maximum
◦ The n-body problem
◦ Adding data input

Boundary Value Problem

(Figure: a rod surrounded by insulation, with both ends in ice water.)

Rod Cools as Time Progresses

Finite Difference Approximation

Partitioning

One data item per grid point

Associate one primitive task with each grid point

Two-dimensional domain decomposition

Communication

Identify communication pattern between primitive tasks:
◦ Each interior primitive task has three incoming and three outgoing channels

Agglomeration and Mapping

Agglomeration

Execution Time Estimate

χ – time to update one element
n – number of elements
m – number of iterations
Sequential execution time: m n χ

p – number of processors
λ – message latency
Parallel execution time: m (⌈n/p⌉ χ + 2 λ)
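A rough worked example with illustrative numbers (the values are assumptions, not from the slides): take m = 100 iterations, n = 10,000 elements, p = 10 processors, χ = 1 µs per update and λ = 100 µs per message. The sequential time is m n χ = 100 × 10,000 × 1 µs = 1.0 s, while the parallel time is m (⌈n/p⌉ χ + 2 λ) = 100 × (1,000 µs + 200 µs) = 0.12 s, a speedup of about 8.3 rather than the ideal 10, because every iteration still pays for two neighbour messages.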

Finding the Maximum Error

Computed    0.15    0.16    0.16    0.19
Correct     0.15    0.16    0.17    0.18
Error (%)   0.00%   0.00%   6.25%   5.26%

Maximum error: 6.25%

Reduction

Given an associative operator ⊕, compute a0 ⊕ a1 ⊕ a2 ⊕ … ⊕ an-1

Examples
◦ Add
◦ Multiply
◦ And, Or
◦ Maximum, Minimum
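A minimal sketch of a sum reduction using MPI's built-in collective (MPI and the value each task contributes are assumptions for illustration): every rank supplies one value and rank 0 receives the global sum.

/* Sum reduction: each task contributes one value, root gets the total. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank + 1;          /* each task's contribution (illustrative) */
    int global = 0;
    MPI_Reduce(&local, &global, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %d\n", global);  /* 1 + 2 + ... + p */
    MPI_Finalize();
    return 0;
}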

Parallel Reduction Evolution
(Figure sequence: three slides showing how the parallel reduction tree evolves.)

Binomial Trees
◦ Subgraph of hypercube

Finding Global Sum
(Figure sequence: 16 values, one per task, summed pairwise in four steps.)

Start:
 4  2  0  7
-3  5 -6 -3
 8  1  2  3
-4  4  6 -1

After step 1 (8 partial sums): 1 7 -6 4 and 4 5 8 2
After step 2 (4 partial sums): 8 -2 and 9 10
After step 3 (2 partial sums): 17 8
After step 4 (global sum): 25

Binomial Tree
(Figure: the global sum arranged as a binomial tree.)

Agglomeration
(Figures: agglomerated tasks, each node labelled "sum".)
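A hand-written sketch of the binomial-tree sum traced above (assuming MPI and a power-of-two number of tasks, purely for illustration): in each round the upper half of the surviving ranks sends its partial sum to a partner in the lower half, so the answer reaches rank 0 after log2(p) rounds.

/* Binomial-tree sum: log2(p) rounds of pairwise sends, result at rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int sum = rank + 1;                       /* illustrative local value */
    for (int half = p / 2; half >= 1; half /= 2) {
        if (rank >= half && rank < 2 * half) {
            /* Upper half of the survivors: send my partial sum and drop out. */
            MPI_Send(&sum, 1, MPI_INT, rank - half, 0, MPI_COMM_WORLD);
        } else if (rank < half) {
            /* Lower half: receive a partner's partial sum and keep going. */
            int other;
            MPI_Recv(&other, 1, MPI_INT, rank + half, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            sum += other;
        }
    }
    if (rank == 0)
        printf("global sum = %d\n", sum);     /* 1 + 2 + ... + p */
    MPI_Finalize();
    return 0;
}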

The n-body Problem

Partitioning

Domain partitioning

Assume one task per particle

Task has particle's position and velocity vector

Iteration
◦ Get positions of all other particles
◦ Compute new position, velocity
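A minimal sketch of the "get positions of all other particles" step using an all-gather (one particle per task and the 3-component coordinate layout are assumptions for illustration):

/* All-gather of particle positions: every task ends up with all p positions
 * and can then compute the forces on (and new state of) its own particle. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    double my_pos[3] = { (double)rank, 0.0, 0.0 };    /* this task's particle */
    double *all_pos = malloc(3 * p * sizeof(double)); /* everyone's particles */

    MPI_Allgather(my_pos, 3, MPI_DOUBLE, all_pos, 3, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0)
        printf("rank 0 sees %d positions, last x = %.1f\n", p, all_pos[3 * (p - 1)]);

    free(all_pos);
    MPI_Finalize();
    return 0;
}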

Gather

All-gather

Complete Graph for All-gather

Hypercube for All-gather

Communication Time

Hypercube all-gather (log p steps, doubling the message size each step):
  sum over i = 1 … log p of ( λ + 2^(i-1) n / (β p) )  =  λ log p + (p − 1) n / (β p)

Complete-graph all-gather (p − 1 exchanges of n/p values each):
  (p − 1) ( λ + n / (β p) )  =  (p − 1) λ + (p − 1) n / (β p)

(λ – message latency, β – bandwidth, n – number of values, p – number of processors)
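As a rough comparison under assumed values (p = 8 and λ = 100 µs): the hypercube all-gather pays λ log p = 300 µs of latency, while the complete-graph version pays (p − 1) λ = 700 µs. The (p − 1) n / (β p) data-transfer term is the same in both, so the hypercube mainly wins when messages are small and latency dominates.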

Adding Data Input

Scatter

Scatter in log p Steps
(Figure: scatter among 4 processors in log p = 2 steps; the root starts with 12345678, sends 5678 away in step 1, and after step 2 the four processors hold 12, 34, 56 and 78.)
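A minimal sketch of scattering the input (MPI_Scatter, with the block size an illustrative assumption): only the root holds the whole array, and each task receives its own block.

/* Scatter: the root distributes one block of the input array to each task. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define PER_RANK 2   /* block size per task (illustrative) */

int main(int argc, char *argv[]) {
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int *whole = NULL;
    if (rank == 0) {                          /* only the root holds the full input */
        whole = malloc(p * PER_RANK * sizeof(int));
        for (int i = 0; i < p * PER_RANK; i++)
            whole[i] = i + 1;
    }

    int mine[PER_RANK];
    MPI_Scatter(whole, PER_RANK, MPI_INT, mine, PER_RANK, MPI_INT,
                0, MPI_COMM_WORLD);
    printf("task %d got %d and %d\n", rank, mine[0], mine[1]);

    if (rank == 0) free(whole);
    MPI_Finalize();
    return 0;
}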

Summary: Task/Channel Model

Parallel computation
◦ Set of tasks
◦ Interactions through channels

Good designs
◦ Maximize local computations
◦ Minimize communications
◦ Scale up

Summary: Design Steps

Partition computation
Agglomerate tasks
Map tasks to processors

Goals
◦ Maximize processor utilization
◦ Minimize inter-processor communication

Summary: Fundamental Algorithms

Reduction
Gather and scatter
All-gather