Amdahl's Law - Processor Performance




Imdad Hussain

AMDAHL'S LAW

Amdahl's Law is a law governing the speedup of using parallel processors on a

problem, versus using only one serial processor. Before we examine Amdahl's Law,

we should gain a better understanding of what is meant by speedup.

Speedup:

The speed of a program is measured by the time it takes the program to execute; this could be measured in any unit of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:

      T(1)
S = ---------
      T(j)

Where T(j) is the time it takes to execute the program when using j processors. Efficiency is the speedup divided by the number of processors used. This is an important factor to consider: given the cost of multiprocessor supercomputers, a company wants to get the most value for its money.
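As a sketch of these two definitions, here is a small Python example (the timing values are made up purely for illustration):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T(1) / T(j): serial time divided by parallel time."""
    return t_serial / t_parallel

def efficiency(s, j):
    """Efficiency: speedup divided by the number of processors used."""
    return s / j

# Hypothetical timings: 100 s on one processor, 30 s on 4 processors.
s = speedup(100.0, 30.0)
e = efficiency(s, 4)
print(s)  # speedup of roughly 3.33
print(e)  # efficiency of roughly 0.83, i.e. 83% of the ideal
```

Note that the speedup (about 3.33) is less than the processor count (4), so the efficiency is below 1; this gap is exactly what Amdahl's Law explains.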

To explore speedup further, let us do a bit of analysis. If there are N workers working on a project, we may assume (ideally) that they can do the job in 1/N of the time one worker would take alone. Now, if we assume the strictly serial part of the program is performed in B*T(1) time, where B is the serial fraction of the program, then the strictly parallel part is performed in ((1-B)*T(1)) / N time. The total parallel time is therefore T(N) = B*T(1) + ((1-B)*T(1)) / N; substituting this into the definition of speedup and simplifying gives:

            N
S = -----------------
     (B*N) + (1-B)

This formula is known as Amdahl's Law. The following is a quote from Gene Amdahl in 1967:


For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit co-operative solution... The nature of this overhead (in parallelism) appears to be sequential so that it is unlikely to be amenable to parallel processing techniques. Overhead alone would then place an upper limit on throughput of five to seven times the sequential processing rate, even if the housekeeping were done in a separate processor... At any point in time it is difficult to foresee how the previous bottlenecks in a sequential computer will be effectively overcome.
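The formula above can be sketched and checked numerically in Python; B is the serial fraction and N the number of processors, and the two equivalent forms from the derivation should agree:

```python
def amdahl_speedup(B, N):
    """Amdahl's Law in closed form: S = N / (B*N + (1 - B))."""
    return N / (B * N + (1.0 - B))

def speedup_from_times(B, N, t1=1.0):
    """The same speedup computed directly from the derivation:
    T(N) = B*T(1) + (1-B)*T(1)/N, and S = T(1) / T(N)."""
    t_n = B * t1 + (1.0 - B) * t1 / N
    return t1 / t_n

# With a 10% serial fraction and 8 processors:
print(amdahl_speedup(0.10, 8))      # about 4.71, well short of 8
print(speedup_from_times(0.10, 8))  # same value, by the algebra above
```

Even a modest 10% serial fraction costs us nearly half of the ideal eightfold speedup, which is the point of Amdahl's quote.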

Let us investigate speedup curves:

Now that we have defined speedup and efficiency, let us use this information to make sense of Amdahl's Law. We will refer to a speedup curve to do this. A speedup curve is simply a graph with the number of processors on the X-axis plotted against the speedup on the Y-axis. The best speedup we could hope for, S = N, would yield a 45-degree line: if there were ten processors, we would realize a tenfold speedup. A speedup below 1, by contrast, would mean that the program ran faster on a single processor than in parallel, which would make it a poor candidate for parallel computing. When B is held constant (recall that B is the fraction of the program that is strictly serial), Amdahl's Law yields a speedup curve that rises at first but then flattens out, remaining below the line S = N and approaching the limit 1/B as N grows. This law shows that it is indeed the algorithm, and not the number of processors, which limits the speedup. Also note that as the curve flattens out, efficiency is drastically reduced.
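The flattening of the curve can be seen by tabulating speedup and efficiency as N grows (a sketch; B = 0.10 is an arbitrary choice for illustration). With this B the speedup can never exceed 1/B = 10, no matter how many processors are added:

```python
def amdahl_speedup(B, N):
    """Amdahl's Law: S = N / (B*N + (1 - B))."""
    return N / (B * N + (1.0 - B))

B = 0.10  # serial fraction (arbitrary, for illustration)
for N in (1, 2, 8, 64, 1024):
    s = amdahl_speedup(B, N)
    # processors, speedup, efficiency = speedup / processors
    print(N, round(s, 2), round(s / N, 3))
```

Running this shows the speedup creeping toward 10 while the efficiency collapses toward zero: the last few processors contribute almost nothing, which is exactly the drastic loss of efficiency described above.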