
Page 1: Parallel Architectures - unipv · Parallel Architectures • Introduction • Multithreading • Multiprocessors (shared memory architecture) • Multicomputers (message passing architecture)

Parallel Architectures

•  Introduction
•  Multithreading
•  Multiprocessors (shared memory architecture)
•  Multicomputers (message passing architecture)

Page 2:

Explicit Parallelism

•  So far, we have considered various ways to exploit the implicit parallelism embedded in the instructions that make up a program.
•  The programmer conceives a program to solve a given problem as a sequence of instructions that will be executed by the CPU one after another.
•  The programmer ignores (and is entitled to do so) how the compiler will manage these instructions (static ILP) and how the CPU will actually execute them (dynamic ILP).

Page 3:

Explicit Parallelism

•  What can we do if the program runs too slowly? Well, we can surely use a "quicker" CPU, and possibly code the program in a better way. And once all this has been done?
•  The only way left is using an architecture that embeds more computing units, provided that at least some fraction of the problem can be parallelised.
•  If the programmer is aware of the parallel architecture, he can choose an algorithm that can exploit this new capability: parallel execution.

Page 4:

Limits in Explicit Parallelism

•  Actually, many problems lend themselves to a solution based on a set of programs running in parallel.
•  However (but for very particular instances), the increase in performance (speed-up) one can obtain by using multiple CPUs to run many programs in parallel is less than linear in the number of available CPUs (or cores).
•  Indeed, programs running in parallel must synchronize, to share the data produced by each of them; alternatively, a preliminary common phase is required before the actual parallel computations can start: in either case, there is always a fraction of computation that cannot be carried out in parallel.

Page 5:

Limits in Explicit Parallelism

•  Let us consider the following compilation command:
–  gcc main.c function1.c function2.c –o output
•  Assume we compile on a mono-processor architecture, with the following figures:
–  3 seconds to compile main.c
–  2 seconds to compile function1.c
–  1 second to compile function2.c
–  1 second to link the object modules (main.o, function1.o, function2.o)
•  giving a total of 7 seconds.

Page 6:

Limits in Explicit Parallelism

•  With three CPUs, the three sources can be compiled in parallel to produce the corresponding object modules.
•  Once the objects have been generated, they can be linked using one of the three processors.
•  However, linking can start only after all three object codes have been generated, that is, only after 3 seconds.
•  The total time to produce output is 3+1 = 4 seconds, against 7 seconds on the mono-processor: a speed-up of 7/4 = 1.75, with 3 processors!
•  Even if all compilations required 1 second, the speed-up would still be just 4/2 = 2.
•  What happens if the RAM is not shared among the CPUs?
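The arithmetic of this example can be checked with a short sketch (the times are the figures given above, hard-coded rather than measured):

```python
# Speed-up of the parallel compilation example: three source files
# compiled in parallel on three CPUs, then one sequential link step.
compile_times = [3, 2, 1]      # main.c, function1.c, function2.c (seconds)
link_time = 1                  # sequential: must wait for all objects

serial_time = sum(compile_times) + link_time    # one CPU: 3+2+1+1 = 7
parallel_time = max(compile_times) + link_time  # three CPUs: 3+1 = 4

print(serial_time, parallel_time, serial_time / parallel_time)
# → 7 4 1.75
```

The `max()` captures the key point: the parallel phase is only as fast as its slowest member, and the link step stays sequential.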

Page 7:

Limits in Explicit Parallelism

•  Computational tasks exhibit a different "outcome" (formally, a different speed-up) when one tries to solve them by distributing the work among multiple CPUs (Tanenbaum, Fig. 8.47):
–  a typical operational research problem
–  a checkerboard game
–  a problem with a maximum speed-up of 5, whatever the number of CPUs

Page 8:

Limits in Explicit Parallelism

•  Every program consists (also) of a set of sequential operations for which no parallel execution is possible. Formally:
•  Let P be a program run in time T on a single processor, let f be the fraction of T due to inherently sequential code, and (1-f) the fraction due to parallelizable code (Tanenbaum, Fig. 8.48):

Page 9:

Limits in Explicit Parallelism

•  The execution time due to the parallelizable fraction changes from (1-f)T to (1-f)T/n if n processors are available.
•  The speed-up is obtained as the ratio of the execution time on a single CPU over the execution time on n CPUs:

speed-up = [fT + (1-f)T] / [fT + (1-f)T/n]
         = n[fT + (1-f)T] / [nfT + (1-f)T]
         = nT / T(1 + nf - f)
         = n / [1 + (n-1)f]        (Amdahl's Law)

Page 10:

Limits in Explicit Parallelism

•  Amdahl's Law states that a perfect speed-up, equal to the number of available CPUs, is only possible if f = 0.
•  For instance, which fraction of the original computation can be sequential, if we want a speed-up of 80 with 100 CPUs?
•  80 = 100/(1 + 99f);  f = 20/(80 × 99) ≈ 0.002525
•  That is, only about 0.25% of the original computation time can be due to sequential code.
•  Is Amdahl's Law valid in single-core architectures?
•  Can we think of a "superlinear" speed-up, in actual cases?
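Amdahl's Law is easy to check numerically; a minimal sketch (the function name is ours, not from the slides):

```python
# Amdahl's Law: speed-up on n CPUs when a fraction f of the
# single-CPU execution time T is inherently sequential.
def amdahl_speedup(n, f):
    return n / (1 + (n - 1) * f)

# the slide's example: f = 20/(80*99) ≈ 0.002525 gives speed-up ≈ 80
print(amdahl_speedup(100, 20 / (80 * 99)))
# perfect speed-up only when f = 0
print(amdahl_speedup(100, 0))
```

Note how quickly the curve saturates: as n grows, the speed-up approaches 1/f no matter how many CPUs are added.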

Page 11:

Limits in Explicit Parallelism

•  Actually, the main problems with Explicit Parallelism are two: 1) software and 2) hardware.
1.  The limited amount of parallelism available in programs, or at least the amount that can be made explicit and thus exploited.
•  Parallel algorithm design is still today a very active research area, precisely because of the potential gains it offers.
2.  The high cost of processor/memory communication, which can raise considerably the cost of a cache miss, or of the synchronization between two processes running on different CPUs.
•  These costs depend both on the architecture and on the number of CPUs, and are generally much higher than in a uniprocessor system.

Page 12:

Explicit Parallelism

•  Of course, achieving even a sub-linear speed-up (e.g. n with 2n processors) is very welcome in many applications, since CPU cost is decreasing.
•  The increase in computational power is not the only driving force for designing multi-CPU architectures:
a)  Having multiple processors raises system reliability: if one of them fails, the others can step in and carry out its work.
b)  Services that are inherently offered on a geographical scale must be realized with a distributed architecture. If the system were centralized in a single node, concurrent accesses to this node would become a bottleneck and would slow down the service offered, ultimately making it unavailable.

Page 13:

Explicit Parallelism

•  At a coarse scale, we can list three types of explicit Parallel Architectures (Tanenbaum, Fig. 8.1):
1.  Multi-threading (a)
2.  Shared-memory systems (b, c)
3.  Distributed-memory systems (d, e)

Page 14:

Multithreading

•  Introduction
•  Fine-grained Multithreading
•  Coarse-grained Multithreading
•  Simultaneous Multithreading
•  Multithreading in Intel processors

Page 15:

Multi-Threading

•  A multithreaded CPU is not a parallel architecture, strictly speaking: multithreading is obtained with a single CPU. But it allows a programmer to design and develop applications as a set of programs that can virtually execute in parallel, namely threads.
•  If these programs run on a "multithreaded" CPU, they will best exploit its architectural features.
•  What about their execution on a CPU that does not support multithreading?

Page 16:

Multi-Threading

•  Multithreading addresses a basic problem of any pipelined CPU: a cache miss causes a "long" wait, necessary to fetch the missing information from RAM. If no other independent instruction is available to be executed, the pipeline stalls.
•  Multithreading is a solution to avoid wasting clock cycles while the missing data is fetched: the CPU manages several peer threads concurrently, so if a thread gets blocked, the CPU can execute instructions of another thread, thus keeping the functional units busy.
•  So, why cannot threads from different tasks be issued as well?

Page 17:

Multi-Threading

•  To realize multithreading, the CPU must manage the computation state of each single thread.
•  Each thread must have a private Program Counter and a set of private registers, separate from those of other threads.
•  Furthermore, a thread switch must be much more efficient than a process switch, which usually requires hundreds or thousands of clock cycles (a process switch is mostly a software procedure).
•  There are two basic techniques for multithreading:
1.  fine-grained multithreading
2.  coarse-grained multithreading
•  NB: in the following, we initially cover "single-issue" processors.
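To make the per-thread state concrete, here is a toy sketch (illustrative names, not any real CPU's layout):

```python
# Each hardware thread owns a private PC and a private register set;
# a thread switch just selects another context, with no save/restore,
# which is why it can be far cheaper than a software process switch.
class HardwareContext:
    def __init__(self, n_regs=32):
        self.pc = 0
        self.regs = [0] * n_regs    # private register set

contexts = [HardwareContext() for _ in range(4)]   # 4 hardware threads
current = 0

def thread_switch(current, n_contexts):
    # an index change, not a memory copy of the whole register file
    return (current + 1) % n_contexts

current = thread_switch(current, len(contexts))
print(current)   # → 1
```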

Page 18:

Fine-grained Multi-Threading

1.  Fine-grained Multithreading: switching among threads happens at each instruction, independently of whether the thread's instruction has caused a cache miss.
•  Instruction "scheduling" among threads obeys a round-robin policy, and the CPU must carry out the switch with no overhead, since at this granularity no overhead can be tolerated.
•  If there is a sufficient number of threads, it is likely that at least one is active (not stalled), and the CPU can be kept running.
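The round-robin policy can be sketched as follows (a toy model of issue order only, not of any real pipeline; names are ours):

```python
# Toy model of fine-grained multithreading: one instruction issued per
# cycle, rotating round-robin over the threads and skipping any thread
# that is currently stalled (e.g. on a cache miss).
def fine_grained_schedule(threads, stalled, cycles):
    """threads: list of thread names; stalled: set of blocked names;
    returns the per-cycle issue trace (None = pipeline bubble)."""
    trace = []
    i = 0
    for _ in range(cycles):
        for _ in range(len(threads)):
            t = threads[i % len(threads)]
            i += 1
            if t not in stalled:
                trace.append(t)
                break
        else:
            trace.append(None)   # every thread stalled: a wasted cycle
    return trace

print(fine_grained_schedule(["A", "B", "C"], set(), 6))
# → ['A', 'B', 'C', 'A', 'B', 'C']
```

With no stalls each thread gets one slot in three, which is exactly the slowdown mentioned later: a thread advances only every len(threads) cycles even when it could proceed.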

Page 19:

Fine-grained Multi-Threading

•  (a)-(c) three threads and associated stalls (empty slots). (d) Fine-grained multithreading. Each slot is a clock cycle, and we assume for simplicity that each instruction completes in one clock cycle, unless a stall happens. (Tanenbaum, Fig. 8.7)
•  In this example, 3 threads keep the CPU running, but what if the A2 stall lasts 3 or more clock cycles?

Page 20:

Fine-grained Multi-Threading

•  CPU stalls can be due to a cache miss, but also to a true data dependence, or to a branch: dynamic ILP techniques do not always guarantee that a pipeline stall is avoided.
•  With fine-grained multithreading in a pipelined architecture, if:
–  the pipeline has k stages,
–  there are at least k threads to be executed,
–  and the CPU can execute a thread switch at each clock cycle,
•  then there can never be more than a single instruction per thread in the pipeline at any instant, so there can be no hazards due to dependencies, and the pipeline never stalls (… another assumption is required …).

Page 21:

Fine-grained Multi-Threading

•  Fine-grained multithreading in a CPU with a 5-stage pipeline: there are never two instructions of the same thread concurrently active in the pipeline. If instructions can be executed out of order, then it is possible to keep the CPU fully busy even in case of a cache miss.
•  5 threads in execution: A1 A2 A3 A4 A5 A6 …, B1 B2 B3 …, C1 C2 C3 …, D1 D2 D3 …, E1 E2 E3 …
•  Pipeline contents over three consecutive clock cycles (one thread per stage):

clock n:    IF: E1   ID: D1   EX: C1   MEM: B1   WB: A1
clock n+1:  IF: A2   ID: E1   EX: D1   MEM: C1   WB: B1
clock n+2:  IF: B2   ID: A2   EX: E1   MEM: D1   WB: C1

Page 22:

Fine-grained Multi-Threading

•  Besides requiring an efficient context switch among threads, fine-grained scheduling at each instruction slows down a thread even when it could go on, since it is not the one causing a stall.
•  Furthermore, there might be fewer threads than stages in the pipeline (actually, this is the usual case), so keeping the CPU busy is no easy matter.
•  Taking these problems into account, a different approach is followed in coarse-grained multithreading.

Page 23:

Coarse-grained Multi-Threading

2.  Coarse-grained Multithreading: a switch only happens when the thread in execution causes a stall, thus wasting a clock cycle.
•  At this point, a switch is made to another thread. When this thread in turn causes a stall, a third thread is scheduled (or possibly the first one is re-scheduled), and so on.
•  This approach potentially wastes more clock cycles than the fine-grained one, because a clock cycle is lost every time a stall triggers the switch.
•  But if there are few active threads (even just two), they can be enough to keep the CPU busy.
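For contrast with the fine-grained sketch, the coarse-grained policy can be modelled as follows (a toy issue-order model, with illustrative names; stall cycles are given as input rather than simulated):

```python
# Toy model of coarse-grained multithreading: keep issuing from the
# current thread and switch only on a stall; the stall cycle itself
# is wasted (a bubble enters the pipeline).
def coarse_grained_schedule(threads, stall_cycles, cycles):
    """threads: thread names; stall_cycles: set of cycle numbers at
    which the running thread stalls; returns the per-cycle trace."""
    trace = []
    cur = 0
    for c in range(cycles):
        if c in stall_cycles:
            trace.append(None)              # wasted cycle on the stall
            cur = (cur + 1) % len(threads)  # then switch threads
        else:
            trace.append(threads[cur])
    return trace

print(coarse_grained_schedule(["A", "B"], {2, 5}, 8))
# → ['A', 'A', None, 'B', 'B', None, 'A', 'A']
```

Between stalls, each thread runs at full speed, which is why two threads can suffice when stalls are rare.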

Page 24:

Coarse vs Fine-grained Multi-Threading

•  (a)-(c) three threads with associated stalls (empty slots). (d) Fine-grained multithreading. (e) Coarse-grained multithreading (Tanenbaum, Fig. 8.7).
•  Any error in this schedule?

Page 25:

Coarse vs Fine-grained Multi-Threading

•  In the preceding drawing, fine-grained multithreading seems to work better, but this is not always the case.
•  Specifically, a switch among threads cannot be carried out without some waste of clock cycles.
•  So, if the instructions of the threads do not cause stalls frequently, coarse-grained scheduling can be more advantageous than fine-grained scheduling, where the context-switch overhead is paid at each clock cycle (this overhead is very small, but never null).

Page 26:

Coarse vs Fine-grained Multi-Threading

•  Are coarse- and fine-grained multithreading similar to concepts discussed in a standard Operating Systems course?

Page 27:

Medium-grained Multi-Threading

3.  Medium-grained multithreading: an intermediate approach between fine- and coarse-grained multithreading consists of switching among threads only when the running one is about to issue an instruction that might cause a long-lasting stall, such as a load (requesting non-cached data), or a branch.
•  The instruction is issued, but the processor carries out the switch to another thread. With this approach, one spares even the small, one-cycle waste due to the stall caused by the executing load (unavoidable in coarse-grained multithreading).

Page 28:

Multi-Threading

•  How can the pipeline know which thread an instruction belongs to?
•  In fine-grained MT, the only way is to associate each instruction with a thread identifier, e.g. the unique ID attached to the thread within the thread set it belongs to.
•  In coarse-grained MT, besides the solution above, the pipeline can be emptied at each thread switch: in this way, the pipeline only contains instructions from a single thread.
•  The latter option is affordable only if switches happen at intervals much longer than the time required to empty the pipeline.

Page 29:

Multi-Threading

•  Finally, all instructions from the executing threads should be (as much as possible) in the instruction cache; otherwise, each context switch causes a cache miss, and all advantages of multithreading are lost.

Page 30:

Simultaneous Multi-Threading and Multiple Issue

•  Modern superscalar, multiple-issue, dynamically scheduled pipeline architectures allow exploiting both ILP (instruction-level) and TLP (thread-level) parallelism.
•  ILP + TLP = Simultaneous Multi-Threading (SMT)
•  SMT is convenient since modern multiple-issue CPUs have a number of functional units that cannot be kept busy with instructions from a single thread.
•  By applying register renaming and dynamic scheduling, instructions belonging to different threads can be executed concurrently.

Page 31:

Simultaneous Multi-Threading

•  In SMT, multiple instructions are issued at each clock cycle, possibly belonging to different threads; this increases the utilization of the various CPU resources (Hennessy-Patterson, Fig. 6.44: each slot/colour pair represents a single instruction in a thread).
•  [Figure: issue slots per clock cycle for 1: Superscalar, 2: Coarse MT, 3: Fine MT, 4: SMT]

Page 32:

Simultaneous Multi-Threading

•  In superscalar CPUs with no multithreading, multiple issue can be useless if there is not enough ILP in each thread, and a long-lasting stall (e.g. an L3 cache miss) freezes the whole processor.
•  In coarse-grained MT, long-lasting stalls are hidden by thread switching, but a poor ILP level in each thread limits CPU resource exploitation (e.g., not all available issue slots can effectively be used).
•  Even in fine-grained MT, a poor ILP level in each thread limits CPU resource exploitation.
•  SMT: instructions belonging to different threads are (almost certainly) independent, and by issuing them concurrently, CPU resource utilization rises.

Page 33:

Simultaneous Multi-Threading

•  Even with SMT, issuing the maximum number of instructions per clock cycle is not always guaranteed, because of the limited number of available functional units and reservation stations, the I-cache's capability to feed the threads with instructions, and the sheer number of threads.
•  Clearly, SMT is viable only if there is a wealth of registers available for renaming.
•  Furthermore, in a CPU supporting speculation, there must be a ROB (at least logically) distinct for each thread, so that retirement (instruction commit) is carried out independently by each thread.

Page 34:

Simultaneous Multi-Threading

•  Realizing a processor that fully exploits SMT is definitely a complex task; is it worth doing?
•  A simple simulation: a superscalar multithreaded CPU with the following features:
–  a 9-stage pipeline
–  4 general-purpose floating-point units
–  2 ALUs
–  up to 4 loads or stores per cycle
–  100 integer + 100 FP rename registers
–  up to 12 commits per cycle
–  128k + 128k L1 caches
–  16MB L2 cache
–  1k BTB
–  8 contexts for SMT (8 PCs, 8 sets of architectural registers)
–  up to 8 instructions issued per clock cycle, from 2 different contexts

Page 35:

Simultaneous Multi-Threading

•  This hypothetical processor has slightly more resources than a modern real processor; notably, it handles up to 8 concurrent threads. Would SMT really be beneficial? Look at the figures from benchmarks of concurrent applications: number of retired instructions per clock cycle (Hennessy-Patterson, Fig. 6.46).

Page 36:

Simultaneous Multi-Threading

•  An intriguing question: with multithreading, one usually refers to a set of "peer threads" whose instructions are concurrently executed in a multithreaded CPU.
•  What about the concurrent execution of instructions from different processes?
•  Would some specific additional resource be necessary?

Page 37:

Intel Multi-Threading

•  Multithreading was first introduced by Intel in the Xeon processor in 2002, and later in the 3.06 GHz Pentium 4, under the code name hyperthreading. The name is attractive; actually, hyperthreading supports only two threads in SMT mode.
•  According to Intel, designers had speculated that multithreading was the simplest way to increase performance: an increase of 5% in CPU area would allow running a second thread, thus effectively using CPU resources otherwise wasted.
•  Intel benchmarks suggested an increase in CPU performance of 25%–30%.

Page 38:

Intel Multi-Threading

•  To the Operating System, a multithreaded processor indeed appears as a double processor, with two CPUs sharing caches and RAM: if two applications can run independently and share the same address space, they can be executed in parallel as two threads.
•  Movie-editing code can apply different filters to each frame. Such code can be structured as two threads that process odd/even frames and execute in parallel.
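The odd/even-frame idea can be sketched as follows (frame data and the filter are placeholders, not any real video API):

```python
# Sketch of the movie-editing example: two worker threads process
# odd- and even-numbered frames of a shared frame list in parallel.
import threading

frames = list(range(8))          # stand-in for decoded frames
results = [None] * len(frames)

def apply_filter(frame):
    return frame * 10            # placeholder for a real per-frame filter

def worker(parity):
    for i in range(parity, len(frames), 2):   # 0,2,4,... or 1,3,5,...
        results[i] = apply_filter(frames[i])

threads = [threading.Thread(target=worker, args=(p,)) for p in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)   # → [0, 10, 20, 30, 40, 50, 60, 70]
```

In CPython the two threads mainly illustrate the structure (the interpreter serializes bytecode execution), but on a hyperthreaded CPU two such threads of a compiled application can genuinely run concurrently.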

Page 39:

Intel Multi-Threading

SMT implementation details (Intel slide):

•  Replicated — duplicate state for SMT:
–  register state
–  renamed RSB
–  large-page ITLB
•  Partitioned — statically allocated between threads:
–  key buffers: Load, Store, Reorder
–  small-page ITLB
•  Competitively shared — depends on a thread's dynamic behavior:
–  reservation station
–  caches
–  data TLBs, 2nd-level TLB
•  Unaware:
–  execution units
•  Applications that will benefit: complex memory access (memory access stalls), mix of instruction types (e.g. integer and FP computation)

(© 2002 Intel Corporation. All rights reserved. Other brands and names are the property of their respective owners.)

Page 40:

Intel Multi-Threading

•  Since two threads can use the CPU concurrently, it is necessary to design a strategy that allows both threads to effectively use CPU resources.
•  Intel uses 4 different strategies to share resources between the two threads.
•  Replication. Obviously, some resources have to be replicated in order to manage the two threads: two program counters and two register mapping tables (ISA registers vs rename registers), so that each thread has an independent set of registers. This replication accounts for the 5% increase in processor area.

Page 41:

Intel Multi-Threading

•  Partitioning. Some hardware resources are rigidly partitioned between the two threads: each thread can use exactly half of each resource. This applies to all buffers (for LOAD and STORE instructions) and to the ROB (the "retirement queue" in Intel terminology).
•  Partitioning can of course reduce the utilization of the partitioned resources, when a thread does not use its half of a resource, which could otherwise be used by the other thread.

Page 42:

Intel Multi-Threading

•  Sharing. The hardware resource is completely shared: the first thread that gets hold of the resource uses it, and the other thread waits.
•  This type of resource management solves the problem of an unused resource: if one thread does not need it, it can be allocated to the second one. Obviously, the reverse problem arises: a thread can be slowed down if the required resource is completely allocated to the other one.
•  For this reason, in Intel processors the only completely shared resources are those available in great quantity, e.g. cache lines: for them, it is unlikely that a "starvation" problem arises.

Page 43:

Intel Multi-Threading

•  Threshold sharing. A thread can use a resource dynamically, up to a given percentage, so a part remains available for the other thread (possibly less than half).
•  The scheduler that dispatches uops to the reservation stations uses this policy.
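The idea behind threshold sharing can be sketched as follows (a toy allocator with an illustrative cap, not Intel's actual figures):

```python
# Toy sketch of threshold sharing: a pool of entries that either
# thread may grab dynamically, but never beyond a per-thread cap,
# so some entries always remain available for the other thread.
class ThresholdSharedBuffer:
    def __init__(self, total, cap):
        self.free = total
        self.cap = cap              # per-thread maximum
        self.used = {0: 0, 1: 0}    # entries held by each thread

    def allocate(self, thread):
        if self.free == 0 or self.used[thread] >= self.cap:
            return False            # this thread stalls; the other is protected
        self.used[thread] += 1
        self.free -= 1
        return True

buf = ThresholdSharedBuffer(total=32, cap=24)
grabbed = sum(buf.allocate(0) for _ in range(32))
print(grabbed, buf.free)   # → 24 8  (thread 0 is capped; 8 entries stay free)
```

Compare with pure sharing (no cap: one greedy thread can take everything) and rigid partitioning (cap = total/2, even when the other thread is idle); threshold sharing sits in between.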

Page 44:

Intel Multi-Threading

•  Resource sharing in the Pentium 4 pipeline (Tanenbaum, Fig. 8.9).

Page 45:

Intel Multi-Threading

•  Resource sharing in the Pentium 4 pipeline (Intel source).

From the Intel Technology Journal, Q1 2002, "Hyper-Threading Technology Architecture and Microarchitecture":

… Execution Engine in the pipeline flow. The uop queue is partitioned such that each logical processor has half the entries. This partitioning allows both logical processors to make independent forward progress regardless of front-end stalls (e.g., TC miss) or execution stalls.

OUT-OF-ORDER EXECUTION ENGINE
The out-of-order execution engine consists of the allocation, register renaming, scheduling, and execution functions, as shown in Figure 6. This part of the machine re-orders instructions and executes them as quickly as their inputs are ready, without regard to the original program order.

Allocator
The out-of-order execution engine has several buffers to perform its re-ordering, tracing, and sequencing operations. The allocator logic takes uops from the uop queue and allocates many of the key machine buffers needed to execute each uop, including the 126 re-order buffer entries, 128 integer and 128 floating-point physical registers, and 48 load and 24 store buffer entries. Some of these key buffers are partitioned such that each logical processor can use at most half the entries.

[Figure 6: Out-of-order execution engine detailed pipeline — Uop Queue, Register Rename, Allocate, Sched, Register Read, Execute (L1 D-Cache), Register Write, Retire; with the Registers, Re-Order Buffer, and Store Buffer structures.]

Specifically, each logical processor can use up to a maximum of 63 re-order buffer entries, 24 load buffers, and 12 store buffer entries.

If there are uops for both logical processors in the uop queue, the allocator will alternate selecting uops from the logical processors every clock cycle to assign resources. If a logical processor has used its limit of a needed resource, such as store buffer entries, the allocator will signal “stall” for that logical processor and continue to assign resources for the other logical processor. In addition, if the uop queue only contains uops for one logical processor, the allocator will try to assign resources for that logical processor every cycle to optimize allocation bandwidth, though the resource limits would still be enforced.

By limiting the maximum resource usage of key buffers, the machine helps enforce fairness and prevents deadlocks.
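The alternating allocation policy described above can be sketched in a few lines. This is a toy model, not Intel's design: the `Allocator` class, the queue format, and the resource names are invented for illustration, while the per-thread limits (63 ROB entries, 24 load-buffer and 12 store-buffer entries) follow the figures quoted in the text.

```python
from collections import deque

# Per-logical-processor limits, from the text: each logical processor may use
# at most 63 re-order buffer, 24 load-buffer, and 12 store-buffer entries.
LIMITS = {"rob": 63, "load": 24, "store": 12}

class Allocator:
    """Toy model of the alternating allocation policy (not Intel's RTL)."""
    def __init__(self):
        self.last = 1                                  # so lp 0 goes first
        self.in_use = {lp: dict.fromkeys(LIMITS, 0) for lp in (0, 1)}

    def step(self, queues):
        """One clock: prefer the logical processor that did not allocate last
        cycle; if it is empty or stalled on a resource limit, try the other."""
        first = 1 - self.last
        for lp in (first, 1 - first):
            if not queues[lp]:
                continue                               # no uops queued for this lp
            uop = queues[lp][0]                        # resources this uop needs
            if any(self.in_use[lp][r] + n > LIMITS[r] for r, n in uop.items()):
                continue                               # "stall": limit reached
            queues[lp].popleft()
            for r, n in uop.items():
                self.in_use[lp][r] += n
            self.last = lp
            return lp
        return None                                    # both empty or stalled
```

With uops queued for both logical processors the selection alternates every cycle; when one processor hits its limit (the "stall" branch), the other keeps receiving the full allocation bandwidth.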

Register Rename The register rename logic renames the architectural IA-32 registers onto the machine’s physical registers. This allows the 8 general-use IA-32 integer registers to be dynamically expanded to use the available 128 physical registers. The renaming logic uses a Register Alias Table (RAT) to track the latest version of each architectural register, so that the next instruction(s) know where to get their input operands.

Since each logical processor must maintain and track its own complete architecture state, there are two RATs, one for each logical processor. The register renaming process is done in parallel to the allocator logic described above, so the register rename logic works on the same uops to which the allocator is assigning resources.

Once uops have completed the allocation and register rename processes, they are placed into two sets of queues, one for memory operations (loads and stores) and another for all other operations.
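The per-thread RAT mechanism can be sketched as follows. This is a toy model, not Intel's implementation: the register names and the pool of 128 physical registers follow the text, while the free-list handling and the `rename` helper are invented for illustration.

```python
# One Register Alias Table (RAT) per logical processor, mapping each
# architectural register to the physical register holding its latest version.
ARCH_REGS = ["eax", "ebx", "ecx", "edx", "esi", "edi", "ebp", "esp"]

free_list = list(range(16, 128))                   # shared pool of physical regs
rat = {lp: {r: lp * 8 + i for i, r in enumerate(ARCH_REGS)} for lp in (0, 1)}

def rename(lp, dst, srcs):
    """Rename one uop for logical processor `lp`: read the sources through
    that lp's RAT, then point `dst` at a freshly allocated physical register."""
    phys_srcs = [rat[lp][s] for s in srcs]         # where to read input operands
    phys_dst = free_list.pop(0)                    # new version of the destination
    rat[lp][dst] = phys_dst
    return phys_dst, phys_srcs
```

Because each logical processor reads and updates only its own RAT, the two threads' versions of `eax` map to different physical registers and never interfere.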


• 46

SMT Thread Selection Points

The execution pipeline has multiple thread selection points where the architecture can select to work for one of the 2 logical threads.

[Figure: Nehalem pipeline — Predict/Fetch → Decode → IQ → Alloc → RS → Schedule → EX → Retire, with the ROB alongside]

•  Select thread to fetch instructions from
•  Select instruction to decode
•  Select u-operation to allocate
•  Select instruction to retire
•  Additional selection points in the memory pipeline, such as scheduling of MOB (memory order buffer) entries
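A minimal sketch of one such selection point, assuming a plain round-robin policy (real hardware also weighs priorities and resource occupancy; the helper below is invented for illustration):

```python
import itertools

# Each selection point picks which of the two logical threads to service this
# cycle; a simple policy is round-robin, skipping a thread with nothing to do.
def make_selector():
    turn = itertools.cycle((0, 1))
    def select(ready):                  # ready[t] is True if thread t has work
        t = next(turn)
        if ready[t]:
            return t
        return 1 - t if ready[1 - t] else None
    return select

fetch_select = make_selector()          # one independent selector per stage:
retire_select = make_selector()         # fetch, decode, allocate, retire, ...
```

Each stage gets its own selector, so the fetch stage can favor one thread in a given cycle while retirement services the other.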

Intel Multi-Threading


Intel Multi-Threading

• 47

[Figure: bar chart of performance gain with SMT enabled vs. disabled, measured on a pre-production Intel® Core™ i7 processor with 3-channel DDR3 memory. Benchmarks: Floating Point (SPECfp_rate_base2006* estimate), Integer (SPECint_rate_base2006* estimate), 3dsMax*, Cinebench* 10, POV-Ray* 3.7 beta 25, 3DMark* Vantage* CPU. Source: Intel.]


Intel Multi-Threading

•  Threshold sharing applied at run time requires run-time monitoring of resource utilization; additional hardware is necessary, and some computational overhead ensues.

•  Complete sharing can also cause problems. This is true especially with cache memory. Sharing cache lines makes cache management simple, but what happens if both threads each require ¾ of the cache lines for a speedy execution?

–  A high number of cache misses that would not arise if only a single thread were executing (would coarse-grained multithreading be more efficient?)
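The contention scenario above can be made concrete with a toy LRU-cache simulation (illustrative only; the cache size, working sets, and access pattern are invented): each thread alone fits in the cache and incurs only cold misses, but interleaving two threads that each cycle over ¾ of the lines makes the cache thrash.

```python
from collections import OrderedDict

def miss_rate(accesses, n_lines):
    """Miss rate for a fully associative cache of n_lines with LRU replacement."""
    cache, misses = OrderedDict(), 0
    for line in accesses:
        if line in cache:
            cache.move_to_end(line)             # refresh LRU position
        else:
            misses += 1
            if len(cache) >= n_lines:
                cache.popitem(last=False)       # evict least recently used line
        cache[line] = True
    return misses / len(accesses)

N = 64                                          # cache lines
ws = 3 * N // 4                                 # each thread wants 3/4 of them
t0 = [("t0", i % ws) for i in range(10 * ws)]   # thread 0 cycles its working set
t1 = [("t1", i % ws) for i in range(10 * ws)]   # thread 1 likewise

alone = miss_rate(t0, N)                        # fits: only 48 cold misses (10%)
together = [x for pair in zip(t0, t1) for x in pair]   # fine-grained interleave
shared = miss_rate(together, N)                 # 1.5x capacity + LRU: 100% misses
```

Run alone, the working set fits and the miss rate is just the cold misses; interleaved, the combined footprint is 1.5× the capacity and every access misses, exactly the pathology the bullet describes.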

• 48


CPU Multi-Threading

•  Intel temporarily dropped hyper-threading technology in its dual-core processors (dual-core processors were based on an updated version of the P6 microarchitecture, which does not support multithreading).

•  Hyper-threading was re-introduced in 2008 with Nehalem-class processors.

•  Other processors using multithreading are:

–  IBM POWER5 (2004): dual core with 2-way SMT;

–  IBM POWER7 (2010): 2–8 cores with 4-way SMT;

–  UltraSPARC T2 (Niagara 2) and T3 (Rainbow Falls): fine-grained multithreading with 8 threads per core.

• 49