Lecture 8: ILP and Speculation Contd. Chapter 3, Sections 3.6-3.8


Page 1: Lecture 8: ILP and Speculation Contd. Chapter 3, Sections 3.6-3.8

DAP.F96 1

L.N. Bhuyan, CS 203A

Page 2: Superscalar with Speculation

• Speculative execution – execute control-dependent instructions even when we are not sure they should be executed.
• With branch prediction, we speculate on the outcome of branches and execute the program as if our guesses were correct. On a misprediction, the hardware undoes the speculative work.
  – Instructions after the branch can be fetched and issued, but cannot execute before the branch is resolved.
  – Speculation allows them to execute, with care.
• Multi-issue + branch prediction + Tomasulo.
• Implemented in a number of processors: PowerPC 603/604/G3/G4, Pentium II/III/4, Alpha 21264, AMD K5/K6/Athlon, MIPS R10k/R12k.

Page 3: Hardware Modifications

• Speculated instructions execute and generate results. Should those results be written into the register file? Should they be passed on to dependent instructions (in reservation stations)?
• Separate the bypassing paths from actual completion of an instruction. Do not allow speculated instructions to perform any updates that cannot be undone.
• When instructions are no longer speculative, allow them to update registers or memory – instruction commit.
  – Out-of-order execution, in-order commit (provides precise exception handling).
• So where are the instructions and their results between execution completion and instruction commit? Instructions may finish considerably before they commit.
• The reorder buffer (ROB) holds the results of instructions that have finished execution but have not committed.
  – The ROB is a source of operands for instructions, much like the store buffer.
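The ROB mechanics above can be sketched in a few lines. This is a toy model, not from the slides: the entry fields ("instr", "destination", "value") follow the slide, while the class and method names are invented for illustration. The key point is that `complete` never touches the register file; only in-order `commit` from the head does.

```python
# Toy sketch of a reorder buffer (ROB): each entry holds the instruction,
# its destination, and its value. Results wait here between "execution
# complete" and in-order commit; only commit updates architectural state.
from collections import deque

class ROB:
    def __init__(self):
        self.entries = deque()   # head = oldest (next to commit)
        self.regs = {}           # architectural register file

    def issue(self, instr, dest):
        entry = {"instr": instr, "dest": dest, "value": None, "done": False}
        self.entries.append(entry)
        return entry             # the ROB slot doubles as an operand tag

    def complete(self, entry, value):
        # Execution finished: record the result, but do NOT update regs.
        entry["value"], entry["done"] = value, True

    def commit(self):
        # Only the head may retire, so commits happen strictly in order.
        while self.entries and self.entries[0]["done"]:
            e = self.entries.popleft()
            self.regs[e["dest"]] = e["value"]

rob = ROB()
a = rob.issue("mul F0,F2,F4", "F0")
b = rob.issue("add F6,F0,F8", "F6")
rob.complete(b, 7)        # the add finishes first...
rob.commit()
assert rob.regs == {}     # ...but cannot commit past the unfinished mul
rob.complete(a, 3)
rob.commit()
assert rob.regs == {"F0": 3, "F6": 7}
```

Because speculated results live only in ROB entries until commit, squashing them is just a matter of discarding entries, which is exactly why in-order commit makes misprediction recovery and precise exceptions easy.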

Page 4: HW Support for More ILP

• Speculation: allow an instruction that depends on a branch predicted taken to issue, with no consequences (including exceptions) if the branch is not actually taken ("HW undo"); called "boosting".
• Combine branch prediction with dynamic scheduling to execute before branches are resolved.
• Separate speculative bypassing of results from real bypassing of results.
  – When an instruction is no longer speculative, write its boosted results (instruction commit) or discard them.
  – Execute out-of-order but commit in-order, to prevent any irrevocable action (state update or exception) until the instruction commits.

Page 5: HW Support for More ILP

• Need a HW buffer for the results of uncommitted instructions: the reorder buffer.
  – 3 fields per entry: instruction, destination, value.
  – The reorder buffer can be an operand source => more registers, like the reservation stations.
  – No more store buffers before memory (Fig. 3.29).
  – Use the reorder buffer number instead of the reservation station number when execution completes.
  – Supplies operands between execution complete and commit.
  – Once an instruction commits, its result is put into the register.
  – Instructions commit in order.
  – As a result, it is easy to undo speculated instructions on mispredicted branches or on exceptions.

[Figure: FP Op Queue feeding reservation stations and FP adders, with a reorder buffer in front of the FP registers.]

Page 6: Four Steps of the Speculative Tomasulo Algorithm

1. Issue – get an instruction from the FP Op Queue. If a reservation station and a reorder buffer slot are free, issue the instruction and send the operands and the reorder buffer number for the destination (this stage is sometimes called "dispatch").
2. Execution – operate on operands (EX). When both operands are ready, execute; if not ready, watch the CDB for the result; when both are in the reservation station, execute. This checks RAW (sometimes called "issue").
3. Write result – finish execution (WB). Write on the Common Data Bus to all awaiting FUs and to the reorder buffer; mark the reservation station available.
4. Commit – update the register with the reorder buffer result. When the instruction is at the head of the reorder buffer and its result is present, update the register with the result (or store to memory) and remove the instruction from the reorder buffer. A mispredicted branch flushes the reorder buffer (this stage is sometimes called "graduation").
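The commit step's flush rule can be sketched concretely. A hypothetical example (the instruction strings and dict fields are invented): when the head of the ROB is a branch that was mispredicted, everything younger than it in the buffer is speculative down the wrong path and is discarded.

```python
# Sketch: in-order commit with a mispredicted branch at the ROB head.
# All younger (speculative) entries are squashed when it commits.
from collections import deque

rob = deque([
    {"instr": "beq r1,r2,L",  "done": True,  "mispredicted": True},
    {"instr": "add r3,r4,r5", "done": True,  "mispredicted": False},  # speculative
    {"instr": "lw  r6,0(r3)", "done": False, "mispredicted": False},  # speculative
])

committed = []
while rob and rob[0]["done"]:
    head = rob.popleft()
    committed.append(head["instr"])
    if head["mispredicted"]:
        rob.clear()        # flush: squash every younger instruction
        break

assert committed == ["beq r1,r2,L"]   # only the branch itself retires
assert len(rob) == 0                  # wrong-path work never reaches state
```

Since none of the squashed instructions ever updated a register or memory, recovery needs no rollback of architectural state; fetch simply restarts at the correct target.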

Page 7: [Figure]

Page 8: With Hardware Speculation

[Figure]

Page 9: Additional Functionalities of the ROB

• Dynamically execute instructions while maintaining a precise interrupt model.
  – In-order commit allows handling interrupts in order, at commit time.
• Undo speculative actions when a branch is mispredicted.
  – In reality, a misprediction is expected to be handled as soon as possible: flush all entries that appear after the branch, allowing the preceding instructions to continue.
  – Performance is very sensitive to the branch-prediction mechanism:
    » prediction accuracy, misprediction detection, and recovery.
• Avoids hazards through memory (memory disambiguation).
  – WAW and WAR are removed, since memory is updated in order.
  – RAW hazards are maintained by 2 restrictions:
    » A load's effective address is computed after all earlier stores'.
    » A load cannot read from memory if there is an earlier store in the ROB with the same effective address (some machines simply bypass the value from the store to the load).
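The second RAW restriction, with the store-to-load bypass variant, can be sketched as a simple address check. This is a toy model (the function name and data layout are invented): the load's address is compared against every older, uncommitted store in the ROB, and the youngest match supplies the value directly.

```python
# Sketch of memory disambiguation for a load against uncommitted stores.
def load_value(load_addr, earlier_stores, memory):
    # earlier_stores: (addr, value) pairs of older stores still in the
    # ROB, ordered oldest to youngest; the youngest matching store wins.
    for addr, value in reversed(earlier_stores):
        if addr == load_addr:
            return value             # store-to-load bypass: no memory read
    return memory[load_addr]         # no conflict: safe to read memory

mem = {100: 5, 104: 9, 108: 3}
stores = [(100, 7), (104, 11), (100, 8)]    # three uncommitted stores
assert load_value(100, stores, mem) == 8    # bypassed from youngest store to 100
assert load_value(108, stores, mem) == 3    # no earlier store matches: read memory
```

A machine without the bypass would instead stall the load until the conflicting store commits; forwarding trades comparator hardware for that stall.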

Page 10: Getting CPI < 1: Issuing Multiple Instructions/Cycle

• Vector processing: explicit coding of independent loops as operations on large vectors of numbers.
  – Multimedia instructions are being added to many processors.
• Superscalar: a varying number of instructions/cycle (1 to 8), scheduled by the compiler or by HW (Tomasulo).
  – IBM PowerPC, Sun UltraSPARC, DEC Alpha, Pentium III/4.
• (Very) Long Instruction Words ((V)LIW): a fixed number of instructions (4-16) scheduled by the compiler; put ops into wide templates (TBD).
  – Intel Architecture-64 (IA-64), 64-bit address.
    » Renamed "Explicitly Parallel Instruction Computing (EPIC)".
• The anticipated success of multiple issue led to Instructions Per Clock cycle (IPC) replacing CPI as the metric of interest.

Page 11: Getting CPI < 1: Issuing Multiple Instructions/Cycle

• Superscalar MIPS: 2 instructions per cycle, 1 FP & 1 anything else.
  – Fetch 64 bits/clock cycle; integer instruction on the left, FP on the right.
  – Can only issue the 2nd instruction if the 1st instruction issues.
  – More ports on the FP registers, to do an FP load & an FP op as a pair.

  Type              Pipe stages
  Int. instruction  IF  ID  EX  MEM WB
  FP instruction    IF  ID  EX  MEM WB
  Int. instruction      IF  ID  EX  MEM WB
  FP instruction        IF  ID  EX  MEM WB
  Int. instruction          IF  ID  EX  MEM WB
  FP instruction            IF  ID  EX  MEM WB

• The 1-cycle load delay expands to 3 instructions in a superscalar: the instruction in the right half of the pair can't use the loaded value, nor can the instructions in the next issue slot.
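The pairing rule above can be sketched as a tiny issue check. This is a toy model under the slide's assumptions (slot 1 is the integer side, slot 2 the FP side; field names are invented): a pair issues together only if the first slot issues and the second slot actually holds an FP instruction.

```python
# Sketch of the two-issue MIPS pairing rule: 1 int-side + 1 FP per cycle,
# and the 2nd slot can issue only if the 1st does.
def issue_pair(first, second):
    issued = []
    if first["ok"]:                              # slot 1: int/load/store/branch
        issued.append(first["op"])
        if second["ok"] and second["is_fp"]:     # slot 2: must be the FP op
            issued.append(second["op"])
    return issued

pair = [{"op": "lw",    "ok": True,  "is_fp": False},
        {"op": "add.d", "ok": True,  "is_fp": True}]
assert issue_pair(*pair) == ["lw", "add.d"]      # a full pair issues

stalled = [{"op": "lw",    "ok": False, "is_fp": False},
           {"op": "add.d", "ok": True,  "is_fp": True}]
assert issue_pair(*stalled) == []                # 2nd cannot issue if 1st stalls
```

This in-order slot dependence is what keeps the issue logic simple, and also why a single stall in the left slot empties the whole cycle.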

Page 12: Multiple Issue with Speculation

Page 13: Example

Page 14: Timing of Multiple Issue without Speculation, with two CDBs

Page 15: Timing of Multiple Issue with Speculation, with multiple CDBs and commit operations

Note: perfect branch prediction and speculation are assumed in Fig. 3.34; otherwise the performance would be lower.

Page 16: Multiple Issue Issues

• Issue packet: a group of instructions from the fetch unit that could potentially issue in 1 clock.
  – If an instruction causes a structural hazard or a data hazard, either due to an earlier instruction in execution or to an earlier instruction in the issue packet, then that instruction does not issue.
  – 0 to N instructions issue per clock cycle, for an N-issue machine.
• Performing the issue checks (register dependencies among the issuing instructions) in 1 cycle could limit clock cycle time: (n^2 - n) comparisons for n instructions => 2450 comparisons for 50 instructions => poses a limit on instruction window size.
  – => the issue stage is usually split and pipelined:
  – the 1st stage decides how many instructions from within this packet can issue; the 2nd stage examines hazards among the selected instructions and those already issued.
  – => higher branch penalties => prediction accuracy is important.
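The (n^2 - n) figure above comes from checking each of the n issuing instructions against every other one for a register dependence, in both orders. A one-line sketch reproduces the slide's arithmetic:

```python
# Ordered pairs among n instructions, excluding self-comparisons:
# each instruction is compared against every other one.
def issue_checks(n):
    return n * n - n

assert issue_checks(2) == 2
assert issue_checks(4) == 12
assert issue_checks(50) == 2450   # the slide's 50-instruction example
```

The quadratic growth is the point: doubling the issue width roughly quadruples the comparator count, which is why the issue stage gets split and pipelined rather than widened in one cycle.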

Page 17: Multiple Issue Challenges

• While the integer/FP split is simple for the HW, we get a CPI of 0.5 only for programs with exactly 50% FP operations AND no hazards.
• If more instructions issue at the same time, decode and issue become harder:
  – Even a 2-scalar must examine 2 opcodes and 6 register specifiers, and decide whether 1 or 2 instructions can issue (N-issue ~ O(N^2 - N) comparisons).
  – Register file: need 2x reads and 1x writes per cycle.
  – Rename logic: must be able to rename the same register multiple times in one cycle! For instance, consider 4-way issue:

      add r1, r2, r3      add p11, p4, p7
      sub r4, r1, r2      sub p22, p11, p4
      lw  r1, 4(r4)       lw  p23, 4(p22)
      add r5, r1, r2      add p12, p23, p4

    Imagine doing this transformation in a single cycle!
  – Result buses: need to complete multiple instructions per cycle.
    » So, need multiple buses with associated matching logic at every reservation station,
    » or multiple forwarding paths.
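The rename step in the 4-way example above can be sketched sequentially (hardware does all four at once, which is the hard part). The initial mappings (r2 -> p4, r3 -> p7, etc.) and the free-list order are assumptions chosen to reproduce the slide's physical register names.

```python
# Sketch of register renaming: read each instruction's source mappings
# from the map table, then allocate a fresh physical register for its dest.
def rename(instrs, map_table, free_list):
    out = []
    for op, dst, srcs in instrs:
        new_srcs = [map_table[s] for s in srcs]   # read sources first
        map_table[dst] = free_list.pop(0)         # then remap the dest
        out.append((op, map_table[dst], new_srcs))
    return out

table = {"r1": "p1", "r2": "p4", "r3": "p7", "r4": "p2", "r5": "p3"}
code = [("add", "r1", ["r2", "r3"]),
        ("sub", "r4", ["r1", "r2"]),
        ("lw",  "r1", ["r4"]),        # memory offset omitted for brevity
        ("add", "r5", ["r1", "r2"])]
renamed = rename(code, table, ["p11", "p22", "p23", "p12"])
assert renamed == [("add", "p11", ["p4", "p7"]),
                   ("sub", "p22", ["p11", "p4"]),
                   ("lw",  "p23", ["p22"]),
                   ("add", "p12", ["p23", "p4"])]
```

Note that both writes to r1 get distinct physical registers (p11, then p23), which is exactly how renaming removes the WAW and WAR hazards; the sub correctly reads p11, the older mapping.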

Page 18: Dynamic Scheduling in Superscalar – The Easy Way

• How do we issue two instructions per cycle and keep in-order instruction issue for Tomasulo?
  – Assume 1 integer + 1 floating point.
  – 1 Tomasulo control for integer, 1 for floating point.
• Issue at 2x the clock rate, so that issue remains in order.
• Only loads/stores might cause a dependency between integer and FP issue:
  – Replace the load reservation stations with a load queue; operands must be read in the order they are fetched.
  – A load checks addresses in the store queue to avoid RAW violations.
  – A store checks addresses in the load queue to avoid WAR and WAW violations.

Page 19: Register Renaming: Virtual Registers versus Reorder Buffers

• An alternative to the reorder buffer is a larger, virtual set of registers plus register renaming.
• Virtual registers hold both the architecturally visible registers and temporary values.
  – They replace the functions of the reorder buffer and the reservation stations.
• The renaming process maps the names of architectural registers to registers in the virtual register set.
  – A changing subset of the virtual registers contains the architecturally visible registers.
• Simplifies instruction commit: mark the register as no longer speculative, and free the register holding the old value.
• Adds 40-80 extra registers (Alpha, Pentium, ...).
  – Their number limits the number of instructions in execution (each is in use until commit).

Page 20: How Much to Speculate?

• Speculation pro: it can uncover events that would otherwise stall the pipeline (e.g., cache misses).
• Speculation con: speculation is costly if an expensive exceptional event occurs when the speculation was incorrect.
• Typical solution: speculation handles only low-cost exceptional events (e.g., a 1st-level cache miss).
• When an expensive exceptional event occurs (a 2nd-level cache miss or a TLB miss), the processor waits until the instruction causing the event is no longer speculative before handling it.
• Current designs assume a single branch speculated per cycle; future designs may speculate across multiple branches!

Page 21: Limits to ILP

• Conflicting studies of the amount of ILP, differing in:
  – Benchmarks (vectorized Fortran FP vs. integer C programs)
  – Hardware sophistication
  – Compiler sophistication
• How much ILP is available using existing mechanisms with increasing HW budgets?
• Do we need to invent new HW/SW mechanisms to stay on the processor performance curve?
  – Intel MMX, SSE (Streaming SIMD Extensions): 64-bit ints
  – Intel SSE2: 128 bits, including 2 64-bit floating-point ops per clock
  – Motorola AltiVec: 128-bit ints and FPs
  – SuperSPARC multimedia ops, etc.

Page 22: Limits to ILP

Initial HW model here; MIPS compilers. Assumptions for an ideal/perfect machine to start:

1. Register renaming – infinite virtual registers => all register WAW & WAR hazards are avoided.
2. Branch prediction – perfect; no mispredictions.
3. Jump prediction – all jumps perfectly predicted. (2 & 3 => a machine with perfect speculation and an unbounded buffer of available instructions.)
4. Memory-address alias analysis – addresses are known, and a store can be moved before a load provided the addresses are not equal.

Also: unlimited number of instructions issued per clock cycle; perfect caches; 1-cycle latency for all instructions (including FP *, /).

Page 23: Upper Limit to ILP: Ideal Machine (Figure 3.35, p. 242)

[Figure: instruction issues per cycle (IPC) by program – gcc 54.8, espresso 62.6, li 17.9, fpppp 75.2, doducd 118.7, tomcatv 150.1]

Integer: 18 - 60; FP: 75 - 150

How is this data generated?

Page 24: More Realistic HW: Branch Impact (Figure 3.37)

Change from an infinite window to a window of 2000 instructions and a maximum issue of 64 instructions per clock cycle.

[Figure: IPC by program (gcc, espresso, li, fpppp, doducd, tomcatv) for branch predictors: Perfect, Tournament (selective predictor), Standard 2-bit BHT (512 entries), Static (profile-based), None]

Integer: 6 - 12; FP: 15 - 45

Page 25: More Realistic HW: Renaming Register Impact (Figure 3.41)

Change: 2000-instruction window, 64-instruction issue, 8K 2-level branch prediction.

[Figure: IPC by program (gcc, espresso, li, fpppp, doducd, tomcatv) for Infinite, 256, 128, 64, and 32 renaming registers, and None]

Integer: 5 - 15; FP: 11 - 45

Page 26: More Realistic HW: Memory Address Alias Impact (Figure 3.44)

Change: 2000-instruction window, 64-instruction issue, 8K 2-level branch prediction, 256 renaming registers.

[Figure: IPC by program (gcc, espresso, li, fpppp, doducd, tomcatv) for alias analysis: Perfect, Global/stack perfect (heap conflicts), Inspection (assembly), None]

Integer: 4 - 9; FP: 4 - 45 (Fortran, no heap)

Page 27: Realistic HW: Window Impact (Figure 3.46)

Perfect disambiguation (HW), 1K selective prediction, 16-entry return stack, 64 renaming registers, issue as many instructions as the window allows.

[Figure: IPC by program (gcc, espresso, li, fpppp, doducd, tomcatv) for window sizes Infinite, 256, 128, 64, 32, 16, 8, 4]

Integer: 6 - 12; FP: 8 - 45

Page 28: How to Exceed the ILP Limits of This Study?

• WAR and WAW hazards through memory: the study eliminated WAW and WAR hazards on registers through renaming, but not those through memory.
• Unnecessary dependences (e.g., the compiler not unrolling loops, so the iteration variable creates a dependence).
• Overcoming the data-flow limit: value prediction – predicting values and speculating on the prediction.
  – Address-value prediction and speculation predicts addresses and speculates by reordering loads and stores; it could provide better aliasing analysis, since we only need to predict whether addresses are equal.
• Use multiple threads of control.

Page 29: Workstation Microprocessors, 3/2001

Source: Microprocessor Report, www.MPRonline.com

• Max issue: 4 instructions (many CPUs)
• Max rename registers: 128 (Pentium 4)
• Max BHT: 4K x 9 (Alpha 21264B), 16K x 2 (UltraSPARC III)
• Max window size (OOO): 126 instructions (Pentium 4)
• Max pipeline: 22/24 stages (Pentium 4)

Page 30: SPEC 2000 Performance, 3/2001

Source: Microprocessor Report, www.MPRonline.com

[Figure: SPEC 2000 performance comparison chart, with annotated ratios of 1.6X, 3.8X, 1.2X, 1.7X, and 1.5X]

Page 31: Conclusion

• 1985-2000: 1000X performance.
  – Moore's Law for transistors/chip => Moore's Law for performance/MPU.
• Hennessy: the industry has been following a roadmap of ideas known in 1985 to exploit instruction-level parallelism and (the real) Moore's Law to get 1.55X/year.
  – Caches, pipelining, superscalar, branch prediction, out-of-order execution, ...
• ILP limits: to keep making performance progress, will we need explicit parallelism from the programmer, vs. the implicit parallelism of ILP exploited by the compiler and HW?
  – Otherwise, do we drop back to the old rate of 1.3X per year?
  – Or to less than 1.3X, because of the processor-memory performance gap?
• Impact on you: if you care about performance, you had better think about explicitly parallel algorithms rather than relying on ILP.