
Page 1: Alireza Khalafi  David Morano David Kaeli Northeastern University

MEDEA 02/09/22 DAM 1

Alireza Khalafi David Morano

David Kaeli
Northeastern University

Realizing High IPC Through A Scalable Memory-Latency Tolerant

Multipath Microarchitecture

presented at

Workshop on Memory Access Decoupled Architectures (MEDEA) 2002
September 22, 2002

in conjunction with PACT 2002

Augustus Uht
University of Rhode Island

Northeastern University Computer Architecture Research

Page 2

outline

• introduction
• related work
• key ideas used
• the microarchitecture
• simple execution example
• experimental methodology & results
  – IPC results
  – L1-I cache latency tolerance
  – L1-D cache latency tolerance
  – L2 cache latency tolerance
  – main-memory latency tolerance
  – L0 effect
• summary

Page 3

introduction

• want to explore a large, ISA-independent microarchitecture capable of extracting high ILP from sequential programs
  – control & value speculation
  – multipath execution
  – massive out-of-order execution

• large size really requires scalability of the microarchitecture
• proper program ordering is a big problem and needs to be handled creatively
• further, memory is slow! -- how much is performance affected?
  – L2 is falling behind the processor -- but DRAM is becoming a big problem!

• L1 & L2 (L3) are good, but can something sit closer than the L1 cache?
  – an L0 cache (distributed within the microarchitecture)

Page 4

related work

• Riseman & Foster (1972), Lam & Wilson (1992), others :

– explored unconstrained resource use

– there is much ILP in general purpose sequential codes

– > 100 (but real machines today only extract about 1-2 IPC)

• Sohi & Franklin (1992)

– register lifetimes are short, meaning that effective bypassing of the architected register file is possible with limited length buses -- key to spatial scalability of a machine

• Cleary et al. -- Warp Engine (1995)

– used time-tags for program ordering, but they were cumbersome because they were represented in floating-point

• newer work on large issue windows (ISCA 2002, others)

– but these do not scale and lack distributed execution units

Page 5

key ideas used in the machine

• modified issue slot or reservation station -- termed "active station" (AS), for managing instruction execution and re-execution !

• time-tags for all operand and program ordering

• microarchitectural predication for all instructions
  – facilitates out-of-order execution
  – allows for execution of control-flow independent instructions beyond conditional branches

• disjoint spawning of alternative (not predicted) execution paths from conditional branches

• value speculation for input operands to instructions

• L0 caches (small cache close to execution units) distributed throughout an array of execution units

• out-of-order execution, based only on a priority reflecting the likelihood of instruction commitment and on the availability of execution resources ("resource flow computing")
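The "resource flow" dispatch rule in the last bullet can be sketched in software. This is a minimal, hypothetical Python sketch, not the authors' hardware: the function `dispatch` and its ready-list encoding are invented here, and commit likelihood is approximated simply by being on the mainline (predicted) path and having a small time-tag.

```python
import heapq

# Hypothetical sketch of "resource flow computing" dispatch: among ready
# instructions, the free processing elements pick those most likely to
# commit -- approximated here as mainline-path first, then smallest time-tag.
def dispatch(ready, num_free_pes):
    # ready: list of (time_tag, on_mainline_path, name)
    pq = [(not mainline, tt, name) for (tt, mainline, name) in ready]
    heapq.heapify(pq)  # False (mainline) sorts before True (alternative path)
    return [heapq.heappop(pq)[2] for _ in range(min(num_free_pes, len(pq)))]
```

With two free PEs and ready set {I5 (tt 5, mainline), A2 (tt 2, alternative path), I3 (tt 3, mainline)}, the two mainline instructions I3 and I5 are dispatched before the alternative-path A2, regardless of A2's earlier time-tag.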

Page 6

microarchitecture overview

• ASes arranged in columns forming the Execution Window

• processing elements (PEs) are distributed throughout e-window

• columns logically rotate as whole columns are loaded and committed

[figure: the n × m Execution Window -- ASes arranged in columns 0 … m-1, fed by I-fetch from the L1 I-cache; a store queue passes committed memory stores to the L1 D-cache, which also services memory loads; the unified L2 cache and main memory are not shown; the temporally earliest instruction sits at one end of the window and the temporally latest at the other]
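The column rotation described above behaves like a circular buffer of columns. The following is a minimal, hypothetical Python sketch (the class `ExecutionWindow` and its method names are invented for illustration): the temporally earliest column commits and its slots are immediately reloaded with the temporally latest instructions, so columns rotate logically without any physical movement.

```python
# Hypothetical sketch of logical column rotation in the execution window:
# columns form a circular buffer; the temporally earliest column commits
# and is reloaded in place with the newest instructions.
class ExecutionWindow:
    def __init__(self, num_columns):
        self.columns = [None] * num_columns   # each entry: a column of ASes
        self.head = 0                         # temporally earliest column

    def commit_and_reload(self, new_column):
        committed = self.columns[self.head]   # column leaving the window
        self.columns[self.head] = new_column  # reload the same physical slot
        self.head = (self.head + 1) % len(self.columns)  # logical rotation
        return committed
```

After loading columns c0, c1, c2 into a 3-column window, committing makes room for c3 by retiring c0, then for c4 by retiring c1, and so on around the ring.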

Page 7

execution window

• ASes and PEs are grouped into Sharing Groups ( SGs)

• buses are broken into fixed length segments by Forward Units (FUs)

• there are different types of FUs, one for each operand type:
  – register -- RFU
  – memory -- MFU
  – predicate -- PFU

• both forward and backward interconnections are provided (backward is not shown)

• L0 caches are distributed throughout -- one inside each of the MFUs
• shown: 3 SG rows, 2 AS rows per SG, 3 SG columns

[figure: the execution window as a 3 × 3 grid of sharing groups (SGs), each SG holding four ASes (2 rows of 2) and one PE, with FUs segmenting the forwarding buses between adjacent SGs]

Page 8

a distributed L0 cache

[figure: an MFU on the forwarding & backwarding buses with its internal L0 cache -- stores and results are forwarded toward the future, while lookups propagate backward toward the past and are satisfied from the L0 cache entries]

• the MFU provides
  – a forwarding function
  – backward propagation for requests
  – a filtering function
  – cache storage in the L0 to satisfy backwarding requests

• the L0 may take a variety of configurations
  – we employed fully-associative, but others are possible

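The backwarding request path can be sketched as a walk along the chain of MFUs toward the past. This is a minimal, hypothetical Python sketch (the function `backward_load` and the dict-based L0 representation are invented here), assuming each L0 maps an address to the latest store it has captured, tagged with that store's time-tag.

```python
# Hypothetical sketch of a backwarding load request: the request travels
# backward through the chain of MFUs; the first L0 holding an earlier store
# to the same address answers it, otherwise the request falls through to L1.
def backward_load(mfu_chain, l1, addr, load_tt):
    # mfu_chain: L0 caches ordered from the loading AS toward the past;
    # each L0 maps addr -> (store_time_tag, value)
    for l0 in mfu_chain:
        if addr in l0:
            store_tt, value = l0[addr]
            if store_tt < load_tt:     # only earlier stores are visible
                return value
    return l1[addr]                     # missed in every L0: go to L1 and beyond
```

A load with time-tag 5 is satisfied by an L0 entry capturing a store with time-tag 2, while a load with time-tag 1 must ignore that (later) store and fall through to L1 -- the filtering function mentioned above.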

Page 9

snoop/snarf operation

repeated for each input operand per instruction

[figure: snoop/snarf logic for one input operand -- the operand's latched {path, time-tag, address, value} is compared against each result on the operand forwarding bus (path/address equality, forwarded time-tag >= the currently snarfed source's time-tag, forwarded time-tag < the instruction's own time-tag, value inequality); on a match the operand is loaded (snarfed) and the instruction executes or re-executes]
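The snarf condition in the figure can be sketched in software. This is a minimal, hypothetical Python sketch (the class `InputOperand` and its `snoop` method are invented names), assuming a single execution path so the path-ID comparison is omitted; only the time-tag, address, and value comparators are modeled.

```python
# Hypothetical sketch of the snoop/snarf test run by each input operand of
# an AS whenever a result appears on the operand forwarding bus: snarf when
# the addresses match, the producer is temporally at or after the currently
# snarfed source yet still before the instruction itself, and the value
# actually changed (forcing execution or re-execution).
class InputOperand:
    def __init__(self, own_tt, addr):
        self.own_tt = own_tt   # time-tag of the consuming instruction
        self.addr = addr       # architected register or memory address
        self.src_tt = -1       # time-tag of the producer snarfed so far
        self.value = None

    def snoop(self, fwd_tt, fwd_addr, fwd_value):
        if (fwd_addr == self.addr and
                self.src_tt <= fwd_tt < self.own_tt and
                fwd_value != self.value):
            self.src_tt, self.value = fwd_tt, fwd_value
            return True        # snarf: request (re-)execution
        return False
```

An operand of an instruction with time-tag 3 first snarfs a value forwarded by the producer with time-tag 0, later re-snarfs from a closer producer with time-tag 2, and thereafter rejects both the stale time-tag-0 result and anything from time-tag 5 (which lies in its future).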

Page 10

simple execution example (memory)

code fragment, with example execution schedule and transactions

• "ma1" -- some memory address, "(ma1)" -- some memory value

• we want r4 to have committed value =1, from execution of I1

• we want r5 to have committed value =2, from execution of I4

• I4 executes in clock 2 after snarfing (ma1) from I1, producing a wrong result

• I4 executes again after snarfing (ma1) from I3, giving the correct result

code fragment:

label  TT  instruction
I1     0   (ma1) <= 1
I2     1   r4 <= (ma1)
I3     2   (ma1) <= 2
I4     3   r5 <= (ma1)

execution schedule and transactions:

cycle  execute  forward              snarf
-1               Ix{(ma1)=?}          I2{(ma1)=?}, I4{(ma1)=?}
0      I1
1               I1{(ma1)=1}          I2{(ma1)=1}, I4{(ma1)=1}
2      I2, I4
3      I3       I2{r4=1}, I4{r5=1}
4               I3{(ma1)=2}          I4{(ma1)=2}
5      I4
6               I4{r5=2}
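What the time-tags guarantee at commit can be checked with a small sketch: no matter how many transient (wrong) values a load snarfs along the way, it ends up bound to the latest store to the same address with a smaller time-tag. This is a hypothetical Python illustration (the function `committed_load` is an invented name), not the hardware mechanism itself.

```python
# Hypothetical sketch of the committed binding the time-tags converge to:
# each load takes the value of the latest earlier store to its address.
def committed_load(stores, addr, load_tt):
    # stores: list of (store_time_tag, address, value)
    earlier = [(tt, v) for (tt, a, v) in stores if a == addr and tt < load_tt]
    return max(earlier)[1]    # value of the store with the largest earlier time-tag

stores = [(0, "ma1", 1), (2, "ma1", 2)]   # I1 and I3 from the fragment
r4 = committed_load(stores, "ma1", 1)     # I2 (TT 1) sees only I1's store: 1
r5 = committed_load(stores, "ma1", 3)     # I4 (TT 3) sees I3's later store: 2
```

These are exactly the committed values the schedule above converges to (r4 = 1, r5 = 2), even though I4 transiently held r5 = 1 in clock 3.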

Page 11

experimental methodology

• used a custom simulator to handle distributed multipath execution (handles alternate and wrong paths)

• used the MIPS-1 ISA (some MIPS-2/3 instructions also)

• SGI native C language compiler, '-O' optimization

• used the Irix-6.4 standard libraries within the executables

• evaluated using SpecInt (1995 & 2000) integer benchmarks

• executed all benchmarks for 600M instructions from program start, but only accumulated data over the latter 500M instructions

• we first evaluated IPC

• then we evaluated tolerance to various memory hierarchy latencies

– L1-D cache

– L2-unified cache

– main memory

Page 12

default machine parameters

L0 cache                  32 entries, one word each, fully associative, 1 clock hit latency
L1 I/D cache              64 KB, 32-byte block size, 2-way set associative
L2 cache                  unified I/D, 2 MB, 32-byte block size, direct mapped, 10 clock hit latency
main memory               100 clock latency
memory interleave factor  4
forward units (all)       minimum delay 1 clock
forwarding buses (all)    delay 1 clock, 4 in parallel
branch predictor          PAg: 1024-entry PBHT, 4096-entry GPHT, 2-bit saturating counters

Page 13

IPC results

latencies: L1 1 clock, L2 10 clocks, main memory 100 clocks
geometry: SG rows - AS rows per SG - SG columns - max alternative paths

geometry         8-4-8-8   8-8-8-8   16-8-8-8   32-2-16-16   32-4-16-16
bzip2            4.2       5.0       5.8        5.4          5.7
go               5.1       5.9       6.7        6.5          6.8
gap              6.0       7.5       7.5        8.9          7.9
gzip             5.0       6.3       7.0        6.7          7.2
parser           4.3       4.6       5.3        5.0          5.4
HAR-MEAN         4.8       5.7       6.4        6.3          6.5
speedup over SP  50%       46%       39%        50%          41%

Page 14

L1-D cache latency tolerance

[figure: Relative IPC vs. L1 D-cache hit latency (1, 2, 4, 8 clocks); y-axis from 0.4 to 1.0]

Page 15

L2 cache latency tolerance

[figure: Relative IPC vs. L2 cache hit latency (1, 2, 4, 8, 16 clocks); y-axis from 0.80 to 1.00]

Page 16

main-memory latency tolerance

[figure: Relative IPC vs. main-memory latency (25, 50, 100, 200, 400, 800 clocks); y-axis from 0.80 to 1.00]

Page 17

effect of L0 caches

• memory load requests satisfied from L0 caches

• all other memory load requests satisfied from either :

– an adjacent instruction in the same sharing group

– L1 cache and beyond

                                          bzip2   parser
% of all loads satisfied by L0
due to a backwarding request              3.6%    5.2%
% of all loads satisfied by L0
w/o any backwarding request               18.2%   28.8%

Page 18

summary

• we have described a scalable and distributed microarchitecture

– ISA independent

– time-tags for program ordering

– microarchitectural predication

– multipath execution

• high IPC

– with realistic memory hierarchy

– multipath adds about 40% speed-up over single path

• shown a tolerance to substantial main-memory latency

– about 90% of performance is retained even out to an 800-clock main-memory latency

– due in part to L1 and L2

– but also due to L0, and operand bypass of L1 within e-window

Page 19

acknowledgements

• supported by :

– National Science Foundation

– University of Rhode Island Office of the Provost

– International Business Machines (IBM)

– Intel

– Mentor Graphics

– Xilinx

– Ministry of Education, Culture, and Sports of Spain

Page 20

simple execution example (registers)

code fragment:

label  TT  instruction
I1     0   r3 <= 1
I2     1   r4 <= r3 + 1
I3     2   r3 <= 2
I4     3   r5 <= r3 + 2

• we want r3 to have value =2 after execution of I3

• we want r5 to have value =4 after execution of I4

• I4 executes in clock 2 after snarfing r3 from I1, producing a wrong result

• I4 executes again after snarfing r3 from I3, giving the correct result

example execution schedule:

cycle  execute  forward              snarf
-1              Ix{r3=?}, Iy{r4=?}   I2{r3=?}, I4{r3=?}
0      I1       Iz{r5=?}
1               I1{r3=1}             I2{r3=1}, I4{r3=1}
2      I2, I4
3      I3       I2{r4=2}, I4{r5=3}
4               I3{r3=2}             I4{r3=2}
5      I4
6               I4{r5=4}