Transcript
Page 1

Memory Systems Then, Now, and To Come

Prof. Dr. Bruce Jacob
Keystone Professor & Director of Computer Engineering Program
Electrical & Computer Engineering
University of Maryland at College Park

ISC’10: New Memory & Storage Hierarchies for HPC – Opportunities & Challenges

Page 2

THEN

Page 3

ENEE 446: Digital Computer Design — IBM 360/91’s Out-of-Order Fixed-Point Pipe


This is a guess (my guess) as to the implementation of the out-of-order instruction issue and commit mechanism in IBM’s System/360 Model 91, fixed-point pipeline. The guess is based on the text and figures 2, 3, 6 and 7 in the article “The IBM System/360 Model 91: Machine Philosophy and Instruction-Handling” by Anderson, Sparacio, and Tomasulo.

The fundamental problem is this: how does the system know when a given instruction may write to the general-purpose register file? The pipeline has in-order enqueue, out-of-order execution and completion, and it synchronizes through the register file: instructions reading their operands from the register file do not obtain them from forwarding paths. The pipeline enforces coherent writing to the register file by scheduling when instructions that would otherwise cause a write-after-write hazard are allowed access to the register file. The article mentions that each instruction that writes to a GPR increments a counter associated with that register during decode and decrements that counter at the time of register file update, and the article says that no instruction may read an operand from a GPR unless its associated counter has returned to zero. What is missing is the mechanism by which an instruction knows that it is its turn to write its result into the GPR.
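A minimal sketch of the counter scheme just described (hypothetical C, names mine, not from the paper): each GPR carries a pending-write count, incremented at decode and decremented at register-file update, and a reader may take its operand from the register file only when the count is zero. The open question the text raises — which of several pending writers goes next — is deliberately left out.

#include <stdio.h>

#define NUM_GPRS 16

/* One pending-write counter per general-purpose register, following the
 * scheme described in Anderson/Sparacio/Tomasulo: incremented when a
 * GPR-writing instruction is decoded, decremented when that instruction
 * updates the register file. (Function names are illustrative.) */
static int pending_writes[NUM_GPRS];

/* Decode stage: an instruction that will write GPR r registers its intent. */
void decode_writer(int r) { pending_writes[r]++; }

/* Writeback: the instruction finally updates GPR r. */
void writeback(int r)     { pending_writes[r]--; }

/* An instruction may read GPR r from the register file only when no
 * earlier writer is still outstanding; otherwise it must wait. */
int may_read(int r)       { return pending_writes[r] == 0; }

int main(void) {
    decode_writer(3);                            /* I1: will write R3            */
    decode_writer(3);                            /* I2: will also write R3 (WAW) */
    printf("read R3 ok? %d\n", may_read(3));     /* 0: two writes pending        */
    writeback(3);                                /* I1 retires its result        */
    printf("read R3 ok? %d\n", may_read(3));     /* 0: I2 still pending          */
    writeback(3);                                /* I2 retires its result        */
    printf("read R3 ok? %d\n", may_read(3));     /* 1: counter back to zero      */
    return 0;
}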

The paper does not give a detailed diagram of the fixed-point pipeline ... the diagrams are a bit simple, but this is probably enough detail (taken from figure 2):

For comparison, here is the floating-point pipeline, taken from figure 3:

We’ll assume for the moment that the fixed-point pipeline is similar to the floating-point pipeline, as is suggested by the general flow shown in figure 2. Here is the behavior of four instructions in the pipeline:

[Figure: per-instruction pipeline stages for instructions 1–3 — generate instruction address, instruction access, decode & generate operand address, operand access, execute instruction, result available. Floating-point pipeline labels (from figure 3) include: generate instruction address, instruction access, move instruction to decode area, decode instruction, generate operand address, operand access, storage operand return, transmit instruction to floating-point execution hardware, transmit operand to execution hardware, wait for operand, decode/issue to arithmetic unit, instruction execution; a 60 nsec interval is marked.]

IBM 360/91’s Out-of-Order Fixed-Point Pipe. ENEE 446: Digital Computer Design, Fall 2000. Prof. Bruce Jacob.

IBM 360/91 Fixed-Point Pipe

Page 4

NOW

Page 5

DRAM Read Timing

Column Read

tRP = 15 ns, tRCD = 15 ns, tRAS = 37.5 ns

CL = 8

Bank Precharge

Row Activate (15 ns) and Data Restore (another 22 ns)

DATA (on bus)

BL = 8

Cost of access is high; requires significant effort to amortize this over the (increasingly short) payoff.
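To put a rough number on that cost, the sketch below adds up a worst-case closed-row read from the slide's parameters. The clock period is my assumption (a DDR3-1066-class part with tCK = 1.875 ns, so CL = 8 cycles is ~15 ns); everything else is from the slide.

#include <stdio.h>

int main(void) {
    /* Timing parameters from the slide; tCK is an assumption (DDR3-1066-
     * class part, 1.875 ns clock, so CL = 8 cycles is ~15 ns). */
    double tRP  = 15.0;          /* precharge, ns                */
    double tRCD = 15.0;          /* activate (row to column), ns */
    double tCK  = 1.875;         /* clock period, ns (assumed)   */
    int    CL   = 8;             /* CAS latency, cycles          */
    int    BL   = 8;             /* burst length, transfers      */

    /* Worst case: the bank holds the wrong row, so the access pays
     * precharge + activate + column read + the burst itself
     * (BL/2 clocks at double data rate). */
    double burst   = (BL / 2.0) * tCK;
    double latency = tRP + tRCD + CL * tCK + burst;

    printf("closed-row read: %.1f ns to first data, %.1f ns total\n",
           tRP + tRCD + CL * tCK, latency);
    return 0;
}

Roughly 45 ns to first data and over 50 ns for the full burst — which is exactly the window the "significant effort" on the next slide tries to hide.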

Page 6

“Significant Effort” [deep pipes, reordering]

[Figure: a CPU/$ sends outgoing bus requests (e.g., Read A; Write A, data; Read W; Read Z; Read Y; Read B; Write X, data; Read Z; Write Q, data) to the memory controller (MC), which turns them into per-bank DRAM command streams (ACT, RD, WR, PRE) on the cmd and beat timelines and returns read data.]

Page 7

Consequence: Due to buffering & reordering at multiple levels, the average latency is typically much higher than the minimum latency
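A toy illustration of why that happens (the numbers are mine, purely for illustration): even with a fixed device latency, requests that arrive faster than the device can drain them queue up behind one another, so the average observed latency climbs well above the minimum.

#include <stdio.h>

int main(void) {
    /* Toy queueing model (illustrative numbers, not measurements):
     * requests arrive every ARRIVAL ns, the device services one every
     * SERVICE ns, and each request's latency is queueing delay + service. */
    const double SERVICE = 50.0;   /* ns per request at the device   */
    const double ARRIVAL = 40.0;   /* ns between arrivals (too fast) */
    const int    N       = 100;    /* requests simulated             */

    double device_free = 0.0, total_latency = 0.0;
    for (int i = 0; i < N; i++) {
        double arrive = i * ARRIVAL;
        double start  = (arrive > device_free) ? arrive : device_free;
        double finish = start + SERVICE;
        total_latency += finish - arrive;      /* queueing + service */
        device_free    = finish;
    }
    printf("minimum latency: %.0f ns, average observed: %.0f ns\n",
           SERVICE, total_latency / N);
    return 0;
}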

Page 8

TO COME

Page 9

Move from concurrency via pipelining to concurrency via parallelism (mirrors recent developments in CPU design)

Page 10

Problem: Capacity

Page 11

Problem: Bandwidth

• Like capacity, primarily a power and heat issue: can get more BW by adding busses, but they need to be narrow & thus fast. Fast = hot.

• Required BW per core is roughly 1 GB/s, and the number of cores per chip is increasing (see the sketch after this list)

• Graph: thread-based load (SPECjbb), memory set to 52 GB/s sustained … cf. 32-core Sun Niagara: saturates at 25.6 GB/s
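A back-of-the-envelope check of that pressure (the core counts below are hypothetical; the ~1 GB/s per-core demand and the 25.6 GB/s saturation point are the slide's figures): multiply cores by per-core demand and compare against a fixed sustained memory bandwidth.

#include <stdio.h>

int main(void) {
    /* Rough sizing only; per-core demand is the slide's ~1 GB/s figure,
     * the sustained capacity is the Niagara-class 25.6 GB/s, and the
     * core counts are illustrative. */
    const double PER_CORE_GBS = 1.0;
    const double SUSTAINED    = 25.6;

    for (int cores = 8; cores <= 64; cores *= 2) {
        double demand = cores * PER_CORE_GBS;
        printf("%2d cores: demand %5.1f GB/s -> %s\n", cores, demand,
               demand <= SUSTAINED ? "fits" : "saturates the memory system");
    }
    return 0;
}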

Page 12

Problem: TLB Reach

• Doesn’t scale at all (still small and not upgradeable)

• Currently accounts for 20+% of system overhead

• Higher associativity (which offsets the TLB’s small size) can create a power issue

• The TLB’s “reach” is actually much worse than it looks, because of different access granularities (see the sketch below)
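To make “reach” concrete, a small sketch (the entry count, page sizes, and cache size are illustrative assumptions, not figures from the slide): reach is simply entries × page size, which is why a few-hundred-entry TLB in front of tens of megabytes of on-chip cache covers only a sliver of it, and why superpages help so much.

#include <stdio.h>

int main(void) {
    /* TLB reach = number of entries x page size. Entry count, page
     * sizes, and cache size below are illustrative assumptions. */
    const int    ENTRIES  = 512;                  /* typical L2 TLB */
    const long   PAGE_4K  = 4L * 1024;
    const long   PAGE_2M  = 2L * 1024 * 1024;
    const double CACHE_MB = 10.0;                 /* on-chip cache  */

    double reach4k = ENTRIES * (double)PAGE_4K / (1024 * 1024);
    double reach2m = ENTRIES * (double)PAGE_2M / (1024 * 1024);

    printf("4KB pages: reach %.1f MB (%.1fx the %.0f MB cache)\n",
           reach4k, reach4k / CACHE_MB, CACHE_MB);
    printf("2MB pages: reach %.1f MB (%.1fx the %.0f MB cache)\n",
           reach2m, reach2m / CACHE_MB, CACHE_MB);
    return 0;
}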

[Figure: the organization and operation of a hash-associative cache compared to (a) a traditional n-way set-associative cache and (b) a traditional direct-mapped cache with n banks; (c) the hash-associative cache. The traditional n-way set-associative cache drives the sense amps for n tag arrays and makes n tag comparisons; the direct-mapped and hash-associative caches each drive one set of sense amps and make one tag comparison. Datapath labels: effective address, segment registers, TLB, VPN bits, set index, byte in block, tag/data arrays with sense amps, permissions, cache index, bank select / hash bank select, hit?, only one bank is activated, output. A garbled callout comparing the hash-associative cache’s power, access time, and performance (as percentages of set-associative and direct-mapped designs) is not recoverable. Annotations: “Maps ~1MB”, “~10MB”.]

Page 13

Trend: Disk, Flash, and other NV

Rotating Disks vs. SSDs: Main take-aways

Forget everything you knew about rotating disks. SSDs are different

SSDs are complex software systems

One size doesn’t fit all


[Figure: (a) HDD — magnet structure of voice coil motor, spindle & motor, disk, actuator, load/unload mechanism; (b) SSD — flash memory arrays.]

• Flash is currently eating Disk’s lunch

• PCM is expected to eat Flash’s lunch

Page 14

Obvious Conclusions I

• A new take on superpages that might overcome previous barriers
• A new cache design that enables very large L1 caches
• A virtual memory system for modern capacities

These are ideas that have been in development in our research group over the past 5–6 years.

Fully Buffered DIMM, take 2 (aka “BOB”)

In the near term, the desired solution for the DRAM system is one that allows existing commodity DDRx DIMMs to be used, one that supports 100 DIMMs per CPU socket at a bare minimum, and one that does not require active heartbeats to keep its channels alive—i.e., it

What Every CS/CE Needs to Know about the Memory System — Bruce Jacob, U. Maryland


[Figure X. A DRAM-system organization to solve the capacity & power problems: a CPU (e.g. multicore) and a master memory controller feeding multiple memory controllers (MC); channels are labeled “fast, wide channel,” “fast, narrow channels,” and “slow, wide channel.”]

• Want capacity without sacrificing bandwidth

• Need a new memory system architecture

• This is coming (details will change, of course); a rough fan-out sketch follows
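A rough way to see why the organization in the figure above buys capacity (every count below is an assumption for illustration, not a parameter of the actual design): each narrow link off the master controller fans out to a slave controller that drives several commodity DIMMs, so DIMM count per socket scales with links × channels × DIMMs-per-channel rather than with the CPU's own DDR channel count.

#include <stdio.h>

int main(void) {
    /* Illustrative fan-out arithmetic for a master/slave controller
     * organization; the specific numbers are assumptions. */
    const int NARROW_LINKS   = 8;   /* fast, narrow links off the CPU    */
    const int SLAVE_CHANNELS = 2;   /* DDR channels per slave controller */
    const int DIMMS_PER_CHAN = 8;   /* commodity DIMMs per slow channel  */
    const int GB_PER_DIMM    = 8;

    int dimms    = NARROW_LINKS * SLAVE_CHANNELS * DIMMS_PER_CHAN;
    int capacity = dimms * GB_PER_DIMM;
    printf("%d DIMMs per socket, %d GB total\n", dimms, capacity);
    return 0;
}

With these made-up numbers the socket already clears the 100-DIMM bar mentioned above without adding pins to the CPU package.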


Page 16

Obvious Conclusions II

• Flash/NV is inexpensive, is fast (rel. to disk), and has a better capacity roadmap than DRAM

• Make it a first-class citizen in the memory hierarchy

• Access it via a load/store interface, use DRAM to buffer writes, manage it in software (a minimal sketch follows the figure below)

• Probably reduces capacity pressure on DRAM system

[Figure: CPU/$ backed by DRAM and FLASH in the hierarchy.]
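The sketch below shows the flavor of that arrangement (all function and structure names are mine, not a real API, and the flash accessors are stand-ins): stores to NV-backed addresses land in a small DRAM-resident buffer and are drained to flash later, while loads check the buffer before falling through to flash.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical software-managed write buffer: DRAM absorbs stores to
 * flash/NV-backed addresses; a drain writes them out to the device. */
#define BUF_SLOTS 4

struct wbuf_entry { uint64_t addr; uint64_t data; int valid; };
static struct wbuf_entry wbuf[BUF_SLOTS];

/* Stand-ins for the real device accessors (assumed, not an actual API). */
static uint64_t flash_backing[1024];
static void     flash_write(uint64_t a, uint64_t d) { flash_backing[a % 1024] = d; }
static uint64_t flash_read(uint64_t a)              { return flash_backing[a % 1024]; }

void nv_store(uint64_t addr, uint64_t data) {
    /* Coalesce with an existing entry or claim a free slot; if the
     * buffer is full, drain one entry synchronously (simplified). */
    for (int i = 0; i < BUF_SLOTS; i++)
        if (wbuf[i].valid && wbuf[i].addr == addr) { wbuf[i].data = data; return; }
    for (int i = 0; i < BUF_SLOTS; i++)
        if (!wbuf[i].valid) { wbuf[i] = (struct wbuf_entry){addr, data, 1}; return; }
    flash_write(wbuf[0].addr, wbuf[0].data);        /* forced drain */
    wbuf[0] = (struct wbuf_entry){addr, data, 1};
}

uint64_t nv_load(uint64_t addr) {
    for (int i = 0; i < BUF_SLOTS; i++)             /* hit in the DRAM buffer? */
        if (wbuf[i].valid && wbuf[i].addr == addr) return wbuf[i].data;
    return flash_read(addr);                        /* else read the flash     */
}

int main(void) {
    nv_store(42, 0xdeadbeef);
    printf("load 42 -> %llx\n", (unsigned long long)nv_load(42));
    return 0;
}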

Page 17

Obvious Conclusions III

• Reduce translation overhead (both in performance & power)

• Need an OS/arch redesign

• Revisit superpages, multi-level TLBs

• Revisit SASOS concepts, *location of translation point/s* (i.e., PGAS)

• Arguably a good programming model for CMP

[Excerpt: Chapter 31, “Virtual Memory,” p. 899 — Memory Systems: Cache, DRAM, Disk]

… pages to physical pages is one-to-one, there are no virtual cache synonym problems.

When the synonym problem is eliminated, there is no longer a need to flush a virtual cache or a TLB for consistency reasons. The only time flushing is required is when virtual segments are remapped to new physical pages, such as when the operating system runs out of unused segment identifiers and needs to reuse old ones. If there is any data left in the caches or TLB tagged by the old virtual address, data inconsistencies can occur. Direct Memory Access (DMA) also requires flushing of the affected region before a transaction, as an I/O controller does not know whether the data it overwrites is currently in a virtual cache.

The issue becomes one of segment granularity. If segments represent the granularity of sharing and data placement within an address space (but not the granularity of data movement between memory and disk), then segments must be numerous and small. They should still be larger than the L1 cache to keep the critical path between address generation and cache access clear. Therefore, the address space should be divided into a large number of small segments, for instance, 1024 4-MB segments, 4096 1-MB segments, etc.

Disjunct Page Table

Figure 31.15 illustrates an example mechanism. The segmentation granularity is 4 MB. The 4-GB address space is divided into 1024 segments. This simplifies the design and should make the discussion clear. A 4-byte PTE can map a 4-KB page, which can, in turn, map an entire 4-MB segment. The “disjunct” page table organization uses a single global table to map the entire 52-bit segmented virtual-address space yet gives each process-address space its own addressing scope. Any single process is mapped onto 4 GB of this global space, and so it requires 4 MB of the global table at any given moment (this is easily modified to support MIPS-style addressing in which the user process owns only half the 4 GB [Kane & Heinrich 1992]). The page table organization is pictured in Figure 31.16. It shows the global table as a 4-TB linear structure at the top of the global virtual-address space, composed of 2^30 4-KB PTE pages that each map a 4-MB segment. If each user process has a 4-GB address space, the user space can be mapped by 1024 PTE pages in the global page table. These 1024 PTE pages make up a user page table, a disjunct set of virtual pages at the top of the global address space. These 1024 pages can be mapped by 1024 PTEs—a collective structure small enough to wire down in physical memory for every running process (4 KB, if each is 4 bytes). This structure is termed the per-user root page table in Figure 31.16. In addition, there must be a table for every process containing 1024 segment IDs and per-segment protection information.

[FIGURE 31.14: The use of segments to provide virtual-address aliasing. Labels: Global Virtual Space; Process A, Process B, Process C; Physical Memory; NULL (segment only partially used); Paged Segment.]

[FIGURE 31.15: Segmentation mechanism used in discussion. A 32-bit effective address (segno, 10 bits; segment & page offsets, 22 bits) indexes the segment registers to produce a 52-bit virtual address (segment ID, 30 bits; segment & page offsets, 22 bits), which feeds the TLB and page table and the cache.]
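To make the Figure 31.15 mechanism concrete, a small sketch (my C rendering of the figure, not code from the book): the top 10 bits of the 32-bit effective address select one of 1024 segment registers, whose 30-bit segment ID is concatenated with the 22-bit segment/page offset to form the 52-bit virtual address seen by the TLB, page table, and cache.

#include <stdint.h>
#include <stdio.h>

/* Segmentation as in Figure 31.15: 32-bit effective address =
 * 10-bit segment number + 22-bit offset; 1024 segment registers hold
 * 30-bit segment IDs; the 52-bit virtual address is
 * (segment ID << 22) | offset. */
#define NUM_SEGS 1024
static uint32_t seg_regs[NUM_SEGS];          /* 30-bit segment IDs */

uint64_t ea_to_va(uint32_t ea) {
    uint32_t segno  = ea >> 22;              /* top 10 bits            */
    uint32_t offset = ea & ((1u << 22) - 1); /* low 22 bits            */
    uint64_t segid  = seg_regs[segno] & ((1u << 30) - 1);
    return (segid << 22) | offset;           /* 52-bit virtual address */
}

int main(void) {
    seg_regs[3] = 0x1234567;                 /* hypothetical mapping   */
    uint32_t ea = (3u << 22) | 0x00ABCD;     /* segment 3, some offset */
    printf("EA 0x%08x -> VA 0x%013llx\n", ea,
           (unsigned long long)ea_to_va(ea));
    return 0;
}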


[Excerpt: Memory Systems: Cache, DRAM, Disk, p. 898]

[FIGURE 31.13: The PowerPC segmentation mechanism. Segmentation extends a 32-bit user address into a 52-bit global address; the global address can be used to index the caches. Labels: 32-bit effective address (seg#, segment offset); 16 segment registers; segment ID + segment offset forming the 52-bit virtual address; TLB & page table; page frame number + page offset forming the 32-bit physical address.]

… is that the Pentium’s global space is no larger than an individual user-level address space, and there is no mechanism to prevent different segments from overlapping one another in the global 4-GB space.

In contrast, the IBM 801 [Chang & Mergen 1988] introduced a fixed-size segmented architecture that continued through to the POWER and PowerPC architectures [IBM & Motorola 1993, May et al. 1994, Weiss & Smith 1994], shown in Figure 31.13. The PowerPC memory-management design maps user addresses onto a global flat address space much larger than each per-process address space. It is this extended virtual address space that is mapped by the TLBs and page table.

Segmented architectures need not use address-space identifiers; address space protection is guaranteed by the segmentation mechanism.[4] If two processes have the same segment identifier, they share that virtual segment by definition. Similarly, if a process has a given segment identifier in several of its segment registers, it has mapped the segment into its address space at multiple locations. The operating system can enforce inter-process protection by disallowing shared segment identifiers, or it can share memory between processes by overlapping segment identifiers.

The “Virtue” of Segmentation

One obvious solution to the synonym and shared memory problems is to use global naming, as in a SASOS implementation, so that every physical address corresponds to exactly one virtual location. This eliminates redundancy of PTEs for any given physical page, with significant performance and space savings. However, it does not allow processes to map objects at multiple locations within their address spaces; all processes must use the same name for the same data, which can create headaches for an operating system, as described earlier in “Perspective on Aliasing.”

A segmented architecture avoids this problem; segmentation divides virtual aliasing and the synonym problem into two orthogonal issues. A one-to-one mapping from global space to physical space can be maintained—thereby eliminating the synonym problem—while supporting virtual aliases by independently mapping segments in process-address spaces onto segments in the global space. Such an organization is illustrated in Figure 31.14. In the figure, three processes share two different segments and have mapped the segments into arbitrary segment slots. Two of the processes have mapped the same segment at multiple locations in their address spaces. The page table maps the segments onto physical memory at the granularity of pages. If the mapping of global …

[4] Page-level protection is a different thing entirely. Whereas address space protection is intended to keep processes from accessing each other’s data, page-level protection is intended to protect pages from misuse. For instance, page-level protection keeps processes from writing to text pages by marking them read-only, etc. Page-level protection is typically supported through a TLB, but could be supported on a larger granularity through the segmentation mechanism. However, there is nothing intrinsic to segments that provides page-level protection, whereas address space protection is intrinsic to their nature.


Page 18

Acknowledgements & Shameless Plugs

• Much of this has appeared previously in our books, papers, etc.

• The Memory System (You Can’t Avoid It; You Can’t Ignore It; You Can’t Fake It). B. Jacob, with contributions by S. Srinivasan and D. T. Wang. ISBN 978-1598295870. Morgan & Claypool Publishers: San Rafael CA, 2009.

• Memory Systems: Cache, DRAM, Disk. B. Jacob, S. Ng, and D. Wang, with contributions by S. Rodriguez. ISBN 978-0123797513. Morgan Kaufmann: San Francisco CA, 2007.

• Support from Intel, DoD, DOE, Sandia National Lab, Micron, Cypress Semiconductor

Page 19

Acknowledgements & Shameless Plugs

• DRAMsim — the world’s most accurate (hardware-validated) DRAM-system simulator:

• “DRAMsim: A memory-system simulator.” D. Wang, B. Ganesh, N. Tuaycharoen, K. Baynes, A. Jaleel, and B. Jacob. SIGARCH Computer Architecture News, vol. 33, no. 4, pp. 100–107. September 2005.

• Version II now available at www.ece.umd.edu/dramsim

Page 20

ETC.

Page 21

Problem: We don’t understand it very well

Page 22

How it is represented

if (cache_miss(addr)) {

cycle_count += DRAM_LATENCY;

}

even in simulators with “cycle accurate” memory systems—no lie
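For contrast, even a minimal open-row bank model (a sketch, not any particular simulator's code; the timings echo the DRAM-read-timing slide) already behaves very differently from the fixed constant above: the cost of an access depends on whether the target row is open, closed, or owned by a different row.

#include <stdio.h>

/* A minimal open-row bank model (illustrative timing, in ns). Unlike a
 * fixed DRAM_LATENCY, the cost of an access depends on bank state. */
#define NBANKS 8
#define tRP  15
#define tRCD 15
#define tCAS 15

static int open_row[NBANKS];
static int row_valid[NBANKS];

int dram_access(int bank, int row) {
    if (row_valid[bank] && open_row[bank] == row)
        return tCAS;                         /* row hit: column access only */
    int lat = tCAS + tRCD + (row_valid[bank] ? tRP : 0);  /* closed/conflict */
    open_row[bank]  = row;
    row_valid[bank] = 1;
    return lat;
}

int main(void) {
    printf("cold access:     %d ns\n", dram_access(0, 7));
    printf("same-row access: %d ns\n", dram_access(0, 7));
    printf("row conflict:    %d ns\n", dram_access(0, 9));
    return 0;
}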

Page 23

Problem: Capacity

[Figure: two memory-controller configurations — JEDEC DDRx: ~10 W/DIMM, ~20 W total; FB-DIMM: ~10 W/DIMM, ~300 W total.]

Page 24

The BlackWidow system has a number of innovative attributes, including:

• scalable address translation that allows all of physical memory to be mapped simultaneously,
• load buffers to provide abundant concurrency for global memory references,
• decoupled vector load-store and execution units, allowing dynamic tolerance of memory latency,
• decoupled vector and scalar execution units, allowing run-ahead scalar execution with efficient scalar-vector synchronization primitives,
• vector atomic memory operations (AMOs) with a small cache co-located with each memory bank for efficient read-modify-write operations to main memory,
• a highly banked cache hierarchy with hashing to avoid stride sensitivity,
• a high-bandwidth memory system optimized for good efficiency on small granularity accesses, and
• a cache coherence protocol optimized for migratory sharing and efficient scaling to large system size, combined with a relaxed memory consistency model with release and acquire semantics to exploit concurrency of global memory references.

In this paper, we present the architecture of the Cray BlackWidow multiprocessor. As a starting point, we describe the node organization, packaging and system topology in Section 2. We describe the BW processor microarchitecture in Section 3, the memory system in Section 4, and a number of reliability features in Section 5. Section 6 presents preliminary performance results. Section 7 highlights prior related work. Finally, Section 8 summarizes the key attributes of the BlackWidow system architecture.

2 BlackWidow System Overview

The BlackWidow system is built upon four-processor SMP nodes, interconnected with a high-radix folded-Clos (a.k.a. fat-tree) network. This section describes the node organization, network topology and physical packaging of the system.

2.1 Node Organization

Figure 1 shows a block diagram of a BlackWidow compute node consisting of four BW processors, and 16 Weaver chips with their associated DDR2 memory parts co-located on a memory daughter card (MDC). The processor to memory channels between each BW chip and Weaver chip use a 4-bit wide 5.0 Gbaud serializer/deserializer (SerDes) for an aggregate channel bandwidth of 16×2.5 Gbytes/s = 40 Gbytes/s per direction — 160 Gbytes/s per direction for each node.

The Weaver chips serve as pin expanders, converting a small number of high-speed differential signals from the BW processors into a large number of single-ended signals that interface to commodity DDR2 memory parts. Each Weaver chip manages four DDR2 memory channels, each with a 40-bit-wide data/ECC path. The 32-bit data path, coupled with the four-deep memory access bursts of DDR2, provides a minimum transfer granularity of only 16 bytes. Thus the BlackWidow memory daughter card has twice the peak data bandwidth and four times the single-word bandwidth of a standard 72-bit-wide DIMM. Each of the eight MDCs contains 20 or 40 memory parts, providing up to 128 Gbytes of memory capacity per node using 1-Gbit memory parts.

[Figure 1. BlackWidow node organization.]

2.2 Network Topology

To reduce the cost and the latency of the network, BlackWidow uses a high-radix, folded-Clos topology, which is modified to permit sidelinks amongst multiple peer subtrees. Deterministic routing is performed using a hash function to obliviously balance network traffic while maintaining point-to-point ordering on a cache line basis. A BlackWidow system of up to 1024 processors can be constructed by connecting up to 32 rank 1 (R1) subtrees, each with 32 processors, to rank 2 (R2) routers. A system with up to 4608 processors can be constructed by connecting up to nine 512-processor R2 subtrees via side links. Up to 16K processors may be connected by a rank 3 (R3) network where up to 32 512-processor R2 subtrees are connected by R3 routers. Multiple R3 subtrees can be interconnected using sidelinks to scale up to 32K processors.

The BlackWidow system topology and packaging scheme enables very flexible provisioning of network bandwidth. For instance, by only using a single rank 1 router module, instead of two as shown in Figure 2(a), the port bandwidth of each processor is reduced in half — halving both the cost of the network and its global bandwidth. An additional bandwidth taper can be achieved by connecting only a subset of the rank 1 to rank 2 network cables, reducing cabling cost and R2 router cost at the expense of the bandwidth taper as shown by the 1/4 taper in Figure 2(b).

The YARC chip is a high-radix router[2] used in the network of the Cray BlackWidow multiprocessor. Using YARC routers, each with 64 3-bit wide ports, the BlackWidow scales up to 32K processors using a folded-Clos [4] topology with a worst-case diameter …

[2] Each YARC port has a peak data rate of 6.25 Gb/s in each direction, however, to tolerate longer network cables, we reduced the target frequency to 5.0 Gb/s.
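A quick check of the channel arithmetic quoted in the excerpt above (the SerDes figures are from the excerpt; converting the 4-bit, 5.0 Gbaud link to 2.5 Gbytes/s per direction is taken at face value, ignoring encoding overhead): 16 Weaver channels per BW processor and four processors per node.

#include <stdio.h>

int main(void) {
    /* Channel arithmetic from the BlackWidow excerpt above. */
    const double LANE_GBAUD = 5.0;   /* per-lane signalling rate    */
    const int    LANES      = 4;     /* 4-bit-wide SerDes channel   */
    const int    CHANNELS   = 16;    /* Weaver channels per BW chip */
    const int    PROCS      = 4;     /* BW processors per node      */

    double per_channel = LANE_GBAUD * LANES / 8.0;   /* 2.5 Gbytes/s */
    double per_proc    = per_channel * CHANNELS;     /* 40 Gbytes/s  */
    double per_node    = per_proc * PROCS;           /* 160 Gbytes/s */

    printf("%.1f GB/s per channel, %.0f GB/s per processor, "
           "%.0f GB/s per node (each direction)\n",
           per_channel, per_proc, per_node);
    return 0;
}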

Problem: Bandwidth

Sometimes bandwidth is everything ...

Cray BlackWidow memory system

