More solutions


Page 1: More solutions

Copyright 1998 Morgan Kaufmann Publishers, Inc. All rights reserved.

More solutions

• Superpipelining – more stages: use more than the 5 pipeline stages we have seen so far.

• Dynamic pipeline scheduling – change the order in which instructions are executed, when possible, to fill gaps (i.e., instead of inserting bubbles).

• Superscalar – perform things in parallel: execute two instructions simultaneously. This means fetching two instructions together, decoding them at the same time (with more read and write ports in the GPR file), and executing them, i.e., almost doubling the hardware.

Page 2: More solutions


Memory Hierarchy

In 1998

Technology   Access time           Cost (1998)                Used for
SRAM         2 - 25 ns             $100 to $250 per MByte     Cache
DRAM         60 - 120 ns           $5 to $10 per MByte        Memory
Disk         10 to 20 million ns   $0.10 to $0.20 per MByte   Disk

[Figure: levels in the memory hierarchy. The CPU sits next to Level 1; going from Level 1 down to Level n, the distance from the CPU and the access time increase, and so does the size of the memory at each level.]

Users want fast and inexpensive memory

The solution: a memory hierarchy

A memory hierarchy in which the faster but smaller part is "close" to the CPU and used most of the time, and the slower but larger part is "far" from the CPU, gives us the illusion of having a fast, large, and inexpensive memory.

Page 3: More solutions


Locality

• Temporal locality: if we accessed a certain address, the chances are high that we will access it again shortly. For data this is so because we probably update it; for instructions it is so because we tend to use loops.

• Spatial locality: if we accessed a certain address, the chances are high that we will access its neighbors. For instructions this is due to the sequential nature of programs; for data it is because we use groups of variables such as arrays.

• So, let's keep recently used data and instructions in a fast memory (i.e., close to the CPU). This memory is called the cache.
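To make this concrete, here is a tiny sketch of how both kinds of locality appear in a program's access stream; the trace and the base address are hypothetical, not from the slides:

```python
# Toy illustration of locality in an address trace (hypothetical addresses).
# Temporal locality: the same addresses recur; spatial locality: neighbors follow.

def access_trace():
    trace = []
    base = 0x1000                       # hypothetical start of an array
    for i in range(4):                  # loop body re-executed -> temporal locality
        for j in range(8):              # sequential array walk -> spatial locality
            trace.append(base + 4 * j)  # consecutive word addresses
    return trace

if __name__ == "__main__":
    t = access_trace()
    print("unique addresses:", len(set(t)), "total accesses:", len(t))
    # 8 unique addresses accessed 32 times: reuse (temporal) of a small
    # contiguous region (spatial) is exactly what a cache exploits.
```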

Page 4: More solutions


The cache principle

The important terms

Hit: a successful search for info in the cache. If it is in the cache, we have a hit, and we continue executing the instructions.

Miss: an unsuccessful search for info in the cache. If it is not in the cache, we have a miss, and we have to bring the requested data up from the next, slower level of the hierarchy. Until then, we must stall the pipeline!

Block: the basic unit that is loaded into the cache when a miss occurs. The minimal block size is a single word.

Page 5: More solutions


Direct Mapped Cache

[Figure: an 8-entry direct mapped cache (indices 000-111) and main memory. Memory addresses 00001, 00101, 01001, 01101, 10001, 10101, 11001 and 11101 all end in 001, so they all map to cache index 001.]
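A small sketch of the mapping shown in the figure, assuming the 8-entry cache above: each memory block lands at index (block address mod 8), i.e. its low three address bits.

```python
# Direct-mapped placement: cache index = block address mod (number of cache blocks).
NUM_BLOCKS = 8  # the 8 cache entries (indices 000..111) in the figure

for addr in [0b00001, 0b00101, 0b01001, 0b01101, 0b10001, 0b10101, 0b11001, 0b11101]:
    index = addr % NUM_BLOCKS          # low 3 bits of the block address
    tag   = addr // NUM_BLOCKS         # remaining high bits, stored for comparison
    print(f"memory block {addr:05b} -> cache index {index:03b}, tag {tag:02b}")
# All eight addresses end in 001, so they all compete for cache index 001.
```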

Page 6: More solutions


Direct Mapped Cache: block = 1 word, size of cache = 16 words (2^n blocks in general).

Page 7: More solutions


One possible arrangement for MIPS cache:

[Figure: a direct mapped cache with 1024 one-word blocks. The 32-bit address (bits 31-0) is split into a 20-bit tag (bits 31-12), a 10-bit index (bits 11-2) and a 2-bit byte offset (bits 1-0). Each of the 1024 entries (0 to 1023) holds a valid bit, a 20-bit tag and 32 bits of data; the stored tag is compared with the address tag to produce the Hit signal, and the selected entry drives the 32-bit Data output.]

Page 8: More solutions


Another possibility for MIPS (actual DECStation 3100):

Page 9: More solutions


For any CPU with a 32-bit address and a cache of 2^n one-word blocks:

[Figure: the address (bits 31-0) is split into a (30-n)-bit tag, an n-bit index (bits n+1 to 2) and a 2-bit byte offset (bits 1-0). Each of the 2^n entries (0 to 2^n - 1) holds a valid bit, a (30-n)-bit tag and 32 bits of data; the tag comparison produces the Hit signal and the selected entry drives the 32-bit Data output.]

Page 10: More solutions


Handling writes:

• Write through – anything we write is written both to the cache and to the memory (we are still discussing a single-word block).

• Write through usually uses a write buffer – since writing to the slower memory takes too much time, we use an intermediate buffer. It absorbs the write "bursts" of the program and slowly but surely writes them to the memory. (If the buffer gets full, we must stall the CPU.)

• Write-back – another method is to copy the cache block into the memory only when the block is replaced by another block. This is called write-back or copy-back.
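A rough sketch of the two policies, assuming single-word blocks; the class names and the FIFO write buffer below are illustrative, not from the slides:

```python
from collections import deque

class WriteThroughCache:
    """Every store updates the cache and is queued for memory via a write buffer."""
    def __init__(self, buffer_size=4):
        self.data = {}                        # address -> word
        self.write_buffer = deque()
        self.buffer_size = buffer_size

    def store(self, addr, word, memory):
        self.data[addr] = word
        if len(self.write_buffer) >= self.buffer_size:
            self.drain_one(memory)            # buffer full: the CPU would stall here
        self.write_buffer.append((addr, word))

    def drain_one(self, memory):
        addr, word = self.write_buffer.popleft()
        memory[addr] = word                   # slow DRAM write happens in the background

class WriteBackCache:
    """Stores only mark the block dirty; memory is updated when the block is evicted."""
    def __init__(self):
        self.data = {}                        # address -> (word, dirty)

    def store(self, addr, word):
        self.data[addr] = (word, True)        # dirty bit set, no memory traffic yet

    def evict(self, addr, memory):
        word, dirty = self.data.pop(addr)
        if dirty:
            memory[addr] = word               # write back only if the block was modified
```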

Page 11: More solutions


The block size

The block does not have to be a single word. When we increase the size of the cache blocks, we improve the hit rate because we reduce the misses, exploiting the spatial locality of the instructions (mainly) but also of the data (e.g., in image processing). Here is a comparison of the miss rates of two programs with one-word vs. four-word blocks:

Program   Block size in words   Instruction miss rate   Data miss rate   Effective combined miss rate
gcc       1                     6.1%                    2.1%             5.4%
gcc       4                     2.0%                    1.7%             1.9%
spice     1                     1.2%                    1.3%             1.2%
spice     4                     0.3%                    0.6%             0.4%

Page 12: More solutions


Direct Mapped Cache: block = 1 word, size of cache = 16 words.

Page 13: More solutions


Direct Mapped Cache: block = 4 words, size of cache = 16 words.

This is still called a direct mapped cache since each block in the memory is mapped directly to a single block in the cache

1 block = 4 words

Page 14: More solutions


A 4-word-block direct mapped implementation:

[Figure: the 32-bit address (bits 31-0) is split into a 16-bit tag (bits 31-16), a 12-bit index (bits 15-4), a 2-bit block offset (bits 3-2) and a 2-bit byte offset (bits 1-0). The cache has 4K entries, each holding a valid bit, a 16-bit tag and 128 bits (4 words) of data; the tag comparison produces Hit, and a multiplexor controlled by the block offset selects one of the four 32-bit words.]

When we have more than a single word in a block, the storage efficiency is slightly higher, since we keep one tag per block instead of one per word. On the other hand, we slow the cache somewhat, since we add multiplexors. Anyhow, this is not the issue; the issue is reducing the miss rate.
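As a back-of-the-envelope check of that storage overhead, using the parameters of the figure above:

```python
# Total storage of the 64 KB direct mapped cache above: 4K entries,
# each with a valid bit, a 16-bit tag and a 4-word (128-bit) block.
entries   = 4 * 1024
data_bits = 4 * 32          # 128 bits of data per entry
tag_bits  = 16
valid_bit = 1

total_bits = entries * (data_bits + tag_bits + valid_bit)
data_only  = entries * data_bits
print(total_bits, "bits total,", round(total_bits / data_only, 3), "x the pure data size")
# With one-word blocks the same 64 KB of data would need 16K entries, each still
# with a 16-bit tag and a valid bit -> four times as many tag/valid bits overall.
```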

Page 15: More solutions


A 2^m-word-block implementation:

[Figure: the 32-bit address (bits 31-0) is split into a (30-n-m)-bit tag, an n-bit index, an m-bit block offset and a 2-bit byte offset inside a word. The cache has 2^n entries, each holding a valid bit, a (30-n-m)-bit tag and 32·2^m bits of data; the tag comparison produces Hit, and a multiplexor controlled by the block offset selects the requested 32-bit word.]
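A small sketch of the address split described by the last two figures, assuming a direct mapped cache with 2^n blocks of 2^m four-byte words:

```python
def split_address(addr, n, m):
    """Split a 32-bit byte address for a direct-mapped cache with
    2**n blocks of 2**m words (4-byte words)."""
    byte_offset  = addr & 0b11                          # 2 bits: byte inside the word
    block_offset = (addr >> 2) & ((1 << m) - 1)         # m bits: word inside the block
    index        = (addr >> (2 + m)) & ((1 << n) - 1)   # n bits: which cache entry
    tag          = addr >> (2 + m + n)                  # remaining 30 - n - m bits
    return tag, index, block_offset, byte_offset

# Example: the 64 KB cache of the earlier slide (n = 12, m = 2, i.e. 4K entries
# of 4-word blocks) splits a sample address like this:
print(split_address(0x1234ABCD, n=12, m=2))
```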

Page 16: More solutions


Block size and miss rate:

When we increase the size of the block, the miss rate, especially for instructions, is reduced. However, if we leave the cache size as is, we eventually reach a situation where there are too few blocks, so we have to replace them even before we have taken advantage of the locality, i.e., before we have used the entire block. That increases the miss rate again (and explains the right-hand side of the graphs in the figure).

[Figure: miss rate (0%-40%) vs. block size in bytes, plotted for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB and 256 KB.]

Page 17: More solutions


Block size and write:

When we have more than a single word in a block, then when we write (a single word) into a block, we must first read the entire block from the memory (unless it is already in the cache) and only then write to the cache and to the memory.

If we already had the block in the cache, the process is exactly as it was for a single-word-block cache.

Separate instruction and data caches

Note that we usually have separate instruction and data caches. Having a single cache for both would give some flexibility, since we sometimes have more room for data, but two separate caches have twice the bandwidth, i.e., we can read from both at the same time (twice as fast). That is why most CPUs use separate instruction and data caches.

Page 18: More solutions


Block size and read:

When we have more than a single word in a block, we need to wait longer to read the entire block. There are techniques that start writing into the cache as soon as possible. The other approach is to design the memory so that reading is faster, especially reading consecutive addresses; this is done by reading several words in parallel.

Page 19: More solutions


Faster CPUs need better caches

It is shown in the book (section 7.3, pp. 565-567) that when we improve the CPU (shorten the CPI or the clock period) but leave the cache as is, the relative miss penalty increases. This means that we need better caches for faster CPUs. Better means we should reduce the miss rate and reduce the miss penalty.

Reducing the miss rate

This is done by giving the cache more flexibility in where it keeps data. So far we allowed a memory block to be mapped to a single block in the cache; we called this a direct mapped cache. There is no flexibility here. The most flexible scheme is one in which a block can be stored in any of the cache blocks. That way, we can keep frequently used blocks that would always compete for the same cache block in a direct mapped implementation. Such a flexible scheme is called a fully associative cache. In a fully associative cache the tag must be compared against all cache entries.

We also have a compromise called an "N-way set associative" cache: each memory block is mapped to one of a set of N blocks in the cache.

Note that for caches with more than one possible mapping, we must employ some replacement policy (LRU or random are used).
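A compact sketch of an N-way set associative lookup with LRU replacement, working at block-address granularity; the class and its methods are illustrative, not from the slides:

```python
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        # one OrderedDict per set: tag -> block data, ordered oldest-first (LRU order)
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, block_addr):
        index = block_addr % self.num_sets
        tag = block_addr // self.num_sets
        cache_set = self.sets[index]
        if tag in cache_set:                 # hit: refresh this block's LRU position
            cache_set.move_to_end(tag)
            return "hit"
        if len(cache_set) >= self.ways:      # miss with a full set: evict the LRU block
            cache_set.popitem(last=False)
        cache_set[tag] = None                # load the missing block
        return "miss"

# Direct mapped = 1 way; fully associative = a single set holding all the ways.
cache = SetAssociativeCache(num_sets=4, ways=2)
for b in [0, 8, 0, 16, 8, 0]:
    print(b, cache.access(b))
```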

Page 20: More solutions


Direct Mapped Cache: block = 4 words, size of cache = 16 words.

1 block = 4 words

Page 21: More solutions


2-way set associative cache: block = 4 words, size of cache = 32 words (1 block = 4 words).

[Figure: the cache is organized as N·2^n blocks of 2^m words; the two ways of each set are labeled 1 and 2.]

Page 22: More solutions


A 4-way set associative cache

Here the block size is 1 word. We see that we actually have 4 "regular" caches plus a multiplexor.

Page 23: More solutions


A 2-way set associative cache

[Figure: the address is split into a (30-n-m)-bit tag, an n-bit index, an m-bit block offset and a 2-bit byte offset inside a word. There are two identical banks, each with 2^n entries holding a valid bit, a (30-n-m)-bit tag and 32·2^m bits of data. Both banks are indexed in parallel; each produces its own hit signal (Hit1, Hit2) and a word selected by the block offset (Data1, Data2). The two hit signals are combined into the final Hit, and a multiplexor chooses the data of the way that hit.]

Here the block size is 2^m words. We see that we actually have 2 caches plus a multiplexor.

Page 24: More solutions


Fully associative cache: block = 4 words, size of cache = 32 words (1 block = 4 words).

[Figure: the 8 blocks of the cache, numbered 1-8; a memory block may be placed in any of them.]

Page 25: More solutions


A fully associative cache

Here the block size is 2^m words. We see that we have only N blocks.

[Figure: the address is split into a (30-m)-bit tag, an m-bit block offset and a 2-bit byte offset. Each of the N blocks holds a valid bit, a (30-m)-bit tag and 2^m 32-bit words; every stored tag is compared with the address tag in parallel, and the word of the matching block (selected by the block offset) is driven to the Data output together with the hit signal.]

Page 26: More solutions


N-way set associative

Suppose we have 2^k words in a cache (one word per block here):

• Direct mapped: 1·2^n blocks, with n = k.
• N-way set associative: N·2^n blocks, arranged as 2^n sets of N = 2^(k-n) blocks each.
• Fully associative: a single set of N = 2^k blocks.

[Figure: each cache entry holds a tag and data. Searching for block address 12 in the three cache types: in the direct mapped cache (blocks 0-7) only block 12 mod 8 = 4 is searched; in the 2-way set associative cache (sets 0-3) the two blocks of set 12 mod 4 = 0 are searched; in the fully associative cache all tags are searched.]
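A quick way to reproduce the figure's search for block address 12, assuming 8 cache blocks:

```python
BLOCK_ADDR = 12
NUM_BLOCKS = 8

# Direct mapped: exactly one candidate position.
print("direct mapped:    block", BLOCK_ADDR % NUM_BLOCKS)             # -> block 4

# 2-way set associative: 4 sets, search both blocks of one set.
num_sets = NUM_BLOCKS // 2
print("2-way set assoc.: set", BLOCK_ADDR % num_sets, "(search 2 tags)")  # -> set 0

# Fully associative: the block may sit anywhere; search all 8 tags in parallel.
print("fully assoc.:     search all", NUM_BLOCKS, "tags")
```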

Page 27: More solutions


Faster CPUs need better caches

Better means we should reduce the miss rate; for that we used a 4-way set associative cache. Better also means we should reduce the miss penalty.

Reducing the miss penalty

This is done by using a two-level cache. The first-level cache is on the same chip as the CPU; actually, it is part of the CPU. It is very fast (1-2 ns, i.e., less than one clock cycle), it is small, and its block is also small, so it can be 4-way set associative. The level-2 cache is off chip, about 10 times slower, but still about 10 times faster than the memory (DRAM). It has larger blocks and is almost always 2-way set associative or direct mapped, mainly aimed at reducing the read penalty. Analyzing such caches is complicated; usually simulations are required.

An optimal single-level cache is usually larger and slower than the level-1 cache, and faster and smaller than the level-2 cache.

Note that usually we have separate instruction and data caches.
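As a rough illustration of why the second level helps, here is the usual average-memory-access-time arithmetic; the latencies and miss rates are made up for the example, not taken from the slides:

```python
# Average memory access time (AMAT) with and without a second-level cache.
l1_hit_time  = 1      # CPU clock cycles
l1_miss_rate = 0.05
l2_hit_time  = 10     # roughly 10x slower than L1
l2_miss_rate = 0.20   # local miss rate of L2
memory_time  = 100    # DRAM, roughly 10x slower than L2

amat_one_level = l1_hit_time + l1_miss_rate * memory_time
amat_two_level = l1_hit_time + l1_miss_rate * (l2_hit_time + l2_miss_rate * memory_time)

print(f"one-level cache: {amat_one_level:.1f} cycles")   # 6.0
print(f"two-level cache: {amat_two_level:.1f} cycles")   # 2.5
```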

Page 28: More solutions


Virtual Memory

Virtual Memory (VM) enables us to have a small physical memory (the actual memory in the computer) while the program uses a much larger virtual memory. In principle, the idea is similar to that of the cache: the physical memory (DRAM) of the processor keeps the data used right now, while the disk keeps the entire program. The blocks here are called pages (4 KB - 16 KB, or even up to 64 KB today). A miss in VM is called a page fault.

There are 2 reasons to use VM:
1) To have a larger virtual address space than the actual physical memory available.
2) To enable several programs, or processes, to run simultaneously, each "thinking" it has the same virtual addresses (since we cannot compile them knowing the real addresses to be used later).

So we map the virtual address created by the processor (PC or ALUOut) to a physical address. We need an address translation.

Page 29: More solutions


Address translation

The translation is simple. We use the LSBs to point at the address inside a page, and the rest of the bits, the MSBs, to point at a "virtual" page. The translation replaces the virtual page number with a physical page number that has a smaller number of bits. This means that the physical memory is smaller than the virtual memory, so we will have to load and store pages whenever required.

Before VM, the programmer was responsible for loading and replacing "overlays" of code or data. VM takes this burden away.

By the way, using pages and "relocating" the code and the data every time they are loaded into memory also enables better usage of memory: large contiguous areas are not required.
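A minimal sketch of the translation, assuming 4 KB pages, a 32-bit virtual address, and a toy dictionary standing in for the page table:

```python
PAGE_SIZE = 4096                     # 4 KB pages -> 12 offset bits

# Hypothetical page table: virtual page number -> physical page number.
page_table = {0x12345: 0x00042, 0x12346: 0x00017}

def translate(virtual_addr):
    vpn    = virtual_addr >> 12              # virtual page number (MSBs)
    offset = virtual_addr & (PAGE_SIZE - 1)  # offset inside the page (LSBs)
    ppn = page_table.get(vpn)
    if ppn is None:
        raise RuntimeError("page fault: OS must load the page from disk")
    return (ppn << 12) | offset              # physical page number + same offset

print(hex(translate(0x12345ABC)))            # -> 0x42abc
```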

Page 30: More solutions


Address translation

The translation is done by a table called the page table. We have such a table, residing in the main memory, for each process. A special register, the page table register, points at the start of the table. When switching programs, i.e., switching to another process, we change the contents of that register so it points to the appropriate page table. [To switch a process also means storing all the registers, including the PC, of the current process and retrieving those of the process we want to switch to. This is done by the operating system every now and then, according to some predetermined rule.]

We need a valid bit, same as in caches, which tells whether the page is valid or not.

In VM we have fully associative placement of pages in the physical memory, to reduce the chance of a page fault. We also apply sophisticated algorithms for the replacement of pages.

Since the read/write time (from/to disk) is very long, we use a s/w mechanism instead of the h/w used in caches. Also, we use a write-back scheme and not write-through.

Page 31: More solutions


The page table

The operating system (OS) creates a copy of all the pages of a process on the disk. It loads the requested pages into the physical memory and keeps track of which pages are loaded and which are not. The page table can be used to point at the pages on the disk: if the valid bit is on, the entry holds the physical page address; if the valid bit is off, the entry holds the page's disk address. When a page fault occurs and all of physical memory is in use, the OS must choose which page to replace. LRU is often used. However, to simplify things, a "use" bit or "reference" bit is set by h/w every time a page is accessed, and every now and then these bits are cleared by the OS. According to these bits, the OS can decide which pages have a higher chance of being used and keep them in memory.

The page table can be very big, so there are techniques to keep it small. We do not prepare room for all possible virtual addresses, but add an entry whenever a new page is requested. We sometimes have a page table with two parts, one for the heap, growing upwards, and one for the stack, growing downwards. Some OSs use hashing to translate between the virtual page number and the page table entry.

Sometimes the page table itself is allowed to be paged.

Note that every access to the memory is made of two reads: first we read the physical page address from the page table, and then we can perform the real read.
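A simplified sketch of the reference-bit approximation of LRU described above; the OS bookkeeping is reduced to a few toy functions:

```python
# Reference-bit approximation of LRU for page replacement (simplified sketch).
reference_bit = {}          # physical page -> bit set by hardware on each access

def touch(page):            # what the hardware does on every access to the page
    reference_bit[page] = 1

def clear_bits():           # what the OS does periodically
    for page in reference_bit:
        reference_bit[page] = 0

def pick_victim(resident_pages):
    # Prefer a page whose reference bit is still 0: it has not been used recently.
    for page in resident_pages:
        if reference_bit.get(page, 0) == 0:
            return page
    return resident_pages[0]   # all recently used: fall back to any page

resident = [3, 7, 9]
for p in resident:
    touch(p)
clear_bits()
touch(7)                        # only page 7 was used since the last clearing
print("replace page", pick_victim(resident))   # -> 3
```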

Page 32: More solutions


TLB

Note that every access to the memory is made of two reads: first we read the physical page address from the page table, and then we can perform the real read.

In order to avoid that, we use a special cache for address translation. It is called a "Translation-Lookaside Buffer" (TLB). It is a small cache (32-4096 entries) with blocks of 1 or 2 page addresses, a very fast hit time (less than 1/2 a clock cycle, to leave enough time for getting the data according to the address coming out of the TLB), and a small miss rate (0.01%-1%). A TLB miss causes a delay of 10-30 clock cycles to access the real page table and update the TLB.

What about writes? Whenever we write to a page in the physical memory, we must set a bit in the TLB (and eventually, when the entry is replaced in the TLB, in the page table). This bit is called the "dirty" bit. When a "dirty" page is removed from the physical memory, it should be copied to the disk to replace the old, un-updated page that was originally on the disk. If the dirty bit is off, no copy is required since the original page is untouched.
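A sketch of the TLB sitting in front of the page table, with the dirty bit set on writes; the dictionaries here are toy stand-ins for the real hardware structures:

```python
PAGE_SHIFT = 12                       # 4 KB pages

page_table = {0x12345: {"ppn": 0x42, "dirty": False}}   # hypothetical contents
tlb = {}                              # vpn -> cached copy of the page-table entry

def translate(virtual_addr, is_write=False):
    vpn, offset = virtual_addr >> PAGE_SHIFT, virtual_addr & 0xFFF
    entry = tlb.get(vpn)
    if entry is None:                 # TLB miss: 10-30 cycles to read the page table
        entry = dict(page_table[vpn]) # (a missing page here would be a page fault)
        tlb[vpn] = entry
    if is_write:
        entry["dirty"] = True         # copied back to the page table when the TLB
                                      # entry is replaced; page goes to disk on eviction
    return (entry["ppn"] << PAGE_SHIFT) | offset

print(hex(translate(0x12345678, is_write=True)))   # -> 0x42678
```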

Page 33: More solutions


TLB and Cache together

So here is the complete picture: the CPU generates a virtual address (the PC during fetch, or ALUOut during a lw or sw instruction). The bits go directly to the TLB. If there is a hit, the output of the TLB provides the physical page address. We combine these lines with the LSBs of the virtual address and use the resulting physical address to access the memory. This address is connected to the cache. If a cache hit is detected, the data immediately appears at the output of the cache.

All of this takes less than a clock cycle, so we can use the data at the next rising edge of the clock.
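Putting the pieces together, a toy sketch of the path a load takes (virtual address, TLB, physical address, cache); all structures and numbers below are illustrative:

```python
# End-to-end read: virtual address -> TLB -> physical address -> cache -> data.
tlb    = {0x12345: 0x42}          # vpn -> ppn (assume the entry is already in the TLB)
dcache = {}                       # physical address -> word (toy data cache)
memory = {0x42678: 0xDEADBEEF}    # toy physical memory

def load(virtual_addr):
    vpn, offset = virtual_addr >> 12, virtual_addr & 0xFFF
    ppn = tlb[vpn]                          # TLB hit: translation in < 1/2 cycle
    phys = (ppn << 12) | offset
    if phys in dcache:                      # cache hit: data ready at the next clock edge
        return dcache[phys]
    dcache[phys] = memory[phys]             # cache miss: stall and fill from memory
    return dcache[phys]

print(hex(load(0x12345678)))   # first access misses in the cache
print(hex(load(0x12345678)))   # second access hits
```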

Page 34: More solutions


Protection

During the handling of a page fault we can detect that a program is trying to access a virtual page that is not defined. A regular process cannot be allowed to access the page table itself, i.e., to read and write the page table; only kernel (OS) processes can do that. There can also be restrictions on writing to certain pages. All this can be achieved with special bits in the TLB (a kernel bit, a write-access bit, etc.). Any violation should cause an exception that will be handled by the OS.

In some OSs and CPUs, not all pages have the same size. We then use the term segment instead of page. In that case we need h/w support that detects when the CPU tries to access an address beyond the limit of the segment.

Page 35: More solutions


End of caches & VM