Lecture 9: Virtual Memory (2)
Source: sparth.u-aizu.ac.jp/CLASSES/OS/LECTURES/PDF/lecture9.pdf (University of Aizu)

Operating Systems: Lecture 9

Memory Management

Virtual Memory

2

- Many systems provide some help in the form of a reference bit.

- The reference bit for a page is set, by the hardware, whenever that page is referenced (either a read or a write to any byte in the page).

- Reference bits are associated with each entry in the page table.

Virtual Memory: LRU Approximation Algorithms

3

Additional-Reference-Bits Algorithm

- We can gain additional ordering information by recording the reference bits at regular intervals.

- We can keep an 8-bit byte, a history register (HR), for each page in a table in memory.

- At regular intervals (say, every 100 milliseconds), a timer interrupt transfers control to the OS.

- The OS shifts the reference bit for each page into the high-order bit of its HR, shifting the other bits right by 1 bit and discarding the low-order bit.
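The shift step can be sketched as follows; this is a minimal illustration, and the function name and the use of plain Python integers for the 8-bit HR are assumptions, not from the lecture:

```python
def update_history(hr: int, referenced: bool) -> int:
    """Shift the page's reference bit into the high-order bit of its
    8-bit history register, discarding the low-order bit."""
    return ((1 << 7) if referenced else 0) | (hr >> 1)

hr = 0b00000000
hr = update_history(hr, True)   # HR becomes 10000000
hr = update_history(hr, False)  # HR becomes 01000000
hr = update_history(hr, True)   # HR becomes 10100000
```

Running the update once per timer interrupt gives each page a usage history covering the last eight intervals.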

Virtual Memory: LRU Approximation Algorithms (1)

4

Additional-Reference-Bits Algorithm

- These HRs contain the history of page use for the last eight time periods.

- If the HR contains 00000000, the page has not been used for eight time periods.

- A page that is used at least once in each period has an HR value of 11111111.

Virtual Memory: LRU Approximation Algorithms (1)


5

Additional-Reference-Bits Algorithm

Virtual Memory: LRU Approximation Algorithms (1)

[Figure: reference bits recorded over successive timer intervals; each page's HR shifts right over time, e.g. 11000100 -> 11100010 -> 01110001.]

- A page with HR = 11000100 has been referenced more recently than a page with HR = 01110111.

- If we interpret the HRs as unsigned integers, the page with the lowest number is the LRU page, and it can be replaced.

- The number of bits in the HR can be varied (selection should be as fast as possible). In the extreme case, the number can be reduced to zero, leaving only the reference bit itself. This is the second-chance algorithm.
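Treating each HR as an unsigned integer, victim selection reduces to taking a minimum. A sketch with assumed names:

```python
def select_victim(history: dict) -> str:
    """Return the page whose HR, read as an unsigned integer, is
    smallest (the approximate LRU page)."""
    return min(history, key=history.get)

history = {"p1": 0b11000100, "p2": 0b01110111}
victim = select_victim(history)   # p2: 01110111 < 11000100 as unsigned ints
```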

6

Second-Chance Algorithm (Clock Algorithm)

- The basic algorithm of second-chance replacement is FIFO replacement. When a page has been selected, however, we inspect its reference bit.

- If the value is 0, we proceed to replace this page.

- If the reference bit is 1, we give that page a second chance and move on to select the next FIFO page.

Virtual Memory: LRU Approximation Algorithms (2)

7

Second-Chance Algorithm (Clock Algorithm)

- When a page gets a second chance, its reference bit is cleared and its arrival time is reset to the current time.

- Thus, a page that is given a second chance will not be replaced until all other pages have been replaced (or given second chances).

- In addition, if a page is used often enough to keep its reference bit set, it will never be replaced.

- One way to implement this algorithm is as a circular queue.
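A minimal circular-queue sketch (class and method names are assumptions; the initial frames and reference bits below mirror the lecture's example, in which referencing page h evicts page e):

```python
class Clock:
    """Second-chance (clock) replacement over a circular queue of frames."""

    def __init__(self, pages, bits):
        self.pages = list(pages)           # circular queue of resident pages
        self.ref = dict(zip(pages, bits))  # reference bit per page
        self.hand = 0                      # clock hand (FIFO pointer)

    def evict(self, new_page):
        while True:
            page = self.pages[self.hand]
            if self.ref[page]:             # second chance: clear bit, move on
                self.ref[page] = 0
                self.hand = (self.hand + 1) % len(self.pages)
            else:                          # reference bit 0: replace this page
                del self.ref[page]
                self.pages[self.hand] = new_page
                self.ref[new_page] = 0
                self.hand = (self.hand + 1) % len(self.pages)
                return page

clock = Clock(list("cdefgmab"), [1, 1, 0, 1, 0, 1, 0, 0])
victim = clock.evict("h")   # clears c and d, then replaces e with h
```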

Virtual Memory: LRU Approximation Algorithms (2)

8

Second-Chance Algorithm (Clock Algorithm)

Virtual Memory: LRU Approximation Algorithms (2)

[Figure: circular queue of pages, before and after page "h" is referenced.]

Page / Ref. bit, before (pointer at c):  c 1, d 1, e 0, f 1, g 0, m 1, a 0, b 0
Page / Ref. bit, after (pointer at f):   c 0, d 0, h 0, f 1, g 0, m 1, a 0, b 0

The hand clears the reference bits of c and d, then replaces e (reference bit 0) with the newly referenced page h.

Page 3: Lecture9 Virtual Memory Two Print - 会津大学公式ウェ …sparth.u-aizu.ac.jp/CLASSES/OS/LECTURES/PDF/lecture9.pdfa timer interrupts transfer control to the OS. ... LFU Algorithm:

9

Second-Chance Algorithm (Clock Algorithm)

- If the page to be replaced (in clock order) has reference bit = 1, then:

  - set the reference bit to 0;

  - leave the page in memory;

  - replace the next page (in clock order), subject to the same rules.

Virtual Memory: LRU Approximation Algorithms (2)

10

Enhanced Second-Chance Algorithm

- Use the modify bit, M, along with the reference bit, R.

- Depending on (R, M), pages are classified into four classes:

  1) (0, 0): not recently used nor modified - best page to replace;

  2) (0, 1): not recently used, but modified - not quite as good, because the page will need to be written out before replacement;

  3) (1, 0): recently used, but clean - probably will be used again soon;

  4) (1, 1): recently used and modified - probably will be used again, and a write-out will be needed before replacing it.

- Class (1) has the lowest priority, while class (4) has the highest. We replace the first page encountered in the lowest nonempty class.
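The four classes order naturally as a small number, so victim selection reduces to a minimum over pages. A sketch (function names assumed; Python's `min` returns the first minimal element, matching "first page encountered in the lowest nonempty class"):

```python
def page_class(r: int, m: int) -> int:
    """Map (reference bit, modify bit) to class 1 (best victim) .. 4 (worst)."""
    return 1 + 2 * r + m

def pick_victim(pages):
    """Return the first page encountered in the lowest nonempty class.
    `pages` is a list of (name, r, m) tuples in clock order."""
    return min(pages, key=lambda p: page_class(p[1], p[2]))[0]

# classes: a -> 4, b -> 2, c -> 3, so b is the victim
victim = pick_victim([("a", 1, 1), ("b", 0, 1), ("c", 1, 0)])
```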

Virtual Memory: LRU Approximation Algorithms (3)

11

- Counting Algorithms - keep a counter of the number of references that have been made to each page.

- LFU Algorithm: the least frequently used (LFU) algorithm replaces the page with the smallest count.

- MFU Algorithm: the most frequently used (MFU) algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.

- Neither is common: the implementation is expensive, and the approximation of OPT replacement is not very good.

- Page-Buffering Algorithm - keep a pool of free frames; the desired page is read into a free frame from the pool before the victim is written out.
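A minimal LFU/MFU sketch using per-page reference counters (the names and the sample reference string are assumptions):

```python
from collections import Counter

counts = Counter()
for page in [1, 2, 1, 3, 1, 2]:   # hypothetical reference string
    counts[page] += 1

lfu_victim = min(counts, key=counts.get)   # page 3: only one reference
mfu_victim = max(counts, key=counts.get)   # page 1: most references
```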

Virtual Memory: LRU Approximation Algorithms (4)

12

Virtual Memory

- Factors that determine the page-fault rate:

  - program;

  - memory size;

  - replacement algorithm;

  - page size;

  - prefetch algorithm.

(From W. Stallings, Operating Systems, 1992, Macmillan Publ. Co., p. 289)

[Figure: Comparison of Page Replacement Algorithms - page faults per 1000 references (y-axis, 0 to 40) versus number of page frames allocated (x-axis, 6 to 14), with curves for FIFO, Clock, LRU, and OPT.]


13

- The simplest case of virtual memory is the single-user system. There are many variations for distributing frames.

- A different problem arises when demand paging is combined with multiprogramming. Multiprogramming puts two (or more) processes in memory at the same time.

Virtual Memory: Allocation of Frames

14

- There is a minimum number of frames that must be allocated.

- Obviously, as the number of frames allocated to each process decreases, the page-fault rate increases, slowing process execution.

Virtual Memory: Allocation of Frames

15

- This minimum number of frames is defined by the instruction-set architecture. (Remember that, when a page fault occurs before an executing instruction is complete, the instruction must be restarted.)

Example:

- Consider a machine in which all memory-reference instructions have only one memory address. We need at least one frame for the instruction and one frame for the memory reference.

- In addition, if one-level indirect addressing is allowed, then paging requires at least three frames per process.

- IBM 370, MVC (Move Character) instruction (storage-to-storage):

  - the instruction is 6 bytes and might span 2 pages;

  - 2 pages to handle the from operand;

  - 2 pages to handle the to operand => 6 pages to handle.

Virtual Memory: Allocation of Frames

16

- Two major allocation schemes: fixed and priority allocation.

- Fixed allocation:

  - Equal allocation => with 100 frames and 5 processes, give each process 20 frames.

  - Proportional allocation => allocate according to the size of the process:

    - s_i = size of process p_i;

    - S = sum of all s_i;

    - m = total number of frames;

    - a_i = allocation for p_i = (s_i / S) x m.

Example: m = 64, s1 = 10, s2 = 127, a1 = (10/137) x 64 ~ 5, a2 = (127/137) x 64 ~ 59.
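The proportional-allocation formula a_i = (s_i / S) x m can be checked directly (the function name is assumed; rounding to the nearest integer reproduces the slide's ~5 and ~59):

```python
def proportional_allocation(sizes, m):
    """Frames per process: a_i = (s_i / S) * m, rounded to the nearest frame."""
    S = sum(sizes)
    return [round(s / S * m) for s in sizes]

# Slide example: m = 64 frames, process sizes 10 and 127 pages
allocation = proportional_allocation([10, 127], 64)   # [5, 59]
```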

Virtual Memory: Allocation of Frames


17

- Priority allocation:

  - Use a proportional allocation scheme based on priorities rather than size.

  - If process P_i generates a page fault:

    - select for replacement one of its own frames; or

    - select for replacement a frame from a process with a lower priority number.

Virtual Memory: Allocation of Frames

18

- We can classify page-replacement algorithms into two broad categories:

- Global replacement - a process selects a replacement frame from the set of all frames; one process can take a frame from another. Gives better performance but may cause thrashing.

- Local replacement - each process selects a replacement frame from only its own set of allocated frames. Not affected by other active processes.

Virtual Memory: Allocation of Frames

19

- If a process does not have "enough" pages, the page-fault rate is very high.

The OS monitors CPU utilization:

- if CPU utilization is too low, it increases the degree of multiprogramming by introducing a new process to the system;

- a global-replacement algorithm is used, replacing pages with no regard to the process to which they belong (it steals pages from active processes);

- active processes then have fewer pages;

- more page faults occur;

- CPU utilization decreases further.

Virtual Memory: Thrashing

20

- High paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing. Thrashing = a process is busy swapping pages in and out.

- As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the degree of multiprogramming even more. The page-fault rate increases tremendously.

Virtual Memory: Thrashing

[Figure: CPU utilization (y-axis) versus degree of multiprogramming (x-axis); utilization rises with the degree of multiprogramming until a peak, then collapses once thrashing sets in.]


21

- To prevent thrashing, we must provide a process with as many frames as it needs. But how do we know how many frames it "needs"? There are several techniques.

- The working-set strategy starts by looking at how many frames a process is actually using. This approach defines the locality model of process execution.

- The locality model states that, as a process executes, it migrates from one locality to another. A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which may overlap. Localities are defined by the program structure and its data structures.

90-10 rule: programs spend 90% of their time in 10% of their code.

Virtual Memory: Thrashing: How to Prevent It

22

- The working-set model is based on the assumption of locality. The model uses a parameter Δ (a time unit) to define the working-set window. The idea is to examine the most recent Δ page references.

- The set of pages in the most recent Δ page references is the working set. If a page is in active use, it will be in the working set. If it is no longer being used, it will drop from the working set Δ time units after its last reference. Thus, the working set is an approximation of the program's locality.

Virtual Memory: Thrashing: Working-Set Model

[Figure: working-set windows of width Δ over the reference string

. . . 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 1 3 2 3 4 4 4 3 4 4 4 . . .

with WS(t1) = {1,2,5,6,7} and WS(t2) = {3,4}.]
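Computing a working set over a reference string can be sketched as follows (the function name and the window value are assumptions; the string and WS(t1) match the slide's example):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of `delta` references ending at index t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4, 4]
ws = working_set(refs, 9, 10)   # {1, 2, 5, 6, 7}, matching WS(t1) on the slide
```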

23

- The most important property of the working set is its size. If we compute the working-set size WSS_i (the total number of pages referenced in the most recent Δ) for each process in the system, we can consider D = sum of all WSS_i, where D is the total demand for frames.

- Each process is actively using the pages in its working set, so process i needs WSS_i frames. If the total demand is greater than the total number of available frames (D > m), thrashing will occur, because some processes will not have enough frames.

- Policy: if D > m, then suspend one of the processes.

Thrashing: Working-Set Model

24

- Allocate the working-set size to each process.

- This prevents thrashing while maximizing the multiprogramming level.

- The overhead of keeping track of the working set is non-trivial. Approximations have been used in actual implementations.

- Approximation of the working set using the reference bits and an interval timer:

  - set the interval timer to interrupt every T time units;

  - when the timer interrupts, check the page frames whose reference bits are set to 1; these are considered to be in the working set.

Virtual Memory: Thrashing: Working-Set Strategy


25

- Thrashing is accompanied by a high page-fault rate; thus, we want to control the page-fault rate. The page-fault frequency (PFF) strategy takes a more direct approach than the working-set model.

- When the page-fault rate is too high, we know that the process needs more frames.

- Similarly, if the page-fault rate is too low, the process may have too many frames.

- We can establish upper and lower bounds on the desired page-fault rate.

Virtual Memory: Thrashing: Page-Fault Frequency

26

- If the actual page-fault rate exceeds the upper limit, we allocate the process another frame.

- If the page-fault rate falls below the lower limit, we remove a frame from the process. Thus, we can directly measure and control the page-fault rate to prevent thrashing.
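The control rule can be sketched as a tiny adjustment function; the bound values and names here are illustrative assumptions, not from the lecture:

```python
def adjust_frames(frames, fault_rate, lower=2.0, upper=10.0):
    """PFF control: keep the fault rate between the two bounds by
    growing or shrinking the process's frame allocation."""
    if fault_rate > upper:
        return frames + 1          # too many faults: allocate another frame
    if fault_rate < lower:
        return max(1, frames - 1)  # too few faults: reclaim a frame
    return frames                  # within bounds: leave allocation alone
```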

Virtual Memory: Thrashing: Page-Fault Frequency

[Figure: page-fault rate (y-axis) versus number of frames (x-axis); above the upper bound, increase the number of frames; below the lower bound, decrease the number of frames.]

27

- Page sizes are usually hardware-defined.

- There is no ideal choice:

  - bigger pages reduce table fragmentation (smaller page tables);

  - smaller pages give better locality and reduce internal fragmentation;

  - I/O is more efficient for bigger pages;

  - smaller pages may reduce the total amount of I/O needed;

  - larger pages reduce the number of page faults.

- Most contemporary systems have settled on 4 kilobytes.

Virtual Memory: Page Size Consideration

28

- Demand paging is designed to be transparent to the user program. However, in many cases system performance can be improved by an awareness of the underlying demand paging.

- Example: assume pages are 128 words in size. A program is to initialize to 0 each element of a 128 x 128 array A(128,128):

  for i := 1 to 128 do
    for j := 1 to 128 do
      A(i,j) := 0;

How many page faults are generated?

Case 1: Pascal, C, PL/1 => the matrix is stored in row-major order; {A(1,1), A(1,2), ..., A(1,128)} are stored in the same page => 128 page faults.

Case 2: Fortran => the matrix is stored in column-major order; {A(1,1), A(2,1), ..., A(128,1)} are stored in the same page => 128 x 128 = 16384 page faults.
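The two cases can be verified by a small simulation that counts how often consecutive accesses cross a page boundary (a sketch; the single-allocated-frame worst case and all names are assumptions):

```python
def count_faults(order, n=128, page_words=128):
    """Count page faults for the zeroing loop, assuming one allocated
    frame, so every change of page is a fault."""
    last_page, faults = None, 0
    for i in range(n):
        for j in range(n):
            # word address of A(i,j) under the given storage order
            addr = i * n + j if order == "row" else j * n + i
            page = addr // page_words
            if page != last_page:
                faults += 1
                last_page = page
    return faults

row_faults = count_faults("row")   # 128: traversal matches storage order
col_faults = count_faults("col")   # 16384: every access touches a new page
```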

Virtual Memory: Program Structure


29

- The problem with I/O in swapping systems still exists in paging systems.

- The solution is usually to lock individual pages in memory (those with pending I/O) rather than locking the whole process in memory.

- The overall impact is better than the overhead of copying in and out of OS buffers.

Virtual Memory: I/O Interlock

30

- Demand paging is the most efficient virtual-memory scheme, but its implementation is very complicated (hardware).

- Demand segmentation - used when there is insufficient hardware to implement demand paging.

  - OS/2 (80286) allocates memory in segments, which it keeps track of through segment descriptors.

  - A segment descriptor contains a valid bit to indicate whether the segment is currently in memory.

    - If the segment is in main memory, access continues;

    - if not in memory, a segment fault occurs.

Virtual Memory: Demand Segmentation

31

The End

Memory Management