Operating Systems
DESCRIPTION
Operating Systems. Certificate Program in Software Development, CSE-TC and CSIM, AIT, September -- November 2003. Objectives: describe virtual memory, a technique that allows the execution of processes that may not be completely in memory (RAM). 9. Virtual Memory (Ch. 9, S&G).
TRANSCRIPT
OSes: 9. VM 1
Operating Systems

Objectives
– describe virtual memory, a technique that allows the execution of processes that may not be completely in memory (RAM)

Certificate Program in Software Development, CSE-TC and CSIM, AIT, September -- November 2003

9. Virtual Memory (Ch. 9, S&G)
ch 10 in the 6th ed.
OSes: 9. VM 2
Contents
1. Motivation
2. Virtual Memory
3. Demand Paging
4. Demand Paging Performance
5. Page Replacement
continued
OSes: 9. VM 3
6. Page Replacement Algorithms
7. Allocation of Frames
8. Thrashing
9. Other Considerations
10. Demand Segmentation
OSes: 9. VM 4
1. Motivation

Loading all of a program/process into memory is frequently unnecessary because:
– many routines/functions are rarely used
– programmers make data structures too large, e.g. char line[500];
continued
OSes: 9. VM 5
Advantages of being able to run a program that is only partially loaded into memory:
– programs are no longer limited by physical memory size
– more programs can be loaded and run → increased CPU utilization and throughput
– the OS can pretend to have a larger virtual address space
OSes: 9. VM 6
2. Virtual Memory

Virtual memory separates the user’s logical view of memory from the physical memory.

Virtual memory appears to be much larger, and more uniform, than physical memory
– a better abstraction
– requires the hardware to support swapping in/out from secondary storage
OSes: 9. VM 7
Virtual Memory Diagram (Fig. 9.1, p.291)
[Diagram: virtual memory pages (page 0 … page n) mapped through a memory map onto physical memory, with unmapped pages held on secondary storage]
OSes: 9. VM 8
3. Demand Paging

Virtual memory is commonly implemented by demand paging
– similar to a paging system with swapping

The OS only swaps a page into memory when it is required by a process
– called lazy paging (or lazy swapping)
OSes: 9. VM 9
Swapping in/out (Fig. 9.2, p.292)
[Diagram: program A swapped out of main memory to disk, program B swapped in]
OSes: 9. VM 10
Implementation

One approach is to add a valid-invalid bit to each page in the page table
– valid means that the page is in memory and can be legally referenced by the process
– invalid means that the page is not in memory or that it cannot be legally referenced by the process
OSes: 9. VM 11
Valid-Invalid Diagram (Fig. 9.3, p.293)
[Diagram: a process’s pages A–H in virtual memory; the page table marks A, C and F valid (mapped to physical frames) and the rest invalid (held on secondary storage)]
OSes: 9. VM 12
Page Faults

If a process tries to use a page which is marked invalid then a page-fault trap occurs.
OSes: 9. VM 13
Page Fault Steps (Fig. 9.4, p.294)
[Diagram: (1) the process references a page (load M); (2) the invalid entry causes a trap to the OS; (3) the OS finds the page on backing store; (4) the missing page is brought into a free frame; (5) the page table is reset; (6) the instruction is restarted]
OSes: 9. VM 14
Notes

Must distinguish between the two meanings of ‘invalid’:
– an illegal reference → process termination
– page not in memory → swap it in

Free frames in physical memory are registered in a free-frame list
OSes: 9. VM 15
Resuming a Process

When a page fault occurs, the state of the process must be saved
– e.g. record its register values, PC

The saved state allows the process to be resumed from the instruction where it was interrupted.
OSes: 9. VM 16
Resuming an Instruction

A single instruction may involve several pages:
– e.g. ADD A B Result

The need to swap in several pages may require the interruption and resumption of an instruction several times
– not always easy
OSes: 9. VM 17
4. Demand Paging Performance

Let
p = probability of a page fault
ma = memory access time
(typically 10 - 200 nsecs; use 100 nsecs)

Effective Access Time (EAT)
= ((1-p) * ma) + (p * page fault time)
OSes: 9. VM 18
Page Fault TimePage Fault Time
Main components:
(1) service the page fault interrupt
(2) read the page into memory
(3) restart the process

(1) and (3) take ~1-100 microsecs each
continued
OSes: 9. VM 19
(2) Reading in the page ~=
– hard disk latency (~8 msecs) +
seek time (~15 msecs) +
transfer time (~1 msec)
~= 24 msecs

Page fault time = (1) + (2) + (3)
= about 25 msecs
continued
OSes: 9. VM 20
Effective Access Time (in nsecs)
= ((1-p) * 100) + p * 25,000,000
= 100 + p*24,999,900 nsecs
OSes: 9. VM 21
Examples

Assume 1 access out of 1000 causes a page fault.

EAT = 100 + 0.001 * 24,999,900
= about 25,000 nsecs

Slowdown is about 250 times!
continued
OSes: 9. VM 22
What is the page fault rate to make the slowdown less than 10%?

110 > 100 + p*24,999,900
so 10 > p*24,999,900
so p < ~ 1/2,500,000

A page fault must occur less than once in 2,500,000 memory accesses.
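The EAT arithmetic above can be checked with a short sketch; the 100 nsec access time and 25 msec page fault time are the slide's assumed values:

```python
def eat_ns(p, ma_ns=100, fault_ns=25_000_000):
    """Effective access time: EAT = (1 - p) * ma + p * page_fault_time."""
    return (1 - p) * ma_ns + p * fault_ns

# One fault per 1000 accesses gives roughly a 250x slowdown.
print(eat_ns(0.001))          # about 25,100 nsecs

# For a slowdown under 10% (EAT < 110 nsecs), solve 110 > 100 + p * 24,999,900.
p_max = 10 / (25_000_000 - 100)
print(1 / p_max)              # about one fault per 2,500,000 accesses
```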
OSes: 9. VM 23
Observations

The page fault rate must be very low in a demand paging system or the effective access time increases dramatically.

Transfer time is a small part of the total time for reading in a page.
OSes: 9. VM 24
5. Page Replacement

Assume an OS with 4 frames.

A process p0 which would fill 4 pages if loaded into memory all at once may only require at most 2 pages at any one time.
continued
OSes: 9. VM 25
Knowing this, the OS can allocate 2 frames to p0 and give the other 2 frames to another process p1
– increases CPU utilization and throughput

Called over-allocation of memory
OSes: 9. VM 26
Problem

Process usage varies with its data, and so in the future the p0 process might need 3, 4, etc. pages
– but there are no longer any free frames

What to do?
OSes: 9. VM 27
Possible Solutions

Terminate the process
– restart it later

Swap out the process
– swap it back when there are free frames

Page replacement
– find a page that is not required and swap it out
– this frees the frame for the process’ page
OSes: 9. VM 28
Page Replacement Diagram (Fig. 9.6, p.302)
[Diagram: (1) swap out the victim page; (2) change its page-table entry to invalid; (3) swap the desired page into the freed frame; (4) reset the page table for the new page]
OSes: 9. VM 29
Implementation

There must be a page replacement algorithm which selects a victim page to swap out.

Also requires a frame allocation algorithm
– how many frames to allocate to each process?

Two page transfers are required:
– doubles the page fault service time
– makes the EAT worse
OSes: 9. VM 30
Reducing the Overhead

Each page can have a modify (dirty) bit
– indicates if the page has been modified

No need to swap out an unmodified victim page.
OSes: 9. VM 31
6. Page Replacement Algorithms

Aim: to reduce the page fault rate by as much as possible.

Different algorithms are evaluated by calculating the number of page faults they cause on a reference string.

A reference string is a sequence of addresses referenced by a program.
OSes: 9. VM 32
Example

An address reference string:
0100 0432 0101 0612 0102 0103 0104 0101
0611 0103 0104 0101 0610 0102 0103 0104
0101 0609 0102 0105

Assuming 100 bytes/page, then its corresponding page reference string is:
1 4 1 6 1 6 1 6 1 6 1
(11 page references)

continued
OSes: 9. VM 33
How many page faults occur?
– depends on the number of frames available

If there are 3 or more frames:
– 3 page faults occur

If 1 frame:
– 11 page faults occur
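The conversion and the fault counts above can be reproduced with a short sketch (addresses written as plain integers, 100 bytes/page, and consecutive references to the same page collapsed, as in the slide's string):

```python
def page_reference_string(addresses, page_size=100):
    """Map an address reference string to a page reference string,
    collapsing consecutive references to the same page."""
    pages = []
    for a in addresses:
        p = a // page_size
        if not pages or pages[-1] != p:
            pages.append(p)
    return pages

def count_faults(ref, nframes):
    """Count page faults with FIFO eviction (any policy gives the same
    answer here, since at most 3 distinct pages are referenced)."""
    frames, faults = [], 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)
            frames.append(p)
    return faults

addrs = [100, 432, 101, 612, 102, 103, 104, 101, 611, 103,
         104, 101, 610, 102, 103, 104, 101, 609, 102, 105]
ref = page_reference_string(addrs)
print(ref)                    # [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
print(count_faults(ref, 3))   # 3
print(count_faults(ref, 1))   # 11
```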
OSes: 9. VM 34
Page Fault/Frames Curve (Fig. 9.7, p.304)
[Graph: number of page faults (y-axis) falling as the number of frames (x-axis) increases]
OSes: 9. VM 35
Comparison of Algorithms

We will use the reference string:
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

Assume memory contains 3 frames.
OSes: 9. VM 36
Algorithms Examined

6.1. FIFO
6.2. Optimal
6.3. LRU
6.4. LRU Approximation
6.4.1. Additional Reference Bits
6.4.2. Second Chance
– Enhanced Second Chance (X)
Counting (X)
Page Buffering (X)
OSes: 9. VM 37
6.1. FIFO Algorithm

FIFO = first-in, first-out

When a page must be replaced, the oldest one is chosen.

Easy to understand/program
– but performance is not always good
OSes: 9. VM 38
FIFO Example (Fig. 9.8, p.305)
[Table: frame contents after each reference in 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames]
15 page faults
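The 15-fault result can be checked with a minimal FIFO simulation:

```python
from collections import deque

def fifo_faults(ref, nframes):
    """Count page faults under FIFO replacement; returns the final
    frame contents (oldest first) and the fault count."""
    frames, faults = deque(), 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the oldest page
            frames.append(p)
    return frames, faults

REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
frames, faults = fifo_faults(REF, 3)
print(faults)        # 15, as in Fig. 9.8
print(list(frames))  # final frame contents: [7, 0, 1]
```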
OSes: 9. VM 39
Belady’s Anomaly

It is not always true that increased memory (frames) reduces the number of page faults.

Consider the reference string:
1 2 3 4 1 2 5 1 2 3 4 5
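A quick sketch shows the anomaly for FIFO on this string: 3 frames give 9 faults, but 4 frames give 10.

```python
def fifo_faults(ref, nframes):
    """Count page faults under FIFO replacement."""
    frames, faults = [], 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)         # evict the oldest page
            frames.append(p)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))   # 9
print(fifo_faults(ref, 4))   # 10 -- more frames, yet more faults
```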
OSes: 9. VM 40
Graph (Fig. 9.9, p.306)
[Graph: number of page faults (y-axis) against number of frames (x-axis) for FIFO on the string above; the curve rises when moving from 3 to 4 frames]
OSes: 9. VM 41
6.2. Optimal Algorithm

Replace the page that will not be used for the longest period of time in the future
– guaranteed lowest page fault rate

Must know the future behaviour of the process
– very difficult to implement
– useful for comparison with other algorithms
OSes: 9. VM 42
Optimal Example (Fig. 9.10, p.307)
[Table: frame contents after each reference in 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames]
9 page faults
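The optimal policy can be sketched by looking ahead in the reference string for each potential victim:

```python
def optimal_faults(ref, nframes):
    """Count page faults when the victim is the page whose next use
    lies farthest in the future (never used again counts as farthest)."""
    frames, faults = [], 0
    for i, p in enumerate(ref):
        if p in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(p)
        else:
            def next_use(q):
                try:
                    return ref.index(q, i + 1)
                except ValueError:
                    return len(ref)          # never referenced again
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = p
    return faults

REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(REF, 3))   # 9, as in Fig. 9.10
```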
OSes: 9. VM 43
6.3. LRU Algorithm

LRU = Least Recently Used

Replace the page that has not been used for the longest period of time.

Hard to implement efficiently:
– use an extra time-of-use field in a page
replace the oldest page
– store a stack of page numbers
replace the bottom page
OSes: 9. VM 44
LRU Example (Fig. 9.11, p.307)
[Table: frame contents after each reference in 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames]
12 page faults
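LRU can be sketched with a recency-ordered list, i.e. the stack implementation mentioned above:

```python
def lru_faults(ref, nframes):
    """Count page faults under LRU; the list keeps pages ordered from
    least recently used (front) to most recently used (back)."""
    frames, faults = [], 0
    for p in ref:
        if p in frames:
            frames.remove(p)          # will re-append at the recent end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)         # evict the least recently used
        frames.append(p)
    return faults

REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(REF, 3))   # 12, as in Fig. 9.11
```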
OSes: 9. VM 45
6.4. LRU Approximations

Few machines have sufficient hardware to support LRU
– so approximate it with the available hardware

Many systems have a reference bit in each page
– it is set when a page is read or written
continued
OSes: 9. VM 46
Over time, the reference bits of pages show which have been used, which not
– can be used as the basis of an approximate LRU
OSes: 9. VM 47
6.4.1. Additional Reference Bits Algorithm

Use a byte to hold 8 reference bits for each page.

At regular intervals, shift the bits of the ref. byte to the right by 1 bit, discarding the low-order bit, and updating the high-order bit (set to 1 or 0).

The byte holds a history of the last 8 time periods.
OSes: 9. VM 48
Example

The reference byte 00000000 means that the page has not been referenced.

11111111 means its page has been referenced in all 8 time periods.

Convert the byte to an unsigned integer
– the lowest number is the least used, and so should be replaced
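The shifting scheme can be sketched with one byte per page; the four usage patterns below are made up for illustration:

```python
def age(history, referenced):
    """One time period: shift the 8-bit history right, putting this
    period's reference bit into the high-order position."""
    return (history >> 1) | (0x80 if referenced else 0)

hist = {"A": 0, "B": 0}
for ref_a, ref_b in [(1, 1), (1, 0), (0, 0), (1, 0)]:  # hypothetical periods
    hist["A"] = age(hist["A"], ref_a)
    hist["B"] = age(hist["B"], ref_b)

print(f"{hist['A']:08b} = {hist['A']}")   # 10110000 = 176
print(f"{hist['B']:08b} = {hist['B']}")   # 00010000 = 16
print(min(hist, key=hist.get))            # B: lowest value, so the victim
```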
OSes: 9. VM 49
6.4.2. Second Chance Algorithm

If a selected page has 0 in its reference bit then replace it.

If the reference bit is 1 (it has been accessed) then set the bit to 0 and move to the next page
– i.e. give the page a second chance
continued
OSes: 9. VM 50
Can be implemented by treating the page table used by a process as a circular queue
– this version is called the clock algorithm
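A minimal clock sketch; one detail is an assumption on my part: a newly loaded page gets its reference bit set to 1, which is one common variant.

```python
class Clock:
    """Second-chance replacement over a circular queue of frames."""
    def __init__(self, nframes):
        self.pages = [None] * nframes
        self.refbit = [0] * nframes
        self.hand = 0

    def access(self, page):
        if page in self.pages:
            self.refbit[self.pages.index(page)] = 1
            return False                       # hit
        while self.refbit[self.hand]:          # skip referenced pages,
            self.refbit[self.hand] = 0         # giving each a second chance
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page           # found a victim (bit 0)
        self.refbit[self.hand] = 1             # assumed: bit set on load
        self.hand = (self.hand + 1) % len(self.pages)
        return True                            # fault

c = Clock(2)
faults = sum(c.access(p) for p in [1, 2, 1, 3])
print(faults)    # 3
print(c.pages)   # [3, 2] -- page 1 used up its second chance and was evicted
```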
OSes: 9. VM 51
Clock Algorithm Diagram (Fig. 9.13, p.311)
[Diagram: a circular queue of pages with reference bits; the hand clears 1-bits as it advances and stops at the next page with reference bit 0, the victim]
OSes: 9. VM 52
7. Allocation of Frames

How many frames should be allocated to each process?

The minimum no. of frames per process must be enough to hold all the pages that a single instruction might reference
– may be hard to calculate if an instruction uses unusual addressing modes
OSes: 9. VM 53
7.1. Allocation Algorithms

The simplest way to split m frames among n processes is to give each one an equal share:
– m/n frames each
– called an equal allocation

Leftover frames can be placed in the free-frame list.
OSes: 9. VM 54
Proportional Allocation

Allocate frames to each process according to the size of each process.

Let the size of the virtual memory for process pi be si:
– S = total size of virtual memory
= sum( si )
continued
OSes: 9. VM 55
Let the total number of available frames be m.

Allocate ai frames to process pi where
ai = (si / S) * m
OSes: 9. VM 56
Example

Assume:
– m = 62 frames
– p1 uses 10 pages; p2 uses 127 pages

Allocation:
– a1 = (10/(10+127)) * 62 = about 4 frames
– a2 = (127/(10+127)) * 62 = about 57 frames
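The proportional split can be sketched directly from the formula; integer truncation stands in for the slide's "about":

```python
def proportional_allocation(sizes, m):
    """Give process i about (s_i / S) * m of the m frames."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

alloc = proportional_allocation([10, 127], 62)
print(alloc)              # [4, 57]
print(62 - sum(alloc))    # 1 leftover frame for the free-frame list
```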
OSes: 9. VM 57
7.2. Global vs. Local Allocation

Global replacement
– let a process select a replacement frame from the set of all frames
even those of other processes

Local replacement
– let a process select from only its own set of allocated frames
OSes: 9. VM 58
Comparisons

Global replacement changes the number of frames allocated to a process.

A process has no control over its page fault rate when global replacement is used.

Global replacement gives greater system throughput
– preferred for this reason
OSes: 9. VM 59
8. Thrashing

A process is thrashing when it spends more time paging than executing.

One cause is a poor CPU scheduling algorithm which keeps adding processes to memory if the CPU is under-used.
continued
OSes: 9. VM 60
The poor utilization may be caused by processes waiting for their pages to be swapped in.

Adding more processes will decrease performance since they will add to the average waiting time for swapped-in pages.
OSes: 9. VM 61
Thrashing Curve (Fig. 9.14, p.318)
[Graph: CPU utilization (y-axis) against degree of multiprogramming (x-axis); utilization rises, peaks, then collapses as thrashing increases]
OSes: 9. VM 62
8.1. Locality Model

As a process executes, it moves from locality to locality.

A locality is a set of pages actively used together
– closely related to program and data structures
continued
OSes: 9. VM 63
Example: when a function is called, a new locality is created
– it consists of the pages holding the function code, the local variables, some global variables

Page faults are related to locality
– when a new locality is entered, a series of page faults will be triggered
OSes: 9. VM 64
8.2. The Working Set Model

A working set is an approximation to a program’s locality
– the set of pages in the most recent delta page references
OSes: 9. VM 65
Example

Assume delta = 10 memory references.

Fig. 9.16, p.320:
...2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 1 3 2 3 4 4 4 3 4 ...
WS(t1) = {1,2,5,6,7}    WS(t2) = {3,4}
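WS(t) is just the set of pages in the last delta references, which is easy to sketch (t1 and t2 below are index positions chosen to match the figure's two windows):

```python
def working_set(ref, t, delta=10):
    """Pages referenced in the delta references ending at index t."""
    return set(ref[max(0, t - delta + 1) : t + 1])

ref = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4, 4, 4,
       3, 4, 3, 4, 4, 4, 1, 3, 2, 3, 4, 4, 4, 3, 4]
print(working_set(ref, 9))    # t1: {1, 2, 5, 6, 7}
print(working_set(ref, 25))   # t2: {3, 4}
```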
OSes: 9. VM 66
Working Set Size (WSS)

The working set size of a process i (WSSi) is the current number of pages in its working set.

D = current total demand for frames
= sum( WSSi )

If D > total number of available frames then thrashing will occur since at least one of the processes will not have enough frames.
OSes: 9. VM 67
Implementation

Implementing the working set model is hard since it is difficult to track the working set during process execution.

Can approximate it by examining page reference bits at periodic intervals.
OSes: 9. VM 68
8.3. Page Fault Frequency (PFF)

Set an upper and lower bound on the desired page fault rates for each process.

If the upper limit is exceeded, try to allocate the process another frame (or suspend it)
– avoids thrashing
continued
OSes: 9. VM 69
If the process page fault rate drops below the lower limit then remove a frame from the process
– store it for future use
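The PFF policy reduces to a simple control rule; the bound values below are hypothetical tuning parameters, not values from the slides:

```python
def pff_adjust(fault_rate, frames, lower=0.001, upper=0.01):
    """Return the new frame allocation for a process, given its
    measured page fault rate (faults per memory access)."""
    if fault_rate > upper:
        return frames + 1              # too many faults: add a frame
    if fault_rate < lower and frames > 1:
        return frames - 1              # plenty of headroom: release a frame
    return frames

print(pff_adjust(0.02, 5))     # 6
print(pff_adjust(0.0001, 5))   # 4
print(pff_adjust(0.005, 5))    # 5 (within bounds, no change)
```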
OSes: 9. VM 70
Page Fault Frequency Graph (Fig. 9.17, p.322)
[Graph: page fault rate (y-axis) against number of frames (x-axis); above the upper bound, increase the no. of frames; below the lower bound, decrease the no. of frames]
OSes: 9. VM 71
9. Other Considerations

9.1. Prepaging
9.2. Page Size
9.3. Program Structure
9.4. I/O Interlock
9.5. Real-Time Processing
OSes: 9. VM 72
9.1. Prepaging

Bring into memory at one time all the pages that will be needed
– may be used initially and at swap-in time
– can use the working set recorded at swap-out time
– fewer page faults later

Cost of prepaging vs. cost of page faults?
OSes: 9. VM 73
9.2. Page Size

Small page size:
– reduced internal fragmentation
– better locality

Large page size:
– reduced page table size
– reduced I/O time

Trend is towards large page sizes.
OSes: 9. VM 74
9.3. Program Structure

Code restructuring can sometimes improve demand paging
– improved locality → reduced page faults
– not recommended unless the code is highly time/memory critical (e.g. a real-time system)
continued
OSes: 9. VM 75
Data structures spanning several pages have poor locality:
– e.g. large arrays, hash tables, pointer structures

The compiler/loader can reduce paging:
– distinguish between data and code
– avoid placing routines across pages
– reduce inter-page references
OSes: 9. VM 76
9.4. I/O Interlock

Problem: a page fault may swap out a page containing the I/O buffer of a waiting process.

Two common solutions:
– do I/O in system memory
need to copy to/from user memory
– lock the pages involved in the I/O
use a lock bit in each page
OSes: 9. VM 77
9.5. Real-Time Processing

Virtual memory can introduce unexpected delays into the execution of a process as its pages are swapped in/out
– the worst behaviour for real-time processing, which needs dependable execution times

Solaris 2 supports time-sharing and real-time computing:
– has page locks and page swapping preferences
OSes: 9. VM 78
10. Demand Segmentation

Demand paging is the usual way of implementing virtual memory, but it requires hardware support.

If support is not present then demand segmentation can be used (e.g. in OS/2).
continued
OSes: 9. VM 79
Demand segmentation allocates memory in segments, not in pages.

The choice of segments to swap in/out is based on an accessed bit (like the reference bit used in demand paging) stored in each segment descriptor.
continued
OSes: 9. VM 80
OS/2 records segment information in a queue, placing the details of a newly accessed segment at the front.

Swapping out is done on the segment at the end of the queue.