Memory Management
Fixed Partitions
[Figure: 128K of memory divided into partitions at 0K, 4K, 16K, 64K, and 128K; the legend marks free space and internal fragmentation (which cannot be reallocated).]
Divide memory into n (possibly unequal) partitions.
Problem: fragmentation.
Fixed Partition Allocation: Implementation Issues
- Separate input queue for each partition: inefficient utilization of memory when the queue for a large partition is empty but the queue for a small partition is full; small jobs have to wait to get into memory even though plenty of memory is free.
- One single input queue for all partitions: allocate a partition where the job fits, by best fit or first available fit.
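The two single-queue policies can be sketched as follows; this is a minimal illustration, and the function names, partition sizes, and job sizes are hypothetical.

```python
# Sketch of the two single-queue fixed-partition allocation policies.
# Each partition is (size in KB, currently free?).

def first_available_fit(partitions, job_size):
    """Return the index of the first free partition the job fits in."""
    for i, (size, free) in enumerate(partitions):
        if free and job_size <= size:
            return i
    return None  # no fit: the job must wait

def best_fit(partitions, job_size):
    """Return the index of the smallest free partition the job fits in."""
    best = None
    for i, (size, free) in enumerate(partitions):
        if free and job_size <= size and (best is None or size < partitions[best][0]):
            best = i
    return best

parts = [(4, True), (16, True), (64, True)]
print(first_available_fit(parts, 3))  # -> 0 (the 4K partition)
print(best_fit(parts, 10))            # -> 1 (the 16K partition, not 64K)
```

Best fit wastes less space inside the chosen partition; first available fit is faster but may put a small job in a large partition.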
Relocation
- Correct the starting address when a program starts in memory; different jobs will run at different addresses.
- Logical (virtual) addresses: the logical address space has range 0 to max.
- Physical addresses: the physical address space has range R+0 to R+max for base value R.
- The memory-management unit (MMU) maps virtual to physical addresses via a relocation register; the mapping requires hardware (the MMU with its base register).
Relocation Register
[Figure: the CPU issues a logical address MA; the MMU adds the base register value BA; the resulting physical address MA+BA is used to access memory.]
6
Question 1 - Protection
Problem: How to prevent a malicious process to write or
jump into other user's or OS partitions
Solution: Base bounds registers
Base and Bounds Registers
[Figure: the CPU issues a logical address LA; the hardware compares it with the bounds register (limit) and raises a fault if it exceeds the limit; otherwise it adds the base address BA, giving the physical address PA = MA + BA.]
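The check-then-relocate step in the figure can be sketched in a few lines; the function and exception names are hypothetical, and the base/bounds values are illustrative.

```python
# Minimal sketch of base-and-bounds address translation.

class ProtectionFault(Exception):
    """Raised when a logical address falls outside the partition."""

def translate(logical_addr, base, bounds):
    """Compare against the bounds (limit) register, then add the base."""
    if logical_addr >= bounds:
        raise ProtectionFault(f"address {logical_addr:#x} outside limit {bounds:#x}")
    return base + logical_addr  # physical address = MA + BA

# Job loaded at physical 0x4000 with a 16K partition:
print(hex(translate(0x0100, base=0x4000, bounds=0x4000)))  # -> 0x4100
try:
    translate(0x5000, base=0x4000, bounds=0x4000)
except ProtectionFault:
    print("fault: access outside partition")
```

The comparison happens before the addition, so a process can never even form a physical address outside its own partition.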
Solve Fragmentation with Compaction
[Figure: successive snapshots of memory holding Monitor, Job 3, Job 5, Job 6, Job 7, and Job 8 with free holes between them; jobs are shifted down one at a time until the free space is coalesced into a single block.]
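The effect of compaction can be sketched as follows; the memory-map representation and job names are illustrative, not the slides' exact layout.

```python
# Sketch of compaction: slide all allocated blocks toward low memory so the
# free holes coalesce into a single block at the top.

def compact(memory_map):
    """memory_map: ordered list of (name, size); 'Free' entries are holes."""
    jobs = [(name, size) for name, size in memory_map if name != "Free"]
    free = sum(size for name, size in memory_map if name == "Free")
    return jobs + [("Free", free)]

before = [("Monitor", 10), ("Job 3", 20), ("Free", 15),
          ("Job 5", 10), ("Free", 5), ("Job 7", 25)]
print(compact(before))  # the two holes (15 + 5) merge into one 20-unit block
```

Note what the one-liner hides: in a real system every moved job's data must be copied and its base register updated, which is exactly the overhead the next slide mentions.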
Storage Management Problems
- Fixed partitions suffer from internal fragmentation.
- Variable partitions suffer from external fragmentation.
- Compaction suffers from overhead.
Paging
- Provide the user with virtual memory that is as big as the user needs.
- Store virtual memory on disk.
- Cache the parts of virtual memory being used in real memory.
- Load and store cached virtual memory without user program intervention.
17
Benefits of Virtual Memory Use secondary storage($)
Extend DRAM($$$) with reasonable performance Protection
Programs do not step over each other Convenience
Flat address space Programs have the same view of the world Load and store cached virtual memory without user
program intervention Reduce fragmentation:
make cacheable units all the same size (page)
Paging: Example Walkthrough
[Figures, slides 18-23: virtual memory of 8 pages stored on disk; real memory is a cache of 4 frames; a page table records which VM page occupies which frame.
- A request for an address within virtual memory page 3 loads page 3 into a frame and updates the page table.
- Requests within pages 1, 6, and 2 load those pages into the remaining frames.
- A request within page 8 finds the cache full: page 1 is stored back to disk, and page 8 is loaded into the freed frame.]
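The walkthrough above can be simulated with a toy demand pager. This is a sketch with hypothetical names; note the slides do not state a replacement policy (they happen to evict page 1), so FIFO is assumed here, which evicts page 3 instead.

```python
# Toy demand-paging simulation: 8 virtual pages on disk, 4 real-memory
# frames, FIFO eviction when the cache is full (FIFO is an assumption).
from collections import OrderedDict

class PagedMemory:
    def __init__(self, num_frames=4):
        self.num_frames = num_frames
        self.page_table = OrderedDict()  # virtual page -> frame, insertion order

    def access(self, page):
        if page in self.page_table:
            return self.page_table[page]              # hit: already cached
        if len(self.page_table) >= self.num_frames:
            victim, frame = self.page_table.popitem(last=False)
            # here the victim's contents would be written back to disk
        else:
            frame = len(self.page_table) + 1          # next free frame
        self.page_table[page] = frame                 # load page into frame
        return frame

mem = PagedMemory()
for page in [3, 1, 6, 2, 8]:       # the request sequence from the slides
    mem.access(page)
print(sorted(mem.page_table))      # -> [1, 2, 6, 8]
```

All of this happens in the page-fault handler, invisible to the user program, which is exactly the "without user program intervention" property claimed earlier.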
Page Mapping Hardware
[Figure: a virtual address (P,D) is split into page number P and displacement D; the page table maps P -> F; the physical address is (F,D), so Contents(P,D) = Contents(F,D).]
Page Mapping Hardware: Example
[Figure: page size 1000; number of possible virtual pages 1000. Virtual address 004006 splits into page 004 and displacement 006; the page table maps 4 -> 5, giving physical address 005006, so Contents(4006) = Contents(5006).]
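The decimal example above is just division and remainder; a minimal sketch (the page-table contents are the single mapping from the figure):

```python
# Translate a virtual address with a decimal page size of 1000, as in
# the figure: page = addr // 1000, displacement = addr % 1000.

PAGE_SIZE = 1000
page_table = {4: 5}  # virtual page 4 maps to frame 5

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(4006))  # -> 5006
```

Real hardware uses a power-of-two page size so the split is a bit-shift and mask rather than a division, as the next slide notes.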
26
Paging Issues
size of page is 2n , usually 512, 1k, 2k, 4k, or 8k E.g. 32 bit VM address may have 220 (1 meg)
pages with 4k (212 ) bytes per page
Paging Issues
Question: with a physical memory size of 32 MB (2^25 bytes) and a page size of 4K bytes, how many frames? 2^25 / 2^12 = 2^13.
- The page table base register must be changed for a context switch. Why? Each process has a different page table.
- No external fragmentation; internal fragmentation on the last page only.
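The frame-count arithmetic in the question can be checked directly:

```python
# Frame count = physical memory size / page size, all powers of two.
phys_mem = 2**25      # 32 MB
page_size = 2**12     # 4K bytes
num_frames = phys_mem // page_size
print(num_frames, num_frames == 2**13)  # -> 8192 True
```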
Paging: Caching the Page Table
- The page table can be cached in registers, with the page table itself kept in memory at the location given by a page table base register.
- The page table base register is changed at context switch time.
Paging Implementation Issues
- The caching scheme can use associative registers, look-aside memory, or content-addressable memory: the TLB.
- Page address cache (TLB) hit ratio: the percentage of time a page is found in the associative memory.
- If a page is not found in the associative memory, it must be loaded from the page table, which requires an additional memory reference.
Page Mapping Hardware with Associative Look-Up
[Figures, slides 30-32: the virtual address (P,D) goes to an associative look-up (the TLB, organized by LRU) and to the page table; the associative look-up is checked first. Example: for virtual address (004,006), a TLB miss falls through to the page table, which maps 004 -> 009, giving physical address (009,006); the entry 4 -> 9 is then inserted into the TLB so a later access to page 4 hits directly.]
TLB Function
- When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel).
- If the match is valid, the frame is taken from the TLB without going through the page table.
- If the match is not valid, the MMU detects the miss and does an ordinary page table lookup. It then evicts one entry from the TLB and replaces it with the new entry, so that the next time that page is found in the TLB.
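The hit/miss/evict behavior above can be sketched sequentially (real hardware compares all entries in parallel). The class name, capacity, and mappings are hypothetical; LRU ordering follows the earlier figures.

```python
# Sketch of TLB behavior: check the TLB first; on a miss, do an ordinary
# page table lookup, evict the least recently used entry, and cache the new one.
from collections import OrderedDict

class TLB:
    def __init__(self, page_table, capacity=4):
        self.page_table = page_table      # full page table kept "in memory"
        self.capacity = capacity
        self.entries = OrderedDict()      # page -> frame, in LRU order

    def lookup(self, page):
        """Return (frame, hit?)."""
        if page in self.entries:
            self.entries.move_to_end(page)    # refresh LRU position
            return self.entries[page], True
        frame = self.page_table[page]         # miss: extra memory reference
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[page] = frame            # so next time it hits
        return frame, False

tlb = TLB({4: 9, 7: 2}, capacity=2)
print(tlb.lookup(4))  # -> (9, False): first access misses
print(tlb.lookup(4))  # -> (9, True):  second access hits in the TLB
```

The hit ratio defined on the previous slide is just the fraction of `lookup` calls that return `True` here.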
Multilevel Page Tables
- Since the page table can be very large, one solution is to page the page table itself.
- Divide the page number into: an index into a page table of second-level page tables, and a page within a second-level page table.
- Advantage: no need to keep all the page tables in memory all the time; only the mappings for recently accessed memory need to be kept in memory, and the rest can be fetched on demand.
Example: Addressing on a Multilevel Page Table System
- A logical address (on 32-bit x86 with a 4K page size) is divided into a 20-bit page number and a 12-bit page offset.
- The page number is further divided into a 10-bit index into the first-level page table and a 10-bit index into the second-level page table.
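The 10/10/12 split above is pure bit manipulation; a minimal sketch (the sample address is arbitrary):

```python
# Split a 32-bit logical address into first-level index, second-level
# index, and page offset, as in the x86 example: 10 + 10 + 12 bits.

def split(addr):
    offset = addr & 0xFFF           # low 12 bits: offset within the page
    second = (addr >> 12) & 0x3FF   # next 10 bits: second-level index
    first = (addr >> 22) & 0x3FF    # top 10 bits: first-level index
    return first, second, offset

print(split(0x00403004))  # -> (1, 3, 4)
```

Translation walks first-level entry 1 to find a second-level table, takes entry 3 there to find the frame, and adds offset 4.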