Memory Management - Part 3: Outline
 Segmentation
 User Mode and Memory Management
 Case studies
 o Pentium
 o Unix
 o Linux
 o Windows
Ben-Gurion University Operating Systems, 2013, Meni Adler, Michael Elhadad and Amnon Meisels
Segmentation
 Several address spaces per process
 A compiler needs segments for:
 o source text
 o symbol table
 o constants segment
 o stack
 o parse tree
 o compiler executable code
 Most of these segments grow during execution
[Figure: segments for the symbol table, source text, constant table, parse tree and call stack]
User's view of segments
[Figure: symbol table, source text, constants, parse tree and call stack as separate segments]
Segmented memory allows each table to grow or shrink independently of the other tables.
Segmentation - segment table
Segmentation Hardware
Segmentation vs. Paging

| Consideration | Paging | Segmentation |
| Need the program be aware of the technique? | no | yes |
| How many per-process virtual address spaces? | 1 | many |
| Can the total address space exceed physical memory? | yes | yes |
| Can procedures and data be distinguished? | no | yes |
| Is sharing of procedures among users facilitated? | no | yes |
| Motivation for the technique | Get a larger linear space, eliminate external fragmentation | Programs and data in logically independent address spaces; sharing and protection made simpler |
Segmentation pros and cons
 Advantages:
 o Growing and shrinking independently
 o Sharing between processes is simpler
 o Linking is easier
 o Protection is easier
 Disadvantages:
 o Pure segmentation suffers from external fragmentation
 o Segments may be very large. What if they don't fit into physical memory?
Segmentation Architecture
 Logical address composed of the pair <segment-number, offset>
 Segment table – maps to linear address space; each table entry has:
 o base – contains the starting linear address where the segment resides in memory.
 o limit – specifies the length of the segment.
 Segment-table base register (STBR) points to the segment table's location in memory.
 Segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR.
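The STLR and limit checks above can be sketched in C. The `segment_descriptor` struct and `translate()` helper below are illustrative, not a real hardware interface; a real MMU raises a trap instead of exiting.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical segment-table entry: base and limit, as on the slide. */
struct segment_descriptor {
    uint32_t base;   /* starting linear address of the segment */
    uint32_t limit;  /* length of the segment in bytes */
};

/* Translate <segment-number, offset> into a linear address:
 * STLR-style length check first, then the limit check. */
uint32_t translate(const struct segment_descriptor *table, uint32_t stlr,
                   uint32_t seg, uint32_t offset)
{
    if (seg >= stlr) {                 /* s is legal only if s < STLR */
        fprintf(stderr, "trap: illegal segment number\n");
        exit(1);
    }
    if (offset >= table[seg].limit) {  /* offset must be within the segment */
        fprintf(stderr, "trap: offset beyond segment limit\n");
        exit(1);
    }
    return table[seg].base + offset;   /* linear address */
}
```

For example, with a segment whose base is 0x4000, offset 0x10 translates to linear address 0x4010.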
Segmentation Architecture (Cont.)
 Protection: each segment table entry contains:
 o validation bit = 0 -> illegal segment
 o read/write/execute privileges
 Protection bits are associated with segments; code sharing occurs at the segment level.
 Since segments vary in length, memory allocation is a dynamic storage-allocation problem (external fragmentation problem)
Sharing of segments
Segmentation with Paging
 Segments may be too large to keep in (physical) memory
 Segments cause external fragmentation
 The two approaches may be combined:
 o Segment table
 o Pages inside a segment
 o Solves the fragmentation problems
 Most systems today provide a combination of segmentation and paging
Memory management, part 3: outline
 Segmentation
 User Mode and Memory Management
 Case studies
 o Pentium
 o Unix
 o Linux
 o Windows
User Mode and Memory Management
 Heap Manager
 o User Mode component in charge of handling malloc/free requests by the program.
 o The Heap Manager can trigger system calls to expand/allocate Heap Segments for the process.
 Memory Mapped Files
 Shared Memory
 Locking Memory in RAM
Heap Allocation vs. Growing Process RAM
In User Mode, processes use a library function to allocate and free memory from the Heap. This function is called malloc and is used by new in C++ and internally in Java.
The Heap Allocator must be thread-safe (since the heap is shared by multiple threads) which can be a source of performance issues.
When a process is created, it is allocated a default heap. When malloc cannot find sufficient memory in the current heap, it must invoke a system call to increase the heap. This system call is called brk in Linux/Unix and VirtualAlloc in Windows.
User mode library functions (malloc, new, HeapCreate, HeapAlloc) can only allocate memory inside the current heap. When they need more memory, they all call brk/VirtualAlloc.
See http://www.flounder.com/inside_storage_allocation.htm
Heap Manager
 The Heap Manager is part of the user mode program (not of the kernel). It handles a block of memory (the Heap Segment) provided by the OS and places a header at the top.
 The heap segment header contains at least:
 o the size of the heap segment
 o a CRITICAL_SECTION to provide thread-safe access.
 There can be multiple Heap Segments in one process memory. This may be useful to reduce contention in multi-threaded applications.
 The Heap Manager divides the remaining memory into two classes of storage:
 o Allocated storage: blocks which are "owned" by the calling program
 o Free storage, which is owned by the allocator.
 The Heap Manager is not permitted to do anything to allocated memory, except free it when asked to. It is free to do whatever it wants with the free storage.
 The heap segment contents are divided into memory blocks. Each memory block has a header at the front, which is not part of the visible space a user of the allocator sees, and there is always at least one such memory block.
 When the allocator starts, all the storage is free, so there is a single heap segment header for the entire heap segment, and a single block header saying all the rest is free.
Heap Manager: Allocate
 When the program calls "malloc", the Heap Manager must locate a block large enough for the allocation, mark it as allocated and return a pointer to it.
 Algorithms to locate a block include "first fit", "best fit", "quick fit" and "buddy".
 "First fit" is the simplest: the list of free blocks is scanned until a free block large enough to satisfy the request is found. The block is split into two blocks; the first part is returned as the result of the allocation request, and the rest of it remains as a new, smaller, block on the free list.
 If there is a 32K block at the front and the program asks for a 1K allocation, the allocator does not search for a 1K free block, or for the smallest block not less than 1K (that is "best fit"), but simply splits the first block it finds that is large enough to satisfy the request, hence the name "first fit".
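The scan-and-split step above can be made concrete with a toy first-fit allocator over a fixed arena. `ff_malloc`/`ff_free` and the header layout are illustrative only; a real heap manager also aligns blocks, coalesces on free, and takes a lock for thread safety.

```c
#include <stddef.h>

/* Toy first-fit allocator: each block has a header (size + free flag)
 * in front of the visible space returned to the caller. */
#define ARENA_SIZE 4096

struct block_header {
    size_t size;  /* usable bytes after this header */
    int free;     /* 1 = on the free list, 0 = owned by the program */
};

static unsigned char arena[ARENA_SIZE];
static int initialized = 0;

void *ff_malloc(size_t want)
{
    unsigned char *p = arena;
    if (!initialized) {   /* one block header saying all the rest is free */
        struct block_header *h = (struct block_header *)arena;
        h->size = ARENA_SIZE - sizeof *h;
        h->free = 1;
        initialized = 1;
    }
    while (p < arena + ARENA_SIZE) {
        struct block_header *h = (struct block_header *)p;
        if (h->free && h->size >= want) {
            /* Split if the remainder can hold a header plus some space. */
            if (h->size >= want + sizeof *h + 1) {
                struct block_header *rest =
                    (struct block_header *)(p + sizeof *h + want);
                rest->size = h->size - want - sizeof *rest;
                rest->free = 1;
                h->size = want;
            }
            h->free = 0;
            return p + sizeof *h;  /* visible space starts after the header */
        }
        p += sizeof *h + h->size;  /* advance to the next block */
    }
    return NULL;                   /* no block large enough */
}

void ff_free(void *ptr)
{
    struct block_header *h = (struct block_header *)ptr - 1;
    h->free = 1;  /* ownership returns to the allocator (no coalescing here) */
}
```

After freeing a block, a later request that fits it is satisfied from the same place: that is exactly the "first fit" scan.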
Heap Manager: Free
 The program returns a block to the Heap Manager free list by calling "free" on a pointer previously returned by malloc.
 When a block is freed, it is marked as available. It is erroneous for the application to ever use the pointer to that block again for any purpose.
 The freed block is marked as "free" and ownership returns to the allocator. Under certain conditions, the free block may be coalesced with adjacent free blocks to form larger blocks.
 It is good practice to set pointers to NULL after freeing the object they point to. This avoids a "dangling pointer" error (using a pointer to a block of memory that has been freed), because accessing the NULL pointer will always trigger a hardware fault instead of silently corrupting memory.
 Imagine you have allocated a 100-byte block of storage, freed it, it coalesced with some other block, the space got reallocated, and then you write into the data structure you thought you once had a pointer to. You will corrupt either the control blocks (headers placed before each block in the heap) used by the heap allocator, or data that is legitimately "owned" by some other part of your program.
 When freed blocks cannot be coalesced, the heap becomes "fragmented".
Heap Manager: Compaction
 If two non-adjacent 1K memory blocks are freed, we obtain two separate free blocks of 1K. If the program then asks to allocate a 2K block, the allocation fails: while there are 2K bytes of "free" storage, they are non-adjacent, and therefore useless for this purpose.
 It would be nice to "slide" the allocated block that sits between them down into the position of the second free block; the two 1K free blocks would then become adjacent, coalesce into a single 2K free block, and the allocation would succeed. This is called compaction.
 The problem in languages like C or C++ is that there are pointers to the address of the used block, and it would be necessary to find all the pointers to this block in the process memory, change them to point to the new position, move the block down, and then coalesce. In C/C++, there is no way the Heap Manager can find all the pointers, since they could be in local variables (on the stacks), global variables, heap-allocated structures, and even temporary registers. Such transformations also have to be thread-safe, which makes compaction essentially impossible to achieve.
 Java and C# can manage heap compaction in a thread-safe fashion (as part of Garbage Collection).
Heap Manager: Fragmentation
 Having "holes" in the heap that are not usable to satisfy a request is called heap fragmentation.
 It is often the consequence of allocating lots of different-size objects that have different lifetimes.
 First-fit tends to create high fragmentation. Best-fit creates lower fragmentation, but requires longer search times to locate a good block, which increases multi-thread contention.
 An optimization is to keep the free list sorted in order of ascending block size.
 Another optimization is to use different heap segments for different block sizes.
Heap Manager: Expanding the Heap
 When the Heap Manager cannot find a block to answer an allocation request, it can either:
 o Try to coalesce more free blocks.
 o Return NULL to indicate the malloc operation failed.
 o Ask the OS to expand the Heap Segment.
 The Heap Manager can manage multiple Heap Segments – or expand an existing Heap Segment.
 This is achieved by invoking a system call: brk (or sbrk) in Unix/Linux and VirtualAlloc in Windows.
 brk and VirtualAlloc allocate new pages in the virtual memory of the process and also commit these pages – meaning physical page frames will actually be taken when the pages are touched, which risks page faults. The call also commits space in the page file – if the size of all committed pages is larger than the size of the page file, the system call will fail.
Memory Mapped Files: Definition
Memory Mapped Files are a mechanism that allows programmers to intervene in the way content is mapped into virtual memory.
A memory-mapped file is a segment of virtual memory which has been assigned a direct mapping with a segment of a file.
After it is established, the view on the file through the memory allows user mode applications to view the mapped portion as if it were in RAM.
When using Memory Mapped Files, the programmer intervenes in mapping Pages from the Process Page Table to files in the file system instead of the usual location in the Swap File used by the kernel.
Memory Mapped Files: Usage
Memory Mapped Files are useful when the program accesses large files, especially in read mode. The content of the file is accessed without going through a system call for each read operation.
Useful for accessing files that are larger than physical RAM – access is lazy-loading: the parts of the file that need loading are only loaded when needed.
Memory Mapped Files are used by the Process Loader: when a process is executed (with system calls such as exec), the kernel loads the program executable and libraries using memory mapped files.
Memory Mapped Files can also allow inter-process communication: 2 processes map the same file in their virtual memory.
Memory Mapped Files in Linux
2 system calls: mmap and munmap
#include <sys/mman.h>

void *mmap(void *addr,     // Address where file is mapped
           size_t length,  // Size of the segment mapped in RAM
           int prot,       // Pages can be R/W/X
           int flags,      // SHARED / FIXED ...
           int fd,         // File descriptor
           off_t offset);  // Offset within the file

int munmap(void *addr, size_t length);
In general, the addr argument should be NULL (meaning let the kernel decide where in the process address space the file segment will be mapped).
Memory Mapped Files in Linux

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define handle_error(msg) \
    do { perror(msg); exit(EXIT_FAILURE); } while (0)

int main(int argc, char *argv[])
{
    char *addr;
    int fd;
    struct stat sb;
    off_t offset, pa_offset;
    size_t length;
    ssize_t s;

    if (argc < 3 || argc > 4) {
        fprintf(stderr, "%s file offset [length]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    fd = open(argv[1], O_RDONLY);
    if (fd == -1)
        handle_error("open");

    if (fstat(fd, &sb) == -1)   /* To obtain file size */
        handle_error("fstat");

    offset = atoi(argv[2]);
    /* offset for mmap() must be page aligned */
    pa_offset = offset & ~(sysconf(_SC_PAGE_SIZE) - 1);

    if (offset >= sb.st_size) {
        fprintf(stderr, "offset is past end of file\n");
        exit(EXIT_FAILURE);
    }

    if (argc == 4) {
        length = atoi(argv[3]);
        if (offset + length > sb.st_size)
            length = sb.st_size - offset; /* Cannot display bytes past end of file */
    } else {
        length = sb.st_size - offset;     /* No length arg ==> display to end of file */
    }

    addr = mmap(NULL, length + offset - pa_offset, PROT_READ,
                MAP_PRIVATE, fd, pa_offset);
    if (addr == MAP_FAILED)
        handle_error("mmap");

    s = write(STDOUT_FILENO, addr + offset - pa_offset, length);
    if (s != length) {
        if (s == -1)
            handle_error("write");
        fprintf(stderr, "partial write");
        exit(EXIT_FAILURE);
    }

    exit(EXIT_SUCCESS);
}
Memory Mapped Files and Writing
When the array in RAM mapped to an mmap file is updated by the process, the corresponding Pages are marked as “dirty” by the hardware MMU.
Dirty pages are written back to storage according to the kernel schedule (when the page is swapped out, or pre-emptively).
In all cases, when calling munmap(), all dirty pages are committed back to disk.
Dirty pages can also be explicitly saved to file by invoking the system call msync():
int msync(void *addr, size_t length, int flags);
Memory Mapped Files in Windows
http://msdn.microsoft.com/en-us/library/ms810613.aspx
System calls:
 o CreateFileMapping
 o OpenFileMapping
 o MapViewOfFile
 o UnmapViewOfFile
 o FlushViewOfFile
 o CloseHandle
Shared Memory
Shared Memory is a mechanism to share parts of the virtual memory pages among different processes.
Shared Memory is a method of Inter Process Communication (that only applies when the processes run on the same host).
Shared Memory's benefit is to save memory space: if the same content is shared by multiple processes, it is loaded in RAM only once.
 o Example: Shared Libraries (DLLs) can be loaded once and mapped to shared memory.
 o Closely related to memory mapped files with SHARED mode.
Examples: http://www.cs.cf.ac.uk/Dave/C/node27.html
Shared Memory: Steps
 To use Shared Memory, processes must:
 o Agree on a shared name (key) for the Shared Memory segment.
 o Allocate the Shared Memory segment:
   - The first allocation creates new Virtual Memory pages.
   - Subsequent allocations with the same key return a reference to the existing pages.
 o Attach the Shared Memory segment (by mapping it into the memory of the process).
 o Use a mutex mechanism (semaphore) to synchronize access to the shared memory, to avoid collisions when writing.
 o Detach the Shared Memory segment.
 o One process must de-allocate the segment.
Shared Memory: System Calls
 Allocate Shared Memory: shmget()

    int segment_id = shmget(shm_key, getpagesize(),
                            IPC_CREAT | S_IRUSR | S_IWUSR);

 Attach Shared Memory: shmat()
 Detach Shared Memory: shmdt()
 De-allocate Shared Memory: shmctl()
Shared Memory: Linux Example

// Server
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SHMSZ 27

int main(void)
{
    char c;
    int shmid;
    key_t key;
    char *shm, *s;

    /* We'll name our shared memory segment "5678". */
    key = 5678;

    /* Create the segment. */
    if ((shmid = shmget(key, SHMSZ, IPC_CREAT | 0666)) < 0) {
        perror("shmget");
        exit(1);
    }

    /* Attach the segment to our data space. */
    if ((shm = shmat(shmid, NULL, 0)) == (char *) -1) {
        perror("shmat");
        exit(1);
    }

    /* Put some things into the memory for the other process to read. */
    s = shm;
    for (c = 'a'; c <= 'z'; c++)
        *s++ = c;
    *s = '\0';

    /* Wait until the other process changes the first character of our
     * memory to '*', indicating that it has read what is there. */
    while (*shm != '*')
        sleep(1);
    exit(0);
}

// Client
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <stdlib.h>

#define SHMSZ 27

int main(void)
{
    int shmid;
    key_t key;
    char *shm, *s;

    /* We need to get the segment named "5678", created by the server. */
    key = 5678;

    /* Locate the segment. */
    if ((shmid = shmget(key, SHMSZ, 0666)) < 0) {
        perror("shmget");
        exit(1);
    }

    /* Attach the segment to our data space. */
    if ((shm = shmat(shmid, NULL, 0)) == (char *) -1) {
        perror("shmat");
        exit(1);
    }

    /* Read what the server put in the memory. */
    for (s = shm; *s != '\0'; s++)
        putchar(*s);
    putchar('\n');

    /* Finally, change the first character of the segment to '*',
     * indicating we have read the segment. */
    *shm = '*';
    exit(0);
}
[Would be better to use a semaphore to coordinate client/server]
Locking Memory: Definition
 Some user mode programs may want to prevent swapping out parts of their memory for performance reasons – to prevent the uncertainty of a page fault.
 In Linux:
 o #include <sys/mman.h>
 o int mlock(const void *addr, size_t len);
 o int munlock(const void *addr, size_t len);
 o int mlockall(int flags);
 o int munlockall(void);
 mlock() locks part of the calling process's virtual address space into RAM, preventing that memory from being paged to the swap area.
 munlock() unlocks part of the calling process's virtual address space, so that pages in the specified virtual address range may once more be swapped out if required by the kernel memory manager. Memory locking and unlocking are performed in units of whole pages.
Locking Memory: Usages
 (From the mlock man page) Memory locking has 3 main applications:
 o real-time algorithms
 o high-security data processing
 o long-running high-performance servers
Real-time applications require deterministic timing, and, like scheduling, paging is one major cause of unexpected program execution delays. Real-time applications will usually also switch to a real-time scheduler with sched_setscheduler.
Cryptographic security software may handle critical data like passwords or secret keys as data structures. As a result of paging, these secrets could be transferred onto disk, where they might be accessible to the enemy long after the security software has erased the secrets in RAM and terminated.
Server programs (for example, SQL Servers), often use Memory Locking to obtain predictable high performance.
Memory Locking: Real-time considerations
https://writer.zoho.com/public/rreginelli/Chapter-4---Memory-Locking-R21/noband
 A real-time process should lock down its memory and not grow.
 1. Problem: Paging can impede a real-time process from meeting critical deadlines.
    Solution: A real-time process should lock down its entire address space using mlockall().
    Discussion: Locking down memory prevents the operating system from discarding a process's pages from main memory. The process address space of a real-time application should always be resident in main memory, because the cost, in time, of retrieving a page from backing store is prohibitively expensive (10s of msec) and unpredictable.
 2. Problem: malloc calls during real-time processing induce unpredictable delays from the memory allocator and the paging subsystem.
    Solution: Pre-allocate and lock down all memory needed by the real-time application before real-time processing begins.
    Discussion: Dynamic memory allocation can impact timing in two ways:
    a. The Linux heap manager (implemented in a user mode library, and accessed through functions such as malloc, new, and free) does not run in constant time, because it uses a complex heap management algorithm whose runtime depends on the current heap fragmentation level and on the thread contention level (the heap manager must be thread-safe and uses mutexes).
    b. Dynamic memory allocation can induce a page fault when the heap does not have sufficient memory to fulfill a request. If the heap is full, the process will ask the kernel to increase its address space.
 The delay due to the page fault occurs at one of two points:
 a. If the process is locked in memory with the MCL_FUTURE flag, a page fault is generated immediately. MCL_FUTURE ensures that any growth in the process address space is also locked into memory.
 b. If the process is not locked into memory, the page fault is deferred until later. This is known as demand paging. Under demand paging, the memory allocator provisions virtual space (creates an entry in the process page table), but not physical space (no physical page frame is allocated), until the new memory is accessed for the first time.
 Whether the penalty of the page fault is incurred immediately or left pending, the unexpected delay can jeopardize the timely completion of critical real-time processing.
Risk: memory locking leads to hard commitment. When multiple processes lock their pages in RAM, swapping is impossible, and the OS can only run processes that fit in RAM together.
Memory Locking: mlockall()
 mlockall() locks all pages mapped to the address space of the calling process into main memory.
 o Any shared library used by the application is also locked into memory in its entirety.
 o This prevents any part of the process from being paged out of main memory.
 All pages are guaranteed to be resident in main memory upon successful return from mlockall(). All pages mapped to the calling process remain resident in main memory until the process either exits or calls munlockall().
 mlockall() gets flags as input:
 o MCL_CURRENT: lock all pages mapped to the process at the time mlockall() is called. If the process address space grows later, through dynamic memory allocation, those new pages will not be locked into memory.
 o MCL_FUTURE: any new pages provisioned and mapped to the process after the call to mlockall() are also locked into RAM. Therefore, if the process's address space grows through dynamic memory allocation, all new pages will automatically be locked into main memory as soon as they are allocated.
Memory Locking: mlockall()
[Figure: with MCL_FUTURE, newly allocated data is automatically locked.]
Memory Locking: Example 1

#include <stdio.h>
#include <sys/mman.h>

#define SIZE 500

int main(void)
{
    /* A big record */
    struct record {
        char name[128];
        int count;
        unsigned char dummy_array[1024000];
        struct record *next;
    };

    /* An array of ~500MB */
    struct record *a[SIZE];
    int i;

    /* Preallocate memory required by the application before
     * real-time processing begins */
    /* Insert code here */

    /* Lock down all pages mapped to the process */
    printf("Locking down process\n");
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
        perror("mlockall");
        return 1;
    }

    /* Begin real-time processing */
    /* Insert RT code here */

    /* Release all memory belonging to this process */
    printf("Finished, cleaning up\n");
    munlockall();
    return 0;
}
Monitoring Paging & Scheduling
The system call getrusage() collects information about paging and scheduling during program execution.
#include <sys/time.h>
#include <sys/resource.h>

int getrusage(int who, struct rusage *usage);
getrusage() returns resource usage measures for who, which can be one of the following:
 o RUSAGE_SELF: return resource usage statistics for the calling process, which is the sum of resources used by all threads in the process.
 o RUSAGE_CHILDREN: return resource usage statistics for all children of the calling process that have terminated and been waited for. These statistics will include the resources used by grandchildren, and further removed descendants, if all of the intervening descendants waited on their terminated children.
 o RUSAGE_THREAD (since Linux 2.6.26): return resource usage statistics for the calling thread.
The resource usages are returned in the structure pointed to by usage, which has the following form:
struct rusage {
    struct timeval ru_utime; /* user CPU time used */
    struct timeval ru_stime; /* system CPU time used */
    long ru_maxrss;          /* maximum resident set size */
    long ru_ixrss;           /* integral shared memory size */
    long ru_idrss;           /* integral unshared data size */
    long ru_isrss;           /* integral unshared stack size */
    long ru_minflt;          /* page reclaims (soft page faults) */
    long ru_majflt;          /* page faults (hard page faults) */
    long ru_nswap;           /* swaps */
    long ru_inblock;         /* block input operations */
    long ru_oublock;         /* block output operations */
    long ru_msgsnd;          /* IPC messages sent */
    long ru_msgrcv;          /* IPC messages received */
    long ru_nsignals;        /* signals received */
    long ru_nvcsw;           /* voluntary context switches */
    long ru_nivcsw;          /* involuntary context switches */
};
Monitoring Paging: Example 2a

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/resource.h>

#define SIZE 500

int main(void)
{
    struct record {
        char name[128];
        int count;
        unsigned char dummy_array[1024000];
        struct record *next;
    };
    struct record *a[SIZE];
    int i;
    struct rusage usage1, usage2, usage3, usage4;

    getrusage(RUSAGE_SELF, &usage1);

    /* Part 1: Lock down memory mapped to the process address space */
    if (mlockall(MCL_CURRENT) < 0) {
        perror("mlockall");
        return 1;
    }

    /* Collect stats that show the effect of mlockall on paging */
    getrusage(RUSAGE_SELF, &usage2);

    /* Part 2: Dynamically allocate memory for records */
    for (i = 0; i < SIZE; i++) {
        if ((a[i] = (struct record *)malloc(sizeof(struct record))) == NULL) {
            perror("malloc failed");
            return 1;
        }
    }
    // continued...
Monitoring Paging: Example 2b

    /* Collect stats that show the effect of dynamic memory allocation */
    getrusage(RUSAGE_SELF, &usage3);

    /* Part 3: Initialize allocated records */
    for (i = 0; i < SIZE; i++) {
        a[i]->count = 0;
        a[i]->dummy_array[0] = 0;
        a[i]->next = NULL;
    }

    /* Collect stats that show the effect of initializing newly allocated memory */
    getrusage(RUSAGE_SELF, &usage4);

    /* Print out paging statistics */
    printf("Before mlockall\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage1.ru_minflt, usage1.ru_majflt);
    printf("After mlockall\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage2.ru_minflt, usage2.ru_majflt);
    printf("After dynamic memory allocation with malloc\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage3.ru_minflt, usage3.ru_majflt);
    printf("After initializing all allocated memory\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage4.ru_minflt, usage4.ru_majflt);

    /* Cleanup: free all memory allocations */
    for (i = 0; i < SIZE; i++)
        free(a[i]);
    return 0;
}
Monitoring Paging: Example Run 1
 The program has 3 steps:
 o Locking down the application's memory with mlockall()
 o Dynamically allocating memory
 o Initializing the dynamically allocated memory

 Execution with mlockall(MCL_CURRENT):

 Before mlockall
 minflt 144    majflt 0
 After mlockall
 minflt 421    majflt 0
 After dynamic memory allocation with malloc
 minflt 921    majflt 0
 After initializing all allocated memory
 minflt 1421   majflt 0
Monitoring Paging: Example Run 2
 Execution with mlockall(MCL_CURRENT | MCL_FUTURE):

 Before mlockall
 minflt 144      majflt 0
 After mlockall
 minflt 421      majflt 0
 After dynamic memory allocation with malloc
 minflt 125921   majflt 0
 After initializing all allocated memory
 minflt 125921   majflt 0
Monitoring Paging: Analyze Results 1
With MCL_CURRENT: only the application's current pages are locked into memory.
 getrusage() reports that there were 144 page reclaims after the start of the application, which represents the number of page reclaims required to bring in only the working set of pages for the application.
 After the call to mlockall, there are 277 additional page reclaims (for a total of 421 since the program's start). At this point, all current data and executable pages are locked into memory. This includes any libraries that are referenced by the application. Libraries are locked down in their entirety, and a large number of the page reclaims seen here were used to page in and lock down parts of libraries not already in memory.
 After dynamically allocating memory, there are 500 more page reclaims (for a total of 921 since the program's start). This memory is not locked down, because mlockall was called with the MCL_CURRENT flag, which specifies that only memory already allocated at the time of the call be locked down. This memory will therefore be demand paged: physical pages are mapped to the virtual addresses only when the memory is accessed for the first time. Memory is required, however, to hold the size information fields that are associated with each chunk of memory, and this is responsible for the 500 page reclaims generated in this portion of the program.
 Finally, there are 500 more page reclaims (for a total of 1421 since the program's start) after pieces of the allocated memory are initialized. Physical pages are only allotted to initialized memory. The rest of the dynamically allocated memory remains only in virtual space, with no physical pages mapped to it.
Monitoring Paging: Analyze Results 2
With MCL_CURRENT | MCL_FUTURE:

 Before mlockall
 minflt 144      majflt 0
 After mlockall
 minflt 421      majflt 0
 After dynamic memory allocation with malloc
 minflt 125921   majflt 0
 After initializing all allocated memory
 minflt 125921   majflt 0
Here, the number of page reclaims triggered by the dynamic memory allocation is substantial: physical pages are allotted for all the memory that was dynamically allocated, whether it is used or not.
This example demonstrates the power of mlockall(), but also serves as a strong caution: Be sure to dynamically allocate only memory that will be used, otherwise valuable physical memory will be needlessly tied up, potentially starving other system processes.
Memory management, part 3: outline
 Segmentation
 User Mode and Memory Management
 Case studies
 o Pentium
 o Unix
 o Linux
 o Windows
Pentium: Segmentation + Paging
 Segmentation with or without paging is possible
 16K segments per process, segment size up to 4G 32-bit words
 Page size is 4K
 A single global GDT; each process has its own LDT
 6 segment registers may store (16-bit) segment selectors: CS, DS, SS, ...
 When a selector is loaded into a segment register, the corresponding descriptor is stored in microprogram registers
Pentium segment selector (16 bits):
 o bits 15-3: Index (13 bits)
 o bit 2: 0 = GDT / 1 = LDT
 o bits 1-0: Privilege level (0-3)
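The field layout above can be checked with a few shifts; `decode_selector()` is an illustrative helper, not a hardware interface:

```c
#include <stdint.h>

/* Decode the three fields of a 16-bit Pentium segment selector:
 * bits 15..3 = descriptor index, bit 2 = table indicator
 * (0 = GDT, 1 = LDT), bits 1..0 = requested privilege level. */
struct selector_fields { unsigned index, table, rpl; };

struct selector_fields decode_selector(uint16_t sel)
{
    struct selector_fields f;
    f.index = sel >> 3;        /* 13-bit index into GDT or LDT */
    f.table = (sel >> 2) & 1;  /* 0 = GDT, 1 = LDT */
    f.rpl   = sel & 3;         /* privilege level 0-3 */
    return f;
}
```

For example, selector 0x000F decodes to index 1, LDT, privilege level 3.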
Pentium- segment descriptors
Pentium code segment descriptor. Data segments differ slightly
Pentium - Forming the linear address
The segment descriptor is held in an internal (microcode) register
A trap occurs if the selector is zero or the segment is paged out; otherwise:
o The offset is checked against the limit field of the descriptor (trap if it exceeds the limit)
o The base field of the descriptor is added to the offset, forming the linear address (then translated by paging, 4K page size)
Intel Pentium address translation
[Figure: the 32-bit linear address is split 10 + 10 + 12 – a 10-bit page directory index, a 10-bit page table index, and a 12-bit offset; one second-level page table can cover up to 4 MB of physical address space]
Segmentation
User Mode and Memory Management
Case studies
o Pentium
o Unix
o Linux
o Windows
Memory management, part 3: outline
UNIX process address space
[Figure: processes A and B each have a text segment starting at address 0 (up to 8K), initialized data and BSS above it (up to 20K), and a stack growing down from the stack pointer at the top of the address space; the segments of both processes map onto physical memory alongside the OS]
Unix memory management sys calls
Not specified by POSIX
Common Unix system calls:
o s=brk(addr) – change data segment size. (addr specifies the first address following new size)
o a=mmap(addr,len,prot,flags,fd,offset) – map (open) file fd starting from offset in length len to virtual address addr (0 if OS is to set address)
o s=munmap(addr,len) – unmap a file (or a portion of it)
Memory-mapped file
[Figure: as above, processes A and B each have text, data, BSS and a stack; a memory-mapped file now appears in both address spaces, and the two mappings share the same region of physical memory]
Unix 4BSD memory organization
[Figure: main memory holds the kernel and page frames 0..3; a core map, with one entry per page frame, describes each frame]
Each core map entry contains:
o Index of next / previous entry (used when the page frame is on the free list)
o Location in backing store: disk device number, disk block number, block hash code
o Location in the process: index into the proc table, segment (text/data/stack), offset within segment
o Misc. flags: free, in transit, wanted, locked
Unix Page Daemon
Unix attempts to keep a pool of free pages
The page daemon – a process that sleeps most of the time – frees page frames pro-actively
It is awakened periodically to inspect the state of memory: if fewer than ¼ of the page frames are free, it frees page frames
This performs better than evicting pages only when needed (and writing the modified ones to disk in a hurry)
Uses a global clock algorithm – a two-handed clock
Page replacement – Unix BSD
A two-handed clock algorithm: the first hand clears the reference bit, and the second hand frees pages whose bit is still clear. The "angle" between the hands is a parameter – a small angle leaves only "busy" pages in memory
o If a page is referenced before the second hand reaches it, it will not be freed
Page replacement – Unix, cont'd
If thrashing is detected, the swapper process moves processes to secondary storage
o Remove processes idle for 20 sec or more
o If none – swap out the oldest of the 4 largest processes
Who gets swapped back in is a function of:
o How long the process has been out of memory
o The size of the process
Segmentation
User Mode and Memory Management
Case studies
o Pentium
o Unix
o Linux
o Windows
Memory management, part 3: outline
Linux 32 bits processes
Each process gets 3GB of virtual memory
The remaining 1GB is for the kernel and page tables
The virtual address space is composed of areas sharing the same protection and paging properties (pageable or not, direction of growth)
Each process has a linked list of areas, sorted by virtual address (text, data, memory-mapped files, …)
Linux page tables organization (32 bits)
32 bit architecture: Some pages 4K / Some pages 2M http://linux-mm.org/PageTableStructure
Linux page tables organization (64 bits)
64 bit architecture: Some pages 4K / Some pages 2M http://linux-mm.org/PageTableStructure
Linux main memory management
The kernel is never swapped
The rest: user pages, file system buffers, variable-size device drivers
The buddy algorithm is used. In addition:
o Linked lists of same-size free blocks are maintained
o To reduce internal fragmentation, a second allocation scheme (the slab allocator) manages smaller units inside buddy blocks
Demand paging (no pre-paging)
Dynamic backing store management
Linux page replacement algorithm
Variant of clock algorithm
Based on aging; pages are kept on either an active list or an inactive list
Segmentation
User Mode and Memory Management
Case studies
o Pentium
o Unix
o Linux
o Windows
Memory management, part 3: outline
Win 2000: virtual address space
Virtual address space layout for 3 user processes
White areas are private per process
Shaded areas are shared among all processes
Win 2000: memory mgmt. concepts
Each virtual page can be in one of the following states:
o Free/invalid – currently not in use; a reference causes an access violation
o Committed – code/data has been mapped to the virtual page
o Reserved – allocated to a thread but not yet mapped. When a new thread starts, 1MB of process space is reserved for its stack
o Readable/writable/executable
Dynamic (just-in-time) backing store management
o Improves performance by writing modified data in chunks
o Up to 16 pagefiles
Supports memory-mapped files
Can use 4K or 4M pages
Executable access patterns are recorded for SuperFetch prepaging
Win 2000: page replacement alg.
Processes have working sets, defined by two parameters – the minimal and maximal # of pages
The WS data structure of a process is updated at each page fault:
o Page fault and WS < Min: add the page to the WS
o Page fault and WS > Max: remove a page from the WS
If a process thrashes, its working-set size is increased
Memory is managed by keeping a number of free pages, a complex function of memory use, at all times
The balance set manager runs every second and decides whether to free pages:
o Pages surplus to the WS are removed from a process (large background processes before small foreground ones…)