OSIISEANS


    1. What are race conditions?

    Race conditions are problems that occur when several processes share the same data or file and the final result depends on the order in which the processes run. Consider a printing queue that maintains the list of all files to be printed, shared by two processes P A and P B.

    The queue has two variables: "in", pointing to the next free slot (N.F.S), and "out", pointing to the next job to be printed. Initially in = 7 and out = 4. P A is the first process to use the queue: it reads "in" into its local variable, N.F.S = 7, but before it can place its file into the queue an interrupt occurs. Now P B runs. P B also sets its local variable N.F.S = 7, places its file in slot 7, and updates in = N.F.S + 1 = 8. When P A resumes, it places its file into slot 7, overwriting the entry that P B already stored there, so P B's file will never be printed. Such a situation is an example of a race condition.
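    The lost update can be reproduced with two ordinary threads. The sketch below is an added illustration, not part of the original notes: a shared index "in" plays the role of the queue's next-free-slot pointer, and because each thread reads the index and writes it back in two separate steps, updates are lost when the threads interleave.

    /* Added sketch: two threads share an index the way P A and P B share the
     * print queue's next-free-slot.  Compile with: gcc race.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define JOBS_PER_THREAD 100000

    static long in = 0;               /* next free slot, shared by both "processes" */

    static void *submit_jobs(void *arg)
    {
        for (int i = 0; i < JOBS_PER_THREAD; i++) {
            long slot = in;           /* read the shared index (like N.F.S = in)   */
            /* ... a preemption can occur here ...                                 */
            in = slot + 1;            /* write back; may overwrite the other's update */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, submit_jobs, NULL);
        pthread_create(&b, NULL, submit_jobs, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Would be 200000 if the updates never interleaved; often prints less. */
        printf("in = %ld (expected %d)\n", in, 2 * JOBS_PER_THREAD);
        return 0;
    }

    Protecting the read-modify-write with a lock or semaphore, as in the following questions, removes the race.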

    2. Drawbacks of monitors

    A process might access a resource without first gaining access permission to the resource. A process might never release a resource once it has been granted access to the resource. A process might attempt to release a resource that it never requested. A process might request the same resource twice (without first releasing the resource).

    3. What do you mean by semaphore?

    Semaphore: a synchronization variable that takes on non-negative integer values (invented by Dijkstra).

    P(semaphore): an atomic operation that waits for the semaphore to become positive, then decrements it by 1.



    V(semaphore): an atomic operation that increments the semaphore by 1.

    The names come from the Dutch proberen (test) and verhogen (increment). Semaphores are simple and elegant and allow the solution of many interesting problems. They do a lot more than just mutual exclusion.
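    As a concrete illustration (an addition, assuming POSIX semaphores from <semaphore.h> and pthreads are available), sem_wait() plays the role of P and sem_post() the role of V; a semaphore initialized to 1 acts as a mutual-exclusion lock around a shared counter.

    /* Added sketch using POSIX semaphores: sem_wait() is P, sem_post() is V. */
    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t mutex;                 /* binary semaphore, initial value 1 */
    static long balance = 0;            /* shared data protected by mutex    */

    static void *deposit(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);           /* P(mutex): wait until positive, then decrement */
            balance++;                  /* at most one thread executes here   */
            sem_post(&mutex);           /* V(mutex): increment, wake a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&mutex, 0, 1);         /* 0 = shared between threads, initial value 1 */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, deposit, NULL);
        pthread_create(&t2, NULL, deposit, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("balance = %ld\n", balance);   /* always 200000 with the semaphore */
        sem_destroy(&mutex);
        return 0;
    }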

    4. What is external fragmentation?

    As processes are loaded into and removed from memory, the free memory space is broken into pieces. External fragmentation occurs when enough memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes.

    External fragmentation: the left-over holes are too small to hold a new program even though there may be enough total free memory to do so.

    Book-keeping for holes can be expensive: a 1-byte hole needs an entire record in the table. To cut down on this space overhead, memory is allocated in fixed-size blocks.
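    A small added sketch of the situation (the hole sizes are made up for illustration): the free list below holds 300 KB in total, yet a 180 KB request cannot be satisfied because no single hole is large enough.

    /* Added illustration: first-fit search over a list of free holes.  Total
     * free memory is 300 KB, but no single hole can hold 180 KB, so the
     * allocation fails: external fragmentation.                            */
    #include <stdio.h>

    struct hole { int start_kb; int size_kb; };

    static int first_fit(struct hole *holes, int n, int request_kb)
    {
        for (int i = 0; i < n; i++)
            if (holes[i].size_kb >= request_kb)
                return holes[i].start_kb;     /* found a big enough hole */
        return -1;                            /* no single hole fits     */
    }

    int main(void)
    {
        /* Left-over holes between resident processes. */
        struct hole holes[] = { {100, 80}, {400, 120}, {900, 100} };
        int total = 80 + 120 + 100;           /* 300 KB free in total    */
        int where = first_fit(holes, 3, 180);
        printf("total free = %d KB, 180 KB request -> %s\n",
               total, where < 0 ? "fails (fragmented)" : "fits");
        return 0;
    }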

    5. TLB

    The standard solution to this problem (the extra memory access needed to consult the page table) is to use a special, small, fast-lookup hardware cache, called a translation look-aside buffer (TLB). The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast; the hardware, however, is expensive. Typically, the number of entries in a TLB is small, often numbering between 64 and 1,024.
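    The following is an added software sketch of the associative lookup, not how a real TLB is built: each entry holds a key (virtual page number) and a value (frame number), and a lookup compares the presented page number against every valid entry; real hardware performs all these comparisons in parallel.

    /* Added sketch: software model of the TLB's key/value lookup. */
    #include <stdbool.h>
    #include <stdio.h>

    #define TLB_ENTRIES 64

    struct tlb_entry {
        bool     valid;
        unsigned key;     /* tag: virtual page number */
        unsigned value;   /* cached physical frame    */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Returns true on a TLB hit and stores the frame in *frame. */
    static bool tlb_lookup(unsigned page, unsigned *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {       /* hardware does this in parallel */
            if (tlb[i].valid && tlb[i].key == page) {
                *frame = tlb[i].value;                /* hit: return the value field   */
                return true;
            }
        }
        return false;                                 /* miss: walk the page table     */
    }

    int main(void)
    {
        tlb[0] = (struct tlb_entry){ .valid = true, .key = 5, .value = 42 };
        unsigned frame;
        if (tlb_lookup(5, &frame))
            printf("page 5 -> frame %u (TLB hit)\n", frame);
        if (!tlb_lookup(9, &frame))
            printf("page 9: TLB miss, consult the page table\n");
        return 0;
    }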

    6. What is critical section?

    Each process has a segment of code called the critical section, in which the process may be changing variables, updating tables, writing a file and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section. Each process must request permission to enter the CS; the section of code implementing this request is called the entry section. The CS is followed by the exit section. The remaining code is the remainder section. (A short mutex-based sketch of these sections follows the requirements below.)

    What are the requirements to be satisfied by a solution to the critical section problem?

    Following are the requirements to be satisfied by a solution to the critical section problem:

    a) Mutual exclusion: If a process is executing in its critical section, then no other process can be executing in its critical section.

    b) Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.


    c) Bounded waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. This bound prevents starvation of any single process.
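    A minimal added sketch of the entry and exit sections using a pthread mutex (one possible realisation, not the only one): the lock gives mutual exclusion, while progress and bounded waiting depend on how the lock implementation queues waiters.

    /* Added sketch: entry and exit sections realised with a pthread mutex. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_table = 0;            /* data updated in the CS */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);      /* entry section          */
            shared_table++;                 /* critical section       */
            pthread_mutex_unlock(&lock);    /* exit section           */
            /* remainder section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_table = %d\n", shared_table);
        return 0;
    }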

    7. What are the different methods for handling deadlocks?

    1. Deadlock detection and recovery: Allow the system to enter a deadlock state, detect it and then recover.

    2. Deadlock prevention: Use a protocol to ensure that the system will never enter a deadlock state.

    3. Deadlock avoidance: Avoid deadlock by careful resource scheduling.

    9. Producer-Consumer Problem Using Semaphores

    The problem: there is a set of resource buffers shared by producer and consumer threads. The producer inserts resources into the buffer set (output, disk blocks, memory pages, processes, etc.; whatever is generated by the producer) and the consumer removes them. There is no serialization of one behind the other: the tasks are independent (easier to think about), and the buffer set allows each to run without an explicit handoff.

    Use three semaphores:

    mutex: mutual exclusion to the shared set of buffers (binary semaphore)
    empty: count of empty buffers (counting semaphore)
    full: count of full buffers (counting semaphore)

    The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex. The semaphore 'full' is used for counting the number of slots in the buffer that are full, 'empty' for counting the number of slots that are empty, and 'mutex' to make sure that the producer and consumer do not access the modifiable shared section of the buffer simultaneously.

    Initialization:

    Set full buffer slots to 0, i.e., semaphore full = 0.
    Set empty buffer slots to N, i.e., semaphore empty = N.
    To control access to the critical section, set mutex to 1, i.e., semaphore mutex = 1.

    Producer()
        WHILE (true)
            produce-Item();
            P(empty);
            P(mutex);
            enter-Item();
            V(mutex);
            V(full);

    Consumer()
        WHILE (true)
            P(full);
            P(mutex);
            remove-Item();
            V(mutex);
            V(empty);
            consume-Item(Item);
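    A runnable version of the same scheme is sketched below as an added illustration, using POSIX semaphores and pthreads; the buffer size N = 8, the integer items, and the fixed number of items produced are assumptions made for the example. sem_wait corresponds to P and sem_post to V.

    /* Added sketch of the producer-consumer solution with POSIX semaphores. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                      /* number of buffer slots (assumed)    */

    static int buffer[N];
    static int in = 0, out = 0;      /* next free slot / next full slot     */
    static sem_t empty, full, mutex;

    static void *producer(void *arg)
    {
        for (int item = 1; item <= 20; item++) {
            sem_wait(&empty);                 /* P(empty): wait for a free slot */
            sem_wait(&mutex);                 /* P(mutex): enter the buffer     */
            buffer[in] = item;                /* enter-Item()                   */
            in = (in + 1) % N;
            sem_post(&mutex);                 /* V(mutex)                       */
            sem_post(&full);                  /* V(full): one more full slot    */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&full);                  /* P(full): wait for a full slot  */
            sem_wait(&mutex);                 /* P(mutex)                       */
            int item = buffer[out];           /* remove-Item()                  */
            out = (out + 1) % N;
            sem_post(&mutex);                 /* V(mutex)                       */
            sem_post(&empty);                 /* V(empty): one more empty slot  */
            printf("consumed %d\n", item);    /* consume-Item()                 */
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&empty, 0, N);               /* empty = N                      */
        sem_init(&full, 0, 0);                /* full = 0                       */
        sem_init(&mutex, 0, 1);               /* mutex = 1                      */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }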

    10. What are the necessary conditions for deadlocks?

    1. Mutual exclusion: Only one process at a time can use the resource.


    2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.

    3. No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.

    4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

    10. Deadlock Avoidance

    It is a method to avoid deadlock by careful resource scheduling. This approach to the deadlock problem anticipates deadlock before it actually occurs: it employs an algorithm to assess the possibility that deadlock could occur and acts accordingly.

    If the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by being careful when resources are allocated. Perhaps the most famous deadlock avoidance algorithm is the Banker's algorithm, so named because the process is analogous to that used by a banker in deciding if a loan can be safely made.

    Banker's Algorithm

    In this analogy:

    Customers = processes
    Units     = resources (say, tape drives)
    Banker    = operating system

    Customer   Used   Max
    A          0      6
    B          0      5
    C          0      4
    D          0      7
    Available units = 10

    Fig. 1

    In the above figure, we see four customers, each of whom has been granted a number of credit units. The banker reserved only 10 units rather than 22 units to service them. At a certain moment, the situation becomes:

    Customer   Used   Max
    A          1      6
    B          1      5
    C          2      4
    D          4      7
    Available units = 2

    Fig. 2

    Safe state: The key to a state being safe is that there is at least one way for all customers to finish. In this analogy, the state of Fig. 2 is safe because, with 2 units left, the banker can delay any request except C's, thus letting C finish and release all four of its resources. With four units in hand, the banker can let either D or B have the necessary units, and so on.

    Unsafe state: Consider what would happen if a request from B for one more unit were granted in Fig. 2. We would have the following situation:

    Customer   Used   Max
    A          1      6
    B          2      5
    C          2      4
    D          4      7
    Available units = 1

    This is an unsafe state.

    If all the customers, namely A, B, C, and D, asked for their maximum loans, then the banker could not satisfy any of them, and we would have a deadlock. It is important to note that an unsafe state does not imply the existence, or even the eventual existence, of a deadlock. What an unsafe state does imply is simply that some unfortunate sequence of events might lead to a deadlock. The Banker's algorithm is thus to consider each request as it occurs and see if granting it leads to a safe state. If it does, the request is granted; otherwise, it is postponed until later.
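    The safety test at the heart of the Banker's algorithm can be sketched for a single resource type as follows (an added illustration using the figures above; the function and variable names are assumptions): repeatedly find a customer whose remaining need fits within the available units, let it finish, and reclaim its loan. The state is safe exactly when every customer can be finished this way.

    /* Added sketch: safety check for a single resource type, using the
     * Used/Max figures from the tables above.                             */
    #include <stdbool.h>
    #include <stdio.h>

    #define CUSTOMERS 4

    static bool is_safe(int used[], int max[], int available)
    {
        bool done[CUSTOMERS] = { false };
        for (int finished = 0; finished < CUSTOMERS; ) {
            bool progress = false;
            for (int i = 0; i < CUSTOMERS; i++) {
                if (!done[i] && max[i] - used[i] <= available) {
                    available += used[i];     /* customer i finishes, returns its loan */
                    done[i] = true;
                    finished++;
                    progress = true;
                }
            }
            if (!progress)
                return false;                 /* nobody can finish: unsafe state */
        }
        return true;
    }

    int main(void)
    {
        int max[CUSTOMERS]  = { 6, 5, 4, 7 };           /* A, B, C, D             */
        int fig2[CUSTOMERS] = { 1, 1, 2, 4 };           /* Fig. 2: 2 units free   */
        int bad[CUSTOMERS]  = { 1, 2, 2, 4 };           /* after B's extra unit   */
        printf("Fig. 2 state: %s\n", is_safe(fig2, max, 2) ? "safe" : "unsafe");
        printf("After B +1 : %s\n", is_safe(bad, max, 1) ? "safe" : "unsafe");
        return 0;
    }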

    11. Explain the basic method of paging.

    Physical memory is broken into fixed-sized blocks called frames. Logical memory is also divided into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.

    [Figure: logical memory, page table and physical memory]
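    As an added illustration of the translation step, the sketch below assumes 1 KB pages and reuses the page-table values that appear in the figure (pages 0-3 mapped to frames 1, 4, 3 and 7): the page number indexes the table and the offset is appended to the frame's base address.

    /* Added sketch: split a logical address into p and d, look up the frame,
     * and form the physical address.  Page size is an assumption.           */
    #include <stdio.h>

    #define PAGE_SIZE   1024u                 /* assumed: 1 KB pages          */
    #define OFFSET_BITS 10                    /* log2(PAGE_SIZE)              */

    static unsigned page_table[] = { 1, 4, 3, 7 };   /* pages 0-3 -> frames   */

    static unsigned translate(unsigned logical)
    {
        unsigned p = logical >> OFFSET_BITS;          /* page number (index)  */
        unsigned d = logical & (PAGE_SIZE - 1);       /* page offset          */
        unsigned frame = page_table[p];               /* base frame from table */
        return frame * PAGE_SIZE + d;                 /* physical address     */
    }

    int main(void)
    {
        unsigned logical = 2 * PAGE_SIZE + 37;        /* page 2, offset 37    */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        return 0;
    }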


    What hardware support is required to implement paging?

    Each operating system has its own methods for storing page tables. Most allocate a page table for each process. A pointer to the page table is stored with the other register values in the process control block.

    The hardware implementation of the page table can be done in several ways. In the simplest case, the page table is implemented as a set of dedicated registers. These registers should be built with very high-speed logic to make the paging-address translation efficient.

    The use of registers for the page table is satisfactory if the page table is reasonably small. For other cases, the page table is kept in main memory and a page-table base register (PTBR) points to the page table. Changing page tables requires changing only one register, substantially reducing context-switch time.

    However, the standard solution to this problem is to use a special, small, fast-lookup hardware cache called a translation look-aside buffer (TLB). The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key and a value. Some TLBs store address-space identifiers (ASIDs) in each entry of the TLB. An ASID uniquely identifies each process and is used to provide address-space protection for that process. When the TLB attempts to resolve virtual page numbers, it ensures that the ASID for the currently running process matches the ASID associated with the virtual page.

    [Figure: logical pages 0-3 are mapped through page-table entries 1, 4, 3 and 7 to frames of physical memory; a logical address is split into page number p and offset d]


    12. Hashed Page Tables

    A common approach for handling address spaces larger than 32 bits is to use a hashed page table, with the hash value being the virtual page number. Each entry in the hash table contains a linked list of elements that hash to the same location (to handle collisions). Each element consists of three fields: (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element in the linked list.

    The algorithm works as follows: the virtual page number in the virtual address is hashed into the hash table. The virtual page number is compared with field 1 of the first element in the linked list. If there is a match, the corresponding page frame (field 2) is used to form the desired physical address. If there is no match, subsequent entries in the linked list are searched for a matching virtual page number. This scheme is shown in Figure 8.16.

    A variation of this scheme that is favorable for 64-bit address spaces has been proposed. This variation uses clustered page tables, which are similar to hashed page tables except that each entry in the hash table refers to several pages (such as 16) rather than a single page. Therefore, a single page-table entry can store the mappings for multiple physical-page frames.



    Clustered page tables are particularly useful for sparse address spaces, where memory references are noncontiguous and scattered throughout the address space.
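    An added sketch of the hashed lookup described above (the table size, field names and the simple modulo hash are assumptions): the virtual page number selects a bucket, and the chain at that bucket is walked until field 1 matches.

    /* Added sketch: hashed page table lookup with chained buckets. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define TABLE_SIZE 1024u

    struct hpt_element {
        unsigned long       vpn;      /* (1) virtual page number           */
        unsigned long       frame;    /* (2) mapped page frame             */
        struct hpt_element *next;     /* (3) next element that hashed here */
    };

    static struct hpt_element *hash_table[TABLE_SIZE];

    static bool hpt_lookup(unsigned long vpn, unsigned long *frame)
    {
        unsigned long bucket = vpn % TABLE_SIZE;              /* hash the VPN  */
        for (struct hpt_element *e = hash_table[bucket]; e != NULL; e = e->next) {
            if (e->vpn == vpn) {                              /* field 1 match */
                *frame = e->frame;                            /* use field 2   */
                return true;
            }
        }
        return false;                                         /* page fault    */
    }

    int main(void)
    {
        static struct hpt_element e = { .vpn = 0x12345, .frame = 77, .next = NULL };
        hash_table[0x12345 % TABLE_SIZE] = &e;
        unsigned long f;
        if (hpt_lookup(0x12345, &f))
            printf("vpn 0x12345 -> frame %lu\n", f);
        return 0;
    }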

    Inverted Page Tables

    Usually, each process has an associated page table. The page table has one entry for each page that the process is using (or one slot for each virtual address, regardless of the latter's validity). This table representation is a natural one, since processes reference pages through the pages' virtual addresses. The operating system must then translate this reference into a physical memory address. Since the table is sorted by virtual address, the operating system is able to calculate where in the table the associated physical address entry is and to use that value directly. One of the drawbacks of this method is that each page table may consist of millions of entries. These tables may consume large amounts of physical memory just to keep track of how other physical memory is being used. To solve this problem, we can use an inverted page table.

    An inverted page table has one entry for each real page (or frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page. Thus, only one page table is in the system, and it has only one entry for each page of physical memory. Figure 8.17 shows the operation of an inverted page table. Compare it with Figure 8.7, which depicts a standard page table in operation. Inverted page tables often require that an address-space identifier (Section 8.4.2) be stored in each entry of the page table, since the table usually contains several different address spaces mapping physical memory. Storing the address-space identifier ensures that a logical page for a particular process is mapped to the corresponding physical page frame. Examples of systems using inverted page tables include the 64-bit UltraSPARC and PowerPC.

    To illustrate this method, we describe a simplified version of the inverted page table used in the IBM RT. Each virtual address in the system consists of a triple: <process-id, page-number, offset>.
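    To make the lookup direction concrete, here is an added sketch (the sizes and names are assumptions): the table has one entry per physical frame, translation searches for the pair of process id and virtual page number, and the index at which the pair is found is itself the frame number. Real implementations hash this search rather than scanning linearly.

    /* Added sketch: inverted page table with one entry per physical frame. */
    #include <stdio.h>

    #define FRAMES 256

    struct ipt_entry {
        int           pid;    /* owning process (address-space identifier) */
        unsigned long vpn;    /* virtual page stored in this frame         */
        int           used;   /* entry valid?                              */
    };

    static struct ipt_entry ipt[FRAMES];

    /* Returns the frame number, or -1 if the pair is not resident (page fault). */
    static int ipt_lookup(int pid, unsigned long vpn)
    {
        for (int frame = 0; frame < FRAMES; frame++)   /* linear search; real systems hash */
            if (ipt[frame].used && ipt[frame].pid == pid && ipt[frame].vpn == vpn)
                return frame;
        return -1;
    }

    int main(void)
    {
        ipt[42] = (struct ipt_entry){ .pid = 7, .vpn = 0x1000, .used = 1 };
        printf("pid 7, vpn 0x1000 -> frame %d\n", ipt_lookup(7, 0x1000));
        return 0;
    }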