DESCRIPTION

Theory related to OS : It Includes: 1. Unit I (COMPONENTS OF COMPUTER SYSTEM) 2. Unit II (OPERATING SYSTEM STRUCTURE) 3. Unit III (PROCESS MANAGEMENT) 4. Unit IV (MEMORY MANAGEMENT) 5. Unit V (FILE SYSTEM) 6. Unit VI (INPUT OUTPUT SYSTEM)

TRANSCRIPT


OPERATING SYSTEMS.

SY B.Sc. I.T.

Author: Fahad Shaikh.


Unit - I: COMPONENTS OF COMPUTER SYSTEM

[Figure: components of a computer system, showing the users, the system and application programs, the operating system and the computer hardware]

The above diagram gives a view of the components of a computer system, which can be roughly divided into four components: the hardware, the operating system, the application programs and the users.

The hardware consists of the CPU, memory and I/O devices, which provide the basic computing resources. The application programs include programs such as word processors, compilers and web browsers. The operating system controls and co-ordinates the use of the hardware among the various application programs for the various users. The operating system provides the means for proper use of the resources in the operation of the computer system.

The two basic goals of an operating system are convenience and efficiency.

TYPES OF SYSTEMS:

1) Mainframe Systems:

Mainframe computer systems were the first computers, used to solve many commercial and scientific applications. They are divided into three types:

i) Batch systems
ii) Multi programmed systems
iii) Time sharing systems



i) Batch systems

Batch systems consisted of large machines in which the input devices were card readers and tape drives. The output devices were line printers, tape drives and card punches. The user did not interact directly with the computer system; the user prepared a job and submitted it to the computer operator. The job was usually in the form of punched cards.

The operating system in these early systems was very simple. Its major task was to transfer control automatically from one job to the next. The operating system was always resident in memory. In this execution environment the CPU is often idle because of the slow speeds of the peripheral devices; hence the major disadvantage of this system was inefficient use of the CPU.

ii) Multi programmed systems

[Figure: memory layout for a multiprogrammed system, with the operating system and Jobs 1 to 4 resident in memory]

Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has at least one job to execute. The operating system keeps many jobs in memory at a time and picks a job to execute. In case the job has to wait for some task (such as I/O), the operating system switches to another job. Hence the CPU is never idle.

[Figure: memory layout showing the operating system and a user program]


Multiprogramming is the first instance where the operating system must make decisions for the users. Hence multiprogrammed operating systems are more sophisticated.

iii) Time sharing operating system /Multitasking system /Interactive system

A time-shared operating system allows many users to share the computer simultaneously. Since each action or command in a time-shared system is very short, only a little CPU time is needed for each user. As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to him, even though it may be shared among many users.

A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared computer. Time-sharing operating systems are more complex than multiprogrammed operating systems because they need to provide protection, a way to handle the file system, job synchronization, communication and a deadlock-free environment.

2) Desktop System:

Desktop systems during their initial stages did not have features to protect the operating system from user programs. Hence PC operating systems were neither multiuser nor multitasking. Since every user has all the resources to himself, efficiency is not a major concern; therefore the main goal of such systems was maximizing user convenience and responsiveness. With the advent of networks these systems needed protection, so this feature was also added.

3) Multiprocessor System:

Multiprocessor systems (parallel systems or tightly coupled systems) have more than one processor in close communication, sharing the computer bus, the clock, as well as memory and peripheral devices.

Multiprocessor systems have three main advantages:

I) Increased Throughput: By increasing the number of processors, more work is done in less time. The speed-up ratio with N processors is, however, not equal to N, because of the overhead in keeping all the parts working correctly; sharing of resources decreases the speed-up ratio further.

II) Economy: Multiprocessor systems are more economical than multiple single-processor systems because they share peripherals, memory and power supplies.


III) Increased Reliability: If functions (work) are properly distributed among several processors, then the failure of one processor will not halt the system but only slow it down.

Suppose we have 10 processors and one fails; then each of the remaining processors continues the processing, and overall performance may degrade by only 10%. The ability of a system to provide service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are also called fault tolerant.

Multiprocessor systems can be realized in the following ways:

Tandem system:

This system uses both hardware and software duplication to ensure continuous operation even in case of failures. The system consists of two identical processors, each having its own local memory; the processors are connected by a bus. One processor is the primary and the other is the backup. Two copies of each process are kept, one on the primary and one on the backup. At fixed intervals of time the state information of each process is copied from the primary to the backup. If a failure is detected, the backup copy is activated and restarted from the most recent checkpoint.

The drawback of this system is that it is expensive.

Symmetric multiprocessing system (SMP):

[Figure: symmetric multiprocessing architecture, with several CPUs sharing a common memory]

In symmetric multiprocessing each processor runs an identical copy of the operating system, and these copies communicate with each other as needed. There is no master-slave relationship between processors; all processors are peers. The benefit of this model is that many processes can run simultaneously. The problem with this system is that one processor (CPU) may sit idle while another is overloaded. This can be avoided if the processors share certain data structures. A multiprocessor system of this form allows processes and resources to be shared properly among the various processors.

Asymmetric multiprocessing:

[Figure: asymmetric multiprocessing architecture, with a master CPU controlling several slave CPUs]

In asymmetric multiprocessing each processor is assigned a specific task. A master processor controls the system; the other processors either depend on the master for instructions or have predefined tasks. This scheme defines a master-slave relationship: the master processor schedules and allocates work to the slave processors.

The difference between symmetric and asymmetric multiprocessing may be the result of either hardware or software. Special hardware can differentiate the multiple processors, or the software can be written to allow only one master and multiple slaves.

4) Distributed System:

Distributed systems depend on networking for their functionality; they share computational tasks and provide a rich set of features to users. Networks are of various types; the type may depend on the protocols used, the distances between the nodes and the transport media. TCP/IP is the most common network protocol, and most operating systems support it.

A LAN exists within a room or a building; a WAN exists between cities, countries and so on. The different transport media include copper wires, fiber optics, satellite links and radio waves.

The following are the most common types of such systems:

1) Client server systems
2) Peer to peer systems


i) Client server system:

[Figure: general structure of a client-server system, with several clients connected to a server over a network]

The above diagram gives the general structure of a client-server system, in which server systems satisfy requests generated by client systems.

Server systems can be broadly categorized as compute servers and file servers.

Compute-server systems provide an interface to which clients can send requests to perform an action; in response, they execute the action and send the results back to the client.

File-server systems provide a file-system interface where clients can create, update, read, and delete files.

ii) Peer to peer system:

In this system the processors communicate with one another through various communication lines such as high-speed buses or telephone lines. These systems are usually referred to as loosely coupled systems.

The operating system designed for such a system is called a network operating system; it provides features such as file sharing across the network and a communication scheme that allows different processes on different computers to exchange messages.

5) Real Time System:

A real time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, otherwise the system will fail. A real time system functions correctly only if it returns the correct result within the time limit. Real time systems are of two types.


i) Hard Real Time System

ii) Soft Real Time System

A hard real time system guarantees that critical tasks are completed on time. In such a system all delays must be bounded. The use of secondary storage should be extremely limited, and most advanced operating system features are absent in such systems.

Soft real time systems are less restrictive than hard real time systems. In such a system a critical real time task gets priority over other tasks. Soft real time is easier to achieve and can be mixed with other types of systems, although these systems have more limited applications than hard real time systems.

These systems are useful in multimedia, virtual reality and advanced scientific projects.

6) Hand Held System:

Hand held systems include personal digital assistants (PDAs) as well as cellular phones. These devices have small size, little memory, slow processors and small screens.

Because of the small amount of memory, the operating system and the applications must manage memory efficiently. Fast processors are not included in these devices because a fast processor would require more power, and hence more frequent recharging or replacement of the battery; hence they are designed to utilize the processor efficiently. Since the displays of these devices are very small, reading or browsing web pages becomes difficult.

7) Clustered System:

Clustered systems are composed of two or more individual systems coupled together. Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster nodes, and each node can monitor one or more of the other nodes. If a machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine.

The most common forms of clustering are asymmetric clustering and symmetric clustering.

In asymmetric clustering, one machine is in hot-standby mode while the other is running the application. The hot-standby machine does nothing but monitor the active server; if that server fails, the hot-standby machine becomes the active server.


In symmetric clustering mode, two or more hosts are running applications and are also monitoring each other. This mode is more efficient than asymmetric clustering.

Unit - II: OPERATING SYSTEM STRUCTURE

Process Management:

A process is a program in execution. A process needs certain resources, including CPU time,memory, files, and I/O devices, to accomplish its task.

The operating system is responsible for the following activities in connection with process management:

I. Process creation and deletion
II. Process suspension and resumption
III. Provision of mechanisms for:
a) Process synchronization
b) Process communication
c) Deadlock handling

A process is a unit of work in a system. Such a system consists of a collection of processes, some of which are operating system processes and the rest user processes; all these processes execute concurrently by multiplexing the CPU among them.

Main Memory Management:

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and input output devices.

Main memory is a volatile storage device; it loses its contents in case of system failure. The operating system is responsible for the following activities in connection with memory management:


I. Keep track of which parts of memory are currently being used and by whom.
II. Decide which processes to load when memory space becomes available.
III. Allocate and de-allocate memory space as needed.

File Management:

File management is one of the most visible components of an operating system. For convenient use of the computer system, the operating system provides a uniform logical view of information storage: it hides the physical properties of its storage devices behind a logical storage unit called the file.

A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.

The operating system is responsible for the following activities in connection with file management:

I. File creation and deletion.
II. Directory creation and deletion.
III. Support of primitives for manipulating files and directories.
IV. Mapping files onto secondary storage.
V. File backup on stable (nonvolatile) storage media.

Input output system management:

One of the purposes of the operating system is to hide the specifics of the hardware devices from the user; only the device driver knows the specifics of the device to which it is assigned. In UNIX, the specifics of input output devices are hidden from the bulk of the operating system by the input output subsystem.

The input output system consists of:
I. A buffer caching system.
II. A general device driver interface.
III. Drivers for specific hardware devices.

Secondary Storage Management:


Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.

Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.

The operating system is responsible for the following activities in connection with disk management:

I. Free space management.
II. Storage allocation.
III. Disk scheduling.

Command Interpreter System:

One of the most important system programs for an operating system is the command interpreter, which is the interface between the user and the operating system. Some operating systems include the command interpreter in the kernel; other operating systems, such as UNIX and MS-DOS, treat the command interpreter as a special program that is running when a job is initiated.

When a new job is started in a batch system, a program that reads and interprets control statements is executed automatically. These programs are called the command-line interpreter or the shell.

Many commands are given to the operating system by control statements, which deal with:

I. Process creation and management.
II. Input output handling.
III. Secondary storage management.
IV. Main memory management.
V. File system access.
VI. Protection.
VII. Networking.

Operating System Services:

An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs. The services are:


1. Program Execution: The system must be able to load a program into memory and run that program. The program must be able to end its execution either normally or abnormally.

2. Input Output Operations: A running program may require input output, which may involve a file or an input output device.

3. File System Manipulation: Programs need to create and delete files and also to read and write files.

4. Communication: One process may need to exchange information with another process, either on the same computer or on different computer systems; this is handled by the operating system through shared memory or message passing.

5. Error Detection: When a program is executing, errors may occur in the CPU, memory, input output devices or the user program. For each type of error the operating system should take proper action to ensure correct functioning of the system.

6. Resource Allocation: When multiple users are using a system, it is the responsibility of the operating system to allocate and de-allocate the various resources of the system.

7. Accounting: The operating system keeps track of the use of computer resources by each user. This record may be used for accounting.

8. Protection: Protection involves ensuring that all access to system resources is controlled. The system should also provide security from outsiders.

System Calls:

System calls provide an interface between a process and the operating system. These calls are generally available as assembly language instructions; apart from assembly language, higher-level languages such as C and C++ can be used for writing system calls. System calls are generated in the following way.

Consider writing a simple program to read data from one file and copy it to another file. Once the file names are obtained, the program must open the input file and create the output file; each of these operations requires a system call. When the program tries to open the input file, it may find that no such file exists, in which case it displays an error message through a system call and terminates abnormally (another system call).
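The sequence of calls in this copy example can be sketched in C on a UNIX-like system as follows (a minimal sketch only: the buffer size, permission bits and error handling are assumptions made for illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s input output\n", argv[0]);
            exit(1);
        }

        int in = open(argv[1], O_RDONLY);            /* system call: open the input file */
        if (in < 0) {
            perror("open");                          /* no such file: error message ... */
            exit(1);                                 /* ... and abnormal termination */
        }

        int out = creat(argv[2], 0644);              /* system call: create the output file */
        if (out < 0) {
            perror("creat");
            exit(1);
        }

        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)  /* system call: read from the input */
            write(out, buf, (size_t)n);              /* system call: write to the output */

        close(in);                                   /* system calls: close both files */
        close(out);
        return 0;                                    /* normal termination */
    }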


Three general methods are used to pass parameters between a running program and the operating system:

• Pass the parameters in registers.
• Store the parameters in a table in memory, and pass the address of the table as a parameter in a register.
• Push (store) the parameters onto the stack by the program, and pop them off the stack by the operating system.

System calls can be grouped into five categories:

1. Process control
2. File management
3. Device management
4. Information maintenance
5. Communications

1. Process control: The following are the system calls with respect to process control

i. End, Abort: A running process may end normally, or due to an error condition the process may be aborted.

ii. Load, Execute: A process may want to load and execute another program.

iii. Create process, Terminate process: In a multiprogramming environment new processes are created as well as terminated.

iv. Get process attributes, Set process attributes: When several processes are executing we can control their execution. This control requires the ability to determine and reset the attributes of a process.

v. Wait for time: After creating new processes, the parent process may need to wait for them (the child processes) to finish their execution.

vi. Wait event, Signal event: In case processes are sharing some data, a particular process may wait for a certain amount of time or wait for some specific event.

vii. Allocate memory, Free memory: When a process is created or loaded it is allocated memory space. When the process completes its execution it is destroyed by the operating system and its memory space is freed.
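On a UNIX-like system these process-control calls correspond to fork, exec, wait and exit. The sketch below shows the usual pattern (the program being launched, ls, is only an illustrative choice):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                       /* create process: the child is a duplicate of the parent */

        if (pid < 0) {
            perror("fork");                       /* process creation failed */
            exit(1);
        } else if (pid == 0) {
            execlp("ls", "ls", (char *)NULL);     /* load and execute another program in the child */
            perror("execlp");                     /* reached only if exec fails */
            exit(1);
        } else {
            int status;
            wait(&status);                        /* the parent waits for the child to finish */
            printf("child %d terminated\n", (int)pid);
        }
        return 0;                                 /* normal termination */
    }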


2. File Management: The system calls with respect to file management are:

i. Create file, Delete file
ii. Open file, Close file
iii. Read, Write, Reposition the file
iv. Get file attributes, Set file attributes

We need to create and delete files, for which system calls are generated. After creating a file we need to open it and perform read or write operations or reposition it; finally we need to close the file. Each of these operations requires a system call.

The various file attributes, such as file name, file type, protection and accounting information, can be read and set using the two system calls get file attributes and set file attributes.

3. Device Management: The system calls related to device management are:

i. Request device, Release device
ii. Read, Write, Reposition
iii. Get device attributes, Set device attributes
iv. Logically attach or detach devices

A process may need some resources; if the resources are available they can be granted, otherwise the process must wait. After getting a resource the process uses it and finally releases it.

We can also get and set device attributes through system calls.

4. Information Maintenance: The various system calls related to information maintenance are:

i. Get time or date, Set time or date
ii. Get system data, Set system data
iii. Get process, file, or device attributes
iv. Set process, file, or device attributes

We can get the current time and date and set them through system calls. Apart from this, we can obtain information such as the number of current users, the operating system version and the amount of free memory through system calls. We can also get and set process attributes through system calls.
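On a UNIX-like system a few of these information-maintenance calls are available directly through the C library, as in this small sketch (the particular fields printed are chosen only for illustration):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/utsname.h>

    int main(void)
    {
        time_t now = time(NULL);                         /* get time and date */
        printf("current time: %s", ctime(&now));

        printf("process id  : %d\n", (int)getpid());     /* get a process attribute */

        struct utsname sys;
        if (uname(&sys) == 0)                            /* get system data: OS name and version */
            printf("system      : %s %s\n", sys.sysname, sys.release);

        return 0;
    }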


5. Communication: The various system calls related to communication are:

I. Create, Delete communication connection
II. Send, Receive messages
III. Transfer status information
IV. Attach or Detach remote devices

There are two common models of communication:

1) Message passing model
2) Shared memory model

[Figure: the two communication models. In message passing, processes A and B exchange messages through the kernel; in shared memory, processes A and B share a common region of memory.]

In the message passing model, information is exchanged through an interprocess communication facility provided by the operating system. Before communication can take place a connection must be opened, and the name of the other communicator must be known. After identification, the identifiers are passed to the general-purpose open and close calls provided by the file system, or to specific open connection and close connection system calls, depending on the system. The receiving process must give its permission for communication. Once the connection is established, the processes exchange messages by read message and write message system calls, and a close connection call terminates the communication.
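A simplified, concrete instance of the message passing model on a UNIX-like system is a pipe between a parent and a child process. This sketch is only an illustration of the send/receive pattern; a real messaging facility would normally use mailboxes, message queues or sockets:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                                  /* open the connection (the pipe) */

        if (fork() == 0) {                         /* child process: the receiver */
            char buf[64];
            close(fd[1]);                          /* the child only reads */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* receive message */
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            close(fd[0]);
        } else {                                   /* parent process: the sender */
            const char *msg = "hello";
            close(fd[0]);                          /* the parent only writes */
            write(fd[1], msg, strlen(msg));        /* send message */
            close(fd[1]);                          /* close the connection */
            wait(NULL);
        }
        return 0;
    }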


In the shared memory model, processes use map-memory system calls to gain access to regions of memory owned by other processes.

Processes may then exchange information by reading and writing data in these shared areas. The form of the data and the locations are determined by the processes and are not under the operating system's control.

Message passing is useful for exchanging small amounts of data. Shared memory allows maximum speed and convenience of communication; however, we need to deal with problems such as protection and synchronization.
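On POSIX systems the shared memory model is exposed through shm_open and mmap. The sketch below shows the idea; the object name /demo_shm and the fixed size are assumptions for illustration, and real code would add synchronization (for example a semaphore) around the shared region:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);   /* create or open the shared object */
        ftruncate(fd, 4096);                                       /* set its size */

        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,   /* map it into this address space */
                            MAP_SHARED, fd, 0);

        strcpy(region, "data visible to every process that maps /demo_shm");
        printf("wrote: %s\n", region);

        /* Another process would shm_open the same name, mmap it the
         * same way, and then read or modify the same bytes. */

        munmap(region, 4096);
        close(fd);
        shm_unlink("/demo_shm");                                   /* remove the object when done */
        return 0;
    }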

System Programs:

System programs provide a convenient environment for program development and execution. They can be divided into the following categories:

1. File Management: These programs create, delete, copy, rename and print files, as well as manipulate files and directories.

2. Status Information: Some programs provide information about the system regarding date, time, memory, number of users, etc.

3. File Modification: Several text editors may be available to create and modify the contents of files stored on disk or tape.

4. Programming Language Support: Compilers, assemblers and interpreters are provided to the user with the operating system.

5. Program Loading and Execution: Once a program is assembled or compiled, it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors and overlay loaders.

6. Communication: These programs provide the mechanism for creating virtual connections among processes, users and computer systems.


System structure:

1. Layered approach:

In the layered approach the operating system is broken up into a number of layers, each built on top of lower layers. The bottom layer is the hardware and the highest layer is the user interface. A typical operating system layer consists of data structures and a set of routines that can be invoked by higher-level layers.

The main advantage of the layered approach is modularity. The layers are selected in such a way that each layer uses functions and services of only lower-level layers, hence debugging becomes much easier. The design and implementation of the system are simplified when the system is broken down into layers.

The major difficulty with the layered approach involves the careful definition of the layers, because a layer can use only the layers below it. Another difficulty is that layered systems tend to be less efficient than other designs.

2. Micro Kernel Approach: It was seen that as the UNIX operating system expanded, the kernel became large and difficult to manage. Hence an approach called the microkernel approach was used, modularizing the kernel. This method structures the operating system by removing all non-essential components from the kernel and implementing them as system-level and user-level programs, which results in a smaller kernel. The main function of the microkernel is to provide a communication facility between the client program and the various services which are running in user space. Communication is provided by message passing.

The benefit of the microkernel approach is that the operating system can be easily extended. All new services are added in user space, hence the kernel need not be modified. Since the kernel is modified less often and is smaller, the resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, because most services run as user processes and not as kernel processes; if a service fails, the rest of the operating system remains intact.

3. System Design and Implementation:

i. Design Goals: The first problem in designing a system is to define the goals and specifications of the system. The requirements can be divided into two basic groups. From the user's point of view the system should be easy (convenient) to use, easy to learn, reliable, safe and fast. From the designer's point of view the system should be easy to design, implement and maintain.

ii. Mechanisms and Policies: Mechanisms determine how to do something, while policies determine what will be done. Policies may change from place to place and over time. A general mechanism is more desirable, since then a change in policy requires the redefinition of only certain parameters of the system.

iii. Implementation: After designing an operating system it must be implemented. It can be implemented either in assembly language or in higher-level languages such as C or C++.

The advantages of using higher-level languages are:
• The code can be written faster and is more compact.
• It is easier to port from one hardware platform to another. For example, MS-DOS was written in assembly language and hence is only available for the Intel family of processors, while the UNIX operating system, which was written in C, is available on different processors such as Intel, Motorola and UltraSPARC.


Unit - III: PROCESS MANAGEMENT

A process can be defined as a program in execution. Two essential elements of a process are the program code and a set of data associated with that code. At any given point in time, while the program is executing, the process can be characterized by a number of elements collectively called the process control block (PCB). The elements of the PCB are:

I. Identifier: Every process has a unique identifier to differentiate it from other processes.

II. State: It provides information regarding the current state of the process.

III. Priority: It gives the priority level of the process.

IV. Program Counter: It provides the address of the next instruction which is to be executed.

V. Memory Pointers: These pointers point to the memory locations containing the program code and data.

VI. Context Data: These are the data present in the registers of the processor while the process is executing.

VII. Input Output Status Information: It includes pending input output requests, the input output devices assigned to the process, etc.

VIII. Accounting Information: It includes the amount of processor time and clock time used, time limits and so on.
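The PCB is typically realized as a record (structure) inside the kernel. The following is a simplified sketch in C; the field names and sizes are chosen only for illustration, and real kernels keep many more fields:

    enum proc_state { NEW, READY, RUNNING, BLOCKED, EXIT };

    struct pcb {
        int              pid;              /* identifier */
        int              ppid;             /* identifier of the parent process */
        enum proc_state  state;            /* current state of the process */
        int              priority;         /* priority level */
        unsigned long    program_counter;  /* address of the next instruction */
        void            *code_base;        /* memory pointers: program code ... */
        void            *data_base;        /* ... and data */
        unsigned long    registers[16];    /* context data saved on a switch */
        int              open_files[16];   /* input output status information */
        unsigned long    cpu_time_used;    /* accounting information */
        struct pcb      *next;             /* link for ready/blocked queues */
    };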


Process States:

Two State Process Model

The above diagram gives the simplest, two-state process model. A process is either being executed by the processor or not being executed; so in this model a process may be in one of two states, running or not running. When the operating system creates a new process, it enters that process into the system in the not-running state. From time to time the currently running process is interrupted and the operating system selects another process to run; one process then switches from the running state to the not-running state while the other moves from not running to running.

Reasons for process creation

1) New batch job: The operating system is provided with a batch job control stream. When the operating system is prepared to take on new work, it reads the next sequence of job control commands.


2) Interactive login: A user at a terminal logs on to the system.

3) Created by the operating system to provide a service: The operating system can create a process on behalf of a user program to perform a function.

4) Created by an existing process: A user program can create a number of processes for the purpose of modularity.

Reasons for process termination

1. Normal completion.
2. Time limit exceeded.
3. Memory unavailable.
4. Bounds violation.
5. Protection error.
6. Arithmetic error.
7. Time overrun.
8. Input output failure.
9. Invalid instruction.
10. Privileged instruction.
11. Data misuse.
12. Operating system intervention.
13. Parent termination.
14. Parent request.

Five State Process Model


The various states of a process in this model are:

1. New: A process that has just been created and has not yet been admitted to the pool (queue) of executable processes by the operating system.

2. Ready: The process is prepared to execute and is waiting for the processor.

3. Running: The process which is currently being executed.

4. Blocked: A process that cannot execute until some event occurs (such as input output).

5. Exit: A process which has been released by the operating system, either because it halted or because it was aborted for some reason.

POSSIBLE TRANSITIONS:

The following are the possible transitions from one state to another:

I. Null → New: A new process is created to execute a program.

II. New → Ready: The operating system moves a process from the new state to the ready state when memory space is available, or there is room for a new process, so as to keep the number of processes roughly constant.

III. Ready → Running: A process moves from the ready state to the running state when the operating system selects it to run on the processor.

IV. Running → Exit: The currently running process is terminated or aborted.

V. Running → Ready: The most common reasons for this transition are:
a. A process exceeds its time limit.
b. The currently running process is preempted due to the arrival of a higher-priority process in the ready queue.
c. A process may itself release control of the processor.

VI. Running → Blocked: A process is put in the blocked state if it requests something for which it must wait. For example, a process may request a service from the operating system which the operating system is not prepared to provide immediately, or the process may wait for some input output operation.

VII. Blocked → Ready: A process in the blocked state is moved to the ready state when the event for which it has been waiting occurs.


VIII. Ready → Exit: A parent may terminate a child process at any time. Also, if a parent terminates, all child processes of that parent terminate.
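As a compact way to see these rules, the sketch below encodes the five states and a function that checks whether a move is one of the transitions listed above. It is a toy model for illustration only, not how any particular kernel represents process states:

    #include <stdbool.h>
    #include <stdio.h>

    enum state { ST_NEW, ST_READY, ST_RUNNING, ST_BLOCKED, ST_EXIT };

    /* Returns true if the five-state model allows moving from 'from' to 'to'. */
    static bool transition_allowed(enum state from, enum state to)
    {
        switch (from) {
        case ST_NEW:     return to == ST_READY;                       /* New -> Ready */
        case ST_READY:   return to == ST_RUNNING || to == ST_EXIT;    /* dispatch, or parent terminates child */
        case ST_RUNNING: return to == ST_READY || to == ST_BLOCKED
                              || to == ST_EXIT;                       /* time-out, wait, terminate */
        case ST_BLOCKED: return to == ST_READY;                       /* awaited event occurs */
        case ST_EXIT:    return false;                                /* terminal state */
        }
        return false;
    }

    int main(void)
    {
        printf("Running -> Blocked allowed? %s\n",
               transition_allowed(ST_RUNNING, ST_BLOCKED) ? "yes" : "no");
        printf("Blocked -> Running allowed? %s\n",
               transition_allowed(ST_BLOCKED, ST_RUNNING) ? "yes" : "no");
        return 0;
    }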

Process Description

In the above diagram we see a number of processes, each of which needs certain resources for its execution. Process P1 is running: it has control of two input output devices and occupies a part of main memory. Process P2 is also in main memory but is blocked, waiting for an input output device. Process Pn has been swapped out and is suspended.

The operating system controls the processes and manages the resources for them using control structures, which are divided into four categories: memory tables, input output tables, file tables and process tables.


Memory Tables: Memory tables are used to keep track of both main memory and virtual memory. The memory tables must include the following information:

i. The allocation of main memory to processes.
ii. The allocation of secondary memory to processes.
iii. Any protection attributes of blocks of main and virtual memory.
iv. Any information needed to manage virtual memory.

Input Output Tables: Input output tables are used by the operating system to manage the input output devices of the computer system. At any given time an input output device may be available or not available. Hence the operating system must know the status of each input output operation and also the location in main memory where the transfer is carried out.

File Tables: The operating system also maintains file tables, which provide information about the existence of files, their location on secondary memory, their current status and other attributes.


Process Tables: The operating system must maintain process tables to manage and control processes. In doing so the operating system must know where each process is located and the attributes of the process. The various process attributes, collectively also called the process control block, are grouped into three categories:

1) Process identification.
2) Processor state information.
3) Process control information.

1. Process identification:
Identifiers: Numeric identifiers may be stored in the process control block, including:
i) The identifier of this process.
ii) The identifier of the parent process.
iii) The user identifier.

2. Processor state information:
i) User-visible registers: A user-visible register is available to the user; there may be about 8 to 32 of these registers.
ii) Control and status registers: These registers are used to control the operation of the processor; they include the program counter, condition codes and status information.
iii) Stack pointers: Each process has one or more LIFO system stacks associated with it.

3. Process control information:
i) Scheduling and state information: This information is needed by the operating system to perform scheduling. It includes the process state, priority, scheduling-related information and the event being awaited.
ii) Data structuring: A process may be linked to other processes in a queue, ring or some other structure.
iii) Inter-process communication: Various flags, signals and messages may be associated with communication between two independent processes.
iv) Process privileges: Processes are granted privileges in terms of the memory that may be accessed and the types of instructions that may be executed.


v) Memory management: It includes pointers to segment and page tables.
vi) Resource ownership and utilization: Resources controlled by the process may be indicated.

Operations on Processes:

The processes in the system can execute concurrently, and they may be created and deleted dynamically. Hence the operating system must provide a mechanism for process creation and termination.

Process creation: A process may create several new processes through a create-process system call. The creating process is called the parent process, while the new processes are called the children of that process. Each of these new processes may in turn create new processes, forming a tree of processes. When a process creates a new process, two possibilities exist in terms of execution:

i) The parent continues to execute concurrently with its children.
ii) The parent waits until some or all of its children have terminated.

There are also two possibilities in terms of the address space of the new process:
(1) The child process is a duplicate of the parent process.
(2) The child process has a new program loaded into it.

Process termination: A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit system call. At that point, the process may return data to its parent process (via the wait system call). All the resources of the process, including memory, files and input output buffers, are de-allocated by the operating system.

A parent may terminate its children for the following reasons:

1) The child has exceeded its usage of some of the resources allocated to it.
2) The task assigned to the child is no longer required.


3) The parent is exiting; in such a case the operating system does not allow a child to continue if its parent terminates.

Co-operating Processes:

A process is a co-operating process if it can affect or be affected by other processes executing in the system. Co-operating processes offer several advantages:

• Information sharing: Several users may be interested in the same piece of information; we must provide an environment that allows concurrent access to these types of resources.

• Computation speed-up: We can break a task into smaller subtasks, each of which executes in parallel with the others.

• Modularity: We can construct the system by dividing the system functions into separate processes.

• Convenience: A user may have many tasks to work on at one time (editing, printing and compiling).

Inter process communication (IPC):

The inter process communication facility is the means by which processes communicate among themselves. Inter process communication provides a mechanism that allows processes to synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment.

Inter process communication is best provided by a message passing system. The function of a message system is to allow processes to communicate with one another without any shared memory. Communication among the user processes is achieved through the passing of messages. An inter process communication facility provides at least two operations: send and receive.

If processes P and Q want to communicate, they must send messages to and receive messages from each other through a communication link.

There are several methods for logically implementing a link and the send / receive operations:


a) Direct communication OR Indirect communication:

Direct communication: With direct communication, each process that wants to communicate must name the receiver or sender of the communication as follows:
Send (P, message) - send a message to process P
Receive (Q, message) - receive a message from process Q
With direct communication exactly one link exists between each pair of processes.

Indirect communication: In indirect communication the messages are sent to and received from mailboxes or ports, in which messages can be placed or removed. Two processes can communicate only if they share a mailbox. Communication is done in the following way:
Send (A, message) - send a message to mailbox A.
Receive (A, message) - receive a message from mailbox A.

In indirect communication a link may be associated with more than two processes, and more than one link may exist between each pair of communicating processes. The mailbox may be owned either by a process or by the operating system.

b) Synchronization: Communication between processes through message passing may be either blocking or non-blocking (synchronous or asynchronous).

i. Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.

ii. Non-blocking send: The sending process sends the message and resumes operation.

iii. Blocking receive: The receiver blocks until a message is available.

iv. Non-blocking receive: The receiver retrieves either a valid message or a null.

c) Buffering: Buffering can be implemented in three ways:

i. Zero capacity: The queue has maximum length zero; hence the link cannot have any messages waiting in it.


ii. Bounded capacity: The queue has finite length. If the link is full, the sender must block until space is available in the queue.

iii. Unbounded capacity: The queue has potentially infinite length, so the sender never blocks.

Mutual exclusion using messages:

    const int n = /* number of processes */;

    void P(int i)
    {
        message msg;
        while (true) {
            receive(box, msg);       /* take the token; blocks while the mailbox is empty */
            /* critical section */
            send(box, msg);          /* put the token back */
            /* remainder */
        }
    }

    void main()
    {
        create_mailbox(box);
        send(box, null);             /* the mailbox starts with a single (null) message */
        parbegin(P(1), P(2), ..., P(n));
    }

The above algorithm shows how we can use message passing to achieve mutual exclusion. A set of concurrent processes share a mailbox, which can be used by all processes to send and receive. The mailbox is initialized to contain a single message with null content. A process wishing to enter its critical section first attempts to receive a message; if the mailbox is empty, the process is blocked.

Once a process gets the message, it performs its critical section and then places the message back into the mailbox. If more than one process performs the receive operation concurrently, two cases arise:

• If there is a message, it is delivered to only one process and the others are blocked.
• If the message queue is empty, all the processes are blocked; when a message becomes available, only one blocked process is activated and given the message.


THREADS:

A thread is a lightweight process; it is a basic unit of CPU utilization and consists of a thread ID, a program counter, a register set and a stack. All the threads belonging to a process share its code section, data section and other operating system resources. If a process has multiple threads of control, it can do more than one task at a time.

The advantages of multithreading are:

I. Responsiveness: Multithreading may allow a program to continue running even if part of it is blocked performing a lengthy operation.

II. Resource sharing: Threads share the memory and the resources of the process to which they belong.

III. Economy: Creating, maintaining and switching a process is costly compared to creating, maintaining and switching a thread. For example, in Solaris creating a process is about 30 times slower than creating a thread.

IV. Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor.
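On UNIX-like systems these ideas are exposed through the POSIX threads (pthreads) library. The sketch below creates two threads that update data shared by the whole process; the counter and the mutex are illustrative choices, and the program is compiled with the -pthread option:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                       /* data shared by all threads of the process */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);             /* threads share data, so updates are protected */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);   /* creating a thread is much cheaper than fork() */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);                    /* wait for both threads to finish */
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);        /* 200000: both threads updated the shared data */
        return 0;
    }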


Types of Threads:

1. User threads.
2. Kernel threads.

User Threads: User threads are supported above the kernel and are implemented at the user level. All thread creation and scheduling is done in user space without kernel intervention; hence user-level threads are fast to create and easy to manage. The drawback of user-level threads is that if the kernel is single-threaded, any user-level thread performing a blocking system call will cause the entire process to block.

Kernel Threads: Kernel threads are supported directly by the operating system. The kernel performs thread creation, scheduling and management in kernel space. For this reason they are generally slower to create and harder to manage than user threads. Since the kernel is managing the threads, if a thread performs a blocking system call, the kernel can schedule another thread for execution.

Multithreading Models:

The three common multithreading models are:

1. Many to One.
2. One to One.
3. Many to Many.


Many to One:

The many-to-one model maps many user-level threads to one kernel thread. Thread management is done in user space, hence it is efficient; the disadvantage is that the entire process blocks if a thread makes a blocking system call. Since only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.

One to One:

The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model: in case a thread makes a blocking system call, only that particular thread is blocked while the others continue to execute. In a multiprocessor environment multiple threads can run in parallel. The only drawback of this model is that creating a user thread requires creating a corresponding kernel thread.


Many to Many:

The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may depend upon the particular application or machine. This model is better than the other two models: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

Threading Issues:

I. The fork and exec system calls: If one thread in a program calls the fork system call, the new process may duplicate all threads, or the new process may be single-threaded. If a thread invokes the exec system call, the program specified as a parameter to exec will replace the entire process.

II. Cancellation: Thread cancellation is the task of terminating a thread before it has completed. A thread which is to be cancelled is called the target thread. Thread cancellation occurs in two different ways:

i) Asynchronous cancellation: One thread immediately terminates the target thread.


ii) Deferred cancellation: The target thread periodically checks whether it should terminate.

III. Signal handling: A signal is used to notify a process that a particular event has occurred. A signal is generated by the occurrence of a particular event; whenever it is generated it must be delivered to a process, and once delivered it must be handled. A signal can be handled by one of two handlers:

I. A default signal handler.
II. A user-defined signal handler.

In order to deliver the signal there are a few options:
a) Deliver the signal to the thread to which the signal applies.
b) Deliver the signal to every thread in the process.
c) Deliver the signal to certain threads in the process.
d) Assign a specific thread to receive all signals for the process.

IV. Thread pools: The general idea behind a thread pool is to create a number of threads at process start-up and place them into a pool, where they sit and wait for work. The benefits of thread pools are:
i) We get fast service, since an existing thread is reused instead of a new one being created.
ii) A thread pool limits the number of threads.

V. Thread-specific data: Threads belonging to a process share the data of the process. However, each thread may need its own copy of certain data; such data is called thread-specific data.
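With POSIX threads, thread-specific data is provided by pthread_key_create, pthread_setspecific and pthread_getspecific. A small sketch follows; the per-thread id stored here is only an illustrative use:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_key_t key;                      /* one key, but a separate value per thread */

    static void *worker(void *arg)
    {
        int my_id = *(int *)arg;
        pthread_setspecific(key, &my_id);          /* store this thread's private copy */

        int *seen = pthread_getspecific(key);      /* later reads return only this thread's value */
        printf("thread sees id %d\n", *seen);
        return NULL;
    }

    int main(void)
    {
        pthread_key_create(&key, NULL);            /* no destructor needed for this sketch */

        int a = 1, b = 2;
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        pthread_key_delete(key);
        return 0;
    }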

CPU SCHEDULING

Basic concepts. Scheduling criteria. Scheduling algorithm. Multiprogramming algorithm.

CPU Scheduler:


Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the CPU scheduler (short-term scheduler).

CPU scheduling decisions may take place under the following four situations:

1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.

Scheduling under cases 1 and 4 is called non-preemptive; scheduling under cases 2 and 3 is called preemptive.

Dispatcher: The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

i. Switching context.
ii. Switching to user mode.
iii. Jumping to the proper location in the user program to restart that program.

The dispatcher should be as fast as possible because it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another is known as dispatch latency.

Scheduling criteria: The criteria used to compare the various scheduling algorithms are:

1. CPU utilization: The CPU must be kept as busy as possible. CPU utilization may range from 0 to 100%.

2. Throughput: The number of processes which are completed per unit time.

3. Turnaround time: The total time a process spends in the system: waiting to get into memory, waiting in the ready queue, executing on the CPU and doing input output.

4. Waiting time: The amount of time a process has been waiting in the ready queue.

5. Response time: The amount of time from when a request was submitted until the first response is produced, not the final output (used for time-sharing environments).


Scheduling Algorithms

1) First Come First Serve Scheduling (FCFS): This is a purely non-preemptive algorithm. In this scheme the process which requests the CPU first is allocated the CPU first. The implementation of the first come, first served policy is easily managed with a FIFO queue. The code for FCFS scheduling is simple to write and understand.

The disadvantage of the FCFS policy is that the average waiting time is often quite high compared to other algorithms.

Eg: Consider the following processes, arriving at time 0 in the order P1, P2, P3, with burst times in milliseconds:

Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart for the schedule is:

| P1                     | P2 | P3 |
0                        24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17

Convoy effect: In the FCFS scheme, if we have one big process which is CPU bound and many small processes which are input output bound, all the small processes wait for the one big process to release the CPU. This results in lower CPU utilization and is called the convoy effect. The FCFS algorithm is not suitable for time-sharing systems because it is non-preemptive.
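The arithmetic in such examples is mechanical, so it is easy to check with a few lines of C. This sketch computes the FCFS waiting times for the processes above (the burst times come from the example; everything else is illustrative):

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};                /* burst times of P1, P2, P3 from the example */
        int n = 3;
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %d ms\n", i + 1, wait);
            total_wait += wait;
            wait += burst[i];                    /* the next process waits for everything before it */
        }
        printf("average waiting time = %.2f ms\n", (double)total_wait / n);
        return 0;
    }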

2) Shortest Job First (SJF) Scheduling: In this scheme, when the CPU is available it is assigned to the process which has the smallest next CPU burst. If two processes have the same next CPU burst, FCFS is used to break the tie.

The SJF algorithm is optimal in the sense that it gives the minimum average waiting time for a given set of processes. The real difficulty with the SJF algorithm is knowing the length of the next CPU request.

The SJF algorithm may be either preemptive or non-preemptive. A preemptive SJF algorithm will preempt the currently executing process if a new process arrives in the ready queue whose CPU burst is shorter than the time left for the currently executing process. A non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst.



Eg: Consider the following processes, with arrival and burst times in milliseconds:

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

Non-preemptive SJF schedule:

| P1           | P3 | P2        | P4        |
0              7    8           12          16

Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Preemptive SJF schedule:

| P1 | P2  | P3 | P2 | P4        | P1             |
0    2     4    5    7           11               16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling: In this scheme a priority is associated with each process and the CPU is allocated to the process with the highest priority. Priorities are generally some fixed range of numbers. Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity to compute the priority of a process (for example time limits, memory requirements, number of open files). Externally defined priorities are set by criteria such as the importance of the process, the type of the process, the department sponsoring the process, etc.

Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue its priority is compared with that of the running process. A preemptive priority scheduling algorithm will preempt the running process if the newly arrived process has a higher priority. A non-preemptive priority scheduling algorithm will not preempt the running process.

The major drawback of the priority scheduling algorithm is indefinite blocking (starvation). A solution to the problem of starvation is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.


Q. For the following set of processes, find the average waiting time, considering a smaller number to mean a higher priority.

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

| P2 | P5   | P1          | P3  | P4 |
0    1      6             16    18   19

Waiting time of P1 = 6
Waiting time of P2 = 0
Waiting time of P3 = 16
Waiting time of P4 = 18
Waiting time of P5 = 1
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2

4) Round Robin (RR): The round robin scheduling algorithm is designed especially for time-sharing systems. It is a purely preemptive algorithm. In this scheme every process is given a time slice (time quantum); if the process is unable to complete within the given time slice, it is preempted and another process is executed. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of one time slice. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.

Example of round robin with time quantum = 20:

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

| P1  | P2  | P3  | P4  | P1  | P3   | P4  | P1   | P3   | P3  |
0     20    37    57    77    97    117   121   134    154   162
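The schedule above can be reproduced with a tiny round-robin simulation. The sketch below prints the same sequence of time slices for the four processes; the quantum and burst times come from the example, and the rest is illustrative:

    #include <stdio.h>

    int main(void)
    {
        int remaining[] = {53, 17, 68, 24};      /* burst times of P1..P4 from the example */
        int n = 4, quantum = 20, t = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;                    /* this process has already finished */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%3d  P%d runs for %d\n", t, i + 1, slice);
                t += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0)
                    left--;                      /* the process completes within this slice */
            }
        }
        printf("all processes finish at t=%d\n", t);
        return 0;
    }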


Multilevel Queue Scheduling:

A multilevel queue scheduling algorithm partition the ready queue into several separatesqueue. The processes are permanently assigned to a particular queue. Its queue has its ownscheduling algorithm.

For example: The interactive processes queue may use round robin algorithm while batchprocesses queue would use FCFS algorithm. A part from it there is a scheduling among aqueue which is implemented through preemptive priority scheduling algorithm. Each queuehas higher priority over low priority queue.

For example: a process in the batch queue would run only if the system process queue, the interactive process queue and the interactive editing process queue are all empty.

Multilevel Feedback Queue Scheduling:


Multilevel feedback queue scheduling allows a process to move between the queues. Separate queues are formed on the basis of CPU burst time.

If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a low-priority queue, it is moved to a higher-priority queue.

For example: when a process enters the ready queue, it is put in queue 0. A process in queue 0 is given a time slice (quantum) of 8 ms. If it does not finish within the given time, it is moved to the end of queue 1, and so on.

A multilevel feedback queue scheduler is defined by the following parameters (one possible way to hold them in C is sketched after this list):

- The number of queues.
- The scheduling algorithm for each queue.
- The method used to determine when to upgrade a process.
- The method used to determine when to demote a process.
- The method used to determine which queue a process will enter when it needs service.
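One possible way to capture these parameters in C is sketched below; the structure, field names and policy values are assumptions for illustration only, not part of the notes.

#define MAX_QUEUES 8

enum sched_policy { POLICY_RR, POLICY_FCFS };

struct mlfq_config {
    int  num_queues;                      /* number of queues                       */
    enum sched_policy policy[MAX_QUEUES]; /* scheduling algorithm for each queue    */
    int  quantum[MAX_QUEUES];             /* time slice per queue (for RR queues)   */
    int  demote_after;                    /* CPU time used before a process demotes */
    int  promote_after;                   /* waiting time before a process promotes */
    int  entry_queue;                     /* queue a newly arrived process enters   */
};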

PROCESS SYNCHRONIZATION

Definitions:

1. Critical section: A section of code within a process which cannot be executed while another process is executing its corresponding critical section.

2. Deadlock: A situation in which two or more processes are unable to proceed because each is waiting for one of the others.

3. Livelock: A situation in which two or more processes continuously change their state in response to changes in the other processes without doing any useful work.

4. Mutual exclusion: The requirement that only one process at a time executes the critical section.

5. Race condition: A situation in which processes read and write a shared data item and the final result depends on the relative timing of their execution.

6. Starvation: A situation in which a runnable process is not executed for an indefinite period.


Principles of concurrency: concurrency has to deal with the following issues.

Communication among processes, sharing of resources, synchronization of the multiple processes, and allocation of processor time.

Concurrency arises in three different contexts.

i. Multiple applications.
ii. Structured applications.
iii. Operating system structure.

In a single-processor multiprogramming system, processes are interleaved in time but still appear to be executing simultaneously. In a multiprocessor system the processes are overlapped, that is, two or more processes execute simultaneously. In both situations the following difficulties arise.

a) Sharing of global resources.
b) Allocation of resources.
c) Locating a programming error.

Consider a simple example in a uni-processor environment.

char chin, chout;   /* shared global variables */

void echo()
{
    chin = getchar();    /* read a character from the keyboard */
    chout = chin;
    putchar(chout);      /* write the character to the display */
}

If two processes both call echo() and are interleaved between the getchar() and the putchar(), the character read by one process may be overwritten by the other and lost, which is why access to such shared variables must be controlled.

Operating System Concerns: The following are the issues raised by the existence of concurrency.

1. The operating system must be able to keep track of various processes.


2. The operating system must allocate and de-allocate various resources for each process. These resources include processor time, memory, input-output devices and files.

3. The operating system must protect the data and critical resources of each process.

4. The functioning of a process, and its results, must be independent of the speed at which its execution is carried out relative to other processes.

Process Interaction: Processes may interact with the following degrees of awareness of one another.

- Processes unaware of each other.
- Processes indirectly aware of each other.
- Processes directly aware of each other.

The degree of awareness, the resulting relationship, the influence that one process has on the other, and the potential control problems are summarised below.

1. Processes unaware of each other. Relationship: competition. The results of one process are independent of the actions of the others, but the timing of a process may be affected. Potential control problems: mutual exclusion, deadlock (renewable resources), starvation.

2. Processes indirectly aware of each other (e.g. through a shared object). Relationship: cooperation by sharing. The results of one process may depend on information obtained from the others, and the timing of a process may be affected. Potential control problems: mutual exclusion, deadlock (renewable resources), starvation, data coherence.

3. Processes directly aware of each other (they have communication primitives available to them). Relationship: cooperation by communication. The results of one process may depend on information obtained from the others, and the timing of a process may be affected. Potential control problems: deadlock (consumable resources), starvation.


Requirements For Mutual Exclusion: Any solution for enforcing mutual exclusion must satisfy the following requirements.

1. Mutual exclusion must be enforced: when one process is in its critical section, no other process may be in its critical section.

2. A process that halts in its non-critical section must not interfere with other processes.

3. A process should not wait indefinitely for entry into the critical section.

4. When no process is in its critical section, any process that requests entry to the critical section must be granted entry without delay.

5. A process remains inside its critical section for a finite time only.

Mutual Exclusion: Algorithm Approach

Algorithm 1

/* process 0 */
while (turn != 0)
    /* do nothing */ ;
/* critical section */
turn = 1;

/* process 1 */
while (turn != 1)
    /* do nothing */ ;
/* critical section */
turn = 0;

In this case the two processes share a variable turn. A process which wants to enter the critical section checks the turn variable; if the value of turn is equal to that process's number, the process may go into the critical section. The drawback of this algorithm is that it causes busy waiting, and if one process fails the other is permanently blocked.

Algorithm 2

/* process 0 */
while (flag[1])
    /* do nothing */ ;
flag[0] = true;
/* critical section */
flag[0] = false;

/* process 1 */
while (flag[0])
    /* do nothing */ ;
flag[1] = true;
/* critical section */
flag[1] = false;


In this algorithm we use a Boolean flag for each process. When a process wants to enter its critical section, it checks the other process's flag; if it is false, it indicates that the other process is not in its critical section. The checking process immediately sets its own flag to true and goes into the critical section. After leaving its critical section it resets its flag to false.

This algorithm does not ensure mutual exclusion. It can happen that both processes check each other's flag, find it false, set their own flags to true and enter their critical sections simultaneously.

Algorithm 3

/* process 0 */
flag[0] = true;
while (flag[1])
    /* do nothing */ ;
/* critical section */
flag[0] = false;

/* process 1 */
flag[1] = true;
while (flag[0])
    /* do nothing */ ;
/* critical section */
flag[1] = false;

In this algorithm a process which wants to enter the critical section sets its flag to true and then checks the other process's flag: if it is not set, the process enters the critical section; if it is set, the process waits.

The drawback of this algorithm is that both processes may set their flags to true and then check each other's flag, causing a deadlock. Also, if a process fails inside the critical section, the other process is blocked.

Algorithm 4

/* process 0 */
flag[0] = true;
while (flag[1])
{
    flag[0] = false;
    /* delay */
    flag[0] = true;
}
/* critical section */
flag[0] = false;


/* process 1 */
flag[1] = true;
while (flag[0])
{
    flag[1] = false;
    /* delay */
    flag[1] = true;
}
/* critical section */
flag[1] = false;

In this algorithm it can be shown that a livelock situation may occur, in which the two processes continuously set and reset their flags without doing any useful work. If one of the processes slows down, the livelock is broken and one of the processes enters its critical section.

Peterson's Algorithm:

boolean flag[2];
int turn;

void p0()
{
    while (true)
    {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1)
            /* do nothing */ ;
        /* critical section */
        flag[0] = false;
        /* remainder section */
    }
}

void p1()
{
    while (true)
    {
        flag[1] = true;
        turn = 0;


        while (flag[0] && turn == 0)
            /* do nothing */ ;
        /* critical section */
        flag[1] = false;
        /* remainder section */
    }
}

void main()
{
    flag[0] = false;
    flag[1] = false;
    parbegin(p0, p1);
}

Peterson's algorithm gives a simple solution to the problem of mutual exclusion. A global variable turn decides which process must go into the critical section, and mutual exclusion is easily preserved.

Suppose p0 wants to enter its critical section: it sets its flag to true and the turn value to 1. Once flag[0] is true, p1 cannot enter its critical section; and if p1 is already inside its critical section, then p0 is blocked from entering. It can be shown that Peterson's algorithm provides a solution which is free from deadlock and livelock, and the algorithm can easily be generalized to n processes.

Semaphores: Two or more processes can co-operate by means of simple signals, such that a process can be forced to stop at a specific place until it has received a specific signal. Any coordination requirement can be satisfied by appropriate signals. For signaling, special variables called semaphores are used. To transmit a signal via semaphore S, a process executes the primitive semSignal(S). To receive a signal via semaphore S, a process executes the primitive semWait(S).

To achieve the desired effect we can view the semaphore as a variable that has an integer value on which three operations are defined.

1) A semaphore may be initialized to a non-negative value.

2) The semWait operation: semWait decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked; otherwise the process continues.


3) The semSignal operation: semSignal increments the semaphore value. If the resulting value is less than or equal to 0, a process blocked by a semWait operation is unblocked.

The semaphore primitives are defined as follows.

struct semaphore {
    int count;
    queueType queue;
};

void semWait(semaphore s)
{
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */
        /* block this process */
    }
}

void semSignal(semaphore s)
{
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */
        /* place process P on the ready list */
    }
}
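The fragment below is a hedged illustration of the same idea using POSIX semaphores (sem_init, sem_wait, sem_post) rather than the pseudocode primitives above: a semaphore initialised to 1 makes the increment of a shared counter mutually exclusive between two threads.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;
long counter = 0;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);      /* semWait: enter the critical section   */
        counter++;         /* critical section                      */
        sem_post(&s);      /* semSignal: leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);                  /* initialise to 1 (binary use) */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* expected 200000 */
    sem_destroy(&s);
    return 0;
}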

Binary Semaphores: A binary semaphore may only take the values 0 and 1 and can be defined by the following three operations.

1. Initialization: A binary semaphore may be initialized to 0 or 1.

2. The semWaitB operation: semWaitB checks the semaphore value. If the value is 0, the process executing the semWaitB is blocked. If it is 1, the value is changed to 0 and the process continues.

3. The semSignalB operation: semSignalB checks to see whether any processes are blocked on this semaphore. If so, a process blocked by a semWaitB operation is unblocked; if no processes are blocked, the value of the semaphore is set to 1.


The binary semaphore primitives are defined as follows.

struct binary_semaphore {
    enum {zero, one} value;
    queueType queue;
};

void semWaitB(binary_semaphore s)
{
    if (s.value == one)
        s.value = zero;
    else {
        /* place this process in s.queue */
        /* block this process */
    }
}

void semSignalB(binary_semaphore s)
{
    if (s.queue is empty())
        s.value = one;
    else {
        /* remove a process P from s.queue */
        /* place process P on the ready list */
    }
}


Unit - IV
MEMORY MANAGEMENT

Memory consists of a large array of words or bytes, each having its own address. The CPU fetches instructions from memory according to the value of the program counter. To improve the utilization of the CPU, the computer must keep several processes in memory. Many memory management schemes have been proposed to utilize memory; the selection of a memory management scheme depends on many factors, such as the hardware of the system.

ADDRESS BINDING

A user program goes through several steps such as compiling, loading and linking. Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic. A compiler will bind these addresses to relocatable addresses. The loader will in turn bind these addresses to absolute addresses. Each binding is a mapping from one address space to another.

The binding of instructions and data can be done in the following ways.

1..Compile time - If it is known at compile time where the process will reside in memory, then absolute code can be generated. If the starting address changes, the program must be recompiled.

2..Load time - If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code.

3..Execution time - Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g. base and limit registers).

LOGICAL V/S PHYSICAL ADDRESS SPACE

Logical address - generated by the CPU; also referred to as a virtual address.

Physical address - the address seen by the memory unit.

Logical and physical addresses are the same in the compile-time and load-time address binding schemes; logical (virtual) and physical addresses differ in the execution-time address binding scheme.


DYNAMIC LOADING

- A routine is not loaded until it is called.
- Better memory space utilization; an unused routine is never loaded.
- Useful when large amounts of code are needed to handle infrequently occurring cases.
- No special support from the OS is required; it is implemented through program design.

OVERLAYS :

OVERLAYS FOR A TWO PASS ASSEMBLER

- Keep in memory only those instructions and data that are needed at any given time.
- Needed when a process is larger than the amount of memory allocated to it.
- Implemented by the user; no special support is needed from the OS, but the programming design of the overlay structure is complex.

SWAPPING


A process needs to be in memory to be executed. A process can be swapped out temporarily from main memory to a backing store and then brought back into main memory for continued execution. When a process completes its time slice, it may be swapped out in favour of another process (in the case of the RR scheduling algorithm). Another swapping policy, used with priority-based algorithms, is that when a higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process and swap in the higher-priority process. This type of swapping is called roll out, roll in. Swapping requires a backing store. The backing store is commonly a fast disk, large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images. The major part of swap time is transfer time; the total transfer time is directly proportional to the amount of memory swapped.

SCHEMATIC VIEW OF SWAPPING


CONTIGUOUS ALLOCATION:

The memory is usually divided into two partitions, one for the OS and the other for the user processes. We want several user processes to reside in memory at the same time; hence we need to consider how to allocate the available memory to the processes. In contiguous memory allocation, each process is contained in a single contiguous section of memory.

When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers, and every address generated by the CPU is checked against these registers, so that the OS and other user programs cannot be modified by the running process.

HARDWARE SUPPORT FOR RELOCATION AND LIMIT REGISTERS

(Diagram: the logical address generated by the CPU is compared with the limit register; if it is less, the relocation register value is added to it to form the physical address sent to memory, otherwise a trap / addressing error occurs.)

MEMORY ALLOCATION

One of the simplest methods for memory allocation is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

The OS keeps a table indicating which parts of memory are available and which are occupied. Initially all memory is available for user processes and is considered as one large block of available memory called a hole. When a process arrives and needs memory, we search for a hole large enough for this process. The set of holes is searched to determine which hole is best to allocate. The following strategies are used to select a free hole.



1..First fit - Allocate the first hole that is big enough. We can start searching the set of holes either at the beginning or where the previous first-fit search ended, and we can stop searching as soon as we find a free hole that is large enough.

2..Best fit - Allocate the smallest hole that is big enough. For this purpose we must search the entire list. It produces the smallest leftover hole.

3..Worst fit - Allocate the largest hole. For this purpose we must search the entire list. It produces the largest leftover hole. First fit and best fit are better than worst fit in terms of speed and storage utilization.

All the above algorithms suffer from external fragmentation. Memory fragmentation can be internal or external.

- External fragmentation - total memory space exists to satisfy a request, but it is not contiguous.
- Internal fragmentation - allocated memory may be slightly larger than the requested memory; the size difference is memory internal to a partition that is not being used.
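A minimal sketch of the first-fit strategy described above is given below; the hole structure and names are illustrative, not from the notes. Best fit would instead scan the whole list and remember the smallest hole that still satisfies the request.

#include <stddef.h>

/* A free hole in memory (illustrative). */
struct hole { size_t start; size_t size; };

/* First fit: return the index of the first hole large enough for the
   request, or -1 if no hole can satisfy it. */
int first_fit(struct hole *holes, int n, size_t request)
{
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request)
            return i;
    return -1;
}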

PAGING:

Paging is a memory management scheme which permits the physical address space of a process to be non-contiguous. Paging avoids the problem of fitting processes of different sizes into memory. In this scheme physical memory is broken into fixed-sized blocks called frames, while logical memory is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store.

The following diagram gives the required paging hardware.


Every address generated by the CPU is divided into two parts: a page number p and a page offset d. The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address, which is sent to the memory unit.

The paging model of a memory is as shown below


The size of a page is a power of 2 and lies between 512 bytes and 16 MB per page, depending on the computer architecture. The selection of a power of 2 as the page size makes the translation of a logical address into a page number and page offset easy.
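Because the page size is a power of 2, the page number is simply the high-order bits of the logical address and the offset is the low-order bits. A small sketch, assuming a 4 KB page size and a made-up page table:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u       /* 2^12 bytes per page (assumed)  */
#define OFFSET_BITS 12

int main(void)
{
    /* Toy page table: page_table[p] holds the frame number of page p. */
    uint32_t page_table[4] = {5, 2, 7, 1};

    uint32_t logical = 8300;                       /* example logical address */
    uint32_t p = logical >> OFFSET_BITS;           /* page number             */
    uint32_t d = logical & (PAGE_SIZE - 1);        /* page offset             */
    uint32_t physical = (page_table[p] << OFFSET_BITS) | d;

    printf("page %u, offset %u -> physical address %u\n", p, d, physical);
    return 0;
}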

When we use the paging scheme we have no external fragmentation; however, there may be some internal fragmentation. When a process arrives in the system to be executed, its size in terms of pages is examined. Each page of the process needs one frame. The first page of the process is loaded into one of the allocated frames and the frame number is put in the page table for this process; this is done for all the pages. The OS maintains the list of allocated and free frames in a data structure called the frame table, as given in the diagram below.

SEGMENTATION :

A program may consist of a main program, subroutines, procedures, functions, etc., each of which can be considered as a segment of variable length. Elements within a segment are identified by their offset from the beginning of the segment. Segmentation is a memory management scheme which supports this view of memory. A logical address space is a collection of segments. Each segment has a name and a length. Addresses specify the segment name and the offset within the segment. A logical address consists of a segment number and an offset.


SEGMENTATION HARDWARE:

A logical address consists of two parts: a segment number s and an offset d. The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit; if it is not, we trap to the OS. If the offset is valid, it is added to the segment base to produce the address in physical memory.

A particular advantage of segmentation is the association of protection with the segments.

Another advantage of segmentation involves the sharing of code or data.

Segmentation may cause external fragmentation, when all blocks of free memory are too small to accommodate a segment.

SEGMENTATION WITH PAGING:

By combining segmentation and paging we can get the best of both. In this model the logical address space of a process is divided into two partitions. The first partition consists of up to 8 K segments that are private to that process. The second partition consists of up to 8 K segments which are shared among all the processes. Information about the first partition is kept in the local descriptor table (LDT), while information about the second partition is kept in the global descriptor table (GDT). Each entry in the LDT and GDT consists of 8 bytes, with detailed information about a particular


segment, including the base location and length of that segment. The logical address consists of a selector and an offset. The selector is a 16-bit number given as:

s (13 bits) | g (1 bit) | p (2 bits)

where s = the segment number, g = indicates whether the segment is in the LDT or the GDT, and p = deals with protection.

The offset is a 32-bit number specifying the location of the byte within the segment. It is given as:

page number p1 (10 bits) | page number p2 (10 bits) | page offset d (12 bits)

VIRTUAL MEMORY

Virtual memory is a technique that allows the execution of processes which may not be completely in memory. One major advantage of this scheme is that we can have programs which are larger than physical memory. Many programs have code to handle unusual error conditions, and if these errors never occur, this code is never executed. Also, programs may contain arrays, lists and tables which are allocated more memory than required. Apart from this, certain options and features of a program are rarely used. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory. Virtual memory is implemented by demand paging as well as demand segmentation.

DEMAND PAGING

A demand paging system is similar to a paging system with swapping, in which we have a swapper which swaps only those pages which are needed and does not swap in the entire process. When a process is to be swapped in, the swapper (pager) guesses which pages will be used before the process is swapped out again, and it brings in only those pages. Hence the pager decreases the swap time and the amount of physical memory needed.

To implement demand paging we need some hardware to differentiate between those pages which are in memory and those pages that are on the disk.


When the valid bit is set, it means that the page is legal and is in memory. If the bit is set to invalid, it means that either the page is not valid or it is valid but is currently on the disk.


STEPS FOR HANDLING PAGE FAULT:

The procedure for handling page fault is as follows.

1. We check an internal table to determine whether the reference was valid or invalid.
2. If it was valid but the page is not in physical memory, we now bring it in.
3. We find a free frame.
4. We bring the desired page from the disk into the frame.
5. We modify the table.
6. We restart the instruction.

PAGE REPLACEMENT

If no frame is free, we find a frame that is not currently being used and free it. We can free a frame by writing its contents to swap space and changing the page table entries. This is done by the following steps.

1. Find the location of the desired page on disk.
2. Find a free frame:


   - If there is a free frame, use it.
   - If there is no free frame, use a page replacement algorithm to select a victim frame.

3. Read the desired page into the (newly) free frame. Update the page and frame tables.
4. Restart the process.

If no frames are free, two page transfers are required (one out, one in). We can reduce this overhead by using a modify bit (dirty bit). The dirty bit is set for a page if the page has been modified; in that case, when the page is selected for replacement it must be written back to disk. If the dirty bit is not set, the page need not be written out and can simply be overwritten by the incoming page.

PAGE REPLACEMENT ALGORITHMS.

First –In – first –Out (FIFO) Algorithm.

The simplest page replacement algorithm is the FIFO algorithm. A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen. The FIFO algorithm is easy to understand and program; however, its performance is not always good.


Prob: Consider the following reference string:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Find the number of page faults using the FIFO page replacement algorithm with three frames.
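A small C sketch that counts the faults for this problem is given below; for the reference string above with three frames it reports 15 page faults.

#include <stdio.h>

int main(void)
{
    int rs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof rs / sizeof rs[0];
    int frames[3] = {-1, -1, -1};
    int next = 0, faults = 0;            /* next always points at the oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == rs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = rs[i];        /* FIFO: replace the oldest page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);   /* prints 15 */
    return 0;
}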

Belady's Anomaly:

In some page replacement algorithms, the number of page faults may increase as the number of allocated frames increases. Consider the following curve showing Belady's Anomaly.


From the graph we see that with three frames we get 9 page faults, while with four frames we get 10 page faults.

OPTIMAL PAGE REPLACEMENT ALGORITHM:

In this algorithm we replace the page which will not be used for the longest period of time. This algorithm has the lowest page fault rate compared to other algorithms and does not suffer from Belady's Anomaly. The algorithm is difficult to implement because it requires future knowledge of the reference string; hence it is mainly used for comparison studies.

Eg: For the following reference string, find the number of page faults using the optimal page replacement algorithm with 3 frames.

RS: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

Answer: 9 page faults.

LEAST RECENTLY USED ALGORITHM(LRU):

In this algorithm, when we need to replace a page we replace the page that has not been used for the longest period of time. LRU replacement associates with each page the time of that page's last use. When a page must be replaced, we choose the page which has not been used for the longest period of time; hence we look into the past.

The major problem with this algorithm is how to implement it, for which additional hardware is required. The two types of implementation are:

1. Counter implementation: Every page-table entry has a counter; every time the page is referenced through this entry, the clock is copied into the counter.


When a page needs to be replaced, we look at the counters to determine which page to replace: the page with the smallest counter value is the least recently used.

2. Stack implementation.

Another approach is to keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack and put on top. Hence the top of the stack is always the most recently used page and the bottom of the stack is the least recently used page.

Eg: For the following reference string, find the number of page faults using the LRU algorithm with 3 frames.

RS: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

THRASHING:

For execution, a process needs frames. If the process does not have the required number of frames, it gets a page fault; hence we must replace some page. Since all its pages are in active use, we end up replacing a page that will be needed again immediately, so we get a page fault again, and again, and again. The process continues to fault; this high paging activity is called thrashing.

CAUSE OF THRASHING: The OS monitors CPU utilization; if it is low, we increase the degree of multiprogramming by introducing new processes. A global page replacement algorithm is used, which replaces pages without regard to the process to which they belong. Suppose a process needs more frames: it starts faulting and takes frames away from other processes. Those processes need the pages they lose, so they also fault, taking frames away from yet other processes. Hence we get high paging activity, due to which CPU utilization decreases.


As the CPU utilization decreases, the CPU scheduler brings in more new processes, causing more page faults, as a result of which CPU utilization drops even further. The CPU scheduler again brings in more processes. At this stage thrashing has occurred.


Unit - V
FILE SYSTEM.

A file is a collection of related information which is recorded on secondary storage. From the user's point of view, data cannot be written to secondary storage unless it is within a file. Files represent programs and data. A file has a certain defined structure according to its type. A text file is a sequence of characters. A source file is a sequence of subroutines and functions. An object file is a sequence of bytes organized into blocks. An executable file is a series of code sections.

FILE ATTRIBUTES:

A file has certain attributes, which may vary from one OS to another, and consists of the following.

1..Name – The symbolic file name is the only information through which the user can identify the file.

2..Identifier – It consists of a number which identifies the file within the file system.

3..Type – It is required for those systems which support different file types.

4..Location – It provides information regarding the location of the file and the device on which the file resides.

5..Size – It gives information regarding the size of the file.

6..Protection – It gives the access-control information used to decide who can do reading, writing and executing.

7..Time, date and user identification – This information may be kept for creation, last modification and last use. The data can be useful for protection, security and monitoring of usage.

FILE OPERATIONS:

The various file operations are:

1..Creating a file – To create a file two steps are necessary.


- Space in the file system must be found.
- An entry for the new file must be made in the directory.

2..Writing a file – To write a file we make a system call specifying the name of the file and the information to be written to the file. The system must keep a write pointer to the location in the file where the next write is to take place.

3..Reading a file – To read from a file we make a system call specifying the name of the file and the block of information which is to be read. The system searches the directory to find the file and maintains a read pointer to the location from where the next read is to take place.

4..Repositioning within a file – The directory is searched for the particular entry and the current file position is set to the given value.

5..Deleting a file – To delete a file we search the directory for the given file, release all its file space so that it can be reused by other files, and erase the directory entry.

6..Truncating a file – This operation allows a user to retain the file attributes but erase the contents of the file.

The other common operations include appending to a file and renaming an existing file.

The following information is associated with an open file.

1..File pointer – It keeps track of the last read-write location.

2..File open count – It keeps track of a counter giving the number of opens and closes of the file; the counter becomes zero on the last close.

3..Disk location of the file – It is required for modifying the data within the file.

4..Access rights – This information can be used to allow or deny any request.

FILE TYPES:

A common technique for implementing file types is to include the type as part of the file name. The name is split into two parts: a name and an extension. The common file types are:

File type            Usual extension           Function
1..executable        exe, com, bin, or none    ready-to-run machine-language program
2..object            obj, o                    compiled, machine language, not linked


3..text              txt, doc                  textual data, documents
4..batch             bat, sh                   commands to the command interpreter
5..word processor    wp, tex, rrf, doc         various word-processor formats

FILE STRUCTURE :

Since there are various file types, the OS would need to support multiple file structures, due to which the resulting size of the OS would become very large. If the OS defines five different file structures, it needs to contain the code to support these five file structures, and severe problems may arise if new applications require a file structure not supported by the OS.

UNIX considers each file to be a sequence of 8-bit bytes; no interpretation of these bits is made by the OS. Each application program must include its own code to interpret an input file into the proper structure. However, every OS must support at least the executable file structure.

ACCESS METHODS

The various access methods are

1..Sequential access

2..Direct access

3..Indexed access

1..Sequential access – The simplest access method is sequential access, which is used by editors, compilers, etc. Information in the file is processed in order, one record after the other. The operations on the file are reads and writes. A read operation reads the next portion of the file and advances the pointer. A write operation appends to the end of the file and advances the pointer. Sequential access is based on a tape model of a file.

2..Direct access – We consider the file to be made up of fixed-length logical records, which allows programs to read and write records without following any order. The direct access method is based on a disk model of a file, since a disk allows random access to any file block. For direct access the file is viewed as a numbered sequence of blocks or records. Since there is no restriction to follow any order, we may read block 14, then block 54, and then write block 6.


For the direct access method, the file operations are read n and write n.

3..Indexed access-

With large files, the index itself may become too large to be kept in memory. Hence we create an index for the index file: the primary index file contains pointers to secondary index files, which point to the actual data items.

DIRECTORY STRUCTURES

The directory can be viewed as a symbol table which translates file names into their directory entries. A directory can be organized in many ways. The following are the various operations performed on a directory.

1..Search for a file

2..Create a file

3..Delete a file

4..List a directory

5..Rename a file


A directory has the following logical structures

1..Single level directory

The simplest directory structure is the single-level directory, in which all files are stored in the same directory. The advantage of this structure is that it is easy to support and understand.

The drawback of this implementation is that as the number of files increases, the user may find it difficult to remember the names of all the files. If the system has more than one user, a file-naming issue also arises, because each file must have a unique name.

2..Two level directory

In the two-level directory structure each user has his own user file directory (UFD). Each UFD has a similar structure and contains the files of a single user. When a user logs in, the system's master file directory (MFD) is searched. When a user refers to a particular file, only his own UFD is searched. Hence different users may have files with the same name, as long as all the file names within each UFD are unique.


To create a file for a user, the OS searches only that user's UFD to confirm whether another file of that name exists. To delete a file, the OS confines its search to the local UFD; hence it cannot accidentally delete another user's file which has the same name.

UFDs are created by a special system program using the proper user name and account information. The program creates a new UFD and adds an entry for it in the MFD.

The disadvantage of the two-level directory structure is that it isolates one user from another. In some systems, if the path name is given, access is possible to a file residing in another user's UFD. A two-level directory can be thought of as a tree of height 2: the root of the tree is the MFD, the UFDs are the branches, and the files are the leaves.

3..TREE STRUCTURED DIRECTORY

The tree-structured directory is the most common directory structure. It contains a root directory, and every file in the system has a unique path name. A directory (or subdirectory) contains a set of files or subdirectories. A directory is simply another file treated in a special way. All directories have the same internal format; one bit in each directory entry defines the entry as a file (0) or as a subdirectory (1). Special system calls are used to create and delete directories.

Each user has a current directory; when a reference is made to a file, the current directory is searched. If the file is not in the current directory, then the user must specify a path name or change the current


directory to the directory containing that file. The user can change his current directory whenever required.

In this structure path names can be of two types: absolute path names and relative path names.

There are two approaches to handle the deletion of a directory.

1..Some systems will not delete a directory unless it is empty.

2..Some systems delete a directory even if it contains subdirectories or files.

4..ACYCLIC – GRAPH DIRECTORIES

An acyclic-graph directory structure allows directories to have shared subdirectories and files, such that the same file or subdirectory may be in two different directories. A shared file or directory is not the same as two copies: any change made by one user is immediately visible to the other. A new file created by one person will automatically appear in all the shared subdirectories.

This structure is more complex, since a file may have multiple absolute path names. Another problem involves the deletion of a shared directory or file.


ALLOCATION METHODS:

1..Contiguous Allocation – The contiguous allocation method requires each file to occupy a set of contiguous blocks on the disk. Contiguous allocation of a file is defined by the disk address of the first block and the length of the file. The directory entry for each file indicates the address of the starting block and the length of the area allocated for this file. Accessing a file in this method is very easy; it supports both the sequential and the direct access method.

The main drawback of this method is that it suffers from external fragmentation. Another problem is finding space for a new file. Also, with contiguous allocation there is the problem of determining how much space is needed for a file.

2..Linked Allocation – In linked allocation each file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. Each block contains a pointer to the next block. These pointers are not made available to the user. In this allocation there is no external fragmentation, and any free block can be used for a file; a file can continue to grow as long as free blocks are available. The disadvantage of linked allocation is that it does not support the direct access method. Another disadvantage is the space required for the pointers. A problem may also arise if a pointer is damaged or lost; hence this type of allocation can be unreliable.

3..Indexed Allocation – In this allocation all the pointers are brought together into one block called the index block. The directory contains the address of the index block. When a file is created, all pointers in the index block are set to nil. When a block is first written, its address is put in the index block.

Indexed allocation supports direct access and does not suffer from external fragmentation. The main drawback of this allocation is the space wasted in storing the index-block information.

FREE SPACE MANAGEMENT

1..Bit vector – In this approach each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0.

The main advantage of this approach is its relative simplicity and its efficiency in finding free blocks. The disadvantage is that for fast access these bit vectors must be kept in main memory, where they occupy a large space.
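A minimal sketch of searching such a bit vector is given below, assuming the convention above (bit 1 = free, bit 0 = allocated) and a 32-bit word size.

#include <stdint.h>

/* Return the number of the first free block, or -1 if every block is
   allocated.  One 32-bit word of the bitmap covers 32 blocks. */
int first_free_block(const uint32_t *bitmap, int nwords)
{
    for (int w = 0; w < nwords; w++) {
        if (bitmap[w] == 0)
            continue;                        /* all 32 blocks in this word are allocated */
        for (int b = 0; b < 32; b++)
            if (bitmap[w] & (1u << b))
                return w * 32 + b;           /* first set bit = first free block */
    }
    return -1;
}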

2..Linked list – In this approach we keep a pointer to the first free block. This free block contains a pointer to the next free block, and so on. This scheme is not efficient, because to go through the list of free blocks we must read each block, which takes much time.


3..Grouping – In this approach we store the addresses of N free blocks in the first free block. The first N-1 of these blocks are actually free, while the last block contains the addresses of another N free blocks, and so on.

4..Counting – Generally several contiguous blocks may be allocated or freed simultaneously. Hence, instead of keeping a list of N free blocks, we can keep the address of the first free block and the count N of free contiguous blocks that follow the first block.

Mass-Storage Systems

Disk Scheduling:

1. FCFS Scheduling: The simplest form of disk scheduling is FCFS scheduling. This algorithm is easy to understand but does not provide the fastest service. Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 37, 122, 14, 124, 65, 67. If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183, and so on, for a total head movement of 640 cylinders, as shown in the diagram below.
Queue = 98, 183, 37, 122, 14, 124, 65, 67. Head starts at 53.

2. SSTF (Shortest Seek Time First): The SSTF algorithm selects the request with the minimum seek time from the current head position, i.e. the pending request closest to the current head position. This algorithm gives good performance but may cause starvation. Suppose that we have two requests in the queue, for cylinders 14 and 186. While servicing the request at 14, a new request near 14 arrives and is serviced while the request for 186 waits; while servicing that request, another request close to 14 arrives, and this may continue, causing the request for cylinder 186 to wait indefinitely. Consider the following example.


3. SCAN: In the SCAN algorithm the disk arm starts at one end of the disk and moves towards the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk. At the other end, the direction of head movement is reversed and servicing continues. The head continuously scans back and forth across the disk. Consider the following example.
Queue = 98, 183, 37, 122, 14, 124, 65, 67. Head starts at 53.

4. C-SCAN (Circular SCAN): Circular SCAN is a variant of SCAN scheduling designed to provide a more uniform wait time. C-SCAN moves the head from one end of the disk to the other, servicing requests along the way; when the head reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip. Consider the following example.


5. C-LOOK: In this scheduling the arm goes only as far as the final request in each direction, instead of going all the way to the end of the disk. The algorithm is called LOOK or C-LOOK because we look for a request before continuing to move in a given direction.
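For comparison, the sketch below computes the total head movement of FCFS and SSTF for the request queue used in these examples (head initially at cylinder 53); it reproduces the 640 cylinders quoted for FCFS and gives 236 cylinders for SSTF.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int q[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = 8, head, total;

    /* FCFS: service the requests strictly in arrival order. */
    head = 53; total = 0;
    for (int i = 0; i < n; i++) { total += abs(q[i] - head); head = q[i]; }
    printf("FCFS total head movement = %d\n", total);   /* 640 */

    /* SSTF: always pick the pending request closest to the head. */
    int done[8] = {0};
    head = 53; total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (best < 0 || abs(q[i] - head) < abs(q[best] - head)))
                best = i;
        total += abs(q[best] - head);
        head = q[best];
        done[best] = 1;
    }
    printf("SSTF total head movement = %d\n", total);   /* 236 */
    return 0;
}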

DEADLOCKS

Deadlocks:

A system consists of a finite number of resources which are to be distributed among a number of competing processes. A process must request a resource before using it; if the resource is available the request is granted, otherwise the process must wait.

Under the normal mode of operation, a process may utilize a resource only in the following sequence.

1..request

2..use

3..release


A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.

DEADLOCK CHARACTERISATION

NECESSARY CONDITIONS FOR DEADLOCK

A deadlock situation can arise if the following four conditions hold simultaneously in a system.

1..Mutual Exclusion – At least one resource must be held in a non-sharable mode, so that only one process can use the resource at a time.

2..Hold and wait – A process must be holding at least one resource and waiting to acquire additional resources which are held by other processes.

3..No pre-emption – Resources cannot be preempted forcefully from any process.

4..Circular wait – A set of waiting processes {P0, P1, ..., Pn} exists such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0.

RESOURCE ALLOCATION GRAPH

Deadlocks can be described in terms of a resource allocation graph. This graph consists of a set of vertices and a set of edges.

A directed edge from process Pi to resource Rj, denoted Pi->Rj, signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource; it is called a request edge. A directed edge from resource type Rj to process Pi, denoted Rj->Pi, signifies that an instance of resource type Rj has been allocated to process Pi; it is called an assignment edge.

We represent each process Pi as a circle and each resource type Rj as a square. Since resource type Rj may have more than one instance, we represent each instance as a dot within the square.

When a process Pi requests an instance of resource type Rj, a request edge is inserted. When the request is fulfilled, the request edge is converted to an assignment edge. When the process releases the resource, the assignment edge is deleted. Consider the following example.


In the above allocation graph, process P1 is holding the resource R2 and has requested resource R1. Process P2 is holding resources R1 and R2 and is requesting resource R3. Process P3 is holding resource R3.

METHODS FOR HANDLING DEADLOCKS

We can deal with the deadlock problem in one of three ways.

1..We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions for deadlock cannot hold. Deadlock avoidance requires that the operating system be given, in advance, additional information regarding the resources a process will request and use during its lifetime.

2..The second way of handling deadlocks is to allow the system to enter a deadlock state, detect it, and recover. In this environment the system can provide an algorithm which examines the state of the system to determine whether a deadlock has occurred; if a deadlock has occurred, we should have an algorithm to recover from it.

3..We can ignore the deadlock problem and assume that deadlocks never occur; in case a deadlock does occur, the system must be restarted manually.

DEADLOCK PREVENTION


In deadlock prevention we ensure that at least one of the necessary conditions for deadlock cannot occur. It can be done in the following ways.

1..Mutual Exclusion – The mutual exclusion condition must hold for non-sharable resources; sharable resources do not require mutually exclusive access and hence cannot be involved in a deadlock.

2..Hold and wait – To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that whenever a process requests a resource it does not hold any other resources. This can be done in two ways. One way is that each process must request and be allocated all its resources before it begins its execution. Another way is that a process may request some resources and use them; in case it requires more resources, it must first release all the resources currently allocated to it.

In both ways we see that resources are not properly utilized and starvation may occur.

3..No Preemption – To ensure that this condition does not hold, we use the following protocols.

If a process is holding some resources and requests another resource which cannot be immediately granted to it, then all resources currently held by the process are preempted.

Alternatively, if a process requests some resources and they are not available, we check whether they are allocated to some other process that is itself waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process.

4..Circular wait – One way to ensure that circular wait never holds is to impose a requirement that each process requests resources in an increasing order of enumeration.

DEADLOCK AVOIDANCE

In the deadlock avoidance method we require additional information about how resources will be requested by the processes before they are executed. With complete knowledge of the sequence of requests and releases of resources by each process, we can decide, for each request, whether or not to grant the resource, so as to avoid a possible future deadlock.

Various deadlock avoidance algorithms have been proposed. In the simplest case we require that each process declare the maximum number of resources of each type that it needs; with this we can construct an algorithm that ensures that the system will never enter a deadlock state. A deadlock avoidance algorithm dynamically examines the resource allocation state to ensure that a circular wait condition can never occur.

SAFE STATE


A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi may still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. In this situation:

- If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
- When Pj has finished, Pi can obtain the needed resources, execute, return the allocated resources and terminate.
- When Pi terminates, Pi+1 can obtain its needed resources, and so on.

A safe state is not a deadlock state, while a deadlock state is an unsafe state. Not all unsafe states are deadlocks; however, an unsafe state may lead to a deadlock.

Eg: Tape drives = 12

Process   Max need   Currently allocated
P0        10         5
P1        4          2
P2        9          2
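A short check using the numbers above: 5 + 2 + 2 = 9 drives are allocated, so 12 - 9 = 3 are free. The sequence <P1, P0, P2> is therefore safe: P1 may need at most 4 - 2 = 2 more drives (2 <= 3); after P1 finishes, 3 + 2 = 5 drives are free, enough for P0's remaining 10 - 5 = 5; after P0 finishes, 10 drives are free, which covers P2's remaining 9 - 2 = 7.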

RESOURCE ALLOCATION GRAPH ALGORITHM

The resource allocation graph can be used for deadlock avoidance by introducing a new type of edge called a claim edge. A claim edge Pi->Rj indicates that process Pi may request resource Rj in the future; it is represented by a dashed line. When process Pi actually requests resource Rj, the claim edge Pi->Rj is converted to a request edge. Similarly, when a resource Rj is released by Pi, the assignment edge Rj->Pi is reconverted to a claim edge Pi->Rj.

Suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge Pi->Rj to an assignment edge Rj->Pi does not result in the formation of a cycle in the resource allocation graph. If no cycle exists, then the allocation of the resource will leave the system in a safe state. If a cycle is found, then the allocation would put the system in an unsafe state; hence process Pi must wait for its request to be satisfied. Consider the following example.


BANKERS ALGORITHM :

The banker's algorithm is applicable to a system with multiple instances of each resource type. It is called the banker's algorithm because it could be used in a banking system to ensure that the bank never allocates its available cash in such a way that it can no longer satisfy the needs of its customers. When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need; this number may not exceed the total number of resources in the system. When a process requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise the process must wait until some other process releases enough resources.

DATA STRUCTURES FOR THE BANKER'S ALGORITHM

Let n = number of processes and m = number of resource types.

Available: a vector of length m. If Available[j] = k, there are k instances of resource type Rj available.

Max: an n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.

Allocation: an n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.

Need: an n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

Need[i,j] = Max[i,j] - Allocation[i,j]

SAFETY ALGORITHM


The algorithm for finding out whether or not a system is in a safe state can be described as follows:

1..Let Work and Finish be vectors of length m and n respectively. Initialize
Work = Available
Finish[i] = false for i = 1, 2, 3, ..., n

2..Find an i such that both:
a) Finish[i] = false
b) Need_i <= Work
If no such i exists, go to step 4.

3..Work = Work + Allocation_i
Finish[i] = true
Go to step 2.

4..If Finish[i] == true for all i, then the system is in a safe state.
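A self-contained C sketch of this safety algorithm is given below. The five-process, three-resource sample data is illustrative only and is not taken from the notes.

#include <stdio.h>

#define N 5   /* number of processes      */
#define M 3   /* number of resource types */

/* Returns 1 if the current state is safe, 0 otherwise. */
int is_safe(int available[M], int alloc[N][M], int need[N][M])
{
    int work[M], finish[N] = {0};
    for (int j = 0; j < M; j++) work[j] = available[j];

    for (int pass = 0; pass < N; pass++) {
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                          /* Pi can finish: reclaim its resources */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = 1;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) return 0;
    return 1;
}

int main(void)
{
    int available[M] = {3, 3, 2};
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[N][M]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int need[N][M];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */
    printf("state is %s\n", is_safe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}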

RECOVERY FROM DEADLOCK

When a detection algorithm determines that a deadlock exists, there are several ways to recover from the deadlock.

1..Process termination: To eliminate deadlocks by aborting a process, two methods can be used.

i) Abort all deadlocked processes: This method will break the deadlock cycle, but at great cost.

ii) Abort one process at a time until the deadlock cycle is eliminated: This method has the drawback that after each process is aborted, the deadlock detection algorithm must be invoked again to determine whether any processes are still deadlocked.

The following factors are used to select the process which is to be aborted:

- Priority of the process.
- How long the process has computed, and how much longer it needs until completion.
- Resources the process has used.
- Resources the process needs to complete.
- How many processes will need to be terminated.
- Whether the process is interactive or batch.

2..Resource preemption:

To eliminate a deadlock using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.


The following issues arise:

i) Selecting a victim – which resources and which processes are to be preempted so as to minimize the cost.

ii) Rollback – after preempting a resource from a process, we must roll the process back to some safe state and restart it from that state.

iii) Starvation – it may happen that the same process is victimized every time, which may lead to starvation. Hence we must ensure that a process can be picked as a victim only a small number of times.


Unit - VI
INPUT OUTPUT SYSTEM

Input-Output Hardware:

A typical bus architecture is shown in the diagram above: a PCI bus connects the processor and memory subsystems to the fast devices, and an expansion bus connects the slow I/O devices. A controller is the hardware that operates a port, a bus, or a device. The processor can address a device controller's I/O registers in two ways (both are illustrated in the sketch after the list):

1. Direct (port-mapped) I/O.
2. Memory-mapped I/O.
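To make the distinction concrete, the fragment below sketches both styles in C. The port number, the register address and the outb helper are invented for the illustration and do not refer to any particular device.

#include <stdint.h>

/* Direct (port-mapped) I/O: the controller register lives in a separate I/O
   address space and is reached through special instructions, wrapped here in
   a hypothetical outb() helper. */
extern void outb(uint16_t port, uint8_t value);

static void direct_io_write(uint8_t value)
{
    outb(0x3F8, value);                 /* 0x3F8 is just an example port number */
}

/* Memory-mapped I/O: the controller register is mapped into the normal
   address space and is accessed with ordinary loads and stores. */
static void memory_mapped_write(uint8_t value)
{
    volatile uint8_t *reg = (volatile uint8_t *)0xFE001000;  /* assumed address */
    *reg = value;                       /* looks like a plain memory store      */
}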

Various categories of I/O devices: The following are the various categories of I/O devices.

I. Character Stream or Block: Character-stream devices transfer data byte by byte, while block devices transfer data block by block.

II. Sequential or Random Access: Sequential devices transfer data in sequential order, while random-access devices can read or write data in any order.

III. Sharable or Dedicated Device: A sharable device can be used concurrently by many processes or threads, but a dedicated device can be used by only one process or thread at a time.


IV. Synchronous or Asynchronous: A synchronous device transfers data with predictable response times, while an asynchronous device exhibits irregular response times.

V. Speed of Operation: The transfer rate can range from a few bytes per second to a few gigabytes per second.

VI. Read-Write, Read Only, Write Only: These devices can perform data transfer in only one direction or in both directions.

Methods of Handling I/O Transfers:

1) Polling: Polling a device usually means reading its status register repeatedly until the device becomes ready. A polling-based program continuously polls whether or not data are ready to be received or transmitted. The interaction between the host and the controller is done through handshaking signals, as given by the following steps (a C sketch of the host's side follows the steps):

1. The host repeatedly reads the busy bit until that bit becomes clear.
2. The host sets the write bit in the command register and writes a byte into the data-out register.
3. The host sets the command-ready bit.
4. When the controller notices that the command-ready bit is set, it sets the busy bit.
5. The controller reads the command register and sees the write command. It reads the data-out register to get the byte and performs the I/O to the device.
6. The controller clears the command-ready bit, clears the error bit to indicate a successful transfer, and clears the busy bit to indicate completion of the data transfer.

The drawback of polling is that it gives rise to busy waiting.
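The following C fragment is a minimal sketch of the host's side of this handshake. It assumes memory-mapped status, command and data-out registers at invented addresses with invented bit names; it is not the interface of any real controller.

#include <stdint.h>

/* Assumed memory-mapped registers and bit positions (illustrative only). */
#define STATUS_REG   ((volatile uint8_t *)0xFE002000)
#define COMMAND_REG  ((volatile uint8_t *)0xFE002001)
#define DATA_OUT_REG ((volatile uint8_t *)0xFE002002)

#define BUSY_BIT          0x01
#define COMMAND_READY_BIT 0x02
#define WRITE_BIT         0x04

static void polled_write_byte(uint8_t byte)
{
    /* Step 1: busy-wait until the controller clears the busy bit. */
    while (*STATUS_REG & BUSY_BIT)
        ;                               /* this loop is the busy wait */

    /* Step 2: select the write command and place the byte in data-out. */
    *COMMAND_REG  = WRITE_BIT;
    *DATA_OUT_REG = byte;

    /* Step 3: raise command-ready; steps 4-6 are then carried out by the
       controller hardware. */
    *COMMAND_REG |= COMMAND_READY_BIT;
}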

2) Interrupts: In interrupt-driven I/O the CPU hardware senses the interrupt-request line after executing every instruction. When the CPU detects that the controller has asserted a signal on the interrupt-request line, the CPU saves the current value of the instruction pointer and jumps to the interrupt-handler routine. The interrupt handler determines the cause of the interrupt and services it; after servicing, control returns to the main program. This sequence forms the interrupt-driven I/O cycle (a skeletal handler is sketched below).
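As a rough illustration of the software side, the fragment below sketches how a driver might install and write a handler. The register_interrupt_handler helper, the IRQ number, the register addresses and the bit names are all assumptions made for this sketch, not a real kernel API.

#include <stdint.h>

/* Hypothetical kernel helper that attaches a handler to an IRQ line. */
extern void register_interrupt_handler(int irq, void (*handler)(void));

#define DEVICE_IRQ        5                                  /* assumed IRQ line */
#define DEVICE_STATUS_REG ((volatile uint8_t *)0xFE003000)   /* assumed address  */
#define DATA_IN_REG       ((volatile uint8_t *)0xFE003001)
#define IRQ_PENDING_BIT   0x01

static void device_interrupt_handler(void)
{
    /* Determine the cause of the interrupt and service it. */
    if (*DEVICE_STATUS_REG & IRQ_PENDING_BIT) {
        uint8_t byte = *DATA_IN_REG;        /* consume the byte the device delivered */
        (void)byte;                         /* ...hand it to the waiting process...  */
        *DEVICE_STATUS_REG &= (uint8_t)~IRQ_PENDING_BIT;  /* acknowledge the interrupt */
    }
    /* On return, the CPU restores the saved instruction pointer and resumes
       the interrupted program. */
}

static void driver_init(void)
{
    register_interrupt_handler(DEVICE_IRQ, device_interrupt_handler);
}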


Direct Memory Access:

In a DMA transfer a special-purpose processor called the DMA controller is used. To initiate a DMA transfer, the host writes a DMA command block into memory. This block contains a pointer to the source, a pointer to the destination, and a count of the number of bytes to be transferred. The CPU writes the address of this command block to the DMA controller and then carries on with other work. The DMA controller performs the required transfer without any interference from the main CPU. Handshaking between the DMA controller and the device controller is performed through the DMA-request and DMA-acknowledge wires.
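A DMA command block can be pictured as a small structure like the one below. The field names and the register through which its address is handed to the controller are assumptions chosen for this sketch, not the layout used by any particular controller.

#include <stdint.h>

/* Hypothetical layout of a DMA command block written into memory by the host. */
struct dma_command_block {
    uint64_t source;        /* address to read from        */
    uint64_t destination;   /* address to write to         */
    uint32_t byte_count;    /* number of bytes to transfer */
};

/* Assumed register through which the host gives the block's address to the
   DMA controller. */
#define DMA_COMMAND_ADDR_REG ((volatile uint64_t *)0xFE004000)

static void start_dma(struct dma_command_block *blk)
{
    /* The CPU only points the controller at the command block; the controller
       then performs the transfer while the CPU carries on with other work. */
    *DMA_COMMAND_ADDR_REG = (uint64_t)(uintptr_t)blk;
}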


Kernel I/O Structure:

The above diagram shows the I/O-related portions of the kernel. The purpose of the device-driver layer is to hide the differences among device controllers from the I/O subsystem of the kernel. By making the I/O subsystem independent of the hardware, the job of the operating system developers becomes much simpler. It also benefits hardware manufacturers: they can either design new devices to be compatible with an existing host controller, or write device drivers to interface the new hardware to the operating system.

The kernel provides the following services:

1. I/O Scheduling: Scheduling is implemented using a wait queue of requests for each device. The I/O scheduler rearranges the wait queue to improve overall efficiency and the average response time.

2. Buffering: A buffer is a memory area which stores data while it is being transferred between two devices. Buffering is used to cope with the speed mismatch between devices and with devices having different data-transfer sizes (a small double-buffering sketch follows this list).

3. Caching: A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original.

4. Spooling: A spool is a buffer which holds output for a device, such as a printer, that cannot accept interleaved data streams. For example, printer spooling is used to coordinate concurrent output.

5. Error Handling: The operating system can provide protection against hardware and application errors so that they do not lead to complete system failure.
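To make the buffering idea concrete, the sketch below shows a classic double-buffering pattern in C: while one buffer is being drained, the other is refilled. The buffer size and the device_fill/consumer_drain helpers are placeholders invented for the example.

#include <stddef.h>
#include <stdint.h>

#define BUF_SIZE 4096                       /* assumed transfer size */

/* Hypothetical producer and consumer routines. */
extern size_t device_fill(uint8_t *buf, size_t len);        /* returns bytes read */
extern void   consumer_drain(const uint8_t *buf, size_t len);

static void double_buffer_copy(void)
{
    static uint8_t buffers[2][BUF_SIZE];
    int active = 0;

    size_t n = device_fill(buffers[active], BUF_SIZE);
    while (n > 0) {
        int next = 1 - active;
        /* In a real kernel the next fill would proceed concurrently (e.g. via
           DMA) while the consumer drains the other buffer; here the two steps
           are simply shown one after the other. */
        size_t next_n = device_fill(buffers[next], BUF_SIZE);
        consumer_drain(buffers[active], n);
        active = next;
        n = next_n;
    }
}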


SECURITY

Security deals with protecting the system from the external environment within which the system operates. We say that a system is secure if its resources are used and accessed as intended.

Security violations of the system can be categorized as accidental or intentional (malicious).

The following are the forms of malicious misuse:

1. Unauthorized reading of data

2. Unauthorized modification of data

3. Unauthorized destruction of data

4. Denial of service

To protect the system we must take security measures at four levels.

1. Physical - The place containing the computer systems must be physically secured against intruders.

2. Human - Users must be screened carefully to reduce the chance of authorizing a user who then gives access to an intruder.

3. Network - Security measures at this level are required so that harmful programs may not create havoc in the computer system.

4. Operating System - The system must protect itself from accidental or purposeful security threats.

PROGRAM THREATS

1. Trojan Horse - A code segment which misuses its environment is called a Trojan horse. Many systems have mechanisms that allow programs written by users to be executed by other users. If these programs are executed in a domain that provides the access rights of the executing user, the other users may misuse those rights.


2. Trap door - The designer of a program or system may leave a hole in the software which only the designer is capable of using. This type of security threat is called a trap door. A trap door could even be included in a compiler, so that it is inserted into the code the compiler generates. Trap doors create a serious problem because, to detect them, we have to analyze all the source code of all components of the system; since software systems may contain millions of lines of code, this analysis is not done frequently.

3. Stack and buffer overflow - This attack is the most common way for an attacker from outside the system, over a network, to gain unauthorized access to the target system. This type of attack exploits a bug in a program by sending more data than the program was expecting. The attacker determines the weak point and writes a program to do the following (a small vulnerable C fragment is sketched after the list):

(i) Overflow an input field until it writes into the stack.

(ii) Overwrite the current return address on the stack with the address of the exploit code loaded in the next step.

(iii) Write a simple set of code for the next space in the stack which includes the commands which the attacker wants to execute.
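As an illustration of the kind of bug such an attack exploits, the fragment below copies user input into a fixed-size stack buffer with no bounds check; the function name and buffer size are made up for the sketch.

#include <stdio.h>
#include <string.h>

/* Vulnerable: an argument longer than 16 bytes overruns buf and can
   overwrite the saved return address on the stack. */
static void process_argument(const char *arg)
{
    char buf[16];
    strcpy(buf, arg);             /* no length check: the overflow happens here */
    printf("got: %s\n", buf);
}

/* A safer version would bound the copy, for example:
   strncpy(buf, arg, sizeof buf - 1); buf[sizeof buf - 1] = '\0';             */

int main(int argc, char *argv[])
{
    if (argc > 1)
        process_argument(argv[1]);
    return 0;
}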

SYSTEM THREATS

1. Worms - A worm is a process which uses the spawn mechanism to degrade system performance. The worm spawns copies of itself, using up system resources and locking out system use by all other processes. On computer networks worms may reproduce themselves and shut down the entire network. The most famous worm was developed by Robert Morris; it was a self-replicating program which gained easy access to machines on a network without checking passwords.

2. Viruses - A virus is a form of computer attack designed to spread into other programs so as to create havoc in the system; this may include modifying or destroying files, causing system crashes and making programs malfunction. A virus is a fragment of code embedded in a legitimate (legal) program. Viruses are usually spread by users downloading infected programs or exchanging disks which are infected.

A common form of virus transmission is through the exchange of MS Office files which contain macros that execute automatically. Since these programs run under the user's own account, the macros can run without any interruption.


3. Denial of service - These attacks are generally network based and fall into two categories. The first is an attack which uses up so many resources that no useful work can be done. For example, clicking on a web site could download a Java applet which uses all the available CPU time.

The second category involves disrupting the network so that it can no longer provide connections. Such attacks are usually stopped at the network level until the operating system can be updated.

INTRUSION DETECTION

Intrusion detection is used to detect attempted or successful intrusions into computer systems and to initiate appropriate responses to them. Intrusion detection systems differ along a wide array of dimensions, some of which are as follows:

1. The time at which detection occurs.

2. The types of input examined to detect intrusive activity. These could include shell commands, process system calls, and network packet headers.

3. The range of response capabilities. The response may include alerting an administrator of the potential intrusion or somehow killing a process engaged in intrusive activity. In recent systems the activity of the intruder may be diverted to a trap.

TYPES OF INTRUSION DETECTION

1. Signature-based detection - In this type of detection, system input or network traffic is examined for specific behavior patterns known to indicate attacks. A simple example of signature-based detection is monitoring for multiple failed attempts to log into an account, which indicates that someone is trying to guess the password for that account (a small sketch of this check is given after the next paragraph).

2. Anomaly detection - This refers to techniques which detect anomalous behavior within computer systems. An example of anomaly detection is monitoring the system calls of a process to detect whether its system-call behavior deviates from its normal pattern. Another example is monitoring shell commands to detect anomalous login times for a user. Signature-based detection attempts to characterize dangerous behaviors and detects when one of those behaviors occurs, while anomaly detection attempts to characterize normal behavior and detects when something other than normal occurs. Anomaly detection can detect previously unknown methods of intrusion, while signature-based detection will identify only known attacks that can be codified.
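As a toy illustration of the signature-based example above, the fragment below counts consecutive failed logins per account and raises an alert once a threshold is reached. The threshold, the record structure and the alert message are assumptions made for this sketch.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_ACCOUNTS      64
#define FAILURE_THRESHOLD  5        /* assumed: alert after 5 consecutive failures */

struct login_record {
    char name[32];
    int  consecutive_failures;
};

static struct login_record accounts[MAX_ACCOUNTS];
static int account_count;

static struct login_record *lookup(const char *name)
{
    for (int i = 0; i < account_count; i++)
        if (strcmp(accounts[i].name, name) == 0)
            return &accounts[i];
    if (account_count == MAX_ACCOUNTS)
        return NULL;
    struct login_record *r = &accounts[account_count++];
    strncpy(r->name, name, sizeof r->name - 1);
    r->name[sizeof r->name - 1] = '\0';
    r->consecutive_failures = 0;
    return r;
}

/* Call this for every login attempt observed in the audit trail. */
static void record_login_attempt(const char *name, bool success)
{
    struct login_record *r = lookup(name);
    if (r == NULL)
        return;
    if (success) {
        r->consecutive_failures = 0;
    } else if (++r->consecutive_failures >= FAILURE_THRESHOLD) {
        /* The "signature": repeated failures suggest password guessing. */
        printf("ALERT: possible password-guessing attack on account %s\n", name);
    }
}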


COMPUTER SECURITY CLASSIFICATION

Four divisions of security are specified: A, B, C, and D.

1. Level D - The lowest-level classification is division D, which provides minimal protection and consists of only one class. It is used for systems which fail to meet the requirements of any of the other security classes.

2. Level C - The next level of security provides protection and accountability of users and their actions through the use of audit capabilities. Division C has two classes, C1 and C2. The C1 class provides some form of controls which allow users to protect private information and keep other users from reading or destroying their data. The TCB (Trusted Computing Base) of a C1 system controls access between users and files by allowing the user to specify and control the sharing of objects by authorized users or defined groups. The TCB also protects the authentication data so that they are inaccessible to unauthorized users.

The C2 class adds one more level to the C1 class through individual-level access control: the access rights of a file can be specified down to the level of a single individual. The TCB protects itself from modification of its code or data structures. Also, no information produced by a previous user is available to another user.

3. Level B - This has all the properties of a class C2 system, and in addition a sensitivity label is attached to each object. It consists of three classes: B1, B2 and B3. A B1-class TCB maintains the security label of each object in the system; this label is used for decisions regarding mandatory access control. For example, a user at the confidential level cannot access a file at the more sensitive secret level. A B2-class system extends the sensitivity labels to every system resource, such as storage objects. A B3-class system allows the creation of access-control lists which denote users or groups not granted access to a given named object. The TCB also contains a mechanism to monitor events which may indicate a violation of security; if such an event occurs, the security administrator is notified and the event is terminated.

4. Level A - The highest-level classification is level A. A class A1 system is functionally almost equivalent to a B3 system, but it also uses formal design specifications and verification techniques, granting a high degree of assurance that the TCB has been implemented correctly.