
CHAPTER READING TASK: OPERATING SYSTEM

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

SEMESTER 1 YEAR 2

BITS 1213 - OPERATING SYSTEM

LIST OF MEMBERS :

NO NAME MATRIC NO.

1 NUR ATIQAH BT MOHD ROSLI B031210097

2 SU’AIDAH BT MOKHTAR B031210193

3 SITI NADIRAH BT MINHAT B0131210037

4 NOR HADHIRAH BT SHERIFF B031210041

5 NURUL NAJEEHA BT ANNUAR B031210270

CONTENT

PROCESS
    What is a process
    Process state
    Process creation
    Process termination

THREAD
    Introduction
    How it works
    Advantages and disadvantages
    Threading issues
    Types of thread

SYMMETRIC MULTIPROCESSING
    Description and process
    How it works
    Diagram of multiprocessing

MICROKERNEL
    Introduction
    Descriptions
    Features
    Advantages and disadvantages
    Diagram of microkernel

PROCESS IN OPERATING SYSTEM

I. WHAT IS PROCESS?

A process is a program in execution; its execution must progress in a sequential fashion. A task is the execution of an individual program.

A process includes:

– a program counter – specifying the next instruction to be executed;

– a stack – containing temporary data such as return addresses;

– a data section – containing global variables.

Figure 3.1

Process memory is divided into four sections as shown in Figure 3.1.

Text - Comprises the compiled program code, read in from non-volatile storage when the program is launched.


Data - Stores global and static variables, allocated and initialized prior to executing main.

Heap - Dynamic memory allocation, managed via calls to new, delete, malloc, free, etc.

Stack - Local variables. Space on the stack is reserved for local variables when they are declared (at function entrance or elsewhere, depending on the language), and the space is freed when the variables go out of scope. Note that the stack is also used for function return values, and the exact mechanisms of stack management may be language-specific.

Note that the stack and the heap start at opposite ends of the process's free space and grow towards each other. If they should ever meet, either a stack overflow error will occur, or a call to new or malloc will fail due to insufficient available memory.

When processes are swapped out of memory and later restored, additional information must also be stored and restored. Key among this information are the program counter and the values of all program registers.

II. PROCESS STATE

There are five states in a process, as shown in Figure 3.2 below (a system may have other states besides the ones listed):

New - The process is in the stage of being created.

Ready - The process has all the resources it needs to run, but the CPU is not currently working on this process's instructions.

Running - The CPU is working on this process's instructions.

Waiting - The process cannot run at the moment because it is waiting for some resource to become available or for some event to occur. For example, the process may be waiting for keyboard input, a disk access request, an inter-process message, a timer to go off, or a child process to finish.

Terminated - The process has completed.

Figure 3.2

Process Control Block (PCB)


Figure 3.3

A PCB contains the following information:

– Process state: new, ready, …

– Program counter: indicates the address of the next instruction to be executed for this process.

– CPU registers: includes accumulators, stack pointers, …

– CPU scheduling information: includes process priority and pointers to scheduling queues.

– Memory-management information: includes the values of the base and limit registers (protection), …

– Accounting information: includes the amount of CPU and real time used, account numbers, process numbers, …

– I/O status information: includes the list of I/O devices allocated to this process, a list of open files, …
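As a rough sketch, the PCB fields listed above can be modelled as a record type. This is an illustration only: the field names below are made up for the example and are not taken from any real kernel's PCB layout.

```python
from dataclasses import dataclass, field

# A toy Process Control Block mirroring the fields described above.
# Field names are illustrative, not taken from any real kernel.
@dataclass
class PCB:
    pid: int
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # CPU registers
    priority: int = 0              # CPU scheduling information
    base: int = 0                  # memory management: base register
    limit: int = 0                 # memory management: limit register
    cpu_time_used: float = 0.0     # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"    # the dispatcher would update this on each transition
```

A real kernel stores PCBs in kernel memory and updates them on every context switch; this record merely shows what kind of bookkeeping is involved.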

CPU Switch From Process to Process


Figure 3.4

Process Scheduling Queue

The two main objectives of the process scheduling system are to keep the CPU busy at all times and to deliver "acceptable" response times for all programs, particularly for interactive ones.

The process scheduler must meet these objectives by implementing suitable policies for swapping processes in and out of the CPU.

(Note that these objectives can conflict. In particular, every time the system steps in to swap processes, it takes up CPU time to do so, which is thereby "lost" from doing any useful productive work.)


Figure 3.5 - Ready Queue And Various I/O Device Queues

All processes are stored in the job queue.

Processes in the Ready state are placed in the ready queue.

Processes waiting for a device to become available or to deliver data are placed in device queues. There is generally a separate device queue for each device.

Schedulers

A long-term scheduler is typical of a batch system or a very heavily loaded system. It runs infrequently (such as when one process ends, selecting another to be loaded in from disk in its place) and can afford to take the time to implement intelligent and advanced scheduling algorithms.


The short-term scheduler, or CPU scheduler, runs very frequently (on the order of every 100 milliseconds) and must very quickly swap one process out of the CPU and swap another one in.

Some systems also employ a medium-term scheduler. When system load gets high, this scheduler swaps one or more processes out of the ready queue for a few seconds, in order to allow smaller, faster jobs to finish quickly and clear the system. See the differences in Figures 3.7 and 3.8 below.

An efficient scheduling system will select a good mix of CPU-bound and I/O-bound processes.

Figure 3.6 - Queueing-diagram representation of process scheduling


III. PROCESS CREATION

• A process may create several new processes, via a create-process system call, during execution.

• A parent process creates child processes, which in turn create other processes, forming a tree of processes.

• Resource sharing (CPU time, memory, files, I/O devices, …) can take several forms:

– Parent and children share all resources.

– Children share a subset of the parent's resources.

– Parent and child share no resources.

• When a process creates a new process, two possibilities exist in terms of execution:

– Parent and children execute concurrently.

– Parent waits until children terminate.

• There are also two possibilities in terms of the address space of the new process:

– The child is a duplicate of the parent.

– The child has a new program loaded into it.

• UNIX examples:

– The fork system call creates a new process.

– The execve system call is used after a fork to replace the process's memory space with a new program.
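The UNIX fork-then-exec pattern above can be sketched in Python through the os module. This is a POSIX-only sketch (os.fork is unavailable on Windows), and it runs the Python interpreter itself as the "new program" so the example stays self-contained.

```python
import os
import sys

def run_in_child(argv):
    """Fork a child that replaces itself with a new program via execv,
    while the parent waits for it to finish (POSIX only)."""
    pid = os.fork()                    # create-process system call
    if pid == 0:
        # Child: a duplicate of the parent until execv replaces its
        # memory space with the new program.
        try:
            os.execv(argv[0], argv)    # never returns on success
        finally:
            os._exit(127)              # exec failed: terminate the child
    # Parent and child run concurrently here; this parent simply
    # waits until the child terminates and collects its status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

# Run the interpreter with a trivial program that exits successfully.
code = run_in_child([sys.executable, "-c", "pass"])
```

Because the parent calls waitpid, it also illustrates the "parent waits until children terminate" execution option listed above.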


IV. PROCESS TERMINATION

• A process executes its last statement and asks the operating system to delete it by using the exit system call.

– Output data is returned from the child to the parent via the wait system call.

– The process's resources are deallocated by the operating system.

• A parent may terminate the execution of child processes via the abort system call for a variety of reasons, such as:

– The child has exceeded its allocated resources.

– The task assigned to the child is no longer required.

– The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
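The exit/wait handshake described above can be sketched as follows (POSIX only; the exit status 7 is an arbitrary value chosen for the example):

```python
import os

pid = os.fork()
if pid == 0:
    os._exit(7)                 # child: exit system call with status 7

# Parent: wait returns (pid, status); the status carries the
# child's exit code back to the parent.
child_pid, status = os.wait()
exit_code = os.WEXITSTATUS(status)   # output data from child to parent
```

After wait returns, the operating system has deallocated the child's resources and removed its PCB; the exit code is the last piece of data the child passes back.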

Interprocess Communications (IPC)

A mechanism for processes to communicate and to synchronize their actions.

• IPC is best provided by message-passing systems.

• An IPC facility provides two operations:

– send(message) – the message size may be fixed or variable

– receive(message)

• If processes P and Q wish to communicate, they need to:

– establish a communication link between them

– exchange messages via send/receive

• Processes can communicate in two ways:

– Direct communication

– Indirect communication


Advantages of processes

Information sharing – such as shared files.

Computation speed-up – to run a task faster, we break it into subtasks, each of which executes in parallel. This speed-up can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).

Modularity – constructing a system in a modular fashion (i.e., dividing the system functions into separate processes).

Convenience – one user may have many tasks to work on at one time. For example, a user may be editing, printing, and compiling in parallel.

Disadvantages of processes

Inconvenient for the user, and performance can be poor.

Complexity in the OS.

Processes can misbehave:

– By avoiding all traps and performing no I/O, a process can take over the entire machine.

– The only solution: reboot!

It is difficult to set up a process correctly and to express all possible options:

– Process permissions, where to direct I/O, environment variables.

– Example: Windows NT has a process-creation call with 10 arguments.


THREADS

INTRODUCTION

What is a thread in an operating system?

A thread is a flow of execution through the process code, with its own program counter, system registers and stack. A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating-system performance by reducing overhead; in other respects a thread behaves like a classical process.

A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready or Terminated). Each thread has its own stack.

DESCRIPTION OF TOPIC


Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of single-threaded and multithreaded processes.

HOW IT WORKS

Each process has its own memory space. When Process 1 accesses some given memory location, say 0x8000, that address is mapped to some physical memory address. But from Process 2, location 0x8000 will generally refer to a completely different portion of physical memory. A thread is a subdivision that shares the memory space of its parent process. So when either Thread 1 or Thread 2 of Process 1 accesses "memory address 0x8000", they are referring to the same physical address. Threads belonging to a process usually share a few other key resources as well, such as their working directory, environment variables, file handles, etc.

On the other hand, each thread has its own private stack and registers, including the program counter. These are essentially the things that threads need in order to be independent. Depending on the OS, threads may have some other private resources too, such as thread-local storage (effectively, a way of referring to "variable number X", where each thread has its own private value of X). The OS will generally attach a bit of "housekeeping" information to each thread, such as its priority and state (running, waiting for I/O, etc.).
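The shared-memory behaviour described above can be demonstrated with Python's threading module: all threads update the same object, while each thread's function arguments and locals live on its own private stack. The lock is needed because, as the next section warns, inadvertent concurrent modification of shared data is dangerous.

```python
import threading

counter = {"value": 0}          # shared: every thread sees the same object
lock = threading.Lock()

def worker(n):
    # n and the loop variable are locals on this thread's private stack.
    for _ in range(n):
        with lock:              # serialize access to the shared data
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All four threads incremented the same counter: 4 * 1000 = 4000.
```

Had the threads been separate processes instead, each would have incremented its own private copy of the counter, and the parent's copy would still be 0.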

ADVANTAGES AND DISADVANTAGES

Advantages:

1. Responsiveness - one thread may provide a rapid response while other threads are blocked or slowed down doing intensive calculations.

2. Resource sharing - by default, threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.

3. Economy - creating and managing threads is much faster than performing the same tasks for processes.

4. Scalability, i.e. utilization of multiprocessor architectures - a single-threaded process can only run on one CPU, no matter how many are available, whereas the execution of a multithreaded application may be split amongst the available processors.

5. User-level threads are fast to create and manage.

6. A user-level thread can run on any operating system.

Disadvantages:

1. Global variables are shared between threads; inadvertent modification of shared variables can be disastrous.

2. Many library functions are not thread-safe.

3. If one thread crashes, the whole application crashes.

4. A memory crash in one thread kills the other threads sharing the same memory, unlike with processes.

5. In a typical operating system, most system calls are blocking.

6. Transfer of control from one thread to another within the same process requires a mode switch to the kernel.

THREADING ISSUES

1- The semantics of the fork() and exec() system calls

This is system-dependent. If the new process execs right away, there is no need to copy all the other threads. If it doesn't, then the entire process should be copied. Many versions of UNIX provide multiple versions of the fork call for this purpose.

2- Signal handling

i- When a multithreaded process receives a signal, there are four major options for which thread the signal is delivered to:

Deliver the signal to the thread to which the signal applies.

Deliver the signal to every thread in the process.

Deliver the signal to certain threads in the process.

Assign a specific thread to receive all signals for the process.

ii- The best choice may depend on which specific signal is involved.

iii- Windows does not support signals, but they can be emulated using Asynchronous Procedure Calls (APCs). APCs are delivered to specific threads, not processes.

iv- Signals may be synchronous or asynchronous.
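CPython is one concrete example of the fourth option above: it always delivers signals to a designated thread (the main thread), no matter which thread was running when the signal arrived. A small sketch, using SIGINT because it exists on all platforms Python supports:

```python
import signal
import threading

received_in = []

def handler(signum, frame):
    # Record which thread the signal handler actually ran in.
    received_in.append(threading.current_thread().name)

signal.signal(signal.SIGINT, handler)   # handlers may only be set here,
                                        # in the main thread
signal.raise_signal(signal.SIGINT)      # deliver the signal to the process
# CPython runs the pending handler in the main thread before
# executing the next statement.
```

Even if a worker thread had triggered the signal, the handler would still run in the main thread; worker threads that need to react must be notified by other means (flags, queues, events).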


3- Thread cancellation of a target thread

Threads that are no longer needed may be cancelled by another thread in one of two ways:

Asynchronous cancellation cancels the thread immediately.

Deferred cancellation sets a flag indicating that the thread should cancel itself when it is convenient. It is then up to the cancelled thread to check this flag periodically and exit cleanly when it sees the flag set.
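Deferred cancellation is easy to sketch with a flag that the target thread polls at safe points; Python's threading.Event serves as the flag here (Python offers no asynchronous cancellation at all, which is one reason the deferred style is the portable one):

```python
import threading
import time

cancel_requested = threading.Event()   # the "please cancel yourself" flag
progress = []

def worker():
    while True:
        if cancel_requested.is_set():  # check the flag periodically
            progress.append("exited cleanly")
            return                     # exit at a point of our choosing
        time.sleep(0.01)               # ... one unit of real work ...

t = threading.Thread(target=worker)
t.start()
cancel_requested.set()                 # another thread requests cancellation
t.join()                               # the target exits at the next check
```

The target thread gets to finish its current unit of work and release any resources before exiting, which is exactly what asynchronous cancellation cannot guarantee.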

4- Thread-local storage

i- Most data is shared among threads, and this is one of the major benefits of using threads in the first place.

ii- Sometimes, however, threads need thread-specific data as well.

iii- Most major thread libraries (pthreads, Win32, Java) provide support for thread-specific data, known as thread-local storage or TLS.

iv- Note that this is more like static data than local variables, because it does not cease to exist when the function ends.
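Python's threading.local is one such TLS facility: attributes set on it are the "variable X" from the earlier description, with each thread seeing only its own value.

```python
import threading

tls = threading.local()     # each thread gets its own copy of attributes
seen = {}

def worker(value):
    tls.x = value           # this thread's private "variable X"
    seen[value] = tls.x     # read back: we only ever see our own copy

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each thread read back exactly the value it stored: no thread
# observed another thread's copy of x.
```

Contrast this with the shared dictionary seen, which every thread writes into: sharing is the default, and TLS is the explicit opt-out.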

5- Scheduler activations

i- Many implementations of threads provide a virtual processor as an interface between the user thread and the kernel thread, particularly for the many-to-many or two-tier models.

ii- This virtual processor is known as a "lightweight process" (LWP).

iii- There is a one-to-one correspondence between LWPs and kernel threads.

iv- The number of kernel threads available, and hence the number of LWPs, may change dynamically.

v- The application (the user-level thread library) maps user threads onto the available LWPs.

vi- Kernel threads are scheduled onto the real processors by the OS.

vii- The kernel communicates to the user-level thread library when certain events occur (such as a thread being about to block) via an upcall, which is handled in the thread library by an upcall handler. The upcall also provides a new LWP for the upcall handler to run on, which it can then use to reschedule the user thread that is about to become blocked. The OS will also issue upcalls when a thread becomes unblocked, so the thread library can make appropriate adjustments.

viii- If the kernel thread blocks, then the LWP blocks, which blocks the user thread.

ix- Ideally, there should be at least as many LWPs available as there could be concurrently blocked kernel threads. Otherwise, if all LWPs are blocked, user threads will have to wait for one to become available.

TYPES OF THREAD

Threads are implemented in the following two ways:

User-Level Threads

In this case, the application manages thread management; the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application begins with a single thread and begins running in that thread.

Kernel-Level Threads

In this case, thread management is done by the kernel. There is no thread-management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.

The kernel maintains context information for the process as a whole and for the individual threads within the process. Scheduling by the kernel is done on a per-thread basis. The kernel performs thread creation, scheduling and management in kernel space.

Advantages

The kernel can simultaneously schedule multiple threads from the same process on multiple processors.

If one thread in a process is blocked, the kernel can schedule another thread of the same process.

Kernel routines themselves can be multithreaded.

Disadvantages

Kernel threads are generally slower to create and manage than user threads.

Multithreading Models

Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:

Many-to-Many Model

Many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.

The following diagram shows the many-to-many model. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

Many-to-One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process blocks. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If user-level thread libraries are implemented on an operating system whose kernel does not support them, the kernel threads use the many-to-one relationship mode.

One-to-One Model

There is a one-to-one relationship between user-level threads and kernel-level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.


Difference between User-Level & Kernel-Level Threads

1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.

2. User-level threads are implemented by a thread library at the user level; kernel-level threads are created with operating-system support.

3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.

4. With user-level threads, a multithreaded application cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.

SYMMETRIC MULTIPROCESSING

SMP systems allow any processor to work on any task no matter where the data for that task are located in memory, provided that each task in the system is not in execution on two or more processors at the same time. With proper operating-system support, SMP systems can easily move tasks between processors to balance the workload efficiently.

SMP systems are tightly coupled multiprocessor systems with a pool of homogeneous processors running independently. Each processor executes different programs and works on different data, with the capability of sharing common resources (memory, I/O devices, the interrupt system and so on), connected using a system bus or a crossbar.

Uniprocessor and SMP systems require different programming methods to achieve maximum performance. Programs written for uniprocessor systems may nevertheless see a performance increase when run on SMP systems, because hardware interrupts that would usually suspend program execution while the kernel handles them can execute on an idle processor instead.


Process of Symmetric Multiprocessor


Microkernel

Introduction to the Microkernel

Early operating-system kernels were rather small, partly because computer memory was limited. As the capability of computers grew, the number of devices the kernel had to control also grew. Through the early history of Unix, kernels were generally small, even though those kernels contained device drivers and file-system managers. When address spaces increased from 16 to 32 bits, kernel design was no longer cramped by the hardware architecture, and kernels began to grow.

The Berkeley Software Distribution (BSD) of Unix began the era of big kernels. In addition to operating a basic system consisting of the CPU, disks and printers, BSD started adding additional file systems, a complete TCP/IP networking system, and a number of "virtual" devices that allowed existing programs to work invisibly over the network. This growth continued for many years, resulting in kernels with millions of lines of source code. As a result of this growth, kernels were more prone to bugs and became increasingly difficult to maintain.

The microkernel was designed to address the increasing growth of kernels and the difficulties that came with them. In theory, the microkernel design allows for easier management of code due to its division into user-space services. It also allows for increased security and stability, resulting from the reduced amount of code running in kernel mode. For example, if a networking service crashed due to a buffer overflow, only the networking service's memory would be corrupted, leaving the rest of the system functional.


Descriptions of the Microkernel

A microkernel is a highly spartan, modular subsystem composed of OS-neutral abstractions, providing only essential services such as process abstractions, threads, IPC, and memory-management primitives. All device drivers and the like, which are normally part of an OS kernel, run on the microkernel as just another user process.

• Multiple operating systems can then be layered on top of these abstractions, and are thus viewed as simply another application.

• This focus on modularity allows for scalability, extensibility and portability not found in monolithic operating systems (Unix, Linux, DOS, etc.).

Features

• The microkernel provides only rudimentary core facilities; different OS personalities (such as BSD Unix, Linux, NT, etc.) can be hosted on the microkernel.

• Because of its highly modular nature, many of the services commonly found in "kernel space" are found in "user space" on a microkernel.

• Flexibility: modules can be restarted without rebooting the OS.

• Lower fixed memory demand: the L4 microkernel only takes up about 32 kilobytes of memory.

• However, a microkernel plus a regular OS will probably take up more memory than a simple OS would, because of the additional memory required by the microkernel itself.

• SMP delivery is easier.


The advantages and disadvantages of the Microkernel

Advantages

Extensible: add a new server to add new OS functionality.

The kernel does not determine the operating-system environment:

• Allows support for multiple OS personalities.

• Needs an emulation server for each system (e.g. Mac, Windows, Unix).

• All applications run on the same microkernel.

• Applications can use a customized OS (e.g. for databases).

Mostly hardware-agnostic: threads, IPC and user-level servers don't need to worry about the underlying hardware.

Strong protection, even of the OS against itself (i.e. the parts of the OS that are implemented as servers).

Easy extension to multiprocessor and distributed systems.

Simplicity of the kernel (it is small).

Flexibility: we can have, for example, both a file server and a database server.

Disadvantages

Performance: a system call can require many protection-mode changes.

Expensive to reimplement everything with a new model.

OS personalities are easier to port to new hardware after porting to the microkernel, but porting to the microkernel may be harder than porting to new hardware.

More overhead: the cost of system calls and context switches.

Examples: Mach, L4, AmigaOS, MINIX, K42.


The diagram of Microkernel
