15. computer systems basic software 1

Post on 17-May-2015


Part2: Computer Systems

Chapter 2: Basic Software

(Text No. 1 Chapter 3)

Introduction

• This chapter describes the mechanism and functions of the operating system (control program), which performs different kinds of control/management
– Understand the software names and classifications, functions and roles, including the relation with the hardware and the user

– Understand the reasons why the operating system (OS) is necessary, its roles, structure, functions, etc.

– Understand the types and characteristics of the major OS

Software

• Programs that control the operations of the computer and its devices
– Starting up the computer
– Opening, executing, and running applications
– Disk formatting
– File compression
– Backups

Computer Software

• System Software
– User Interface
– Operating System
– Utilities

• Application Software
– Packaged Software
– Custom Software
– Shareware
– Cardware
– Freeware
– Public Domain

Operating System Position

• Layered from top to bottom: application software, operating system, hardware

Operating System

• “A set of system software routines that sits between the application program and the hardware.”
– The user interacts with the application program, which runs on the operating system, which controls the hardware

Operating System

• The OS serves as the hardware/software interface, so application programmers and users rarely communicate directly with the hardware (this simplifies programming)

• Acts as a repository for commands and shared routines, and defines a platform for constructing and executing application software

OS Configuration and Functions

• The OS was “born” so that the computer itself could prepare the data to be processed and control the execution process

• A main concern of users was how to operate these extremely expensive machines efficiently

OS Configuration and Functions

OS role

• Efficient use of resources
• Consecutive job processing
• Multi-programming
• Reduction of the response time
• Improvement of reliability

OS Configuration and Functions

• 1. OS Role
– Efficient use of the resources
  • Efficient use of resources (e.g. processor, memory, storage devices, I/O devices, application software, and other components of the computer system) without relying on human intervention or wasting these resources
– Consecutive job processing
  • Implements automatic processing of the work (jobs) done in the computer
  • Minimizes or eliminates human involvement to increase processing efficiency
– Multi-programming
  • Processes multiple jobs with the same processor
  • Enhances computer processing efficiency by minimizing or eliminating processor idle time

Consecutive Job Processing

Multi-programming

OS Configuration and Functions

– Reduction of the response time
  • Enhances services (reduces waiting time)
  • E.g. online transaction processing systems
– Improvement of reliability
  • Utility programs
– Others
  • User friendliness
  • “Extensibility” of the OS (e.g. the .NET Framework)

OS Configuration and Functions

• 2. OS Configuration
– The components of the OS are configured to work together to deliver its complex and wide range of functions
– The OS consists of the control program, service programs, and general-purpose language processors, layered on top of the hardware

OS Configuration and Functions

• 3. OS Functions
– The control program (nucleus or kernel) is equipped with diverse functions aimed at enabling efficient use of the hardware:
  • Job Management Function
  • Process Management Function
  • Data Management Function
  • Memory Management Function
  • Operation Management Function
  • Failure Management Function
  • Input/Output Management Function
  • Communication Management Function

OS Functions

Job Management

• To improve the computer system’s processing capacity by performing consecutive processing of jobs
– Within the OS, routines that dispatch, queue, schedule, load, initiate, and terminate jobs or tasks
– Concerned with job-to-job and task-to-task transition
– JCL and SPOOL

Job Management

• 1. Job control language (JCL)
– Often used in larger computer systems, JCL is any control language that controls the execution of applications
– The user needs to supply the information that the job requires and instruct the computer what to do with this information
– This includes information about:
  • the program or procedure to be executed
  • input data
  • output data
  • output reports

Job Management

– A JCL statement also provides information about who the job belongs to and which account to charge for the job
– The syntax of JCL differs depending on the OS:
  • JOB statement - submits a job to the operating system
  • EXEC statement - controls the system’s processing of the job
  • DD statement - requests resources needed to run the job (location of the required files)

JCL

Job Management

• Job Control Language flow: commands are translated into an object module; linkage editing produces a load module; the load module is executed, producing the result

Job Management

• 2. SPOOL (Simultaneous Peripheral Operations Online)
– Used especially in multi-programming environments
– Bridges the discrepancy between I/O transfer rates and CPU processing speed
– Refers to putting jobs in a buffer, a special area in memory or on a disk, where a device can access them when it is ready
– Allows devices to access data at different rates
– Example: print spooling
  • Documents are loaded into a buffer (usually an area on a disk), and the printer then pulls them off the buffer at its own rate
  • The user is able to perform other operations on the computer while printing takes place, since the documents sit in a buffer where the printer can access them
  • The user is able to place a number of print jobs in a queue instead of waiting for each one to finish before specifying the next one
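The print-spooling idea above can be sketched in a few lines of Python; the file names and the single "printer" thread are illustrative assumptions, not a real OS spooler API:

```python
import queue
import threading

# Sketch of print spooling: the application drops jobs into a buffer (the spool)
# and continues immediately, while a separate "printer" thread drains the queue
# at its own pace.
spool = queue.Queue()
printed = []

def printer():
    # The printer pulls jobs off the buffer one at a time, at its own rate.
    while True:
        job = spool.get()
        if job is None:          # sentinel: no more jobs
            break
        printed.append(job)      # stands in for the slow physical printing
        spool.task_done()

t = threading.Thread(target=printer)
t.start()
for doc in ["report.txt", "invoice.pdf", "photo.png"]:
    spool.put(doc)               # the application returns immediately after queueing
spool.put(None)
t.join()
print(printed)                   # jobs come out in FIFO (queue) order
```

Because the queue decouples the application from the printer, the application keeps working while printing proceeds, which is exactly the benefit described above.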

Job Management

• SPOOL (Simultaneous Peripheral Operations OnLine)
– The running program writes its output to a SPOOL file, from which the output devices read at their own pace
– Print spooling example: jobs to be printed wait in the print queue on the server’s disk; the print spooler feeds each print job to the laser printer while other jobs are being printed

Job Management

• 3. Job scheduling
– Executing the right job, on the right day, at the right time, after the right dependency
– A job scheduler can initiate and manage jobs automatically by processing prepared job control language statements or through equivalent interaction with a human operator
– Some features that may be found in a job scheduler include:
  • Continuous automatic monitoring of jobs and completion notification
  • Event-driven job scheduling
  • Performance monitoring
  • Report scheduling

Job Scheduling

Job Management

• Job Scheduler components:
– Reader: reads jobs and their data from the input device into the job queue (a SPOOL file)
– Initiator: loads a job from the job queue into memory for execution
– Writer: sends execution results from the SPOOL file to the output device
– Terminator: ends the job and releases its resources

Device Management

• Interface for device drivers

• Communication between a device and the computer
– Inflow (buffer) and outflow (queue) control
– Multiple formats

• Print Manager

Questions

• What is an operating system?

• Describe SPOOLING.

• Describe the OS roles.

• Describe Job Management.

Where to Get More Information

• http://www.okstate.edu/cis_info/cis_manual/jcl_toc.html

• http://whatis.techtarget.com/definition/0,,sid9_gci214229,00.html

Operating System

• Operating System
– OS Configuration and Functions
  • OS Role
  • OS Configuration
  • OS Functions
– Job Management
  • Job Control Language (JCL)
  • Simultaneous Peripheral Operations Online (SPOOL)
  • Job Scheduling

Process Management

• The main purpose is to efficiently use the processor

1. Execution Control

a. State transition
– A job is converted into a processing unit called a process. It is processed while repeating state transitions under process management:
  • Job submission (job step) → process generation → executable status (ready status)
  • Ready → running, via dispatching
  • Running → ready, via a timer interrupt
  • Running → wait status, via an SVC interrupt
  • Wait → ready, via an I/O interrupt
  • Running → process termination

b. Dispatcher
– The OS routine that determines which application routine or task the processor will execute next
  i. Preemption
     A form of multitasking in which each process is given a set amount of time to access the processor (priority-based queue)
  ii. Round robin
     A processor management technique in which each program is limited to an equal time slice (time-slice-based queue)


Process Management

• After a job is submitted
– A process is initiated
– The process may lead to multiple processes (or threads) being created
– All processes are assigned a certain amount of CPU time, enabling multi-tasking

CPU Scheduling

• Deals with the problem of deciding which process in the ready queue should be allocated the CPU
– First-Come, First-Served
– Shortest-Job-First
– Priority
– Round-Robin
– Multilevel Queue

• Multiprocessor Scheduling

First-Come, First-Served

• Simplest CPU scheduling algorithm
• The process that requests the CPU first is allocated the CPU first
• Easily managed with a queue (FIFO)
– When a process enters the ready queue, its PCB is linked onto the tail of the queue
– When the CPU is free, it is allocated to the process at the head of the queue
– The running process is then removed from the queue
• Advantages
– Simple to write and understand
• Disadvantages
– Average waiting time is often quite long

First-Come, First-Served

Process   Burst Time
P1        24
P2        3
P3        3

• If the processes arrive in the order P1, P2, P3 and are served in FCFS order:

P1 (0–24) | P2 (24–27) | P3 (27–30)

• Waiting times: P1 = 0; P2 = 24; P3 = 27
• Average waiting time = (0 + 24 + 27) / 3 = 17 ms

First-Come, First-Served

Process   Burst Time
P1        24
P2        3
P3        3

• If the processes arrive in the order P2, P3, P1 and are served in FCFS order:

P2 (0–3) | P3 (3–6) | P1 (6–30)

• Waiting times: P2 = 0; P3 = 3; P1 = 6
• Average waiting time = (0 + 3 + 6) / 3 = 3 ms
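The two FCFS examples above can be checked with a few lines of Python; the function name is an illustrative choice:

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS: the sum of all earlier bursts."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Arrival order P1, P2, P3 (bursts 24, 3, 3):
w = fcfs_waiting_times([24, 3, 3])
print(w, sum(w) / len(w))   # [0, 24, 27] 17.0

# Arrival order P2, P3, P1:
w = fcfs_waiting_times([3, 3, 24])
print(w, sum(w) / len(w))   # [0, 3, 6] 3.0
```

Note how strongly the average depends on arrival order: one long burst at the head of the queue delays every process behind it.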

First-Come, First-Served

• Issues
– Average waiting time
  • Generally not minimal, and varies substantially if the process CPU-burst times vary greatly
– Non-preemptive scheduling
  • Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O
  • Troublesome for time-sharing systems, where each user should get a share of the CPU at regular intervals

Shortest-Job-First

• Associates with each process the length of that process’s next CPU burst
• When the CPU is available, it is assigned to the process that has the smallest next CPU burst
• If two processes have next bursts of the same length, FCFS scheduling is used to break the tie
• Also known as shortest-next-CPU-burst scheduling
• Advantages
– Optimal (minimal) average waiting time
• Disadvantages
– Difficulty in predicting the next CPU burst

Shortest-Job-First

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24)

• Waiting times: P1 = 3; P2 = 16; P3 = 9; P4 = 0
• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms

Shortest-Job-First

• Issues
– Knowing the length of the next CPU request
  • For long-term (batch) scheduling, use the process time limit that a user specifies when submitting the job
  • Users are motivated to estimate the process time limit accurately
    – A lower value generally means being scheduled sooner
  • For short-term scheduling, only approximations are possible
    – Premise: expect the next CPU burst to be similar in length to the previous ones
    – By computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst
    – Exponential average formula: the next prediction is a weighted average of the most recent burst and the previous prediction
– Can be pre-emptive or non-preemptive

Shortest-Job-First (Pre-emptive)

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26)

• Process P1 is started at time 0, since it is the only process in the queue.
• Process P2 arrives at time 1. The remaining time for P1 (7) is larger than the time for P2 (4). So, P1 is pre-empted and P2 is scheduled.
• Average waiting time = ((10−1) + (1−1) + (17−2) + (5−3)) / 4 = 6.5 ms
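The pre-emptive variant (shortest remaining time first) can be simulated one time unit at a time; this is a sketch, with names chosen for this example:

```python
def srtf_average_wait(procs):
    """Pre-emptive SJF (shortest remaining time first).
    procs: list of (arrival_time, burst_time) tuples."""
    remaining = {i: burst for i, (_, burst) in enumerate(procs)}
    completion = {}
    t = 0
    while remaining:
        ready = [i for i in remaining if procs[i][0] <= t]
        if not ready:
            t += 1                                       # CPU idles until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])       # shortest remaining time wins
        remaining[i] -= 1                                # run it for one time unit
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            del remaining[i]
    # waiting time = completion - arrival - burst
    waits = [completion[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(waits)

# (arrival, burst) for P1..P4 from the slide:
print(srtf_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))   # 6.5
```

Re-evaluating the choice after every time unit is what makes the algorithm pre-emptive: a newly arrived short job immediately displaces a long running one.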

Shortest-Job-First (Non Pre-emptive)

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

• Calculate Average Waiting Time.

Priority Scheduling

• FCFS = an equal-priority scheduling algorithm
• SJF = a general priority scheduling algorithm
– The priority (p) is the inverse of the predicted next CPU burst
• Priority
– Domain: a fixed range of numbers, e.g. 0 to 7
– To simplify discussion:
  • High priority = 0
  • Low priority = 7

Priority Scheduling

Process   Burst Time   Priority
P1        10           8
P2        1            4
P3        2            9
P4        1            5
P5        5            2
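Assuming all five processes arrive at time 0 and that a lower number means higher priority (as on the earlier slide), non-preemptive priority scheduling for this table can be sketched as:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all processes arriving at time 0.
    procs: list of (name, burst_time, priority); lower number = higher priority."""
    waits, elapsed = {}, 0
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = elapsed            # waits for everything of higher priority
        elapsed += burst
    return waits

procs = [("P1", 10, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_schedule(procs)
print(waits)                             # P5 runs first, P3 last
print(sum(waits.values()) / len(waits))  # 7.0
```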

Priority Scheduling

• Priorities can be defined:
– Externally
  • External to the operating system
  • For example: importance of the process, type and amount of funds paid for computer use, department, political/hierarchical factors
– Internally
  • A measurable quantity or quantities used to compute the priority of a process
  • For example: time limits, memory requirements, number of open files, ratio of average I/O burst to average CPU burst

Priority Scheduling

• Pre-emptive
– When a process arrives at the ready queue, its priority is compared with that of the currently running process
– The CPU is preempted if the priority of the newly arrived process is higher
• Non pre-emptive
– Simply put the new process at the head of the ready queue

Priority Scheduling

• Issues
– Indefinite blocking, or starvation
  • Can leave some low-priority processes waiting indefinitely for the CPU
  • In a heavily loaded environment, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU
  • Possible outcomes:
    – The process will eventually be run (e.g. at a quiet time such as 3 a.m. on Sunday)
    – The computer system will eventually crash and lose all unfinished low-priority processes
  • Solution:
    – Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time
    – For example, if priorities range from 127 (low) to 0 (high), decrement the priority by 1 every 15 minutes. Hence, it takes no more than 32 hours for a priority-127 process to age into a priority-0 process.

Round-Robin

• Specifically designed for time-sharing systems
• RR = FCFS + pre-emption
• A small unit of time, called a time slice or time quantum, is defined
– Generally 10 to 100 ms
• The ready queue is a circular queue
• The CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of up to 1 time quantum
– New processes are added to the tail of the ready queue
– The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and then dispatches the process

RR

• Situations
– The process may have a CPU burst < 1 time quantum (voluntary CPU release)
– Otherwise, the timer goes off → interrupt to the OS → a context switch is executed to the next process in the queue → the interrupted process is put at the tail of the ready queue

RR

• With a time quantum of 4 ms, calculate the average waiting time.

Process   Burst Time
P1        24
P2        3
P3        3
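One way to check your answer is to simulate the circular queue directly; this sketch assumes all processes arrive at time 0:

```python
from collections import deque

def rr_average_wait(bursts, quantum):
    """Round-robin scheduling, all processes arriving at time 0."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    completion = [0] * len(bursts)
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run for up to one time quantum
        remaining[i] -= run
        t += run
        if remaining[i] > 0:
            ready.append(i)                # back to the tail of the ready queue
        else:
            completion[i] = t
    # waiting time = completion time - burst time (all arrivals at 0)
    waits = [completion[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits)

print(rr_average_wait([24, 3, 3], 4))   # ≈ 5.67 ms (waits: P1=6, P2=4, P3=7)
```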

RR

• Issue
– Choosing the size of the time quantum
  • Too small a time quantum, and the CPU time will be spent context switching
  • Too large a time quantum, and the algorithm degenerates to FCFS
  • Rule of thumb: 80% of CPU bursts should be shorter than the time quantum

Multilevel Queue

• Created for situations in which processes are easily classified into different groups
– For example, foreground (interactive) processes and background (batch) processes
  • Different response-time requirements
  • Different scheduling needs
  • Foreground processes may have priority (externally defined) over background processes

Multilevel Queue

• Partition the ready queue into several separate queues
– For example (from highest to lowest priority):
  • System processes
  • Interactive processes
  • Interactive editing processes
  • Batch processes
  • Student processes
– Processes are permanently assigned to one queue based on some property (memory size, process priority, or process type)

Multilevel Queue

– Each queue can use a different scheduling algorithm
  • Foreground processes (RR)
  • Background processes (FCFS)
– Scheduling between queues
  • Generally, high-priority queues have absolute priority over lower-priority queues
  • No process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty
  • If an interactive editing process entered the ready queue while a batch process was running, the batch process would be pre-empted
– Time-slicing between queues
  • Foreground queue (80% of the CPU, scheduled locally using RR)
  • Background queue (20% of the CPU, scheduled locally using FCFS)

Multi-Processor Scheduling

• More complex than single-processor scheduling
• The discussion assumes homogeneous processors
– Identical in functionality
– Any processor can then be used to run any process in the queue
• Possibilities
– A separate queue for each processor
  • One could be empty while another is full
– One queue for all (common ready queue)
  • Self-scheduling
  • Master-slave scheduling

Process Control Block

• All of the information needed to keep track of a process when switching

• Typically contains:
– Process ID
– Last instruction and data pointers
– Register contents
– States of various flags and switches
– Upper and lower bounds of its memory
– List of opened files
– Priority of the process
– Status of all I/O devices needed
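The field list above can be sketched as a data structure. This is illustrative Python only (the field names are assumptions; real kernels keep this bookkeeping in C structs with many more fields):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Illustrative sketch of the per-process bookkeeping listed above."""
    pid: int                                        # process ID
    program_counter: int = 0                        # last instruction pointer
    registers: dict = field(default_factory=dict)   # saved register contents
    flags: int = 0                                  # states of flags and switches
    memory_bounds: tuple = (0, 0)                   # lower and upper memory limits
    open_files: list = field(default_factory=list)  # list of opened files
    priority: int = 0                               # scheduling priority
    io_status: dict = field(default_factory=dict)   # status of needed I/O devices
    state: str = "ready"                            # ready / running / waiting

pcb = ProcessControlBlock(pid=42, priority=3)
print(pcb.pid, pcb.state)   # 42 ready
```

A context switch amounts to saving the running process's registers and program counter into its PCB and restoring them from the PCB of the process being dispatched.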

Thrashing

• The OS requires some CPU cycles to maintain each PCB and to switch between processes

• When vast majority of the CPU time is spent on swapping between processes rather than executing processes, this condition is known as thrashing

c. Kernel and interruption control
– An interruption is performed to control process execution (when a process transition or anomaly occurs)
– The program that handles the interrupt is called the ‘interrupt routine’
– The program is restarted once the handling is completed

Process Management

• Interrupt handling: when anomaly A occurs during program execution, the program halts, interrupt routine A runs, and the program is then restarted; the same happens for anomaly B with interrupt routine B, and so on

• The central part of the OS performing this interruption control is called the kernel

• Depending on the location where the anomaly occurs, interruptions are divided into:
– Internal interrupts
– External interrupts

Process Management

Internal Interrupt

• Interruptions that occur due to errors in the program itself
– Program interrupt
  • The interruption occurs due to an error generated during the execution of a program, e.g. division by zero
– Supervisor call (SVC) interrupt
  • Occurs when data input is requested during the execution of a program
  • Saves the value of the program counter for the interrupted process and transfers control to a fixed location in memory (this location contains code known as the interrupt routine, interrupt service routine, or interrupt handler)

External Interrupt

• Interruptions that occur due to external factors and not due to the program
• The following external interrupts exist:
– Input/output interrupt
  • An anomaly occurs in the input/output completion report, or in an input or output device during processing
– Machine check interrupt
  • A malfunction of the processor or the main storage unit, or an anomaly in the power supply, etc. occurs. The failure occurrence is reported to the operating system by the processor.
– Timer interrupt
  • Generated by the timer contained inside the processor. Programs exceeding the execution time specified for the time-sharing process, etc. are subject to forced termination by this interruption. Likewise, a timer interrupt occurs when routines that never end, called infinite loops, need to be aborted.
– Console interrupt
  • A special process request was indicated from the operator console during the execution of a program.

Summary of Interrupts

• Internal interrupt
– Program
– Supervisor call

• External interrupt
– Input/output device
– Console (e.g. shutting down a server)
– Hardware (e.g. low battery power)
– Timer

Multi-Processing

• An asymmetric OS uses one CPU for its own needs and divides application processes among the remaining CPUs
– CPU 1 (OS), CPU 2, CPU 3, CPU 4

Multi-Processing

• A symmetric OS divides itself among the various CPUs, balancing demand against CPU availability
– CPU 1, CPU 2, CPU 3, CPU 4

Multi-Threading

• A multi-threaded application is one that has been specifically designed to break up its instructions into multiple streams, or threads, so that multiple processors (either physical or logical) can process the streams concurrently

• Adobe Photoshop and Windows Movie Maker are among the relatively short list of mainstream applications that are multi-threaded

Process Management

• Multi-programming
– A processor management technique that takes advantage of the speed disparity between a computer and its peripheral devices to load and execute two or more programs concurrently
– Multi-tasking = instructions/data from different processes co-resident in memory
– Multi-programming implies multi-tasking, but not vice versa

Multi-programming

Process Management

• TSS (Time sharing system)
– Multiple, concurrent, interactive users are each assigned, in turn, a single time slice before being forced to surrender the processor to the next user
– Provides the “feel” that one is the only user of the computer

• Exclusive control
– The same resource cannot be used by multiple processes at the same time
– Using semaphores, synchronization among processes is conducted and resource sharing is implemented
– Deadlock occurs when two or more processes each wait for the other to release a resource

Semaphore

• A way to achieve exclusive control

• Two semaphore operations
– One to acquire the use of a resource
– Another to release the resource

• Applications
– Synchronisation among processes
– Sharing resources
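Python's threading.Semaphore implements exactly these two operations (acquire corresponds to requesting the resource, release to returning it). A small sketch guarding a pool of two resources:

```python
import threading

pool = threading.Semaphore(2)   # counting semaphore: at most 2 holders at once
results = []

def worker(n):
    with pool:                  # acquire: wait until one of the 2 slots is free
        results.append(n)       # critical section sharing the limited resource
    # release happens automatically when the with-block exits

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))          # all 5 workers eventually got a slot
```

If two such semaphores are acquired in opposite orders by two processes, each can end up waiting for the other: that is precisely the deadlock situation described above.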

Deadlock

Main Memory Management

• Controls the storage area of the main storage unit
1. Partition method
– A fixed-length unit of memory, defined when the OS is first generated or loaded
  • Single-partition method: memory is divided into an area for the control program and an area for one program
  • Multiple partitions method: memory is divided into multiple partitions, with one program stored in each
  • Variable partitions method: sequentially assigns the area required by application programs in the program storage area

Main Memory Management
– Comparison of the 3 partition methods

Main Memory Management

• Compaction
– With these methods, areas that are not used (garbage) are generated in each partition of the main storage unit. This phenomenon is called fragmentation.
– In order to resolve this fragmentation, it is necessary to reset each partition at specific times or at specific intervals. This operation is called compaction.

Fragmentation

• In the variable partitions method, unused areas accumulate between the control program and the loaded programs; defragmentation/compaction consolidates this space so that new programs fit

Main Memory Management

2. Swapping
– The process of moving program instructions and data between memory and secondary storage as the program is executed
– A program is moved from the main storage unit to the auxiliary storage device (“swap out”) and brought back in when needed (“swap in”)


3. Overlay
– A program is broken into logically independent modules (segments), and only the active modules are loaded into memory
– When a module not yet in memory is referenced, it replaces (or overlays) a module already in memory
– Enables the execution of programs that are larger than the storage capacity of the partitions of the main storage unit
– The program is executed after each module is stored in the main storage unit

Main Memory Management

• With overlay structures, only the necessary portions of a program are loaded into memory:
– The complete program consists of 4 modules
– Under normal conditions, only modules 1 and 2 are in memory
– When errors occur, module 3 overlays module 2
– At the end of the job, only modules 1 and 4 are needed

Overlay

• (Diagram: program segments are loaded into memory one segment at a time)

Memory Release

• When a program ends, the memory assigned is released

• When the memory cannot be released, this is known as a memory leak

Main Memory Management

4. Memory protection
– An OS routine that intervenes if a program attempts to modify (or, sometimes, even to read) the contents of memory locations that do not belong to it, and (usually) terminates the program
I. Boundary address method
– The address range that can be accessed is specified for each of the programs to be executed
– Every access is checked to confirm that execution stays within that address range
– Lower and upper limit boundaries are kept per process
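The boundary address method amounts to a range check on every access. A minimal sketch (the function name and exception type are invented for illustration):

```python
class MemoryProtectionError(Exception):
    """Raised when a program touches memory outside its assigned range."""

def checked_access(bounds, address):
    """Boundary address method: allow the access only if the address lies
    within the (lower, upper) range assigned to the program."""
    lower, upper = bounds
    if not lower <= address <= upper:
        raise MemoryProtectionError(
            f"address {address:#x} outside [{lower:#x}, {upper:#x}]")
    return address

checked_access((0x1000, 0x1FFF), 0x1ABC)     # within bounds: allowed
try:
    checked_access((0x1000, 0x1FFF), 0x2000)
except MemoryProtectionError as err:
    print(err)   # the OS would (usually) terminate the program here
```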

II. Ring protection method
– Each process is assigned a ring number
– A ring number is also assigned to each program, and access is controlled according to the size of the number
– Small numbers are assigned to important programs (kernel/critical)
– Large numbers are assigned to user programs
– Access from small numbers to large numbers is permitted, but not vice versa
III. Keylock method
– The main storage unit is divided into multiple partitions, and each partition is locked for memory protection
– Each program to be executed carries its respective memory protection key(s)
– Access is authorized when the memory is unlocked using the key
– Each memory partition is assigned a lock; each process is assigned a key

Main Memory Management

5. Other main memory management issues
I. Dynamic allocation
– A technique by which the main storage unit is dynamically assigned during program execution
II. Memory leak
– Occurs due to failure to release an area that should have been released by a program using the main storage unit
– All storage area is released when the power is turned off
– Usually occurs in servers, which remain operational 24 hours a day

Virtual Storage Management

• Enables the execution of programs without worrying about the storage capacity of the main storage unit
• Basic approach to implementing virtual storage:
– The main storage unit is divided into partitions (page frames) of a specific size
– The program is stored temporarily in the external page storage area of an external storage device
– This external page storage area is divided into partitions called slots (the same size as a page frame)
– Program units stored in page frames or slots are called pages (approx. 2 KB per page)
– The combination of the main storage unit and the external page storage area of the auxiliary storage device is called the logical address space
– The pages in the slots needed for execution are transferred to empty page frames of the main storage unit and executed

• Execution proceeds by repeatedly transferring the programs, stored page by page in the external page storage area, to the page frames of the main storage unit
– This act of transferring is called a ‘load’

Virtual Storage Management

Virtual Storage Management

1. Paging
– The exchange of program pages between the main storage unit and an auxiliary storage device
– In multi-programming, when paging occurs too frequently, the condition is known as thrashing

2. Address translation
– During “page-in”, translation of the instruction address according to the address of the page frame is required
– An address stored in the external page storage area is a static address
– An address stored in a page frame of the main storage unit (after address translation) is a dynamic address
– The main address translation method is called Dynamic Address Translation (DAT)
  • Performs translation at the time the paged-in instruction is executed
  • Addresses begin from 0 and increase in page units

Virtual Storage Management

DAT

Virtual Memory

• Paging: pages are exchanged between page frames in memory and the external page storage area (“page in” loads a page into memory, “page out” evicts one)

Dynamic Address Translation

Address   Page
DA00      Page 1
DB00      Page 2
DC00      Page 3
DD00      Page 4
DE00      Page 5
DF00      Page 6
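Treating the hex values above as page-frame base addresses (an assumption about the slide's diagram), dynamic address translation is a table lookup plus an offset within the page:

```python
# Page table mirroring the slide: page number -> assumed page-frame base address
page_table = {1: 0xDA00, 2: 0xDB00, 3: 0xDC00, 4: 0xDD00, 5: 0xDE00, 6: 0xDF00}

def translate(page, offset):
    """Dynamic Address Translation sketch: frame base + offset within the page."""
    return page_table[page] + offset

print(hex(translate(3, 0x2A)))   # 0xdc2a
```

A real DAT unit performs this lookup in hardware at instruction-execution time, which is why translation can be deferred until the paged-in instruction actually runs.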

3. Segmentation paging
– Segment: a group of logically related pages
– Page-in and page-out are performed by these segments

4. Page replacement (displacement)
– To achieve system processing efficiency:
  • Pages used frequently are kept permanently in the main storage unit
  • Pages used infrequently are stored in the external page storage area (transferred to the main storage unit only when they are needed)
– LRU (Least Recently Used) method
– FIFO (First-In First-Out) method
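The two replacement policies named above can be compared with a short simulation; the reference string here is an invented example:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults with FIFO replacement: evict the oldest-loaded page."""
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()            # evict the page loaded earliest
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults with LRU replacement: evict the least recently used page."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 1, 4, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 4 5
```

Neither policy dominates the other: which one faults less depends on the access pattern, as this particular reference string (where FIFO happens to win) illustrates.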

Segmentation Paging
