Computer System Overview
Chapter 1
Interrupts
• An interruption of the normal sequence of execution
• Improves processing efficiency
• Allows the processor to execute other instructions while an I/O operation is in progress
• A suspension of a process caused by an event external to that process, performed in such a way that the process can be resumed
Classes of Interrupts
• Program
– arithmetic overflow
– division by zero
– execution of an illegal instruction
– reference outside user’s memory space
• Timer
• I/O
• Hardware failure
Interrupt Handler
• A program that determines the nature of the interrupt and performs whatever actions are needed
• Control is transferred to this program
• Generally part of the operating system
Multiple Interrupts
• Disable interrupts while an interrupt is being processed
– Processor ignores any new interrupt request signals
Multiprogramming
• Processor has more than one program to execute
• The order in which the programs are executed depends on their relative priority and whether they are waiting for I/O
• After an interrupt handler completes, control may not return to the program that was executing at the time of the interrupt
Cache Memory
Layers of Computer System
Services Provided by the Operating System
• Program development
– Editors and debuggers
• Program execution
• Access to I/O devices
• Controlled access to files
• System access
Operating System
• Functions the same way as ordinary computer software
– It is a program that is executed
• The operating system relinquishes control of the processor to execute other programs
Kernel
• Portion of the operating system that is in main memory
• Contains the most frequently used functions
• Also called the nucleus
Multiprogramming
• When one job needs to wait for I/O, the processor can switch to the other job
Batch Multiprogramming versus Time Sharing
• Principal objective
– Batch multiprogramming: maximize processor use
– Time sharing: minimize response time
• Source of directives to operating system
– Batch multiprogramming: job control language commands provided with the job
– Time sharing: commands entered at the terminal
Major Achievements
• Processes
• Memory management
• Information protection and security
• Scheduling and resource management
• System structure
Processes
• A program in execution
• An instance of a program running on a computer
• The entity that can be assigned to and executed on a processor
• A unit of activity characterized by a single sequential thread of execution, a current state, and an associated set of system resources
Difficulties with Designing System Software
• Improper synchronization
– ensure a process waiting for an I/O device receives the signal
• Failed mutual exclusion
• Nondeterminate program operation
– a program should depend only on its input, not on shared memory areas
• Deadlocks
Process
• Consists of three components
– An executable program
– Associated data needed by the program
– Execution context of the program
• The execution context holds all information the operating system needs to manage the process
Memory Management
• Process isolation
• Automatic allocation and management
• Support for modular programming
• Protection and access control
• Long-term storage
File System
• Implements long-term store
• Information stored in named objects called files
Major Elements of Operating System
Operating System Design Hierarchy
Level  Name                 Objects                               Example Operations
13     Shell                User programming environment          Statements in shell language
12     User processes       User processes                        Quit, kill, suspend, resume
11     Directories          Directories                           Create, destroy, attach, detach, search, list
10     Devices              External devices, such as printers,   Open, close, read, write
                            displays, and keyboards
9      File system          Files                                 Create, destroy, open, close, read, write
8      Communications       Pipes                                 Create, destroy, open, close, read, write
7      Virtual memory       Segments, pages                       Read, write, fetch
6      Local secondary      Blocks of data, device channels       Read, write, allocate, free
       store
5      Primitive processes  Primitive processes, semaphores,      Suspend, resume, wait, signal
                            ready list
Characteristics of Modern Operating Systems
• Multithreading
– process is divided into threads that can run simultaneously
• Thread
– dispatchable unit of work
– executes sequentially and is interruptible
• Process is a collection of one or more threads
Characteristics of Modern Operating Systems
• Symmetric multiprocessing
– there are multiple processors
– these processors share the same main memory and I/O facilities
– all processors can perform the same functions
Characteristics of Modern Operating Systems
• Distributed operating systems
– provide the illusion of a single main memory space and a single secondary memory space
– used for distributed file systems
Client/Server Model
• Simplifies the Executive
– possible to construct a variety of APIs
• Improves reliability
– each service runs as a separate process with its own partition of memory
– clients cannot directly access hardware
• Provides a uniform means for applications to communicate via LPC
• Provides base for distributed computing
Threads and SMP
• Different routines can execute simultaneously on different processors
• Multiple threads of execution within a single process may execute on different processors simultaneously
• Server processes may use multiple threads
• Threads of a process share its data and resources
Two Suspend States
Operating System Control Structures
• Information about the current status of each process and resource
• Tables are constructed for each entity the operating system manages
Memory Tables
• Allocation of main memory to processes
• Allocation of secondary memory to processes
• Protection attributes for access to shared memory regions
• Information needed to manage virtual memory
I/O Tables
• Whether an I/O device is available or assigned
• Status of the I/O operation
• Location in main memory being used as the source or destination of the I/O transfer
File Tables
• Existence of files
• Location on secondary memory
• Current status
• Attributes
• Sometimes this information is maintained by a file-management system
Process Table
• Where the process is located
• Attributes necessary for its management
– Process ID
– Process state
– Location in memory
User-Level Threads
• All thread management is done by the application
• The kernel is not aware of the existence of threads
Kernel-Level Threads
• W2K, Linux, and OS/2 are examples of this approach
• Kernel maintains context information for the process and the threads
• Scheduling is done on a thread basis
Categories of Computer Systems
• Single Instruction Single Data (SISD)
– a single processor executes a single instruction stream to operate on data stored in a single memory
• Single Instruction Multiple Data (SIMD)
– each instruction is executed on a different set of data by the different processors
Categories of Computer Systems
• Multiple Instruction Single Data (MISD)
– a sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence; never implemented
• Multiple Instruction Multiple Data (MIMD)
– a set of processors simultaneously execute different instruction sequences on different data sets
Concurrency
• Communication among processes
• Sharing resources
• Synchronization of multiple processes
• Allocation of processor time
Difficulties with Concurrency
• Sharing global resources
• Management of allocation of resources
• Programming errors difficult to locate
Competition Among Processes for Resources
• Mutual exclusion
– Critical sections
• Only one program at a time is allowed in its critical section
• Example: only one process at a time is allowed to send a command to the printer
• Deadlock
• Starvation
Cooperation Among Processes by Communication
• Messages are passed
– Mutual exclusion is not a control requirement
• Possible to have deadlock
– Each process waiting for a message from the other process
• Possible to have starvation
– Two processes sending messages to each other while another process waits for a message
Mutual Exclusion: Hardware Support
• Special machine instructions
– Performed in a single instruction cycle
– Not subject to interference from other instructions
– Reading and writing
– Reading and testing
Mutual Exclusion: Hardware Support
• Test and Set instruction

    /* Executed atomically by the hardware; i points to the lock word.
       (Written with a pointer so the update is visible to the caller.) */
    boolean testset(int *i) {
        if (*i == 0) {
            *i = 1;           /* acquire the lock */
            return true;
        } else {
            return false;     /* lock already held */
        }
    }
Mutual Exclusion: Hardware Support
• Exchange instruction

    /* Executed atomically by the hardware: swaps a register with a
       memory word in a single step. ("reg" stands in for the C keyword
       "register"; pointers make the swap visible to the caller.) */
    void exchange(int *reg, int *memory) {
        int temp = *memory;
        *memory = *reg;
        *reg = temp;
    }
Mutual Exclusion Machine Instructions
• Advantages
– Applicable to any number of processes on either a single processor or multiple processors sharing main memory
– It is simple and therefore easy to verify
– It can be used to support multiple critical sections
Mutual Exclusion Machine Instructions
• Disadvantages
– Busy waiting consumes processor time
– Starvation is possible when a process leaves a critical section and more than one process is waiting
– Deadlock
• If a low-priority process holds the critical region and a higher-priority process needs it, the higher-priority process will obtain the processor and busy-wait for the critical region, while the low-priority process never runs to release it
Semaphores
• Special variable called a semaphore is used for signaling
• If a process is waiting for a signal, it is suspended until that signal is sent
• Wait and signal operations cannot be interrupted
• Queue is used to hold processes waiting on the semaphore
Semaphores
• Semaphore is a variable that has an integer value
– May be initialized to a nonnegative number
– Wait operation decrements the semaphore value
– Signal operation increments the semaphore value
Monitors
• Monitor is a software module
• Chief characteristics
– Local data variables are accessible only by the monitor
– Process enters the monitor by invoking one of its procedures
– Only one process may be executing in the monitor at a time
Message Passing
• Enforce mutual exclusion
• Exchange information

    send (destination, message)
    receive (source, message)
Synchronization
• Sender and receiver may or may not be blocking (waiting for message)
• Blocking send, blocking receive
– Both sender and receiver are blocked until the message is delivered
– Called a rendezvous
Synchronization
• Nonblocking send, blocking receive
– Sender continues processing, e.g. sending messages as quickly as possible
– Receiver is blocked until the requested message arrives
• Nonblocking send, nonblocking receive
– Neither party is required to wait
Deadlock
• Permanent blocking of a set of processes that either compete for system resources or communicate with each other
• No efficient general solution
• Involves conflicting needs for resources by two or more processes
Reusable Resources
• Used by one process at a time and not depleted by that use
• Processes obtain resources that they later release for reuse by other processes
• Processors, I/O channels, main and secondary memory, files, databases, and semaphores
• Deadlock occurs if each process holds one resource and requests the other
Example of Deadlock
Conditions for Deadlock
• Mutual exclusion– only one process may use a resource at a time
• Hold-and-wait
– Prevented by requiring that a process request all of its required resources at one time
Conditions for Deadlock
• No preemption
– If a process holding certain resources is denied a further request, that process must release its original resources
– If a process requests a resource that is currently held by another process, the operating system may preempt the second process and require it to release its resources
Conditions for Deadlock
• Circular wait
– Prevented by defining a linear ordering of resource types
Deadlock Avoidance
• A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock
• Requires knowledge of future process requests
Two Approaches to Deadlock Avoidance
• Do not start a process if its demands might lead to deadlock
• Do not grant an incremental resource request to a process if this allocation might lead to deadlock
Resource Allocation Denial
• Referred to as the banker’s algorithm
• State of the system is the current allocation of resources to processes
• Safe state is one in which there is at least one sequence that does not result in deadlock
• Unsafe state is a state that is not safe
Determination of a Safe State: Initial State
Deadlock Avoidance
• Maximum resource requirement must be stated in advance
• Processes under consideration must be independent; no synchronization requirements
• There must be a fixed number of resources to allocate
• No process may exit while holding resources
Deadlock Detection
Strategies once Deadlock Detected
• Abort all deadlocked processes
• Back up each deadlocked process to some previously defined checkpoint, and restart all processes
– the original deadlock may recur
• Successively abort deadlocked processes until deadlock no longer exists
• Successively preempt resources until deadlock no longer exists
Selection Criteria for Deadlocked Processes
• Least amount of processor time consumed so far
• Least number of lines of output produced so far
• Most estimated time remaining
• Least total resources allocated so far
• Lowest priority
UNIX Concurrency Mechanisms
• Pipes
• Messages
• Shared memory
• Semaphores
• Signals
Paging
• Each process has its own page table
• Each page table entry contains the frame number of the corresponding page in main memory
• A bit is needed to indicate whether the page is in main memory or not
Aim of Scheduling
• Response time
• Throughput
• Processor efficiency
Types of Scheduling
Decision Mode
• Nonpreemptive
– Once a process is in the running state, it will continue until it terminates or blocks itself for I/O
• Preemptive
– Currently running process may be interrupted and moved to the Ready state by the operating system
– Allows for better service since no one process can monopolize the processor for very long
Classifications of Multiprocessor Systems
• Loosely coupled multiprocessor
– Each processor has its own memory and I/O channels
• Functionally specialized processors
– Such as an I/O processor
– Controlled by a master processor
• Tightly coupled multiprocessing
– Processors share main memory
– Controlled by the operating system
Assignment of Processes to Processors
• Treat processors as a pooled resource and assign processes to processors on demand
• Permanently assign a process to a processor
– Dedicated short-term queue for each processor
– Less overhead
– A processor could be idle while another processor has a backlog
Assignment of Processes to Processors
• Global queue
– Schedule to any available processor
• Master/slave architecture
– Key kernel functions always run on a particular processor
– Master is responsible for scheduling
– Slave sends service request to the master
– Disadvantages
• Failure of master brings down whole system
• Master can become a performance bottleneck
Process Scheduling
• Single queue for all processes
• Multiple queues are used for priorities
• All queues feed the common pool of processors
• The specific scheduling discipline is less important with more than one processor
Deadline Scheduling
• Information used
– Ready time
– Starting deadline
– Completion deadline
– Processing time
– Resource requirements
– Priority
– Subtask scheduler