Lecture Topics: 11/13 • Semaphores • Deadlock • Scheduling


Page 1: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Lecture Topics: 11/13

• Semaphores
• Deadlock
• Scheduling

Page 2: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Semaphores

• Semaphores were the first synchronization mechanism (so every mechanism created since is better)

• The semaphore is an integer variable that has two atomic operations (sketched in code below):
– P() (the entry procedure): wait for the semaphore to become positive, then decrement it by 1
– V() (the exit procedure): increment the semaphore by 1 and wake up a waiting P(), if any

• Who came up with those names?
– They're from the Dutch proberen (to try) and verhogen (to increment)
– Thanks, Dijkstra

• Don't use semaphores
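A minimal sketch of those semantics, built from a lock and a condition variable (the class name and the use of C++ standard-library primitives are illustrative, not the course's implementation):

#include <mutex>
#include <condition_variable>

class Semaphore {
public:
    explicit Semaphore(int initial) : count(initial) {}

    // P(): wait for the count to become positive, then decrement it by 1.
    void P() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return count > 0; });
        --count;
    }

    // V(): increment the count by 1 and wake one waiting P(), if any.
    void V() {
        std::lock_guard<std::mutex> lock(m);
        ++count;
        cv.notify_one();
    }

private:
    std::mutex m;
    std::condition_variable cv;
    int count;
};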

Page 3: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Deadlock

• Circular waiting for resources
• Process A wants what process B has
• Process B wants what process A has
• Neither can make progress without acquiring the other's resource
• Neither will relinquish its own resource
• No progress possible!

// Thread "red":
lockOne->Acquire();
lockTwo->Acquire();

// Thread "blue":
lockTwo->Acquire();
lockOne->Acquire();

Red has lockOne and is waiting on lockTwo; blue has lockTwo and is waiting on lockOne.

DEADLOCK!

Page 4: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

System Model

• There are processes and resources

• A process follows these steps to utilize a resource (sketched in code below):
– Acquire the resource
  • If the resource is unavailable, block
– Use the resource
– Release the resource
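In code, the acquire/use/release pattern for a lock-protected resource might look like this (a sketch; the lock name and the use of std::mutex are assumptions):

#include <mutex>

std::mutex printerLock;    // hypothetical lock guarding one shared resource

void printJob() {
    printerLock.lock();    // acquire: blocks if the resource is unavailable
    // ... use the resource ...
    printerLock.unlock();  // release: a waiting process can now acquire it
}

In real C++ the idiomatic form is an RAII guard such as std::lock_guard, which releases the lock automatically when the scope ends.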

Page 5: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Necessary Conditions for Deadlock

• Mutual Exclusion
– The resource can't be shared

• Hold and Wait
– Some process holds one resource while waiting for another

• No Preemption
– Once a process has a resource, it cannot be forced to give it up

• Circular Wait
– A waits for B, B for C, C for D, D for A

[Diagram: circular wait cycle A → B → C → D → A]

Page 6: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Dealing with Deadlock

• Deadlock Prevention
– Ensure statically that deadlock is impossible

• Deadlock Avoidance
– Ensure dynamically that deadlock is impossible

• Deadlock Detection and Recovery
– Allow deadlock to occur, but notice when it does and try to recover

• Ignore the Problem

Page 7: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Deadlock Prevention

• There are four necessary conditions for deadlock

• Take any one of them away and deadlock is impossible

• Let's attack deadlock by
– examining each of the conditions
– considering what would happen if we threw it out

Page 8: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Mutual Exclusion

• Can't eliminate this condition
– some resources are intrinsically non-sharable

• Examples include the printer, write access to a file or record, and entry into a section of code

Page 9: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Hold and Wait

• A process acquires all the resources it needs before it does anything; if it can't get them all, it gets none (a sketch in code follows this list)
– If it can't acquire both the scanner and the printer, it waits until both are available

• Resource utilization may be low
– If you need P for a long time and Q only at the end, you still have to hold Q's lock the whole time

• Starvation-prone
– A process may have to wait indefinitely before the popular resources are all available at the same time
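One way to approximate "acquire everything together" with locks is to take them in a single step; a sketch using C++'s std::scoped_lock (the lock names are illustrative, and strictly speaking this backs off and retries rather than waiting for all resources up front):

#include <mutex>

std::mutex scannerLock, printerLock;   // hypothetical locks for the two devices

void scanAndPrint() {
    // Takes both locks as one operation; the constructor uses a
    // deadlock-avoidance algorithm, so the thread never ends up stuck
    // holding one lock while waiting forever for the other.
    std::scoped_lock both(scannerLock, printerLock);
    // ... use the scanner and the printer ...
}   // both locks are released here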

Page 10: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

No Preemption

• To attack the no preemption condition:
– If a process asks for a resource that is not currently available, block it and take away all of its other resources
– Add the preempted resources to the list of resources the process is waiting for

• This strategy works for some resources:
– CPU state (the contents of registers can be spilled to memory)
– memory (can be spilled to disk)

• But not for others:
– The printer

Page 11: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Circular Wait

• To attack the circular wait condition:
– Assign each resource a priority
– Make processes acquire resources in priority order (sketched in code below)

• If two processes need the printer and the scanner, both must acquire the printer (higher priority) before the scanner

• This is the most common form of deadlock prevention

• The only problem: a process is sometimes forced to relinquish a resource so that it can reacquire it in the correct order
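A sketch of the ordering rule applied to the earlier example (lockOne outranks lockTwo; Release() is assumed to be the matching exit operation):

// Every thread that needs both locks acquires lockOne first.
// With one global order, the wait-for graph can never form a cycle.

void redThread() {
    lockOne->Acquire();
    lockTwo->Acquire();
    // ... work with both resources ...
    lockTwo->Release();
    lockOne->Release();
}

void blueThread() {
    lockOne->Acquire();   // same order as red, even though blue
    lockTwo->Acquire();   // logically "wants" lockTwo first
    // ... work with both resources ...
    lockTwo->Release();
    lockOne->Release();
}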

Page 12: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Deadlock Detection

• Build a wait-for graph and periodically look for cycles, to find the circular wait condition (a detection sketch follows)

• The wait-for graph contains:
– nodes, corresponding to processes
– directed edges, corresponding to a resource held by one process and desired by another

[Wait-for graph: nodes A, B, C, D, E; A waits for B, B waits for D, D waits for A — a cycle, so deadlock!]
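A sketch of the cycle check as a depth-first search over the wait-for graph (the adjacency-list representation and function names are assumptions, not part of the lecture):

#include <vector>

// waitsFor[p] lists the processes that process p is waiting on.
// state[]: 0 = unvisited, 1 = on the current DFS path, 2 = finished.
bool hasCycleFrom(int p, const std::vector<std::vector<int>>& waitsFor,
                  std::vector<int>& state) {
    state[p] = 1;
    for (int q : waitsFor[p]) {
        if (state[q] == 1) return true;    // edge back onto the path: a cycle
        if (state[q] == 0 && hasCycleFrom(q, waitsFor, state)) return true;
    }
    state[p] = 2;
    return false;
}

bool deadlockDetected(const std::vector<std::vector<int>>& waitsFor) {
    std::vector<int> state(waitsFor.size(), 0);
    for (int p = 0; p < (int)waitsFor.size(); ++p)
        if (state[p] == 0 && hasCycleFrom(p, waitsFor, state))
            return true;                   // some set of processes is deadlocked
    return false;
}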

Page 13: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Deadlock Recovery

• Once you've discovered deadlock, what do you do about it?

• One option: abort one of the processes to recover from circular wait
– The process will likely have to start over from scratch
– Which process should you choose?

• Another option: take a resource away from a process
– Again, which process should you choose?
– How can you roll back the process to its state before it had the coveted resource?
– Make sure you don't keep preempting from the same process: avoid starvation

Page 14: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Ignoring Deadlock

• Not as silly as it sounds
• The mechanisms outlined previously for handling deadlock may be very expensive; if the alternative is a forced reboot once a year, that might be acceptable

Page 15: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Review: Process State

• A process can be ready, waiting, or running
• The OS has a queue of PCBs for each state (sketched in code below)
• The ready queue contains the PCBs of processes that are ready to run

[Diagram: the Ready Queue header (head ptr, tail ptr) links the PCBs for Tetris, Word, and MSVC; the Wait Queue header (head ptr, tail ptr) links the PCBs for Defrag and Telnet]
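A minimal sketch of those per-state queues of PCBs (the field and structure names are illustrative, not an actual kernel's):

struct PCB { const char* name; PCB* next; };   // one PCB per process

struct PCBQueue {
    PCB* head = nullptr;   // the "head ptr" in the diagram
    PCB* tail = nullptr;   // the "tail ptr" in the diagram

    void enqueue(PCB* p) {               // add a PCB to the back of the queue
        p->next = nullptr;
        if (tail) tail->next = p; else head = p;
        tail = p;
    }

    PCB* dequeue() {                     // remove the PCB at the front, if any
        PCB* p = head;
        if (p) { head = p->next; if (!head) tail = nullptr; }
        return p;
    }
};

PCBQueue readyQueue, waitQueue;          // one queue of PCBs per process state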

Page 16: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

The Scheduling Problem

• We need to share the CPU among the processes in the ready queue
– The OS needs to decide which process gets the CPU next
– Once a process is selected, the OS needs to do some work to get the process running on the CPU

• Scheduling is declining in importance
– It was important with slow, heavily used, shared computers
– Now most CPU cycles are idle on PCs
– It is still important for supercomputers

Page 17: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

How Scheduling Works

• The scheduler is responsible for choosing a process from the ready queue. The scheduling algorithm implemented by this module determines how process selection is done.

• The scheduler hands the selected process off to the dispatcher, which gives the process control of the CPU

Page 18: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

When Does The OS Make Scheduling Decisions ?

• Scheduling decisions are always made:
– when a process terminates, and
– when a process switches from running to waiting

• In a preemptive system, scheduling decisions are also made when an interrupt occurs

Page 19: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Non-preemptive/Preemptive

• Non-preemptive scheduling:
– The process decides when it stops
– The scheduler must wait for a running process to voluntarily relinquish the CPU (the process either terminates or blocks)
– Used in the past; now only in real-time systems

• Preemptive scheduling:
– The OS can force a running process to give up control of the CPU, allowing the scheduler to pick another process
– Used by all major OSes today
– We will assume preemptive scheduling

Page 20: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Scheduling Goals

• Maximize throughput and resource utilization
– Need to overlap CPU and I/O activities

• Minimize response time, waiting time, and turnaround time

• Share the CPU in a fair way

• It may be difficult to meet all of these goals; sometimes tradeoffs are needed

Page 21: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

CPU and I/O Bursts

• Typical process execution pattern: use the CPU for a while (a CPU burst), then do some I/O operations (an I/O burst)

• CPU-bound processes perform I/O operations infrequently and tend to have long CPU bursts

• I/O-bound processes spend less time doing computation and tend to have short CPU bursts

Page 22: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Scheduling Algorithms: FCFS

• First Come, First Served (FCFS) (aka FIFO)
– The scheduler selects the process at the head of the ready queue; typically non-preemptive
– Example: 3 processes arrive at the ready queue in the following order: P1 (CPU burst = 240 ms), P2 (CPU burst = 30 ms), P3 (CPU burst = 30 ms) — worked out below

+ Simple to implement
– Average waiting time can be large
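Working the example (assuming all three processes are ready at essentially the same time): P1 waits 0 ms, P2 waits 240 ms, and P3 waits 270 ms, so the average waiting time is (0 + 240 + 270) / 3 = 170 ms. If the two short jobs had been at the head of the queue instead, the average would drop to (0 + 30 + 60) / 3 = 30 ms.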

Page 23: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Scheduling Algorithms: RR

• Round Robin (RR)
– Each process on the ready queue gets the CPU for a time slice, typically 10 - 100 ms
– A process runs until it blocks, terminates, or uses up its time slice
– Short jobs don't get stuck behind long jobs
– Average response time for jobs of the same length is bad (see the worked example below)

[Diagram: timelines comparing FCFS and RR schedules for the same set of jobs]
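As a rough illustration (the numbers are assumed, not from the slide): take three jobs of 60 ms each that arrive together. FCFS finishes them at 60, 120, and 180 ms, for an average of 120 ms. RR with a 30 ms slice finishes them at 120, 150, and 180 ms, for an average of 150 ms, so for equal-length jobs RR just makes everyone finish later.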

Page 24: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Scheduling Algorithms: RR

• RR pros & cons:
+ Works well for short jobs; typically used in timesharing systems
+ Shares the CPU "fairly"
– Overhead due to frequent context switches (but only about 1% of CPU time)
– Increases average waiting time for jobs that are the same length
– What's the right value for the time slice?
  • High throughput vs. low latency

Page 25: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Scheduling Algorithms: Priority

• Priority Scheduling
– Run the process with the highest priority
– Priority is based on some attribute of the process (e.g., memory requirements, owner of the process, etc.)

• Issue:
– Starvation: low-priority jobs may wait indefinitely
– Starvation can be prevented by aging (increase a process's priority as it waits); a sketch follows
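A minimal sketch of priority selection with aging (the data structure and the "+1 per decision" aging rate are assumptions):

#include <vector>
#include <algorithm>

struct Proc { int pid; int priority; };     // larger number = higher priority

// One scheduling decision: run the highest-priority ready process, then age
// everything left waiting so that low-priority jobs cannot starve.
// Assumes readyQueue is non-empty.
int pickNext(std::vector<Proc>& readyQueue) {
    auto best = std::max_element(readyQueue.begin(), readyQueue.end(),
        [](const Proc& a, const Proc& b) { return a.priority < b.priority; });
    int chosen = best->pid;
    for (Proc& p : readyQueue)
        if (p.pid != chosen)
            ++p.priority;                   // aging: waiting raises priority
    return chosen;
}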

Page 26: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Priority Inversion

• Three processes with different priorities: HI, MED, LOW

• HI runs if it can

• Suppose LOW holds a lock that HI wants
– LOW prevents HI from running
– MED prevents LOW from running
– HI can't run until MED finishes

• This is known as priority inversion

• Solution: raise the priority of a process holding a lock to the maximum priority of any process waiting on the lock
– LOW runs at HI's priority until it releases the lock (a sketch follows)
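A sketch of that priority-inheritance rule (the process and lock fields are illustrative, not a real kernel's):

struct Process { int priority; int basePriority; };

struct Lock {
    Process* holder = nullptr;

    // A waiter is about to block: the holder inherits the waiter's priority
    // if it is higher (so LOW temporarily runs at HI's priority).
    void onWaiterBlocks(Process* waiter) {
        if (holder && waiter->priority > holder->priority)
            holder->priority = waiter->priority;
    }

    // On release, the holder drops back to its original priority.
    void onRelease() {
        if (holder) holder->priority = holder->basePriority;
        holder = nullptr;
    }
};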

Page 27: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Scheduling Algorithms: SJF

• Shortest Job First (SJF)
– A special case of priority scheduling (priority = expected length of the next CPU burst)
– The scheduler chooses the process with the shortest remaining time to completion (think copy machine)
– Example: what's the average waiting time? (worked out below)
– Issue: how do you predict the future?
  • Systems use past process behavior to predict the length of the next CPU burst

[Gantt chart: SJF runs the two 30 ms jobs before the 240 ms job: 30 | 30 | 240]
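Using the earlier bursts of 240, 30, and 30 ms (all ready at once), SJF runs the two 30 ms jobs first, giving waiting times of 0, 30, and 60 ms, so the average waiting time is 30 ms, versus 170 ms under FCFS. A common way to "predict the future" (a standard technique, not named on the slide) is exponential averaging of past bursts: predicted_next = α × last_burst + (1 − α) × previous_prediction, typically with α ≈ 0.5.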

Page 28: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Scheduling Algorithms: SJF

• Shortest Job First (SJF)
• SJF pros & cons:
+ Better average response time
+ SJF can be proven to give optimal average response time
– Impossible to predict the future
– Unfair: possible starvation (many short jobs can keep long jobs from making any progress)

Page 29: Lecture Topics: 11/13 Semaphores Deadlock Scheduling

Multi-level Feedback

• Adaptive algorithm: a process's priority changes based on its past behavior

• A process starts with high priority
– because it's probably a short job

• Decrease the priority of processes that hog the CPU (CPU-bound jobs)

• Increase the priority of processes that don't use the CPU much (I/O-bound jobs); a sketch follows
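A sketch of the feedback rule (the number of levels and the promote/demote steps are assumptions):

// Multi-level feedback sketch: level 0 is the highest-priority queue.
const int NUM_LEVELS = 3;

struct Proc { int level = 0; };            // new processes start at the top

// Called when a process stops running: demote CPU hogs, promote processes
// that gave up the CPU early (typically because they blocked for I/O).
void feedback(Proc& p, bool usedFullTimeSlice) {
    if (usedFullTimeSlice) {
        if (p.level < NUM_LEVELS - 1) ++p.level;   // CPU-bound: lower priority
    } else {
        if (p.level > 0) --p.level;                // I/O-bound: raise priority
    }
}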