
Page 1: Friday, June 16, 2006

1

Friday, June 16, 2006

"In order to maintain secrecy, this posting will self-destruct in five

seconds. Memorize it, then eat your computer."

- Anonymous

Page 2: Friday, June 16, 2006

2

g++ -R/opt/sfw/lib somefile.c

to add /opt/sfw/lib to the runtime shared library lookup path

Edit your .bash_profile and add /opt/sfw/lib to LD_LIBRARY_PATH
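For instance (a minimal sketch, assuming a Bourne-style shell such as bash), the line appended to .bash_profile might look like:

export LD_LIBRARY_PATH=/opt/sfw/lib:$LD_LIBRARY_PATH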

Page 3: Friday, June 16, 2006

3

Scheduling in Unix - other versions also possible

Designed to provide good response to interactive processes

Uses multiple queues

Each queue is associated with a range of non-overlapping priority values

Page 4: Friday, June 16, 2006

4

Scheduling in Unix - other versions also possible

Processes executing in user mode have positive values

Processes executing in kernel mode (doing system calls) have negative values

Negative values have higher priority than positive ones, and large positive values have the lowest priority

Page 5: Friday, June 16, 2006

5

Scheduling in Unix

Only processes that are in memory and ready to run are located on queues

The scheduler searches the queues starting at the highest priority.

The first process on that queue is chosen and started. It runs for one time quantum (say 100 ms) or until it blocks.

If the process uses up its quantum, it is preempted and put back on a queue. Processes within the same priority range share the CPU in round-robin fashion.
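As an illustration only (a minimal sketch, not the actual UNIX scheduler; the queue layout and names such as pick_next are hypothetical), selecting the first process from the highest-priority non-empty queue could look like this in C:

#include <stdio.h>

#define NQUEUES 8          /* one queue per priority range (assumption) */
#define QUEUE_LEN 16

typedef struct {
    int pids[QUEUE_LEN];   /* ready processes, front of the queue at index 0 */
    int count;
} queue_t;

/* Queue 0 holds the highest-priority range. */
static int pick_next(queue_t queues[]) {
    for (int q = 0; q < NQUEUES; q++) {
        if (queues[q].count > 0) {
            int pid = queues[q].pids[0];
            /* remove it from the front; it now runs for one quantum */
            for (int i = 1; i < queues[q].count; i++)
                queues[q].pids[i - 1] = queues[q].pids[i];
            queues[q].count--;
            return pid;
        }
    }
    return -1;             /* no process in memory is ready to run */
}

int main(void) {
    queue_t queues[NQUEUES];
    for (int q = 0; q < NQUEUES; q++) queues[q].count = 0;
    queues[2].pids[0] = 42; queues[2].count = 1;   /* higher priority */
    queues[5].pids[0] = 99; queues[5].count = 1;   /* lower priority */
    printf("next to run: pid %d\n", pick_next(queues));  /* prints 42 */
    return 0;
}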

Page 6: Friday, June 16, 2006

6

Scheduling in Unix

Every second each process's priority is recalculated (usually based on its recent CPU usage) and the process is attached to the appropriate queue.

The CPU_usage term decays over time, so a process that has not run recently gradually regains priority.
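A rough sketch of that once-per-second recalculation (the halving decay and the cpu_usage/2 + nice formula are assumptions in the spirit of classic UNIX schedulers, not the exact rule of any particular version):

#include <stdio.h>

typedef struct {
    int cpu_usage;   /* CPU ticks used recently */
    int nice;        /* user-settable niceness */
    int priority;    /* larger value = lower priority for user-mode processes */
} proc_t;

static void recalculate(proc_t *p) {
    p->cpu_usage = p->cpu_usage / 2;             /* decay: old usage fades away */
    p->priority  = p->cpu_usage / 2 + p->nice;   /* heavy recent CPU use lowers priority */
    /* the process would now be moved to the queue covering this priority */
}

int main(void) {
    proc_t p = { .cpu_usage = 40, .nice = 0, .priority = 0 };
    recalculate(&p);
    printf("cpu_usage=%d priority=%d\n", p.cpu_usage, p.priority);  /* 20 and 10 */
    return 0;
}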

Page 7: Friday, June 16, 2006

7

Scheduling in UnixProcess might have to block before system call is

complete. While waiting it is put in a queue with a negative number. (determined by the event it is waiting for)

Reason: Allow process to run immediately after each request

is completed, so that it make the next one quickly If it is waiting for terminal input it is an interactive

process.

CPU bound get service when all I/O bound and interactive processes are blocked

Page 8: Friday, June 16, 2006

8

The Unix scheduler is based on a multi-level queue structure

Page 9: Friday, June 16, 2006

9

top provides an ongoing look at processor activity in real time

The default nice value is zero in UNIX. The allowed range is -20 to 20.
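A small POSIX illustration (not from the slides) that reads the current nice value and then lowers this process's priority with nice():

#include <errno.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void) {
    int before = getpriority(PRIO_PROCESS, 0);   /* usually 0 by default */
    errno = 0;
    if (nice(5) == -1 && errno != 0)             /* make this process "nicer" */
        perror("nice");
    printf("nice value: %d -> %d\n", before, getpriority(PRIO_PROCESS, 0));
    return 0;
}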

Page 10: Friday, June 16, 2006

10

Windows

Priority-based preemptive scheduling. The selected thread runs until:

it is preempted by a higher-priority thread

its time quantum ends

it calls a blocking system call

it terminates

Page 11: Friday, June 16, 2006

11

Win32 API

SetPriorityClass sets the priority class of all threads in the caller's process: real time, high, above normal, normal, below normal and base.

SetThreadPriority sets the priority of a thread relative to other threads in its process: time critical, highest, above normal, normal, below normal, lowest, idle.
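A short illustration of the two calls named above (Windows only; error handling kept minimal):

#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Raise the priority class of the whole process ... */
    if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS))
        printf("SetPriorityClass failed: %lu\n", GetLastError());

    /* ... and lower the relative priority of the calling thread. */
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL))
        printf("SetThreadPriority failed: %lu\n", GetLastError());

    return 0;
}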

Page 12: Friday, June 16, 2006

12

Win32 API

How many combinations? Six priority classes times seven thread priorities gives 42 combinations; the system has 32 priorities (0 - 31) onto which these are mapped.

Page 13: Friday, June 16, 2006

13

RR for multiple threads at same priority

Page 14: Friday, June 16, 2006

14

A thread is selected irrespective of the process it belongs to.

Priorities 16-31 are called real time, but they are not.

Priorities 16-31 are reserved for the system itself

Page 15: Friday, June 16, 2006

15

Ordinary users are not allowed those priorities. Why? A user thread running at those priorities could starve the system's own threads.

Users run at priorities 1-15

Page 16: Friday, June 16, 2006

16

Windows

Priority lowering depends on a thread's time quantum.

Priority boosting:

When a thread is released from a wait operation
• Thread waiting for keyboard
• Thread waiting for disk operation

Good response time for interactive threads

Page 17: Friday, June 16, 2006

17

Windows

Currently active window gets a boost to increase its response time

Also keeps track of when a ready thread ran last

Priority boosting does not go above priority 15.

Page 18: Friday, June 16, 2006

18

Example: Multiprogramming

5 jobs, each with 80% I/O wait.

If one of these jobs enters the system and is the only process there, then it uses 12 seconds of CPU time for each minute of real time.

CPU busy: 20% of the time. If that job needs 4 minutes of CPU time, it will require at least 4 / 0.20 = 20 minutes in all to get the job done.

Page 19: Friday, June 16, 2006

19

Example: Multiprogramming

Too simplistic a model: if the average job computes for only 20% of the time, then with five such processes the CPU should be busy all the time.

BUT…

Page 20: Friday, June 16, 2006

20

Example: Multiprogramming

But that assumes the five processes are never all waiting for I/O at the same time.

CPU utilization = 1 - p^n

n = number of processes

p = probability that a process is waiting for I/O, so p^n is the probability that all n are waiting at the same time

Approximation only: there might be dependencies between the processes.
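A quick check of the formula for the example from the earlier slides (p = 0.8, i.e. 80% I/O wait):

#include <math.h>
#include <stdio.h>

int main(void) {
    double p = 0.8;                      /* fraction of time a process waits for I/O */
    for (int n = 1; n <= 10; n++)
        printf("n = %2d  utilization = %.3f\n", n, 1.0 - pow(p, n));
    return 0;   /* n = 5 gives about 0.672: the CPU is busy roughly 67% of the time */
}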

Page 21: Friday, June 16, 2006

21

CPU utilization as a function of number of processes in memory

Page 22: Friday, June 16, 2006

22

Producer-Consumer Problem

Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.

unbounded-buffer places no practical limit on the size of the buffer.

bounded-buffer assumes that there is a fixed buffer size.

Page 23: Friday, June 16, 2006

23

Bounded-Buffer – Shared-Memory Solution

Shared data

var n;

type item = … ;

var buffer: array [0..n–1] of item;

in, out: 0..n–1;

Page 24: Friday, June 16, 2006

24

Bounded-Buffer – Shared-Memory Solution

Producer process

repeat

produce an item in nextp

while (in+1) mod n = out do no-op;

buffer[in] := nextp;

in := (in+1) mod n;

until false;

Page 25: Friday, June 16, 2006

25

Bounded-Buffer (Cont.)

Consumer process

repeat

while in = out do no-op;

nextc := buffer[out];

out := (out+1) mod n;

consume the item in nextc

until false;

Page 26: Friday, June 16, 2006

26

Bounded-Buffer (Cont.)

Solution is correct, but … it can use at most n–1 of the n buffer slots at the same time (when in = out the buffer looks empty).

Page 27: Friday, June 16, 2006

27

Bounded-Buffer

Shared data

#define BUFFER_SIZE 10

typedef struct {

. . .

} item;

item buffer[BUFFER_SIZE];

int in = 0;

int out = 0;

int counter = 0;

Page 28: Friday, June 16, 2006

28

Bounded-Buffer

Producer process

item nextProduced;

while (1) {

while (counter == BUFFER_SIZE)

; /* do nothing */

buffer[in] = nextProduced;

in = (in + 1) % BUFFER_SIZE;

counter++;

}

Page 29: Friday, June 16, 2006

29

Bounded-Buffer

Consumer process

item nextConsumed;

while (1) {

while (counter == 0)

; /* do nothing */

nextConsumed = buffer[out];

out = (out + 1) % BUFFER_SIZE;

counter--;

}

Page 30: Friday, June 16, 2006

30

Bounded Buffer

The statements

counter++;

counter--;

must be performed atomically.

Atomic operation means an operation that completes in its entirety without interruption.

Page 31: Friday, June 16, 2006

31

Bounded Buffer

The statement "counter++" may be implemented in machine language as:

register1 = counter

register1 = register1 + 1

counter = register1

The statement "counter--" may be implemented as:

register2 = counter

register2 = register2 - 1

counter = register2
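For example, with counter initially 5, one possible interleaving of these instructions leaves counter at 4 (it could equally well end at 6), even though one increment and one decrement should leave it at 5:

T0: producer executes register1 = counter         (register1 = 5)
T1: producer executes register1 = register1 + 1   (register1 = 6)
T2: consumer executes register2 = counter         (register2 = 5)
T3: consumer executes register2 = register2 - 1   (register2 = 4)
T4: producer executes counter = register1         (counter = 6)
T5: consumer executes counter = register2         (counter = 4)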

Page 32: Friday, June 16, 2006

32

Process Synchronization

Cooperating processes executing in a system can affect each other

Concurrent access to shared data may result in data inconsistency

Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes

Page 33: Friday, June 16, 2006

33

Race Condition

Problem in the previous example: one process started using the shared variable before another process was finished with it.

Race condition: the situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last.

To prevent race conditions, concurrent processes must be synchronized.

Page 34: Friday, June 16, 2006

34

Two processes want to access shared memory at the same time.

Debugging is very difficult ...
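A tiny runnable demonstration of such a race (a sketch assuming POSIX threads, not from the slides): two threads increment a shared counter without synchronization, so the final value is usually less than the expected 2000000 and differs from run to run.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;            /* not atomic: load, add, store */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}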

Page 35: Friday, June 16, 2006

35

Example ...

Page 36: Friday, June 16, 2006

36

Solution #1

Page 37: Friday, June 16, 2006

37

Solution #2

Page 38: Friday, June 16, 2006

38

Solution #3

Page 39: Friday, June 16, 2006

39

The Critical-Section Problem

n processes all competing to use some shared data

Each process has a code segment, called critical section, in which the shared data is accessed.

Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

Page 40: Friday, June 16, 2006

40

Page 41: Friday, June 16, 2006

41

No processes may be simultaneously inside their critical sections

No assumptions may be made about the speeds or number of CPUs

No process running outside the critical region may block other processes

No process waits forever to enter its critical section

Page 42: Friday, June 16, 2006

42

Only 2 processes, P0 and P1

General structure of process Pi (other process Pj)

do {

entry section

critical section

exit section

remainder section

} while (1);

Processes may share some common variables to synchronize their actions.

Page 43: Friday, June 16, 2006

43

Solution to Critical-Section Problem

1.Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

Page 44: Friday, June 16, 2006

44

Solution to Critical-Section Problem

2.Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

Page 45: Friday, June 16, 2006

45

Solution to Critical-Section Problem

3.Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Assume that each process executes at a nonzero speed

No assumption concerning relative speed of the n processes.

Page 46: Friday, June 16, 2006

46

Assume that basic machine-language instructions such as load and store are executed atomically.

Page 47: Friday, June 16, 2006

47

Algorithm 1

turn is initialized to 0 (or 1)
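A sketch of Algorithm 1 (strict alternation) for two processes P0 and P1, written here with POSIX threads so it can actually be run; volatile is used only to keep the compiler from optimizing the busy-wait away, and real code would use atomics or locks instead:

#include <pthread.h>
#include <stdio.h>

static volatile int turn = 0;    /* initialized to 0 (or 1) */
static int shared = 0;           /* the data the critical section protects */

static void *process(void *arg) {
    int i = *(int *)arg;         /* this process's id: 0 or 1 */
    int j = 1 - i;               /* the other process */
    for (int k = 0; k < 100000; k++) {
        while (turn != i)
            ;                    /* entry section: busy-wait for our turn */
        shared++;                /* critical section */
        turn = j;                /* exit section: hand the turn over */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, process, &id0);
    pthread_create(&t1, NULL, process, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}

Note the drawback visible here: the two processes must alternate strictly, so one cannot enter its critical section twice in a row even if the other is still busy in its remainder section.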