
Page 1: Recap from last class

Recap from last class
Operating System basics
Logical OS structure
Key OS issues

Processes
Basic unit of program execution
Process states: running, ready, and blocked
Metadata maintained in the Process Control Block (PCB)

Page 2: Recap from last class

Wei Gao

ECE 1175 Embedded Systems Design

Operating Systems - II

Page 3: Recap from last class

What’s in a process? A process consists of (at least):
an address space
the code for the running program
the data for the running program
an execution stack and stack pointer (SP)
• traces the state of procedure calls made
the program counter (PC), indicating the next instruction
a set of general-purpose processor registers and their values
a set of OS resources
• open files, network connections, sound channels, …

That’s a lot!! Can we decompose a process?
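
The state listed above is roughly what the OS records per process in its Process Control Block. As a rough illustration (not taken from the course materials, and with hypothetical field names), a PCB-like C struct might look like this:

#include <stdint.h>

#define MAX_OPEN_FILES 16

enum proc_state { PROC_RUNNING, PROC_READY, PROC_BLOCKED };

struct pcb {
    int             pid;                        /* process identifier */
    enum proc_state state;                      /* running / ready / blocked */
    uintptr_t       pc;                         /* saved program counter */
    uintptr_t       sp;                         /* saved stack pointer */
    uintptr_t       regs[16];                   /* saved general-purpose registers */
    void           *page_table;                 /* describes the address space */
    int             open_files[MAX_OPEN_FILES]; /* OS resources: open file descriptors */
    struct pcb     *next;                       /* link on a scheduler queue */
};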

Page 4: Recap from last class

Thread
A lightweight process
Separates execution from the process’s memory space (which threads share)
Better concurrency!

Multithreading is useful even on a uniprocessor
even though only one thread can run at a time
creating concurrency does not require creating new processes
“faster / better / cheaper”

Page 5: Recap from last class

Thread

[Figure: address spaces vs. threads in four process/thread models]
one thread/process, one process: MS/DOS
one thread/process, many processes: older UNIXes
many threads/process, one process: Java
many threads/process, many processes: Mach, NT, Chorus, Linux, …

Page 6: Recap from last class

Process Memory Space

[Figure: layout of a process’s address space, from address 0x00000000 to 0x7FFFFFFF]
code (text segment), where the PC points
static data (data segment)
heap (dynamically allocated memory)
stack (dynamically allocated memory), where the SP points

Page 7: Recap from last class

Memory Space with Threads

[Figure: the same address space (0x00000000 to 0x7FFFFFFF), now with one stack per thread]
code (text segment)
static data (data segment)
heap (dynamically allocated memory)
thread 1 stack, thread 2 stack, thread 3 stack
each thread has its own PC and SP: PC(T1)/SP(T1), PC(T2)/SP(T2), PC(T3)/SP(T3)
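
A small sketch of this layout in practice, using POSIX threads (not part of the original slides): the global lives in the shared data segment and is visible to every thread, while each thread's local variable sits on that thread's own stack.

#include <pthread.h>
#include <stdio.h>

static int shared = 42;                    /* data segment: one copy seen by every thread */

static void *worker(void *arg) {
    int local = (int)(long)arg;            /* lives in this thread's own stack frame */
    printf("thread %d: shared=%d at %p, local at %p\n",
           local, shared, (void *)&shared, (void *)&local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;                              /* the two "local at ..." addresses differ: separate stacks */
}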

Page 8: Recap from last class

Synchronization
Threads cooperate in multithreaded programs
to share resources and access shared data structures
• e.g., threads accessing a memory cache in a web server
also, to coordinate their execution
• e.g., a disk reader thread hands off blocks to a network writer thread through a circular buffer

[Figure: disk reader thread feeding a network writer thread through a circular buffer]

Page 9: Recap from last class

Shared Resources
Major focus of synchronization across processes/threads
Basic problem: two concurrent threads are accessing a shared variable
if the variable is read/modified/written by both threads, then access to the variable must be controlled
otherwise, unexpected results may occur

Page 10: Recap from last class

The Classic Example
Suppose we have to implement a function to withdraw money from a bank account:

int withdraw(account, amount) {
    int balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    return balance;
}

Now suppose that you and your partner share a bank account with a balance of $100.00
what happens if you both go to separate ATMs and simultaneously withdraw $10.00 from the account?

Page 11: Recap from last class

The Classic Example
Represent the situation by creating a separate thread for each person to do the withdrawals
have both threads run on the same bank mainframe:

Thread 1:
int withdraw(account, amount) {
    int balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    return balance;
}

Thread 2:
int withdraw(account, amount) {
    int balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    return balance;
}

Page 12: Recap from last class

Interleaved Schedule

Execution sequence as seen by the CPU:

    balance = get_balance(account);      (thread 1)
    balance -= amount;                   (thread 1)
        (context switch)
    balance = get_balance(account);      (thread 2)
    balance -= amount;                   (thread 2)
    put_balance(account, balance);       (thread 2)
        (context switch)
    put_balance(account, balance);       (thread 1)

Who comes first??
(both threads execute the same withdraw() code shown on the previous slide)

Page 13: Recap from last class

And this?

Thread 1: i++;     Thread 2: i++;
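
Even i++ races, because it is a read-modify-write (load, add, store). A minimal pthread sketch (illustrative, not from the slides) that usually loses updates; how many are lost depends on timing and compiler optimization:

#include <pthread.h>
#include <stdio.h>

#define N_ITERS 1000000

static int i = 0;                          /* shared, unprotected */

static void *incr(void *arg) {
    (void)arg;
    for (int n = 0; n < N_ITERS; n++)
        i++;                               /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, incr, NULL);
    pthread_create(&b, NULL, incr, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("i = %d, expected %d\n", i, 2 * N_ITERS);   /* often prints less than expected */
    return 0;
}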

Page 14: Recap from last class

The Real Issue Here
The problem is that two concurrent threads (or processes) access a shared resource (account) without any synchronization
this creates a race condition
• output is non-deterministic, depends on timing

Synchronization is necessary for any shared data structure
buffers, queues, lists, hash tables, scalars, …

Page 15: Recap from last class

Mutual Exclusion
Code that uses mutual exclusion to synchronize its execution is called a critical section
only one thread at a time can execute in the critical section
all other threads are forced to wait on entry
when a thread leaves a critical section, another can enter

Mechanics for building critical sections:
Messages
Locks
Semaphores
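
Using one of these mechanisms, a lock, the slides' withdraw() can be turned into a critical section. A minimal sketch with a POSIX mutex; the single in-memory account and the stubbed get_balance()/put_balance() helpers are assumptions for illustration only:

#include <pthread.h>

static int account_balance = 100;
static pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;

static int  get_balance(int account)              { (void)account; return account_balance; }
static void put_balance(int account, int balance) { (void)account; account_balance = balance; }

int withdraw(int account, int amount) {
    pthread_mutex_lock(&account_lock);      /* acquire(): enter the critical section */
    int balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    pthread_mutex_unlock(&account_lock);    /* release(): leave the critical section */
    return balance;
}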

Page 16: Recap from last class

Locks
A lock is an object (in memory) that provides the following two operations:
acquire(): a thread calls this before entering a critical section
release(): a thread calls this after leaving a critical section

Two basic types of locks:
Spinlock: test-and-set
Blocking
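
A minimal sketch of a test-and-set spinlock using C11 atomics (spinlock_t, spin_acquire, and spin_release are illustrative names, not a standard API):

#include <stdatomic.h>

typedef struct { atomic_flag held; } spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static spinlock_t lock = SPINLOCK_INIT;

static void spin_acquire(spinlock_t *l) {
    /* atomically set the flag and get its old value; keep spinning while it was already set */
    while (atomic_flag_test_and_set(&l->held))
        ;                                   /* busy-wait */
}

static void spin_release(spinlock_t *l) {
    atomic_flag_clear(&l->held);            /* make the lock available again */
}

A spinning waiter burns CPU while it waits, so spinlocks suit short critical sections; a blocking lock instead puts the waiter to sleep until release.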

Page 17: Recap from last class

Semaphores
A semaphore is a variable that is manipulated through two operations, P and V (Dutch for “test” and “increment”):
• P(sem) (wait/down)
  – block until sem > 0, then subtract 1 from sem and proceed
• V(sem) (signal/up)
  – add 1 to sem

Do these operations atomically

Page 18: Recap from last class

How Do Semaphores Work?
Each semaphore has an associated queue of threads
when P(sem) is called by a thread:
• if sem was “available” (> 0), decrement sem and let the thread continue
• if sem was “unavailable” (<= 0), place the thread on the associated queue; dispatch some other runnable thread
when V(sem) is called by a thread:
• if thread(s) are waiting on the associated queue, unblock one
  – place it on the ready queue
  – might as well let the “V-ing” thread continue execution
  – or not, depending on priority
• otherwise (when no threads are waiting on the sem), increment sem
  – the signal is “remembered” for the next time P(sem) is called
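
A user-level sketch of P and V built from a pthread mutex and condition variable (the sem_sketch_t name is hypothetical; a real kernel would maintain an explicit queue of blocked threads, as described above):

#include <pthread.h>

typedef struct {
    int             count;
    pthread_mutex_t m;
    pthread_cond_t  cv;
} sem_sketch_t;

static void sem_sketch_init(sem_sketch_t *s, int initial) {
    s->count = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

static void P(sem_sketch_t *s) {              /* wait / down */
    pthread_mutex_lock(&s->m);
    while (s->count <= 0)                     /* block until sem > 0 */
        pthread_cond_wait(&s->cv, &s->m);
    s->count--;                               /* then subtract 1 and proceed */
    pthread_mutex_unlock(&s->m);
}

static void V(sem_sketch_t *s) {              /* signal / up */
    pthread_mutex_lock(&s->m);
    s->count++;                               /* add 1 to sem */
    pthread_cond_signal(&s->cv);              /* unblock one waiter, if any */
    pthread_mutex_unlock(&s->m);
}

The while loop matters: a woken thread re-checks the count before proceeding, which also tolerates spurious wakeups.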

Page 19: Recap from last class

Two Types of Semaphores
Binary semaphore (aka mutex semaphore)
sem is initialized to 1
guarantees mutually exclusive access to a resource (e.g., a critical section of code)
only one thread/process allowed entry at a time

Counting semaphore
sem is initialized to N
• N = number of units available
represents resources with many (identical) units available
allows threads to enter as long as more units are available
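
A short sketch with POSIX semaphores showing both flavors: one initialized to 1 for mutual exclusion, one initialized to N for a pool of identical units (N_UNITS and the helper names are illustrative):

#include <semaphore.h>

#define N_UNITS 4

static sem_t mutex_sem;                    /* binary semaphore: initialized to 1 */
static sem_t pool_sem;                     /* counting semaphore: initialized to N_UNITS */

static void init_sems(void) {
    sem_init(&mutex_sem, 0, 1);
    sem_init(&pool_sem, 0, N_UNITS);
}

static void critical_section(void) {
    sem_wait(&mutex_sem);                  /* P: only one thread inside at a time */
    /* ... touch shared data ... */
    sem_post(&mutex_sem);                  /* V: let the next thread in */
}

static void use_one_unit(void) {
    sem_wait(&pool_sem);                   /* P: blocks once all N_UNITS are taken */
    /* ... use one unit of the resource ... */
    sem_post(&pool_sem);                   /* V: return the unit to the pool */
}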

Page 20: Recap from last class

Classic Problems
Bounded buffer (producer/consumer) problem
there is a buffer in memory with N entries
producer threads insert entries into it (one at a time)
consumer threads remove entries from it (one at a time)

Threads are concurrent
so, we must use synchronization constructs to control access to the shared variables describing buffer state

[Figure: circular buffer with head and tail pointers]
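
A common textbook solution, sketched here with POSIX semaphores and a mutex (buffer size N and the helper names are illustrative): empty_slots counts free entries, full_slots counts filled entries, and the mutex protects head/tail.

#include <pthread.h>
#include <semaphore.h>

#define N 8

static int buffer[N];
static int head = 0, tail = 0;               /* shared buffer state */

static sem_t empty_slots;                    /* initialized to N */
static sem_t full_slots;                     /* initialized to 0 */
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

static void buffer_init(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
}

static void produce(int item) {
    sem_wait(&empty_slots);                  /* wait for a free slot */
    pthread_mutex_lock(&buf_lock);
    buffer[tail] = item;
    tail = (tail + 1) % N;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&full_slots);                   /* one more item available */
}

static int consume(void) {
    sem_wait(&full_slots);                   /* wait for an item */
    pthread_mutex_lock(&buf_lock);
    int item = buffer[head];
    head = (head + 1) % N;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&empty_slots);                  /* one more free slot */
    return item;
}

Waiting on empty_slots before taking the mutex is deliberate: reversing that order would let a producer sleep on a full buffer while holding the mutex, blocking every consumer.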

Page 21: Recap from last class

Reader/Writer Using Semaphores

var mutex: semaphore = 1      ; controls access to readcount
    wrt: semaphore = 1        ; controls entry for a writer or first reader
    readcount: integer = 0    ; number of active readers

writer:
    P(wrt)                          ; any writers or readers?
    <perform write operation>
    V(wrt)                          ; allow others

reader:
    P(mutex)                        ; ensure exclusion
    readcount++                     ; one more reader
    if readcount == 1 then P(wrt)   ; if we’re the first, synch with writers
    V(mutex)
    <perform read operation>
    P(mutex)                        ; ensure exclusion
    readcount--                     ; one fewer reader
    if readcount == 0 then V(wrt)   ; no more readers, allow a writer
    V(mutex)

Page 22: Recap from last class

Classic Problems
Dining philosophers problem

Page 23: Recap from last class

Deadlock


Page 24: Recap from last class

Deadlock
Definition: A thread is deadlocked when it’s waiting for an event that can never occur
I’m waiting for you to clear the intersection, so I can proceed
• but you can’t move until he moves, and he can’t move until she moves, and she can’t move until I move
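
A minimal pthread sketch of such a cycle (not from the slides): two threads take the same two locks in opposite order, and each ends up waiting for an event that can never occur (the other thread releasing its lock).

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread_1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);
    sleep(1);                        /* make the bad interleaving likely */
    pthread_mutex_lock(&lock_b);     /* blocks: thread 2 holds lock_b */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread_2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_b);
    sleep(1);
    pthread_mutex_lock(&lock_a);     /* blocks: thread 1 holds lock_a */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_1, NULL);
    pthread_create(&t2, NULL, thread_2, NULL);
    pthread_join(t1, NULL);          /* never returns: both threads are deadlocked */
    pthread_join(t2, NULL);
    return 0;
}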

Page 25: Recap from last class

Deadlock
Resource graph
A deadlock exists if there is an irreducible cycle in the resource graph

Page 26: Recap from last class

Resource Graph with No Cycle
No deadlock

Page 27: Recap from last class

Resource Graph with a Deadlock
An irreducible cycle

Page 28: Recap from last class

What is a reducible cycle?
Resource graph with a cycle but no deadlock
How to reduce?

Page 29: Recap from last class

Approaches to Deadlock
Break one of the four required conditions:
Mutual Exclusion?
Hold and Wait?
No Preemption?
Circular Wait?

Broadly classified as:
Prevention (static), or
Avoidance (dynamic)

Page 30: Recap from last class

Prevention (static)
Hold and Wait
• each thread obtains all resources at the beginning; blocks until all are available
• drawback?
Circular Wait
• resources are ordered; each thread obtains them in sequence (which means acquiring some before they are actually needed)
• why does this work?
• pros and cons?
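
A sketch of the ordered-resources idea with two pthread mutexes (illustrative names): every thread acquires lock_a before lock_b, never the reverse, so a circular wait cannot form.

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void do_work(void) {
    pthread_mutex_lock(&lock_a);     /* every thread takes lock_a first ... */
    pthread_mutex_lock(&lock_b);     /* ... and lock_b second */
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}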

Page 31: Recap from last class

Avoidance (dynamic)
Circular Wait
each thread states its maximum claim for every resource type
system runs the Banker’s Algorithm at each allocation request
• Banker ⇒ incredibly conservative
• if I were to allocate you that resource, and then everyone were to request their maximum claim for every resource, could I find a way to allocate the remaining resources so that everyone finished?
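
A compact sketch of the safety test inside the Banker’s Algorithm (array dimensions and names are illustrative): given what is currently available, each thread’s current allocation, and its remaining maximum claim (need), check whether some ordering lets every thread finish.

#include <string.h>

#define NTHREADS   4
#define NRESOURCES 3

int is_safe(const int available[NRESOURCES],
            const int need[NTHREADS][NRESOURCES],
            const int allocated[NTHREADS][NRESOURCES]) {
    int work[NRESOURCES];
    int finished[NTHREADS] = {0};
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < NTHREADS; ) {
        int progress = 0;
        for (int t = 0; t < NTHREADS; t++) {
            if (finished[t]) continue;
            int can_finish = 1;
            for (int r = 0; r < NRESOURCES; r++)
                if (need[t][r] > work[r]) { can_finish = 0; break; }
            if (can_finish) {
                for (int r = 0; r < NRESOURCES; r++)
                    work[r] += allocated[t][r];   /* thread t finishes and releases its resources */
                finished[t] = 1;
                done++;
                progress = 1;
            }
        }
        if (!progress) return 0;                  /* no thread can finish: unsafe state */
    }
    return 1;                                     /* a safe ordering exists */
}

On a real request, the system would tentatively grant the allocation, run this check on the resulting state, and roll the grant back (making the requester wait) if the state is unsafe.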

Page 32: Recap from last class

Summary
Managing concurrency – the core task of the OS

Synchronization between processes/threads
Race conditions

Mutual exclusion
Locks
Semaphores
Deadlocks