Concurrent Programming
Andrew Case
Slides adapted from Mohamed Zahran, Jinyang Li, Clark Barrett, Randy Bryant and Dave O’Hallaron


Page 1

Concurrent Programming

Andrew Case

Slides adapted from Mohamed Zahran, Jinyang Li, Clark Barrett, Randy Bryant and Dave O’Hallaron

Page 2

Concurrency vs. Parallelism

Concurrency: at least two tasks make progress within the same time frame.
- Not necessarily at the same instant
- Includes techniques like time-slicing
- Can be implemented on a single processing unit
- A more general concept than parallelism

Parallelism: at least two tasks execute literally at the same time.
- Requires hardware with multiple processing units

Page 3

Concurrency vs. Parallelism [diagram]

Page 4

Concurrent Programming is Hard!

Classical problem classes of concurrent programs:
- Races: the outcome depends on arbitrary scheduling decisions elsewhere in the system. Example: who gets the last seat on the airplane?
- Deadlock: improper resource allocation prevents forward progress. Example: traffic gridlock.
- Livelock / starvation / unfairness: external events and/or system scheduling decisions can prevent sub-task progress. Example: people always jump in front of you in line.

Many aspects of concurrent programming are beyond the scope of this class.

Page 5

Echo Server

Echo service:
- Originally used for testing round-trip times / connectivity checks
- Superseded by ping/ICMP

Protocol:
- Client connects to server
- Client sends a message to the server
- Server sends the same message back to the client (echoes it)

Page 6

Sequential Echo Server

[Sequence diagram: the server calls socket, bind, listen (open_listenfd), then blocks in accept. The client calls socket and connect (open_clientfd), sending a connection request. Once connected, the client writes and the server reads; the server writes the echo back and the client reads it. When the client closes, the server reads EOF, closes the connection (connection finished), and awaits a connection request from the next client.]

Page 7

Problems with Sequential Servers

- Low utilization: the server is idle while waiting for client 1's requests. It could have served another client during those idle times!
- Increased latency: client 2 waits on client 1 to finish before getting served.

Solution: implement concurrent servers that serve multiple clients at the same time.

Page 8

Sequential Echo Server

Iterative servers process one request at a time.

[Timeline diagram: client 1 connects; the server accepts and services its reads/writes until client 1 closes. Client 2's connect completes, but client 2 must wait for client 1 to finish; its call to read does not return until the server finally accepts client 2 and echoes its data.]

Page 9

Creating Concurrent Flows

Allow the server to handle multiple clients simultaneously:

1. Processes
   - Fork a new server process to handle each connection
   - Kernel automatically interleaves multiple logical flows
   - Each process has its own private address space
2. Threads (same process, multiple flows)
   - Create one server thread to handle each connection
   - Kernel automatically interleaves multiple logical flows
   - Each flow shares the same address space
3. I/O multiplexing with select()
   - Programmer manually interleaves multiple logical flows
   - All flows share the same address space
   - Relies on lower-level system abstractions

Page 10

Concurrent Servers: Multiple Processes

Spawn a separate process for each client.

[Timeline diagram: client 1 connects; the server accepts, forks child 1 to handle the connection, and immediately calls accept again. Client 1 blocks waiting for the user to type in data. Client 2 connects; the server forks child 2. Each child loops on read/write for its own client until it reads EOF and closes.]

Page 11

Sequential Echo Server

int main(int argc, char **argv) {
    int listenfd, connfd;
    listenfd = socket(…);    /* create socket */
    bind(listenfd, …);       /* bind socket to port */
    listen(listenfd, …);     /* listen for incoming connections */

    while (1) {
        connfd = accept(listenfd, …);    /* accept a connection request */
        while (readline(connfd, …) > 0)  /* handle echo requests until client terminates */
            write(connfd, …);            /* write back to client */
        close(connfd);
    }
    exit(0);
}

Page 12

Process-Based Concurrent Server

int main(int argc, char **argv) {
    int listenfd, connfd;
    signal(SIGCHLD, sigchld_handler);    /* install handler to reap children */
    listenfd = socket(…);    /* create socket */
    bind(listenfd, …);       /* bind socket to port */
    listen(listenfd, …);     /* listen for incoming connections */

    while (1) {
        connfd = accept(listenfd, …);    /* client connects */
        if (fork() == 0) {
            close(listenfd);             /* child closes its listening socket */
            while (readline(connfd, …) > 0)  /* child services client */
                write(connfd, …);
            close(connfd);               /* child closes connection with client */
            exit(0);                     /* child exits */
        }
        close(connfd);    /* parent closes connected socket (important!) */
    }
}

Fork a separate process for each client. Does not allow any communication between different client handlers.

Page 13

Process Concurrency Requirements

- The listening server process must reap zombie children to avoid a memory leak that could eventually be fatal.
- The listening server process must close its copy of connfd:
  - The kernel keeps a reference count for each socket/open file
  - After fork, refcnt(connfd) == 2
  - The connection will not be closed until refcnt(connfd) == 0

Page 14

Process-Based Concurrent Server – Reaping

void sigchld_handler(int sig) {
    while (waitpid(-1, 0, WNOHANG) > 0)
        ;    /* reap all available zombie children */
    return;
}

Page 15

Process-Based Execution Model

- Each client is handled by an independent process
- No shared state between them
- Both parent and child have copies of listenfd and connfd

[Diagram: a listening server process receives connection requests and forks one server process per client; client 1 data flows to client 1's server process, client 2 data to its own.]

Page 16

Pros and Cons of Process-Based Designs

+ Handle multiple connections concurrently
+ Clean sharing model
  - Shared file tables, but separate file descriptors
  - No shared address space (no shared variables/globals)
+ Simple and straightforward
– Additional overhead for process control
– Nontrivial to share data between processes
  - Requires IPC (interprocess communication) mechanisms: FIFOs (named pipes), shared memory, semaphores

Page 17

Approach #2: Multiple Threads

Very similar to approach #1 (multiple processes), but with threads instead of processes: multithreading.

Page 18

Traditional View of a Process

Process = process context + code, data, and stack

[Address-space diagram, from address 0 upward: read-only code/data, read/write data, run-time heap (ending at brk), shared libraries, stack (top at SP), with the PC pointing into the code.]

Process context:
- Program context: data registers, condition codes, stack pointer (SP), program counter (PC)
- Kernel context: VM structures, descriptor table, brk pointer

Page 19

Alternate View of a Process

Process = thread + code, data, and kernel context

[Same diagram, now partitioned:]

Thread (main thread):
- Stack
- Thread context: data registers, condition codes, stack pointer (SP), program counter (PC)

Code and data: read-only code/data, read/write data, run-time heap, shared libraries

Kernel context: VM structures, descriptor table, brk pointer

Page 20

Multi-Threaded Process

A process can have multiple threads:
- Each thread has its own logical control flow (PC, registers, etc.) and its own stack
- All threads share the same code, data, and kernel context: a common virtual address space

[Diagram: thread 1 (main thread) with stack 1 and its context (data registers, condition codes, SP1, PC1); thread 2 (peer thread) with stack 2 and its context (SP2, PC2); both above the shared code and data (read-only code/data, read/write data, run-time heap, shared libraries) and the kernel context (VM structures, descriptor table, brk pointer).]

Page 21

Thread Execution

- Single-core processor: time slicing
- Multi-core processor: true concurrency

[Timeline diagram: threads A, B, and C interleaved over time — running 3 threads on 2 cores.]

Page 22

Threads vs. Processes

Similarities:
- Each has its own logical control flow
- Each can run concurrently with others (often on different cores/CPUs)
- Each is context switched

Differences:
- Threads share code and some data; processes (typically) do not
- Threads are less resource-intensive than processes: process control (creating and reaping) is roughly 2x as expensive as thread control
- Scheduling may be skewed in favor of processes
- If one thread blocks or misbehaves, it is more likely to bring down the whole process

Page 23

POSIX Threads (Pthreads)

Pthreads: standard interface for 60+ functions that manipulate threads from C programs (pthread.h)

Compile with GCC:
    gcc/g++ -pthread programname

The programmer is responsible for synchronizing/protecting shared resources.

Page 24

POSIX Threads (Pthreads)

Creating and reaping threads:
- pthread_create()
- pthread_join()

Terminating threads:
- pthread_cancel()
- pthread_exit()
- exit() [terminates all threads]

Synchronizing access to shared variables:
- pthread_mutex_init() / pthread_mutex_[un]lock()
- pthread_cond_init() / pthread_cond_[timed]wait()

Page 25

pthread_join() blocks the calling thread until the thread with the specified thread ID exits.

Page 26

The Pthreads "hello, world" Program

/*
 * hello.c - Pthreads "hello, world" program
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

void *thread(void *vargp);    /* thread function prototype */

int main() {
    pthread_t tid;

    /* args: thread ID, attributes (usually NULL), thread function,
       thread argument (void *) */
    pthread_create(&tid, NULL, thread, NULL);
    /* second arg would receive the return value (void **) */
    pthread_join(tid, NULL);
    exit(0);
}

/* thread routine */
void *thread(void *vargp) {
    printf("Hello, world!\n");
    return NULL;
}

Page 27

Execution of Threaded "hello, world"

[Timeline diagram: the main thread calls pthread_create(), which returns; the peer thread runs printf(), then returns NULL and terminates. The main thread calls pthread_join() and waits for the peer thread to terminate; pthread_join() returns, and exit() terminates the main thread and any peer threads.]

Page 28

Thread-Based Concurrent Echo Server

int main(int argc, char **argv) {
    …
    pthread_t tid;
    listenfd = socket(…);
    bind(listenfd, …);
    listen(listenfd, …);
    while (1) {
        connfd = accept(listenfd, …);
        /* Create a new thread for each connection */
        pthread_create(&tid, NULL, echo_thread, (void *) connfd);
        pthread_detach(tid);    /* if not planning to pthread_join */
    }
}

void *echo_thread(void *p) {
    int connfd = (int) p;       /* get connection file descriptor */
    while (readline(connfd, …) > 0)
        write(connfd, …);
    close(connfd);              /* close file descriptor when done */
    return NULL;                /* not returning any data to main thread */
}

Page 29

Threaded Execution Model

- Multiple threads within a single process
- Shared state between them

[Diagram: a listening server thread hands connection requests to per-client server threads; client 1 and client 2 data flow to their own threads within the same process.]

Page 30

Problems: Race Conditions

A race condition occurs when the correctness of the program depends on one thread or process reaching point x before another thread or process reaches point y.

Any interleaving of instructions from multiple threads may occur.

(race.c)

Page 31

Potential Race Condition

Main:

    int i;
    pthread_t tid;
    for (i = 0; i < 100; i++) {
        pthread_create(&tid, NULL, thread, &i);   /* passes a POINTER to i */
    }

Thread:

    void *thread(void *p) {
        int i = *((int *)p);    /* may read i AFTER the loop has moved on */
        pthread_detach(pthread_self());
        save_value(i);
        return NULL;
    }

Race test: if there were no race, each thread would get a different value of i, and the set of saved values would consist of one copy each of 0 through 99.

Page 32

Experimental Results

[Histograms of the number of times each value 0–99 is saved:]
- No race: flat histogram, each value saved once
- Single-core laptop: counts range from 0 to about 3
- Multicore server: counts range from 0 to about 14

(More on this later.)

Page 33

Pros and Cons of Thread-Based Designs

+ Easy to share data structures between threads (e.g., logging information, file cache)
+ Threads are more efficient than processes
– Unintentional sharing can introduce subtle and hard-to-reproduce errors! (race conditions, data corruption from other threads, etc.)

Page 34

Event-Based Servers Using I/O Multiplexing

The OS provides detection of pending input among a list of FDs: select(), poll(), and epoll().

The server maintains a set of active connections (an array of connfd's), and repeats:
- Determine which connections have pending input
- If listenfd has input, accept the connection and add the new connfd to the array
- Service all connfd's with pending input

Details in the book.

Page 35

I/O Multiplexed Event Processing

[Diagram: the clientfd array, with listenfd = 3.]

    index:     0   1   2   3   4   5   6   7   8   9
    clientfd: 10   7   4  -1  -1  12   5  -1  -1  -1

Entries >= 0 (10, 7, 4, 12, 5) are active descriptors; -1 slots are inactive or never used. On each iteration, select() reports which active descriptors have pending input, and those are read and serviced.

Page 36

Pros and Cons of I/O Multiplexing

+ One logical control flow
+ Easier to debug
+ No process or thread control overhead
– More complex to code than process- or thread-based designs
– Cannot take advantage of multiple cores (single thread of control)

Page 37

Approaches to Concurrency

Processes:
- Hard to share resources (but easy to avoid unintended sharing)
- High overhead in adding/removing clients

Threads:
- Easy to share resources (perhaps too easy)
- Medium overhead
- Not much control over scheduling policies
- Difficult to debug: event orderings are not repeatable

I/O multiplexing:
- Tedious: more code to write (you write the scheduler)
- Total control over scheduling
- Very low overhead
- Only one process, so it can only run on one CPU core