
Page 1:

CME 213, ME 339—Spring 2021

Eric Darve, ICME, Stanford

"Computers are getting smarter all the time. Scientists tell us that soon they will be able to talk to us. (And by 'they', I mean 'computers'. I doubt scientists will ever be able to talk to us.)"

(Dave Barry)

1 / 45

Page 2:

Shared Memory Processor

2 / 45

Page 3:

Schematic

A number of processors or cores
A shared physical memory (global memory)
An interconnection network to connect the processors with the memory

3 / 45

Page 4:

4 / 45

Page 5:

Process

Process: program in execution

Comprises the executable program along with all the information necessary for its execution.

5 / 45

Page 6:

Thread

Thread: an extension of the process model.

Can be viewed as a "lightweight" process.

A thread may be described as a "procedure" that runs independently from the main program.

6 / 45

Page 7:

In this model, each process may consist of multiple independent control flows that are called threads.

7 / 45

Page 8:

Imagine a program that contains a number of procedures.

Then imagine that these procedures can be scheduled to run simultaneously and/or independently by the operating system.

This describes a multi-threaded program.

8 / 45

Page 9:

Shared address space

All the threads of one process share the address space of the process, i.e., they have a common address space.

When a thread stores a value in the shared address space, another thread of the same process can access this value.
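A minimal sketch of this idea (the writer/reader functions and the global counter are illustrative assumptions, not from the slides):

#include <iostream>
#include <thread>

int counter = 0;  // lives in the process's shared address space

void writer() { counter = 42; }                       // one thread stores a value
void reader() { std::cout << counter << "\n"; }       // another thread of the same process reads it

int main() {
    std::thread t1(writer);
    t1.join();                 // the join guarantees the store is visible below
    std::thread t2(reader);    // prints 42
    t2.join();
    return 0;
}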

9 / 45

Page 10:

10 / 45

Page 11:

Threads

11 / 45

Page 12:

Threads are everywhere

C++ threads (C++11): std::thread
C threads: Pthreads
Java threads: Thread thread = new Thread();
Python threads: t = threading.Thread(target=worker)
Cilk: x = spawn fib(n-1);
Julia: r = remotecall(rand, 2, 2, 2)
OpenMP

12 / 45

Page 13:

C++ threads exercise

Open the file cpp_thread.cpp

Type make to compile

13 / 45

Page 14:

thread constructor

thread t2(f2, m);

Creates a thread that will run function f2 with argument m
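For context, a minimal sketch of how this constructor might be used (the body of f2 and the type of m are assumptions, not shown on the slide):

#include <cstdio>
#include <thread>

// Assumed signature: f2 takes its argument by value.
void f2(int m) { std::printf("f2 received %d\n", m); }

int main() {
    int m = 7;
    std::thread t2(f2, m);  // t2 starts running f2(m) concurrently
    t2.join();              // wait for f2 to finish
    return 0;
}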

14 / 45

Page 15:

Reference argument

thread t3(f3, ref(k));

If a reference argument needs to be passed to the thread function, it has to be wrapped with std::ref.
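A minimal sketch of why std::ref is needed (the body of f3 and the type of k are assumptions consistent with the call above):

#include <cassert>
#include <functional>  // std::ref
#include <thread>

// Assumed signature: f3 takes its argument by reference and modifies it.
void f3(int &k) { k += 1; }

int main() {
    int k = 41;
    std::thread t3(f3, std::ref(k));  // without std::ref, the thread would get a copy of k
    t3.join();
    assert(k == 42);  // the thread modified the caller's k
    return 0;
}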

15 / 45

Page 16:

thread join

t1.join();
t2.join();
t3.join();

The calling thread waits (blocks) until t1 completes (i.e., finishes running f1).

Required before the results of t1's calculations become "usable".

16 / 45

Page 17:

Complete exercise with t4 and f4

17 / 45

Page 18:

void f4() { /* todo */ }

int main(void)
{
    thread t4();  // todo
    /* call f4() using thread t4; add m and k */
}

18 / 45

Page 19:

How can we "return" values from asynchronous functions?

Difficulty: these functions can run at any time

1. How do we allocate resources to store the return value?
2. How do we query the return value?

19 / 45

Page 20:

Answer

Use promise and future

20 / 45

Page 21:

promise

Holds value to be returned

21 / 45

Page 22:

future

Allows the value to be queried
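A minimal sketch of the promise/future pair (the worker function is an illustrative assumption, not the exercise code):

#include <future>
#include <iostream>
#include <thread>

// The worker fulfills the promise; the caller reads the matching future.
void worker(std::promise<int> p) {
    p.set_value(42);  // makes the value available to the future
}

int main() {
    std::promise<int> p;
    std::future<int> f = p.get_future();   // future tied to the promise
    std::thread t(worker, std::move(p));   // a promise is move-only
    std::cout << f.get() << "\n";          // blocks until set_value, prints 42
    t.join();
    return 0;
}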

22 / 45

Page 23:

promise/future exercise

Open cpp_thread.cpp

23 / 45

Page 24:

accumulate()

void accumulate(vector<int>::iterator first,
                vector<int>::iterator last,
                promise<int> accumulate_promise)
{
    int sum = 0;
    auto it = first;
    for (; it != last; ++it)
        sum += *it;
    accumulate_promise.set_value(sum);  // Notify future
}

24 / 45

Page 25:

main()

promise<int> accumulate_promise;  // Will store the int
future<int> accumulate_future = accumulate_promise.get_future();
thread t5(accumulate, vec_1.begin(), vec_1.end(), move(accumulate_promise));
// move() will "move" the resources allocated for accumulate_promise

// future::get() waits until the future has a valid result and retrieves it
cout << "result of accumulate_future [21 expected] = " << accumulate_future.get() << '\n';

25 / 45

Page 26:

promise/future exercise

Complete 2nd part of cpp_thread.cpp

max_promise and get_max()

26 / 45

Page 27:

What is the point of promise and future?

Why not use a reference and set the value?

27 / 45

Page 28:

The function associated with a thread can run at any time.

So to make sure a variable has been updated,

we need to use my_thread.join()

28 / 45

Page 29:

promise/future is a more flexible mechanism

As soon as set_value is called on the promise,

the value can be acquired using the future
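A minimal sketch of this point (the worker body and the sleep are illustrative assumptions): the future delivers the value as soon as set_value runs, even though the thread has not been joined yet.

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

void worker(std::promise<int> p) {
    p.set_value(7);                                         // value is available from here on
    std::this_thread::sleep_for(std::chrono::seconds(1));   // the thread keeps running
}

int main() {
    std::promise<int> p;
    std::future<int> f = p.get_future();
    std::thread t(worker, std::move(p));
    std::cout << f.get() << "\n";  // returns right after set_value, before the thread finishes
    t.join();                      // joined only at the end
    return 0;
}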

29 / 45

Page 30:

promise/future allow flexible and efficient communication between threads

30 / 45

Page 31:

For more information, see:

https://en.cppreference.com/w/cpp/thread/thread

31 / 45

Page 32:

Thread coordination

32 / 45

Page 33:

The risks of multi-threaded programming

33 / 45

Page 34:

A well-known bank has asked you to implement multi-threaded code to perform bank transactions

Goal: allow deposits

34 / 45

Page 35:

1. Clients deposit money and the amount gets credited to their accounts.
2. But, as a result of having multiple threads running concurrently, the following can happen:

35 / 45

Page 36:

Thread 0                      | Thread 1                      | Balance
Client requests a deposit     | Client requests a deposit     | $1000
Check current balance = $1000 | Check current balance = $1000 |
Ask for deposit amount = $100 | Ask for deposit amount = $300 |
                              | Compute new balance = $1300   |
Compute new balance = $1100   | Write new balance to account  | $1300
Write new balance to account  |                               | $1100

36 / 45

Page 37:

This is called a race condition

The final result depends on the precise order in which the instructions are executed.
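A minimal sketch of this race in code (the Deposit function and the shared balance variable are illustrative assumptions, not the bank's actual code): both threads can read the old balance before either writes, so one deposit can be lost.

#include <iostream>
#include <thread>

int balance = 1000;  // shared account balance (no protection!)

void Deposit(int amount) {
    int new_balance = balance + amount;  // read the shared balance
    balance = new_balance;               // write: may overwrite the other thread's update
}

int main() {
    std::thread t0(Deposit, 100);
    std::thread t1(Deposit, 300);
    t0.join();
    t1.join();
    // Expected $1400, but $1100 or $1300 are possible outcomes of the race.
    std::cout << "balance = $" << balance << "\n";
    return 0;
}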

37 / 45

Page 38:

Race condition

Occurs when you have a sequence like

READ/WRITE

or

WRITE/READ

performed by different threads

38 / 45

Page 39:

Threads race to fill up a to-do list

39 / 45

Page 40:

Thread 0                                                  | Thread 1
Thread 0 wants to add a new to-do item.                   |
Thread 0 closes the lock. Adds entry in list.             |
                                                          | Thread 1 wants to use the lock. It has to wait.
Thread 0 is done with the to-do list. It opens the lock.  |
                                                          | Thread 1 can close the lock and add entry in list.

40 / 45

Page 41:

Mutex

A mutex can only be in two states: locked or unlocked.

Once a thread locks a mutex:

Other threads attempting to lock the same mutex are blocked.
Only the thread that initially locked the mutex has the ability to unlock it.

41 / 45

Page 42:

This makes it possible to protect regions of code.

Only one thread at a time can execute that code.
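A minimal sketch of protecting a critical section with std::mutex (the shared counter and function names are illustrative assumptions; the class exercise uses mutex_demo.cpp instead):

#include <iostream>
#include <mutex>
#include <thread>

std::mutex g_mutex;  // protects counter
int counter = 0;

void Increment(int n) {
    for (int i = 0; i < n; ++i) {
        std::lock_guard<std::mutex> guard(g_mutex);  // locks here, unlocks at end of scope
        ++counter;  // only one thread at a time executes this line
    }
}

int main() {
    std::thread t0(Increment, 100000);
    std::thread t1(Increment, 100000);
    t0.join();
    t1.join();
    std::cout << counter << "\n";  // always 200000 thanks to the mutex
    return 0;
}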

42 / 45

Page 43:

43 / 45

Page 44:

Open mutex_demo.cpp

44 / 45

Page 45:

void PizzaDeliveryPronto(int thread_id)
{
    g_mutex.lock();                // protect access to g_task_queue
    while (!g_task_queue.empty()) {
        printf("Thread %d: %s\n", thread_id, g_task_queue.front().c_str());
        g_task_queue.pop();
        g_mutex.unlock();          // release the lock while delivering

        Delivery();                // work done outside the critical section
        g_mutex.lock();            // re-acquire before touching the queue again
    }
    g_mutex.unlock();
    return;
}

45 / 45