Source: langley/COP4020/2016-Summer/Lectures/13-new.p… (COP4020, 2016 Summer)

Page 1:

Chapter 13: Concurrency

July 11, 2016

Page 2:

- Concurrency: logically simultaneous, not necessarily physically so

- Parallel: physical simultaneity

Page 3:

Parallelism

- Quite common to have multiple cores these days; indeed, it’s become the rare case for any processor to support only one core.

- As the text notes, instruction-level parallelism (ILP) was of great interest back before the turn of the millennium.

- Vector parallelism refers to the application of one operation to multiple data elements; SIMD is the most common realization of this technique, and Intel/AMD have created a lot of SIMD instructions (and ones to handle quite large registers!)

Page 4:

Levels of parallelism

- “Black box”: you don’t even know it’s there.

- Disjoint tasks: parallel execution over disjoint data

- Synchronicity (but also see RCU-type parallelism)

Page 5:

The cases for and against multithreading

- Multithreading seems reasonable when you have multiple logical processors that can act in concert; certainly some tasks, such as complex I/O-driven tasks, seem naturally suited to having separate threads.

- However, the great success of the last few years has been the resurrection of event-driven programming, which generally does not use multiple threads, except occasionally as a means to make use of multiple processors. (Your text discusses this alternative rather dismissively in the section titled “The Dispatch Loop Alternative”.)

Page 6:

Communication and synchronization

- Generally, we have seen communication via either message passing or some sort of shared-memory mechanism (though it is possible to mix these two, such as with distributed shared memory models, as your text mentions in Design and Implementation note 13.2 on page 636).

- Synchronization can be implemented by, say, spinning (busy-wait), or by blocking.

Page 7:

Languages and libraries

- In higher-level languages, threading can be done at the language level, at the compiler “extension” level, or via a library.

- At the assembly language level, generally you have to ask the operating system to help you (which is what is silently happening in the higher-level languages in the above three methods).

Page 8:

Languages with significant built-in support for concurrency

- Ada
- Rust
- Clojure
- Erlang
- D
- Go
- Occam

Page 9:

Expressions of parallelism

- Languages that support parallelism generally use some variation on the following themes:

- Co-begin
- Parallel loops
- Launch-at-elaboration
- Fork/join
- Implicit receipt
- Early reply

Page 10:

Co-begin

co-begin    -- all n statements run concurrently
    [stmt1]
    [stmt2]
    [stmt3]
    ...
    [stmtN]
co-end

Page 11:

Parallel loops

- Instead of giving a series of discrete statements, give the same expression a differing index (SIMD over an index):

Parallel.For(0,3,i => { somefunction(i); })

Page 12:

Parallel loops

- Or use map/fold over container elements:

Parallel.Map( somefunction somelist );

Parallel.Fold( somefunction someaccum somelist );

Page 13:

Launch at elaboration

- Ada has this; when a task is elaborated, it starts executing.

Page 14:

Fork/join

- The idea here is that a process/thread starts another thread with a “fork”, and then at some point does a “join” to reap the finished process.

Page 15:

Implicit receipt

- I haven’t seen this in use under the name “implicit receipt”. I think that the author is referring to something like accept(2) (which creates a new file descriptor for the connection) followed by a fork(2) in Unix-land.

Page 16:

Implementation

- In Unix-land, we have

  - fork(2) / wait(2), which creates two distinct processes
  - clone(2) / wait(2), which creates a thread in the same process

- Using these two mechanisms, we can create implementations such as pthreads.

Page 17:

Synchronization

- Mutexes: a locking mechanism that allows one thread to claim and release an exclusive lock; other threads that ask for the same mutex have to wait for the first thread to release it.

Page 18:

Synchronization

- Spin locks: an implementation of mutex that uses busy-waiting. Used quite a bit in the Linux kernel (see, for instance, Kernel Locking), but otherwise I don’t know of a common use for it these days.

Page 19:

Synchronization

- Barriers: force all threads to come to a single point before continuing. Not really a metaphor that I have seen a lot of use of, though it is certainly mentioned a lot in the literature.

Page 20:

Semaphores

- A popular and longstanding mechanism. A “post” operation increments the semaphore; if it becomes greater than zero, then any “waiting” processes are woken. A “wait” operation attempts to decrement the semaphore; if the semaphore is currently greater than zero, it is decremented and the operation continues; otherwise, it blocks until it is possible to perform the decrement.

Page 21:

Transactional Memory

- Going lock-free with transactional memory systems: it does seem to have promise, and there are a lot of implementations.

- You declare a code section to be “atomic”, and the system takes responsibility for trying to execute these in parallel. If the system supports rollback, it can even try “speculative” computations.

Page 22:

Hardware support for transactional memory

- So far, Intel has provided some hardware support with “RTM” and “HLE”; the machine (Ivy Bridge) that I am using to make these notes seems to list them both as off:

$ cpuid

[ ... ]

extended feature flags (7):

[ ... ]

HLE hardware lock elision = false

[ ... ]

RTM: restricted transactional memory = false

Page 23:

Message Passing

- Message passing for the win! Of all of the mechanisms discussed in this chapter, the one with the most success (by far, in my estimation) is that of message passing.