CS 61C: Great Ideas in Computer Architecture
OpenMP, Transistors
Instructor: Justin Hsia
7/23/2013, Summer 2013 -- Lecture #17


TRANSCRIPT

Page 1: Instructor:   Justin Hsia

Instructor: Justin Hsia

7/23/2013 Summer 2013 -- Lecture #17 1

CS 61C: Great Ideas in Computer Architecture

OpenMP, Transistors

Page 2: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 2

Review of Last Lecture

• Amdahl’s Law limits benefits of parallelization
• Multiprocessor systems use shared memory (single address space)
• Cache coherence implements shared memory even with multiple copies in multiple caches
  – Track state of blocks relative to other caches (e.g. MOESI protocol)
  – False sharing a concern
• Synchronization via hardware primitives:
  – MIPS does it with Load Linked + Store Conditional

7/23/2013

Page 3: Instructor:   Justin Hsia

3

Question: Consider the following code when executed concurrently by two threads.

What possible values can result in *($s0)?

# *($s0) = 100
lw   $t0, 0($s0)
addi $t0, $t0, 1
sw   $t0, 0($s0)

(A) 101 or 102
(B) 100, 101, or 102
(C) 100 or 101
(D) 102

Page 4: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 4

Great Idea #4: Parallelism

7/23/2013

• Parallel Requests: assigned to computer, e.g. search “Garcia”
• Parallel Threads: assigned to core, e.g. lookup, ads
• Parallel Instructions: > 1 instruction @ one time, e.g. 5 pipelined instructions
• Parallel Data: > 1 data item @ one time, e.g. add of 4 pairs of words
• Hardware descriptions: all gates functioning in parallel at same time

(Diagram: the software/hardware levels at which we leverage parallelism to achieve high performance, from Warehouse Scale Computer and Smart Phone down through Computer, Core, Memory, Input/Output, Cache Memory, Instruction Unit(s), Functional Unit(s) (A0+B0, A1+B1, A2+B2, A3+B3), and Logic Gates.)

Page 5: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 5

Agenda

• OpenMP Introduction
• Administrivia
• OpenMP Directives
  – Workshare
  – Synchronization
• OpenMP Common Pitfalls
• Hardware: Transistors
• Bonus: sections Example

7/23/2013

Page 6: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 6

What is OpenMP?

• API used for multi-threaded, shared memory parallelism
  – Compiler Directives
  – Runtime Library Routines
  – Environment Variables
• Portable
• Standardized
• Resources: http://www.openmp.org/ and http://computing.llnl.gov/tutorials/openMP/

7/23/2013

Page 7: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 7

OpenMP Specification

7/23/2013

Page 8: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 8

Shared Memory Model with Explicit Thread-based Parallelism

• Multiple threads in a shared memory environment, explicit programming model with full programmer control over parallelization

• Pros:
  – Takes advantage of shared memory, programmer need not worry (that much) about data placement
  – Compiler directives are simple and easy to use
  – Legacy serial code does not need to be rewritten
• Cons:
  – Code can only be run in shared memory environments
  – Compiler must support OpenMP (e.g. gcc 4.2)

7/23/2013

Page 9: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 9

OpenMP in CS61C

• OpenMP is built on top of C, so you don’t have to learn a whole new programming language
  – Make sure to add #include <omp.h>
  – Compile with flag: gcc -fopenmp
  – Mostly just a few lines of code to learn
• You will NOT become experts at OpenMP
  – Use slides as reference, will learn to use in lab
• Key ideas:
  – Shared vs. Private variables
  – OpenMP directives for parallelization, work sharing, synchronization

7/23/2013

Page 10: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 10

OpenMP Programming Model

• Fork - Join Model:

• OpenMP programs begin as a single process (the master thread) and execute sequentially until the first parallel region construct is encountered
  – FORK: the master thread then creates a team of parallel threads
  – Statements in the program that are enclosed by the parallel region construct are executed in parallel among the various threads
  – JOIN: when the team threads complete the statements in the parallel region construct, they synchronize and terminate, leaving only the master thread

7/23/2013
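A minimal sketch (not from the slides) of this fork-join behavior; the printouts are illustrative:

#include <stdio.h>
#include <omp.h>

int main(void) {
    printf("Before: only the master thread runs\n");   /* sequential part */

    #pragma omp parallel                               /* FORK: team of threads created */
    {
        printf("Inside: thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }                                                  /* JOIN: threads synchronize and terminate */

    printf("After: only the master thread remains\n"); /* sequential again */
    return 0;
}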

Page 11: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 11

OpenMP Extends C with Pragmas

• Pragmas are a preprocessor mechanism C provides for language extensions

• Commonly implemented pragmas: structure packing, symbol aliasing, floating point exception modes (not covered in 61C)

• Good mechanism for OpenMP because compilers that don’t recognize a pragma are supposed to ignore them
  – Runs on a sequential computer even with embedded pragmas (see the sketch below)

7/23/2013
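As an illustration of that point, the same source builds with or without OpenMP support; the standard _OPENMP macro (defined by OpenMP-aware compilers) keeps the omp.h include optional. This is a sketch, not course-provided code:

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>    /* only needed when the compiler actually supports OpenMP */
#endif

int main(void) {
    int i;
    /* An OpenMP-unaware compiler simply ignores this pragma,
       so the loop runs sequentially and the program still works. */
    #pragma omp parallel for
    for (i = 0; i < 4; i++)
        printf("iteration %d\n", i);
    return 0;
}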

Page 12: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 12

parallel Pragma and Scope

• Basic OpenMP construct for parallelization:

#pragma omp parallel
{
  /* code goes here */
}

  – Each thread runs a copy of code within the block
  – Thread scheduling is non-deterministic

• Variables declared outside pragma are shared
  – To make private, need to declare with pragma:
    #pragma omp parallel private(x)

7/23/2013

This is annoying, but curly brace MUST go on separate line from #pragma
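A minimal sketch of shared vs. private scope under this pragma; the variable names are illustrative, not from the slides:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int shared_count = 7;   /* declared outside the pragma: shared by all threads */
    int x;                  /* made private below: one copy per thread */

    #pragma omp parallel private(x)
    {
        x = omp_get_thread_num();   /* each thread writes only its own private x */
        printf("thread %d sees shared_count = %d\n", x, shared_count);
    }
    return 0;
}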

Page 13: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 13

Thread Creation

• Defined by the OMP_NUM_THREADS environment variable (or code procedure call)
  – Set this variable to the maximum number of threads you want OpenMP to use
• Usually equals the number of cores in the underlying hardware on which the program is run
  – But remember thread ≠ core

7/23/2013

Page 14: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 14

OMP_NUM_THREADS

• OpenMP intrinsic to set number of threads:
    omp_set_num_threads(x);
• OpenMP intrinsic to get number of threads:
    num_th = omp_get_num_threads();
• OpenMP intrinsic to get Thread ID number:
    th_ID = omp_get_thread_num();

7/23/2013

Page 15: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 15

Parallel Hello World

#include <stdio.h>
#include <omp.h>

int main () {
  int nthreads, tid;

  /* Fork team of threads with private var tid */
  #pragma omp parallel private(tid)
  {
    tid = omp_get_thread_num();   /* get thread id */
    printf("Hello World from thread = %d\n", tid);

    /* Only master thread does this */
    if (tid == 0) {
      nthreads = omp_get_num_threads();
      printf("Number of threads = %d\n", nthreads);
    }
  }  /* All threads join master and terminate */
}

7/23/2013

Page 16: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 16

Agenda

• OpenMP Introduction
• Administrivia
• OpenMP Directives
  – Workshare
  – Synchronization
• OpenMP Common Pitfalls
• Hardware: Transistors
• Bonus: sections Example

7/23/2013

Page 17: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 17

Midterm Scores

7/23/2013

(Histogram: Total Score on CS61C Su13 Midterm; x-axis Score from 10 to 90, y-axis Frequency from 0 to 18.)

Mean: 47.9
Std Dev: 16.9

Page 18: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 18

Administrivia

• Midterm re-grade requests due Thursday
• Project 2: Matrix Multiply Performance Improvement
  – Work in groups of two!
  – Part 1: Due July 28 (this Sunday)
  – Part 2: Due August 4
• HW 5 also due July 31

7/23/2013

Page 19: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 19

Agenda

• OpenMP Introduction
• Administrivia
• OpenMP Directives
  – Workshare
  – Synchronization
• OpenMP Common Pitfalls
• Hardware: Transistors
• Bonus: sections Example

7/23/2013

Page 20: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 20

OpenMP Directives (Work-Sharing)

7/23/2013

• These are defined within a parallel section

(Figure of the work-sharing directives and what they do:)
  – for: shares iterations of a loop across the threads
  – sections: each section is executed by a separate thread
  – single: serializes the execution of a thread

Page 21: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 21

Parallel Statement Shorthand

#pragma omp parallel
{
  #pragma omp for
  for(i=0; i<len; i++) { … }
}

can be shortened to:

#pragma omp parallel for
for(i=0; i<len; i++) { … }

7/23/2013

This is the only directive in the parallel section

Page 22: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 22

Building Block: for loop

for (i=0; i<max; i++) zero[i] = 0;

• Break for loop into chunks, and allocate each to a separate thread
  – e.g. if max = 100 with 2 threads: assign 0-49 to thread 0, and 50-99 to thread 1
• Must have relatively simple “shape” for an OpenMP-aware compiler to be able to parallelize it
  – Necessary for the run-time system to be able to determine how many of the loop iterations to assign to each thread
• No premature exits from the loop allowed
  – i.e. no break, return, exit, goto statements

7/23/2013

In general, don’t jump outside of any pragma block

Page 23: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 23

Parallel for pragma

#pragma omp parallel for

for (i=0; i<max; i++) zero[i] = 0;

• Master thread creates additional threads, each with a separate execution context

– Implicit synchronization at end of for loop

• Loop index variable (i.e. i) made private

• Divide index regions sequentially per thread
  – Thread 0 gets 0, 1, …, (max/n)-1
  – Thread 1 gets max/n, max/n+1, …, 2*(max/n)-1

7/23/2013
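A small sketch (not from the slides) that makes this division of iterations visible by printing which thread handles which index:

#include <stdio.h>
#include <omp.h>

#define MAX 8

int main(void) {
    int i, zero[MAX];

    #pragma omp parallel for    /* loop index i is automatically made private */
    for (i = 0; i < MAX; i++) {
        zero[i] = 0;
        printf("iteration %d handled by thread %d\n", i, omp_get_thread_num());
    }
    return 0;
}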

Page 24: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 24

OpenMP Timing

• Elapsed wall clock time:
    double omp_get_wtime(void);
  – Returns elapsed wall clock time in seconds
  – Time is measured per thread; no guarantee can be made that two distinct threads measure the same time
  – Time is measured from “some time in the past,” so subtract results of two calls to omp_get_wtime to get elapsed time

7/23/2013
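A minimal sketch (not from the slides) of this timing pattern: wrap the region of interest with two calls to omp_get_wtime and subtract. The loop body is just a placeholder workload.

#include <stdio.h>
#include <omp.h>

#define N 1000000
static double a[N];                      /* static so the array is not on the stack */

int main(void) {
    double start_time, run_time;
    int i;

    start_time = omp_get_wtime();        /* measured from "some time in the past" */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = 2.0 * i;
    run_time = omp_get_wtime() - start_time;   /* difference gives elapsed seconds */

    printf("Loop took %f seconds\n", run_time);
    return 0;
}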

Page 25: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 25

Matrix Multiply in OpenMP

start_time = omp_get_wtime();
#pragma omp parallel for private(tmp, i, j, k)
for (i=0; i<Mdim; i++){
  for (j=0; j<Ndim; j++){
    tmp = 0.0;
    for( k=0; k<Pdim; k++){
      /* C(i,j) = sum(over k) A(i,k) * B(k,j) */
      tmp += *(A+(i*Pdim+k)) * *(B+(k*Ndim+j));
    }
    *(C+(i*Ndim+j)) = tmp;
  }
}
run_time = omp_get_wtime() - start_time;

7/23/2013

Outer loop spread across N threads; inner loops inside a single thread

Page 26: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 26

Notes on Matrix Multiply Example

• More performance optimizations available:
  – Higher compiler optimization (-O2, -O3) to reduce number of instructions executed
  – Cache blocking to improve memory performance
  – Using SIMD SSE instructions to raise floating point computation rate (DLP)
  – Improve algorithm by reducing computations and memory accesses in code (what happens in each loop?)

7/23/2013

Page 27: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 27

OpenMP Directives (Synchronization)

• These are defined within a parallel section
• master
  – Code block executed only by the master thread (all other threads skip)
• critical
  – Code block executed by only one thread at a time
• atomic
  – Specific memory location must be updated atomically (like a mini-critical section for writing to memory)
  – Applies to a single statement, not a code block

7/23/2013
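A brief illustrative sketch (not from the slides) using critical and atomic on shared counters; the variable names are made up:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int hits = 0, ticks = 0;

    #pragma omp parallel
    {
        #pragma omp critical
        {                                   /* whole block, one thread at a time */
            hits = hits + 1;
            printf("thread %d updated hits\n", omp_get_thread_num());
        }

        #pragma omp atomic                  /* single memory update done atomically */
        ticks++;
    }

    printf("hits = %d, ticks = %d\n", hits, ticks);
    return 0;
}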

Page 28: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 28

OpenMP Reduction

• Reduction specifies that one or more private variables are the subject of a reduction operation at the end of the parallel region
  – Clause: reduction(operation:var)
  – Operation: operator to perform on the variables at the end of the parallel region
  – Var: one or more variables on which to perform scalar reduction

#pragma omp for reduction(+:nSum)
for (i = START; i <= END; i++)
  nSum += i;

7/23/2013
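A complete, hedged sketch of the reduction clause in a combined parallel for; START and END from the snippet above are given concrete values here purely for illustration:

#include <stdio.h>
#include <omp.h>

#define START 1
#define END   100

int main(void) {
    int i, nSum = 0;

    /* Each thread gets a private nSum initialized to 0; the private
       copies are combined with + at the end of the parallel region. */
    #pragma omp parallel for reduction(+:nSum)
    for (i = START; i <= END; i++)
        nSum += i;

    printf("Sum of %d..%d = %d\n", START, END, nSum);   /* prints 5050 */
    return 0;
}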

Page 29: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 29

Agenda

• OpenMP Introduction
• Administrivia
• OpenMP Directives
  – Workshare
  – Synchronization
• OpenMP Common Pitfalls
• Hardware: Transistors
• Bonus: sections Example

7/23/2013

Page 30: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 30

OpenMP Pitfall #1: Data Dependencies

• Consider the following code:

a[0] = 1;
for(i=1; i<5000; i++)
  a[i] = i + a[i-1];

• There are dependencies between loop iterations!
  – Splitting this loop between threads does not guarantee in-order execution
  – Out of order loop execution will result in undefined behavior (i.e. likely wrong result)

7/23/2013
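The slides do not show a fix here, but as one hedged illustration: this particular recurrence has the closed form a[i] = 1 + i*(i+1)/2, so the loop-carried dependency can be removed and the loop then parallelizes safely:

/* Illustrative only: the closed form below is specific to
   a[0] = 1, a[i] = i + a[i-1], and is not from the original slides. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    static int a[5000];
    int i;

    #pragma omp parallel for
    for (i = 0; i < 5000; i++)
        a[i] = 1 + i * (i + 1) / 2;    /* same values as the sequential recurrence */

    printf("a[4999] = %d\n", a[4999]);
    return 0;
}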

Page 31: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 31

OpenMP Pitfall #2: Sharing Issues

• Consider the following loop:

#pragma omp parallel for
for(i=0; i<n; i++){
  temp = 2.0*a[i];
  a[i] = temp;
  b[i] = c[i]/temp;
}

• temp is a shared variable! Fix: make it private:

#pragma omp parallel for private(temp)
for(i=0; i<n; i++){
  temp = 2.0*a[i];
  a[i] = temp;
  b[i] = c[i]/temp;
}

7/23/2013

Page 32: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 32

OpenMP Pitfall #3: Updating Shared Variables Simultaneously

• Now consider a global sum:

#pragma omp parallel for

for(i=0; i<n; i++)

sum = sum + a[i];

• This can be done safely by surrounding the summation with a critical/atomic section or by using a reduction clause:

#pragma omp parallel for reduction(+:sum)

for(i=0; i<n; i++)

sum = sum + a[i];

– Compiler can generate highly efficient code for reduction (a critical-section version is sketched below for comparison)

7/23/2013
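A hedged sketch of the critical-section alternative mentioned above; it is correct but typically slower than the reduction version because threads serialize on every update (an atomic directive on the update would also work):

#include <stdio.h>
#include <omp.h>

#define LEN 1000

int main(void) {
    double a[LEN], sum = 0.0;
    int i;

    for (i = 0; i < LEN; i++) a[i] = 1.0;   /* fill with placeholder data */

    #pragma omp parallel for
    for (i = 0; i < LEN; i++) {
        #pragma omp critical                /* one thread at a time updates sum */
        sum = sum + a[i];
    }

    printf("sum = %f\n", sum);              /* prints 1000.000000 */
    return 0;
}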

Page 33: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 33

OpenMP Pitfall #4: Parallel Overhead

• Spawning and releasing threads results in significant overhead

• Better to have fewer but larger parallel regions
  – Parallelize over the largest loop that you can (even though it will involve more work to declare all of the private variables and eliminate dependencies)

7/23/2013

Page 34: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 34

OpenMP Pitfall #4: Parallel Overhead

start_time = omp_get_wtime();
for (i=0; i<Mdim; i++){
  for (j=0; j<Ndim; j++){
    tmp = 0.0;
    #pragma omp parallel for reduction(+:tmp)
    for( k=0; k<Pdim; k++){
      /* C(i,j) = sum(over k) A(i,k) * B(k,j) */
      tmp += *(A+(i*Pdim+k)) * *(B+(k*Ndim+j));
    }
    *(C+(i*Ndim+j)) = tmp;
  }
}
run_time = omp_get_wtime() - start_time;

7/23/2013

Too much overhead in thread generation to have this statement run this frequently.

Poor choice of loop to parallelize.

Page 35: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 35

Get To Know Your Staff

• Category: Television

7/23/2013

Page 36: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 36

Agenda

• OpenMP Introduction
• Administrivia
• OpenMP Directives
  – Workshare
  – Synchronization
• OpenMP Common Pitfalls
• Hardware: Transistors
• Bonus: sections Example

7/23/2013

Page 37: Instructor:   Justin Hsia

Great Idea #1: Levels of Representation/Interpretation

7/23/2013 Summer 2013 -- Lecture #17 37

Higher-Level Language Program (e.g. C):
    temp = v[k];
    v[k] = v[k+1];
    v[k+1] = temp;
        ↓ Compiler
Assembly Language Program (e.g. MIPS):
    lw $t0, 0($2)
    lw $t1, 4($2)
    sw $t1, 0($2)
    sw $t0, 4($2)
        ↓ Assembler
Machine Language Program (MIPS):
    0000 1001 1100 0110 1010 1111 0101 1000
    1010 1111 0101 1000 0000 1001 1100 0110
    1100 0110 1010 1111 0101 1000 0000 1001
    0101 1000 0000 1001 1100 0110 1010 1111
        ↓ Machine Interpretation
Hardware Architecture Description (e.g. block diagrams)
        ↓ Architecture Implementation
Logic Circuit Description (Circuit Schematic Diagrams)   ← We are here

Page 38: Instructor:   Justin Hsia

Hardware Design

• Upcoming: We’ll study how a modern processor is built, starting with basic elements as building blocks
• Why study hardware design?
  – Understand capabilities and limitations of hardware in general and processors in particular
  – What processors can do fast and what they can’t do fast (avoid slow things if you want your code to run fast!)
  – Background for more in-depth courses (CS150, CS152)
  – You may need to design your own custom hardware for extra performance (some commercial processors today have customizable hardware)

7/23/2013 Summer 2013 -- Lecture #17 38

Page 39: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 39

Design Hierarchy

(Diagram of the design hierarchy, top down: system; datapath and control; state registers and combinational logic; multiplexer, comparator, code registers, register logic; switching networks. Annotations mark when each level is covered: switching networks today, the higher levels later this week and next week.)

7/23/2013

Page 40: Instructor:   Justin Hsia

Switches (1/2)

• The basic element of physical implementations

• Convention: if input is a “1,” the switch is asserted

7/23/2013 Summer 2013 -- Lecture #17 40

In this example, Z = A.

(Diagram: a switch controlled by A in series with a light bulb Z.)
  – Close switch if A is “1” (asserted) and turn ON light bulb (Z)
  – Open switch if A is “0” (unasserted) and turn OFF light bulb (Z)

Page 41: Instructor:   Justin Hsia

Switches (2/2)

• Can compose switches into more complex ones (Boolean functions)– Arrows show action upon assertion (1 = close)

7/23/2013 Summer 2013 -- Lecture #17 41

AND: switches A and B in series, so Z = A and B
OR:  switches A and B in parallel, so Z = A or B

(In both diagrams, the switch network connects a “1” source to the output Z.)

Page 42: Instructor:   Justin Hsia

Transistor Networks

• Modern digital systems designed in CMOS
  – MOS: Metal-Oxide on Semiconductor
  – C for complementary: use pairs of normally-open and normally-closed switches
• CMOS transistors act as voltage-controlled switches
  – Similar to, though easier to work with than, the relay switches of an earlier era
  – Three terminals: Source, Gate, and Drain

7/23/2013 Summer 2013 -- Lecture #17 42

(Transistor symbol with terminals Gate (G), Source (S), and Drain (D).)

Page 43: Instructor:   Justin Hsia

CMOS Transistors

• Switch action based on terminal voltages (VG, VD, VS) and relative voltages (e.g. VGS, VDS)
  – Threshold voltage (VT) determines whether or not Source and Drain terminals are connected
  – When not connected, Drain left “floating”

7/23/2013 Summer 2013 -- Lecture #17 43

N-channel Transistor:
  VG – VS < VT: Switch OPEN
  VG – VS > VT: Switch CLOSED

P-channel Transistor:
  VG – VS < -VT: Switch CLOSED
  VG – VS > -VT: Switch OPEN

(Both transistor symbols show Gate, Source, and Drain terminals; the circle symbol on the P-channel gate means “NOT” or “complement”.)

Page 44: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 44

MOS Networks

• What is the relationship between X and Y?

(Circuit diagram: a CMOS network between the 3v Voltage Source (“1”) and the 0v Ground (“0”), with input X and output Y.)

  X      Y
  -3 V   +3 V
  +3 V   +0 V

Called an inverter or NOT gate

7/23/2013

Page 45: Instructor:   Justin Hsia

Two Input Networks

7/23/2013 Summer 2013 -- Lecture #17 45

(Two circuit diagrams, each a transistor network between 3v and 0v with inputs X and Y and output Z; work out Z for each input combination.)

First network: NAND gate (NOT AND)
  X      Y      Z
  -3 V   -3 V   +3 V
  +3 V   -3 V   +3 V
  -3 V   +3 V   +3 V
  +3 V   +3 V   +0 V

Second network: NOR gate (NOT OR)
  X      Y      Z
  -3 V   -3 V   +3 V
  +3 V   -3 V   +0 V
  -3 V   +3 V   +0 V
  +3 V   +3 V   +0 V

Page 46: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 46

Transistors and CS61C

• The internals of transistors are important, but won’t be covered in this class
  – Better understand Moore’s Law
  – Physical limitations relating to speed and power consumption
  – Actual physical design & implementation process
  – Can take EE40, EE105, and EE140

• We will proceed with the abstraction of Digital Logic (0/1)

7/23/2013

Page 47: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 47

Block Diagrams

• In reality, chips are composed of just transistors and wires
  – Small groups of transistors form useful building blocks, which we show as blocks
• Can combine to build higher-level blocks
  – You can build AND, OR, and NOT out of NAND! (see the sketch below)

7/23/2013

(Diagram: the two-input transistor network between 3v and 0v with inputs X, Y and output Z is equivalent to (≡) a NAND block with inputs X, Y and output Z.)
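A small C sketch (illustrative, not from the slides) of that claim: NOT, AND, and OR expressed entirely in terms of NAND on 0/1 values:

#include <stdio.h>

/* Treat ints as digital 0/1 signals. */
int nand(int a, int b) { return !(a && b); }

int not_(int a)        { return nand(a, a); }             /* NOT from one NAND   */
int and_(int a, int b) { return not_(nand(a, b)); }       /* AND from two NANDs  */
int or_(int a, int b)  { return nand(not_(a), not_(b)); } /* OR from three NANDs */

int main(void) {
    int a, b;
    for (a = 0; a <= 1; a++)
        for (b = 0; b <= 1; b++)
            printf("a=%d b=%d  NAND=%d NOT(a)=%d AND=%d OR=%d\n",
                   a, b, nand(a, b), not_(a), and_(a, b), or_(a, b));
    return 0;
}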

Page 48: Instructor:   Justin Hsia

48

Question: Which set(s) of inputs will result in the output Z being 3 volts?

Using digital logic – “0” is -3 V, “1” is +3 V

     X  Y
(A)  0  0
(B)  0  1
(C)  1  0
(D)  1  1

(Circuit diagram: the two-input transistor network between 3v and 0v, with inputs X and Y and output Z.)

Page 49: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 49

Summary

• OpenMP as simple parallel extension to C
  – During parallel fork, be aware of which variables should be shared vs. private among threads
  – Work-sharing accomplished with for/sections
  – Synchronization accomplished with critical/atomic/reduction
• Hardware is made up of transistors and wires
  – Transistors are voltage-controlled switches
  – Building blocks of all higher-level blocks

7/23/2013

Page 50: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 50

You are responsible for the material contained on the following slides, though we may not have enough time to get to them in lecture. They have been prepared in a way that should be easily readable, and the material will be touched upon in the following lecture.

7/23/2013

BONUS SLIDES

Page 51: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 51

Agenda

• OpenMP Introduction
• Administrivia
• OpenMP Directives
  – Workshare
  – Synchronization
• OpenMP Common Pitfalls
• Hardware: Transistors
• Bonus: sections Example

7/23/2013

Page 52: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 52

Workshare sections Example (1/4)

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 50

int main (int argc, char *argv[]) {
  int i, nthreads, tid;
  float a[N], b[N], c[N], d[N];

  /* Initialize arrays with some data */
  for (i=0; i<N; i++) {
    a[i] = i * 1.5;
    b[i] = i + 22.35;
    c[i] = d[i] = 0.0;
  }

  /* main() continued on next slides... */

7/23/2013

Page 53: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 53

Workshare sections Example (2/4)

  #pragma omp parallel private(i,tid)
  {
    tid = omp_get_thread_num();
    if (tid == 0) {
      nthreads = omp_get_num_threads();
      printf("Number of threads = %d\n", nthreads);
    }
    printf("Thread %d starting...\n", tid);

    #pragma omp sections nowait          /* Declare Sections */
    {
      #pragma omp section                /* Define individual sections */
      {
        printf("Thread %d doing section 1\n", tid);
        for (i=0; i<N; i++){
          c[i] = a[i] + b[i];
          printf("Thread %d: c[%d]= %f\n", tid, i, c[i]);
        }
      }

7/23/2013

Page 54: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 54

Workshare sections Example (3/4)

      #pragma omp section
      {
        printf("Thread %d doing section 2\n", tid);
        for (i=0; i<N; i++){
          d[i] = a[i] * b[i];
          printf("Thread %d: d[%d]= %f\n", tid, i, d[i]);
        }
      }
    }   /* end of sections */
    printf("Thread %d done.\n", tid);
  }     /* end of parallel section */
}       /* end of main */

7/23/2013

2nd section

Page 55: Instructor:   Justin Hsia

Summer 2013 -- Lecture #17 55

Workshare sections Example (4/4)

Number of threads = 2
Thread 1 starting...
Thread 0 starting...
Thread 1 doing section 1
Thread 0 doing section 2
Thread 1: c[0]= 22.350000
Thread 0: d[0]= 0.000000
Thread 1: c[1]= 24.850000
Thread 0: d[1]= 35.025002
Thread 1: c[2]= 27.350000
Thread 0: d[2]= 73.050003
Thread 1: c[3]= 29.850000
Thread 0: d[3]= 114.075005
Thread 1: c[4]= 32.349998
Thread 0: d[4]= 158.100006
Thread 1: c[5]= 34.849998
Thread 0: d[5]= 205.125000
Thread 1: c[6]= 37.349998
Thread 0: d[6]= 255.150009
Thread 1: c[7]= 39.849998
Thread 0: d[7]= 308.175018
Thread 0: d[8]= 364.200012
Thread 0: d[9]= 423.225006
Thread 0: d[10]= 485.249969
Thread 0: d[11]= 550.274963
Thread 0: d[12]= 618.299988
Thread 0: d[13]= 689.324951
Thread 0: d[14]= 763.349976
Thread 0: d[15]= 840.374939
. . .
Thread 1: c[33]= 104.849998
Thread 0: d[41]= 3896.024902
Thread 1: c[34]= 107.349998
Thread 0: d[42]= 4054.049805
Thread 1: c[35]= 109.849998
Thread 0: d[43]= 4215.074707
Thread 1: c[36]= 112.349998
Thread 0: d[44]= 4379.100098
Thread 1: c[37]= 114.849998
Thread 1: c[38]= 117.349998
Thread 1: c[39]= 119.849998
Thread 1: c[40]= 122.349998
Thread 1: c[41]= 124.849998
Thread 1: c[42]= 127.349998
Thread 1: c[43]= 129.850006
Thread 1: c[44]= 132.350006
Thread 0: d[45]= 4546.125000
Thread 1: c[45]= 134.850006
Thread 0: d[46]= 4716.149902
Thread 1: c[46]= 137.350006
Thread 0: d[47]= 4889.174805
Thread 1: c[47]= 139.850006
Thread 0: d[48]= 5065.199707
Thread 0: d[49]= 5244.225098
Thread 0 done.
Thread 1: c[48]= 142.350006
Thread 1: c[49]= 144.850006
Thread 1 done.

7/23/2013

nowait option in sections allows threads to exit “early”

Notice inconsistent thread scheduling!