CSE 260 Parallel Computation
Allan Snavely, Henri Casanova
http://www.sdsc.edu/~allans/cs260/cs260.htm
Outline
• Introductions
• Why we need powerful computers
• Why powerful computers are parallel
• Issues in parallel performance
• Parallel computers, yesterday and today
• Class organization
Introductions
• Instructors:
  • Allan Snavely, asnavely@cs.ucsd.edu, www.sdsc.edu/~allans
  • Henri Casanova, casanova@cs.ucsd.edu, www.cs.ucsd.edu/~casanova/
• T.A.: Michael McCracken, [email protected]
• Course web page: http://www.sdsc.edu/~allans/cs260/cs260.htm
• HPCS experiment: more at end of class today.
Thanks to Kathy Yelick, Jim Demmel, and John Gilbert at UCB for some of these slides.
Why do we need powerful computers?
Simulation: The Third Pillar of Science
• Traditional scientific and engineering paradigm:
  1) Do theory or paper design.
  2) Perform experiments or build system.
• Limitations:
  • Too difficult -- build large wind tunnels.
  • Too expensive -- build a throw-away passenger jet.
  • Too slow -- wait for climate or galactic evolution.
  • Too dangerous -- weapons, drug design, climate experiments.
• Computational science paradigm:
  3) Use high performance computer systems to simulate the phenomenon.
  • Based on known physical laws and efficient numerical methods.
Some Challenging Computations
• Science
  • Global climate modeling
  • Astrophysical modeling
  • Biology: genomics, protein folding, drug design
  • Computational chemistry
  • Computational materials science and nanoscience
• Engineering
  • Crash simulation
  • Semiconductor design
  • Earthquake and structural modeling
  • Computational fluid dynamics (airplane design)
  • Combustion (engine design)
• Business
  • Financial and economic modeling
  • Transaction processing, web services and search engines
• Defense
  • Nuclear weapons -- test by simulation
  • Cryptography
Units of Measure in HPC
• High Performance Computing (HPC) units are:
• Flops: floating point operations
• Flop/s: floating point operations per second
• Bytes: size of data (a double precision floating point number is 8 bytes)
• Typical sizes are millions, billions, trillions…

| Prefix | Rate | Size | (binary equivalent) |
|--------|------|------|---------------------|
| Mega | Mflop/s = 10^6 flop/sec | Mbyte = 10^6 bytes | 2^20 = 1,048,576 |
| Giga | Gflop/s = 10^9 flop/sec | Gbyte = 10^9 bytes | 2^30 = 1,073,741,824 |
| Tera | Tflop/s = 10^12 flop/sec | Tbyte = 10^12 bytes | 2^40 = 1,099,511,627,776 |
| Peta | Pflop/s = 10^15 flop/sec | Pbyte = 10^15 bytes | 2^50 = 1,125,899,906,842,624 |
| Exa | Eflop/s = 10^18 flop/sec | Ebyte = 10^18 bytes | 2^60 = 1,152,921,504,606,846,976 |
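As a quick illustration of these units (not from the original slides; the operation count and time below are made up for the example), converting a flop count and a wall-clock time into a rate:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical run: 2.5e13 floating point operations in 50 seconds. */
    double flops = 2.5e13, seconds = 50.0;
    double rate = flops / seconds;                 /* flop/s */

    printf("rate = %.0f Gflop/s = %.1f Tflop/s\n", rate / 1e9, rate / 1e12);

    /* Decimal vs. binary prefixes: Mbyte = 10^6 bytes vs. 2^20 bytes. */
    printf("Mbyte: %.0f (decimal) vs %d (binary)\n", 1e6, 1 << 20);
    return 0;
}
```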
Global Climate Modeling Problem
• Problem is to compute:
  f(latitude, longitude, elevation, time) → temperature, pressure, humidity, wind velocity
• Approach:
  • Discretize the domain, e.g., a measurement point every 10 km
  • Devise an algorithm to predict weather at time t+1 given t
• Uses:
  • Predict major events, e.g., El Niño
  • Use in setting air emissions standards
Source: http://www.epm.ornl.gov/chammp/chammp.html
Global Climate Modeling Computation
• One piece is modeling the fluid flow in the atmosphere
• Solve the Navier-Stokes equations
• Roughly 100 flops per grid point with a 1-minute timestep
• Computational requirements (a back-of-the-envelope check appears after this list):
  • To match real time, need 5×10^11 flops in 60 seconds ≈ 8 Gflop/s
• Weather prediction (7 days in 24 hours) → 56 Gflop/s
• Climate prediction (50 years in 30 days) → 4.8 Tflop/s
• To use in policy negotiations (50 years in 12 hours) → 288 Tflop/s
• To double the grid resolution, computation is at least 8x
• State-of-the-art models require integration of atmosphere, ocean, sea-ice, and land models, plus possibly carbon cycle, geochemistry and more
• Current models are coarser than this
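A back-of-the-envelope check of the rates above (illustrative C, not from the slides; it starts from the 5×10^11 flops per simulated minute figure, so the output matches the slide's numbers up to the rounding of the ~8 Gflop/s baseline):

```c
#include <stdio.h>

/* Sanity-check of the quoted flop rates. Baseline assumption from the slide:
 * ~5e11 flops of work per simulated minute, so "real time" means doing
 * 5e11 flops every 60 wall-clock seconds. */
int main(void) {
    double flops_per_sim_minute = 5e11;
    double realtime_rate = flops_per_sim_minute / 60.0;          /* ~8.3 Gflop/s */

    /* Required speedup factor = simulated time / allowed wall-clock time. */
    double weather = (7.0 * 24 * 60)         / (24.0 * 60);      /* 7 days in 24 h  -> 7x      */
    double climate = (50.0 * 365 * 24 * 60)  / (30.0 * 24 * 60); /* 50 yr in 30 d   -> ~608x   */
    double policy  = (50.0 * 365 * 24 * 60)  / (12.0 * 60);      /* 50 yr in 12 h   -> 36500x  */

    printf("real time : %6.1f Gflop/s\n", realtime_rate / 1e9);
    printf("weather   : %6.1f Gflop/s\n", weather * realtime_rate / 1e9);   /* slide: 56  */
    printf("climate   : %6.2f Tflop/s\n", climate * realtime_rate / 1e12);  /* slide: 4.8 */
    printf("policy    : %6.1f Tflop/s\n", policy  * realtime_rate / 1e12);  /* slide: 288 */
    return 0;
}
```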
High Resolution Climate Modeling on NERSC-3 – P. Duffy et al., LLNL
A 1000 Year Climate Simulation
• Warren Washington and Jerry Meehl, National Center for Atmospheric Research; Bert Semtner, Naval Postgraduate School; John Weatherly, U.S. Army Cold Regions Research and Engineering Laboratory; et al.
• http://www.nersc.gov/aboutnersc/pubs/bigsplash.pdf
• Demonstration of the Community Climate System Model (CCSM2)
• A 1000-year simulation shows a long-term, stable representation of the Earth’s climate.
• 760,000 processor hours used
• Temperature change shown
Climate Modeling on the Earth Simulator System
Development of ES started in 1997 with the goal of enabling a comprehensive understanding of global environmental changes such as global warming.
26.58 Tflop/s on a global atmospheric circulation code.
35.86 Tflop/s (87.5% of peak performance) on the Linpack benchmark.
Construction was completed in February 2002, and practical operation started on March 1, 2002.
Why are powerful computers parallel?
Tunnel Vision by Experts
• “I think there is a world market for maybe five computers.”
• Thomas Watson, chairman of IBM, 1943.
• “There is no reason for any individual to have a computer in their home.”
• Ken Olsen, president and founder of Digital Equipment Corporation, 1977.
• “640K [of memory] ought to be enough for anybody.”
• Bill Gates, chairman of Microsoft, 1981.
Slide source: Warfield et al.
Technology Trends: Microprocessor Capacity
Moore’s Law: #transistors/chip doubles every 1.5 years
Microprocessors have become smaller, denser, and more powerful.
Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double roughly every 18 months.
Slide source: Jack Dongarra
How fast can a serial computer be?
• Consider a 1 Tflop/s sequential machine:
  • data must travel some distance, r, to get from memory to CPU
  • to get 1 data element per cycle, this means 10^12 trips per second at the speed of light, c = 3×10^8 m/s
  • so r < c/10^12 = 0.3 mm (worked out below)
• Now put 1 TB of storage in a 0.3 mm² area:
  • each word occupies ~3 square Angstroms, the size of a small atom
[Diagram: a 1 Tflop/s, 1 TB sequential machine with r = 0.3 mm]
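The bound on r follows directly from the numbers above:

$$ r < \frac{c}{10^{12}\,/\mathrm{s}} = \frac{3\times10^{8}\ \mathrm{m/s}}{10^{12}\ /\mathrm{s}} = 3\times10^{-4}\ \mathrm{m} = 0.3\ \mathrm{mm}. $$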
Scaling microprocessors
• What happens when feature size shrinks by a factor of x?
• Clock rate goes up by x
  • actually a little less
• Transistors per unit area go up by x^2
• Die size also tends to increase
  • typically by another factor of ~x
• Raw computing power of the chip goes up by ~x^4!
  • of which x^3 is devoted either to parallelism or locality
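For concreteness, a worked instance of the rule of thumb above with x = 2:

$$ \underbrace{2}_{\text{clock}} \times \underbrace{2^{2}}_{\text{density}} \times \underbrace{2}_{\text{die size}} = 16 = x^{4}, $$

of which only the factor of 2 from clock rate shows up as serial speed; the remaining x^3 = 8 has to be exploited through parallelism or locality.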
“Automatic” Parallelism in Modern Machines
• Bit-level parallelism
  • within floating point operations, etc.
• Instruction-level parallelism
  • multiple instructions execute per clock cycle
• Memory system parallelism
  • overlap of memory operations with computation
• OS parallelism
  • multiple jobs run in parallel on commodity SMPs
There are limits to all of these -- for very high performance, user must identify, schedule and coordinate parallel tasks
Number of transistors per processor chip

[Chart: transistors per processor chip (1,000 to 100,000,000, log scale) versus year (1970–2005), with points for the i4004, i8080, i8086, i80286, i80386, R2000, R3000, R10000, and Pentium]
Number of transistors per processor chip

[The same chart, annotated with the eras in which the additional transistors were spent: bit-level parallelism, then instruction-level parallelism, then (perhaps) thread-level parallelism]
Issues in parallel performance
Locality and Parallelism
• Large memories are slow, fast memories are small
• Storage hierarchies are large and fast on average
• Parallel processors, collectively, have large, fast cache
  • the slow accesses to “remote” data we call “communication”
• Algorithm should do most work on local data (see the cache-blocking sketch below)
[Diagram: a conventional storage hierarchy (Proc with cache, L2 cache, L3 cache, memory) beside a parallel machine in which several such hierarchies are joined by potential interconnects]
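A minimal illustration of “do most work on local data” (not from the slides; a standard cache-blocking sketch in C, with the tile size B an assumed tuning parameter):

```c
#include <stdlib.h>

#define B 64  /* block (tile) size, chosen so a few BxB tiles fit in cache */

/* Illustrative cache-blocked matrix multiply: C += A * Bm for n x n
 * row-major matrices. Each tile of A, Bm, and C is reused many times
 * while it is resident in fast memory, so most accesses are "local". */
static void matmul_blocked(int n, const double *A, const double *Bm, double *C)
{
    for (int ii = 0; ii < n; ii += B)
        for (int kk = 0; kk < n; kk += B)
            for (int jj = 0; jj < n; jj += B)
                for (int i = ii; i < ii + B && i < n; i++)
                    for (int k = kk; k < kk + B && k < n; k++) {
                        double aik = A[i * n + k];
                        for (int j = jj; j < jj + B && j < n; j++)
                            C[i * n + j] += aik * Bm[k * n + j];
                    }
}
```

The same idea carries over to the parallel case: a tile owned by a processor is worked on repeatedly before any slow “remote” data has to be touched.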
Finding Enough Parallelism: Amdahl’s Law
• Suppose only part of an application seems parallel
• Amdahl’s law:
• Let s be the fraction of work done sequentially, so (1-s) is the fraction parallelizable
• Let P = number of processors
Speedup(P) = Time(1)/Time(P)
<= 1/(s + (1-s)/P)
<= 1/s
• Even if the parallel part speeds up perfectly, the sequential part limits overall performance.
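A tiny numeric check of the bound (a sketch; s = 0.05 is just an assumed sequential fraction):

```c
#include <stdio.h>

/* Amdahl's law: speedup(P) = 1 / (s + (1 - s) / P), capped at 1/s. */
static double amdahl(double s, int p) {
    return 1.0 / (s + (1.0 - s) / p);
}

int main(void) {
    double s = 0.05;                      /* assumed: 5% of the work is sequential */
    int procs[] = {1, 10, 100, 1000};

    for (int i = 0; i < 4; i++)
        printf("P = %4d  speedup <= %.2f\n", procs[i], amdahl(s, procs[i]));
    printf("limit (1/s) = %.1f\n", 1.0 / s);   /* 20.0: the sequential fraction wins */
    return 0;
}
```

Even with 1000 processors the speedup cannot exceed 1/s = 20.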
Load Imbalance
• Load imbalance is the time that some processors in the system are idle due to:
  • insufficient parallelism (during that phase)
  • unequal size tasks
• Examples of the latter:
  • adapting to “interesting parts of a domain”
  • tree-structured computations
  • fundamentally unstructured problems
• Algorithm needs to balance load (a small numeric illustration follows)
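A minimal sketch (not from the slides) of the cost of unequal task sizes: a phase finishes only when its most heavily loaded processor does, so everyone else idles.

```c
#include <stdio.h>

/* Illustrative only: per-processor work for one phase, in arbitrary units. */
int main(void) {
    double work[] = {100.0, 100.0, 100.0, 180.0};   /* one overloaded processor */
    int p = 4;
    double total = 0.0, max = 0.0;

    for (int i = 0; i < p; i++) {
        total += work[i];
        if (work[i] > max) max = work[i];
    }
    /* The phase takes as long as the busiest processor; the rest sit idle. */
    printf("parallel time   = %.1f\n", max);                       /* 180   */
    printf("perfect balance = %.1f\n", total / p);                 /* 120   */
    printf("efficiency      = %.0f%%\n", 100.0 * total / (p * max)); /* ~67% */
    return 0;
}
```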
Parallel computers, yesterday and today
• Dead supercomputers
• Top 500 list
• Flashmob computing (!?)
Parallel Computing Today
Small class Beowulf cluster
Japanese Earth Simulator machine
Course organization
Course overview
• Key ideas:
  • Algorithms
  • Programming models
  • Performance
• Course outline – see home page
Resources
• Course home page: http://www.sdsc.edu/~allans/cs260/cs260.htm
• Computing resources:
  • A 128 multi-streaming-processor (MSP) Cray X1 with 512 GB of memory and 21 terabytes of disk. The X1, named Klondike, is at the Arctic Region Supercomputing Center (ARSC).
  • A 1632-processor IBM Power4 SP, DataStar (SDSC)
• Return the course questionnaire so we can create accounts!
• No textbook – see course homepage for references
Requirements
• Four 2-week homework assignments
  • First one is assigned today!!!!!
  • Individual effort
  • 40% of course grade
• Final project
  • Significant parallel programming project
  • Teams of three
  • Teams should be interdisciplinary (this is how real parallel software is built)
  • 50% of course grade
• Scribe notes for one lecture
  • Due one week after lecture
  • Sign up for a day to scribe
  • 10% of course grade for scribing and class participation