Overview of SPEC HPG Benchmarks
SPEC BOF, SC2003

Matthias Müller, High Performance Computing Center Stuttgart ([email protected])
Kumaran Kalyanasundaram, G. Gaertner, W. Jones, R. Eigenmann,
R. Lieberman, M. van Waveren, and B. Whitney
SPEC High Performance Group
Outline
• Some general remarks about benchmarks
• Benchmarks currently produced by SPEC HPG:
  – OMP
  – HPC2002
Where is SPEC Relative to Other Benchmarks?
There are many metrics; each one has its purpose. They span the whole range from raw computer hardware to user applications:
• Raw machine performance: Tflops
• Microbenchmarks: STREAM (see the triad sketch below)
• Algorithmic benchmarks: Linpack
• Compact apps/kernels: NAS benchmarks
• Application suites: SPEC
• User-specific applications: custom benchmarks
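To make concrete what a microbenchmark measures, here is a minimal sketch of a STREAM-style "triad" loop in C. It is illustrative only, not the official STREAM code (http://www.cs.virginia.edu/stream/), and the array size is an assumption chosen to exceed typical caches.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000L   /* assumed size, large enough to defeat caches */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];               /* the triad kernel */
    double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* three arrays of 8-byte elements stream through memory */
    printf("triad bandwidth: %.1f MB/s\n",
           3.0 * N * sizeof(double) / sec / 1e6);
    free(a); free(b); free(c);
    return 0;
}
```

Such a loop stresses sustainable memory bandwidth rather than peak arithmetic rate, which is why it sits at the hardware end of the spectrum above.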
Why do we need benchmarks?
• Identify problems: measure machine properties
• Time evolution: verify that we make progress
• Coverage: help the vendors to have representative codes
  – Increase competition by transparency
  – Drive future development (see SPEC CPU2000)
• Relevance: help the customers to choose the right computer
Comparison of different benchmark classes
Class        Coverage   Relevance   Identify problems   Time evolution
Micro        0          0           ++                  +
Algorithmic  -          0           +                   ++
Kernels      0          0           +                   +
SPEC         +          +           +                   +
Apps         -          ++          0                   0
SPEC OMP
• Benchmark suite developed by SPEC HPG
• Performance testing of shared-memory multiprocessor systems
• Uses OpenMP versions of SPEC CPU2000 benchmarks (see the loop sketch below)
• Mixes integer and floating-point codes in one suite
• OMPM is focused on 4-way to 16-way systems
• OMPL is targeting 32-way and larger systems
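To illustrate the programming model, here is a minimal loop-level OpenMP example in the style of the suite's stencil codes; it is a hedged sketch, not actual SPEC benchmark source, and the grid size is an invented placeholder.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000   /* assumed grid size for illustration */

static double a[N][N], b[N][N];

int main(void)
{
    /* initialize the grid in parallel */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = (double)(i + j);

    /* stencil sweep: rows are independent, so the outer loop
       is divided among the available threads */
    #pragma omp parallel for
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            b[i][j] = 0.25 * (a[i-1][j] + a[i+1][j]
                            + a[i][j-1] + a[i][j+1]);

    printf("threads: %d, sample value: %f\n",
           omp_get_max_threads(), b[N/2][N/2]);
    return 0;
}
```

Compile with an OpenMP-capable compiler (e.g. `gcc -fopenmp`); the same source still runs serially if the directives are ignored, which is what makes this style of parallelization portable.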
SPEC HPC2002 Benchmark
• Full application benchmarks (including I/O) targeted at HPC platforms
• Currently three applications:
  – SPECenv: weather forecasting
  – SPECseis: seismic processing, used in the search for oil and gas
  – SPECchem: computational chemistry, used in the chemical and pharmaceutical industries (based on GAMESS)
• Serial and parallel (OpenMP and/or MPI) execution; a hybrid skeleton follows this list
• All codes include several data sizes
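The MPI-and-OpenMP combination mentioned above is the usual hybrid pattern: MPI ranks across nodes, OpenMP threads within each rank. A minimal skeleton under those assumptions, with an invented reduction standing in for a real solver:

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;
    /* FUNNELED: only the main thread of each rank makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;
    /* each rank works on its own slice with OpenMP threads */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1e-6 * i;                 /* placeholder workload */

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d ranks x %d threads, result %g\n",
               nranks, omp_get_max_threads(), global);
    MPI_Finalize();
    return 0;
}
```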
Submitted Results
[Chart: number of submitted results per year, 2001-2003, for OMPM, OMPL, and HPC2002; vertical axis 0-40]
Details of SPEC OMP
SPEC OMP Applications
Code      Application                       Language   Lines
ammp      Molecular dynamics                C          13500
applu     CFD, partial LU                   Fortran     4000
apsi      Air pollution                     Fortran     7500
art       Image recognition / neural nets   C           1300
fma3d     Crash simulation                  Fortran    60000
gafort    Genetic algorithm                 Fortran     1500
galgel    CFD, Galerkin FE                  Fortran    15300
equake    Earthquake modeling               C           1500
mgrid     Multigrid solver                  Fortran      500
swim      Shallow water modeling            Fortran      400
wupwise   Quantum chromodynamics            Fortran     2200
CPU2000 vs. OMPL2001

Characteristic        CPU2000               OMPL2001
Max. working set      200 MB                6.5 GB
Memory needed         256 MB                8 GB
Benchmark runtime     30 min @ 300 MHz      9 hrs @ 300 MHz
Language              C, C++, F77, F90      C, F90, OpenMP
Focus                 Single CPU            > 16 CPU systems
System type           Cheap desktop         Engineering MP system
Runtime               24 hours              75 hours
Runtime on 1 CPU      24 hours              1000 hours
Run modes             Single and rate       Parallel
Number of benchmarks  26                    9
Iterations            Median of 3 or more   2 or more
Source mods           Not allowed           Allowed
Baseline flags        Max of 4              Any, same for all
Reference system      1 CPU @ 300 MHz       16 CPUs @ 300 MHz
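As with other SPEC suites, each benchmark's performance is expressed as a ratio of the reference system's run time to the measured run time, and the per-benchmark ratios are combined with a geometric mean (the published metric additionally applies a fixed scaling constant). A sketch of that arithmetic, with made-up times; the official SPEC tools compute this for real submissions:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* hypothetical reference and measured times in seconds */
    double t_ref[] = { 5200.0, 4100.0, 6100.0 };
    double t_run[] = {  480.0,  610.0,  400.0 };
    const int n = 3;

    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(t_ref[i] / t_run[i]);   /* per-benchmark ratio */

    /* geometric mean of the ratios (link with -lm) */
    printf("suite score ~ %.1f\n", exp(log_sum / n));
    return 0;
}
```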
SPEC OMPL Results: Applications scaling to 128 CPUs
SPEC OMPL Results: Superlinear scaling of applu
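The superlinear behavior in this plot has a common explanation: once the per-CPU share of the working set fits into cache, efficiency can exceed 1. A sketch of the speedup arithmetic behind such scaling plots, with invented timings:

```c
#include <stdio.h>

int main(void)
{
    /* hypothetical run times for a fixed problem size */
    int    cpus[] = { 16, 32, 64, 128 };
    double secs[] = { 3200.0, 1500.0, 700.0, 330.0 };

    for (int i = 0; i < 4; i++) {
        double speedup = secs[0] / secs[i];          /* vs. 16-CPU run */
        double eff = speedup * cpus[0] / cpus[i];    /* parallel efficiency */
        printf("%3d CPUs: speedup %5.2fx, efficiency %4.2f%s\n",
               cpus[i], speedup, eff,
               eff > 1.0 ? "  (superlinear)" : "");
    }
    return 0;
}
```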
SPEC OMPL Results: Applications scaling to 64 CPUs
Details of SPEC HPC2002
SPEC ENV2002
• Based on WRF, a state-of-the-art, non-hydrostatic mesoscale weather model; see http://www.wrf-model.org
• The WRF (Weather Research and Forecasting) Modeling System development project is a multi-year project being undertaken by several agencies.
• Members of the WRF Scientific Board include representatives from EPA, FAA, NASA, NCAR, NOAA, NRL, USAF and several universities.
• 25,000 lines of C and 145,000 lines of Fortran 90
SPEC ENV2002
• Medium data set: SPECenvM2002
  – 260 x 164 x 35 grid over the continental United States
  – 22 km resolution
  – Full physics
  – I/O associated with startup and final result
  – Simulates weather for a 24-hour period starting Saturday, November 3rd, 2001 at 12:00 A.M.
• SPECenvS2002 is provided for benchmark researchers interested in smaller problems.
• Test and train data sets for porting and feedback.
• The benchmark runs use restart files created after the model has run for several simulated hours. This ensures that cumulus and microphysics schemes are fully developed during the benchmark runs.
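A quick back-of-the-envelope on the medium grid above; the field count is an assumption (WRF carries many prognostic and diagnostic fields), so this only indicates the order of magnitude:

```c
#include <stdio.h>

int main(void)
{
    long nx = 260, ny = 164, nz = 35;        /* SPECenvM2002 grid */
    long points = nx * ny * nz;
    /* one double-precision 3D field: */
    double mb_per_field = points * 8.0 / 1e6;
    printf("%ld grid points, %.1f MB per 3D field\n", points, mb_per_field);
    /* assuming on the order of 100 fields, total state is ~1 GB */
    printf("~100 fields -> ~%.1f GB\n", 100.0 * mb_per_field / 1e3);
    return 0;
}
```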
SPEC HPC2002 Results: SPECenv scaling
SPEC HPC2002 Results: SPECseis scaling
SPEC HPC2002 Results: SPECchem scaling
Hybrid Execution for SPECchem
Current and Future Work of SPEC HPG
• SPEC HPC:
  – Update of SPECchem
  – Improving portability, including tools
  – Larger data sets
• New release of SPEC OMP:
  – Inclusion of alternative sources
  – OMPM and OMPL merged onto one CD
Adoption of new benchmark codes
• Remember that we need to drive future development!
• Updates and new codes are important to stay relevant.
• Possible candidates:
  – Should represent a type of computation that is regularly performed on HPC systems
  – We are currently examining CPU2004 for candidates
  – Your applications are very welcome!
Please contact SPEC HPG or me <[email protected]> if you have a code for us.
Conclusion and Summary
• Results of OMPL and HPC2002 show that many programs scale to 128 CPUs
• Larger data sets show better scalability
• SPEC HPC will continue to update and improve the benchmark suites so that they remain representative of the work you do with your applications!
BACKUP
What is SPEC?
The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops suites of benchmarks and also reviews and publishes submitted results from our member organizations and other benchmark licensees.
For more details see http://www.spec.org
SPEC HPG = SPEC High-Performance Group
• Founded in 1994
• Mission: to establish, maintain, and endorse a suite of benchmarks that are representative of real-world high-performance computing applications
• SPEC HPG includes members from both industry and academia
• Benchmark products:
  – SPEC OMP (OMPM2001, OMPL2001)
  – SPEC HPC2002, released at SC2002
Currently active SPEC HPG Members
• Fujitsu
• HP
• IBM
• Intel
• SGI
• Sun
• Unisys
• Purdue University
• University of Stuttgart
SPEC Members
• Members: 3DLabs * Advanced Micro Devices * Apple Computer, Inc. * ATI Research * Azul Systems, Inc. * BEA Systems * Borland * Bull S.A. * Dell * Electronic Data Systems * EMC * Encorus Technologies * Fujitsu Limited * Fujitsu Siemens * Fujitsu Technology Solutions * Hewlett-Packard * Hitachi Data Systems * IBM * Intel * ION Computer Systems * Johnson & Johnson * Microsoft * Mirapoint * Motorola * NEC - Japan * Network Appliance * Novell, Inc. * Nvidia * Openwave Systems * Oracle * Pramati Technologies * PROCOM Technology * SAP AG * SGI * Spinnaker Networks * Sun Microsystems * Sybase * Unisys * Veritas Software * Zeus Technology
• Associates: Argonne National Laboratory * CSC - Scientific Computing Ltd. * Cornell University * CSIRO * Defense Logistics Agency * Drexel University * Duke University * Fachhochschule Gelsenkirchen, University of Applied Sciences * Harvard University * JAIST * Leibniz Rechenzentrum - Germany * Los Alamos National Laboratory * Massey University, Albany * NASA Glenn Research Center * National University of Singapore * North Carolina State University * PC Cluster Consortium * Purdue University * Queen's University * Seoul National University * Stanford University * Technical University of Darmstadt * Tsinghua University * University of Aizu - Japan * University of California - Berkeley * University of Edinburgh * University of Georgia * University of Kentucky * University of Illinois - NCSA * University of Maryland * University of Miami * University of Modena * University of Nebraska - Lincoln * University of New Mexico * University of Pavia * University of Pisa * University of South Carolina * University of Stuttgart * University of Tsukuba * Villanova University * Yale University
CPU2000 vs. OMPM2001
Characteristic        CPU2000               OMPM2001
Max. working set      200 MB                1.6 GB
Memory needed         256 MB                2 GB
Benchmark runtime     30 min @ 300 MHz      5 hrs @ 300 MHz
Language              C, C++, F77, F90      C, F90, OpenMP
Focus                 Single CPU            < 16 CPU systems
System type           Cheap desktop         MP workstation
Runtime               24 hours              34 hours
Runtime on 1 CPU      24 hours              140 hours
Run modes             Single and rate       Parallel
Number of benchmarks  26                    11
Iterations            Median of 3 or more   Worst of 2, median of 3
Source mods           Not allowed           Allowed
Baseline flags        Max of 4              Any, same for all
Reference system      1 CPU @ 300 MHz       4 CPUs @ 350 MHz
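The OMPM2001 iteration rule in the table ("worst of 2, median of 3") picks the reported time conservatively. A small sketch of that selection logic, with placeholder run times:

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *p, const void *q)
{
    double x = *(const double *)p, y = *(const double *)q;
    return (x > y) - (x < y);
}

/* returns the run time that gets scored for one benchmark */
static double scored_time(double *t, int n)
{
    qsort(t, n, sizeof *t, cmp);   /* sort ascending */
    if (n == 2)
        return t[1];               /* worst of 2 */
    return t[n / 2];               /* median, e.g. middle of 3 */
}

int main(void)
{
    double runs[] = { 512.0, 498.0, 505.0 };   /* hypothetical */
    printf("scored time: %.1f s\n", scored_time(runs, 3));
    return 0;
}
```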
Program Memory Footprints
Program   OMPM2001 (Mbytes)   OMPL2001 (Mbytes)
wupwise   1480                5280
swim      1580                6490
mgrid      450                3490
applu     1510                6450
galgel     370                -
equake     860                5660
apsi      1650                5030
gafort    1680                1700
fma3d     1020                5210
art       2760                10670
ammp       160                -

(galgel and ammp are not part of the OMPL2001 suite)
SPEC ENV2002 – data generation
• The WRF datasets used in SPEC ENV2002 are created using the WRF Standard Initialization (SI) software and standard sets of data used in numerical weather prediction.
• The benchmark runs use restart files that are created after the model has run for several simulated hours. This ensures that cumulus and microphysics schemes are fully developed during the benchmark runs.
SPECenv execution models on a Sun Fire 6800
• Medium scales better
• OpenMP best for small size
• MPI best for medium size
SPECseis execution models on a Sun Fire 6800
• Medium scales better
• OpenMP scales better than MPI
SPECchem execution models on a Sun Fire 6800
• Medium shows better scalability
• MPI is better than OpenMP
SPEC OMP Results
• 75 submitted results for OMPM
• 28 submitted results for OMPL
Vendor          HP           HP           Sun              SGI
Architecture    Superdome    Superdome    Fire 15K         Origin 3800
CPU             PA-8700+     Itanium 2    UltraSPARC III   R12000
Clock speed     875 MHz      1500 MHz     1200 MHz         400 MHz
L1 instruction  0.75 MB      16 KB        32 KB            32 KB
L1 data         1.5 MB       16 KB        64 KB            32 KB
L2              -            256 KB       8 MB             8 MB
L3              -            6144 KB      -                -