Transcript
Page 1: Linux Cluster Production Readiness Egan Ford IBM egan@us.ibm.com egan@sense.net

Linux Cluster Production Readiness

Egan Ford, IBM
egan@us.ibm.com
egan@sense.net

Page 2:

Agenda

• Production Readiness

• Diagnostics

• Benchmarks

• STAB

• Case Study

• SCAB

Page 3:

What is Production Readiness?

• Production readiness is a series of tests to help determine if a system is ready for use.

• Production readiness falls into two categories:
– diagnostic
– benchmark

• The purpose is to confirm that all hardware is good and identical (per class).

• The search for consistency and predictability.

Page 4:

What are diagnostics?

• Diagnostic tests are usually pass/fail and include, but are not limited to:
– simple version checks
• OS, BIOS versions
– inventory checks
• Memory, CPU, etc.
– configuration checks
• Is HT off?
– vendor-supplied diagnostics
• DOS on a CD

Page 5:

Why benchmark?

• Diagnostics are usually pass/fail.
– Thresholds may be undocumented.
– ‘Why’ is difficult to answer.

• Diagnostics may be incomplete.
– They may not test all subsystems.

• Other issues with diagnostics:
– False positives.
– Inconsistent from vendor to vendor.
– They do no real work, so they cannot check for accuracy.
– Usually hardware based.

• What about software?
• What about the user environment?

Page 6:

Why benchmark?

• Benchmarks can be checked for accuracy.

• Benchmarks can stress all used subsystems.

• Benchmarks can stress all used software.

• Benchmarks can be measured and you can determine the thresholds.

Page 7:

Benchmark or diagnostics?

• Do both.

• All diagnostics should pass first.

• Benchmarks will be inconsistent if diagnostics fail.

Page 8:

WARNING!

• The following slides will contain the word ‘statistics’.

• Statistics cannot prove anything.

• Exercise common sense.

Page 9:

A few words on statistics

• Statistics increases human knowledge through the use of empirical data.

• “There are three kinds of lies: lies, damned lies and statistics.”

-- Benjamin Disraeli (1804-1881)

• “There are three kinds of lies: lies, damned lies and linpack.”

Page 10:

What is STAB?

• STatistical Analysis of Benchmarks
• A systematic way of running a series of increasingly complex benchmarks to find avoidable inconsistencies.

• Avoidable inconsistencies may lead to performance problems.

• GOAL: consistent, repeatable, accurate results.

Page 11:

What is STAB?

• Each benchmark is run one or more times per node, then the best representative of each node (ignored for multi-node tests) is grouped together and analyzed as a single population.  The results are not as interesting as the shape of the distribution of the results.  Empirical evidence for all the benchmarks in the STAB HOWTO suggests that they should all form a normal distribution.

• A normal distribution is the classic bell curve that appears so frequently in statistics.  It is the sum of smaller, independent (may be unobservable), identically-distributed variables or random events.

Page 12:

Uniform Distribution

• The plot below is of 20000 random dice rolls.

Page 13:

Normal Distribution

• Sum of 5 dice thrown 10000 times.
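The two dice plots can be reproduced with a short simulation. A sketch (illustrative only, not part of the STAB toolkit): a single die gives a flat, uniform histogram, while the sum of 5 dice piles up around its mean.

```python
# Illustrative sketch: uniform vs. normal-looking distributions from dice.
# (Not part of the STAB toolkit.)
import random
from collections import Counter

random.seed(42)

# 20000 single-die rolls: each face lands roughly equally often (uniform).
single = Counter(random.randint(1, 6) for _ in range(20000))

# Sum of 5 dice, thrown 10000 times: the sums cluster around 17.5 (= 5 * 3.5),
# forming the bell curve the slide shows.
sums = Counter(sum(random.randint(1, 6) for _ in range(5))
               for _ in range(10000))

print(min(single.values()), max(single.values()))  # all six counts near 3333
print(sums.most_common(3))                         # modes near 17 and 18
```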

Page 14:

Normal Distribution

• Benchmarks also have many small independent (may be unobservable) identically-distributed variables that may affect performance, e.g.:
– Competing processes
– Context switching
– Hardware interrupts
– Software interrupts
– Memory management
– Process/Thread scheduling
– Cosmic rays

• The above may be unavoidable, but it is in part the source of a normal distribution.

Page 15:

Non-normal Distribution

• Benchmarks may also have non-identically-distributed observable variables that may affect performance, e.g.:
– Memory configuration
– BIOS version
– Processor speed
– Operating system
– Kernel type (e.g. NUMA vs SMP vs UNI)
– Kernel version
– Bad memory (e.g. excessive ECCs)
– Chipset revisions
– Hyper-Threading or SMT
– Non-uniform competing processes (e.g. httpd running on some nodes, but not others)
– Shared library versions
– Bad cables
– Bad administrators
– Users

• The above is avoidable, and avoiding it is the purpose of the STAB HOWTO.  Avoidable inconsistencies may lead to multimodal or non-normal distributions.

Page 16:

STAB Toolkit

• The STAB Tools are a collection of scripts to help run selected benchmarks and to analyze their results.
– Some of the tools are specific to a particular benchmark.
– Others are general and operate on the data collected by the specific tools.

• Benchmark-specific tools consist of benchmark launch scripts, accuracy validation scripts, miscellaneous utilities, and analysis scripts that collect the data, report some basic descriptive statistics, and create input files to be used with the general STAB tools for additional analysis.

Page 17:

STAB Toolkit

• With a goal of consistent, repeatable, accurate results, it is best to start with as few variables as possible.  Start with single node benchmarks, e.g., STREAM.  If all machines have similar STREAM results, then memory can be ruled out as a factor in other benchmark anomalies.  Next, work your way up to processor and disk benchmarks, then two node (parallel) benchmarks, then multi-node (parallel) benchmarks.  After each more complicated benchmark run, check for consistent, repeatable, accurate results before continuing.

Page 18:

The STAB Benchmarks

• Single Node (serial) Benchmarks:
– STREAM (memory MB/s)
– NPB Serial (uni-processor FLOP/s and memory)
– NPB OpenMP (multi-processor FLOP/s and memory)
– HPL MPI Shared Memory (multi-processor FLOP/s and memory)
– IOzone (disk MB/s, memory, and processor)

• Parallel Benchmarks (for MPI systems only):
– Ping-Pong (interconnect µsec and MB/s)
– NAS Parallel (multi-node FLOP/s, memory, and interconnect)
– HPL Parallel (multi-node FLOP/s, memory, and interconnect)

Page 19:

Getting STAB

• http://sense.net/~egan/bench
– bench.tgz
• Code with source (all script)
– bench-oss.tgz
• OSS code (e.g. Gnuplot)
– bench-examples.tgz
• 1GB of collected data (all text, 186000+ files)
– stab.pdf (currently 150 pages)
• Documentation (WIP, check back before 11/30/2005)

Page 20:

Install STAB

• Extract bench*.tgz into home directory:

cd ~
tar zxvf bench.tgz
tar zxvf bench-oss.tgz
tar zxvf bench-examples.tgz

• Add STAB tools to PATH:

export PATH=~/bench/bin:$PATH

• Append to .bashrc:

export PATH=~/bench/bin:$PATH

Page 21:

Install STAB

• STAB requires Gnuplot 4 and it must be built a specific way:

cd ~/bench/src
tar zxvf gnuplot-4.0.0.tar.gz
cd gnuplot-4.0.0
./configure --prefix=$HOME/bench --enable-thin-splines
make
make install

Page 22:

STAB Benchmark Tools

• Each benchmark supported in this document contains an anal (short for analysis) script.  This script is usually run from an output directory, e.g.:

cd ~/bench/benchmark/output
../anal

benchmark nodes low high % mean median std dev
bt.A.i686 4 615.77 632.08 2.65 627.85 632.02 8.06
cg.A.i686 4 159.78 225.08 40.87 191.05 193.16 26.86
ep.A.i686 4 11.51 11.53 0.17 11.52 11.52 0.01
ft.A.i686 4 448.05 448.90 0.19 448.63 448.81 0.39
lu.A.i686 4 430.60 436.59 1.39 433.87 434.72 2.51
mg.A.i686 4 468.12 472.54 0.94 470.86 472.12 2.00
sp.A.i686 4 449.01 449.87 0.19 449.58 449.72 0.39

• The anal scripts produce statistics about the results to help find anomalies.  The theory is that if you have identical nodes then you should be able to obtain identical results (not always true).  The anal scripts will also produce plot.* files for use by dplot to graphically represent the distribution of the results, and by cplot to plot 2D correlations.
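The statistics an anal script reports (low, high, % spread, mean, median, standard deviation) can be sketched from any one column of results. This is an illustrative reimplementation, not the actual anal script; the sample values are the ep.A.i686 row above.

```python
# Illustrative sketch of anal-style descriptive statistics (not the real script).
import statistics

def describe(results):
    low, high = min(results), max(results)
    return {
        "low": low,
        "high": high,
        "pct": 100.0 * (high - low) / low,   # spread as a % of the low value
        "mean": statistics.mean(results),
        "median": statistics.median(results),
        "stdev": statistics.stdev(results),  # sample standard deviation
    }

# Four hypothetical per-node results, matching the table's ep.A.i686 row.
stats = describe([11.51, 11.52, 11.52, 11.53])
print(stats)  # pct rounds to 0.17, mean/median 11.52, stdev rounds to 0.01
```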

Page 23:

Rant: % vs. normal distribution

• % is good?
– % variability can tell you something about the data with respect to itself, without knowing anything about the data.
– It is non-dimensional, with a range (usually 0-100) that has meaning to anyone.
– IOW, management understands percentages.

• % is not good?
– It minimizes the amount of useful empirical data.
– It hides the truth.

Page 24:

% is not good, exhibit A

• Clearly this is a normal distribution, but the variability is 500%.  This is an extreme case where all the possible values exist for a predetermined range.

Page 25:

% is not good, exhibit B

• Low variability can hide a skewed distribution.  Variability is low, only 1.27%.  But the distribution is clearly skewed to the right.

Page 26:

% is not good, exhibit C

• A 5.74% variability hides a bimodal distribution.  Bimodal distributions are clear indicators that there is an observable difference between two different sets of nodes.

Page 27:

STAB General Analysis Tools

• dplot is for plotting distributions.
– All the graphical output used as illustrations in this document up to this point was created with dplot.
– dplot provides a number of options for binning the data and analyzing the distribution.

• cplot is for correlating two different sets of results.
– E.g., does poor memory performance correlate to poor application performance?

• danal produces output very similar to that of the custom anal scripts provided with each benchmark, but has additional output options.
– You can safely discard any anal screen output because it can be recreated with danal and the resulting plot.benchmark file.

• Each script will require one or more plot.benchmark files.
– dplot and danal are less strict and will work with any file of numbers as long as the numbers are in the first column; subsequent columns are ignored.
– cplot, however, requires the 2nd column; it is impossible to correlate two sets of results without an index.

Page 28:

dplot

• The first argument to dplot must be the number of bins, auto, or whole.  auto (or a) will use the square root of the number of results to determine the bin sizes and is usually the best place to start.  whole (or w) should only be used if your results are whole numbers and the data contains all possible values between low and high.  This is only useful for creating plots like the dice examples at the beginning of this document.

• The second argument is the plotfile.  The plotfile must contain one value per line in the first column; subsequent columns are ignored.  The order of the data is unimportant.
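The auto binning rule can be sketched as follows (an illustrative reimplementation, not dplot itself): take the square root of the number of results as the bin count, then count values into equal-width bins between low and high.

```python
# Illustrative sketch of dplot's "auto" rule: sqrt(n) equal-width bins.
import math

def auto_bins(values):
    nbins = max(1, round(math.sqrt(len(values))))
    low, high = min(values), max(values)
    width = (high - low) / nbins or 1.0     # guard against all-equal values
    counts = [0] * nbins
    for v in values:
        # Clamp so the maximum value lands in the last bin, not one past it.
        i = min(int((v - low) / width), nbins - 1)
        counts[i] += 1
    return counts

# 100 values -> 10 bins; every value is counted exactly once.
counts = auto_bins([float(x) for x in range(100)])
print(len(counts), sum(counts))
```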

Page 29:

dplot a numbers.1000

Page 30:

dplot a numbers.1000 -n

Page 31:

dplot 19 numbers.1000 -n

Page 32:

dplot a plot.c.ppc64 -bi

Page 33:

dplot a plot.c.ppc64 -bi -std

Page 34:

dplot a plot.c.ppc64 -text

108 +--------------[]--------------------------------+ 0.22

| [] |

| [] |

| [] |

86 +--------------[]--------------------------------+ 0.18

| [][] |

| ::[][] |

| [][][] |

65 +------------[][][]------------------------------+ 0.13

| [][][] |

| [][][] |

| [][][].. |

43 +------------[][][][]----------------------------+ 0.09

| [][][][] |

| [][][][] |

| ::[][][][][] |

22 +----------[][][][][][]--------------------------+ 0.05

| [][][][][][] [].... |

| [][][][][][][]:: ..[][][][][].. |

| ..::::[][][][][][][][]::..[][][][][][][][][] |

0 +-------+-------+-------+-------+-------+-------++ 0.00

2023 2046 2068 2090 2112 2134 2156

Page 35:

GUI vs Text

Page 36:

dplot a plot.c_omp.ppc64 -n -chi

Page 37:

chi-squared and scale

Page 38:

Abusing chi-squared

$ findn plot.c_omp.ppc64

X^2: 26.75, scale: 0.43, bins: 21, normal distribution probability: 14.30%
X^2: 13.29, scale: 0.25, bins: 12, normal distribution probability: 27.50%
X^2: 24.34, scale: 0.45, bins: 22, normal distribution probability: 27.70%
X^2: 22.04, scale: 0.41, bins: 20, normal distribution probability: 28.20%
X^2: 4.65, scale: 0.12, bins: 6, normal distribution probability: 46.00%
X^2: 8.68, scale: 0.21, bins: 10, normal distribution probability: 46.70%
X^2: 16.79, scale: 0.37, bins: 18, normal distribution probability: 46.90%
X^2: 12.52, scale: 0.29, bins: 14, normal distribution probability: 48.50%
X^2: 16.77, scale: 0.39, bins: 19, normal distribution probability: 53.90%
X^2: 8.55, scale: 0.23, bins: 11, normal distribution probability: 57.50%
X^2: 12.33, scale: 0.31, bins: 15, normal distribution probability: 58.00%
X^2: 13.25, scale: 0.33, bins: 16, normal distribution probability: 58.30%
X^2: 2.84, scale: 0.1, bins: 5, normal distribution probability: 58.40%
X^2: 10.22, scale: 0.27, bins: 13, normal distribution probability: 59.70%
X^2: 6.27, scale: 0.19, bins: 9, normal distribution probability: 61.70%
X^2: 1.36, scale: 0.08, bins: 4, normal distribution probability: 71.60%
X^2: 11.28, scale: 0.35, bins: 17, normal distribution probability: 79.20%
X^2: 3.36, scale: 0.17, bins: 8, normal distribution probability: 85.00%
X^2: 2.27, scale: 0.14, bins: 7, normal distribution probability: 89.30%
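The X^2 values findn reports come from a chi-squared goodness-of-fit test of the binned results against a fitted normal curve. A rough sketch of such a test (the binning and fitting details here are assumptions, and converting X^2 to a probability is omitted; this is not the actual findn script):

```python
# Rough sketch of a chi-squared goodness-of-fit statistic vs. a fitted normal.
# (Assumed binning/fitting details; not the actual findn script.)
import math
import random
import statistics

def chi_squared_vs_normal(values, nbins):
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    low, high = min(values), max(values)
    width = (high - low) / nbins
    # Normal CDF via the error function.
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    n, chi2 = len(values), 0.0
    for i in range(nbins):
        a, b = low + i * width, low + (i + 1) * width
        observed = sum(1 for v in values
                       if a <= v < b or (i == nbins - 1 and v == high))
        expected = n * (cdf(b) - cdf(a))
        if expected > 0:
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Near-normal data should yield a small X^2 relative to the chi-squared
# critical value for the degrees of freedom (bins - 3 when mu/sigma are fitted).
random.seed(1)
data = [random.gauss(100, 10) for _ in range(500)]
chi2 = chi_squared_vs_normal(data, 8)
print(round(chi2, 2))
```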

Page 39:

Abusing chi-squared

Page 40:

cplot

• cplot, or correlation plot, is a perl front-end to Gnuplot to graphically represent the correlation between any two sets of indexed numbers.
• Correlation measures the relationship between two sets of results, e.g. processor performance and memory throughput.
• Correlations are often expressed as a correlation coefficient; a numerical value with a range from -1 to +1.
• A positive correlation would indicate that if one set of results increased, the other set would increase, e.g. better memory throughput increases processor performance.
• A negative correlation would indicate that if one set of results increases, the other set would decrease, e.g. better processor performance decreases latency.
• A correlation of zero would indicate that there is no relationship at all; IOW, they are independent.
• Any two sets of results with a non-zero correlation are considered dependent; however, a check should be performed to determine if a dependent set of results is statistically significant.
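The coefficient described above is the standard Pearson correlation; a minimal sketch (illustrative, not the actual cplot/findc code):

```python
# Minimal sketch of the Pearson correlation coefficient (range -1 to +1).
# (Illustrative; not the actual cplot/findc implementation.)
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # ~ +1: both rise together
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))  # ~ -1: one rises, the other falls
```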

Page 41:

cplot

• A strong correlation between two sets of results should produce more questions, not quick answers.

• It is possible for two unrelated results to have a strong correlation because they share something in common.
– E.g., you can show a positive correlation between the sales of skis and snowboards.  It is unlikely that increased ski sales increased snowboard sales; the most likely cause is an increase in the snow depth (or a decrease in temperature) at your local resort, i.e., something that is in common.  The correlation is valid, but it does not prove the cause of the correlation.

Page 42:

cplot plot.c.ppc64 plot.cg.B.ppc64

Page 43:

cplot plot.c.ppc64 plot.mg.B.ppc64

Page 44:

Correlation of temperature to memory performance

Page 45:

Correlation of 100 random numbers

Page 46:

Statistical Significance

Page 47:

Statistical Significance

Page 48:

Case Study

• 484 JS20 blades
– dual PPC970
– 2GB RAM

• Myrinet D
– Full Bisection Switch

• Cisco GigE
– 14:1 oversubscribed

Page 49:

Diagnostics

• Vendor supplied (passed)

• BIOS versions (failed)

• Inventory
– Number of CPUs (passed)
– Total Memory (failed)

• OS/Kernel Versions (passed)

Page 50:

BIOS Versions (failed)

• All nodes but node443 have BIOS dated 10/21/04. node443 is dated 09/02/2004.

• Inconsistent BIOS versions can affect performance.

Command output:

# rinv compute all | tee /tmp/foo
# cat /tmp/foo | grep BIOS | awk '{print $4}' | sort | uniq
09/02/2004
10/21/2004

# cat /tmp/foo | grep BIOS | grep 09/02/2004
node433: VPD BIOS: 09/02/2004

Page 51:

Memory quantity (failed)

• All nodes except node224 have 2GB RAM.

Command output:

# psh compute free | grep Mem | awk '{print $3}' | sort | uniq
1460116
1977204
1977208

# psh compute free | grep Mem | grep 1460116
node224: Mem: 1460116 ...

Page 52:

STREAM

• The STREAM benchmark is a simple synthetic benchmark program that measures sustainable memory bandwidth (in MB/s) and the corresponding computation rate for simple vector kernels.

• STREAM C, FORTRAN, and C OMP are run 10 times on each node, then the best result from each node is taken to be used to compare consistency. Each result is also tested for accuracy.
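The best-of-10 reduction described above can be sketched as follows; the node names and MB/s values are hypothetical:

```python
# Sketch: reduce raw per-run results to the best result per node, as described
# above. (Hypothetical node names and MB/s values.)
raw = {
    ("node001", 1): 2061.2, ("node001", 2): 2055.9, ("node001", 3): 2068.4,
    ("node002", 1): 2031.4, ("node002", 2): 2040.1, ("node002", 3): 2036.7,
}

best = {}
for (node, run), mbs in raw.items():
    # STREAM reports MB/s, so "best" means the maximum over a node's runs.
    if node not in best or mbs > best[node]:
        best[node] = mbs

print(best)  # one entry per node: the fastest of its runs
```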

Page 53:

STREAM validation results

• node483 failed the accuracy check on OMP run 3 of 10. Try replacing memory, processors, and then the system board, in that order.

Command output:

# cd ~/bench/stream/output.raw
# ../checkresults
checking stream_c_omp.ppc64.node483.3...failed

Page 54:

STREAM consistency results

# cd ~/bench/stream/output
# ../anal

stream results
benchmark nodes low high % mean median std dev
c.ppc64 484 2031.43 2147.98 5.74 2077.03 2069.02 23.20
c_omp.ppc64 484 1993.49 2124.24 6.56 2050.00 2050.51 22.86
f.ppc64 484 2007.16 2092.68 4.26 2039.20 2034.63 17.87

Page 55:

NAS Serial

• The NAS Parallel Benchmarks (NPB) are a small set of programs designed to help evaluate the performance of parallel supercomputers. The benchmarks, which are derived from computational fluid dynamics (CFD) applications, consist of five kernels and three pseudo-applications.

• The NAS Serial Benchmarks are the same as the NAS Parallel Benchmarks except that MPI calls have been taken out and they run on one processor.

• bt.B, cg.B, ep.B, ft.B, lu.B, mg.B, and sp.B are run 5 times on each node, then the best result from each node is taken to be used to compare consistency. Each result is also tested for accuracy.

Page 56:

NAS Serial validation results

• node483 failed a number of tests. Try replacing memory, processors, and then the system board, in that order.

Command output:

# cd ~/bench/NPB3.2/NPB3.2-SER/output.raw
# ../checkresults
checking bt.B.ppc64.node483.1...failed
checking bt.B.ppc64.node483.2...failed
checking bt.B.ppc64.node483.3...failed
checking bt.B.ppc64.node483.4...failed
checking bt.B.ppc64.node483.5...failed
checking cg.B.ppc64.node483.4...failed
checking ep.B.ppc64.node483.3...failed
checking ft.B.ppc64.node483.1...failed
checking ft.B.ppc64.node483.2...failed
checking ft.B.ppc64.node483.3...failed
checking ft.B.ppc64.node483.4...failed
checking lu.B.ppc64.node483.1...failed
checking mg.B.ppc64.node483.1...failed
checking mg.B.ppc64.node483.3...failed
checking sp.B.ppc64.node483.1...failed
checking sp.B.ppc64.node483.2...failed
checking sp.B.ppc64.node483.3...failed
checking sp.B.ppc64.node483.4...failed
checking sp.B.ppc64.node483.5...failed

Page 57:

NAS Serial consistency results

# cd ~/bench/NPB3.2/NPB3.2-SER/output
# ../anal

NPB Serial
benchmark nodes low high % mean median std dev
bt.B.ppc64 484 1077.69 1099.28 2.00 1087.60 1087.67 4.67
cg.B.ppc64 484 40.93 45.30 10.68 41.94 41.38 1.31
ep.B.ppc64 484 9.88 10.07 1.92 9.96 9.96 0.04
ft.B.ppc64 484 480.87 503.33 4.67 487.07 486.23 3.71
lu.B.ppc64 484 516.88 579.25 12.07 543.08 542.88 12.46
mg.B.ppc64 484 618.16 654.23 5.84 638.31 638.85 6.76
sp.B.ppc64 484 530.48 556.67 4.94 541.01 540.77 3.99

Page 58:
Page 59:

How does memory correlate to performance?

Page 60:
Page 61:

Statistically significant?

• Command output:

$ findc plot* | grep plot.c.ppc64

0.13 0.13 00 plot.bt.B.ppc64 plot.c.ppc64
0.62 0.62 00 plot.c.ppc64 plot.c_omp.ppc64
0.93 0.93 00 plot.c.ppc64 plot.cg.B.ppc64
0.19 0.19 00 plot.c.ppc64 plot.ep.B.ppc64
0.89 0.89 00 plot.c.ppc64 plot.f.ppc64
0.17 0.17 00 plot.c.ppc64 plot.ft.B.ppc64
0.11 0.11 02 plot.c.ppc64 plot.lu.B.ppc64
0.50 -0.50 00 plot.c.ppc64 plot.mg.B.ppc64
0.05 -0.05 27 plot.c.ppc64 plot.sp.B.ppc64

Page 62:

NAS OMP

• The NAS OpenMP Benchmarks are the same as the NAS Parallel Benchmarks except that the MPI calls have been replaced with OpenMP calls to run on multiple processors on a shared memory system (SMP).

• bt.B, cg.B, ep.B, ft.B, lu.B, mg.B, and sp.B are run 5 times on each node, then the best result from each node is taken to be used to compare consistency. Each result is also tested for accuracy.

Page 63:

NAS OMP validation results

• node483 failed a number of tests. Try replacing memory, processors, and then the system board, in that order.

Command output:

# cd ~/bench/NPB3.2/NPB3.2-OMP/output.raw
# ../checkresults
checking bt.B.ppc64.node483.1...failed
checking bt.B.ppc64.node483.2...failed
checking bt.B.ppc64.node483.3...failed
checking bt.B.ppc64.node483.4...failed
checking bt.B.ppc64.node483.5...failed
checking ft.B.ppc64.node483.1...failed
checking ft.B.ppc64.node483.2...failed
checking ft.B.ppc64.node483.3...failed
checking ft.B.ppc64.node483.4...failed
checking ft.B.ppc64.node483.5...failed
checking lu.B.ppc64.node483.1...failed
checking lu.B.ppc64.node483.3...failed
checking lu.B.ppc64.node483.4...failed
checking mg.B.ppc64.node483.1...failed
checking mg.B.ppc64.node483.2...failed
checking mg.B.ppc64.node483.3...failed
checking mg.B.ppc64.node483.4...failed
checking mg.B.ppc64.node483.5...failed
checking sp.B.ppc64.node483.1...failed
checking sp.B.ppc64.node483.2...failed
checking sp.B.ppc64.node483.3...failed
checking sp.B.ppc64.node483.4...failed
checking sp.B.ppc64.node483.5...failed

Page 64:

NAS OMP consistency results

# cd ~/bench/NPB3.2/NPB3.2-OMP/output
# ../anal

NPB OpenMP
benchmark nodes low high % mean median std dev
bt.B.ppc64 484 1850.99 1898.65 2.57 1871.41 1870.45 9.25
cg.B.ppc64 484 67.31 73.30 8.90 68.96 68.44 1.49
ep.B.ppc64 484 19.69 20.36 3.40 19.88 19.88 0.09
ft.B.ppc64 484 593.39 615.77 3.77 604.74 604.61 4.06
lu.B.ppc64 484 739.30 820.71 11.01 773.09 772.05 16.76
mg.B.ppc64 484 751.40 819.38 9.05 792.03 797.10 15.26
sp.B.ppc64 484 722.73 824.39 14.07 745.99 747.33 8.51

Page 65:
Page 66:

How does memory correlate to performance?

Page 67:
Page 68:

Statistically significant?

• Command output:

$ findc plot* | grep plot.f.ppc64
0.37 0.37 00 plot.bt.B.ppc64 plot.f.ppc64
0.89 0.89 00 plot.c.ppc64 plot.f.ppc64
0.64 0.64 00 plot.c_omp.ppc64 plot.f.ppc64
0.77 0.77 00 plot.cg.B.ppc64 plot.f.ppc64
0.07 -0.07 12 plot.ep.B.ppc64 plot.f.ppc64
0.20 -0.20 00 plot.f.ppc64 plot.ft.B.ppc64
0.29 -0.29 00 plot.f.ppc64 plot.lu.B.ppc64
0.81 -0.81 00 plot.f.ppc64 plot.mg.B.ppc64
0.65 -0.65 00 plot.f.ppc64 plot.sp.B.ppc64

$ findc plot* | grep plot.c_omp.ppc64
0.29 0.29 00 plot.bt.B.ppc64 plot.c_omp.ppc64
0.62 0.62 00 plot.c.ppc64 plot.c_omp.ppc64
0.54 0.54 00 plot.c_omp.ppc64 plot.cg.B.ppc64
0.03 -0.03 51 plot.c_omp.ppc64 plot.ep.B.ppc64
0.64 0.64 00 plot.c_omp.ppc64 plot.f.ppc64
0.06 -0.06 19 plot.c_omp.ppc64 plot.ft.B.ppc64
0.20 -0.20 00 plot.c_omp.ppc64 plot.lu.B.ppc64
0.56 -0.56 00 plot.c_omp.ppc64 plot.mg.B.ppc64
0.44 -0.44 00 plot.c_omp.ppc64 plot.sp.B.ppc64

Page 69:

HPL

• HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.

• xhpl is run 10 times on each node, then the best result from each node is taken to be used to compare consistency. Each result is also tested for accuracy.

• NOTE: nodes 215 and 224 were excluded from this test. node215 would not boot up. node224 only had 1.5GB of RAM. This test used 1.8GB RAM.

Page 70:

HPL validation test

• node483 failed to pass any test. Try replacing memory, processors, and then system board in that order.

• Command output:

# cd ~/bench/hpl/output.raw.single
# ../checkresults
checking xhpl.ppc64.node483.1...failed
checking xhpl.ppc64.node483.10...failed
checking xhpl.ppc64.node483.2...failed
checking xhpl.ppc64.node483.3...failed
checking xhpl.ppc64.node483.4...failed
checking xhpl.ppc64.node483.5...failed
checking xhpl.ppc64.node483.6...failed
checking xhpl.ppc64.node483.7...failed
checking xhpl.ppc64.node483.8...failed
checking xhpl.ppc64.node483.9...failed

Page 71:

HPL consistency and correlation

# cd ~/bench/hpl/output
# ../anal

HPL results
benchmark nodes low high % mean median std dev
xhpl.ppc64 482 11.62 12.04 3.61 11.89 11.89 0.08

Page 72:

Ping-Pong

• Ping-Pong is a simple benchmark that measures latency and bandwidth for different message sizes.
• Ping-Pong benchmarks should be run for each network (e.g. Myrinet and GigE).  First run the serial Ping-Pongs and then the parallel Ping-Pongs.  The purpose of the serial benchmarks is to find any single node or set of nodes that is not performing as well as the other nodes. The purpose of the parallel benchmarks is to help calculate bisectional bandwidth and test that system-wide MPI jobs can be run.
• There are four patterns, 3 deterministic and 1 random. The purpose of all four is to help isolate poor performing nodes and possibly poor performing routes or trunks (e.g. a bad uplink cable).
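A sketch of how such pairing patterns can be generated; the exact pairings STAB uses are an assumption here, inferred from the pattern names (sort pairs neighbors, cut pairs the first half against the second, fold pairs first with last):

```python
# Sketch of ping-pong pairing patterns; the exact STAB pairings are assumed.
import random

def pairings(nodes, pattern):
    n = len(nodes)  # assumed even
    if pattern == "sort":     # neighbors: pairs likely share a switch
        return [(nodes[i], nodes[i + 1]) for i in range(0, n, 2)]
    if pattern == "cut":      # first half vs. second half: crosses the fabric
        return [(nodes[i], nodes[i + n // 2]) for i in range(n // 2)]
    if pattern == "fold":     # first with last, second with second-to-last
        return [(nodes[i], nodes[n - 1 - i]) for i in range(n // 2)]
    if pattern == "shuffle":  # random pairs: exercises arbitrary routes
        mixed = random.sample(nodes, n)
        return [(mixed[i], mixed[i + 1]) for i in range(0, n, 2)]
    raise ValueError(pattern)

nodes = ["node%03d" % i for i in range(1, 9)]
print(pairings(nodes, "sort"))  # node001-node002, node003-node004, ...
print(pairings(nodes, "cut"))   # node001-node005, node002-node006, ...
print(pairings(nodes, "fold"))  # node001-node008, node002-node007, ...
```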

Page 73:

Ping-Pong

• Sorted

Page 74:

Ping-Pong

• Cut

Page 75:

Ping-Pong

• Fold

Page 76:

Myrinet consistency check

# cd ~/bench/PMB2.2.1/output.gm
# ../anal spp sort bw
spp sort bw results
bytes pairs low high % mean median std dev
1 242 0.08 0.11 37.50 0.11 0.11 0.00
...
4194304 242 87.62 234.93 168.12 232.49 233.43 9.38
# ../anal spp cut bw
...
4194304 242 87.13 234.99 169.70 232.16 233.15 9.40
# ../anal spp fold bw
...
4194304 242 87.17 235.04 169.63 232.13 233.16 9.39
# ../anal spp shuffle bw
...
4194304 242 87.61 234.77 167.97 232.14 232.70 9.36

For the 4194304-byte results, the mean and median are very close together, and also close to the high, indicating that one or a few nodes have poor performance.

Page 77:

Myrinet consistency

# head -5 plot.spp.*.bw.4194304
==> plot.spp.cut.bw.4194304 <==
87.13 node164-node406
230.95 node107-node349
231.36 node147-node389
231.41 node091-node333
231.43 node045-node287

==> plot.spp.fold.bw.4194304 <==
87.17 node079-node406
227.58 node214-node271
229.34 node010-node475
231.40 node091-node394
231.48 node177-node308

==> plot.spp.shuffle.bw.4194304 <==
87.61 node024-node406
231.47 node091-node166
231.51 node227-node003
231.55 node110-node293
231.57 node013-node231

==> plot.spp.sort.bw.4194304 <==
87.62 node405-node406
228.64 node039-node040
231.64 node231-node232
231.66 node091-node092
231.66 node481-node482

Page 78:

Bisectional Bandwidth

ppp cut bw results
bytes pairs low high % mean median std dev
4194304 242 60.28 233.44 287.26 138.94 137.92 36.87

Demonstrated BW = 242 * 138.94 = 33623.48 MB/s ~= 32.8 GB/s (262.4 Gb/s)
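The demonstrated-bandwidth arithmetic above is just the pair count times the mean per-pair bandwidth, converted with 1 GB/s taken as 1024 MB/s:

```python
# Check of the demonstrated-bandwidth arithmetic: pairs * mean MB/s per pair.
pairs, mean_mbs = 242, 138.94
total_mbs = pairs * mean_mbs
total_gbs = total_mbs / 1024          # MB/s -> GB/s

print(round(total_mbs, 2))  # 33623.48 MB/s
print(round(total_gbs, 1))  # 32.8 GB/s (x8 gives the slide's ~262.4 Gb/s)
```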

Page 79:

IP consistency check

# cd ~/bench/PMB2.2.1/output.ip
# ../anal spp sort bw
spp sort bw results
bytes pairs low high % mean median std dev
1 241 0.01 0.01 0.00 0.01 0.01 0.00
...
4194304 241 60.76 101.76 67.48 99.91 100.26 3.53
# ../anal spp cut bw
...
4194304 241 45.54 89.88 97.36 86.96 88.60 6.58
# ../anal spp fold bw
...
4194304 241 50.91 100.60 97.60 87.33 88.48 6.30
# ../anal spp shuffle bw
...
4194304 241 49.31 100.71 104.24 87.26 88.53 6.72

Page 80:

IP consistency check

• The sorted pair output will be the easiest to analyze for problems, since each pair is restricted to a single switch within each BladeCenter. The other tests run across the network and may have higher variability.

• Running the following command reveals that the pairs at the top of the list performed poorly:

# head -5 plot.spp.sort.bw.4194304
==> plot.spp.sort.bw.4194304 <==
60.76 node025-node026
68.97 node023-node024
79.97 node325-node326
98.83 node067-node068
98.85 node071-node072
98.94 node337-node338
98.98 node175-node176
99.02 node031-node032
99.11 node401-node402
99.16 node085-node086

• This may or may not be a problem. Uplink performance will be less than 60 MB/s per node because the BladeCenter can at best provide an average of 35 MB/s per blade (with a 4-cable trunk). Many Myrinet-based clusters only use GigE for management and NFS; both have greater bottlenecks elsewhere.

• You may want to check the switch logs and consider reseating the switches and blades.
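The ~35 MB/s-per-blade figure follows directly from the trunk oversubscription. A sketch of that arithmetic, assuming a 4 x 1 Gb trunk shared by a full 14-blade chassis (decimal Gb, 8 bits per byte):

```python
def per_blade_uplink_mb(trunk_links, link_gbit, blades):
    """Average uplink bandwidth per blade when every blade sends
    off-chassis at the same time."""
    trunk_mb = trunk_links * link_gbit * 1000 / 8  # Gb/s -> MB/s
    return trunk_mb / blades

# 4-cable GigE trunk, 14 blades per BladeCenter chassis
print(round(per_blade_uplink_mb(4, 1, 14), 1))
```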

Page 81:

IP consistency check

Running the following command reveals that there may be an uplink problem with the nodes in BC #2, i.e. node015-node028.

# head -20 plot.spp.cut.bw.4194304 plot.spp.fold.bw.4194304 plot.spp.shuffle.bw.4194304
==> plot.spp.cut.bw.4194304 <==
45.54 node025-node268
50.47 node026-node269
54.85 node024-node267
56.27 node002-node245
57.08 node022-node265
58.50 node023-node266
62.74 node020-node263
69.37 node016-node259
69.48 node015-node258
69.56 node021-node264
69.73 node018-node261
71.06 node028-node271
71.42 node019-node262
71.45 node042-node285
72.06 node027-node270
72.31 node017-node260
84.69 node224-node465
86.40 node225-node466
87.10 node001-node244
87.54 node084-node327

Page 82:

IP consistency check

==> plot.spp.fold.bw.4194304 <==
50.91 node026-node459
51.72 node023-node462
55.32 node002-node483
58.39 node025-node460
60.24 node024-node461
65.66 node018-node467
68.09 node022-node463
68.28 node020-node465
69.96 node021-node464
70.23 node015-node470
70.27 node016-node469
70.61 node019-node466
71.12 node027-node458
71.50 node017-node468
74.35 node028-node457
84.75 node235-node252
85.02 node236-node251
85.79 node237-node250
85.94 node238-node249
87.19 node118-node367

Page 83:

IP consistency check

==> plot.spp.shuffle.bw.4194304 <==
49.31 node001-node126
49.46 node029-node026
51.25 node024-node063
56.34 node274-node025
58.14 node023-node100
68.00 node019-node248
68.67 node443-node015
68.88 node018-node228
69.29 node020-node091
69.38 node028-node240
70.68 node022-node102
70.80 node027-node106
71.63 node021-node423
71.96 node291-node017
72.52 node460-node411
72.66 node016-node040
78.61 node031-node011
83.85 node041-node050
84.82 node407-node393
85.08 node420-node399

The cut, fold, and shuffle tests run from BC to BC, and the nodes in BC #2 repeatedly show up. Consider checking the uplink cables, the uplink ports, and the BC switch.

Page 84:

Bisectional Bandwidth

ppp cut bw results
bytes    pairs  low   high   %       mean  median  std dev
4194304  241    6.18  17.36  180.91  7.95  7.28    1.82

Demonstrated BW = 241 * 7.95 = 1915.95 MB/s ~= 1.87 GB/s (14.96 Gb/s)

Page 85:

NAS MPI (8 node, 2ppn)

• The NAS Parallel Benchmarks (NPB) are a small set of programs designed to help evaluate the performance of parallel supercomputers. The benchmarks, which are derived from computational fluid dynamics (CFD) applications, consist of five kernels and three pseudo-applications.

• bt.B, cg.B, ep.B, ft.B, is.B, lu.B, mg.B, and sp.B are run 10 times on each set of 8 unique nodes using 2 different node set methods: sorted and shuffle.

– Sorted. Sets of 8 nodes are selected from a sorted list and assigned adjacently, e.g. node001-node008, node009-node016, etc. This is used to check consistency within the same set of nodes.

– Shuffle. Sets of 8 nodes are selected from a shuffled list. Nodes are reshuffled between runs.

• Both sorted and shuffle sets are run in parallel, i.e. all the sorted sets of 8 are run at the same time, then all the shuffle sets are run at the same time.

• NOTE: node215 and node446 were not included in the sorted and shuffle tests. node215 failed to boot; node446 failed to start up Myrinet.
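The sorted and shuffle set selection described above can be sketched as follows. The node-name format matches the listings in this document; the functions themselves are illustrative, not the actual harness:

```python
import random

def sorted_sets(nodes, size=8):
    """Adjacent sets from a sorted list: node001-node008, node009-node016, ..."""
    nodes = sorted(nodes)
    return [nodes[i:i + size] for i in range(0, len(nodes) - size + 1, size)]

def shuffle_sets(nodes, size=8, seed=None):
    """Sets drawn from a shuffled list; reshuffle between runs
    by varying the seed."""
    nodes = list(nodes)
    random.Random(seed).shuffle(nodes)
    return [nodes[i:i + size] for i in range(0, len(nodes) - size + 1, size)]

nodes = [f"node{i:03d}" for i in range(1, 481)]
print(sorted_sets(nodes)[0])  # the first adjacent set of 8
```

All sets in a method run concurrently, so a slow node drags down only its own set under sorted selection, but a different random set on every shuffle run; that is what makes the two methods complementary.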

Page 86:

NAS MPI verification

Verification command output:

# cd ~/bench/NPB3.2/NPB3.2-MPI/output.raw.shuffle

This command finds the failed results and places the names of the failed result files into the file ../failed:

# ../checkresults ../failed

This command will find the common nodes in all failed results in the file ../failed and sort them by number of occurrences (occurrences are counted by processor, not node):

# xcommon ../failed | tail
node395 12
node440 12
node056 12
node464 12
node043 12
node429 14
node297 14
node391 20
node174 22
node483 96
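xcommon's tally can be reproduced with a Counter. A minimal sketch, assuming each failed result contributes one entry per processor slot (the input format here is invented for illustration):

```python
from collections import Counter

def common_nodes(failed_runs):
    """Count, per processor slot, how often each node appears
    across the failed results."""
    counts = Counter()
    for slots in failed_runs:  # one list of node slots per failed run
        counts.update(slots)
    return counts

# Toy data: node483 is in every failed run, twice per run (2 ppn)
failed = [["node483", "node483", "node174", "node174"],
          ["node483", "node483", "node391", "node391"]]
for node, n in sorted(common_nodes(failed).items(), key=lambda kv: kv[1]):
    print(node, n)
```

A node that dominates the tail of this list, like node483 above with 96 occurrences, is the likely culprit; the low, uniform counts are innocent bystanders that happened to share sets with it.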

Page 87:

NAS MPI Consistency check

• Consistency check command output:

# cd ~/bench/NPB3.2/NPB3.2-MPI/output.raw.shuffle
# ../analm

NPB MPI
benchmark  runs  low      high      %       mean      median    std dev
bt.B.16    600   9089.46  10415.15  14.58   10204.94  10217.94  143.14
cg.B.16    600   1095.60  1685.61   53.85   1570.48   1575.38   57.70
ep.B.16    600   155.81   160.64    3.10    158.48    158.37    0.59
ft.B.16    600   2102.39  3232.49   53.75   3052.71   3066.45   130.37
is.B.16    600   87.06    185.29    112.83  155.97    154.39    12.94
lu.B.16    600   5069.36  5892.62   16.24   5529.00   5531.17   111.84
mg.B.16    600   3265.89  3898.99   19.39   3737.80   3739.77   74.91
sp.B.16    600   2156.46  2404.05   11.48   2340.00   2340.05   26.89

Page 88:

NAS MPI Consistency

The leading cause of variability on a stable system is switch contention. The only way to determine what is normal is to run the same set of benchmarks multiple times on an isolated set of stable nodes (nodes that passed the single-node tests) with the rest of the switch not in use. I did not have time to run the parallel tests serially in isolation, but this is close:

# cd ~/bench/NPB3.2/NPB3.2-MPI/output.raw.sort
# ../analm $(nr -l node001-node080)

NPB MPI
benchmark  runs  low       high      %      mean      median    std dev
bt.B.16    100   10025.30  10266.00  2.40   10129.42  10120.54  44.30
cg.B.16    100   1678.27   1787.76   6.52   1714.04   1712.43   15.39
ep.B.16    100   150.45    160.02    6.36   158.49    158.38    1.03
ft.B.16    100   3248.41   3694.40   13.73  3563.50   3575.43   81.22
is.B.16    100   159.31    168.14    5.54   163.91    164.22    1.98
lu.B.16    100   5156.19   5522.79   7.11   5346.95   5350.06   87.51
mg.B.16    100   3491.76   3685.78   5.56   3613.65   3614.44   37.25
sp.B.16    100   2259.08   2308.16   2.17   2289.66   2290.30   9.55

The above results are from the first 80 nodes run sorted. Each set of 8 nodes was isolated to a single Myrinet line card, reducing switch contention (however, every 2 sets of nodes did share a single line card). Also, to avoid possible variability due to memory performance, I limited the report to the first 80 nodes.

Page 89:

NAS MPI Distribution

Page 90:
Page 91:

NAS MPI Correlation BT BW vs. Perf

Page 92:

NAS MPI Distribution w/o node406

Page 93:
Page 94:

NAS MPI Correlation BT STREAM vs. Perf

Page 95:

NAS MPI Correlation BT STREAM vs. Perf

$ CPLOTOPTS="-dy ," findc plot* | grep plot.c.ppc64

0.09 -0.09 05 plot.c.ppc64 plot.cg.B.16
0.00 0.00 100 plot.c.ppc64 plot.ep.B.16
0.14 -0.14 00 plot.c.ppc64 plot.ft.B.16
0.22 -0.22 00 plot.c.ppc64 plot.is.B.16
0.21 -0.21 00 plot.c.ppc64 plot.lu.B.16
0.41 -0.41 00 plot.c.ppc64 plot.mg.B.16
0.42 -0.42 00 plot.c.ppc64 plot.sp.B.16
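findc itself is not documented here, but its first column reads like a correlation coefficient. The general idea, a Pearson r between a per-node metric and per-node benchmark results, can be sketched as:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two per-node series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: a memory-bound benchmark tracking memory bandwidth
stream = [3100, 3150, 3200, 3250, 3300]
sp_b   = [2290, 2300, 2320, 2330, 2350]
print(round(pearson_r(stream, sp_b), 2))
```

In the table above, ep (embarrassingly parallel, little memory pressure) shows zero correlation while mg and sp correlate most strongly with the plot.c.ppc64 metric, which is consistent with the memory-configuration finding in the summary.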

Page 96:

HPL MPI

• HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.

• xhpl is run 10 times (15 times for sorted) on each set of 8 unique nodes using 2 different node set methods: sorted and shuffle.

– Sorted. Sets of 8 nodes are selected from a sorted list and assigned adjacently, e.g. node001-node008, node009-node016, etc. This is used to check consistency within the same set of nodes.

– Shuffle. Sets of 8 nodes are selected from a shuffled list. Nodes are reshuffled between runs.

• Both sorted and shuffle sets are run in parallel, i.e. all the sorted sets of 8 are run at the same time, then all the shuffle sets are run at the same time.

Page 97:

HPL MPI verification

# cd ~/bench/hpl/output.raw.shuffle

This command finds the failed results and places the names of the failed result files into the file ../failed:

# ../checkresults ../failed

This command will find the common nodes in all failed results in the file ../failed and sort them by number of occurrences (occurrences are counted by processor, not node):

# xcommon ../failed | tail
node073 2
node121 2
node090 2
node406 2
node308 2
node276 2
node103 2
node199 2
node435 4
node483 20

Page 98:

HPL MPI consistency

# cd ~/bench/hpl/output.raw.shuffle
# ../analm
HPL results
benchmark      runs  low    high   %      mean   median  std dev
xhpl.16.15000  600   51.14  60.66  18.62  59.31  59.48   1.00
xhpl.16.30000  600   69.34  78.48  13.18  77.16  77.35   1.08

Page 99:

HPL MPI correlations

Page 100:

Summary

• node483 has accuracy issues.

• node406 has weak Myrinet performance.

• BC2 has a switch or uplink issue.

• Nodes 1-84 have a different memory configuration that does correlate with application performance.

• Applications run at large scales may experience no performance anomalies.

Page 101:

What is SCAB?

• SCalability Analysis of Benchmarks

• The purpose of the SCAB HOWTO is to verify that the cluster you just built can actually do work at scale. This can be accomplished by running a few industry-accepted benchmarks.

• The STAB/SCAB tools can plot the scalability results for visual analysis.

• The STAB HOWTO should be completed first to rule out any inconsistencies that may appear as scaling issues.

Page 102:

The Benchmarks

• PMB (Pallas MPI Benchmark)

• NPB (NAS Parallel Benchmark)

• HPL (High Performance Linpack)

Page 103:

PMB

• The Pallas MPI Benchmark (PMB) provides a concise set of benchmarks targeted at measuring the most important MPI functions.

• NOTE: Pallas has been acquired by Intel. Intel has released the IMB (Intel MPI Benchmark). The IMB is a minor update of the PMB. The IMB was not used because it failed to execute properly with all of the MPI implementations that I tested.

• IMPORTANT:  Consistent PMB Ping-Ping should be achieved before running this benchmark (STAB Lab).  Unresolved inconsistencies in the interconnect may appear as scaling issues.

• The main purpose of this test is as a diagnostic to answer the following questions:
– Are my MPI implementation's basic functions complete?
– Does my MPI implementation scale?

Page 104:

PMB

• Example plot from a larger BC cluster.

• Very impressive.  For the Sendrecv benchmark this cluster scales from 2 nodes to 240!  Could this be a non-blocking GigE configuration?  Another benchmark can help answer that question.

Page 105:

PMB

• Example plot from a larger BC cluster.

• Quite revealing. The sorted benchmark has the 4M message size performing at ~115 MB/s for all node counts, but shuffled it falls gradually to ~10 MB/s as the number of nodes increases. Why?

Page 106:

PMB

• This cluster is partitioned into 14 nodes per BladeCenter chassis. Each chassis has a GigE switch with only 4 uplinks; 3 of the 4 uplinks are bonded together to form a single 3 Gbit uplink to a stacked SMC GigE core switch. Assuming no blocking within the core switch, this solution blocks at 14:3.

• The Sendrecv benchmark is based on MPI_Sendrecv; the processes form a periodic communication chain. Each process sends to the right and receives from the left neighbor in the chain.
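The chain is just modular rank arithmetic. A no-MPI sketch of who each rank talks to:

```python
def chain_neighbors(rank, size):
    """Periodic MPI_Sendrecv chain: send to the right neighbor,
    receive from the left, wrapping around at the ends."""
    return (rank - 1) % size, (rank + 1) % size

size = 6
for rank in range(size):
    left, right = chain_neighbors(rank, size)
    print(f"rank {rank}: recv <- {left}, send -> {right}")
```

With a sorted rank-to-node mapping, most of these neighbor links stay inside a chassis; with a shuffled mapping, almost all of them must cross the oversubscribed uplinks, which explains the difference between the two plots.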

Page 107:

PMB

• Based on the previous illustration it is easy to see why the sorted list performed so well. Most of the traffic was isolated to well-performing local switches, and the jump from chassis to chassis through the SMC core switch only requires the bandwidth of a single link (1 Gb full duplex).

• The shuffled list has small odds that a node's left neighbor (receive from) and its right neighbor (send to) will be on the same switch. This was illustrated in the second plot.

• Moral of the story.
– Don't trust interconnect vendors that do not provide the node list.
– Ask for sorted and shuffled benchmarks.
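The "small odds" above can be put into numbers. A sketch under the simplifying assumption of a uniform shuffle with 14 nodes per chassis; the 240-node count is taken from the Sendrecv example earlier:

```python
def p_neighbor_local(nodes_per_chassis, total_nodes):
    """Probability that one chain neighbor of a given node lands in
    the same chassis after a uniform shuffle."""
    return (nodes_per_chassis - 1) / (total_nodes - 1)

p = p_neighbor_local(14, 240)
print(f"{p:.1%} chance per neighbor of staying on the local switch")
```

So on a shuffled 240-node run, roughly 19 out of every 20 chain links must cross the 14:3 oversubscribed uplinks.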

Page 108:

PMB Myrinet GM

Page 109:

PMB Myrinet GM

Page 110:

PMB Myrinet MX

Page 111:

PMB Myrinet MX

Page 112:

PMB IB

Page 113:

PMB IB

Page 114:

Questions w/ Answers

• Egan Ford, egan@us.ibm.com, egan@sense.net
