Solaris/Linux Performance Measurement and Tuning
Adrian Cockcroft, acockcroft@netflix.com
2008



Abstract

• This course focuses on the measurement sources and tuning parameters available in Unix and Linux, including TCP/IP measurement and tuning, complex storage subsystems, and with a deep dive on advanced Solaris metrics such as microstates and extended system accounting.

• The meaning and behavior of metrics is covered in detail. Common fallacies, misleading indicators, sources of measurement error and other traps for the unwary will be exposed.

• Free tools for Capacity Planning are covered in detail by this presenter in a separate Usenix Workshop.



Sources

• Adrian Cockcroft
  – Sun Microsystems 1988-2004, Distinguished Engineer
  – eBay Research Labs 2004-2007, Distinguished Engineer
  – Netflix 2007, Director - Web Engineering
  – Note: I am a Netflix employee, but this material does not refer to and is not endorsed by Netflix. It is based on the author's work over the last 20 years.

• CMG Papers and Sunday Workshops by the author - see www.cmg.org
  – Unix CPU Time Measurement Errors - (Best paper 1998)
  – TCP/IP Tutorial - Sunday Workshop
  – Capacity Planning - Sunday Workshop
  – Grid Tutorial - Sunday Workshop
  – Capacity Planning with Free Tools - Sunday Workshop

• Books by the author
  – Sun Performance and Tuning, Prentice Hall, 1994, 1998 (2nd Ed)
  – Resource Management, Prentice Hall, 2000
  – Capacity Planning for Internet Services, Prentice Hall, 2001


Contents

• Capacity Planning Definitions

• Metric collection interfaces

• Process - microstate and extended accounting

• CPU - measurement issues

• Network - Internet Servers and TCP/IP

• Disks - iostat, simple disks and RAID

• Memory

• Quick tips and Recipes

• References


Definitions


Capacity Planning Definitions

• Capacity

– Resource utilization and headroom

• Planning

– Predicting future needs by analyzing historical data and modeling future scenarios

• Performance Monitoring

– Collecting and reporting on performance data

• Unix/Linux (apologies to users of OSX, HP-UX, AIX etc.)

– Emphasis on Solaris since it is a comprehensively instrumented and full featured Unix

– Linux is mostly a subset


Measurement Terms and Definitions

• Bandwidth - gross work per unit time [unattainable]

• Throughput - net work per unit time

• Peak throughput - at maximum acceptable response time

• Response time - time to complete a unit of work including waiting

• Service time - time to process a unit of work after waiting

• Queue length - number of requests waiting

• Utilization - busy time relative to elapsed time [can be misleading]

• Rule of thumb: Estimate 95th percentile response time as three times mean response time
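The three-times-mean rule of thumb can be sanity-checked: for exponentially distributed response times the 95th percentile is exactly -ln(0.05), roughly 3.0 times the mean. A minimal illustrative sketch (pure Python, function names are mine):

```python
import math
import random

def p95_over_mean_exponential():
    """Analytic ratio of the 95th percentile to the mean for an
    exponential distribution: -ln(0.05) ~= 3.0."""
    return -math.log(0.05)

def p95_over_mean_sampled(mean=100.0, n=100_000, seed=42):
    """Empirical check: sample exponential response times, take the
    95th percentile, and compare it to the sample mean."""
    rng = random.Random(seed)
    samples = sorted(rng.expovariate(1.0 / mean) for _ in range(n))
    p95 = samples[int(0.95 * n)]
    return p95 / (sum(samples) / n)
```

Heavier-tailed response time distributions push the ratio above 3, which is why this is only an estimate, not a guarantee.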


Capacity Planning Requirements

• We care about CPU, Memory, Network and Disk resources, and Application response times

• We need to know how much of each resource we are using now, and will use in the future

• We need to know how much headroom we have to handle higher loads

• We want to understand how headroom varies, and how it relates to application response times and throughput

• We want to be able to find the bottleneck in an under-performing system


Metrics


Measurement Data Interfaces

• Several generic raw access methods
  – Read the kernel directly
  – Structured system data
  – Process data
  – Network data
  – Accounting data
  – Application data

• Command based data interfaces
  – Scrape data from vmstat, iostat, netstat, sar, ps
  – Higher overhead, lower resolution, missing metrics

• Data available is platform and release specific either way


Reading kernel memory - kvm

• The only way to get data in very old Unix variants

• Use kernel namelist symbol table and open /dev/kmem

• Solaris wraps up interface in kvm library

• Advantages

– Still the only way to get at some kinds of data

– Low overhead, fast bulk data capture

• Disadvantages

– Too much intimate implementation detail exposed

– No locking protection to ensure consistent data

– Highly non-portable, unstable over releases and patches

– Tools break when kernel moves between 32 and 64bit address support


Structured Kernel Statistics - kstat

• Solaris 2 introduced kstat and extended usage in each release

• Used by Solaris 2 vmstat, iostat, sar, network interface stats, etc.

• Advantages

– The recommended and supported Solaris metric access API

– Does not require setuid root commands to access for reads

– Individual named metrics stable over releases

– Consistent data using locking, but low overhead

– Unchanged when kernel moves to 64bit address support

– Extensible to add metrics without breaking existing code

• Disadvantages

– Somewhat complex hierarchical kstat_chain structure

– State changes (device online/offline) cause kstat_chain rebuild
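Besides the kstat library API, the data can be scraped from the kstat(1M) command, which with -p prints one module:instance:name:statistic pair per line. A hedged sketch of a parser for that output (the sample text and function name are mine, not from the deck):

```python
def parse_kstat_p(text):
    """Parse `kstat -p` style output.

    Each line is "module:instance:name:statistic<TAB>value"; values may
    be numeric or strings, so numbers are converted where possible.
    """
    stats = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("\t")
        module, instance, name, statistic = key.split(":", 3)
        try:
            value = float(value) if "." in value else int(value)
        except ValueError:
            pass  # leave string-valued statistics as-is
        stats[(module, int(instance), name, statistic)] = value
    return stats

# Invented sample output for illustration:
sample = "cpu_stat:0:cpu_stat0:user\t186991\ncpu_stat:0:cpu_stat0:kernel\t58689\n"
```

Scraping the command costs a fork/exec per sample, so the C library interface is still preferred for frequent collection.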


Kernel Trace - TNF, Dtrace, ktrace

• Solaris, Linux, Windows and other Unixes have similar features

– Solaris has TNF probes and prex command to control them

– User level probe library for hires tracepoints allows instrumentation of multithreaded applications

– Kernel level probes allow disk I/O and scheduler tracing

• Advantages

– Low overhead, microsecond resolution

– I/O trace capability is extremely useful

• Disadvantages

– Too much data to process with simple tracing capabilities

– Trace buffer can overflow or cause locking issues

• Solaris 10 DTrace is a quite different beast! Much more flexible


DTrace – Dynamic Tracing

• One of the most exciting new features in Solaris 10, rave reviews

• Book: "Solaris Performance and Tools" by Richard McDougall and Brendan Gregg

• Advantages

– No overhead when it is not in use
– Low overhead probes can be put anywhere/everywhere
– Trace data is correlated and filtered at source, get exactly the data you want, very sophisticated data providers included
– Bundled, supported, designed to be safe for production systems

• Disadvantages

– Solaris specific, but being ported to BSD/Linux
– No high level tools support yet
– Yet another scripting language to learn – somewhat similar to “awk”


Hardware counters

• Solaris cpustat for X86 and UltraSPARC pipeline and cache counters

• Solaris busstat for server backplanes and I/O buses, corestat for multi-core systems

• Intel Trace Collector, Vampir for Linux

• Most modern CPUs and systems have counters

• Advantages

– See what is really happening, more accurate than kernel stats

– Cache usage useful for tuning code algorithms

– Pipeline usage useful for HPC tuning for megaflops

– Backplane and memory bank usage useful for database servers

• Disadvantages

– Raw data is confusing, lots of architectural background info needed

– Most tools focus on developer code tuning


Configuration information

• Configuration data comes from too many sources!

– Solaris device tree displayed by prtconf and prtdiag

– Solaris 8 adds dynamic configuration notification device picld

– SunVTS component test system has vtsprobe to get config

– SCSI device info using iostat -E in Solaris

– Logical volume info from product specific vxprint and metastat

– Hardware RAID info from product specific tools

– Critical storage config info must be accessed over ethernet…

• It is very hard to combine all this data!

• DMTF CIM objects try to address this, but no-one seems to use them…

• Free tool - Config Engine: http://www.cfengine.org


Application Instrumentation Examples

• Oracle V$ Tables – detailed metrics used by many tools

• ARM standard instrumentation

• Custom do-it-yourself and log file scraping

• Advantages

– Focussed application specific information

– Business metrics needed to do real capacity planning

• Disadvantages

– No common access methods

– ARM is a collection interface only, vendor specific tools, data

– Very few applications are instrumented, even fewer have support from performance tools vendors


Kernel values, tunables and defaults

• There is often far too much emphasis on kernel tweaks
  – There really are few “magic bullet” tunables
  – It rarely makes a significant difference

• Fix the system configuration or tune the application instead!

• Very few adjustable components
  – “No user serviceable parts inside”
  – But Unix has so much history people think it is like a 70’s car
  – Solaris really is dynamic, adaptive and self-tuning
  – Most other “traditional Unix” tunables are just advisory limits
  – Tweaks may be workarounds for bugs/problems
  – Patch or OS release removes the problem - remove the tweak

• Solaris Tunable Parameters Reference Manual (if you must…)
  – http://docs.sun.com/app/docs/doc/817-0404


Processes


Process based data - /proc

• Used by ps, proctool and debuggers, pea.se, proc(1) tools on Solaris

• Solaris and Linux both have /proc/pid/metric hierarchy

• Linux also includes system information in /proc rather than kstat

• Advantages

– The recommended and supported process access API

– Metric data structures reasonably stable over releases

– Consistent data using locking

– Solaris microstate data provides accurate process state timers

• Disadvantages

– High overhead for open/read/close for every process

– Linux reports data as ascii text, Solaris as binary structures
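To illustrate the ascii-vs-binary point: Linux packs per-process data into one text line in /proc/<pid>/stat, with the command name in parentheses (and it may itself contain spaces or parentheses, which trips up naive splitting). A minimal parser sketch; the sample line and field subset are my own illustration:

```python
def parse_linux_stat(line):
    """Parse a Linux /proc/<pid>/stat line into a few useful fields.

    utime/stime are CPU times in clock ticks (usually 100Hz; see
    sysconf(_SC_CLK_TCK)).
    """
    # comm is field 2, wrapped in parens; split around the LAST ')'
    # so embedded spaces or parens in the command name are handled.
    lparen = line.index("(")
    rparen = line.rindex(")")
    pid = int(line[:lparen].strip())
    comm = line[lparen + 1:rparen]
    rest = line[rparen + 1:].split()
    state = rest[0]                          # field 3 overall
    utime, stime = int(rest[11]), int(rest[12])  # fields 14 and 15 overall
    return {"pid": pid, "comm": comm, "state": state,
            "utime": utime, "stime": stime}

# Invented sample line, in the documented field order:
sample = ("1234 (my prog) S 1 1234 1234 0 -1 4194304 500 0 0 0 "
          "42 17 0 0 20 0 1 0 100 10485760 250 18446744073709551615")
```

On a live Linux system the same function can be fed `open(f"/proc/{pid}/stat").read()`; Solaris instead returns binary C structures from files like /proc/<pid>/psinfo.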


Tracing and profiling

• Tracing Tools
  – truss - shows system calls made by a process
  – sotruss / apitrace - shows shared library calls
  – prex - controls TNF tracing for user and kernel code

• Profiling Tools
  – Compiler profile feedback using -xprofile=collect and use
  – Sampled profile relink using -p and prof/gprof
  – Function call tree profile recompile using -pg and gprof
  – Shared library call profiling setenv LD_PROFILE and gprof

• Accurate CPU timing for process using /usr/proc/bin/ptime

• Microstate process information using pea.se and pw.se

10:40:16 name lwmx pid ppid uid usr% sys% wait% chld% size rss pf
is_cachemgr 5 176 1 0 1.40 0.19 0.00 0.00 16320 11584 0.0
jre 1 17255 3184 5743 11.80 0.19 0.00 0.00 178112 110336 0.0
sendmail 1 16751 1 0 1.01 0.43 0.00 0.43 18624 16384 0.0
se.sparc.5.6 1 16741 1186 9506 5.90 0.47 0.00 0.00 16320 14976 0.0
imapd 1 16366 198 5710 6.88 1.09 1.02 0.00 34048 29888 0.1
dtmail 10 16364 9070 5710 0.75 1.12 0.00 0.00 102144 94400 0.0


Accounting Records

• Standard Unix System V Accounting - acct
  – Tiny, incomplete (no process id!), low resolution, no overhead!

• Solaris Extended System and Network Accounting - exacct
  – Flexible, overly complex, detailed data
  – Interval support for recording long running processes
  – No overhead! 100% capture ratio for infrequent samples!


Extracct for Solaris

• extracct tool to get extended acct data out in a useful form

• See http://perfcap.blogspot.com for description and get code from http://www.orcaware.com/orca/pub/extracct

• Pre-compiled code for Solaris SPARC and x86, Solaris 8 to 10
  – Useful data is logged in regular columns for easy import
  – Includes low overhead network accounting config file for TCP flows
  – Interval accounting option to force all processes to cut records
  – Automatic log filename generation and clean switching
  – Designed to run directly as a cron job, useful today

• More work needed to interface output to SE toolkit and Orca


Example Extracct Output

# ./extracct
Usage: extracct [-vwr] [ file | -a dir ]

-v: verbose
-w: wracct all processes first
-r: rotate logs
-a dir: use acctadm.conf to get input logs, and write output files to dir

The usual way to run the command will be from cron as shown

0 * * * * /opt/exdump/extracct -war /var/tmp/exacct > /dev/null 2>&1
2 * * * * /bin/find /var/adm/exacct -ctime +7 -exec rm {} \;

This also shows how to clean up old log files; I only delete the binary files in this example, and I created /var/tmp/exacct to hold the text files. The process data in the text file looks like this:

timestamp locltime duration procid ppid uid usr sys majf rwKB vcxK icxK sigK sycK arMB mrMB command
1114734370 17:26:10 0.0027 16527 16526 0 0.000 0.002 0 0.53 0.00 0.00 0.00 0.1 0.7 28.9 acctadm
1114734370 17:26:10 0.0045 16526 16525 0 0.000 0.001 0 0.00 0.00 0.00 0.00 0.1 1.1 28.9 sh
1114734370 17:26:10 0.0114 16525 8020 0 0.001 0.005 0 1.71 0.00 0.00 0.00 0.3 1.0 28.9 exdump
1109786959 10:09:19 -1.0000 1 0 0 4.311 3.066 96 47504.69 49.85 0.18 0.34 456.2 0.9 1.0 init
1109786959 10:09:19 -1.0000 2 0 0 0.000 0.000 0 0.00 0.00 0.00 0.00 0.0 0.0 0.0 pageout


How busy is that system?

What would you say if you were asked:

A: I have no idea…
A: 10%
A: Why do you want to know?
A: I’m sorry, you don’t understand your question….


Headroom Estimation

• CPU Capacity
  – Relatively easy to figure out

• Network Usage
  – Use bytes not packets/s

• Memory Capacity
  – Tricky - easier in Solaris 8

• Disk Capacity
  – Can be very complex


Headroom

• Headroom is available usable resources

– Total Capacity minus Peak Utilization and Margin

– Applies to CPU, RAM, Net, Disk and OS

[Chart: "usr+sys CPU for Peak Period" — CPU % (0-100) vs. time, with the Utilization, Margin and Headroom bands marked]


Utilization

• Utilization is the proportion of busy time

• Always defined over a time interval

[Chart: "usr+sys CPU for Peak Period" — CPU % (0-100) vs. time, showing utilization over the interval]

[Chart: "OnCPU Scheduling for Each CPU" — per-CPU OnCPU activity over microseconds, compared with the mean CPU utilization]


Response Time

• Response Time = Queue time + Service time

• The Usual Assumptions…

– Steady state averages

– Random arrivals

– Constant service time

– M servers processing the same queue

• Approximations

– Queue length = Throughput x Response Time

• (Little's Law)

– Response Time = Service Time / (1 - Utilization^M)
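The two approximations above are easy to put into code; a small sketch (function and variable names are mine):

```python
def little_queue_length(throughput, response_time):
    """Little's Law: mean number in the system = throughput x response time."""
    return throughput * response_time

def approx_response_time(service_time, utilization, m_servers=1):
    """R = S / (1 - U^M): the rough multi-server approximation from the slide.

    Valid only below saturation (utilization < 1).
    """
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization ** m_servers)

# At 50% utilization a single server's response time doubles its service time;
# an 8-way server at the same utilization barely degrades at all.
single = approx_response_time(10.0, 0.5)      # 20.0 (ms)
eight_way = approx_response_time(10.0, 0.5, 8)  # ~10.04 (ms)
```

This is exactly why the response time curves later in the deck stay flat longer, then fall off a cliff, as the CPU count grows.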


Response Time Curves

The traditional view of Utilization as a proxy for response time

Systems with many CPUs can run at higher utilization levels, but degrade more

rapidly when they run out of capacity

Headroom margin should be set according to a response time target.

[Chart: "Response Time Curves" — response time increase factor (0-10x) vs. total system utilization % (0-100), one curve each for 1, 2, 4, 8, 16, 32 and 64 CPUs, with the headroom margin marked]

R = S / (1 - (U%)^m)


So what's the problem with Utilization?

• Unsafe assumptions! Complex adaptive systems are not simple!

• Random arrivals?

– Bursty traffic with long tail arrival rate distribution

• Constant service time?

– Variable clock rate CPUs, inverse load dependent service time

– Complex transactions, request and response dependent

• M servers processing the same queue?

– Virtual servers with varying non-integral concurrency

– Non-identical servers or CPUs, Hyperthreading, Multicore, NUMA

• Measurement Errors?

– Mechanisms with built in bias, e.g. sampling from the scheduler clock

– Platform and release specific systemic changes in accounting of interrupt time


Threaded CPU Pipelines

• CPU microarchitecture optimizations
  – Extra register sets working with one execution pipeline

– When the CPU stalls on a memory read, it switches registers/threads

– Operating system sees multiple schedulable entities (CPUs)

• Intel Hyperthreading
  – Each CPU core has an extra thread to use spare cycles

– Typical benefit is 20%, so total capacity is 1.2 CPUs

– I.e. Second thread much slower when first thread is busy

– Hyperthreading aware optimizations in recent operating systems

• Sun “CoolThreads”
  – "Niagara" SPARC CPU has eight cores, one shared floating point unit

– Each CPU core has four threads, but each core is a very simple design

– Behaves like 32 slow CPUs for integer, snail like uniprocessor for FP

– Overall throughput is very high, performance per watt is exceptional

– New Niagara 2 has dedicated FPU and 8 threads per core (total 64 threads)


Variable Clock Rate CPUs

• Laptop and other low power devices do this all the time
  – Watch CPU usage of a video application and toggle mains/battery power…

• Server CPU Power Optimization - AMD PowerNow!™
  – AMD Opteron server CPU detects overall utilization and reduces clock rate
  – Actual speeds vary, but for example could reduce from 2.6GHz to 1.2GHz
  – Changes are not understood or reported by operating system metrics
  – Speed changes can occur every few milliseconds (thermal shock issues)
  – Dual core speed varies per socket, quad core varies per core
  – Quad core can dynamically stop entire cores to save power

• Possible scenario:
  – You estimate 20% utilization at 2.6GHz
  – You see 45% reported in practice (at 1.2GHz)
  – Load doubles, reported utilization drops to 40% (at 2.6GHz)
  – Actual mapping of utilization to clock rate is unknown at this point

• Note: Older and "low power" Opterons used in blades fix clock rate
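The scenario above is just arithmetic once reported utilization is converted into absolute work (GHz of CPU consumed); a sketch using the 2.6GHz/1.2GHz figures from the slide (function names are mine, and real clock-rate transitions are far messier than this):

```python
def absolute_load_ghz(utilization, clock_ghz):
    """Convert utilization reported at a known clock rate into absolute work."""
    return utilization * clock_ghz

def reported_utilization(load_ghz, clock_ghz):
    """What the OS would report if the same work ran at a different clock rate."""
    return load_ghz / clock_ghz

work = absolute_load_ghz(0.20, 2.6)            # 0.52 GHz-equivalents of work
low_clock = reported_utilization(work, 1.2)    # ~43% reported at 1.2GHz
doubled = reported_utilization(2 * work, 2.6)  # 40% reported back at 2.6GHz
```

The same 0.52GHz of work looks like roughly twice the utilization at the reduced clock rate, and doubling the load can make reported utilization go down when the clock speeds back up.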


Virtual Machine Monitors

• VMware, Xen, IBM LPARs etc.
  – Non-integral and non-constant fractions of a machine
  – Naive operating systems and applications that don't expect this behavior
  – However, lots of recent tools development from vendors

• Average CPU count must be reported for each measurement interval

• VMM overhead varies, application scaling characteristics may be affected


Measurement Errors

• Mechanisms with built in bias

– e.g. sampling from the scheduler clock underestimates CPU usage

– Solaris 9 and before, Linux, AIX, HP-UX “sampled CPU time”

– Solaris 10 and HP-UX “measured CPU time” far more accurate

– Solaris microstate process accounting always accurate, but in Solaris 10 microstates are also used to generate the system-wide CPU metrics

• Accounting of interrupt time

– Platform and release specific systemic changes

– Solaris 8 - sampled interrupt time spread over usr/sys/idle

– Solaris 9 - sampled interrupt time accumulated into sys only

– Solaris 10 - accurate interrupt time spread over usr/sys/idle

– Solaris 10 Update 1 - accurate interrupt time in sys only


Storage Utilization

• Storage virtualization broke utilization metrics a long time ago

• Host server measures busy time on a "disk"
  – Simple disk, "single server" response time gets high near 100% utilization
  – Cached RAID LUN, one I/O stream can report 100% utilization, but full capacity supports many threads of I/O since there are many disks and RAM buffering

• New metric - "Capability Utilization"
  – Adjusted to report proportion of actual capacity for current workload mix
  – Measured by tools such as Ortera Atlas (http://www.ortera.com)


How to plot Headroom

• Measure and report absolute CPU power if you can get it…

• Plot shows headroom in blue, margin in red, total power tracking day/night workload variation, plotted as mean + two standard deviations.
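The plot reduces to one calculation per measurement interval: subtract the margin and the mean-plus-two-standard-deviations load from the total capacity. A sketch, assuming load samples are already in absolute capacity units (function name and the 10% margin default are my own illustration):

```python
import math

def headroom(samples, total_capacity, margin_fraction=0.1):
    """Headroom = capacity - margin - (mean + 2 standard deviations) of load.

    samples: measured load per interval, in absolute units (e.g. a
    SPECrate-style capacity figure), so day/night variation is captured
    by the spread of the samples.
    """
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    margin = margin_fraction * total_capacity
    return total_capacity - margin - (mean + 2.0 * sd)
```

Note how a burstier workload (same mean, larger standard deviation) eats headroom even though average utilization is unchanged.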


“Cockcroft Headroom Plot”

• Scatter plot of response time (ms) vs. Throughput (KB) from iostat metrics

• Histograms on axes

• Throughput time series plot

• Shows distributions and shape of response time

• Fits throughput weighted inverse gaussian curve

• Coded using "R" statistics package

• Blogged development at

http://perfcap.blogspot.com/search?q=chp


Response Time vs. Throughput

• A different problem…

• Thread-limited appserver

• CPU utilization is low

• Measurements are of a single SOA service pool

• Response is in milliseconds

• Throughput is executions/s

     Exec                Resp
 Min.   :    1.00    Min.   :     0.0
 1st Qu.:    2.00    1st Qu.:   150.0
 Median :    8.00    Median :   361.0
 Mean   :   64.68    Mean   :   533.5
 3rd Qu.:   45.00    3rd Qu.:   771.9
 Max.   :10795.00    Max.   : 19205.0


How busy is that system again?

• Check your assumptions…

• Record and plot absolute capacity for each measurement interval

• Plot response time as a function of throughput, not just utilization

• SOA response characteristics are complicated…

• More detailed discussion in CMG06 Paper and blog entries– “Utilization is Virtually Useless as a Metric” - Adrian Cockcroft - CMG06

http://perfcap.blogspot.com/search?q=utilization

http://perfcap.blogspot.com/search?q=chp


CPU


CPU Capacity Measurements

• CPU Capacity is defined by CPU type and clock rate, or a benchmark rating like SPECrateInt2000

• CPU throughput - CPU scheduler transaction rate
  – measured as the number of voluntary context switches

• CPU Queue length
  – CPU load average gives an approximation via a time decayed average of the number of jobs running and ready to run

• CPU response time
  – Solaris microstate accounting measures scheduling delay

• CPU utilization
  – Defined as busy time divided by elapsed time for each CPU
  – Badly distorted and undermined by virtualization…
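The time-decayed load average can be illustrated with the classic exponential smoothing that Unix kernels use: each sampling period, the old average decays toward the currently observed run-queue length. A sketch of the 1-minute average with 5-second samples (a simplification of the fixed-point arithmetic real kernels use):

```python
import math

def update_load_average(old_load, runnable_now, sample_s=5.0, window_s=60.0):
    """One step of an exponentially decayed load average.

    decay = exp(-sample/window), so older samples fade geometrically;
    runnable_now is the count of jobs running plus ready to run.
    """
    decay = math.exp(-sample_s / window_s)
    return old_load * decay + runnable_now * (1.0 - decay)

# A constant run queue of 4 jobs pulls the average toward 4 over time:
load = 0.0
for _ in range(100):  # 100 samples = ~8 minutes of 5-second updates
    load = update_load_average(load, 4)
```

This is why load average lags reality: a sudden burst takes on the order of the window length to show up, and takes just as long to drain away.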


CPU time measurements

• Biased sample CPU measurements
  – See 1998 Paper "Unix CPU Time Measurement Errors"
  – Microstate measurements are accurate, but are platform and tool specific. Sampled metrics are more inaccurate at low utilization

• CPU time is sampled by the 100Hz clock interrupt
  – sampling theory says this is accurate for an unbiased sample
  – the sample is very biased, as the clock also schedules the CPU
  – daemons that wake up on the clock timer can hide in the gaps
  – problem gets worse as the CPU gets faster

• Increase clock interrupt rate? (Solaris)
  – set hires_tick=1 sets rate to 1000Hz, good for realtime wakeups
  – harder to hide CPU usage, but slightly higher overhead

• Use measured CPU time at per-process level
  – microstate accounting takes a timestamp on each state change
  – very accurate and also provides extra information
  – still doesn’t allow for interrupt overhead
  – prstat -m and the pea.se command use this accurate measurement
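The sampling bias is easy to demonstrate with a thought experiment in code: a daemon that wakes just after each 100Hz tick and runs for less than one tick period is never on-CPU when the next tick samples, so sampled accounting charges it nothing. A hypothetical sketch (my own simplification, ignoring scheduler jitter):

```python
def sampled_vs_actual(burst_ms=4.0, period_ms=10.0, offset_ms=1.0):
    """Compare clock-sample accounting with measured CPU time for a
    process that wakes offset_ms after every tick and runs for burst_ms.

    The sampling tick fires at phase 0 of each 10ms period; the process
    is on-CPU during [offset_ms, offset_ms + burst_ms) of each period.
    Returns (sampled_utilization, actual_utilization).
    """
    on_cpu_at_tick = offset_ms <= 0.0 < offset_ms + burst_ms
    sampled_utilization = 1.0 if on_cpu_at_tick else 0.0
    actual_utilization = burst_ms / period_ms

    return sampled_utilization, actual_utilization

# 4ms bursts starting 1ms after each tick: 40% real usage, 0% sampled.
# The same bursts aligned with the tick: 40% real usage, 100% sampled.
```

The bias cuts both ways, and since clock-driven wakeups are common, real systems systematically under- or over-charge exactly the processes the scheduler clock interacts with.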


More CPU Measurement Issues

• Platform and release specific details

• Are interrupts included in system time? It depends…

• Is vmstat CPU sampled (Linux) or measured (Solaris 10)?

• Load average includes CPU queue (Solaris) or CPU+Disk (Linux)

• Wait for I/O is a misleading subset of idle time, metric removed in Solaris 10, ignore it in all other Unix/Linux releases


Controlling and Monitoring CPUs in Solaris

• psrinfo - show CPU status and clock rate

• corestat - show internal behavior of multi-core CPUs

• psradm - enable/disable CPUs

• pbind - bind a process to a CPU

• psrset - create sets of CPUs to partition a system
– At least one CPU must remain in the default set, to run kernel services like NFS threads
– All CPUs still take interrupts from their assigned sources
– Processes can be bound to sets

• mpstat shows per-CPU counters (per set in Solaris 9)

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0   45   1    0   232    0  780  234  106  201    0   950   72  28   0   0
  1   29   1    0   243    0  810  243  115  186    0  1045   69  31   0   0
  2   27   1    0   235    0  827  243  110  199    0  1000   75  25   0   0
  3   26   0    0   217    0  794  227  120  189    0   925   70  30   0   0
  4    9   0    0   234   92  403   94   84 1157    0   625   66  34   0   0


Monitoring CPU mutex lock statistics

• To fix mutex contention, change the application workload or upgrade to a newer OS release

• Locking strategies are too complex to be patched

• The lockstat command
– very powerful and easy to use
– Solaris 8 extends lockstat to include kernel CPU time profiling
– dynamically changes all locks to be instrumented
– displays lots of useful data about which locks are contending

# lockstat sleep 5
Adaptive mutex spin: 3318 events
Count indv cuml rcnt spin Lock       Caller
-------------------------------------------------------------------------------
  601  18%  18% 1.00    1 flock_lock cleanlocks+0x10
  302   9%  27% 1.00    7 0xf597aab0 dev_get_dev_info+0x4c
  251   8%  35% 1.00    1 0xf597aab0 mod_rele_dev_by_major+0x2c
  245   7%  42% 1.00    3 0xf597aab0 cdev_size+0x74
  160   5%  47% 1.00    7 0xf5b3c738 ddi_prop_search_common+0x50


Network


Network protocol data

• Based on a streams module interface in Solaris

• Solaris 2 ndd interface used to configure protocols and interfaces

• Solaris 2 mib interface used by netstat -s and snmpd to get TCP stats etc.

• Advantages

– Individual named metrics reasonably stable over releases

– Consistent data using locking

– Extensible to add metrics without breaking existing code

– Solaris ndd can retune TCP online without reboot

– System data is often also made available via the SNMP protocol

• Disadvantages

– Underlying API is not supported, SNMP access is preferred


Network interface and NFS metrics

• Network interface throughput counters from kstat
– rbytes, obytes — read and output byte counts

– multircv, multixmt — multicast byte counts

– brdcstrcv, brdcstxmt — broadcast byte counts

– norcvbuf, noxmtbuf — buffer allocation failure counts

• NFS client statistics shown in iostat on Solaris

crun% iostat -xnP
                    extended device statistics
  r/s  w/s  kr/s  kw/s wait actv wsvc_t asvc_t  %w  %b device
  0.0  0.0   0.0   0.0  0.0  0.0    0.0    0.0   0   0 crun:vold(pid363)
  0.0  0.0   0.0   0.0  0.0  0.0    0.0    0.0   0   0 servdist:/usr/dist
  0.0  0.5   0.0   7.9  0.0  0.0    0.0   20.7   0   1 servhome:/export/home/adrianc
  0.0  0.0   0.0   0.0  0.0  0.0    0.0    0.0   0   0 servhome:/var/mail
  0.0  1.3   0.0  10.4  0.0  0.2    0.0  128.0   0   2 c0t2d0s0
  0.0  0.0   0.0   0.0  0.0  0.0    0.0    0.0   0   0 c0t2d0s2


How NFS Works

• Showing the many layers of caching involved

[Diagram: on the NFS client, a 1KB stdio buffer (fopen, printf, putchar, getchar) sits above the in-memory page cache (open, read, write, mmap, readdir, fstat) and the NFS rnode information cache, with CacheFS storage fetching 64KB chunks; on the NFS server, the in-memory page cache, DNLC name cache, UFS inode information cache and UFS metadata buffer cache (bread/bwrite, lread/lwrite) sit above the disk array write cache/Prestoserve and disk storage; lookup, read/write, page-in and page-out traffic flows between client and server.]


Network Capacity Measurements

• Network Interface Throughput
– Byte and packet rates input and output

• TCP Protocol Specific Throughput
– TCP connection count and connection rates
– TCP byte rates input and output

• NFS/SMB Protocol Specific Throughput
– Byte rates read and write
– NFS/SMB service response times

• HTTP Protocol Specific Throughput
– HTTP operation rates
– Get and post payload byte rates and size distribution


TCP - A Simple Approach

• Capacity and Throughput Metrics to Watch

• Connections
– Current number of established connections
– New outgoing connection rate (active opens)
– Outgoing connection attempt failure rate
– New incoming connection rate (passive opens)
– Incoming connection attempt failure rate (resets)

• Throughput
– Input and output byte rates
– Input and output segment rates
– Output byte retransmit percentage


Obtaining Measurements

• Get the TCP MIB via SNMP or netstat -s

• Standard TCP metric names:
– tcpCurrEstab: current number of established connections
– tcpActiveOpens: number of outgoing connections since boot
– tcpAttemptFails: number of outgoing failures since boot
– tcpPassiveOpens: number of incoming connections since boot
– tcpOutRsts: number of resets sent to reject connections
– tcpEstabResets: resets sent to terminate established connections
– (tcpOutRsts - tcpEstabResets): incoming connection failures
– tcpOutDataSegs, tcpInDataSegs: data transfer in segments
– tcpRetransSegs: retransmitted segments
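As a sketch of how the counters above are used in practice (the counter values here are invented), take two snapshots of the since-boot MIB counters and difference them to get per-interval rates and the derived failure and retransmit figures:

```python
# Derive per-interval TCP health metrics from two snapshots of the
# cumulative MIB counters (hypothetical values; on Solaris these come
# from netstat -s or SNMP).

def tcp_rates(prev, curr, interval_s):
    """Difference since-boot counters and derive rates/percentages."""
    d = {k: curr[k] - prev[k] for k in prev}
    return {
        "active_opens_per_s": d["tcpActiveOpens"] / interval_s,
        "passive_opens_per_s": d["tcpPassiveOpens"] / interval_s,
        # resets sent minus resets on established = rejected incoming connects
        "incoming_conn_failures": d["tcpOutRsts"] - d["tcpEstabResets"],
        "retransmit_pct": 100.0 * d["tcpRetransSegs"] / d["tcpOutDataSegs"],
    }

prev = dict(tcpActiveOpens=1000, tcpPassiveOpens=5000, tcpOutRsts=40,
            tcpEstabResets=10, tcpRetransSegs=20, tcpOutDataSegs=100000)
curr = dict(tcpActiveOpens=1300, tcpPassiveOpens=8000, tcpOutRsts=90,
            tcpEstabResets=25, tcpRetransSegs=70, tcpOutDataSegs=200000)

r = tcp_rates(prev, curr, interval_s=30)
print(r)
```

A rising retransmit percentage or incoming-failure count over successive intervals is the signal to investigate, not the absolute since-boot totals.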


Internet Server Issues

• TCP Connections are expensive
– TCP is optimized for reliable data on long lived connections
– Making a connection uses a lot more CPU than moving data
– Connection setup handshake involves several round trip delays
– Each open connection consumes about 1 KB plus data buffers

• Pending connections cause “listen queue” issues

• Each new connection goes through a “slow start” ramp up

• Other TCP Issues
– TCP windows can limit high latency high speed links
– Lost or delayed data causes time-outs and retransmissions


TCP Sequence Diagram for HTTP Get


Stalled HTTP Get and Persistent HTTP


Memory


Memory Capacity Measurements

• Physical Memory Capacity Utilization and Limits
– Kernel memory, Shared Memory segments
– Executable code, stack and heap
– File system cache usage, unused free memory

• Virtual Memory Capacity - Paging/Swap Space
– When there is no more available swap, Unix stops working

• Memory Throughput
– Hardware counter metrics can track CPU to Memory traffic
– Page in and page out rates

• Memory Response Time
– Platform specific hardware memory latency makes a difference, but is hard to measure
– Time spent waiting for page-in is part of Solaris microstate accounting


Page Size Optimization

• Systems may support large pages for reduced overhead
– Solaris support is more dynamic/flexible than Linux at present

• Intimate Shared Memory locks large pages in RAM
– No swap space reservation
– Used for large database server Shared Global Area

• No good metrics to track usage and fragmentation issues

• Solaris ppgsz command can set heap and stack pagesize

• SPARC Architecture
– Base page size is 8KB, large pages are 4MB

• Intel/AMD x86 Architectures
– Base page size is 4KB, large pages are 2MB


Cache principles

• Temporal locality - “close in time”
– If you need something frequently, keep it near you
– If you don’t use it for a while, put it back
– If you change it, save the change by putting it back

• Spatial locality - “close in space - nearby”
– If you go to get one thing, get other stuff that is nearby
– You may save a trip by prefetching things
– You can waste bandwidth if you fetch too much you don’t use

• Caches work well with randomness
– Randomness prevents worst case behaviour
– Deterministic patterns often cause cache busting accesses

• Very careful cache friendly tuning can give great speedups


The memory go round - Unix/Linux

• Memory usage flows between subsystems

[Diagram: RAM circulates between the kernel (kernel alloc, kernel free), System V shared memory (shmget, shm_unlink), process stack and heap (brk, exit, pagein, pageout, reclaim), the filesystem cache (read, write, mmap, delete, pageout, reclaim) and the free list (head/tail), with the page scanner feeding both process and filesystem pages back to the free list.]


The memory go round - Solaris 8 and Later

• Memory usage flows between subsystems

[Diagram: same flows as the previous slide, except that the free list is labelled "Free RAM List", filesystem cache pages return directly to it, and only a single page scanner path remains.]


Swap space

• Swap is very confusing and badly instrumented!

# se swap.se
ani_max 54814  ani_resv 19429  ani_free 37981  availrmem 13859
swapfs_minfree 1972  ramres 11887  swap_resv 19429  swap_alloc 16833
swap_avail 47272  swap_free 49868

Misleading data printed by swap -s:
134664 K allocated + 20768 K reserved = 155432 K used, 378176 K available

Corrected labels:
134664 K allocated + 20768 K unallocated = 155432 K reserved, 378176 K available

Mislabelled sar -r 1:
freeswap (really swap available) 756352 blocks

Useful swap data:
Total swap 520 M  available 369 M  reserved 151 M  Total disk 428 M  Total RAM 92 M

# swap -s
total: 134056k bytes allocated + 20800k reserved = 154856k used, 378752k available

# sar -r 1
18:40:51 freemem freeswap
18:40:52    4152   756912


Disk


Disk Capacity Measurements

• Detailed metrics vary by platform

• Easy for the simple disk cases

• Hard for cached RAID subsystems

• Almost impossible for shared disk subsystems and SANs
– Another system or volume can be sharing a backend spindle; when it gets busy your own volume can saturate, even though you did not change your own workload!


Solaris Filesystem issues

ufs - standard, reliable, good for lots of small files
ufs with transaction log - faster writes and recovery

tmpfs - fastest if you have enough RAM, volatile

NFS
  NFS2 - safe and common, 8KB blocks, slow writes
  NFS3 - more readahead and writebehind, faster
    default 32KB block size - fast sequential, may be slow random
    default TCP instead of UDP, more robust over WAN
  NFS4 - adds stateful behavior
  cachefs - good for read-mostly NFS speedup

Veritas VxFS - useful on old Solaris releases

Solaris 8 UFS Upgrade
  ufs was extended to be more competitive with VxFS
  transaction log, unbuffered direct access option and snapshot backup capability
  now available “for free” with Solaris 8


Solaris 10 ZFS - What it doesn't have....

• Nice features
– No extra cost - it's bundled in a free OS
– No volume manager - it's built in
– No space management - file systems use a common pool
– No long wait for newfs to finish - create a 3TB file system in a second
– No fsck - its transactional commit means it's consistent on disk
– No slow writes - disk write caches are enabled and flushed reliably
– No random or small writes - all writes are large batched sequential
– No rsync - snapshots can be differenced and replicated remotely
– No silent data corruption - all data is checksummed as it is read
– No bad archives - all the data in the file system is scrubbed regularly
– No penalty for software RAID - RAID-Z has a clever optimization
– No downtime - mirroring, RAID-Z and hot spares
– No immediate maintenance - double parity disks if you need them

• Wish-list
– No way to know how much performance headroom you have!
– No clustering support


Linux Filesystems

• There are a large number of options!
– http://en.wikipedia.org/wiki/Comparison_of_file_systems

• EXT3
– Common default for many Linux distributions
– Efficient for CPU and space, small block size
– Relatively simple for reliability and recovery
– Journalling support options can improve performance
– EXT4 is in development

• XFS
– Based on Silicon Graphics XFS, mature and reliable
– Better for large files and streaming throughput
– High Performance Computing heritage


Disk Configurations

• Sequential access is ~10 times faster than random
– Sequential rates are now about 50-100 MB/s per disk
– Random rates are ~166 operations/sec at 10000rpm (250/sec at 15000rpm)
– The size of each random read should be as big as possible

• Reads should be cached in main memory
– “The only good fast read is the one you didn’t have to do”
– Database shared memory or filesystem cache is microseconds
– Disk subsystem cache is milliseconds, plus extra CPU load
– Underlying disk is ~6ms, as it's unlikely that data is in cache

• Writes should be cached in nonvolatile storage
– Allows write cancellation and coalescing optimizations
– NVRAM inside the system - direct access to Flash storage
– Solid State Disks based on Flash are the "Next Big Thing"


Slow idle disks explained

extended disk statistics
disk  r/s  w/s  Kr/s  Kw/s wait actv  svc_t  %w  %b
sd2   1.3  0.3  11.7   3.3  0.1  0.1  146.6   0   3
sd3   0.0  0.1   0.1   0.7  0.0  0.0  131.0   0   0

Why do these disks have high svc_t when they are idle?
– Use prex to turn on kernel TNF probes for disk I/O
– sdstrategy is called when an I/O is started, biodone is called when it completes
– Match the pairs of TNF records to see the time sequences
– We find a burst of writes from pid 3 (fsflush) every 30s

fsflush is updating inodes scattered all over the filesystem
– All writes are issued back to back without waiting for completion
– A long queue forms; each write takes on average ~10ms to service, but the response (svc_t) includes a long queue time
– Typically 20 or so writes every 30s shows as 0% busy with 100-200ms svc_t
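The arithmetic behind that pattern can be sketched with the numbers from the slide (a simplified model that assumes fixed 10ms service and one burst per interval): writes issued back to back queue behind each other, so measured response time is dominated by queue time even though the disk is almost always idle.

```python
# Sketch of the fsflush burst: 20 writes issued back to back, each with
# 10ms of service time, once every 30 seconds. The k-th write waits for
# the k-1 ahead of it, so svc_t is mostly queueing delay.

SERVICE_MS = 10
BURST = 20
INTERVAL_MS = 30000   # fsflush runs every 30 seconds

# response time of the k-th queued write = (k+1) * service time
responses = [(k + 1) * SERVICE_MS for k in range(BURST)]
avg_svc_t = sum(responses) / BURST                    # ~105ms average
busy_pct = 100.0 * BURST * SERVICE_MS / INTERVAL_MS   # well under 1% busy

print(avg_svc_t, busy_pct)
```

So ~105ms average svc_t alongside ~1% busy is exactly the "slow idle disk" signature, and it is harmless.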


Disk Throughput

[Chart: disk throughput over time, plotting disk_wK/s and disk_rK/s, y-axis 0 to 14000 KB/s]


Max and Avg Disk Utilization (Same data)

[Chart: maximum and average disk utilization from the same data, plotting disk_max% and disk_avg%, y-axis 0 to 100%]


Data from iostat

• What can we see here?

extended disk statistics

disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b

sd7 0.1 1.7 0.1 13.3 0.0 0.2 109.8 0 1

sd15 534.2 17.5 1320.4 35.0 0.0 0.3 0.6 0 26

sd45 291.9 23.0 603.2 49.8 0.0 0.2 0.6 0 15

sd60 3.1 0.0 25.3 0.0 0.0 0.0 7.8 0 2

sd61 3.3 0.0 26.4 0.0 0.0 0.0 7.6 0 2

sd62 3.2 0.0 26.1 0.0 0.0 0.0 8.1 0 3

sd63 3.8 0.0 30.1 0.0 0.0 0.0 7.2 0 3

sd64 3.6 0.0 28.8 0.0 0.0 0.0 7.4 0 3

sd65 3.8 0.0 31.2 0.0 0.0 0.0 7.3 0 3

sd67 9.7 1.5 77.8 4.3 0.0 0.1 9.0 0 8

sd68 10.7 1.4 85.3 4.2 0.0 0.1 9.0 0 10

sd69 10.0 1.5 79.9 4.2 0.0 0.1 9.0 0 9

sd70 10.4 1.0 83.1 3.2 0.0 0.1 9.1 0 9

sd71 9.9 1.4 78.8 4.6 0.0 0.1 8.7 0 9

sd72 10.0 1.1 79.9 3.7 0.0 0.1 8.5 0 8

sd75 0.0 27.6 0.0 297.3 0.0 0.0 1.1 0 2

sd210 12.1 0.3 108.9 0.6 0.0 0.1 9.8 0 10

sd211 12.9 0.4 114.8 0.7 0.0 0.1 10.6 0 11

sd212 12.0 0.6 107.1 1.3 0.0 0.1 11.1 0 10

sd213 13.8 0.3 122.2 0.9 0.0 0.2 11.1 0 11

sd214 12.5 0.5 112.1 1.0 0.0 0.1 10.3 0 10

sd215 12.1 0.3 109.5 0.8 0.0 0.1 10.5 0 10

[Callout labels from the slide, annotating groups of devices in the iostat output above: sd7 root ufs; solid state disks; stripe 8K RR; stripe; cached write log; stripe]


Simple Disks

• Utilization shows capacity usage
– Measured using iostat %b

• Response time is svc_t
– svc_t increases due to waiting in the queues caused by bursty loads

• Service time per I/O is Util/IOPS
– Calculate as (%b/100)/(r/s + w/s)
– Decreases due to optimization of queued requests as load increases


Single Disk Parameters

• e.g. Seagate 18GB ST318203FC
– Obtain specifications from www.seagate.com

– RPM = 10000, so 6.0ms per rotation = ~166/s

– Avg read seek = 5.2ms

– Avg write seek = 6.0ms

– Avg transfer rate = 24.5 MB/s

– Random IOPS
  • Approx 166/s for small requests
  • Approx 24.5/size per second for large requests (size in MB)
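These datasheet figures turn into the rule-of-thumb capacity numbers quoted above with simple arithmetic (a sketch; real drives also add seek time to each random access, so 166/s is an upper bound set by rotation):

```python
# Back-of-envelope disk capacity numbers from the ST318203FC datasheet
# values quoted above.

rpm = 10000
rotation_ms = 60000.0 / rpm          # 6.0ms per full rotation
small_iops = 1000.0 / rotation_ms    # ~166 random ops/s, rotation-bound

transfer_mb_s = 24.5
def large_iops(request_mb):
    """For large requests the disk is bandwidth-bound: ~24.5/size per sec."""
    return transfer_mb_s / request_mb

print(rotation_ms, int(small_iops), large_iops(1.0))
```

The crossover between the two regimes is the request size at which 166 ops/s of small transfers equals the sequential bandwidth, which is why big random reads pay off.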


Mirrored Disks

• All writes go to both disks

• Read policy alternatives
– All reads from one side

– Alternate from side to side

– Split by block number to reduce seek

– Read both and use first to respond

• Simple Capacity Assumption
– Assume duplicated interconnects

– Same capacity as unmirrored


Concatenated and Fat Stripe Disks

• Request size less than interlace

• Requests go to one disk

• Single threaded requests
– Same capacity as single disk

• Multithreaded requests
– Same service time as one disk

– Throughput of N disks if more than N threads are evenly distributed


Striped Disks

• Request size more than interlace

• Requests split over N disks
– Single and multithreaded requests

– N = request size / interlace

– Throughput of N disks

• Service Time Reduction
– Reduced size of request reduces service time for large transfers
– Need to wait for all disks to complete - slowest dominates


RAID5 for Small Requests

• Writes must calculate parity
– Read parity and old data blocks

– Calculate new parity

– Write log and data and parity

– Triple service time

– One third throughput of one disk

• Read performs like stripe
– Throughput of N-1 disks, service time of one
– Degraded mode throughput about one disk
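A rough throughput model for the write penalties on this and the mirrored-disk slide can be sketched as follows. It assumes independent disks, no cache, and the textbook accounting of four back-end I/Os per RAID5 small write (read old data, read old parity, write data, write parity), which expresses the same penalty the slide counts as triple service time with the log included:

```python
# Rough small-write throughput model (independent disks, no write cache).
# disk_iops is the per-spindle random rate, e.g. ~166 at 10000rpm.

def raid5_small_write_iops(n_disks, disk_iops=166):
    # 4 back-end ops per logical small write, spread over all spindles
    return n_disks * disk_iops / 4.0

def mirror_write_iops(n_disks, disk_iops=166):
    # every logical write is duplicated to both sides of a mirrored pair
    return n_disks * disk_iops / 2.0

def stripe_write_iops(n_disks, disk_iops=166):
    # no redundancy, every spindle does useful work
    return n_disks * disk_iops

print(raid5_small_write_iops(8), mirror_write_iops(8), stripe_write_iops(8))
```

For the same spindle count, RAID5 delivers half the small-write throughput of a mirror and a quarter of a plain stripe, which is why nonvolatile write caches matter so much on the later slides.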



RAID5 for Large Requests

• Write full stripe and parity

• Capacity similar to stripe
– Similar read and write performance

– Throughput of N-1 disks

– Service time for size reduced by N-1

– Less interconnect load than mirror

• Degraded Mode
– Throughput halved and service similar

– Extra CPU used to regenerate data



Cached RAID5

• Nonvolatile cache
– No need for recovery log disk

• Fast service time for writes
– Interconnect transfer time only

• Cache optimizes RAID5
– Makes all backend writes full stripe


Cached Stripe

• Write caching for stripes
– Greatly reduced service time

– Very worthwhile for small transfers

– Large transfers should not be cached

– In many cases, 128KB is crossover point from small to large

• Optimizations
– Rewriting same block cancels in cache

– Small sequential writes coalesce


Capacity Model Measurements

• Derived from iostat outputs

extended disk statistics

disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b

sd9 33.1 8.7 271.4 71.3 0.0 2.3 15.8 0 27

• Utilization U = %b / 100 = 0.27

• Throughput X = r/s + w/s = 41.8

• Size K = (Kr/s + Kw/s) / X = 8.2KB

• Concurrency N = actv = 2.3

• Service time S = U / X = 6.5ms

• Response time R = svc_t = 15.8ms
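The derivation above can be scripted so it works for any iostat line; the values here are the sd9 sample from this slide:

```python
# Reproduce the derived capacity metrics from the iostat sample line:
# sd9  33.1  8.7  271.4  71.3  0.0  2.3  15.8  0  27

rps, wps = 33.1, 8.7
kr, kw = 271.4, 71.3
actv, svc_t, pct_b = 2.3, 15.8, 27

U = pct_b / 100.0     # utilization
X = rps + wps         # throughput, I/Os per second
K = (kr + kw) / X     # average I/O size in KB
N = actv              # mean concurrency at the device
S = U / X             # service time per I/O, in seconds
R = svc_t             # response time as reported, in ms

print(U, X, round(K, 1), N, round(S * 1000, 1), R)
```

Note that S comes out well below R: the difference between the 6.5ms service time and the 15.8ms response time is time spent queueing behind the 2.3 concurrent requests.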


Cache Throughput

• Hard to model clustering and write cancellation improvements

• Make pessimistic assumption that throughput is unchanged

• Primary benefit of cache is fast response time

• Writes can flood cache and saturate back-end disks
– Service times suddenly go from 3ms to 300ms
– Very hard to figure out when this will happen
– Paranoia is a good policy…


Concluding Summary

Walk out of here with the most useful content fresh in your mind!


Quick Tips #1 - Disk

• The system will usually have a disk bottleneck

• Track how busy is the busiest disk of all

• Look for unbalanced, busy or slow disks with iostat

• Options: timestamp, look for busy controllers, ignore idle disks:

% iostat -xnzCM -T d 30
Tue Jan 21 09:19:21 2003
                    extended device statistics
   r/s   w/s  Mr/s  Mw/s wait actv wsvc_t asvc_t  %w  %b device
 141.0   8.6   0.6   0.0  0.0  1.5    0.0   10.0   0  25 c0
   3.3   0.0   0.0   0.0  0.0  0.0    0.0    6.5   0   2 c0t0d0
 137.7   8.6   0.6   0.0  0.0  1.5    0.0   10.1   0  74 c0t1d0

Watch out for sd_max_throttle limiting throughput when set too low

Watch out for RAID cache being flooded on writes, causes sudden very large increase in write service time


Quick Tips #2 - Network

• If you ever see a slow machine that also appears to be idle, you should suspect a network lookup problem. i.e. the system is waiting for some other system to respond.

• Poor Network Filesystem response times may be hard to see
– Use iostat -xn 30 on a Solaris client

– wsvc_t is the time spent in the client waiting to send a request

– asvc_t is the time spent in the server responding

– %b will show 100% whenever any requests are being processed, it does NOT mean that the network server is maxed out, as an NFS server is a complex system that can serve many requests at once.

• Name server delays are also hard to detect
– Overloaded LDAP or NIS servers can cause problems

– DNS configuration errors or server problems often cause 30s delays as the request times out


Quick Tips #3 - Memory

• Avoid the common vmstat misconceptions
– The first line is the average since boot, so ignore it

• Linux, Other Unix and earlier Solaris Releases
– Ignore “free” memory
– Use high page scanner “sr” activity as your RAM shortage indicator

• Solaris 8 and Later Releases
– Use “free” memory to see how much is left for code to use
– Use non-zero page scanner “sr” activity as your RAM shortage indicator

• Don’t panic when you see page-ins and page-outs in vmstat

• Normal filesystem activity uses paging

solaris9% vmstat 30
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr f0 s0 s1 s6   in   sy   cs us sy id
 0 0 0 2367832 91768  3  31  2  1  1  0  0  0  0  0  0  511  404  350  0  0 99
 0 0 0 2332728 75704  3  29  0  0  0  0  0  0  0  0  0  508  537  410  0  0 99


Quick Tips #4 - CPU

• Look for a long run queue (vmstat procs r) - and add CPUs
– To speed up with a zero run queue you need faster CPUs, not more of them

• Check for CPU system time dominating user time
– Most systems should have lots more usr than sys, as they are running application code
– But... dedicated NFS servers should be 100% sys
– And... dedicated web servers have high sys as well
– So... assume that lots of network service drives sys time

• Watch out for processes that hog the CPU
– Big problem on user desktop systems - look for looping web browsers
– Web search engines may get queries that loop
– Use resource management or limit cputime (ulimit -t) in startup scripts to terminate web queries


Quick Tips #5 - I/O Wait

• Look for processes blocked waiting for disk I/O (vmstat procs b)
– This is what causes CPU time to be counted as wait not idle
– Nothing else ever causes CPU wait time!

• CPU wait time is a subset of idle time, consumes no resources
– CPU wait time is not calculated properly on multiprocessor machines on older Solaris releases, it is greatly inflated!
– CPU wait time is no longer calculated, and is zero in Solaris 10
– Bottom line - don’t worry about CPU wait time, it’s a broken metric

• Look at individual process wait time using microstates
– prstat -m or SE toolkit process monitoring

• Look at I/O wait time using iostat asvc_t


Quick Tips #6 - iostat

• For Solaris remember “expenses” iostat -xPncez 30

• Add -M for Megabytes, and -T d for timestamped logging

• Use 30 second interval to avoid spikes in load. Watch asvc_t which is the response time for Solaris

• Look for regular disks over 5% busy that have response times of more than 10ms as a problem.

• If you have cached hardware RAID, look for response times of more than 5ms as a problem.

• Ignore large response times on idle disks that have filesystems - it’s not a problem, and the cause is the fsflush process
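These thresholds are easy to check mechanically. The sketch below assumes the iostat -xn column order (asvc_t in field 8, %b in field 10, device name last) and uses invented sample lines; note that the second disk shows an 88 ms response time but is only 1% busy, so per the tip above it is ignored.

```shell
#!/bin/sh
# Illustrative filter: flag disks over 5% busy with asvc_t above 10 ms.
# Sample output stands in for a live `iostat -xn 30` pipeline.
iostat_sample='    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   90.0   25.0  720.0  200.0  0.0  1.2    0.0   22.5   0  35 c0t0d0
    0.1    0.0    0.8    0.0  0.0  0.0    0.0   88.0   0   1 c0t1d0'
slow=$(echo "$iostat_sample" | awk 'NR > 1 && $10 > 5 && $8 > 10 { print $11 }')
echo "$slow"    # busy disks with slow response times
```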


Recipe to fix a slow system

• Essential Background Information
– What is the business function of the system?
– Who and where are the users?
– Who says there is a problem, and what is slow?
– What changed recently and what is on the way?

• What is the system configuration?
– CPU/RAM/Disk/Net/OS/Patches, what application software is in use?

• What are the busy processes on the system doing?
– use top, prstat, pea.se or /usr/ucb/ps uax | head

• Report CPU and disk utilization levels, iostat -xPncezM -T d 30
– What is making the disks busy?

• What is the network name service configuration?
– How much network activity is there? Use netstat -i 30 or nx.se 30

• Is there enough memory?
– Check free memory and the scan rate with vmstat 30
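The recipe above could be wrapped in a small helper run once when a slowness report comes in. The function name and ordering are illustrative; the commands are the Solaris forms from the slides (on Linux, substitute e.g. iostat -x 30 and ps aux), and actually running it requires those tools to be present.

```shell
#!/bin/sh
# Hypothetical first-look helper following the recipe's observation steps.
perfsnap() {
    date; uname -a                  # when, and what system configuration
    /usr/ucb/ps uax | head          # what are the busy processes doing?
    iostat -xPncezM -T d 30 2       # CPU and disk utilization, two samples
    vmstat 30 2                     # run queue, free memory, and scan rate
    netstat -i 30                   # network activity (interrupt to stop)
}
# Usage: perfsnap > /var/tmp/perfsnap.log 2>&1   (illustrative log path)
```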


Further Reading - Books

General Solaris/Unix/Linux Performance Tuning
– System Performance Tuning (2nd Edition) by Gian-Paolo D. Musumeci and Mike Loukides; O'Reilly & Associates

Solaris Performance Tuning Books
– Solaris Performance and Tools, Richard McDougall, Jim Mauro, Brendan Gregg; Prentice Hall
– Configuring and Tuning Databases on the Solaris Platform, Allan Packer; Prentice Hall
– Sun Performance and Tuning, by Adrian Cockcroft and Rich Pettit; Prentice Hall

Sun BluePrints™
– Capacity Planning for Internet Services, Adrian Cockcroft and Bill Walker; Prentice Hall
– Resource Management, Richard McDougall, Adrian Cockcroft et al.; Prentice Hall

Linux
– Linux Performance Tuning and Capacity Planning by Jason R. Fink and Matthew D. Sherer
– Google has a Linux specific search mode http://www.google.com/linux


Questions? (The End)