
Page 1: Performance Engineering

Performance Engineering 1

Performance Engineering

Prof. Jerry Breecher

Software Performance Engineering

Page 2: Performance Engineering


What’s In This Document?

An ACM “Queue” Podcast interview with a performance analyst.

Sample help wanted ads – what does the market define today when looking for a Performance Engineer/Analyst.

One aspect of Performance Engineering – building performance into a product.

Page 3: Performance Engineering


Several Views Of What A Performance Analyst Does

See http://queue.acm.org/. Under Browse Topics, click on Performance. There are a number of great articles there, focused on practice.

Five Common Issues When A Performance Problem Exists:
- Product wasn't developed with a performance test harness.
- When a problem develops, no one takes responsibility.
- The developers on site don't use the tools that are available to solve the problem.
- After developing a list of possible causes, there's no elimination of the unlikely problems; people don't know how to determine what matters.
- Often people don't have the patience just to sift through the data.

A List of Useful Tools:
- dtrace – instruments existing code by putting probes at known points.
- VTune – finds out what code is being executed.
- strace on Linux – prints out all the system calls executed.

Page 4: Performance Engineering


Bob Wescott's Rules Of Performance
Great book and the price is right. Gleaned from many years of experience.

Bob Wescott, "The Every Computer Performance Book", ISBN-13: 978-1482657753, http://www.treewhimsy.com/TECPB/Book.html

The less a company knows about the work their system did in the last five minutes, the more deeply screwed up they are. <If you can’t measure it, you can’t manage it.>

What you fail to plan for, you are condemned to endure. <Bad things WILL happen.>

If they don't trust you, your results are worthless. <Be clear how much you trust your numbers.>

Always preserve and protect the raw performance data. <You massage it too much, and the data will lose its meaning. Be able to get it back.>

The meters should make sense to you at all times, not just when it is convenient. <Know your tools – what they can and cannot do, and what they give you.>

If the response time is improving under increased load, then something is broken. <If results don't fit your model, something is broken.>

If you have to model, build the least accurate model that will do the job.<And I say, always have a model.>

You’ll do this again. Always take time to make things easier for your future self.<Write down everything you do.>

Ignore everything to the right of the decimal point. <Significant figures!!!!>

Never offer more than two possible solutions or discuss more than three. <KISS>

Page 5: Performance Engineering


Sample Want Ads
• Conduct reviews of application designs, business and functional requirements

• Implement test plans and cases based on technical specifications

• Design and execute automated and manual scripted test cases

• Document, maintain and monitor software problem reports

• Work with team members to resolve product performance issues

• Utilize multiple test tools to drive load and characterize system performance

• Execute tests and report performance/scalability test results

Skills/Requirements
4-6 years post-graduation experience in QA testing client and server applications

Demonstrated experience with MS SQL server databases

Experience with running UI automated test scripts.  Familiarity with SilkTest preferred.

Exposure to multi-threading and network programming

Undergraduate degree from top-tier computer science/engineering university

Performance Testing

Page 6: Performance Engineering


Sample Want Ads
• Use home-grown and commercial tools to measure, analyze, and characterize performance, robustness, and scalability of the EdgeSuite Platform

• Serve as a technical point of escalation to operations and customer care

• Debug complex service issues: service incidents, complex customer setups, field trials, performance issues, and availability issues

• Enable specific capabilities to our operational networks that are outside the capabilities of our Operations group

• Work across all technical areas in the company to enable innovative new solutions that span multiple technologies and services, often to meet specific customer needs

Skills/Requirements
- Familiarity with data analysis
- Experience in network operation and monitoring
- In-depth knowledge of networking principles and implementation, including TCP/IP, UDP, DNS, HTTP, and SSL protocols, a plus

- Thorough understanding of distributed systems

- Experience with principles of software development and design

Performance Debugging

Page 7: Performance Engineering


Sample Want Ads
• Ever stay up all night trying to squeeze 3 more fps out of your overclocked GPU?
• Crash your bike because you were too busy thinking of ways to speed up a nasty triply nested loop?
• Recompile your Linux kernel to extract that last ounce of performance?
We have a job waiting for you.

Endeca is seeking an energetic and driven engineer to join our new System Analysis team.

• Engineers on this team will be responsible for exploring and understanding the behaviors and characteristics of Endeca's system.

• Members of this team will work with developers to tune system performance, provide technical guidance to architects building customer applications, and help our customers continue to achieve unprecedented levels of performance and scalability.

Skills/Requirements
- 3 years experience in software engineering

- Undergraduate or graduate degree in computer science, or equivalent depth of study in CS

- Familiarity with the process of software performance investigation and tuning

- Experience with Linux and Windows

- Experience with scripting languages

- Ability to grasp the complexities of large distributed systems

- Strong analytical and troubleshooting skills

- Very highly motivated, quick learner

Customer Performance

Page 8: Performance Engineering


Sample Want Ads
The Performance Engineer will provide technical leadership to the organization in the areas of software frameworks and architecture, infrastructure architecture, middleware architecture and UI architecture.

The Performance Engineer is expected to have versatile expertise in application performance (DB, middleware, UI, infrastructure). This engineer will collaborate with all teams within IT to implement an application performance measurement framework using end-to-end performance measurement and monitoring tools. Using data collected from these tools the Performance Engineer will work with the architects to influence application and infrastructure design.

This performance engineer must demonstrate skill versatility in the areas of application architecture, infrastructure architecture and application performance.

JOB RESPONSIBILITY
• Implement end-to-end performance measurement tools/frameworks.
• Build processes around tools to conduct application performance benchmarks.
• Design application benchmarks that will simulate application workloads.
• Design and implement capacity measurement tools and performance benchmarks and testing.
• Ability to wear many hats to help expedite multiple projects.

Skills/Requirements
- Strong performance measurement skills using tools like LoadRunner, SilkRunner.
- Strong performance analysis skills with a thorough understanding of application bottlenecks and infrastructure bottlenecks (OS, storage, etc.).
- Strong skills using performance measurement/monitoring tools like BMC Patrol, BMC Perform/Predict, HP OpenView, MOM.
- Hands-on experience writing LoadRunner scripts and simulating performance benchmarks.

- Experience with J2EE performance measurement tools is a plus

Performance Architect

Page 9: Performance Engineering


Sample Want Ads
This individual will work with the systems architects and key stakeholders to develop a performance strategy for SSPG products and implement a methodology to measure fine-grained resource utilization.

This individual will establish a set of benchmarks and a benchmark methodology that include all supported storage protocols, the control path, applications and solutions.

Will also be an evangelist for performance within the group and ensure that performance is a core SSPG competency.

The position requires strong “hands on” development skills and a desire to work in a fast paced collaborative environment. Candidate must have a strong knowledge of operating system technology, device drivers, multiprocessor systems, and contemporary software engineering principles. 

Skills/Requirements
BS in CS/CE plus 7-10 years experience or equivalent.

Proven experience with storage performance benchmarking and tuning including hands on experience with performance related applications such as Intel VTune, SpecFS, and IOMeter

Strong operating system knowledge base with a focus on Linux, Windows and embedded operating systems.

Strong C/C++ programming and Linux scripting experience.

Knowledge of any of the following protocols and technologies is a plus: iSCSI, TCP/IP, Fibre Channel, SAS, File systems, RAID and storage systems

Design and development experience with embedded system is desirable

Candidate should possess excellent verbal and written communications skills.

Performance Architect

Page 10: Performance Engineering


Performance Engineering Motivation

This section is devoted to motivation, and talking through a number of the guiding tenets of Performance Engineering.

Performance Engineering is the practice of applying Software Engineering principles to the product life cycle in order to assure the best performance for a product. The purpose is to know at each stage of development the performance attributes of the product being built.

Page 11: Performance Engineering


Performance Engineering Motivation

A project is planned and scheduled under tight constraints; marketing feels that it is strategic to offer this product, and upper management inquires on a daily basis about the status of the project. Numerous short-cuts are taken in the design and implementation of the project. The product gets to alpha "on schedule", but it's discovered that the product is bug-ridden, and performs at 1/10th the speed of the slowest competitor. When the product finally ships, it's 6 months behind schedule, never wins a benchmark, and serves only as a line item in the product catalog. Within a year, a project is launched to build it "right".

Does this sound familiar? Does this ever happen in your life? Is there ever a situation where such an occurrence is acceptable? In the above example, what if the quality was OK but the performance remained terrible; would the scenario then be acceptable? Is it ever acceptable to not spec a product because there won't be enough time?

Example: Read the unbiased, "true-to-life" example portrayed above and answer the questions posed.

Page 12: Performance Engineering


Performance Engineering Motivation

1. Can you get performance for free? Does it naturally fall out of a "good" design?

2. Can you add performance at the end of a project?

3. Are performance problems as {easy | hard} to fix as functional bugs?

4. Is it easier to design in quality or performance?

5. It’s often stated that since performance is decided by algorithms rather than by coding methodology, it's primarily project leaders or high level designers who need to worry about performance. Do you agree with this?

6. What are the politics of performance estimation? What happens if you don't meet your performance goal? What will happen if you make your best guess up front and then your product comes in below this guess (after all, it was a guess, just like we've been doing in class)? Does putting an uncertainty on the number make it OK?

7. Does it help to have management stress the importance of performance? When it comes to the crunch, does management emphasize Performance, Quality, or Schedules?

LOTS OF OTHER QUESTIONS RELATE TO THIS TOPIC:

Page 13: Performance Engineering


Performance Engineering Motivation

It leads to more development time

There will be Maintenance Problems (due to “tricky code”).

It's too difficult to build in performance.

Performance problems are rare.

Performance doesn't matter on this product since so few people will be using it.

Performance can be solved with hardware, and hardware is (relatively) inexpensive.

We can tune it later.

Sam and Sally and Sarah didn't have to worry about performance, so it really isn't very important.

Good performance is a natural byproduct of good design and coding techniques.

If we move to hardware three times faster, the problem will disappear.

FOLKLORE: Believe it or not, these are all comments/excuses I’ve heard!!

Page 14: Performance Engineering


Performance Engineering Motivation

MANY systems initially perform TERRIBLY because they weren't well designed.

Problems are often due to fundamental architectural or design factors rather than inefficient code.

Performance engineering is no more expensive than software engineering.

Performance problems are visible and memorable.

It's possible to avoid being surprised by the performance of the finished product.

THE REALITY IS:

Page 15: Performance Engineering


Performance Engineering Motivation

Good-performing systems result in:

• User satisfaction

• User Productivity

• Development staff productivity

• Selling more systems and getting a bigger paycheck.

• Performance can be "orders of magnitude" better with early, high level optimization.

THE BENEFITS OF PERFORMANCE ENGINEERING INCLUDE:

Timely implementation allows for:

• Staff effectiveness

• Fire prevention rather than fire fighting.

• No surprises.

Page 16: Performance Engineering


Performance Engineering Motivation

• The critical path time to deliver is minimized if modeling, analysis, and reporting are done up-front. This is the “Software Engineering Religion.”

• Time is required by the design team. Performance experts are part of that design team.

• Time for modifications - Pay now or pay later.

• Cost of needed skills.

THE COSTS OF PERFORMANCE ENGINEERING:

The way we do performance engineering today is analogous to the marksman - he shoots first and whatever he hits, he calls the target.

Page 17: Performance Engineering


Performance Engineering Introduction

In this section we begin looking at some of the practical ways of doing Performance Engineering. Performance Engineering isn't magic or miraculous, but an organized mechanism for building in performance.

Little’s Law ….. Utilization …. Blah, blah, !!!

Then a Miracle Happens!!

An amazingly good-performing product results.

YOU NEED TO BE A BIT MORE EXPLICIT ABOUT SOME OF THE DETAILS!!
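The "Little's Law ….. Utilization" shorthand above can be made a bit more explicit. Here is a minimal Python sketch; the request rates and service times are illustrative assumptions, not figures from the course.

```python
# Little's Law: mean number in system (L) = arrival rate (lambda) * mean time
# in system (W). All numbers below are illustrative assumptions.

def concurrency(arrival_rate_per_sec, mean_response_sec):
    """L = lambda * W: average number of requests in flight."""
    return arrival_rate_per_sec * mean_response_sec

# 200 requests/sec at a 50 ms mean response time => 10 requests in flight.
print(concurrency(200, 0.050))

def cpu_utilization(arrival_rate_per_sec, service_sec_per_request):
    """Utilization of one resource = arrival rate * service demand per request."""
    return arrival_rate_per_sec * service_sec_per_request

# 200 requests/sec, each needing 4 ms of CPU => the CPU is about 80% busy.
print(cpu_utilization(200, 0.004))
```

These two identities underpin most back-of-the-envelope performance models: if measured concurrency disagrees with the lambda-times-W prediction, one of the meters (or the model) is broken.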

Page 18: Performance Engineering


KEY POINTS IN PERFORMANCE ENGINEERING

By setting broad, verifiable performance targets at the beginning, in the Marketing Requirements, we can track those targets through the whole development lifecycle and verify along the way that the goals are being met.

The goal is to show how to incorporate performance information into the Standard Development Life Cycle. Developers already have within them the information needed to make performance predictions; they need only to understand how to express that information.

The trickle down philosophy:

Page 19: Performance Engineering


KEY POINTS IN PERFORMANCE ENGINEERING

We build on Software Engineering Methodology:

This methodology employs a number of documents and review mechanisms to ensure the completeness and quality of our software. These same techniques can be used to improve the performance of systems; neither quality nor performance is an add-on, so the procedures in place to improve quality can also be used to improve performance.

Performance is an intangible:

It's easy to see and describe a function, but much harder to determine how fast it will go or what resources it will devour. Performance Engineering makes visible the performance expectations of a new product and quantifies whatever can be nailed down at any particular point in the development cycle.

Page 20: Performance Engineering


KEY POINTS IN PERFORMANCE ENGINEERING

Bootstrapping

REQUIREMENTS
  Functionality: Where the specified functionality fits in the market.
  Performance: Where the specified performance fits in the market.

FUNCTIONAL SPEC
  Functionality: What it will do; interfaces.
  Performance: Resources needed by each of the functions; to what extent the functions are used.

DESIGN SPEC/TEST
  Functionality: How it will/does work.
  Performance: Specific performance costs.

Page 21: Performance Engineering


KEY POINTS IN PERFORMANCE ENGINEERING

Performance Engineering depends on a combination of verification and validation. For those of you who've forgotten this nuance, here's a brief review:

VALIDATION AND VERIFICATION

Validation Showing at project completion that the performance meets the stated goals.

Verification Showing at each stage in the development that the projected performance will meet the previously stated goals.

The costs to VALIDATE performance?

Establish performance goals.

Establish performance tests.

Schedule time for Performance Assurance to do their thing.

Schedule time to fix the performance.

The costs to VERIFY performance?

Establish performance goals.

Establish performance tests.

Schedule time for developers to conduct analysis and inspections.

Schedule time for Performance Assurance since no one will believe you've verified the performance.

Page 22: Performance Engineering


KEY POINTS IN PERFORMANCE ENGINEERING

Unambiguous

There should be no doubt as to what the goal means. It is no good saying "A will be the same as B" without saying what will be the same. Specify in terms of CPU time, IO's, etc. Specify also the environment that will be used.

Measurable

Every performance goal must have an associated measurement. The measurement must be defined as carefully as the goal because it is the measurement that will tell you that you have reached your goal. Avoid vague goals without well defined measurements; they will lead to unreasonable expectations being set for your design.

SETTING MEASURABLE PERFORMANCE OBJECTIVES
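To sketch what "measurable" can look like in practice, here is a goal of the form "95% of requests complete within 200 ms" paired with its measurement. The percentile, the threshold, and the sample data are illustrative assumptions, not goals from the course.

```python
# A goal is only useful with a defined measurement. This expresses the
# (assumed) goal "95% of requests finish within 200 ms" as a pass/fail check.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of measured response times."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100.0 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

def meets_goal(response_times_ms, goal_ms=200.0, pct=95):
    return percentile(response_times_ms, pct) <= goal_ms

# Made-up measurements from a hypothetical test run, in milliseconds.
times = [120, 130, 140, 150, 180, 190, 195, 210, 220, 500]
print(meets_goal(times))  # the 500 ms outlier is the 95th percentile here
```

Once the goal is written this way, there is no doubt what "done" means; the measurement and the environment that produced `times` travel with the goal.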

Page 23: Performance Engineering


KEY POINTS IN PERFORMANCE ENGINEERING

Metrics
There are an infinite number of ways to measure performance, many of them invalid, inaccurate, or just plain dumb. The problem lies in trying to state the performance of a complex system in simple terms.

We will concentrate on:

• Finding the most common paths/functions.

• Determining metrics for those paths.

• Defining tests to evaluate these metrics.

SETTING MEASURABLE PERFORMANCE OBJECTIVES

Page 24: Performance Engineering


PERFORMANCE ENGINEERING INSPECTIONS

Performance Inspections are a technique, very similar to Software Engineering inspections, for analyzing performance issues during the preparation of specifications.

The goal of inspections is to gather information needed to complete the performance documentation.

There's a mapping between:

REQUIREMENTS IN SPECS <===> QUESTIONS ON INSPECTIONS

What Are They?

Page 25: Performance Engineering


PERFORMANCE ENGINEERING INSPECTIONS

1. These inspections should be conducted in a formal way within one meeting.

2. There may well be questions generated that can only be answered by more thorough research.

3. Experience shows an inspection requires several hours, with a few more hours to resolve action items.

4. Be careful -- like any inspection, several people should be involved, including a dispassionate outsider.

5. Be careful -- it's very possible to get so mired in details that the whole performance business becomes an overwhelming burden.

Practical Aspects of Doing Inspections

Page 26: Performance Engineering


PERFORMANCE ENGINEERING INSPECTIONS

1. Whenever possible, make a guess. But clearly label your guess and talk about the assumptions going into it.

2. Software developers have a way of being overly detail-conscious when it comes to gathering performance numbers.

3. The specs themselves should contain answers to the questions posed here. When reviewing the document, those involved in the review should ensure that the questions are indeed answered.

4. In each of the following sections are questions that might be asked during inspections. Many others are also possible, especially those which delve into the details of the specific project.

Practical Aspects of Doing Inspections


Page 28: Performance Engineering


PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS

Determine the best and worst expectations for this product.

State the performance needed to meet marketing needs: this can range from "we must beat the competition" to "get it out no matter how slow it is". (As we've discussed, the second approach will come back to haunt you.)

What is the "drop dead" point - the performance below which the project shouldn't be done.

To determine a target at which we can aim later.

OVERALL GOALS AT THE REQUIREMENTS LEVEL:

Page 29: Performance Engineering


PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS

1. What metrics matter.

2. What are the current competitors' products and what performance do they achieve (or suffer)?

3. The current products you produce and the performance they achieve. NOTE: there is ALWAYS a comparable product against which the performance of a new product should be compared; NO ONE creates totally new product lines, companies merely extend existing ones.

4. Overall performance goals. In order to be a viable product, what are the maximum resources that can be used.

WHAT PERFORMANCE ITEMS SHOULD BE IN THE REQUIREMENTS?

Page 30: Performance Engineering


PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS

1. Placement in the market:

• What are the expected/potential performance wins in the new product.

• What are the expected/potential performance pitfalls in the new product. At this point, there is little need for detail on how to combat the problems - identification is enough.

2. Stretching the limits: Where will the performance of your company and of its competitors be in 1 year / 2 years?

3. Into what environment/market will this product be sold? What other applications will be run on the machine? What machine resources are available for this product?

WHAT PERFORMANCE ITEMS SHOULD BE IN THE REQUIREMENTS?

Page 31: Performance Engineering


PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS

1. From Inspections (see the next section.)

2. Input comes from marketing and from looking around.

3. Determining expectations. Expectations are set based on:
• Marketing
• Observing the competition
• Baseline of the previous product
• The "field"

4. Setting general performance goals. Goals should be determined by, and expressed in terms of:
• Customer satisfaction
• Sales
• Benchmarks

5. How to gather statistics. This can also be seen as resolving general goals into metrics. A goal of "customers will be happy" is all fine and good, but it's difficult to measure. We need real concrete metrics (we'll know we've succeeded when we achieve these metrics.)

WHERE DOES THIS INFORMATION COME FROM?

Page 32: Performance Engineering


PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS

1. What is the current performance of competitors' products?

2. What is the current performance of your existing products? (When none exist, use close cousins.)

3. Based on 1 and 2, what's the minimum performance we need in order to achieve parity?

• This can be answered by: "as fast as Compaq", "20% better than today", etc.
• If the number is answered qualitatively rather than quantitatively, how can a more solid number be obtained (and who will get it)?

4. In order to meet these minimum performance requirements, is it acceptable to use the entire machine’s resources?

QUESTIONS TO USE ON A REQUIREMENTS INSPECTION

Page 33: Performance Engineering


PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS

5. What performance problems/successes did the competition encounter when introducing the comparison product? What performance problems/successes did you encounter when introducing the comparison product?

6. These are "looking ahead" type goals:

• To be a force in the market, what performance do we need?
• What performance increment would be required to open new markets?

7. There are other types of questions asking about environments:

• What fraction of a module can be used to produce this performance? (What other work must the machine carry on?)

• How will customers be using this product; what are typical scenarios?

QUESTIONS TO USE ON A REQUIREMENTS INSPECTION

Page 34: Performance Engineering


PERFORMANCE ENGINEERING PROJECT PLAN/SCHEDULE

• Preparation of Performance components of specs. Analysis necessary to include performance components in the various documentation.

• Performance walkthroughs.

• Performance checkpoints; ensuring at each stage of the project that performance targets are being met.

• Final performance verification.

• Include time for performance enhancement - we still don't know how to get it right the first time.

Detailed schedules should include work items such as:

Page 35: Performance Engineering


PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION

• The goal of a functional spec is to define the interfaces of a product (that is, address environmental issues) and to describe how the user of the product will view that interface, without telling how the thing works.

• The performance portions of the spec have the same goal:

• We want to know who will call the function, and what will be the most common modes they will use -- we want to define the environment.

• Comparison with the MRD:

• Knowing the goals at the MRD level, it's possible now to set limits in terms of definable resources such as I/O and CPU.

• We want to determine ways to assure that we've been successful.

OVERALL GOALS AT THE FUNCTIONAL SPEC LEVEL INCLUDE:

Page 36: Performance Engineering


PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION

It is reasonable to expect the following performance information at this time:

1. Who will be calling this function? Approximately how many times per second will this function be called? Given the resource usage in item 2, what fraction of the system resources will be expended on this function?

TOTAL COST = COST PER REQUEST * TOTAL REQUESTS

• Having done this, you can answer:

• If you can't win on all the functions you've defined, which ones are the most important (must wins!)?

• Which situations provide big wins?

SLIGHTLY MORE DETAILED GOALS:
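The TOTAL COST identity above turns per-function guesses into system-level numbers. A hedged sketch follows; the function names, per-call costs, and call rates are invented for illustration, the kind of informed guesses the spec asks for.

```python
# TOTAL COST = COST PER REQUEST * TOTAL REQUESTS, applied per resource.
# Every figure below is an illustrative guess, not a measured number.

functions = {
    #           (CPU ms/call, disk I/Os per call, calls/sec)
    "open":     (0.8,         2,                  50),
    "read":     (0.2,         1,                  400),
    "close":    (0.5,         0,                  50),
}

for name, (cpu_ms, ios, rate) in functions.items():
    cpu_sec_per_sec = cpu_ms * rate / 1000.0  # seconds of CPU per wall second
    ios_per_sec = ios * rate                  # disk I/Os per second
    print(f"{name}: {cpu_sec_per_sec:.3f} CPU-sec/sec, {ios_per_sec} IO/sec")
```

Totals like these identify the "must win" functions: here "read" consumes the bulk of both budgets, so it is the path to defend in the spec.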

Page 37: Performance Engineering


PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION

2. Set performance goals for CPU, I/O, and memory. Though there is still no detailed information on resource usage, it is time for informed guesses. This means we expect answers in milliseconds, furlongs, accesses/sec, etc.

• Ultimately, you can estimate the final performance!

3. Here we divide up the total project and estimate how many resources each part will take. The mechanism defined in this functional spec, or in all the functional specs addressing an MRD, must be able to deliver the performance promised in the MRD!

4. How will success in meeting these goals be measured? A description of the necessary tools should be at the same level of detail as the functional spec itself.

SLIGHTLY MORE DETAILED GOALS:

Page 38: Performance Engineering


PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION

Your functional spec will normally defend decisions made: why one algorithm was chosen over another, why particular data is stored in this spot, etc. You should also include performance factors, defending decisions based on performance criteria.

REMEMBER - the philosophy here is to make estimates - no hard numbers make any sense at this point.

WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?

Page 39: Performance Engineering


PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION

1. What is (are) the most frequently used time-lines described in this spec?

• What really matters is the small amount of code that is frequently traveled. All other code can be ignored. Techniques for determining this are discussed.

• How do you gather this data? The best method is intuition. Sure, it's possible to go off and make lots of detailed measurements, but at this phase of the project such detail may not be possible. It's probably adequate to follow arguments such as: "This routine is used by every system call, therefore it is frequently used." or "This routine is called when opening a direct queue, so it happens less often."

• This item is designed simply to single out those routines meriting further investigation. We'll get more numerical detail later on.

• The remaining questions apply only to these often-used time-lines identified in item 1; all other time-lines can be ignored.

WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
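Item 1 above recommends intuition at spec time; once code exists, a profiler can confirm which paths really are hot. A sketch using Python's standard-library cProfile on a toy workload (the functions here are invented for illustration, not from the slides).

```python
import cProfile
import io
import pstats

def rarely_called():
    return sum(range(10))

def hot_path():
    # Called a thousand times more often -- the code worth optimizing.
    return sum(i * i for i in range(200))

def workload():
    for _ in range(1000):
        hot_path()
    rarely_called()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Report the top functions by cumulative time; hot_path dominates the output.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The profile plays the same role as the intuition argument: it singles out the small amount of frequently traveled code, so the remaining questions can be asked only about those time-lines.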

Page 40: Performance Engineering


PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION

2. When determining resource numbers, make sure you include the cost of calling routines at layers below those defined by this spec. If you don't know, guess.

• What "lower level" functions will be called by these time-lines? By "lower level" is meant functions called by the mechanism you are designing.

a. Estimate the CPU usage for the called functions.
b. Estimate the disk usage for the called functions.
c. Estimate the number of suspends for the called functions, and include the cost of doing suspends/reschedules in CPU usage.


3. Specific resource-usage numbers for CPU, memory and I/O. These numbers should be estimated for the most common time-lines in the most common environments. Where numbers are available from previous revs or from the competition, they should be included.

• For the high-usage time-lines described in your spec, estimate
a) CPU usage
b) Disk usage
c) Suspends/reschedules

• Based on the answers to questions 2 and 3, you can determine the total cost of executing your new high-usage functions.


4. How many times per second will these time-lines be called by higher level functions? This is an environment question; you may have figured this out already when you identified in question 1 that certain functions were "high-usage".

5. Based on #4, and the sum of #2 + #3, what fraction of the total system resources (utilization) are used by these time-lines?

6. What fraction of the resources called out in the MRD will be used by these time-lines?
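Questions 4 through 6 reduce to simple arithmetic: utilization is calls per second multiplied by cost per call. A minimal sketch, in which the 150 us cost, the 2,000 calls/sec rate, and the 40% MRD budget are all invented numbers for illustration:

```python
# Hypothetical numbers: a time-line costing 150 us of CPU per call,
# invoked 2,000 times per second on a single-CPU system.
cpu_per_call_s = 150e-6
calls_per_s = 2000

utilization = cpu_per_call_s * calls_per_s         # fraction of one CPU
print(f"CPU utilization: {utilization:.0%}")       # -> CPU utilization: 30%

# Question 6: compare against an assumed MRD budget of 40% CPU.
mrd_budget = 0.40
print(f"Fraction of MRD budget used: {utilization / mrd_budget:.0%}")  # -> 75%
```

If the fraction of the MRD budget comes out near or above 100%, that is the signal to revisit the design now, while it is still cheap to change.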


7. Checkpointing: When you add up all the time(s) in your most commonly used time-lines, did you get a number consistent with what you estimated in the MRD?

8. What are the metrics (what will you measure) in order to assure the performance given above? (Do NOT describe how to measure at this point.)

9. Describe in general terms how you expect to measure that these goals have been met. A description of the necessary methodology should be at the same level of detail as the functional spec itself.

WHERE DOES THIS INFORMATION COME FROM?

A lot of information has been requested here in order to meet the ultimate goal of determining the total resource usage of your product. Here are some of the places where you can find help in preparing numbers:

• The MRD.

• Previously known performance

• Previous products (how fast did this system call run in the last rev?)

• How fast can the competition do this operation?

• Benchmarks of system performance.

• Intuition.

• The philosophy that all the performance and resources must come from one pie: you can only cut it so many ways; pies and resources are both finite.

QUESTIONS TO USE ON A FUNCTIONAL SPEC WALKTHROUGH

0. What other algorithms were considered, and why was this one determined to be the best for performance?

The philosophy here is to make guesses - no hard numbers make any sense at this point.

The use of "time-lines" is explained in the unit on Design Strategies.

1. What is (are) the most frequently used time-lines described in this spec?

THE REMAINING QUESTIONS apply only to these often-used time-lines; all other time-lines can be ignored.


2. What lower level functions will be called by these time-lines?

• Estimate the time for CPU usage for the called functions.

• Estimate the time for disk usage for the called functions.

• Estimate the time spent in interrupts resulting both directly and indirectly from this function.

• Estimate the number of suspends for the called functions, and include the time for doing suspends/reschedules.

• Estimate the amount of time a lock will be held by this function, and thus the percentage contention on the lock. Include this contention in your time-line.

3. For the high-usage time-lines themselves, as described in your spec, estimate

• CPU usage

• Disk usage

• Suspends/reschedules
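The lock-contention bullet in question 2 above is likewise just arithmetic: the fraction of time a lock is busy is its hold time multiplied by the acquisition rate. Both figures below are hypothetical spec-time guesses:

```python
# Hypothetical estimates for a lock taken inside a high-usage time-line.
hold_time_s = 20e-6       # lock held 20 us per acquisition (a guess)
acquisitions_per_s = 5000

# Fraction of time the lock is held; with roughly random arrivals this also
# approximates the chance another caller finds the lock busy (contention).
lock_busy_fraction = hold_time_s * acquisitions_per_s
print(f"Lock busy {lock_busy_fraction:.0%} of the time")  # -> Lock busy 10% of the time
```

A busy fraction this size is worth carrying into the time-line as added delay; a fraction approaching 100% means the lock itself is the bottleneck.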


4. How many times per second will these time-lines be called by higher level functions?

5. Based on #4, and the sum of #2 + #3, what fraction of the total system resources (utilization) are used by this time-line?

6. What fraction of the resources called out in the MRD will be used by these time-lines?

7. When you add up all the resources, do they equal what was specified in the MRD?

8. What are the metrics (what will you measure) in order to assure the performance given above? (Do NOT describe the details of measurement at this point; remember, this is a functional level.)

PERFORMANCE ENGINEERING DESIGN SPECIFICATION

OVERALL GOALS AT THE DESIGN SPEC LEVEL

This is where you should be able to make detailed estimates, and where you have a real chance to confirm that the numbers you've been guessing are real. At this point in the design, you should be able to make very concrete assumptions.

Again, you can roll the detailed numbers you get back into the functional spec and requirements. Will the product perform as required? Now you know.

QUESTIONS TO USE ON A DESIGN SPEC INSPECTION

REMEMBER - the philosophy here is to get numbers. These numbers should be as accurate as possible, but the code isn't written yet, so the data can only be a best guess.

NOTE ALSO - the methodology is the same as used at Functional Spec level.

0. What metrics matter?

1a. Are the most-used time-lines the same as they were in the functional spec? If not, or none were defined, what are they?

1b. What are the low level library routines that are important in this design? Identify those routines that have a large fan-in.


THE FOLLOWING QUESTIONS APPLY ONLY TO THE HEAVILY USED PATHS.

2. Determine the low level functions, in other components, called by your time-lines. These are routines subsidiary to those in the spec. What are the costs of using these functions? As before, these costs include

• CPU usage
• Disk usage
• Suspends/reschedules
• Other

3. Also calculate the CPU costs incurred in your own routines. This means estimating the total lines of code you'll execute.

Do these calculations for both library routines and often-used time lines, though the library routine work is meant mainly to raise red flags.
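One rough way to turn "total lines of code you'll run" into a CPU cost is to assume an average cycles-per-line figure and divide by the clock rate. Every number below (line count, cycles per line, clock speed) is an assumed placeholder, not a measured constant:

```python
# Path-length estimate for your own code in a high-usage time-line.
lines_executed = 400      # estimated lines of code on the hot path
cycles_per_line = 5       # rough average; machine- and compiler-dependent
clock_hz = 2.0e9          # assumed 2 GHz CPU

cpu_seconds = lines_executed * cycles_per_line / clock_hz
print(f"Own-code CPU cost: {cpu_seconds * 1e6:.1f} us per call")
# -> Own-code CPU cost: 1.0 us per call
```

The cycles-per-line figure is the weakest assumption here; calibrate it against a measured routine from a previous rev as soon as one is available.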


4. What is the frequency of calling the high level often-used routines and also the frequency for the library routines?

5. Based on #4, and the sum of #2 + #3, what fraction of the total system resources (utilization) are used by these time-lines?

6. What fraction of the resources called out in the Functional Spec will be used by these time-lines?


7. Checkpointing: When you add up all the time(s) in your most commonly used time-lines, did you get a number consistent with what you estimated in the Functional Spec?

8. Are there portions of the high-use functions that would benefit from being written in assembler?

9. What kind of performance tests will be used? At the design level these tests should be fairly specific. The goal is to build measurements that will look at the most used paths - these are not paranoid QA tests. Specifically, how do these tests measure the metrics you consider important?
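A performance test at this level can be as simple as a timing harness wrapped around the most-used path. The sketch below assumes a stand-in hot_path function; in a real test it would be the product's actual high-usage entry point:

```python
import time

def hot_path():
    """Stand-in for the most-used time-line; replace with the real entry point."""
    sum(range(1000))

def measure(fn, iterations=10000):
    """Average wall-clock latency per call of fn over many iterations."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations

per_call = measure(hot_path)
print(f"hot_path: {per_call * 1e6:.2f} us per call")
```

Averaging over many iterations smooths scheduler noise; the measured per-call time is exactly the metric the design spec estimated, so the two numbers can be compared directly.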


CONCLUSION

This section has laid out a detailed methodology for assuring that a product being developed will have the performance required of it when it's completed.