
Technical University of Darmstadt

J2EE Performance and Scalability – From Measuring to Predicting

Samuel Kounev
Databases and Distributed Systems Group, TU-Darmstadt

SPEC-OSG Java Subcommittee

SPEC Benchmark Workshop 2006

Motivation (MEASURING vs. PREDICTING)

J2EE applications are becoming increasingly ubiquitous.

Quality-of-service requirements are of crucial importance!

System architects and deployers are faced with questions such as:

- What performance would the application exhibit and how much hardware would be needed to meet SLAs?
- How do we ensure that the application design does not have any inherent scalability bottlenecks?
- Which platform would provide the best cost/performance ratio for a given application?
- How do we ensure that the selected platform does not have any inherent scalability bottlenecks?

Roadmap & Agenda

We present a case study with SPECjAppServer2004, using the benchmark as a representative J2EE application, and introduce a systematic approach for performance prediction.

Agenda:

Overview of SPECjAppServer2004

Approach and methodology for performance prediction

Case study of SPECjAppServer2004

Validation of the approach

Summary and conclusions


SPECjAppServer2004 Business Model

[Diagram: the SPECjAppServer2004 business model comprises the Corporate, Customer, Dealer, Manufacturing, and Supplier domains, together with external dealers and suppliers.]


Sample Deployment Environment (Sun)


Approaches to Performance Prediction

Educated Guess
+ Quick, easy and cheap.
- Very inaccurate and risky.

Performance Modeling
+ Often cheaper and quicker than load testing. Could be applied at the design stage.
- Extremely complex. Accuracy depends on how representative the models are.

Load Testing (brute force)
+ Accurate. Helps to identify bottlenecks and fine-tune the system prior to production.
- Expensive and time-consuming. Assumes system availability for testing.


Modeling Approach

[Diagram: a queueing place, consisting of a queue and a depository.]

- Queueing Petri Net (QPN) models are used.
- QPNs combine queueing networks and Petri nets.
- They allow the integration of queues into the places of Petri nets.
- Ordinary vs. queueing places: a queueing place = queue + depository.
- Benefits of using QPNs:
  - Excellent modeling power and expressiveness.
  - Can be exploited to accurately model both hardware and software aspects of system behavior (ISPASS'03).
  - The knowledge of the structure of QPNs can be exploited for efficient analysis by means of simulation (Performance Evaluation'06).
  - Can be used to combine qualitative and quantitative system analysis.
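
To make the queue-plus-depository idea concrete, the following minimal Java sketch shows one possible in-memory representation of these building blocks. The class and field names are hypothetical and the scheduling strategy of the embedded queue is left abstract; this is an illustration, not the tooling used in the study.

    // Minimal sketch of the QPN building blocks above (hypothetical names).
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;

    // A token circulating in the net; the color distinguishes token/request types.
    record Token(String color) {}

    // An ordinary Petri net place: deposited tokens are immediately available
    // to the place's output transitions.
    class OrdinaryPlace {
        final List<Token> tokens = new ArrayList<>();
    }

    // A queueing place = queue + depository: an arriving token is first served
    // by the embedded queue (e.g. a CPU or disk) and only after service
    // completion moves to the depository, where output transitions can consume it.
    class QueueingPlace {
        final Queue<Token> queue = new ArrayDeque<>();     // waiting or in service
        final List<Token> depository = new ArrayList<>();  // service completed

        void arrive(Token t) { queue.add(t); }

        void serviceCompleted() {
            Token done = queue.poll();
            if (done != null) depository.add(done);
        }
    }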


Performance Modeling Methodology

1. Establish performance modeling objectives.

2. Characterize the system in its current state.

3. Characterize the workload.

4. Develop a performance model.

5. Validate, refine and/or calibrate the model (a minimal validation check is sketched after this list).

6. Use model to predict system performance.

7. Analyze results and address modeling objectives.
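
As an illustration of step 5, the model is usually accepted only once the predicted metrics lie within an acceptable relative error of the measured ones; otherwise it is refined or calibrated. The following minimal Java sketch computes such errors; the class name, example values and the 20% threshold are hypothetical.

    // Sketch of the validation check in step 5. The example values and
    // the 20% threshold are hypothetical and for illustration only.
    public class ModelValidation {

        // Relative error of a predicted metric with respect to its measured value.
        static double relativeError(double predicted, double measured) {
            return Math.abs(predicted - measured) / measured;
        }

        public static void main(String[] args) {
            double measuredThroughput = 29.1, predictedThroughput = 28.4;     // requests/s (hypothetical)
            double measuredResponseTime = 62.0, predictedResponseTime = 71.0; // ms (hypothetical)

            double xErr = relativeError(predictedThroughput, measuredThroughput);
            double rErr = relativeError(predictedResponseTime, measuredResponseTime);

            System.out.printf("Throughput error:    %.1f%%%n", 100 * xErr);
            System.out.printf("Response time error: %.1f%%%n", 100 * rErr);

            // If the error is unacceptable, refine and/or calibrate the model (step 5)
            // before using it for prediction (step 6).
            double acceptable = 0.20; // illustrative threshold
            if (xErr > acceptable || rErr > acceptable) {
                System.out.println("Model not yet representative: refine or calibrate.");
            }
        }
    }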


"Performance Engineering of Distributed Component-Based Systems – Benchmarking, Modeling and Performance Prediction", Shaker Verlag, 2006, ISBN 3832247130.

Case Study: Modeling SPECjAppServer


[Diagram: case study deployment – dealers and suppliers access the system over the Internet via HTTP; an HTTP load balancer (1 x AMD XP 2000+ CPU, 1 GB RAM) distributes requests across a WebLogic 8.1 cluster (each node with 1 x AMD XP 2000+ CPU, 1 GB RAM); the application servers access an Oracle 9i database server (2 x AMD MP 2000+ CPUs, 2 GB RAM) via JDBC; the machines are connected by a 1 GBit LAN.]

Modeling Objectives

Normal Conditions: 72 concurrent dealer clients (40 Browse, 16 Purchase, 16 Manage) and 50 planned production lines in the manufacturing domain.

Peak Conditions: 152 concurrent dealer clients (100 Browse, 26 Purchase, 26 Manage) and 100 planned production lines in the manufacturing domain. (Both scenarios are encoded in the sketch after the goal list below.)

Goals:

• Predict system performance under normal operating conditions with 4 and 6 application servers.

• Predict how much system performance would improve if the load balancer is upgraded with a slightly faster CPU.

• Study the scalability of the system as the workload increases and additional application server nodes are added.

• Determine which servers would be most utilized under heavy load and investigate if they are potential bottlenecks.
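
For illustration, the two scenarios above can be encoded as simple workload descriptors that a model parameterization or load driver could consume. The client counts are taken from the slide; the record and field names are hypothetical and not part of the benchmark kit.

    // Hypothetical descriptors for the two modeled workload scenarios;
    // the numbers come from the slide above, the names are illustrative.
    public class WorkloadScenarios {

        record Scenario(String name, int browseClients, int purchaseClients,
                        int manageClients, int plannedProductionLines) {
            int totalDealerClients() {
                return browseClients + purchaseClients + manageClients;
            }
        }

        static final Scenario NORMAL = new Scenario("normal", 40, 16, 16, 50);
        static final Scenario PEAK   = new Scenario("peak", 100, 26, 26, 100);

        public static void main(String[] args) {
            for (Scenario s : new Scenario[] {NORMAL, PEAK}) {
                System.out.printf(
                    "%s: %d dealer clients (%d Browse, %d Purchase, %d Manage), %d production lines%n",
                    s.name(), s.totalDealerClients(), s.browseClients(),
                    s.purchaseClients(), s.manageClients(), s.plannedProductionLines());
            }
        }
    }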


Performance Model


Performance Prediction


Performance Prediction (2)


Performance Prediction (3)

[Charts: scenarios with 150 Browse clients and 200 Browse clients.]

Results Analysis


[Chart: utilization (0–100) of LB-C, AS-C, and DB-C (load balancer, application server, and database server CPUs) for the scenarios 4AS/NORMAL, 6AS/NORMAL, 6AS/PEAK/ORIG. LB, 6AS/PEAK/UPG. LB, 8AS/HEAVY 1, and 8AS/HEAVY 2.]
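
Predicted utilizations such as those in the chart above can be sanity-checked with the standard utilization law U = X · D (throughput times service demand per request), a textbook relation rather than anything specific to this study. A minimal Java sketch with purely hypothetical numbers:

    // Sketch of the utilization law U = X * D; the numbers are
    // hypothetical and not taken from the study.
    public class UtilizationCheck {

        // X = throughput at the resource (requests/s), D = service demand per request (s).
        static double utilization(double throughputPerSec, double serviceDemandSec) {
            return throughputPerSec * serviceDemandSec;
        }

        public static void main(String[] args) {
            double throughput = 30.0;     // requests/s hitting one app server node (hypothetical)
            double serviceDemand = 0.020; // seconds of CPU time per request (hypothetical)
            System.out.printf("CPU utilization: %.0f%%%n", 100 * utilization(throughput, serviceDemand));
        }
    }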

Summary and Conclusions


Studied a representative J2EE application and predicted its performance under realistic load conditions.

The modeling error for response time did not exceed 21.2% and was much lower for throughput and utilization.

QPN models can be exploited for accurate performance prediction in realistic scenarios.

The proposed methodology provides a powerful tool for sizing and capacity planning.

Performance prediction is hard, but it’s possible!

Acknowledgements


Kaivalya Dixit (in memoriam), SPEC's Co-founder, Former SPEC President and Chair of the Java Subcommittee.

All members of the OSG Java Subcommittee, eSPECially:

John Stecher, IBM

Tom Daly and Akara Sucharitakul, Sun Microsystems

Ricardo Morin, Intel

Steve Realmuto, BEA

Thank You for Your Patience

Questions?


Enjoy your lunch!
