
webMethods Process Engine 8.2 with Windows OS
Performance Technical Report

April 2011



Table of Contents

1.0 INTRODUCTION
2.0 BENCHMARK GOALS
3.0 HARDWARE AND SOFTWARE INFORMATION
4.0 DEPLOYMENT ARCHITECTURE DIAGRAM
5.0 TEST HARNESS
6.0 TEST SETUP AND TEST CONDITIONS
6.1 Quality of Service Configurations
6.1.1 Low Quality of Service (LQoS)
6.1.1.1 LQoS Process Properties
6.1.1.2 Logging Settings
6.1.1.3 Trigger Settings
6.1.2 Medium Quality of Service (MQoS)
6.1.2.1 MQoS Process Properties
6.1.2.2 Logging Settings
6.1.2.3 Trigger Settings
6.1.3 High Quality of Service (HQoS)
6.1.3.1 HQoS Process Properties
6.1.3.2 Logging Settings
6.1.3.3 Trigger Settings
6.2 JDBC Pool Settings
6.3 Document Types
6.4 Process Step Models
7.0 BENCHMARK SCENARIOS
7.1 Document Size Variation
7.2 Process Length
7.3 Quality of Service
8.0 CONCLUSION
8.1 Performance Considerations
9.0 APPENDIX
9.1 Terminology
9.2 Product Tuning
9.3 OS/HW Tuning
9.4 Build and Fixes
9.5 What this Report Does Not Cover


1.0 INTRODUCTION

This technical report is one in a series that defines and measures synthetic benchmarks representative of how the webMethods 8.2 Suite is used in the field. This report focuses on the performance of the Process Engine component.

The intended audiences are application architects, consultants, developers, and managers involved in capacity planning. This document by itself does not facilitate capacity planning; it only shows the relative performance of the core components of the Software AG platform and demonstrates many of the operations that applications will perform. This information can be used to get a sense of the capacity of various types of hardware platforms.


2.0 BENCHMARK GOALS

This report focuses on measuring the performance of Process Engine (PE) running on Integration Server. The primary goal of this benchmark is to evaluate the effect on throughput when factors such as document size, quality of service (QoS), and number of processing steps are varied.

The path to this goal consists of the following steps:

1. Measure and compare the throughput on different process model lengths.

2. Measure and compare the throughput between different quality of service options.

3. Identify the solution’s scaling factor by gradually increasing the document size to see if the dependency is linear (a minimal illustration of this check follows the list).
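As an illustration of step 3, the following sketch is one way such a linearity check could be scripted. It is not part of the benchmark tooling; the throughput values are taken from the LQoS row of Figure 3 later in this report.

// Hypothetical helper, not part of the benchmark harness: compares how throughput
// drops as document size grows, to judge whether the dependency is roughly inverse-linear.
public class ScalingCheck {
    public static void main(String[] args) {
        // Document sizes in KB and LQoS throughput (process instances/sec) from Figure 3.
        double[] sizesKb    = {1, 10, 50, 200, 500, 1000};
        double[] throughput = {7752, 2156, 543, 171, 70.6, 34.3};

        for (int i = 1; i < sizesKb.length; i++) {
            double sizeRatio = sizesKb[i] / sizesKb[i - 1];       // how much larger the document got
            double tputRatio = throughput[i - 1] / throughput[i]; // how much throughput dropped
            // For a perfectly inverse-linear dependency, tputRatio would equal sizeRatio.
            System.out.printf("%.0f KB -> %.0f KB: size x%.1f, throughput /%.1f%n",
                    sizesKb[i - 1], sizesKb[i], sizeRatio, tputRatio);
        }
    }
}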

Capacity planning needs:

You can use the resource utilization statistics in this report to weigh design-time choices and provide input into the hardware selection process.


3.0 HARDWARE AND SOFTWARE INFORMATION

All tests used two Integration Server instances, one Broker Server, and one Oracle 11g server. This section identifies the hardware and software resources used to conduct the tests.

Hardware Resource Details:

Server | OS | Hardware Type | Processor | RAM | Disk
Integration Server / Process Engine Server | Windows Server 2003 | DELL PowerEdge 2950 | Intel® Xeon® E5420 (2 quad-core processors), 2 x 2.5 GHz | 24 GB | SAS, RAID 5, 15k RPM, 400 GB, DELL PERC 6/i SCSI
Broker Server / Integration Server | Windows Server 2003 | DELL PowerEdge 2950 | Intel® Xeon® E5420 (2 quad-core processors), 2 x 2.5 GHz | 24 GB | SAS, RAID 5, 15k RPM, 400 GB, DELL PERC 6/i SCSI
Oracle 11g + Load Generator | Windows Server 2003 | DELL PowerEdge 2950 | Intel® Xeon® E5420 (2 quad-core processors), 2 x 2.5 GHz | 24 GB | SAS, RAID 5, 15k RPM, 400 GB, DELL PERC 6/i SCSI

The servers used 2 Gigabit Ethernet (over copper).

Two of the servers hosted the Integration Server and the Broker Server. The third server hosted the Oracle 11g database and was used as a load generator.

Software Information:

• Integration Server Version & Build number: 8.2.1.0.308

• Broker Server Version & Build number: 8.2.1.0.89

• Process Engine Version & Build number: 8.2.1.0.139

• Oracle 11g


4.0 DEPLOYMENT ARCHITECTURE DIAGRAM

The following diagram shows the test harness deployment architecture for all tests:

Figure 1: Deployment architecture for Process Engine tests


(Diagram components: Load Generator, Integration Server (Process Engine), Broker Server, Oracle DB; the load generator drives the test load and collects test and OS metrics.)


5.0 TEST HARNESS

Performance benchmarking involves saturating the system under test by loading it to its maximum stable condition and gathering performance data at that level. In this case, one load generator was used to start the tests and to collect the utilization metrics. The load generator was monitored throughout and was never the bottleneck of the system.

The tools/utilities used to perform these tests were:

• Integration Server built-in services

• Silk Performer 2008 R2

• perfmon

• jstat

Custom Integration Server (IS) services were used to trigger the tests. In order to isolate and measure only the Process Engine performance, the tests were executed as described below:

• Using a built-in service, documents were published from one Integration Server to another Integration Server that was hosting the Process Engine.

• Once the documents were picked up by the corresponding subscription trigger, process instances were started.

The tests were executed in a controlled environment, and the system (or systems) under test did not run any applications other than the application being tested. The throughput metrics were calculated using timestamps at the beginning and end of each test.
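As a rough illustration of how such a throughput metric can be derived, the following is a generic sketch, not the actual test-harness code; the class and variable names are invented for the example.

import java.time.Duration;
import java.time.Instant;

// Generic sketch of the throughput calculation described above: record timestamps
// at the start and end of a test run and divide the number of completed process
// instances by the elapsed time.
public class ThroughputSample {
    public static void main(String[] args) throws InterruptedException {
        long completedInstances = 0;
        Instant start = Instant.now();

        // Placeholder for the real work: publishing documents and waiting
        // until all resulting process instances have completed.
        for (int i = 0; i < 1_000; i++) {
            Thread.sleep(1); // stand-in for one completed process instance
            completedInstances++;
        }

        Instant end = Instant.now();
        double elapsedSeconds = Duration.between(start, end).toMillis() / 1000.0;
        double instancesPerSecond = completedInstances / elapsedSeconds;
        System.out.printf("Throughput: %.1f process instances per second%n", instancesPerSecond);
    }
}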


6.0 TEST SETUP AND TEST CONDITIONS

Performance benchmarking tests were carried out in a controlled environment. No other applications were running during the tests except the necessary commands and processes to capture resource utilization. Tests were conducted by varying the following parameters:

• Document size

• Document type

• Quality of service

• Process length

6.1 Quality of Service Configurations

Three different quality of service options were used in the tests:

• Low quality of service (LQoS) provides the highest throughput but no failover capability.

• Medium quality of service (MQoS) provides a balanced configuration that delivers good throughput while still protecting data.

• High quality of service (HQoS) was used when it was vital to persist data such as the document data, the execution of process steps, and the transitions between steps. This ensures that the system can be restored at any given point in time in case of failure.

6.1.1 Low Quality of Service (LQoS)

6.1.1.1 LQoS Process Properties

Parameter Name | Value
Optimize locally | Checked
Express pipeline | Checked
Volatile transition documents | Checked
Volatile tracking | Checked
Local correlation | Checked
Logging level | None

Important: RESUBMISSION was disabled on all process steps in the My webMethods Server.

6.1.1.2 Logging Settings

Logger List | Enabled
Error logger | No
Process logger | No
Service logger | No

6.1.1.3 Trigger Settings

• Subscription trigger:

Parameter Name | All Process Models | 20-Step Business Process Model
Capacity | 500 | 500
Refill level | 400 | 400
Acknowledgment queue size | 200 | 300
Max execution threads | 200 | 300

• Transition trigger:

Parameter Name | All Process Models | 20-Step Business Process Model
Capacity | 100 | 100
Refill level | 90 | 90
Acknowledgment queue size | 100 | 100
Max execution threads | 100 | 100

6.1.2 Medium Quality of Service (MQoS)

6.1.2.1 MQoS Process Properties

Parameter Name | Value
Optimize locally | Checked
Express pipeline | Checked
Volatile transition documents | Unchecked
Volatile tracking | Unchecked
Local correlation | Checked
Logging level | Error Only

Important: RESUBMISSION was enabled for every third process step (Step 1, Step 4, Step 7, and so on) in the My webMethods Server.

6.1.2.2 Logging Settings

Logger List | Enabled | Mode | Guaranteed | Destination
Error logger | Yes | Synchronous | Yes | Database
Process logger | Yes | Synchronous | Yes | Database
Service logger | Yes | Synchronous | Yes | Database

6.1.2.3 Trigger Settings

• Subscription trigger:

Parameter Name | All Process Models | 20-Step Business Process Model
Capacity | 300 | 300
Refill level | 200 | 250
Acknowledgment queue size | 150 | 100
Max execution threads | 150 | 100

• Transition trigger:

Parameter Name | All Process Models | 20-Step Business Process Model
Capacity | 100 | 100
Refill level | 90 | 90
Acknowledgment queue size | 100 | 100
Max execution threads | 100 | 100

6.1.3 High Quality of Service (HQoS)

6.1.3.1 HQoS Process Properties

Parameter Name | Value
Optimize locally | Unchecked
Express pipeline | Checked
Volatile transition documents | Unchecked
Volatile tracking | Unchecked
Local correlation | Checked
Logging level | Process And All Steps

Important: RESUBMISSION was enabled for every process step in the My webMethods Server.


6.1.3.2 Logging Settings

Logger List | Enabled | Mode | Guaranteed | Destination
Error logger | Yes | Synchronous | Yes | Database
Process logger | Yes | Synchronous | Yes | Database
Service logger | Yes | Synchronous | Yes | Database

6.1.3.3 Trigger Settings

• Subscription trigger:

Parameter Name | 10-Step Process Model | 20-Step Process Model | 20-Step Business Process Model | 40-Step Process Model
Capacity | 10 | 10 | 10 | 10
Refill level | 9 | 9 | 4 | 4
Acknowledgment queue size | 2 | 1 | 1 | 1
Max execution threads | 2 | 1 | 1 | 1

• Transition trigger:

Parameter Name | 10-Step Process Model | 20-Step Process Model | 20-Step Business Process Model | 40-Step Process Model
Capacity | 100 | 100 | 100 | 100
Refill level | 80 | 80 | 90 | 80
Acknowledgment queue size | 20 | 30 | 100 | 40
Max execution threads | 20 | 30 | 100 | 40

6.2 JDBC Pool Settings

Two JDBC pools were created for the corresponding functional aliases:

• ProcessAudit

• ProcessEngine

The settings for the ProcessAudit pool were as follows:

Parameter Name | Value
Minimum connections | 2
Maximum connections | 8
Idle timeout | 120000


The settings for the ProcessEngine pool were as follows:

Parameter Name | Value
Minimum connections | 10
Maximum connections | 64
Idle timeout | 120000

6.3 Document Types

Two document types were used as process input documents:

• Guaranteed documents are stored on the hard disk right after they are published.

• Volatile documents are stored in memory right after they are published.

6.4 Process Step Models

This report shows results obtained for ten-, twenty-, and forty-step models, all of which work as follows:

1. Receive document.

2. Map to intermediate document.

3. Pass to final step.

Figure 2 shows a five-step model as an example. The ten-, twenty-, and forty-step models follow the same procedure; only the number of mapping steps varies.

Figure 2: Example five-step process model


(Figure 2 steps: Receive Step, Map 2, Map 3, Map 4, Echo)
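To make the synthetic workload concrete, the following sketch is an illustration only; the real models are webMethods process models, not Java code. It mimics a model in which a received document flows through a configurable number of simple mapping steps before reaching the final step.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of the synthetic workload: a document is received, passed through
// N "map to intermediate document" steps, and handed to a final step.
public class StepModelSketch {

    static Map<String, Object> mapStep(Map<String, Object> doc, int stepNumber) {
        Map<String, Object> intermediate = new LinkedHashMap<>(doc);
        intermediate.put("lastStep", stepNumber); // trivial mapping, no business logic
        return intermediate;
    }

    public static void main(String[] args) {
        int mappingSteps = 8; // e.g. a 10-step model: receive + 8 maps + final step
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("payload", "10KB-document-placeholder");

        for (int step = 1; step <= mappingSteps; step++) {
            doc = mapStep(doc, step);
        }
        System.out.println("Final step received: " + doc);
    }
}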


7.0 BENCHMARK SCENARIOS

7.1 Document Size Variation

This section demonstrates how the system behaved when loaded with documents of different sizes (1 KB, 10 KB, 50 KB, 200 KB, 500 KB, and 1 MB). All qualities of service were used for this set of tests.

A 10-step process model was chosen for the tests. LQoS used volatile document types, while MQoS and HQoS used guaranteed document types. Results are shown in Figure 3.

Document Size Variation (process instances per second):

QoS | 1 KB | 10 KB | 50 KB | 200 KB | 500 KB | 1000 KB
LQoS | 7752 | 2156 | 543 | 171 | 70.6 | 34.3
MQoS | 347.2 | 344 | 161 | 52 | 23.95 | 11.1
HQoS | 65.7 | 47 | 7.57 | 4.7 | 2.08 | 1.03

Figure 3: Process instances per second as a function of the document size


Process Engine CPU, Database HDD, and Database CPU Utilizations for Document Size Variation:

IS CPU Utilization (Document Size Variation), in %:

QoS | 1 KB | 10 KB | 50 KB | 200 KB | 500 KB | 1000 KB
LQoS | 64 | 39 | 22 | 23 | 22 | 29
MQoS | 95 | 97 | 97 | 80 | 80 | 80
HQoS | 53 | 45 | 14 | 36 | 37 | 32

Figure 4: CPU utilization as a function of the document size

DB Disk Avg. Queue Length (Document Size Variation):

QoS | 1 KB | 10 KB | 50 KB | 200 KB | 500 KB | 1000 KB
LQoS | n/a | n/a | n/a | n/a | n/a | n/a
MQoS | 0.25 | 0.24 | 0.65 | 0.18 | 1.34 | 1.17
HQoS | 6 | 9.62 | 4.35 | 1.432 | 2.384 | 2.413

Figure 5: DB HDD utilization as a function of the document size


DB CPU Utilization (Document Size Variation), in %:

QoS | 1 KB | 10 KB | 50 KB | 200 KB | 500 KB | 1000 KB
LQoS | n/a | n/a | n/a | n/a | n/a | n/a
MQoS | 85 | 80 | 47 | 10 | 5 | 3
HQoS | 62 | 50 | 10 | 23 | 20 | 17

Figure 6: DB CPU utilization as a function of the document size

Interpreting Test Results:

As Figure 3 shows, throughput was higher for smaller documents in all cases. As the document size increased, throughput degraded for all qualities of service. LQoS provided the best throughput for all document sizes because it did not require data to be stored in the database.

The Broker Server CPU was never utilized above 70% during the LQoS tests or above 20% during the MQoS and HQoS tests. The highest Broker hard disk load (an average disk queue length of 0.55) occurred during the HQoS tests.

For LQoS, the overall throughput of the system depended on the efficiency of the subscription trigger, because there was no business logic behind the remaining steps in the process model and all transitions were very fast. With larger documents the subscription step became slower; therefore, CPU consumption for LQoS declined.

The HQoS tests with small documents (1 KB and 10 KB) were limited by the database hard disk being overloaded: for each process instance, the full process-related data (step executions and transitions) had to be stored in the database. Processing larger documents was limited by the maximum rate at which the transition trigger could consume documents from its Broker queue.


7.2 Process Length

Figures 7 through 10 demonstrate differences in performance with respect to the number of steps within a process model. Throughput was measured for processes with 10, 20, and 40 steps. Tests were run for all three qualities of service. The document size was constant (10 KB) in all cases.

Process Model Length (process instances per second):

QoS | 10 steps | 20 steps | 40 steps
LQoS | 2156 | 2136 | 2118
MQoS | 344 | 183 | 82
HQoS | 47 | 22.2 | 12.5

Figure 7: Process instances per second as a function of the process length


Process Engine CPU, Database HDD, and Database CPU Utilizations for Process Length Variation:

IS CPU Utilization (Process Model Length), in %:

QoS | 10 steps | 20 steps | 40 steps
LQoS | 39 | 39 | 45
MQoS | 97 | 55 | 82
HQoS | 45 | 33 | 45

Figure 8: PE CPU utilization as a function of the process length

DB Disk Avg. Queue Length (Process Model Length):

QoS | 10 steps | 20 steps | 40 steps
LQoS | n/a | n/a | n/a
MQoS | 0.24 | 0.72 | 1.75
HQoS | 9.62 | 4.5 | 1.02

Figure 9: DB HDD utilization as a function of the process length


DB CPU Utilization (Process Model Length), in %:

QoS | 10 steps | 20 steps | 40 steps
LQoS | n/a | n/a | n/a
MQoS | 80 | 45 | 49
HQoS | 50 | 35 | 50

Figure 10: DB CPU utilization as a function of the process length

Interpreting Test Results:

Figure 7 shows that for LQoS the throughput was almost constant. This is because there was no overhead from the transition steps: all transitions were processed in memory and no business logic was involved. As stated earlier, the overall throughput of the system for LQoS depended on the efficiency of the subscription step; therefore, throughput measured in steps per second improved as the number of regular steps per subscription step increased.

The throughput for HQoS decreased linearly as the process model length grew. As in the previous scenario, the throughput for HQoS was limited by the database.

For MQoS with smaller process models, the Process Engine was limited by the maximum level of concurrency that the subscription trigger could achieve while receiving documents from the Broker Server. Because the process length was shorter, more input documents had to be consumed concurrently, which kept the system from achieving higher throughput.

The Broker Server CPU was never utilized above 55% during the tests. As in the previous section, the highest Broker Server hard disk load (an average disk queue length of 0.45) occurred during the HQoS tests. Network utilization was not a bottleneck for the system.


7.3 Quality of Service

Figure 11 demonstrates how the system throughput changed depending on the quality of service. The test used a 10-step process model and 10 KB documents. By definition, LQoS requires volatile documents and HQoS requires guaranteed documents. Guaranteed documents were also used for MQoS.

Quality of Service (process instances per second):

LQoS | MQoS | HQoS
2156 | 344 | 47

Figure 11: Process instances per second as a function of the quality of service


CPU Utilization for Quality of Service Variation:

IS CPU Utilization (Quality of Service), in %:

LQoS | MQoS | HQoS
39 | 97 | 45

Figure 12: CPU utilization as a function of the quality of service

Interpreting Test Results:

Figure 11 shows the expected trend: throughput improves as the configured quality of service is lowered.

As in the previous scenarios, the CPU for LQoS could not be fully utilized because the subscription trigger reached its full capacity. When MQoS was used there were more read/write operations, which required higher CPU utilization.

CPU utilization was lower for HQoS due to the time it took the database to complete all of its required operations.


8.0 CONCLUSION

All test results presented in this report show that the performance of webMethods Process Engine is heavily dependent on the selected quality of service. When the highest quality of service is used, Process Engine achieves the lowest throughput due to the large number of persistence operations on the local hard drives and the database server.

Based on the desired deployment and throughput requirements, different qualities of service can be implemented, but you should always consider the key factors that affect the system:

• Database server storage and processors

• Integration Server/Process Engine storage and processors

Provided that Process Engine is given enough CPU resources and very fast storage (local and database), it provides a solid platform for large enterprise deployments.

8.1 Performance Considerations

Archiving the active process tables:

Minimizing the amount of data in the active process tables will improve performance, so it is recommended that you archive or delete this data on a regular basis. Also, when auditGuaranteed is set to true, the temporary storage tablespace associated with the RDBMS user starts to fill up. This tablespace should be reinitialized from time to time.

Guaranteed high quality of service is costly:

QoS options from both ends of the spectrum were used to get an indication of the full range of performance. These tests reinforced the concept that the higher the QoS, the higher the performance penalty will be. Designers need to balance the need for performance with the need for data security. Designer and Integration Server provide a wide range of QoS options that allow one to strike a balance between these seemingly competing objectives.

Database utilization:

Because the limiting factor for HQoS performance is the database utilization, you should always use a powerful machine to host it. Also, deploy the fastest available storage to avoid an I/O bottleneck.

Configuration of the ProcessAudit JDBC pool is a key factor in HQoS performance. The Maximum Connections parameter should be tuned carefully, because values that are too large can lead to poor database performance.

How to choose QoS:

QoS has a large impact on throughput, so choosing QoS settings is an important design decision. When maximum reliability is needed, HQoS is the only choice, whereas when higher throughput is needed, LQoS is the best choice. For increased capacity where less reliability can be tolerated, MQoS is an option.


9.0 APPENDIX

9.1 Terminology

Steps per second: The number of successfully executed steps in one second.

9.2 Product Tuning

Integration Server:

• watt.server.threadPool = 400

• JVM minimum heap size = 4000MB

• JVM maximum heap size = 4000MB

• The following JVM parameters were added:

-XX:-UseParallelOldGC

-Xmn2g

• WmPRT package

- Database Operation Retry Limit = 1000

- Database Operation Retry Interval (sec) = 5

- Cleanup Service Execution Interval (sec) = 0

- Completed Process Expiration (sec) = 6

- Failed Process Expiration (sec) = 3600

Oracle Database Tuning:

• Set REDO logs to at least 1.5 GB

• Reduce statement parsing by adding the string “MaxPooledStatements=35” to the JDBC pool URL (see the example below)

• Set CURSOR_SHARING = EXACT
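For illustration only, a pool URL with this property appended might look like the constant below; the host, port, and service name are placeholders, and the exact URL format depends on the JDBC driver in use.

// Hypothetical example only: an Oracle JDBC URL for the Integration Server JDBC pool
// with statement pooling enabled. Host, port, and service name are placeholders.
public class JdbcPoolUrlExample {
    static final String POOL_URL =
        "jdbc:wm:oracle://dbhost:1521;serviceName=ORCL;MaxPooledStatements=35";

    public static void main(String[] args) {
        System.out.println(POOL_URL);
    }
}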

9.3 OS/HW Tuning

9.4 Build and Fixes

The build number of the Integration Server-hosted Mediator is 8.2.1.0.301.

The Broker Server build used for the tests in this report is 8.2.1.0.139.

Java 1.6 64-bit build 16

9.5 What this Report Does Not Cover

• Testing on 32-bit hardware

• Testing with a 32-bit JVM on both 32-bit and 64-bit hardware

©2011 Software AG. All rights reserved.
