
Solitaire Interglobal

One Douglas Avenue, Suite 200, Elgin IL 60120 | (847) 931-9100 | FAX (847) 931-7938 | www.sil-usa.com

Whitepaper: DB2 Performance on IBM System p® and System x®

Introduction

One of the most difficult architectural component decisions in today's information management environment is the choice of database management technology. By far the largest market share in this type of software is held by the Oracle®, Microsoft SQL Server® and IBM DB2™ products. In a recent release of an ongoing study by the Solitaire Interglobal Ltd. (SIL) research team, the production behaviors of Oracle and DB2 on the IBM® System p® platform, and the similar behaviors of Oracle, SQL Server and DB2 on the IBM® System x® platform, were analyzed and compared. This release is a scheduled part of the continuing Operational Characterization Master Study (OPMS) that has been conducted over the last 19 years by the SIL research team. OPMS is used in support of industry standards and performance certification worldwide.

The characteristics of the different DBMS products have been distilled from a review of more than 2,354,000 data points covering over 4,100 closely watched production comparisons, with more being collected each day. The relative advantages translate into hard cost savings for each customer that can substantially affect the bottom-line cost of ownership. The real-world effect on business is considerable.

These savings are most achievable when a good fit between the DBMS and hardware platform is employed. Allowing the strengths of the components to combine is one of the best ways to support customers in their continuing quest for lower costs and improved user satisfaction.

During this study, the main behavioral characteristics of both hardware and DBMS were examined closely. These traits affect the overt capacity and reliability of the combined architecture. The resultant behavior has to be examined within this conceptual framework to be clearly understood and to prepare that understanding for projection to other situations.

Although the raw performance of a subject system is an important metric, the translation of that performance into business terms is more germane to today's market. In this venue, the issue of relative costs can be a more significant metric than base performance. This measure encompasses a myriad of other factors, including reliability, staffing levels, vendor service responsiveness and time-to-market effects.


Benchmarks versus Production Behavior

The SIL research activity focuses on the capture of actual customer production information, rather than the construction and observation of benchmark tests. One of the main reasons for this strategy is that benchmarks tend to be artificial constructs, which miss some (or all) of the important behavioral combinations. Although every effort can be devoted to making a comprehensive test bed, the sheer number of combinations can defeat it. This problem was succinctly illustrated by a corollary study performed by SIL. In this study, 250 benchmark evaluation efforts were tracked against the number of unique combinations of system activities that occurred within 18 calendar months. All of these benchmarks were set up to fully define the activities of the subject systems. These activities were further weighted with the performance characteristics and timings that made a significant impact on the operations of the testing organization.

The tracked number of activities that were carefully defined for each test system is shown below. Each is identified by an organization number, which was assigned in the order that the organization was incorporated into the study, and therefore indicates no priority or relative precedence among the scenarios.

[Figure: Benchmark-defined Activities – test count (0–250) per Client ID (1–250)]
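To make the combinatorial problem concrete, here is a minimal sketch of how unique activity combinations might be counted from production logs and compared against a benchmark's defined set; the log shape and workload names are hypothetical, not from the SIL study.

```python
# Count unique activity combinations seen in production versus those a
# benchmark defined; each log record lists the workloads active at once.
production_log = [
    ("oltp", "backup"),
    ("oltp", "reporting"),
    ("oltp", "backup"),          # duplicate combination
    ("oltp", "reporting", "etl"),
]
benchmark_defined = {("oltp", "backup")}

observed = {tuple(sorted(combo)) for combo in production_log}
uncovered = observed - {tuple(sorted(c)) for c in benchmark_defined}

print(f"unique combinations observed: {len(observed)}")
print(f"never exercised by the benchmark: {len(uncovered)}")
```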

The actual number of unique activity combinations that were identified during the subsequent 18-month testing period varied significantly from those defined by the test bed. While not making any assertion concerning the relative importance of the different activity combinations, the gap between the defined test activities and the naturally-occurring ones is large enough to cast doubt on the comprehensiveness of the benchmarks.

[Figure: Activity Comparison – Benchmark versus Actual – benchmark and actual activity counts (0–350) per Client ID (1–250)]

While the relative importance is difficult to judge on the overall scope of activities in this comparison, it is possible to look at the performance sensitivity of both the test-defined and undefined activities in this situation. In this analysis, the activities shown are those that were not defined in the test bed and which showed a resultant performance load that exceeded the maximum threshold delineated by the benchmark test. Since each one of these excessive activities represents a significant risk to the operational system, the large number shown in this analysis identifies a notable area of exposure.


[Figure: Activity Sensitivity – Actual Impact over Tested – untested sensitive activities, problem count (0–60) per Client ID (1–250)]

Another reason for the SIL approach is that benchmark tests most frequently span too small a cross-section of actual activity to produce a significant difference in the recorded observations. Also, the combinations of values that are used to test and set definitive boundaries tend to occur only within the benchmark, and are seldom found in actual production. Therefore, the correlation of the benchmark to actual customer production operations is very small.

The approach taken by SIL uses a compilation and correlation of operational production behavior, using real systems and real business activities. Some of these systems have provided detailed side-by-side comparison runs of Oracle, SQL Server and DB2, while others have contributed to a pool of systems running with similar volumetrics. For the purposes of this investigation, over 650 AIX® and 3,450 Intel-based production systems were observed, recorded and analyzed to substantiate the findings.


DBMS Characteristics

The basic memory handling is considerably different among the DBMS products, with unique patterns of memory usage and reclamation significantly affecting the required memory levels, refresh patterns and overall workload. Each of these areas of differentiation can be clearly seen in the details of the comparative tests. While each test is unique, a substantive trend can be seen in the distinctive handling of memory for each DBMS.

Hardware Characteristics

The IBM equipment has definitive characteristics of its own to contribute to the overall behavioral mixture. These traits are actually a combination of the operating system and the physical hardware that the operating system addresses. For the System p running AIX, the salient characteristics are:

• Width of the I/O channels

• High level of start I/Os that the AIX operating system supports

• Efficiency of the memory-handling algorithms

• Network port addressing efficiencies

Obviously, the same hardware running a different operating system will demonstrate different combined qualities. These marks of the hardware and operating system provide an interesting amalgamation with DBMS activity.

The I/O trait is a dual-dimensioned factor. The first dimension is the number of bytes that can be sent along the necessary platform pathways, while the second dimension defines the sheer number of I/O activities that can be initiated in a single second on one platform processor. The conduit restriction of the I/O channels is a function of both of these factors.
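One plausible way to express this dual constraint (an illustration, not a formula from the study) is that the effective data rate of a channel is bounded by whichever dimension saturates first:

$$T_{\mathrm{eff}} = \min\bigl(B_{\mathrm{channel}},\ R_{\mathrm{startIO}} \times S_{\mathrm{transfer}}\bigr)$$

where $B_{\mathrm{channel}}$ is the channel width in bytes per second, $R_{\mathrm{startIO}}$ is the number of start I/Os a processor can initiate per second, and $S_{\mathrm{transfer}}$ is the average bytes moved per I/O.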

A similar synergy can be seen in the combination of the System x platform and the DBMS product, although the differences from other Intel-based servers are not as pronounced. Here the contributing hardware characteristics are the breadth of the I/O channel on the System x and the efficiency of the memory algorithms.

Study Scope

The evaluation domain included platforms from IBM and other vendors. This broad scope was used in order to crystallize an understanding of the impact of DBMS and platform synergy. Once the domain was collected, the relative degree of difference in performance on each platform set was compared to provide the net effect of the respective combinations.

These comparison options were run in one of two general topologies – side-by-side and correlated. The side-by-side methodology allows the system options to have separately dispatched activities. In this architecture, none of the executing platforms is dependent on the execution or completion of any of the others. Due to this lack of dependency, no allowance was made for serial latency in the analyzed timings.

The second architecture (referred to as “correlated” in this document) identifies one executing system as the parent or original operation. Each transaction or execution is first sent to that system. Then, depending on the type of spawning mechanism, the same request can be sent on immediately to the next system, and the next, and so on. In the test cases, only the DSS system used the correlated form of the parallel testing mechanism. Therefore, the DSS performance was adjusted to eliminate the dependency figures and latencies.

The following diagram illustrates the difference in execution architecture between the two methods.

[Figure: Dispatching Architectures – side-by-side versus correlated dispatch]
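As a minimal sketch (system names and request shape are hypothetical, not from the study), the two topologies can be contrasted in code: in the side-by-side form every system receives an independent copy of the work, while in the correlated form the parent executes first and the request is then forwarded along the chain.

```python
import copy

class System:
    """A stand-in for one executing platform in a comparison run."""
    def __init__(self, name):
        self.name = name
    def execute(self, request):
        return f"{self.name} processed {request!r}"

def side_by_side(request, systems):
    # Independent dispatch: no system waits on any other, so no
    # serial latency needs to be removed from the timings.
    return [s.execute(copy.deepcopy(request)) for s in systems]

def correlated(request, parent, children):
    # Parent-first dispatch: the request reaches each child only after
    # the parent, so downstream timings carry dependency latency.
    results = [parent.execute(request)]
    for child in children:
        results.append(child.execute(request))
    return results

print(side_by_side("txn-1", [System("p5-DB2"), System("p5-Oracle")]))
print(correlated("query-1", System("parent"), [System("child-1")]))
```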

The details of the different test comparisons were intriguing and provide good insight into the relative efficiencies of the possible system deployments.


System p Comparisons

The projects (and their method of comparison) that were examined for the System p platforms are:

• Credit card processes – side-by-side

• E-mail posting – side-by-side

• Web usage – side-by-side

• Customer support (CRMS) activities – side-by-side

• Decision Support System (DSS) – side-by-side

• Transactions – correlated

Many of the systems were run either in full or partial parallel for comparison purposes, while operating in full production. All of the System p machines tested were in the p5xx model range and used the p6 architecture. The number of separate organizations, along with the expanded venues that were run at those locations, is shown in the following table.

Study Structure – System p Comparison Components

System Type  Core Systems  Comparison Option Count

Credit Card 41 62

Email 121 173

Web Usage 306 1,193

CRMS 27 70

DSS 41 223

Transaction 117 456

Total 653 2,177

Therefore, the total UNIX®/AIX comparison environment comprised 2,177 separate comparison options across 653 core systems. The individual projects consisted of normal business activity in the subject areas. The activity was tracked by the contributing business action, including the number of expected rows per query, etc. A short list of the individual actions can be seen in Appendix A for each general project type.

Performance

The first area of examination is the base performance of the relative systems. A detailed comparison of the client tests, showing Oracle and DB2 performance characteristics, is represented by the following system-specific graphs.


[Figure: Credit Card Authorization Activities – TPS (0–40,000) by Activity ID (1–27), DB2 vs. Oracle]

[Figure: Email Activities – TPS (0–35,000) by Activity ID (1–10), DB2 vs. Oracle]


[Figure: Web Activities – TPS (0–60,000) by Activity ID (1–12), DB2 vs. Oracle]

[Figure: CRMS – TPS (0–40,000) by Activity ID (1–10), DB2 vs. Oracle]


[Figure: Transaction Processing Activities – TPS (0–60,000) by Activity ID (1–63), DB2 vs. Oracle]

[Figure: DSS Activities – queries per minute (0–1,000) by Activity ID (1–88), DB2 vs. Oracle]

The operational characteristics and efficiencies of the DB2 and Oracle comparisons show that the DB2 DBMS demonstrates a good range of differentiation when hosted on IBM platforms. The most critical factor in the transaction efficiency appears to depend on two elements of the DB2 architecture: memory handling and buffering. The DB2 memory handling delivers faster throughput, although its response times vary more widely than those of Oracle with its shared pool allocation. Even if the shared pool dynamics are removed, the optimized buffering mechanisms for storage and retrieval provide clear advantages in efficiency. The amount of divergence varies widely, based on application, platform, etc. When DB2 is hosted on IBM System p equipment, the range of improved performance is heavily clustered, with a synergistic alignment of memory handling optimizations in both DBMS and hardware. This provides a more sharply defined answer to the DBMS uncertainty. The summation of the differences in performance can be seen in the following chart.

[Figure: Average TPS Summation – pSeries – DB2 vs. Oracle (0–40,000 TPS) across CCA, E-mail, Web Usage, CRMS and Transaction Processing]

The DSS comparison does not lend itself to the transaction-per-second metric, and so its comparison was done on a different basis and is summarized in the chart below.


[Figure: Average Queries Per Minute Summation – pSeries – DSS, DB2 vs. Oracle (0–600)]

The improved efficiency of the DB2 product on the System p is significantly higher than on the overall equipment population domain, ranging from 15.1% to 37.0% on the transaction-based metrics and averaging 58.6% better on the query-based metric. Once again this can be traced to the efficiencies in memory handling and buffering. Even more significant is the small mean variance for each of the application-type comparisons. The data values representing DBMS execution on all platform types varied by approximately 22.8 points. For the System p and DB2 combination, variance was less than 3.6 points. This is significant in that it increases the predictability of performance behavior. Since the dependability of a projection is indicated by how tightly the observations cluster, this low variance is important.
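As an illustration of the clustering argument (the study does not spell out its exact dispersion statistic, so mean absolute deviation is used here as one plausible reading, and the sample numbers are invented):

```python
import statistics

def spread(samples):
    """Mean absolute deviation from the mean, in percentage points."""
    m = statistics.mean(samples)
    return sum(abs(x - m) for x in samples) / len(samples)

# Hypothetical DB2-over-Oracle improvement observations (percent).
all_platforms = [4.4, 8.2, 12.0, 19.7, 27.9, 30.1, 35.5, 41.3]
system_p_db2 = [22.5, 23.8, 24.0, 25.1, 25.9, 26.3]

# A tighter spread means projections built from the observations
# can be trusted further.
print(f"all platforms:  +/-{spread(all_platforms):.1f} points")
print(f"System p + DB2: +/-{spread(system_p_db2):.1f} points")
```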

Business Measures

Many businesses consider the total cost and risk of ownership to be key metrics when evaluating purchase options. Some of the contributory metrics that go into building this picture are reliability, staffing, total costs, and time-to-market effects. These factors were measured during the same time period that the performance metrics were gathered for all of the subject tests. A summary of these findings is presented here, so that a more comprehensive understanding of the real benefits of the platform choice can be obtained.

Reliability

The dependability of the implementation is a combination of the individual reliability of each component, along with the quality and effectiveness of the actual implementation. As such, both the planned and unplanned outages affect the overall usability of the total system. The charts below show the number of outages that were recorded during the testing period, as well as the total number of minutes that those outages consumed.

[Figure: Normalized Outages (All) – monthly outage count (0–14), Months 1–12, DB2 vs. Oracle]

The number of outages has been normalized for a 200-platform operation, with both planned and unplanned outages included. One of the largest contributing factors is the need for reorganization and redeployment. Each of these outages takes valuable access time away from the corporate resources. The following chart shows the total number of minutes of unavailability that those outages represent.

[Figure: Normalized Monthly Outage Time (All) – outage minutes (0–2,000), Months 1–12, DB2 vs. Oracle]
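A minimal sketch of the kind of fleet-size normalization described above (the linear scaling is an assumption; SIL's exact method is not given):

```python
def normalize_outage_count(raw_count, platforms_observed, base_platforms=200):
    """Restate an observed outage count as if the shop ran
    `base_platforms` systems, assuming outages scale linearly
    with fleet size."""
    return raw_count * base_platforms / platforms_observed

# e.g. 3 outages in a month across a 48-platform shop, per 200 platforms:
print(normalize_outage_count(3, 48))  # 12.5
```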

When the root causes of reliability are examined, the biggest reported differences can be found in the count and duration of planned downtime occurrences. DB2 was heavily favored in this regard, with customers consistently reporting easier movement and allocation of resources without any interruption of availability. With fewer and shorter outages for normal operational activities, the overall availability and reliability of the DBMS shows some clear differentiation. The secondary contribution to the reliability was the need for fewer upgrades, which require significant outages and schedule interrupts.


The timing and periodicity of the tracked outages is also significant, in that it displays some interesting patterns for support. Even in this relatively restricted sampling, operational support events occur with a clearly higher frequency for the Oracle installations than for the DB2 implementations. This is supported by subjective customer feedback on the occurrence of operational activities for DBAs and storage personnel.

Staffing

The number and skill category of the staff required to support the various implementations is another way of looking at the business issues for a deployment. The chart below shows the relative staffing for a normalized operations group. This comparison can be used to understand the challenges of staffing and overall budget costs for the two DBMS implementations on IBM equipment. These staffing figures were collected from the actual operation groups measured in the other metric collection efforts, and cover organizations in North and South America, Asia, Europe, Antarctica and Australia. These organizations have reported staffing for 24x7 coverage, rather than single shift. Since a wide variety of working environments have been included in the analysis base, all the reporting has been normalized using a unit of measure called FTE (full-time equivalent), which assumes a standard 2080-hour work year.
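A minimal sketch of that normalization (the coverage figure is illustrative, not from the study):

```python
STANDARD_WORK_YEAR_HOURS = 2080  # the work year the study normalizes to

def to_fte(reported_hours_per_year):
    """Convert reported support hours per year into full-time equivalents."""
    return reported_hours_per_year / STANDARD_WORK_YEAR_HOURS

# e.g. a 24x7 coverage rotation reported as 9,100 staff-hours per year:
print(round(to_fte(9100), 1))  # 4.4 FTE
```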

[Figure: Relative Staffing by Discipline – FTE (0.0–8.0) by skill category, DB2 vs. Oracle; the skill categories are those listed in the table below]


The source data used to construct this graph has also been included.

Staff Discipline DB2 Oracle

Account management 0.1 0.1

Application management 0.9 1.4

Backup and archiving 0.1 0.7

Business recovery services 0.2 0.3

Database management and administration 1.5 4.3

Disk and file management 0.9 1.2

HW and network configuration / re-configuration 0.3 0.3

HW deployment 0.3 0.3

Operations 4.7 7.2

OS support 0.3 0.3

Planning and process management 0.7 1.0

Performance tuning 0.7 2.1

Repository management 0.2 0.2

Security and virus protection 0.1 0.1

Service desk 3.2 5.7

Software deployment 0.3 0.5

Storage capacity planning 0.3 0.7

Systems research, planning and product management 0.1 0.1

Traffic management & planning 0.9 0.9

User administration 0.2 0.5

Total staffing level 16.0 27.9

Note that even though the staffing levels to support this small section of machines show only modest variation, given FTE rates that run in excess of $96,250 in most operations, the 11.9 FTE difference accumulates quickly; at that rate it represents more than $1.1 million per year.

This cost and risk component is extremely sensitive to the degree and quality of integration and any autonomic features. The substantial strides made in this area in recent releases have significantly affected the necessary staffing levels.

One of the staffing areas that shows a large amount of difference is the service desk, or help center. This difference can be traced to a substantial variance in overall calls, and an equivalent reduction in the amount of time that is required to settle each call. Since many of the calls to the reported help desks were due to latency issues, uneven performance results and resource unavailability, the differences in DBMS reliability translate into significant support cost savings. Not only are fewer calls initiated (25.4% less), but a higher percentage of calls are resolved in Tier I analysis (31.2%). Several significant contributing factors are present in the different resolution levels. Primarily, more support calls are resolved in Tier I because fewer spawned actions have to be performed. A spawned support action can be a further request to operations to cancel a job, restart a database instance or other technically restricted service requests. A lower number of calls initiated by hung jobs and overloaded database queues translates into faster resolution percentages on the support calls. The other significant contributor to the higher portion of Tier I resolutions appears to be increased transparency to database performance. While the number of performance-initiated calls has been reported as slightly higher for the DB2 DBMS, a significant portion of those calls resolve themselves during the call itself. This results in a shorter handling time per call that ranges from 17.3–45.7%. All of these factors help to form a clear differentiation in staffing.

Time-to-Market Effects

Understanding relative time-to-market metrics tends to be difficult, since it is very rare that organizations actually conduct concurrent development of similar systems. In addition, rapidly changing application development methods and tools can cause comparisons to be done between significantly different development streams, with highly skewed results. In order to avoid such a misleading situation, a set of highly concurrent and paired development efforts was analyzed carefully to understand the factors of application development and speed as correlated to a DBMS. These 25 development efforts highlight some interesting characteristics and trends.

All of the contributory factors, such as staffing, reliability, etc., radically affect the speed with which a company can move a business concept from inception to market. This nimbleness is a key element of increasing market share and continued corporate viability. While the performance metrics were gathered on the production systems, additional measurements were also collected to track the amount of time that the systems took to move from initial conception to full production implementation. The results of this effort are summarized on the following chart.

[Figure: Time to Market – Days – staff days (0–1,600) per Tracked System Pair ID (1–25), DB2 vs. Oracle]

The 25 systems tracked for this portion of the study were paired based on either simultaneous comparative development, or function point equivalents and application type. The comparison is intended to be evocative and not quantitative, since other critical success factors can enter into this picture. However, there are some interesting contributors to the difference in the time-to-market interval. These are primarily the lower number of schedule delays and a substantially lower staff time to provision, where provisioning is defined as the creation of the resources necessary to operate a system (e.g., storage space). The significant reduction in provisioning time has been reported as 8.2–26.7%. The reasons for schedule delays appear to be layered, but any time additional funding has to be obtained, the schedules tend to expand. Additional data points show that the tools available within the DBMS play a big role in the speed of implementation. The broad DBA performance and monitoring tools were cited in 24 of the 25 cases as a significant efficiency factor, while the historical performance analysis was identified in 19 of the situations as a major assistance in operational tuning. Hence, any reliance on the speed of implementation should take this source of efficiency into account.

Service-Oriented Architecture

One area of growing interest in the industry is the implementation of service-oriented architecture (SOA). In general, SOA operates under the tenets of:

• Reuse

• Granularity

• Modularity

• Composability

• Componentization

• Interoperability

• Compliance to standards (common and industry)

• Service categorization, including:

  – Provisioning and delivery

  – Monitoring and tracking

These are the general principles. From an observed characteristic perspective, the result of these principles can be quantified by higher resource utilization, faster provisioning and fewer defects. For most organizations, success in SOA implementation can be defined as more than a 25% increase in resource utilization and a 30% or greater increase in the speed of provisioning. For the purposes of this study that metric has been used. While the advantages in cost containment and flexibility with SOA are widely accepted, the successful implementation of this architecture is significantly affected by the underlying DBMS and hardware components. If the success rate of SOA implementation is viewed against the DBMS, a noticeable difference is apparent.
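Restated as a predicate (a sketch; whether the 25% bound is strict and the 30% bound inclusive is an assumption read from the wording above):

```python
def soa_success(utilization_gain_pct, provisioning_speedup_pct):
    """Apply the study's success bar: more than a 25% increase in
    resource utilization and a 30% or greater increase in the speed
    of provisioning."""
    return utilization_gain_pct > 25 and provisioning_speedup_pct >= 30

print(soa_success(31.0, 42.5))  # True
print(soa_success(28.0, 18.0))  # False: provisioning gain too small
```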

The success rate has been normalized on a basis of 100 implementations, to avoid skewing the results.


[Figure: SOA Implementation Success – success percentage (0–100%), DB2 vs. Oracle]

All of these factors are significant when an organization selects a platform. To pay attention simply to performance is to ignore significant data and to make decisions with willfully truncated information.

System x Comparisons

Similarly to the examination of the System p platforms, the System x platforms were investigated for raw performance and a myriad of other factors, including reliability, time-to-market effects, staffing levels and total costs. Microsoft SQL Server® was also included in this venue, due to its substantial market position in the Intel-based platform arena.

Performance

The examination of performance on the System x was restricted to the models normally used for general workload, such as System x Model x3950 and blades such as the HS21 XM. This examination was conducted within the available production environments and centered on those application workloads that typically are designated for System x, such as client ERP activity, customer support presentation layers, order entry, etc.

The delineation between the DBMS products is not as clearly defined on the System x platforms as on the System p. Primarily due to the smaller range of platform characteristics in the production domain for Intel-based systems, the System x shows a smaller, but still definitive, advantage with the DB2 product over Oracle and MS SQL Server. The projects examined under this category include platforms utilizing Microsoft Windows or Linux operating systems. The projects (and their method of comparison) that were examined for the System x platforms are:


• ERP – side-by-side

• CRMS – side-by-side

• Web usage – side-by-side

• Order entry – side-by-side

• Retail and distribution – side-by-side

• Transactions – correlated

Many of the systems were run either in full or partial parallel for comparison purposes, in full production. The number of separate organizations, along with the expanded venues that were run at those locations, is shown in the following table.

Study Structure – System x Comparison Components

System Type Core Systems Comparison Option Count

ERP 53 70

CRMS 384 783

Order Entry 243 877

Retail and Distribution 97 217

Web Usage 2,034 15,886

Transaction 647 2,646

Total 3,458 20,480

Therefore, this total comparison environment comprised 20,480 separate comparison options across 3,458 core systems. The individual projects consisted of normal activity in the subject areas. The activity was tracked by the contributing business action, including the number of expected rows per query, etc. A short list of the individual actions can be seen in Appendix A.

A detailed comparison of the client tests, showing Oracle, MS SQL Server and DB2 performance characteristics, is represented by the following system-specific graphs.

[Figure: ERP Activities – TPS (0–8,000) by Activity ID (1–11), DB2 vs. MS SQL Server vs. Oracle]


[Figure: CRMS Activities – TPS (0–9,000) by Activity ID (1–10), DB2 vs. MS SQL Server vs. Oracle]

[Figure: Web Activities – TPS (0–20,000) by Activity ID (1–12), DB2 vs. MS SQL Server vs. Oracle]


[Figure: Order Entry Activities – TPS (0–14,000) by Activity ID (1–36), DB2 vs. MS SQL Server vs. Oracle]

[Figure: Retail and Distribution Activities – TPS (0–10,000) by Activity ID (1–10), DB2 vs. MS SQL Server vs. Oracle]


[Figure: Transaction Processing Activities – TPS (0–18,000) by Activity ID (1–62), DB2 vs. MS SQL Server vs. Oracle]

The amount of divergence between the performance of DB2, MS SQL Server and Oracle on System x is not as wide as in the System p and UNIX arena, but still varies based on application, platform, etc. The summation of the differences in performance can be seen in the following chart.

[Figure: Average TPS Summary – System x, TPS (0–16,000) across ERP, Web, CRMS, Order Entry, Retail and Distribution, and Transaction Processing; Oracle vs. MS SQL Server vs. DB2]


The improved efficiency of the DB2 product on the System x is notably higher than on the overall equipment population domain, ranging from 26.6% through 81.4% on all of the transaction-based metrics, compared to Oracle. The comparison with MS SQL Server shows increased efficiency of 20.1% through 56.8%.

Business Measures

Similarly to the UNIX environment, the total cost and risk of ownership remain the primary focus when evaluating purchase options. The contributory metrics that go into building this picture are still reliability, staffing, total costs and time-to-market effects. These factors were measured during the same time period that the performance metrics were gathered for all of the subject tests for the Intel-based test group. As in the UNIX section, a summary of these findings is presented here.

Reliability

The dependability of the implementation is a combination of the individual reliability of each component, along with the quality and effectiveness of the actual implementation. As such, both the planned and unplanned outages affect the overall usability of the total system. The charts below show the number of outages that were recorded during the testing period, as well as the total number of minutes that those outages consumed.

[Figure: Normalized Monthly Outages (All) – monthly outage count (0–25), Months 1–12, DB2 vs. MS SQL Server vs. Oracle]

The number of outages has been normalized for a 75-platform operation, with both planned and unplanned outages included. Since the normal learning curve may skew the results, the first three months of any project incorporating a previously-unknown operational architecture have also been excluded. Among the remaining factors, one of the most prevalent contributors is the need for database reorganization and redeployment. Each of these outages takes valuable access time away from the corporate resources. The following chart shows the total number of minutes of unavailability that those outages represent.

[Figure: Normalized Monthly Outage Time (All) – outage minutes (0–450), Months 1–12, DB2 vs. MS SQL Server vs. Oracle]

The architectural characteristics of the different DBMS products result in widely varied patterns of tuning and “burn-in”, where procedures and methods are established for a particular implementation. It is notable that while the number of outages for DB2 is higher during the initial quarter-year, the overall outage time is significantly lower than would be expected, with lower per-outage time averages, as can be seen in the following chart.

[Figure: Average Time per Outage – minutes (0–30), Months 1–12, DB2 vs. MS SQL Server vs. Oracle]


Staffing

Another important perspective on the business issues for any deployment is the number and skills of the supporting staff. These factors represent a significant cost contribution and are indicative of the complexity and risk of the environment. The chart below shows the relative staffing for a normalized operations group. This comparison can be used to understand the challenges of staffing and overall budget costs for the three DBMS implementations on IBM equipment. These staffing figures were collected from the actual operation groups measured in the other metric collection efforts, and cover organizations in North and South America, Asia, Europe, Antarctica and Australia.

[Figure: Relative Staffing by Discipline – FTE (0.0–8.0) by skill category, DB2 vs. MS SQL Server vs. Oracle; the skill categories are those listed in the table below]

The source data used to construct this graph has also been included.

Staff Discipline Detail

Staff Discipline  DB2  MS SQL Server  Oracle

Account management 0.1 0.1 0.2

Application management 0.5 0.5 0.7

Backup and archiving 0.1 0.2 0.3

Business recovery services 0.1 0.2 0.2

Database management and administration 1.3 2.0 2.1

Disk and file management 0.5 0.8 0.6

HW and network configuration / re-configuration 0.5 0.5 0.5

HW deployment 0.1 0.1 0.1

Operations 3.0 3.8 3.5

OS support 0.5 0.5 0.5

Planning and process management 0.2 0.3 0.2


Performance tuning 0.3 1.0 0.7

Repository management 0.1 0.3 0.3

Security and virus protection 0.3 0.3 0.3

Service desk 1.0 1.3 1.0

Software deployment 0.4 0.4 0.4

Storage capacity planning 0.1 0.1 0.2

Systems research, planning and product management 0.1 0.1 0.1

Traffic management & planning 0.4 0.6 0.4

User administration 0.1 0.2 0.1

Total staffing level 9.7 13.3 12.4

Note that even though the staffing levels to support this small section of machines show only modest variation, given FTE rates that run in excess of $68,500 in most operations, the 2.7 FTE (versus Oracle) to 3.6 FTE (versus MS SQL Server) difference accumulates quickly. Once again, integration and any autonomic features affect the necessary staffing levels.

One of the staffing areas that shows a difference is the service desk, or help center. This difference can be traced to a substantial variance in overall calls, and an equivalent reduction in the amount of time that is required to settle each call. Since many of the calls to the reported help desks were due to latency issues, uneven performance results and resource unavailability, the differences in DBMS reliability translate into significant support cost savings. Not only are fewer calls initiated (9.1% less), but a higher percentage of calls are resolved in Tier I analysis (12.0%). This results in a shorter handling time per call that ranges from 3.2–7.6%.

Time-to-Market Effects

All of the contributory factors, such as staffing, reliability, etc., radically affect the speed with which a company can move a business concept from inception to market. This nimbleness is a key element of increasing market share and continued corporate viability. In addition to the production performance metrics, data was also collected on various aspects of the project lifecycle. Relevant portions of the examination are summarized on the following chart.


[Figure: Time-to-Market – Average – staff days (0–1,200) by Implementation Class (Small, Medium, Large, Very large), DB2 vs. MS SQL Server vs. Oracle]

Note that the implementation interval is broken down by the relative size of the project. This has been done to avoid obscuring important details.

The systems tracked for this portion of the study were associated based on either simultaneous comparative development, or function point equivalents and application type. The comparison is intended to be evocative and not quantitative. The average development days to market are significant indications within the application classification.

Conclusions

After extensive review of more than 2,354,000 data points covering over 4,100 closely watched production comparisons, the advantage of running DB2 on IBM System p and System x equipment is strongly supported. The advantages of the synergy between DBMS and platform translate into hard cost savings for each customer, and substantially affect the bottom-line cost of ownership. The resulting cost containment stems from advantages in performance, reliability, staffing and agility. By demonstrating higher performance levels and consistent reliability, the implementing customer can expect lower staffing levels and faster time-to-market. This in turn positions the organization for better strategic competition, profit levels and market response.

This is most achievable when a good fit between the DBMS and hardware platform is employed. Allowing the strengths of the System x and System p platforms to combine with the strengths of the DB2 product is one of the best ways to support the customer in their continuing quest for lower costs and improved user satisfaction.


Attributions

AIX, DB2, IBM, eServer, System p, and System x are trademarks or registered trademarks of International Business Machines Corporation in the United States of America and other countries. Microsoft, Windows, SQL Server and the Windows logo are trademarks of Microsoft Corporation in the United States of America. UNIX is a registered trademark in the United States of America and other countries, licensed exclusively through The Open Group. Oracle is a registered trademark in the United States of America and other countries of ORACLE Corporation. Other company, product and service names may be trademarks or service marks of others.


Appendix A: Test Cases

UNIX Test Cases

A short list of the individual actions can be seen below for each general project type. In all of the situations, the components of the system type were carefully recorded, with overall performance and capacity characteristics analyzed closely. The findings from this analysis were then scrutinized to show root cause and the unique performance footprint that each DBMS and platform creates.

Tracked System Activity: Credit Card Processes

1. card verification
2. card approval
3. card rejection
4. merchant verification
5. merchant approval
6. merchant rejection
7. purchase amount approval
8. purchase amount rejection
9. posting purchase to cardholder account
10. posting purchase to merchant account
11. posting returns to cardholder account
12. posting returns to merchant account
13. posting cardholder payments
14. posting merchant payments
15. card limit adjustment
16. over limit notification to cardholder
17. merchant limit adjustment
18. periodic billing to cardholder
19. billing adjustments to cardholder
20. special request bill printing
21. periodic accounting to merchant
22. accounting adjustments to merchant
23. fraud analysis
24. clearinghouse batch transmission
25. cash advance posting
26. interest recalculation
27. retroactive posting calculations

A total of 41 core systems were tracked with this type of large-scale application. The subject organizations covered a mixture of full financial service companies, others that are solely focused on this market sector, and retail organizations that also offer this type of customer product. Fourteen of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.


Tracked System Activity: Email Posting

1. send message
2. retrieve message
3. open message
4. delete message from active
5. erase message from account
6. forward message
7. add new address
8. delete address
9. amend address
10. save draft copy of message

A total of 121 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Sixty of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Web Usage

1. logon
2. logoff
3. follow link
4. display image
5. access menu
6. post message
7. display post
8. update information field
9. cancel session
10. search via engine
11. search direct
12. view online AV content

A total of 306 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Seventy-two of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Customer Support

1. log new support call
2. update support call data
3. log completion of support call
4. lookup customer name
5. lookup customer address
6. lookup customer program information
7. lookup customer bill or invoice
8. update customer name
9. update customer address
10. update customer program information

A total of 27 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Eleven of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: DSS

1. receive data
2. clean data
3. resolve data discrepancies
4. verify user
5. insert one row of data into single small table
6. insert one row of data into single medium table
7. insert one row of data into single large table
8. insert one row of data into each of 3-5 small tables
9. insert one row of data into each of 3-5 medium tables
10. insert one row of data into each of 3-5 large tables
11. insert one row of data into each of 3-5 combination tables
12. insert ten rows of data into single small table
13. insert ten rows of data into single medium table
14. insert ten rows of data into single large table
15. insert ten rows of data into each of 3-5 small tables
16. insert ten rows of data into each of 3-5 medium tables
17. insert ten rows of data into each of 3-5 large tables
18. insert ten rows of data into each of 3-5 combination tables
19. insert 100 rows of data into single small table
20. insert 100 rows of data into single medium table
21. insert 100 rows of data into single large table
22. insert 100 rows of data into each of 3-5 small tables
23. insert 100 rows of data into each of 3-5 medium tables
24. insert 100 rows of data into each of 3-5 large tables
25. insert 100 rows of data into each of 3-5 combination tables
26. insert 1000 rows of data into single small table
27. insert 1000 rows of data into single medium table
28. insert 1000 rows of data into single large table
29. insert 1000 rows of data into each of 3-5 small tables
30. insert 1000 rows of data into each of 3-5 medium tables
31. insert 1000 rows of data into each of 3-5 large tables
32. insert 1000 rows of data into each of 3-5 combination tables
33. roll-up data for 3-5 small tables from daily to weekly
34. roll-up data for 3-5 small tables from daily to calendar month
35. roll-up data for 3-5 small tables from daily to fiscal month
36. roll-up data for 3-5 small tables from daily to calendar quarter
37. roll-up data for 3-5 small tables from daily to fiscal quarter
38. roll-up data for 3-5 small tables from daily to calendar year
39. roll-up data for 3-5 small tables from daily to fiscal year
40. roll-up data for 3-5 medium tables from daily to weekly
41. roll-up data for 3-5 medium tables from daily to calendar month
42. roll-up data for 3-5 medium tables from daily to fiscal month
43. roll-up data for 3-5 medium tables from daily to calendar quarter
44. roll-up data for 3-5 medium tables from daily to fiscal quarter
45. roll-up data for 3-5 medium tables from daily to calendar year
46. roll-up data for 3-5 medium tables from daily to fiscal year
47. roll-up data for 3-5 large tables from daily to weekly
48. roll-up data for 3-5 large tables from daily to calendar month
49. roll-up data for 3-5 large tables from daily to fiscal month
50. roll-up data for 3-5 large tables from daily to calendar quarter
51. roll-up data for 3-5 large tables from daily to fiscal quarter
52. roll-up data for 3-5 large tables from daily to calendar year
53. roll-up data for 3-5 large tables from daily to fiscal year
54. roll-up data for 3-5 combination tables from daily to weekly
55. roll-up data for 3-5 combination tables from daily to calendar month
56. roll-up data for 3-5 combination tables from daily to fiscal month
57. roll-up data for 3-5 combination tables from daily to calendar quarter
58. roll-up data for 3-5 combination tables from daily to fiscal quarter
59. roll-up data for 3-5 combination tables from daily to calendar year
60. roll-up data for 3-5 combination tables from daily to fiscal year
61. retrieve one row of data from single small table
62. retrieve one row of data from single medium table
63. retrieve one row of data from single large table
64. retrieve one row of data from each of 3-5 small tables
65. retrieve one row of data from each of 3-5 medium tables
66. retrieve one row of data from each of 3-5 large tables
67. retrieve one row of data from each of 3-5 combination tables
68. retrieve ten rows of data from single small table
69. retrieve ten rows of data from single medium table
70. retrieve ten rows of data from single large table
71. retrieve ten rows of data from each of 3-5 small tables
72. retrieve ten rows of data from each of 3-5 medium tables
73. retrieve ten rows of data from each of 3-5 large tables
74. retrieve ten rows of data from each of 3-5 combination tables
75. retrieve 100 rows of data from single small table
76. retrieve 100 rows of data from single medium table
77. retrieve 100 rows of data from single large table
78. retrieve 100 rows of data from each of 3-5 small tables
79. retrieve 100 rows of data from each of 3-5 medium tables
80. retrieve 100 rows of data from each of 3-5 large tables
81. retrieve 100 rows of data from each of 3-5 combination tables
82. retrieve 1000 rows of data from single small table
83. retrieve 1000 rows of data from single medium table
84. retrieve 1000 rows of data from single large table
85. retrieve 1000 rows of data from each of 3-5 small tables
86. retrieve 1000 rows of data from each of 3-5 medium tables
87. retrieve 1000 rows of data from each of 3-5 large tables
88. retrieve 1000 rows of data from each of 3-5 combination tables

A total of 41 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Sixteen of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
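As an illustration of the roll-up activities in the list above (items 33-60), the sketch below aggregates daily rows into weekly totals; the table and column names are hypothetical, and SQLite is used only so the example is self-contained.

```python
import sqlite3

# Illustrative daily-to-weekly roll-up in the spirit of the DSS activities.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE daily_sales (day TEXT, store INTEGER, amount REAL)")
con.executemany(
    "INSERT INTO daily_sales VALUES (?, ?, ?)",
    [("2008-01-01", 1, 100.0), ("2008-01-02", 1, 150.0), ("2008-01-08", 1, 90.0)],
)
# Group daily rows into week buckets; strftime('%Y-%W') labels each week.
rows = con.execute(
    """SELECT strftime('%Y-%W', day) AS week, store, SUM(amount)
       FROM daily_sales GROUP BY week, store"""
).fetchall()
print(rows)
```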

Tracked System Activity: Transactions

1. insert one row of data into single small table
2. insert one row of data into single medium table
3. insert one row of data into single large table
4. insert one row of data into each of 3-5 small tables
5. insert one row of data into each of 3-5 medium tables
6. insert one row of data into each of 3-5 large tables
7. insert one row of data into each of 3-5 combination tables
8. insert ten rows of data into single small table
9. insert ten rows of data into single medium table
10. insert ten rows of data into single large table
11. insert ten rows of data into each of 3-5 small tables
12. insert ten rows of data into each of 3-5 medium tables
13. insert ten rows of data into each of 3-5 large tables
14. insert ten rows of data into each of 3-5 combination tables
15. insert 100 rows of data into single small table
16. insert 100 rows of data into single medium table
17. insert 100 rows of data into single large table
18. insert 100 rows of data into each of 3-5 small tables
19. insert 100 rows of data into each of 3-5 medium tables
20. insert 100 rows of data into each of 3-5 large tables
21. insert 100 rows of data into each of 3-5 combination tables
22. read data in single small table
23. read data in single medium table
24. read data in single large table
25. read data in 3-5 small tables
26. read data in 3-5 medium tables
27. read data in 3-5 large tables
28. read data in 3-5 combination tables
29. update one row of data in each of 3-5 small tables
30. update one row of data in each of 3-5 medium tables
31. update one row of data in each of 3-5 large tables
32. update one row of data in each of 3-5 combination tables
33. update ten rows of data in single small table
34. update ten rows of data in single medium table
35. update ten rows of data in single large table
36. update ten rows of data in each of 3-5 small tables
37. update ten rows of data in each of 3-5 medium tables
38. update ten rows of data in each of 3-5 large tables
39. update ten rows of data in each of 3-5 combination tables
40. update 100 rows of data in single small table
41. update 100 rows of data in single medium table
42. update 100 rows of data in single large table
43. update 100 rows of data in each of 3-5 small tables
44. update 100 rows of data in each of 3-5 medium tables
45. update 100 rows of data in each of 3-5 large tables
46. update 100 rows of data in each of 3-5 combination tables
47. delete one row of data from each of 3-5 small tables
48. delete one row of data from each of 3-5 medium tables
49. delete one row of data from each of 3-5 large tables
50. delete one row of data from each of 3-5 combination tables
51. delete ten rows of data from single small table
52. delete ten rows of data from single medium table
53. delete ten rows of data from single large table
54. delete ten rows of data from each of 3-5 small tables
55. delete ten rows of data from each of 3-5 medium tables
56. delete ten rows of data from each of 3-5 large tables
57. delete ten rows of data from each of 3-5 combination tables
58. delete 100 rows of data from single small table
59. delete 100 rows of data from single medium table
60. delete 100 rows of data from single large table
61. delete 100 rows of data from each of 3-5 small tables
62. delete 100 rows of data from each of 3-5 medium tables
63. delete 100 rows of data from each of 3-5 large tables
64. delete 100 rows of data from each of 3-5 combination tables

A total of 117 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Fifty-one of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
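These transaction cases follow a common pattern: a fixed unit of insert, read, update or delete work, executed and timed as a single transaction. The sketch below shows one plausible shape for such a measurement, again using Python's sqlite3 as a stand-in; the table names, row counts and timing approach are assumptions made for illustration, not the study's instrumentation.

    # Hypothetical timing sketch; SQLite stands in for the tracked DBMS
    # products, and all object names are invented.
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    TABLES = ("small_t1", "small_t2", "small_t3")   # an assumed "3-5 small tables"
    for name in TABLES:
        conn.execute(f"CREATE TABLE {name} (id INTEGER PRIMARY KEY, payload TEXT)")

    def timed(label, work):
        # Run one test case inside a single transaction and report elapsed time.
        start = time.perf_counter()
        work()
        conn.commit()
        print(f"{label}: {(time.perf_counter() - start) * 1000:.3f} ms")

    # Case 4: insert one row of data into each of 3-5 small tables.
    timed("1 row x 3 tables", lambda: [
        conn.execute(f"INSERT INTO {t} (payload) VALUES ('x')") for t in TABLES])

    # Case 18: insert 100 rows of data into each of 3-5 small tables.
    timed("100 rows x 3 tables", lambda: [
        conn.executemany(f"INSERT INTO {t} (payload) VALUES (?)", [("x",)] * 100)
        for t in TABLES])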

Intel-based Test Cases

Tracked System Activity: ERP

1. refresh screen
2. move from one screen to another
3. add a customer
4. update customer data
5. terminate a customer
6. add a vendor
7. update vendor data
8. terminate a vendor
9. add an employee
10. update employee data
11. terminate an employee

A total of 53 core systems were tracked with this type of large-scale application. The subject organizations covered a mixture of full financial service companies, others that are solely focused on this market sector, and retail organizations that also offer this type of customer product. Twelve of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
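The customer-related ERP cases (3-5) reduce to single-row master-data operations. The following minimal sketch shows one way they might look; the schema and the soft-delete treatment of "terminate" are assumptions made for this example, not the study's definitions.

    # Illustrative sketch; schema and conventions are invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE customer (
        id INTEGER PRIMARY KEY,
        name TEXT,
        status TEXT DEFAULT 'ACTIVE')""")

    def add_customer(name):              # case 3: add a customer
        return conn.execute("INSERT INTO customer (name) VALUES (?)",
                            (name,)).lastrowid

    def update_customer(cust_id, name):  # case 4: update customer data
        conn.execute("UPDATE customer SET name = ? WHERE id = ?",
                     (name, cust_id))

    def terminate_customer(cust_id):     # case 5: terminate a customer,
        # modeled here as a status change rather than a physical DELETE
        conn.execute("UPDATE customer SET status = 'TERMINATED' WHERE id = ?",
                     (cust_id,))

    cid = add_customer("Acme Ltd")
    update_customer(cid, "Acme Limited")
    terminate_customer(cid)
    print(conn.execute("SELECT * FROM customer").fetchone())
    # (1, 'Acme Limited', 'TERMINATED')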

Tracked System Activity: Customer Support

1. log new support call
2. update support call data
3. log completion of support call
4. lookup customer name
5. lookup customer address
6. lookup customer program information
7. lookup customer bill or invoice
8. update customer name
9. update customer address
10. update customer program information

A total of 384 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Eighty-six of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
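The support-call cases pair short writes (log, update, complete) with indexed point lookups against customer data. A minimal sketch follows, assuming an invented schema and index that are not part of the study.

    # Illustrative sketch only; the schema and index are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE support_call (
        id INTEGER PRIMARY KEY, customer TEXT, status TEXT)""")
    conn.execute("CREATE INDEX idx_call_customer ON support_call (customer)")

    # Case 1: log new support call.
    call_id = conn.execute(
        "INSERT INTO support_call (customer, status) VALUES (?, 'OPEN')",
        ("Acme",)).lastrowid

    # Case 2: update support call data.
    conn.execute("UPDATE support_call SET status = 'IN PROGRESS' WHERE id = ?",
                 (call_id,))

    # Case 3: log completion of support call.
    conn.execute("UPDATE support_call SET status = 'CLOSED' WHERE id = ?",
                 (call_id,))

    # Cases 4-7 are indexed point lookups of this kind.
    print(conn.execute("SELECT * FROM support_call WHERE customer = ?",
                       ("Acme",)).fetchall())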

Tracked System Activity: Retail and Distribution

1. check inventory
2. apply payment
3. calculate discount
4. consolidate shipments
5. create backorder
6. delete suspended transaction
7. search for alternative inventory location
8. search for product by keyword
9. update buy list
10. validate credit

A total of 97 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Twenty-two of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
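Cases 1 and 5 above are naturally coupled: an inventory check whose shortfall produces a backorder. The sketch below shows that coupling; the schema and allocation rule are invented for illustration and are not drawn from the study.

    # Illustrative sketch; schema and allocation rule are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, on_hand INTEGER)")
    conn.execute("CREATE TABLE backorder (sku TEXT, qty INTEGER)")
    conn.execute("INSERT INTO inventory VALUES ('SKU-1', 3)")

    def place_order(sku, qty):
        # Case 1: check inventory.
        row = conn.execute("SELECT on_hand FROM inventory WHERE sku = ?",
                           (sku,)).fetchone()
        on_hand = row[0] if row else 0
        if on_hand >= qty:
            conn.execute("UPDATE inventory SET on_hand = on_hand - ? WHERE sku = ?",
                         (qty, sku))
        else:
            # Case 5: create backorder for the shortfall.
            conn.execute("UPDATE inventory SET on_hand = 0 WHERE sku = ?", (sku,))
            conn.execute("INSERT INTO backorder VALUES (?, ?)",
                         (sku, qty - on_hand))
        conn.commit()

    place_order("SKU-1", 5)
    print(conn.execute("SELECT * FROM backorder").fetchall())   # [('SKU-1', 2)]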

Tracked System Activity: Order Entry

1. retrieve customer data
2. verify customer credit standing
3. add ship-to address
4. retrieve product data
5. insert one detail order line
6. insert ten detail order lines
7. insert 100 detail order lines
8. update one detail order line
9. update ten detail order lines
10. update 100 detail order lines
11. insert one detail return line
12. insert ten detail return lines
13. insert 100 detail return lines
14. update one detail return line
15. update ten detail return lines
16. update 100 detail return lines
17. verify in-stock products
18. lookup product location
19. purge one row of order header information
20. purge ten rows of order header information
21. purge 100 rows of order header information
22. purge one row of order detail information
23. purge ten rows of order detail information
24. purge 100 rows of order detail information

A total of 243 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Fifty-six of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
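The order-entry cases scale the same detail-line operations across one, ten and 100 rows, and then purge them. A minimal sketch under an assumed order_detail schema (the names and layout are invented for this example):

    # Illustrative sketch; the order_detail schema is hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE order_detail (
        order_id INTEGER, line_no INTEGER, sku TEXT, qty INTEGER)""")

    def insert_lines(order_id, n):
        # Cases 5-7: insert 1, 10 or 100 detail order lines.
        conn.executemany(
            "INSERT INTO order_detail VALUES (?, ?, ?, ?)",
            [(order_id, i + 1, f"SKU-{i}", 1) for i in range(n)])

    for size in (1, 10, 100):
        insert_lines(order_id=size, n=size)

    # Cases 22-24: purge order detail information (here, one whole order).
    conn.execute("DELETE FROM order_detail WHERE order_id = ?", (100,))
    print(conn.execute("SELECT COUNT(*) FROM order_detail").fetchone())  # (11,)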

Tracked System Activity: Web Usage

1. logon
2. logoff
3. follow link
4. display image
5. access menu
6. post message
7. display post
8. update information field
9. cancel session
10. search via engine
11. search direct
12. view online AV content

A total of 2,034 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. One hundred thirty-nine of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
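The web-usage cases are front-end actions rather than direct DBMS calls. Purely for illustration, the sketch below shows how such actions might be driven and timed over HTTP from Python; the host, paths and form fields are placeholders, and the study's actual load drivers are not published.

    # Illustrative sketch only; all URLs and fields are placeholders.
    import time
    import urllib.parse
    import urllib.request

    BASE = "http://example.com"   # placeholder host, not a system from the study

    def timed_get(path):
        start = time.perf_counter()
        with urllib.request.urlopen(BASE + path) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000

    def timed_post(path, fields):
        data = urllib.parse.urlencode(fields).encode()
        start = time.perf_counter()
        with urllib.request.urlopen(BASE + path, data=data) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000

    print("logon (case 1):", timed_post("/logon", {"user": "u", "pw": "p"}), "ms")
    print("follow link (case 3):", timed_get("/page/2"), "ms")
    print("logoff (case 2):", timed_get("/logoff"), "ms")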


Tracked System Activity: Transactions

1. insert one row of data into single small table
2. insert one row of data into single medium table
3. insert one row of data into each of 3-5 small tables
4. insert one row of data into each of 3-5 medium tables
5. insert one row of data into each of 3-5 combination tables
6. insert ten rows of data into single small table
7. insert ten rows of data into single medium table
8. insert ten rows of data into each of 3-5 small tables
9. insert ten rows of data into each of 3-5 medium tables
10. insert ten rows of data into each of 3-5 combination tables
11. insert 100 rows of data into single small table
12. insert 100 rows of data into single medium table
13. insert 100 rows of data into each of 3-5 small tables
14. insert 100 rows of data into each of 3-5 medium tables
15. insert 100 rows of data into each of 3-5 combination tables
16. read data in single small table
17. read data in single medium table
18. read data in 3-5 small tables
19. read data in 3-5 medium tables
20. read data in 3-5 combination tables
21. update one row of data in each of 3-5 small tables
22. update one row of data in each of 3-5 medium tables
23. update one row of data in each of 3-5 combination tables
24. update ten rows of data in single small table
25. update ten rows of data in single medium table
26. update ten rows of data in each of 3-5 small tables
27. update ten rows of data in each of 3-5 medium tables
28. update ten rows of data in each of 3-5 combination tables
29. update 100 rows of data in single small table
30. update 100 rows of data in single medium table
31. update 100 rows of data in each of 3-5 small tables
32. update 100 rows of data in each of 3-5 medium tables
33. update 100 rows of data in each of 3-5 combination tables
34. delete one row of data from each of 3-5 small tables
35. delete one row of data from each of 3-5 medium tables
36. delete one row of data from each of 3-5 combination tables
37. delete ten rows of data from single small table
38. delete ten rows of data from single medium table
39. delete ten rows of data from each of 3-5 small tables
40. delete ten rows of data from each of 3-5 medium tables
41. delete ten rows of data from each of 3-5 combination tables
42. delete 100 rows of data from single small table
43. delete 100 rows of data from single medium table
44. delete 100 rows of data from each of 3-5 small tables
45. delete 100 rows of data from each of 3-5 medium tables
46. delete 100 rows of data from each of 3-5 large tables
47. delete 100 rows of data from each of 3-5 combination tables

A total of 647 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. One hundred sixty-two of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.