
Performance Tuning Siebel Software

on the Sun™ Platform

Khader Mohiuddin
Engineering Lead, Sun-Siebel Alliance
Market Development Engineering
Sun Microsystems, Inc.

June 2006

Copyright 2006 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved.

U.S. Government Rights - Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements. Use is subject to license terms. This distribution may include materials developed by third parties.

Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and in other countries, exclusively licensed through X/Open Company, Ltd. X/Open is a registered trademark of X/Open Company, Ltd.

Sun, Sun Microsystems, the Sun logo, Solaris, Sun Fire, Sun Enterprise, StorEdge, Java, and “The Network Is The Computer” are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries.

All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon architecture developed by Sun Microsystems, Inc.

This product is covered and controlled by U.S. Export Control laws and may be subject to the export or import laws in other countries. Nuclear, missile, chemical biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially designated nationals lists is strictly prohibited.

DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.

Performance Tuning Siebel Software on the Sun Platform Page 2

Abstract

This paper discusses the performance optimization of a complete Siebel enterprise solution on the Sun platform. The article covers tuning for the Solaris™ Operating System, Siebel software, Oracle database server, Sun StorEdge™ products, and Sun Java™ System Web Server. We also discuss unique features of the Solaris OS that reduce risk while helping to improve the performance and stability of Siebel applications. All of the techniques described here are lessons learned from a series of performance tuning studies that were conducted under the auspices of the Siebel Platform Sizing and Performance Program (PSPP).

1 Tuning for Price/Performance: Summary...................................................................................7

2 Siebel Application Architecture Overview.................................................................................. 9

3 Optimal Sun/Siebel Architecture for Benchmark Workload.................................................. 11

3.1 Hardware and Software Used................................................................................................ 13

4 Workload Description................................................................................................................. 14

4.1 OLTP (Siebel Web Thin Client End Users)............................................................................ 14
4.2 Batch Server Components...................................................................................................... 14

5 10,000 Concurrent Users: Test Results Summary.................................................................... 15

5.1 Response Times and Transaction Throughput....................................................................... 15
5.2 Server Resource Utilization ...................................................................................................16

6 Siebel Scalability on the Sun Platform.......................................................................................17

7 Performance Tuning....................................................................................................................20

7.1 Tuning Solaris OS for Siebel Server.......................................................................................21
7.1.1 Solaris MTmalloc Tuning for Siebel............................................................................... 21
7.1.2 Solaris Alternate Threads Library Usage.........................................................................22
7.1.3 The Solaris Kernel and TCP/IP Tuning Parameters for Siebel Server............................ 23

7.2 Tuning Siebel Server for the Solaris OS.................................................................................23
7.2.1 Tuning Call Center, Sales/Service, and eChannel Siebel Modules................................. 24
7.2.2 Workflow ....................................................................................................................... 26
7.2.3 Assignment Manager Tuning...........................................................................................27
7.2.4 EAI-MQseries.................................................................................................................. 28
7.2.5 EAI-HTTP Adapter..........................................................................................................29

7.3 Siebel Server Scalability Limitations and Solutions.............................................................. 30
7.3.1 The Siebel MaxTasks Upper Limit Problem................................................................... 30
7.3.2 Bloated Siebel Processes (Commonly Mistaken as Memory Leaks)...............................34

7.4 Tuning Sun Java System Web Server......................................................................................36
7.5 Tuning the Siebel Web Server Extension (SWSE)..................................................................38
7.6 Tuning Siebel Standard Oracle Database and Sun Storage.................................................. 39

7.6.1 Optimal Database Configuration..................................................................................... 39
7.6.2 Properly Locating Data on the Disk for Best Performance..............................................40
7.6.3 Disk Layout and Oracle Data Partitioning....................................................................... 41
7.6.4 Solaris MPSS Tuning for Oracle Server..........................................................................44
7.6.5 Hot Table Tuning and Data Growth................................................................................ 47

Performance Tuning Siebel Software on the Sun Platform Page 4

Page 5: Perf Tune Siebel Sun

7.6.6 Oracle Parameters Tuning................................................................................................47
7.6.7 Solaris Kernel Parameters on Oracle Database Server.................................................... 50
7.6.8 SQL Query Tuning...........................................................................................................50
7.6.9 Rollback Segment Tuning............................................................................................... 53
7.6.10 Database Connectivity Using Host Names Adapter...................................................... 53
7.6.11 High I/O with Oracle Shadow Processes Connected to Siebel......................................54

7.7 Siebel Database Connection Pooling.....................................................................................55
7.8 Tuning Sun Java System Directory Server (LDAP)................................................................56

8 Performance Tweaks with No Gains..........................................................................................57

9 Scripts, Tips, and Tricks for Diagnosing Siebel on the Sun Platform.................................... 59

9.1 To Monitor Siebel Open Session Statistics.............................................................................59
9.2 To List the Parameter Settings for a Siebel Server................................................................ 59
9.3 To Find Out All OMs Currently Running for a Component.................................................. 59
9.4 To Find Out the Number of Active Servers for a Component................................................ 60
9.5 To Find Out the Tasks for a Component................................................................................ 60
9.6 To Set Detailed Trace Levels on the Siebel Server Components (Siebel OM)....................... 60
9.7 To Find Out the Number of GUEST Logins for a Component...............................................60
9.8 To Calculate the Memory Usage for an OM..........................................................................61
9.9 To Find the Log File Associated with a Specific OM.............................................................61
9.10 To Produce a Stack Trace for the Current Thread of an OM.............................................. 62
9.11 To Show System-Wide Lock Contention Issues Using lockstat............................................ 62
9.12 To Show the Lock Statistic of an OM Using plockstat ........................................................ 64
9.13 To "Truss" an OM............................................................................................................... 65
9.14 How to Trace the SQL Statements for a Particular Siebel Transaction............................. 65
9.15 Changing the Database Connect String (vi $ODBCINI and Editing the Field .ServerName)....... 66
9.16 Enabling/Disabling Siebel Components...............................................................................66

10 Appendix A: Transaction Response Times............................................................................. 68

11 Appendix B: Database Objects Growth During the Test.......................................................74

12 Appendix C: Oracle statspack Report .................................................................................... 77

13 References...................................................................................................................................79

Introduction

To ensure that the most demanding global enterprise customers can meet their deployment requirements, engineers from Siebel Systems and Sun are working jointly on several engineering projects. Their common goal is to further enhance Siebel server performance on Sun's highly scalable Solaris Operating System.

This article is an effort to document and spread knowledge of tuning and optimizing the Siebel 7 eBusiness Applications Suite on the Solaris platform. All of the techniques discussed here are lessons learned from a series of performance tuning studies conducted under the auspices of the Siebel Platform Sizing and Performance Program (PSPP). The tests conducted under this program are based on real-world scenarios derived from Siebel Systems customers, which reflect some of the most frequently used and most critical components of the Siebel eBusiness Applications Suite. This article also provides tips and best practices, based on our experience, for field staff, benchmark engineers, system administrators, and customers who are interested in achieving optimal performance and scalability with their Siebel-on-Sun installations. The following areas are addressed in this paper:

• What are the unique features of the Solaris Operating System that reduce risk while helping to improve the performance and stability of Siebel applications?

• For maximum scalability at a low cost, what is the optimal way to configure Siebel on the Solaris OS?

• How does Sun’s Chip Multithreading (CMT) technology based on the UltraSPARC IV processor benefit Siebel solutions?

• How can transaction response times be improved for end users in large Siebel-on-Sun deployments?

• How can an Oracle database running on Sun StorEdge be tuned for higher performance for Siebel software?

The performance and scalability testing was conducted at Sun’s Enterprise Technology Center (ETC) in Menlo Park, California, by Sun’s Market Development Engineering (MDE) with assistance from Siebel Systems. The ETC is a massive, distributed testing facility packing more computer power than many Fortune 1000 corporations. A facility of this magnitude provides the resources needed to test the limits of software on a much greater scale than most enterprises will ever require.

1 Tuning for Price/Performance: Summary

The Solaris Operating System is the cornerstone software technology that enables Sun to deliver high performance and scalability for Siebel applications, and the OS contains a number of features that enable customers to tune for optimal price/performance levels. Among the core features of the Solaris OS that contributed to the superior results achieved in the tests:

Solaris MTmalloc: This standard Solaris feature was enabled on the Siebel application servers. MTmalloc is an alternate memory allocator module built specifically for multithreaded applications such as Siebel; its routines provide a faster, concurrent malloc implementation. This feature lowered CPU consumption by 35%. Though memory consumption doubled as a side effect, the overall price/performance benefit is positive. Improvements made to MTmalloc in the Solaris 10 OS reduce the space-inefficiency penalty. More details on this topic are available in Section 7.1.1.

Siebel Process Size: For an application process running on the Solaris OS, the default setting for stack and data size is unlimited. We found that Siebel software running with the default Solaris settings produced bloated stack sizes and runaway processes that compromised the scalability and stability of Siebel on the Solaris OS. Limiting the stack size to 1 MB and raising the data size limit to 4 GB increased both scalability and stability. The two adjustments let a Siebel process use its address space more efficiently, fully utilizing the 4 GB of process address space available to a 32-bit application. A significant drop in the transaction failure rate (only eight failures out of 1.2 million total transactions!) was observed as a result of these two changes. More details on this topic are available in Section 7.3.
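In practice, such limits are applied in the shell that launches the Siebel services, before the server processes are forked. A minimal sketch, assuming a Bourne-style start script (the script name and exact placement vary by Siebel version; the values are the ones cited above):

```shell
# Sketch: apply the process-size limits from the text before starting Siebel.
# ulimit -s and ulimit -d take values in KB in Bourne-style shells.
ulimit -s 1024       # cap stack size at 1 MB
ulimit -d 4194304    # raise data segment limit to 4 GB
# Start the Siebel server from this same shell so the limits are inherited,
# e.g. (illustrative): ./start_server ALL
```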

Solaris Alternate Threads Library: The Solaris 8 OS provides an alternate threads implementation with a one-level (1x1) model in which user-level threads are associated one-to-one with lightweight processes (LWPs). This implementation is simpler than the standard two-level (MxN) model, in which user-level threads are multiplexed over (possibly) fewer LWPs. When used on the Siebel application servers, the 1x1 model provided good performance improvements for the Siebel multithreaded applications. More details on this topic are available in Section 7.1.2.
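On Solaris 8, the alternate library is selected at run time through the linker search path rather than a rebuild. A sketch of the change, assuming the standard Solaris 8 location /usr/lib/lwp for the 1x1 implementation (where this goes in the Siebel start scripts is illustrative):

```shell
# Sketch: put the Solaris 8 alternate (1x1) threads library ahead of the
# default MxN libthread for the Siebel server processes.
LD_LIBRARY_PATH=/usr/lib/lwp:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
```

From Solaris 9 onward the 1x1 model is the default threads implementation, so this step applies only to Solaris 8 deployments.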

Solaris Multiple Page Size Support (MPSS): A standard feature available in the Solaris 9 OS and subsequent versions, MPSS gives applications the ability to use more than one page size on the same OS. Use of this Solaris library allows a larger page size to be set for an application's heap, which results in better performance due to a reduced TLB miss rate. This feature was enabled on the Oracle server and resulted in performance gains. More details on this topic are available in Section 7.6.4.
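Because MPSS is delivered as an interposition library, it can be enabled for an existing binary such as Oracle without recompiling. A sketch using the standard Solaris 9 mechanism (mpss.so.1 and the MPSSHEAP variable); the 4 MB page size is illustrative and should be checked against the sizes reported by pagesize -a:

```shell
# Sketch: request large (4 MB) pages for the heap of processes started
# from this environment, e.g. the Oracle instance (Solaris 9 and later).
LD_PRELOAD=${LD_PRELOAD:+$LD_PRELOAD:}mpss.so.1
MPSSHEAP=4M
export LD_PRELOAD MPSSHEAP
# pagesize -a   # lists the page sizes this platform actually supports
```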

Solaris Resource Manager was used on all tiers of the setup to efficiently manage CPU resources. This resulted in fewer process migrations and translated to higher cache hits.

Sun Storage: Oracle database files were laid out on a Sun StorEdge SE6320 system. I/O balancing was implemented for the Siebel workload to reduce hot spots, and zone bit recording was used on the disks to provide higher throughput for Siebel transactions. Direct I/O was enabled on certain Oracle files and on the Siebel file system. More details on this topic are available in Section 7.6.2.

Connection Pooling: Siebel's database connection pooling was used, providing good CPU and memory benefits. Twenty end users shared a single connection to the database. More details on this topic are available in Section 7.7.
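Connection pooling in Siebel 7 is controlled per Object Manager through server parameters. A hedged srvrmgr sketch (the gateway, enterprise, login, and component alias are illustrative; the parameter names come from Siebel connection-pooling documentation; with 2,000 tasks and the 20:1 ratio above, 100 shared connections would apply):

```shell
# Sketch: set database connection pooling for a Call Center Object Manager
# via the Siebel Server Manager CLI. All names and values are illustrative.
srvrmgr -g mygateway -e siebel -u sadmin -p sadmin <<'EOF'
change param MaxSharedDbConns=100 for comp SCCObjMgr
change param MinSharedDbConns=100 for comp SCCObjMgr
EOF
```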

Usage of Appropriate Sun Hardware: Pilot tests were performed to characterize the performance of the web, application, and database tiers across the current Sun product line. Hardware was chosen based on best price/performance rather than pure performance. Details on this topic are available in Section 3.

2 Siebel Application Architecture Overview

Siebel server is a flexible and scalable application server platform that supports a variety of services operating on the middle tier of the Siebel N-tier architecture, including data integration, workflow, data replication, and synchronization service for mobile clients. Figure 2.1 provides a high-level view of the Siebel application suite architecture.

Figure 2.1

Siebel server includes business logic and infrastructure for running the different CRM modules, as well as connectivity interfaces to the back-end database. It consists of several multithreaded processes, commonly referred to as 'Siebel Object Managers', which can be configured so that several instances run on a single Solaris machine. The Siebel 7.x server makes use of gateway components to track user sessions.

Siebel 7.x has a thin client architecture for connected clients. The Siebel 7.x thin client architecture is enabled through the Siebel plug-in (SWSE – Siebel Web Server Extension) running on the web server. It is the primary interface between the client and the Siebel application server. For more information on the individual Siebel components, please refer to Siebel product documentation at www.siebel.com.

3 Optimal Sun/Siebel Architecture for Benchmark Workload

Sun offers a wide variety of products ranging from hardware and networks to software and storage systems. To obtain the best price/performance from an application, one needs to determine which Sun products are appropriate. This selection process can be achieved by understanding the application's characteristics, picking Sun products suitable to those application characteristics, conducting a series of tests, and then finalizing the choice of machines.

For this project, tests were done to characterize web, application, and database performance across the current Sun product line. Hardware was selected based on best price/performance rather than pure performance criteria. Figure 3.1 illustrates the hardware configuration used in the Sun ETC testing. (Note: "SF" stands for Sun Fire server.)

Figure 3.1 (hardware topology diagram; the recoverable labels are listed below):

• Web servers: SF V440 / SF V880
• Siebel Gateway / Sun Java System Directory Server: SF V440
• Siebel app servers: SF V890, SF E2900, SF V440, SF E2900, SF V480, SF V440
• Storage: Sun StorEdge SE6320
• Load generators: SF V65x

Key:
• Network traffic between load generators and web servers
• Point-to-point GBIC links between each app server and the database server
• Network packets between web servers and app servers

After collecting detailed knowledge of Siebel's performance characteristics, the hardware/network topology depicted in Figure 3.1 was created. The Siebel end-user workload (OLTP) was distributed across three nodes: Sun Fire™ V890, Sun Fire E2900, and Sun Fire V440 servers. Each node ran all the Siebel components under test, that is, Call Center, eService, eSales, and eChannel. Siebel server component jobs (the batch tests EAI-HTTP, EAI-MQ, Workflow, and HTTP-adapter) were distributed across Sun Fire V440 and Sun Fire V480 servers. The Siebel Gateway and the Sun Java System Directory Server (LDAP) were intentionally placed on one physical machine (a Sun Fire V440 server) because both are very low consumers of CPU and memory resources. With Gigabit Ethernet connectivity, network throughput was adequate. Each of the machines had three network interface cards (NICs), which we used to isolate the main categories of network traffic:

1. End-user (load generator)-to-web-server traffic (shown in green)
2. Web-server-to-gateway-to-Siebel-application-server traffic (shown in black)
3. Siebel-application-server-to-database-server traffic (shown in red)

The networking was done using a Cisco Catalyst 4000 router. Two VLANs were created to separate network traffic between (1) and (2), while (3) was further optimized with individual point-to-point network interfaces from each application server to the database. This separation was done to alleviate any network bottlenecks that could have occurred at any tier as a result of simulating thousands of Siebel users. The load generators were all Sun Fire V65x servers running Mercury LoadRunner software. The load was spread across three web server machines by directing different kinds of users (such as Call Center or eService users) to the three web servers.

All of the Siebel application servers belonged to a single Siebel enterprise. A single E2900 server hosted the Oracle database; this was connected to a Sun StorEdge™ SE6320 system using Fibre Channel.

3.1 Hardware and Software Used

Gateway Server/LDAP:
• 1 x Sun Fire V440
  o 1 x 1.2 GHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Sun Java System Directory Server LDAP 4.1 SP9

Application Servers:
• 1 x Sun Fire V890
  o 8 x 1.2 GHz UltraSPARC IV
  o 32 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Siebel 7.5.2
• 1 x Sun Fire E2900
  o 12 x 1.2 GHz UltraSPARC IV
  o 48 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Siebel 7.5.2
• 2 x Sun Fire V440
  o 4 x 1.2 GHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Siebel 7.5.2
• 1 x Sun Fire V480
  o 4 x 1.2 GHz UltraSPARC IIIi+
  o 16 GB RAM
  o Solaris 8 OS 2/02 Generic_108528-27
  o Siebel 7.5.2
  o IBM MQ Series 5.2 FP2

Database Server:
• 1 x Sun Fire E2900
  o 4 x 1.2 GHz UltraSPARC IV
  o 16 GB RAM
  o Solaris 9 OS 2/04 Generic_117150-05
  o Oracle 9.2.0.2 32-bit
  o Sun StorEdge SE6320 storage array, 4 trays (2+2), 4 x 14 x 36 GB 15K rpm FC-AL drives

LoadRunner Drivers:
• 5 x Sun Fire V65x
  o 4 x 3.02 GHz Xeon
  o 3 GB RAM
  o Windows XP SP1
  o Mercury LoadRunner 7.5.1

Web Servers:
• 2 x Sun Fire V440
  o 4 x 1.2 GHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Sun Java System Web Server 6.0 SP2
  o Siebel 7.5.2 SWSE
• 1 x Sun Fire V880
  o 2 x 900 MHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/02 Generic_108528-13
  o Sun Java System Web Server 6.0 SP2
  o Siebel 7.5.2 SWSE

4 Workload Description

All of the tuning discussed in this document is specific to the PSPP workload as defined by Siebel Systems. The workload was based on scenarios derived from large Siebel customers to reflect some of the most frequently used and most critical components of the Siebel eBusiness Application Suite. At a high level, the workload for these tests falls into two categories: (1) OLTP and (2) batch server components.

4.1 OLTP (Siebel Web Thin Client End Users)

OLTP simulated the real-world requirements of a large organization with 10,000 concurrent users involved in the following tasks and functions (in a mixed ratio):

• Call Center (sales and service representatives) – 7,000 concurrent users
• Partner Relationship Management (partner organizations) – eChannel, 1,000 concurrent users
• Web sales (customers) – eSales, 1,000 concurrent users
• Web service (customers) – eService, 1,000 concurrent users

The end users were simulated using LoadRunner version 7.51 SP1 from Mercury Interactive, with a think time in the range of 5 to 55 seconds (or an average of 30 seconds) between user operations.

4.2 Batch Server Components

The batch component of the workload consisted of:

1. Siebel Assignment Manager
2. Siebel Workflow
3. Siebel EAI MQ Series Adapter
4. Siebel EAI-HTTP Adapter

The Siebel 7 Assignment Manager processed assignment transactions for sales opportunities based on employee positions and territories. Siebel 7 Workflow Manager executed workflow steps based on inserted service requests. The Siebel 7 EAI MQ Series Adapter read from and placed transactions into IBM MQ Series queues. The Siebel 7 EAI-HTTP Adapter executed requests between different web infrastructures.

All of the tests were conducted by making sure that both the OLTP and batch components ran in conjunction for a one-hour period (steady state) within the same Siebel enterprise installation.

5 10,000 Concurrent Users: Test Results Summary

The test system demonstrated that Siebel 7 architecture on Sun Fire servers and Oracle 9i database easily scales to 10,000 concurrent users.

● Vertical scalability. The Siebel 7 Server showed excellent scalability within an application server.

● Horizontal scalability. The benchmark demonstrates scalability across multiple servers without degradation.

● Low network utilization. The Siebel 7 Smart Web Architecture and Smart Network Architecture efficiently managed the network, consuming only 5.5 kbps per user.

● Efficient use of the database server. The Siebel 7 Smart Database Connection Pooling and Multiplexing allowed the database to service 10,000 concurrent users and the supporting Siebel 7 Server application services with 480 database connections.
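The per-user figures above scale linearly, so the aggregate demand is easy to sanity-check. A quick sketch of the arithmetic (numbers taken from the bullets above):

```shell
# Aggregate network load: 10,000 users at 5.5 kbps/user, expressed in Mbps.
awk 'BEGIN { printf "%.0f Mbps\n", 10000 * 5.5 / 1000 }'   # prints: 55 Mbps
# Users multiplexed per database connection: 10,000 users over 480 connections.
awk 'BEGIN { printf "%.1f users/conn\n", 10000 / 480 }'    # prints: 20.8 users/conn
```

The roughly 21:1 ratio matches the twenty-users-per-connection pooling described in Section 7.7.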

The actual results of the performance and scalability tests conducted at the Sun ETC for the Siebel workload are summarized in the following sections of the article. Chapter 7 presents specific performance tuning tips and the methodology used to achieve this level of performance.

5.1 Response Times and Transaction Throughput

OLTP Workload                        Users    Avg. Operation       Business Transactions
                                              Response Time (sec)  Throughput/Hour
Call Center (Sales and Service)      7,000    0.126                34,778
Partner Relationship Management      1,000    0.303                12,540
eSales                               1,000    0.274                 6,775
eService                             1,000    0.199                12,870
Totals                              10,000    N/A                  66,964

Batch Workload                       Business Transactions Throughput/Hour
Assignment Manager                    11,427
EAI - HTTP Adapter                   278,352
EAI - MQ Series Adapter              181,319
Workflow Manager                      53,439

Table 5.1

5.2 Server Resource Utilization

Node                                  Functional Use                                        % CPU   Memory
                                                                                            Util.   Util. (MB)
1 x Sun Fire E2900 (4 CPU, 16 GB)     Oracle Database Server                                 51      1,690
1 x Sun Fire V480 (4 CPU, 16 GB)      Siebel App Server – EAI-HTTP + Workflow                65      2,075
1 x Sun Fire V440 (4 CPU, 16 GB)      Siebel App Server – AM + EAI MQ Series                 37      1,742
1 x Sun Fire V440 (4 CPU, 16 GB)      Siebel App Server – 1,600 end users                    91     10,666
1 x Sun Fire E2900 (12 CPU, 48 GB)    Siebel App Server – 4,800 end users                    66     38,956
1 x Sun Fire V890 (8 CPU, 32 GB)      Siebel App Server – 3,600 end users                    68     25,871
1 x Sun Fire V440 (4 CPU, 16 GB)      Sun Java System Web Server – HTTP Adapter, Workflow     8        126
1 x Sun Fire V880 (4 CPU, 16 GB)      Sun Java System Web Server – Application requests      49        186
1 x Sun Fire V440 (4 CPU, 16 GB)      Sun Java System Web Server – Application requests      54        225
1 x Sun Fire V440 (4 CPU, 16 GB)      Siebel Gateway Server / Sun Java System Directory      11         81
                                      Server

Table 5.2

6 Siebel Scalability on the Sun Platform

On the Sun platform, the Siebel application scales extremely well to high concurrent-user counts. With Siebel's flexible distributed architecture and Sun's large server product line, an optimal Siebel-on-Sun deployment can be achieved either with several small (one- to four-CPU) machines or with a single large Sun machine (such as an E15K or E6900). The following graphs depict Siebel scalability on different Sun machine types. All the tuning applied to achieve these results (as documented in the next chapter of this article) can be applied on production systems.

Figure 6.1

Figure 6.2

The V440 server uses UltraSPARC III chips, while the V890 and E2900 use UltraSPARC IV chips. Figure 6.2 shows the difference in scalability between these classes of Sun machines. Customers can use this data for the capacity planning and sizing of their real-world deployments. Keep in mind that these results assume a workload identical to the customer's real-world deployment; otherwise, appropriate adjustments need to be made to the server sizing. For more workload details, please see Section 4.

Cost per Siebel User on Sun Platform

Figure 6.3

*Note: The $/user figure is based purely on hardware cost and does not include environmental factors, facilities, service, or management.

Sun servers provide the best price/performance for Siebel applications. Figure 6.3 depicts the cost per typical Siebel user on the various models of Sun servers tested.

Price/Performance Summary Per Tier of the Sun-Siebel Deployment

Application tier:
Sun Fire V440: 440 users/CPU ($17.57/user)
Sun Fire V890: 662 users/CPU ($23.42/user)
Sun Fire E2900: 606 users/CPU ($37.54/user)

Database tier (Sun Fire E2900): 4902 users/CPU ($5.46/user)

Web tier (Sun Fire V440): 2453 users/CPU ($3.15/user)

Average response time: from 0.126 to 0.303 sec (component-specific) Success rate: > 99.999% (8 failures out of ~1.2 million transactions)

7 Performance Tuning

Many people think solving a performance problem requires some mysterious talent; in fact, there is a definite methodology at work. Figure 7.1 shows the process flow for approaching a performance problem, or simply for tuning a system for best performance.

Figure 7.1

Use of a methodology like the process shown in Figure 7.1 can take the black magic out of performance tuning. Because several books and white papers are dedicated to the subject of performance tuning methodologies, the subject is not covered in detail in this white paper. The scope of this white paper is to provide the reader with specific tunables for the Siebel/Sun platform. The current chapter provides specific suggestions for performance and scalability tuning. These suggestions are based on lessons learned from the tests conducted at the Sun ETC and through years of collaborative engineering between Sun and Siebel.

7.1 Tuning Solaris OS for Siebel Server

7.1.1 Solaris MTmalloc Tuning for Siebel

MTmalloc, the alternate memory allocator module that is standard in the Solaris OS and was built specifically for multithreaded applications such as Siebel, was enabled on the Siebel application servers. The MTmalloc routines provide a faster, concurrent malloc implementation. This feature lowered CPU consumption by 35%. Though memory consumption doubled as a side effect, the overall price/performance benefit was positive. In the Solaris 10 OS, improvements have been made to MTmalloc to reduce this space-inefficiency cost.

Effect of MTmalloc on CPU Benefit and Memory Cost

Figure 7.1.1

In Figure 7.1.1, the blue curve shows the percentage reduction in CPU usage for Siebel applications running on different Sun Fire SMP machines with MTmalloc enabled. The red curve shows the corresponding increase in memory usage due to the use of MTmalloc. As is apparent from Figure 7.1.1, enabling MTmalloc on a 4-CPU machine is not beneficial; there is a 51% increase in memory usage by Siebel while CPU utilization remains the same. Performance gains appear when MTmalloc is used on Siebel servers running on 8-CPU machines and larger. On an 8-CPU machine (V890) one can expect a 28% reduction in CPU utilization when using the MTmalloc feature. Though memory usage by the Siebel application increases when MTmalloc is enabled, this is not a big disadvantage, as the standard Sun machine configuration has ample memory.

The V440 has 4 CPUs with 16 Gbytes of memory. A typical Siebel user on the Sun platform uses about 4 Mbytes of memory while the CPU cost per user is much higher in comparison, so the higher memory footprint is a beneficial trade-off.
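As a rough cross-check of that trade-off, the arithmetic can be sketched in shell. The 4 Mbytes/user and 16 Gbytes figures come from the text; the worst-case doubling factor for MTmalloc is taken from the 4-CPU measurement above, and the variable names are illustrative.

```shell
# Back-of-envelope check (a sketch, not a sizing tool): how many 4-Mbyte
# Siebel users fit in 16 Gbytes if MTmalloc doubles the memory footprint?
mem_gb=16
per_user_mb=4
mtmalloc_factor=2        # worst-case doubling reported for MTmalloc

users=$(( mem_gb * 1024 / (per_user_mb * mtmalloc_factor) ))
echo "~${users} users fit in memory even with MTmalloc enabled"
```

Even in this worst case, memory is unlikely to be the binding constraint at the per-CPU user rates reported for the V440.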

How does one enable MTmalloc?

1. Edit the file $SIEBEL_ROOT/bin/siebmtshw.
2. Add the line LD_PRELOAD=/usr/lib/libmtmalloc.so.
3. Save the file and bounce the Siebel servers.
4. After Siebel restarts, verify that MTmalloc is enabled by executing the following command:

% pldd -p <pid of siebmtshmw> | grep -i mtmalloc

7.1.2 Solaris Alternate Threads Library Usage

The Solaris 8 OS provides an alternate threads implementation with a one-level model (1x1), in which user-level threads are associated one-to-one with lightweight processes (LWPs). This implementation is simpler than the standard (MxN) two-level model, in which user-level threads are multiplexed over possibly fewer lightweight processes. The 1x1 model was used on the Siebel application servers and provided good performance improvements for the multithreaded Siebel applications. As of Siebel 7.5.2 the alternate threads library is enabled by default; if you are using an older version of Siebel, it is not, and you must enable it as follows.

Procedure to enable the alt thread feature:

1. Update the env variable for the UNIX® user that runs Siebel server.
2. If you are using Korn shell, open up the .profile for the Siebel owner and add this to the bottom of the file:
3. export LD_LIBRARY_PATH=/usr/lib/lwp:$LD_LIBRARY_PATH
4. Stop Siebel.
5. Exit from the shell and log in again so the change in .profile takes effect.
6. Start Siebel.

To verify if alt thread is indeed being used:

pldd -p <pidofsiebmtshmw> | grep -i lwp

The preceding command should return the current Solaris lib, which is /usr/lib/lwp/.


7.1.3 The Solaris Kernel and TCP/IP Tuning Parameters for Siebel Server

Parameter                         Scope         Default Value  Tuned Value
shmsys:shminfo_shmmax             /etc/system   -              0xffffffffffffffff
shmsys:shminfo_shmmin             /etc/system   -              100
shmsys:shminfo_shmseg             /etc/system   -              200
semsys:seminfo_semmns             /etc/system   -              12092
semsys:seminfo_semmsl             /etc/system   -              512
semsys:seminfo_semmni             /etc/system   -              4096
semsys:seminfo_semmap             /etc/system   -              4096
semsys:seminfo_semmnu             /etc/system   -              4096
semsys:seminfo_semopm             /etc/system   -              100
semsys:seminfo_semume             /etc/system   -              2048
msgsys:msginfo_msgmni             /etc/system   -              2048
msgsys:msginfo_msgtql             /etc/system   -              2048
msgsys:msginfo_msgssz             /etc/system   -              64
msgsys:msginfo_msgseg             /etc/system   -              32767
msgsys:msginfo_msgmax             /etc/system   -              16384
msgsys:msginfo_msgmnb             /etc/system   -              16384
ip:dohwcksum (for Resonate GBIC)  /etc/system   -              0
rlim_fd_max                       /etc/system   1024           16384
rlim_fd_cur                       /etc/system   64             16384
sq_max_size                       /etc/system   2              0
tcp_time_wait_interval            ndd /dev/tcp  240000         60000
tcp_conn_req_max_q                ndd /dev/tcp  128            1024
tcp_conn_req_max_q0               ndd /dev/tcp  1024           4096
tcp_ip_abort_interval             ndd /dev/tcp  480000         60000
tcp_keepalive_interval            ndd /dev/tcp  7200000        900000
tcp_rexmit_interval_initial       ndd /dev/tcp  3000           3000
tcp_rexmit_interval_max           ndd /dev/tcp  240000         10000
tcp_rexmit_interval_min           ndd /dev/tcp  200            3000
tcp_smallest_anon_port            ndd /dev/tcp  32768          1024
tcp_slow_start_initial            ndd /dev/tcp  1              2
tcp_xmit_hiwat                    ndd /dev/tcp  8129           32768
tcp_fin_wait_2_flush_interval     ndd /dev/tcp  67500          675000
tcp_recv_hiwat                    ndd /dev/tcp  8129           32768

Table 7.1.3

7.2 Tuning Siebel Server for the Solaris OS

The key factor in tuning Siebel server performance is the number of threads or users per Siebel object manager (OM) process. The Siebel server architecture consists of multithreaded server processes servicing different business needs. Currently, Siebel is designed such that one thread of a Siebel OM services one user session or task. The ratio of threads or users per process is configured using the Siebel parameters:

• MinMTServers
• MaxMTServers
• MaxTasks

From several tests conducted it was found that on the Solaris platform with Siebel 7.5 the following users/OM ratios provided optimal performance:

• Call Center – 80 users/OM
• eChannel – 40 users/OM
• eSales – 50 users/OM
• eService – 60 users/OM

As you can see, the optimal ratio of threads per process varies with the Siebel OM and the type of Siebel workload per user. MaxTasks divided by MaxMTServers determines the number of users per process. For example, for 300 users the setting would be MinMTServers=6, MaxMTServers=6, MaxTasks=300. This directs Siebel to distribute the users evenly across the 6 processes, with 50 users on each. The notion of anonymous users must also be considered in this calculation, as discussed in the Call Center section, the eChannel section, and so on. The prstat -v or top command shows how many threads or users are being serviced by a single multithreaded Siebel process.

PID   USERNAME SIZE RSS  STATE PRI NICE TIME    CPU  PROCESS/NLWP
1880  pspp     504M 298M cpu14 28  0    0:00.00 10%  siebmtshmw/69
1868  pspp     461M 125M sleep 58  0    0:00.00 2.5% siebmtshmw/61
1227  pspp     687M 516M cpu3  22  0    0:00.03 1.6% siebmtshmw/62
1751  pspp     630M 447M sleep 59  0    0:00.01 1.5% siebmtshmw/59
1789  pspp     594M 410M sleep 38  0    0:00.02 1.4% siebmtshmw/60
1246  pspp     681M 509M cpu20 38  0    0:00.03 1.2% siebmtshmw/62

A thread count of more than 50 threads per process is explained by the fact that the count also includes some administrative threads. If the MaxTasks/MaxMTServers ratio is greater than 100, performance degrades in terms of longer transaction response times. The optimal users-per-process setting depends on the workload, that is, how busy each user is.
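A quick guard for this rule can be scripted. The 100 users/OM threshold and the 300-user example are from the text; the variable names are illustrative, not Siebel parameters.

```shell
# Sketch: check that a proposed configuration keeps users/process at or
# below 100, the point past which response times degrade.
maxtasks=300
maxmtservers=6

users_per_om=$(( maxtasks / maxmtservers ))
if [ "$users_per_om" -gt 100 ]; then
    echo "WARNING: ${users_per_om} users/OM exceeds 100; raise MaxMTServers"
else
    echo "OK: ${users_per_om} users/OM"
fi
```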

7.2.1 Tuning Call Center, Sales/Service, and eChannel Siebel Modules

Call Center or sccobjmgr-specific information is presented here. Eighty users/process was found to be the optimal ratio.


In the file $SIEBEL_ROOT/bin/enu/uagent.cfg:

● Set EnableCDA = FALSE to disable invoking CDA functionality.
● Set CommEnable = FALSE for Call Center, to disable downloading the CTI bar.
● Set CommConfigManager = FALSE.

It should be noted that the preceding settings are required to be TRUE for scenarios where these functions are required. The benchmark setup required these settings to be disabled.

To decide on the MaxTasks, MTservers, AnonUserPool settings, use the following example:

Target No. of Users        4000
AnonUserPool               400
Buffer                     200
MaxTasks                   4600
MaxMTServers=MinMTServers  58

Table 7.2

If the target number of users is 4000, then AnonUserPool is 10% of 4000, or 400. Allow a 5% buffer of 200, add them all together (4000+400+200=4600), and MaxTasks is 4600. Since we want to run 80 users/process, the MaxMTServers value is 4600/80=57.5, rounded up to 58.
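The sizing arithmetic above can be sketched as a small shell calculation. The 10% anonymous pool, 5% buffer, and 80 users/OM ratios are the Call Center values from the text; the variable names are illustrative, not Siebel parameters.

```shell
# Sketch of the Call Center sizing arithmetic for a 4000-user target.
target=4000
anon_pool=$(( target * 10 / 100 ))            # 10% anonymous user pool
buffer=$(( target * 5 / 100 ))                # 5% head room
maxtasks=$(( target + anon_pool + buffer ))
maxmtservers=$(( (maxtasks + 79) / 80 ))      # 80 users/OM, rounded up

echo "MaxTasks=${maxtasks} MaxMTServers=${maxmtservers} AnonUserPool=${anon_pool}"
```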

The AnonUserPool value is set in the eapps.cfg file on the Siebel Web Server Engine. If the load is distributed across multiple web server machines or instances, simply divide this number. In this example, if two web servers were being used, set AnonUserPool to 200 in each web server’s eapps.cfg file. Here is a snapshot of the file and the setting for Call Center:

[/callcenter]
AnonSessionTimeout = 360
GuestSessionTimeout = 60
SessionTimeout = 300
AnonUserPool = 200
ConnectString = siebel.tcpip.none.none://19.1.1.18:2320/siebel/SCCObjMgr

Set the following env variables in the file $SIEBEL_HOME/siebsrvr/siebenv.csh or siebenv.sh:

export SIEBEL_ASSERT_MODE=0
export SIEBEL_OSD_NLATCH = 7 * Maxtasks + 1000
export SIEBEL_OSD_LATCH = 1.2 * Maxtasks

Set Asserts OFF by setting the environment variable SIEBEL_ASSERT_MODE=0.


OSD LATCH will need to be set higher based on the number of users being run. This is the calculation to be followed:

SIEBEL_OSD_NLATCH = 7 * Maxtasks + 1000
SIEBEL_OSD_LATCH = 1.2 * Maxtasks

Restart Siebel server after setting these environment variables.
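As a sketch, the two latch values can be derived from MaxTasks in shell. Integer arithmetic is used (1.2 x MaxTasks is computed as 12/10), and the 4600 figure is the MaxTasks example from the previous section.

```shell
# Derive the latch environment values from MaxTasks per the formulas above.
maxtasks=4600
osd_nlatch=$(( 7 * maxtasks + 1000 ))    # SIEBEL_OSD_NLATCH
osd_latch=$(( maxtasks * 12 / 10 ))      # SIEBEL_OSD_LATCH = 1.2 * MaxTasks

echo "SIEBEL_OSD_NLATCH=${osd_nlatch}"
echo "SIEBEL_OSD_LATCH=${osd_latch}"
```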

eChannel

The same tuning information as Call Center applies for eChannel, except for two parameters: the users-per-process ratio giving the best performance for this workload was 40, and the optimal AnonUserPool setting found was 30%.

eService

The same tuning information as Call Center applies for eService, except for two parameters. The users per process ratio giving the best performance for this workload was 60. The optimal AnonUserPool setting found was 30%.

eSales

Fifty users per Siebel object manager process provided better response times and throughput. The AnonUserPool setting is 20%.

7.2.2 Workflow

Siebel 7 Workflow Manager executed workflow steps based on inserted service requests. Siebel Workflow can be run in two modes: async or sync. Remote async mode is discussed here. The workflow test consists of 500 Call Center end users working on 20,000 service requests.

Connection pooling of 20:1 was implemented, with a 10% benefit. Modify uagent.cfg and eai.cfg under $SIEBEL_HOME/bin of the batch application server to have the following settings:

● Change EnableCDA=FALSE
● Under [SWE] add:

EnableShuttle=TRUE
EnableReportsFromToolbar=TRUE
EnableSIDataLossWarning=TRUE
EnablePopupInlineQuery=TRUE


7.2.3 Assignment Manager Tuning

The Siebel 7 Assignment Manager (AM) processed assignment transactions for sales opportunities based on employee positions and territories. AM was run in batch assignment mode.

For this component the main tunable is the requests parameter.

Tunable parameter: Requests
Default value: 5000
Value used in benchmark: 20

Description: Maximum number of requests read per iteration. This controls the maximum number of requests WorkMon reads from the requests queue within one iteration.

It did not help to change deletesize to 400 (from the default of 500) or to add indexes to the table S_ESCL_REQ.

After reducing the number of requests per iteration from 5000 (default value) to 20 for component WorkMon, CPU utilization was reduced from 72% to 53% on the Siebel server node where AM and EAI-MQ tests were run together.

Throughput (txns/sec)  Value for the Requests parameter  Average CPU utilization
78,511                 5000                              72.3
4,098                  10                                52.8
98,146                 20                                53.31

Table 7.2.3


The procedure to list/change the parameter requests for Siebel server follows:

1. Connect to server manager at the application server level.
2. List the current value:

   srvrmgr:siebapp2> list param Requests for comp WorkMon

   PA_ALIAS PA_VALUE PA_DATATYPE PA_SCOPE  ..  PA_NAME
   -------- -------- ----------- --------- --- ----------------------
   Requests 5000     Integer     Component ..  Requests per iteration

   1 row returned.

3. Change the value:

   srvrmgr:siebapp2> change param Requests=20 for comp WorkMon
   Command completed successfully

Note: To see the entire list of parameters for comp WorkMon, type the following:

srvrmgr:siebapp2> list param for comp WorkMon

7.2.4 EAI-MQSeries

The Siebel 7.5 EAI MQSeries Adapter read from and placed transactions into IBM MQSeries queues. This test is designed to receive 400,000 messages from IBM MQSeries into the Siebel application. Messages are divided into different categories depending on the type of operations they perform during receive. This test stresses the file system on which the queues reside by performing about 10% database inserts and 10% updates. Persistent queues are used. As a result, the database tuning and disk layout explained in this document are key for performance.

To achieve best throughput the following setup was done:

MinMTServers=MaxMTServers=1, MaxTasks=45 for component MqSeriesSrvRcvr on the Siebel server.

Moving the following directories to a different disk helped to improve performance by alleviating I/O bottlenecks:

/mqm/qmgrs/PERFQMGR/queues/SENDQUEUE    - Send queue
/mqm/qmgrs/PERFQMGR/queues/RECEIVEQUEUE - Receive queue
/mqm/qmgrs/PERFQMGR/queues/LOG(ACTIVE)  - Active logs


MQ series parameter tuning:

PERFQMGR.OAMPipe_msg=2000000 (changed from default)
  in file /var/mqm/qmgrs/PERFQMGR/qmstatus.ini

LogBufferPages=512 (changed from default 17)
  in file /var/mqm/qmgrs/PERFQMGR/qm.ini

LogPrimaryFiles=12 (default value 2)
LogSecondaryFiles=2 (default value 2)
LogFilePages=16384 (changed from default 1024 to the maximum)
LogType=CIRCULAR (default CIRCULAR)
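The on-disk footprint implied by these log settings can be estimated as follows. This sketch assumes MQSeries' 4-Kbyte log pages; treat the page size as an assumption to verify against your MQ version's documentation.

```shell
# Rough circular-log disk footprint: pages/file x page size x file count.
log_file_pages=16384     # tuned value above
page_bytes=4096          # assumed MQ log page size
primary=12
secondary=2

file_mb=$(( log_file_pages * page_bytes / 1048576 ))
total_mb=$(( file_mb * (primary + secondary) ))
echo "${file_mb} Mbytes/file, ${total_mb} Mbytes total"
```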

From all of the above tweaks, a performance improvement of 35% in throughput was measured. Note: It was discovered during the benchmark that MQSeries versions 5.1 and 5.2 have a 640,000 message limit, while 5.3 does not.

7.2.5 EAI-HTTP Adapter

The Siebel 7 EAI-HTTP Adapter executed requests between different web infrastructures. The Siebel EAI-HTTP Adapter Transport Business Service lets one send XML messages over HTTP to a target URL (web site). The Siebel Web Engine serves as the transport to receive XML messages sent over the HTTP protocol into Siebel.

The Siebel EAI component group (compgrp) is enabled on one application server, and the HTTP driver is run on a Sun Fire V65/Windows XP machine.

To achieve the result of 287,352 business transactions/hour, 4 threads were run concurrently, with each thread working on 70,000 records. The optimal values for the Siebel server parameters of type EAIObjMgr for this workload were:

MaxMTServers=2, MinMTServers=2, MaxTasks=10

The other Siebel OM parameters for EAIObjMgr are best left at their defaults; changing SISPERSISSCON from its default of 20 does not help. All of the Oracle database optimizations explained in that section of this paper helped achieve the throughput of 79 objects/sec.
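The two throughput figures quoted are consistent with each other, as a quick shell check shows:

```shell
# 287,352 business transactions/hour expressed per second (integer math).
per_hour=287352
per_sec=$(( per_hour / 3600 ))
echo "${per_sec} objects/sec"
```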

The HTTP driver machine was maxed out, causing slow performance. The reason for this was I/O: the HTTP test program was generating about 3 Gbytes of logs during the test. Once this logging was turned off, as shown below, performance improved.

driver.logresponse=false
driver.printheaders=false

The AnonSessionTimeout and SessionTimeout values in the SWSE were set to 300.

HTTP adapter (EAIObjMgr_enu) tunable parameters:

● MaxTasks - Recommended value at the moment: 10
● MaxMTServers, MinMTServers - Recommended value: 2


● UpperThreshold - Recommended value: 25
● LoadBalanced - Recommended value: true (the default value)
● Driver.Count (at client) - Recommended value: 4

7.3 Siebel Server Scalability Limitations and Solutions

7.3.1 The Siebel MaxTasks Upper Limit Problem

In order to configure a Siebel server to run over 8,000 concurrent users, the Siebel parameter MaxTasks has to be set to a value of 8800. When this was done, the Siebel object manager processes (that is, the Siebel servers) failed to start and logged error messages. This failure was not due to any resource limitation from the Solaris OS, as there were ample amounts of CPU, memory, and swap space on the machine, an E2900 with 12 CPUs and 48 Gbytes of RAM.

Siebel enterprise server logged the following error messages and failed to start:

GenericLog GenericError 1 2004-07-30 21:40:07 (sissrvr.cpp 47(2617) err=2000026 sys=17) SVR-00026: Unable to allocate shared memory
GenericLog GenericError 1 2004-07-30 21:40:07 (scfsis.cpp 5(57) err=2000026 sys=0) SVR-00026: Unable to allocate shared memory
GenericLog GenericError 1 2004-07-30 21:40:07 (listener.cpp 21(157) err=2000026 sys=0) SVR-00026: Unable to allocate shared memory

Explanation

To understand what occurred, it is necessary to review some background on the Siebel server process siebsvc:

The siebsvc process runs as a system service that monitors and controls the state of every Siebel server component operating on that Siebel server. Each Siebel server is an instantiation of the Siebel Server System Service (siebsvc) within the current Siebel Enterprise Server. Siebel server runs as a daemon process in a UNIX environment.

During startup, the Siebel Server System Service (siebsvc) performs the following sequential steps:

1. Retrieve configuration information from the Siebel Gateway Name Server.
2. Create a shared memory file located in the "admin" subdirectory of the Siebel server root directory on UNIX. By default, this file has the name Enterprise_Server_Name.Siebel_Server_Name.shm.

The size of this .shm file is directly proportional to the MaxTasks setting: the higher the number of concurrent users, the higher MaxTasks, and therefore the larger the file.

The Siebel Server System Service deletes this .shm file when it shuts down.


Investigating the Reason for the Failure

The .shm file that is created during startup is memory-mapped and becomes part of the heap of the siebsvc process. That means the process size of siebsvc grows proportionally with the size of the .shm file.

With MaxTasks set to 9,500, a .shm file of 1.15 Gbytes was created during server startup:

siebapp6:/tmp/% ls -l /export/siebsrvr/admin/siebel.sdcv480s002.shm
-rwx------ 1 sunperf other 1212096512 Jan 24 11:44 /export/siebsrvr/admin/siebel.sdcv480s002.shm

And the siebsvc had a process size of 1.14Gbytes when the process died abruptly:

PID   USERNAME SIZE  RSS   STATE PRI NICE TIME    CPU  PROCESS/NLWP
25309 sunperf  1174M 1169M sleep 60  0    0:00:06 6.5% siebsvc/1

A truss of the process reveals that it is trying to mmap a file 1.15Gbytes in size and fails with ENOMEM.

Here is the truss output:

8150: brk(0x5192BF78) = 0
8150: open("/export/siebsrvr/admin/siebel_siebapp6.shm", O_RDWR|O_CREAT|O_EXCL, 0700) = 9
8150: write(9, "\0\0\0\0\0\0\0\0\0\0\0\0".., 1367736320) = 1367736320
8150: mmap(0x00000000, 1367736320, PROT_READ|PROT_WRITE, MAP_SHARED, 9, 0) Err#12 ENOMEM

Had the mmap succeeded, the process would have had a process size greater than 2 Gbytes. Because Siebel is a 32-bit application, it can have a process size of up to 4 Gbytes (2^32) on the Solaris OS. But in our case, the process failed at a size just over 2 Gbytes.


The following were the system resource settings from the failed machine:

sdcv480s002:/export/home/sunperf/18306/siebsrvr/admin/% ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 256
vmemory(kbytes) unlimited

Even though the maximum size of the data segment (heap) is reported as unlimited by the ulimit command above, the actual limit is 2 Gbytes by default on the Solaris OS. The upper limits seen by an application running on the Solaris OS can be verified by using a simple C program that calls the getrlimit API from sys/resource.h. The following program prints the system limits for data, stack, and vmemory:

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>
#include <sys/time.h>
#include <sys/resource.h>

static void showlimit(int resource, char* str)
{
    struct rlimit lim;
    if (getrlimit(resource, &lim) != 0) {
        (void)printf("Couldn't retrieve %s limit\n", str);
        return;
    }
    (void)printf("Current/maximum %s limit is \t%lu / %lu\n",
                 str, lim.rlim_cur, lim.rlim_max);
}

int main()
{
    showlimit(RLIMIT_DATA, "data");
    showlimit(RLIMIT_STACK, "stack");
    showlimit(RLIMIT_VMEM, "vmem");
    return 0;
}

Output from the C program on the failed machine is as follows:

sdcv480s002:/export/siebsrvr/admin/% showlimits
Current/maximum data limit is   2147483647 / 2147483647
Current/maximum stack limit is  8388608 / 2147483647
Current/maximum vmem limit is   2147483647 / 2147483647

From the output shown, it is clear that the processes were bound to a maximum data limit of 2Gbytes on an out-of-the-box Solaris system setup. This limitation is the reason for the failure of the siebsvc process as it tried to grow beyond 2Gbytes.

Solution

The solution is to increase the default system limit for datasize and reduce the stacksize. An increase in datasize creates more room for process address space, and a reduction of stacksize reduces the reserved stack space. Both of these adjustments let a Siebel process use its process address space more efficiently, allowing the total Siebel process size to grow up to 4 Gbytes (the upper limit for a 32-bit application).

1. What are the recommended values for data and stack sizes on Solaris OS while running the Siebel application? How does one change the limits of datasize and stacksize?

Set the datasize to 4 Gbytes (that is, the maximum address space allowed for a 32-bit process) and set the stacksize to any value below 1 Mbyte, depending on the stack's usage during high load. In general, even under very high load the stack may use up to 64 Kbytes; setting its limit to 512 Kbytes will not harm the application.

System limits can be changed using "ulimit" or "limit" user commands depending on the shell.

The following commands change the limits:

ksh:
ulimit -s 512
ulimit -d 4194303

csh:
limit stacksize 512
limit datasize 4194303

2. How does one execute the aforementioned commands?

These commands can be executed directly from ksh or csh before running Siebel. The Siebel processes inherit the limits when the shell forks them.

But the $SIEBEL_ROOT/bin/start_server script is the recommended place to put those commands:

sdcv480s002:/export/home/sunperf/18306/siebsrvr/bin/% more start_server
...
USAGE="usage: start_server [-r <siebel root>] [-e <enterprise>] \
[-L <language code>] [-a] [-f] [-g <gateway:port>] { <server name> ... | ALL }"

ulimit -d 4194303
ulimit -s 512

## set variables used in siebctl command below
...

3. How do I check the system limits for a running process?

The plimit <pid> command prints the system limits as seen by the process, that is:

sdcv480s002:/tmp/% plimit 11600
11600: siebsvc -s siebsrvr -a -g sdcv480s002 -e siebel -s sdcv480s002
   resource              current        maximum
   time(seconds)         unlimited      unlimited
   file(blocks)          unlimited      unlimited
   data(kbytes)          4194303        4194303
   stack(kbytes)         512            512
   coredump(blocks)      unlimited      unlimited
   nofiles(descriptors)  65536          65536
   vmemory(kbytes)       unlimited      unlimited

The preceding setting allows siebsvc to mmap the .shm file, and thereby siebsvc succeeds in forking the rest of the processes; the Siebel server processes start up successfully. This finding enables one to configure MaxTasks greater than 9,000 on the Solaris OS. That means one can get around the scalability limit of 9,000 users per Siebel server, but how much higher can one go? We determined the limit to be 18,000. If MaxTasks is configured above 18,000, Siebel calculates a *.shm file size of about 2 Gbytes; siebsvc tries to mmap this file and fails, as it has hit the address space limit for a 32-bit application process.

The following listing shows the Siebel .shm file with MaxTasks set to 18,000:

sdcv480s002:/export/siebsrvr/admin/% ls -l *.shm ; ls -lh *.shm
-rwx------ 1 sunperf other 2074238976 Jan 25 13:23 siebel.sdcv480s002.shm*
-rwx------ 1 sunperf other 1.9G       Jan 25 13:23 siebel.sdcv480s002.shm*

sdcv480s002:/export/siebsrvr/admin/% plimit 12527
12527: siebsvc -s siebsrvr -a -g sdcv480s002 -e siebel -s sdcv480s002
   resource              current        maximum
   time(seconds)         unlimited      unlimited
   file(blocks)          unlimited      unlimited
   data(kbytes)          4194303        4194303
   stack(kbytes)         512            512
   coredump(blocks)      unlimited      unlimited
   nofiles(descriptors)  65536          65536
   vmemory(kbytes)       unlimited      unlimited

sdcv480s002:/export/siebsrvr/admin/% prstat -s size -u sunperf -a
  PID USERNAME SIZE  RSS   STATE PRI NICE TIME    CPU  PROCESS/NLWP
12527 sunperf  3975M 3966M sleep 59  0    0:00:55 0.0% siebsvc/3
12565 sunperf  2111M 116M  sleep 59  0    0:00:08 0.1% siebmtsh/8
12566 sunperf  2033M 1159M sleep 59  0    0:00:08 3.1% siebmtshmw/10
12564 sunperf  2021M 27M   sleep 59  0    0:00:01 0.0% siebmtsh/12
12563 sunperf  2000M 14M   sleep 59  0    0:00:00 0.0% siebproc/1
26274 sunperf  16M   12M   sleep 59  0    0:01:43 0.0% siebsvc/2

In the rare case where there is a need to run more than 18,000 Siebel users on a single Siebel server node, the way to do it is to install another Siebel server instance on the same node. This works well and is a supported configuration on the Siebel/Sun platform. It is not advisable to configure a higher MaxTasks than needed, as this could affect the performance of the overall Siebel enterprise.
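From the two listings (a MaxTasks of 9,500 yielding a ~1.15-Gbyte .shm file, and 18,000 yielding ~1.9 Gbytes), the per-task cost works out to roughly 115 to 127 Kbytes. A hedged estimator follows; the ~120 Kbytes/task figure is derived from those listings and will vary by Siebel version.

```shell
# Sketch: estimate .shm size for a given MaxTasks and flag settings that
# would push the 32-bit siebsvc past its ~2-Gbyte mmap ceiling.
# per_task_kb is an assumption derived from the listings above.
maxtasks=18000
per_task_kb=120

shm_mb=$(( maxtasks * per_task_kb / 1024 ))
if [ "$shm_mb" -ge 2048 ]; then
    echo "estimated .shm ~${shm_mb} Mbytes: over the 32-bit mmap ceiling"
else
    echo "estimated .shm ~${shm_mb} Mbytes: within the 32-bit limit"
fi
```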

7.3.2 Bloated Siebel Processes (Commonly Mistaken as Memory Leaks)

Symptom

During some of the high concurrent user tests, it was observed that hundreds of users suddenly failed.


Description

The reason for this symptom was that some of the Siebel server processes servicing these users had either hung or crashed. Close observation with repeated replays revealed that the siebmtshmw process memory shot up from 700 Mbytes to 2 Gbytes.

The UltraSPARC IV-based E2900 server can handle up to 30,000 processes and about 87,000 LWPs (threads), so there was no resource limitation from the machine here. The Siebel applications were running into 32-bit process space limits -- but why was the memory per process going from 700 Mbytes to 2 Gbytes? Further debugging led us to the problem: the stack size.

The total memory size of a process is made up of the heap, stack, data, and anon segments. In this case, it was found that the bloating was in the stack, which grew from 64 Kbytes to 1 Gbyte, which was abnormal. pmap is a Solaris utility that provides a detailed breakdown of memory sizes per process.

Here is an output of pmap that shows that the stack segment bloated to about 1Gbyte:

siebapp6@/export/pspp> grep stack pmap.4800mixed.prob.txt   (pmap output unit is Kbytes)
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
C012A000 1043224 1043224 - 1043224 read/write/exec [ stack ]

We reduced the stack size limit to 128 Kbytes and this fixed the problem. A single Siebel server configured with 10,000 MaxTasks now started up successfully, where before it had failed at the mmap call. The 8-Mbyte stack size is the default setting on the Solaris system. Limiting the stack size to 128 Kbytes removed a lot of the instability in our high-load tests, and we were able to run 5,000-user tests without errors, pushing CPU usage up to 80% to 90% on the 12-way E2900 server.

Changing the mainwin address MW_GMA_VADDR=0xc0000000 to other values did not seem to make a big difference. Please do not vary this parameter.

Solution

Change the stack size hard limit of the Siebel process from unlimited to 512 Kbytes.

Stack size on the Solaris OS has a hard limit and a soft limit; the default values are unlimited and 8 Mbytes, respectively. This means that an application process on the Solaris OS can grow its stack anywhere up to the hard limit. Since the default hard limit on the Solaris system is unlimited, the Siebel application processes could grow their stack size all the way up to 2 Gbytes. When this occurs, the total memory size of the Siebel process hits the maximum memory addressable by a 32-bit process, and bad things happen (such as a hang or a crash). Setting the stack limit to 1 Mbyte or a lower value resolved the issue.


How is this done?

sdcv480s002:/export/home/sunperf/18306/siebsrvr/admin/% limit stacksize
stacksize 8192 kbytes

sdcv480s002:/export/siebsrvr/admin/% limit stacksize 512
sdcv480s002:/export/siebsrvr/admin/% limit stacksize
stacksize 512 kbytes

Please note that a large stack limit can inhibit the growth of the data segment, because the total process size upper limit is 4 Gbytes for a 32-bit application. Also, even if the process stack has not grown to a large extent, virtual memory space is reserved for it according to the limit value. While the recommendation to limit stack size to 512 Kbytes worked well for the workload defined in this paper, this setting may have to be tweaked for different Siebel deployments and workloads; the range could be from 512 Kbytes to 1 Mbyte.

7.4 Tuning Sun Java System Web Server

The three main files where tuning can be done are obj.conf, server.xml and magnus.conf.

Edit magnus.conf:
1. Set RqThrottle=4028 in the magnus.conf file under the web server root directory.
2. Set ListenQ 16000.
3. Set ConnQueueSize 8000.
4. Set KeepAliveQueryMeanTime 50.

Edit server.xml:
1. Replace the host name with the IP address.

Edit obj.conf:
1. Turn off access logging.
2. Turn off cgi, jsp, and servlet support.
3. Remove the following lines, since they are not used by Siebel:
   PathCheck fn="check-acl" acl="default"
   PathCheck fn=unix-uri-clean

Tuning parameters used for high user load with Sun Java System Web Servers are listed in the following table.


Parameter                      Scope         Default Value  Tuned Value
shmsys:shminfo_shmmax          /etc/system   -              0xffffffffffffffff
shmsys:shminfo_shmmin          /etc/system   -              100
shmsys:shminfo_shmseg          /etc/system   -              200
semsys:seminfo_semmns          /etc/system   -              12092
semsys:seminfo_semmsl          /etc/system   -              512
semsys:seminfo_semmni          /etc/system   -              4096
semsys:seminfo_semmap          /etc/system   -              4096
semsys:seminfo_semmnu          /etc/system   -              4096
semsys:seminfo_semopm          /etc/system   -              100
semsys:seminfo_semume          /etc/system   -              2048
msgsys:msginfo_msgmni          /etc/system   -              2048
msgsys:msginfo_msgtql          /etc/system   -              2048
msgsys:msginfo_msgssz          /etc/system   -              64
msgsys:msginfo_msgseg          /etc/system   -              32767
msgsys:msginfo_msgmax          /etc/system   -              16384
msgsys:msginfo_msgmnb          /etc/system   -              16384
rlim_fd_max                    /etc/system   1024           16384
rlim_fd_cur                    /etc/system   64             16384
sq_max_size                    /etc/system   2              0
tcp_time_wait_interval         ndd /dev/tcp  240000         60000
tcp_conn_req_max_q             ndd /dev/tcp  128            1024
tcp_conn_req_max_q0            ndd /dev/tcp  1024           4096
tcp_ip_abort_interval          ndd /dev/tcp  480000         60000
tcp_keepalive_interval         ndd /dev/tcp  7200000        900000
tcp_rexmit_interval_initial    ndd /dev/tcp  3000           3000
tcp_rexmit_interval_max        ndd /dev/tcp  240000         10000
tcp_rexmit_interval_min        ndd /dev/tcp  200            3000
tcp_smallest_anon_port         ndd /dev/tcp  32768          1024
tcp_slow_start_initial         ndd /dev/tcp  1              2
tcp_xmit_hiwat                 ndd /dev/tcp  8129           32768
tcp_fin_wait_2_flush_interval  ndd /dev/tcp  67500          675000
tcp_recv_hiwat                 ndd /dev/tcp  8129           32768

Table 7.4
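The ndd-scoped settings in Table 7.4 do not persist across reboots, so they are typically reapplied from a boot-time script (the /etc/system values persist on their own). A sketch of such a script follows; the file name and placement, such as /etc/rc2.d/S99tcptune, are assumptions:

```
#!/bin/sh
# Reapply the TCP tunings from Table 7.4 (Solaris ndd syntax).
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_keepalive_interval 900000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_smallest_anon_port 1024
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat 32768
ndd -set /dev/tcp tcp_recv_hiwat 32768
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 675000
```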


7.5 Tuning the Siebel Web Server Extension (SWSE)

1. In the Siebel Web Plugin installation directory, go to the bin directory. Edit the eapps.cfg file and make the following changes:

• Set AnonUserPool to 15% of <target #users>.
• Set the following settings in the “default” section:
  • GuestSessionTimeout = 60; this is required for scenarios where the user is browsing without logging in.
  • AnonSessionTimeout = 300
  • SessionTimeout = 300
• Set the appropriate AnonUser names/passwords:
  • SADMIN/SADMIN for eChannel and Call Center (database login)
  • GUEST1/GUEST1 for eService and eSales (LDAP login)
  • GUESTERM/GUESTERM for ERM (database login)

The AnonUserPool setting can vary for different types of Siebel users (call center, sales, and so on).

The table below summarizes the individual eapps.cfg settings for each type of Siebel application: Call Center, eChannel, eSales and eService, used in the 10,000 users test.

Parameter             callcenter_enu                       prmportal_enu   esales_enu   eservice_enu

AnonUserName          SADMIN                               GUESTCP         eApps2       eApps3
AnonPassword          SADMIN                               GUESTCP         eApps2       eApps3
AnonUserPool          CC1=420, CC2=525, CC3=315, CC4=210   320             160          360
AnonSessionTimeout    360                                  360             360          360
GuestSessionTimeout   60                                   60              60           60
SessionTimeout        300                                  300             300          300

Table 7.5
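Expressed as an eapps.cfg fragment, the Call Center column of Table 7.5 might look roughly like the following sketch (the [/callcenter_enu] section name and the per-server AnonUserPool value of 420 for CC1 are assumptions based on the table, not taken from an actual configuration file):

```
[/callcenter_enu]
AnonUserName        = SADMIN
AnonPassword        = SADMIN
AnonUserPool        = 420
AnonSessionTimeout  = 360
GuestSessionTimeout = 60
SessionTimeout      = 300
```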

The SWSE stats web page is a very good resource for tuning Siebel.


7.6 Tuning Siebel Standard Oracle Database and Sun Storage

The size of the database used was approximately 140 Gbytes. The database was built to simulate customers with large transaction volumes and data distributions that represented the most common customer data shapes. Table 7.6 shows a sampling of record volumes and sizes in the database for key business entities of the standard Siebel volume database.

Business Entity    Database Table Name   Number of Records   Size in Kbytes

Accounts           S_ORG_EXT             1,897,161           3,145,728
Activities         S_EVT_ACT             8,744,305           6,291,456
Addresses          S_ADDR_ORG            3,058,666           2,097,152
Contacts           S_CONTACTS            3,366,764           4,718,592
Employees          S_EMPLOYEE_ATT        21,000              524
Opportunities      S_OPTY                3,237,794           4,194,304
Orders             S_ORDER               355,297             471,859
Products           S_PROD_INT            226,000             367,001
Quote Items        S_QUOTE_ITEM          1,984,099           2,621,440
Quotes             S_QUOTE_ATT           253,614             524
Service Requests   S_SRV_REQ             5,581,538           4,718,592

Table 7.6

7.6.1 Optimal Database Configuration

Creating a well-planned database from the start requires less tuning and reorganizing at runtime. While many resources are available to facilitate creation of high-performance Oracle databases, most tuning engineers find themselves tweaking a database consisting of thousands of tables and indexes piece by piece. This is both time consuming and prone to mistakes, and eventually one ends up rebuilding the entire database from scratch.

The following approach provides an alternative to tuning a pre-existing, pre-packaged database:

1. Measure the exact space used by each object in the schema. The dbms_space package reports the accurate space used by an index or a table; other sources such as dba_free_space only tell you how much is free out of the total allocated space, which is always more. Next, run the benchmark test and measure the space used again. The difference yields an accurate report of how much each table and index grows during the test. This data can be used to right-size all of the tables, with capacity planned for the growth observed during the test, and also to identify and concentrate on only the hot tables used by the test.

2. Create a new database with multiple index and data tablespaces. The idea is to place all equi-extent-sized tables into their own tablespace. Keeping the data and index objects in their own tablespaces reduces contention and fragmentation, and also provides for easier monitoring. Keeping tables with equal extent sizes in their own tablespace reduces fragmentation because old and new extent allocations are always of the same size within a given tablespace, leaving no room for empty odd-sized pockets in between. This leads to compact data placement, which reduces the number of I/Os done.

3. Build a script to create all of the tables and indexes. This script should have the tables being created in their appropriate tablespaces with the appropriate parameters like freelists, freelist_groups, pctfree, pctused, and so on. Use this script to place all of the tables in their tablespaces and then import the data. This will result in a clean, defragmented, optimized, and right-sized database.

The tablespaces should also be built to be locally managed. This allows the space management to be done locally within the tablespace, unlike default (dictionary managed) tablespaces that write to the system tablespace for every extent change. The list of hot tables for Siebel is available in Appendix B.
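The before-and-after measurement described in step 1 can be automated once the per-object space numbers are exported to flat files. A minimal sketch follows; the two-column report format (object name followed by bytes used) is an assumption for illustration, as is the sample data:

```shell
#!/bin/sh
# Hypothetical space reports, one object per line: "<object> <bytes used>",
# captured via dbms_space before and after the benchmark run.
cat > before.txt <<EOF
S_EVT_ACT 1000
S_OPTY 2000
EOF
cat > after.txt <<EOF
S_EVT_ACT 1500
S_OPTY 2000
EOF

# Print only the objects that grew, with their growth in bytes.
awk 'NR==FNR { before[$1] = $2; next }
     { growth = $2 - before[$1]; if (growth > 0) print $1, growth }' \
    before.txt after.txt
```

Objects reported by this diff are the "hot" tables and indexes worth right-sizing before the next run.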

7.6.2 Properly Locating Data on the Disk for Best Performance

To achieve the incredible capacity of current disk drives, disk manufacturers have implemented zone bit recording. This means that the outer edge of the disk drive has more available storage area than the inside edge; that is, the number of sectors per track decreases as you move toward the center of the disk. Disk drive manufacturers take advantage of this by recording more data on the outer edges. Since the disk drive rotates at a constant speed, the outer tracks have faster transfer rates than the inner tracks. For example, a Seagate 36-Gbyte Cheetah1 drive has a data transfer speed ranging from 57 Mbytes/sec on the inner tracks to 86 Mbytes/sec on the outer tracks -- a 50% improvement in transfer speed. For benchmarking purposes, it is desirable to:

1. Place active large block transfers on the outer edges of the disk to minimize data transfer time.

2. Place active random small block transfers on the outer edges of the disk drive only if active large block transfers are not in the benchmark.

3. Place inactive random small block transfers on the inner sections of disk drive to minimize the impact of the data transfer speed discrepancies.

Further, if the benchmark only deals with small block I/Os, like SPC Benchmark-1™2 benchmarking, the priority is to put the most active LUNs on the outer edge and the less active LUNs on the inner edge of the disk drive.
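The roughly 50% figure quoted above follows directly from the two transfer rates:

```shell
# Relative gain moving data from inner (57 MB/s) to outer (86 MB/s) tracks.
awk 'BEGIN { printf "%.1f%%\n", (86 - 57) / 57 * 100 }'
```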

1 The Cheetah 15K RPM disk drive datasheet can be found at www.seagate.com.
2 Further information about SPC Benchmark-1™ can be found at www.StoragePerformance.org.


Figure 7.6.2: A zone bit recording example with five zones. The outer edge holds the most data and has the fastest transfer rate (86 MB/second on the outer zone versus 57 MB/second on the inner zone).

7.6.3 Disk Layout and Oracle Data Partitioning

An I/O subsystem with low contention and high throughput is key to obtaining high performance with Oracle. After analyzing the Siebel workload, an appropriate layout was designed.

The I/O subsystem consisted of a Sun StorEdge SE6320 connected to the E2900 database server via Fibre Channel. The SE6320 has 2 base + 2 expansion arrays driven through two controllers. Each tray consists of 14 x 36-Gbyte disks at 15,000 RPM, for 56 disks in total providing over 2 Tbytes of storage. Each tray has a cache of 1 Gbyte. All of the trays were formatted in RAID 0 mode and two LUNs per tray were created. Eight striped volumes of 300 Gbytes each were carved; each volume was striped across seven physical disks with a stripe size of 64 Kbytes. Eight file systems (UFS) were built on top of these striped volumes: T4disk1, T4disk2, T4disk3, T4disk4, T4disk5, T4disk6, T4disk7, T4disk8.
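The geometry described above can be sanity-checked with simple arithmetic:

```shell
# 2 base + 2 expansion trays, each with 14 x 36-Gbyte disks.
trays=4
disks=$(( trays * 14 ))
echo "total disks:  $disks"
echo "raw capacity: $(( disks * 36 )) Gbytes"
echo "disks in striped volumes: $(( 8 * 7 ))"
```

This confirms 56 disks, just over 2 Tbytes raw, and that the eight 7-disk volumes account for all 56 spindles.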

The following listing shows the tuning done at the disk array level. Note that cache write-behind and the disk scrubber were turned off.


array00:/:<1> sys list
controller     : 2.5
blocksize      : 64k
cache          : auto
mirror         : auto
mp_support     : none
naca           : off
rd_ahead       : off
recon_rate     : med
sys memsize    : 256 MBytes
cache memsize  : 1024 MBytes
fc_topology    : auto
fc_speed       : 2Gb
disk_scrubber  : off
ondg           : befit
array00:/:<2>

Since Oracle writes every transaction to its redo log files, these files typically have higher I/O activity than other Oracle data files, and the writes to them are sequential. The Oracle redo log files were situated on a dedicated tray using a dedicated controller. Additionally, the LUN containing the redo log files was placed on the outer edge of the physical disks (see Figure 7.6.2: Zone Bit Recording).

The first file system created using a LUN occupies the outer edge of the physical disks. Once the outer edge reaches capacity, the inner sectors are used. This can be used in performance tuning to place highly-used data on outer edges and rarely used data on the inner edge of a disk.

The data tablespaces, index tablespaces, rollback segments, temporary tablespaces, and system tablespace were built using 4-Gbyte datafiles spread across the remaining trays. This ensured that there would be no disk hotspotting, and the spread made effective use of the two controllers available with this setup: one controller with its 1-Gbyte cache for the redo log files, and the other controller and cache for the non-redo data files belonging to Oracle.

The 2,547 data and 12,391 index objects of the Siebel schema were individually sized. Their current space usage and expected growth during the test were accurately measured using the dbms_space procedure. Three data tablespaces were created using the locally managed (bitmapped) feature in Oracle, and similarly three index tablespaces. The extent sizes within these tablespaces were UNIFORM, which ensures that fragmentation does not occur during the numerous deletes, updates, and inserts. The tables and indexes were distributed evenly across these tablespaces based on their size, and their extents were pre-created so that no allocation of extents takes place during the benchmark tests.

Data Partitioning per Oracle Tablespace

RBS            All rollback segment objects.
DATA_512000K   Contained all of the large Siebel tables.
INDX_51200K    Tablespace for the indexes on large tables.
INDX_5120K     Tablespace for the indexes on medium tables.
DATA_51200K    Tablespace to hold the medium Siebel tables.
INDX_512K      Tablespace for all the indexes on small tables.
DATA_5120K     Tablespace for the small Siebel tables.
TEMP           Oracle temporary segments.
TOOLS          Oracle performance measurement objects.
DATA_512K      Tablespace for Siebel small tables.
SYSTEM         Oracle system tablespace.

Tablespace to Logical Volume Mapping

DATA_512000K  /t3disk2/oramst/oramst_data_512000K.01
              /t3disk2/oramst/oramst_data_512000K.04
              /t3disk2/oramst/oramst_data_512000K.07
              /t3disk2/oramst/oramst_data_512000K.10
              /t3disk3/oramst/oramst_data_512000K.02
              /t3disk3/oramst/oramst_data_512000K.05
              /t3disk3/oramst/oramst_data_512000K.08
              /t3disk3/oramst/oramst_data_512000K.11
              /t3disk4/oramst/oramst_data_512000K.03
              /t3disk4/oramst/oramst_data_512000K.06

DATA_51200K   /t3disk2/oramst/oramst_data_51200K.02
              /t3disk3/oramst/oramst_data_51200K.03
              /t3disk4/oramst/oramst_data_51200K.01
              /t3disk4/oramst/oramst_data_51200K.04

DATA_5120K    /t3disk3/oramst/oramst_data_5120K.01

DATA_512K     /t3disk2/oramst/oramst_data_512K.01

INDX_51200K   /t3disk5/oramst/oramst_indx_51200K.02
              /t3disk5/oramst/oramst_indx_51200K.05
              /t3disk5/oramst/oramst_indx_51200K.08
              /t3disk5/oramst/oramst_indx_51200K.11
              /t3disk6/oramst/oramst_indx_51200K.03
              /t3disk6/oramst/oramst_indx_51200K.06
              /t3disk6/oramst/oramst_indx_51200K.09
              /t3disk6/oramst/oramst_indx_51200K.12
              /t3disk7/oramst/oramst_indx_51200K.01
              /t3disk7/oramst/oramst_indx_51200K.04
              /t3disk7/oramst/oramst_indx_51200K.07
              /t3disk7/oramst/oramst_indx_51200K.10

INDX_5120K    /t3disk5/oramst/oramst_indx_5120K.02
              /t3disk6/oramst/oramst_indx_5120K.03
              /t3disk7/oramst/oramst_indx_5120K.01

INDX_512K     /t3disk5/oramst/oramst_indx_512K.01
              /t3disk5/oramst/oramst_indx_512K.04
              /t3disk6/oramst/oramst_indx_512K.02
              /t3disk6/oramst/oramst_indx_512K.05
              /t3disk7/oramst/oramst_indx_512K.03

RBS           /t3disk2/oramst/oramst_rbs.01
              /t3disk2/oramst/oramst_rbs.07
              /t3disk2/oramst/oramst_rbs.13
              /t3disk3/oramst/oramst_rbs.02
              /t3disk3/oramst/oramst_rbs.08
              /t3disk3/oramst/oramst_rbs.14
              /t3disk4/oramst/oramst_rbs.03
              /t3disk4/oramst/oramst_rbs.09
              /t3disk4/oramst/oramst_rbs.15
              /t3disk5/oramst/oramst_rbs.04
              /t3disk5/oramst/oramst_rbs.10
              /t3disk5/oramst/oramst_rbs.16
              /t3disk6/oramst/oramst_rbs.05
              /t3disk6/oramst/oramst_rbs.11
              /t3disk6/oramst/oramst_rbs.17
              /t3disk7/oramst/oramst_rbs.06
              /t3disk7/oramst/oramst_rbs.12
              /t3disk7/oramst/oramst_rbs.18

SYSTEM        /t3disk2/oramst/oramst_system.01

TEMP          /t3disk7/oramst/oramst_temp.12

TOOLS         /t3disk2/oramst/oramst_tools.01
              /t3disk3/oramst/oramst_tools.02
              /t3disk4/oramst/oramst_tools.03
              /t3disk5/oramst/oramst_tools.04
              /t3disk6/oramst/oramst_tools.05
              /t3disk7/oramst/oramst_tools.06

With the preceding setup of Oracle using hardware-level striping, and with Oracle objects placed in different tablespaces, an optimal configuration was reached, with no I/O waits noticed and no single disk being more than 20% occupied during the tests. Veritas was not used, as this setup provided the required I/O throughput.

The following iostat output shows two snapshots at five-second intervals, taken during steady state of the test on the database server. There are minimal reads (r/s), and writes are balanced across all volumes. The exception is c7t1d0, the dedicated T4+ array for the Oracle redo logs. High writes/sec on the redo logs is quite normal and simply indicates that transactions are completing rapidly in the database; the reads/sec, however, is abnormal. This volume is at 27% busy, which is considered borderline high. Fortunately, service times are very low.

Wed Jan 8 15:25:20 2003
                             extended device statistics
  r/s    w/s    kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
  0.0    1.0     0.0     5.6   0.0   0.0     0.0     1.0   0   1  c0t3d0
  1.0   27.6     8.0   220.8   0.0   0.0     0.0     1.3   0   2  c2t7d0
  0.0   26.0     0.0   208.0   0.0   0.0     0.0     0.5   0   1  c3t1d0
  0.6   37.4     4.8   299.2   0.0   0.0     0.0     0.8   0   2  c4t5d0
  0.0   23.4     0.0   187.2   0.0   0.0     0.0     0.6   0   1  c5t1d0
  0.0   10.2     0.0    81.6   0.0   0.0     0.0     0.5   0   0  c6t1d0
  3.8  393.0  1534.3  3143.8   0.0   0.2     0.0     0.6   0  22  c7t1d0
  0.0   28.2     0.0   225.6   0.0   0.0     0.0     0.6   0   1  c8t1d0
Wed Jan 8 15:25:25 2003
                             extended device statistics
  r/s    w/s    kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
  1.4   19.8    11.2   158.4   0.0   0.0     0.0     1.0   0   2  c2t7d0
  0.0   18.2     0.0   145.6   0.0   0.0     0.0     0.5   0   1  c3t1d0
  0.8   39.0     6.4   312.0   0.0   0.0     0.0     0.7   0   2  c4t5d0
  0.0   18.8     0.0   150.4   0.0   0.0     0.0     0.6   0   1  c5t1d0
  0.0   17.4     0.0   139.2   0.0   0.0     0.0     0.5   0   1  c6t1d0
  5.2  463.2  1844.8  3705.6   0.0   0.3     0.0     0.7   0  27  c7t1d0
  0.0   29.4     0.0   235.2   0.0   0.0     0.0     0.5   0   1  c8t1d0


7.6.4 Solaris MPSS Tuning for Oracle Server

Available as a standard feature since Solaris 9 OS, Multiple Page Size Support (MPSS) allows a program to use any hardware-supported page sizes to access portions of virtual memory. MPSS improves virtual memory performance by allowing applications to use large page sizes, therefore improving resource efficiency and reducing overhead, and accomplishes this without recompiling or recoding applications.

Enable MPSS for the Oracle server and shadow processes on Solaris 9 OS or later to reduce the TLB miss rate. It is recommended to use the largest available page size if the TLB miss percentage is high.

Enabling MPSS for Oracle Processes

1. Enable the kernel cage if the machine is not an E10K or F15K, and reboot the system. The kernel cage can be enabled with the following setting in /etc/system:

set kernel_cage_enable=1

Why do we need the kernel cage? It addresses a problem with memory fragmentation. Immediately after a system boot, a sizeable pool of large pages is available, and applications can get all of their mmap() memory allocated from large pages; this can be verified using pmap -xs <pid>. If the machine has been in use for a while, an application may not get the desired large pages until the machine is rebooted, mainly due to fragmentation of physical memory.

We can vastly minimize the fragmentation by enabling the kernel cage. With the kernel cage enabled, the kernel will be allocated from a small contiguous range of memory, minimizing the fragmentation of other pages within the system.

2. Find out all possible hardware address translation (HAT) sizes supported by the system with pagesize -a.

$ pagesize -a
8192
65536
524288
4194304

3. Run trapstat -T. The value shown in the ttl row and %time column is the percentage of time the processor(s) spent in virtual-to-physical memory address translations. Depending on %time, make a wise choice of a page size that will help reduce the iTLB/dTLB miss rate.

4. Create a simple config file for MPSS as follows:

oracle*:<desirable heap size>:<desirable stack size>

Desirable heap and stack size must be one of the supported HAT sizes. By default, 8Kbytes is the page size for heap and stack on all Solaris releases.


5. Set the environment variables MPSSCFGFILE and MPSSERRFILE. MPSSCFGFILE should point to the config file created in Step 4. MPSS writes any runtime errors to the file named by MPSSERRFILE.

6. Preload the MPSS interposing library mpss.so.1, and bring up the Oracle server. It is recommended to set the MPSSCFGFILE, MPSSERRFILE, and LD_PRELOAD environment variables in the Oracle startup script.

With all the environment variables mentioned above, a typical startup script may look like the following:

echo starting listener
lsnrctl start

echo preloading mpss.so.1 ..
MPSSCFGFILE=/tmp/mpsscfg
MPSSERRFILE=/tmp/mpsserr
LD_PRELOAD=/usr/lib/mpss.so.1:$LD_PRELOAD
export MPSSCFGFILE MPSSERRFILE LD_PRELOAD

echo starting oracle server processes ..
sqlplus /nolog <<!
connect / as sysdba
startup pfile=/tmp/oracle/admin/oramst/pfile/initoramst.ora
!

$ cat /tmp/mpsscfg
oracle*:4M:64K

7. Go back to Step 3 and measure the difference in %time. Repeat Steps 4 through 7 until there is a noticeable performance improvement.

Suggested Reading
See the mpss.so.1 man page.
Supporting Multiple Page Sizes in the Solaris Operating System (White Paper) at http://www.solarisinternals.com/si/reading/817-5917.pdf

7.6.5 Hot Table Tuning and Data Growth

During the four-hour test of 10,000 users' OLTP and server component workload, the data in the database grew by 2.48 Gbytes. In total, 256 tables and indexes had new data inserted into them. The following table lists the top 20 tables and indexes by growth in size. For the complete list of tables and indexes that grew, please see Appendix B.


Siebel Object Name   Type    Growth in Bytes

S_DOCK_TXN_LOG       TABLE   1,177,673,728
S_EVT_ACT            TABLE   190,341,120
S_DOCK_TXN_LOG_P1    INDEX   96,116,736
S_DOCK_TXN_LOG_F1    INDEX   52,600,832
S_ACT_EMP            TABLE   46,202,880
S_SRV_REQ            TABLE   34,037,760
S_AUDIT_ITEM         TABLE   29,818,880
S_OPTY_POSTN         TABLE   28,180,480
S_ACT_EMP_M1         INDEX   25,600,000
S_EVT_ACT_M1         INDEX   23,527,424
S_EVT_ACT_M5         INDEX   22,519,808
S_ACT_EMP_U1         INDEX   21,626,880
S_ACT_CONTACT        TABLE   21,135,360
S_EVT_ACT_U1         INDEX   18,391,040
S_ACT_EMP_M3         INDEX   16,850,944
S_EVT_ACT_M9         INDEX   16,670,720
S_EVT_ACT_M7         INDEX   16,547,840
S_AUDIT_ITEM_M2      INDEX   16,277,504
S_ACT_EMP_P1         INDEX   15,187,968
S_AUDIT_ITEM_M1      INDEX   14,131,200

Table 7.6.5

7.6.6 Oracle Parameters Tuning

Here are the key Oracle init.ora parameters that were tuned.

db_cache_size=3048576000

The preceding parameter determines the size of the database buffer cache within Oracle's SGA (System Global Area). Database performance is highly dependent on available memory. In general, more memory increases caching, which reduces physical I/O to the disks. Oracle's SGA is a memory region in the application that caches database tables and other data for processing. With 32-bit Oracle software on a 64-bit Solaris OS, the SGA is limited to 4 Gbytes.

Oracle comes in two basic architectures: 64-bit and 32-bit. The number of address bits determines the maximum size of the virtual address space.

32 bits = 2^32 = 4 Gbytes maximum
64 bits = 2^64 = 16,777,216 Tbytes maximum

For the Siebel 10,000 concurrent users' PSPP workload, the 4Gbyte SGA was sufficient. As a result, the 32-bit Oracle server version was used.
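The address-space maximums above are direct powers of two and can be checked quickly:

```shell
# 2^32 bytes expressed in Gbytes, and 2^64 bytes in Tbytes.
awk 'BEGIN {
    printf "32-bit: %d Gbytes max\n", 2^32 / 2^30
    printf "64-bit: %d Tbytes max\n", 2^64 / 2^40
}'
```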


db_block_max_dirty_target=0
db_writer_processes=4
fast_start_io_target=0

The default value of db_block_max_dirty_target was changed from 4294967294 to 0. Setting this value to 0 disables writing of buffers for incremental checkpointing purposes. This stops deferred checkpointing and achieves CPU and bus improvements. The default value for db_writer_processes is 1. Changing db_writer_processes to 4 starts up four dbwr processes. These three parameter changes drastically reduced wait times in the database and thus improved Siebel overall throughput.

db_block_size=8K – The default value is 2K; an 8K value is optimal for Siebel.

db_block_lru_latches=48 – Specifies the upper bound of the number of LRU latch sets. Set this parameter to a value equal to the desired number of LRU latch sets. Oracle decides whether to use this value or reduce it based on a number of internal checks. If the parameter is not set, Oracle calculates a value for the number of sets based on the number of CPUs. The value calculated by Oracle is usually adequate; increase this value only if misses are higher than 3% in V$LATCH. For the Siebel 10,000 user run on a four-CPU machine, setting the value to 48 was optimal.

distributed_transactions=0 – Setting this value to 0 disables the Oracle background process called reco. Siebel does not use distributed transactions, so we can regain CPU and bus by having one less Oracle background process. The default value is 99.

replication_dependency_tracking=FALSE – Siebel does not use replication, so it is safe to turn this off by setting it to false.

transaction_auditing=FALSE – Writes less redo for every commit. It is the nature of Siebel OLTP to do many small transactions with frequent commits, so setting transaction_auditing to false buys back CPU and bus.

Here is the complete listing of the init.ora file used with all the parameters set for 10,000 Siebel users.

Oracle 9.2.0.2 init.ora file

# Oracle 9.2.0.2 init.ora for Solaris 9, running up to 10000 Siebel 7.5.2 user benchmark.
db_block_size=8192
db_cache_size=3048576000
db_domain=""
db_name=oramst
background_dump_dest=/export/pspp/oracle/admin/oramst/bdump
core_dump_dest=/export/pspp/oracle/admin/oramst/cdump
timed_statistics=FALSE
user_dump_dest=/export/pspp/oracle/admin/oramst/udump
control_files=("/export/pspp/oracle/admin/oramst/ctl/control01.ctl",
               "/export/pspp/oracle/admin/oramst/ctl/control02.ctl",
               "/export/pspp/oracle/admin/oramst/ctl/control03.ctl")


instance_name=oramst
job_queue_processes=0
aq_tm_processes=0
compatible=9.2.0.0.0
hash_join_enabled=TRUE
query_rewrite_enabled=FALSE
star_transformation_enabled=FALSE
java_pool_size=0
large_pool_size=8388608
shared_pool_size=503316480
processes=2500
pga_aggregate_target=25165824
rollback_segments=(rb_001,rb_002,rb_003,rb_004,rb_005,rb_006,rb_007,rb_008,rb_009,rb_010,rb_011,rb_012,rb_013,rb_014,rb_015,rb_016,rb_017,rb_018,rb_019,rb_020,rb_021,rb_022,rb_023,rb_024,rb_025,rb_026,rb_027,rb_028,rb_029,rb_030,rb_031,rb_032,rb_033,rb_034,rb_035,rb_036,rb_037,rb_038,rb_039,rb_040,rb_041,rb_042,rb_043,rb_044,rb_045,rb_046,rb_047,rb_048,rb_049,rb_050,rb_051,rb_052,rb_053,rb_054,rb_055,rb_056,rb_057,rb_058,rb_059,rb_060,rb_061,rb_062,rb_063,rb_064,rb_065,rb_066,rb_067,rb_068,rb_069,rb_070,rb_071,rb_072,rb_073,rb_074,rb_075,rb_076,rb_077,rb_078,rb_079,rb_080,rb_081,rb_082,rb_083,rb_084,rb_085,rb_086,rb_087,rb_088,rb_089,rb_090,rb_091,rb_092,rb_093,rb_094,rb_095,rb_096,rb_097,rb_098,rb_099,rb_100)
log_checkpoint_timeout=10000000000000000
nls_sort=BINARY
sort_area_size=10485760
sort_area_retained_size=10485760
nls_date_format="MM-DD-YYYY:HH24:MI:SS"
transaction_auditing=false
replication_dependency_tracking=false
session_cached_cursors=8000
open_cursors=4048
cursor_space_for_time=TRUE
db_file_multiblock_read_count=8  # stripe size is 64K and not 1M
db_block_checksum=FALSE
log_buffer=10485760
optimizer_mode=RULE
filesystemio_options=setall
pre_page_sga=TRUE
fast_start_mttr_target=0
db_writer_processes=6
distributed_transactions=0
transaction_auditing=FALSE
replication_dependency_tracking=false
#timed_statistics=TRUE  # turned off
max_rollback_segments=120
job_queue_processes=0
java_pool_size=0
db_block_lru_latches=48
db_writer_processes=4
session_cached_cursors=8000
FAST_START_IO_TARGET=0
DB_BLOCK_MAX_DIRTY_TARGET=0
pre_page_sga=TRUE

7.6.7 Solaris Kernel Parameters on Oracle Database Server

Parameter                 Scope          Default Value   Tuned Value

shmsys:shminfo_shmmax     /etc/system                    0xffffffffffffffff
shmsys:shminfo_shmmin     /etc/system                    100
shmsys:shminfo_shmseg     /etc/system                    200
semsys:seminfo_semmns     /etc/system                    16384
semsys:seminfo_semmsl     /etc/system                    4096
semsys:seminfo_semmni     /etc/system                    4096
semsys:seminfo_semmap     /etc/system                    4096
semsys:seminfo_semmnu     /etc/system                    4096
semsys:seminfo_semopm     /etc/system                    4096
semsys:seminfo_semume     /etc/system                    2048
semsys:seminfo_semvmx     /etc/system                    32767
semsys:seminfo_semaem     /etc/system                    16384
msgsys:msginfo_msgmni     /etc/system                    4096
msgsys:msginfo_msgtql     /etc/system                    4096
msgsys:msginfo_msgmax     /etc/system                    16384
msgsys:msginfo_msgmnb     /etc/system                    16384
rlim_fd_max               /etc/system    1024            16384
rlim_fd_cur               /etc/system    64              16384

Table 7.6.7

7.6.8 SQL Query Tuning

During the course of the test, the most resource-intensive and long-running queries were tracked. In general, the best way to tune a query is to change the SQL statement (keeping the result set the same). The other method is to add or drop indexes so the execution plan changes; the latter is the only option in most benchmark tests. We added four additional indexes to the Siebel schema, which helped performance. With Siebel 7.5 there is no support for CBO (cost-based optimization) with the Oracle database; CBO support is available in Siebel 7.7.

The following example shows one of the resource-consuming queries.

Buffer Gets    Executions   Gets per Exec   % Total   Hash Value
-----------    ----------   -------------   -------   ----------
220,402,077    35,696       6,174.4         33.2      2792074251

This query was responsible for 33% of the total buffer gets from all queries during the benchmark tests.

SELECT
  T4.LAST_UPD_BY, T4.ROW_ID, T4.CONFLICT_ID, T4.CREATED_BY, T4.CREATED,
  T4.LAST_UPD, T4.MODIFICATION_NUM, T1.PRI_LST_SUBTYPE_CD, T4.SHIP_METH_CD,
  T1.PRI_LST_NAME, T4.SUBTYPE_CD, T4.FRGHT_CD, T4.NAME, T4.BU_ID, T3.ROW_ID,
  T2.NAME, T1.ROW_ID, T4.CURCY_CD, T1.BU_ID, T4.DESC_TEXT, T4.PAYMENT_TERM_ID
FROM
  ORAPERF.S_PRI_LST_BU T1,
  ORAPERF.S_PAYMENT_TERM T2,
  ORAPERF.S_PARTY T3,
  ORAPERF.S_PRI_LST T4
WHERE
  T4.PAYMENT_TERM_ID = T2.ROW_ID (+)
  AND T1.BU_ID = :V1
  AND T4.ROW_ID = T1.PRI_LST_ID
  AND T1.BU_ID = T3.ROW_ID
  AND ((T1.PRI_LST_SUBTYPE_CD != 'COST LIST'
        AND T1.PRI_LST_SUBTYPE_CD != 'RATE LIST')
       AND (T4.EFF_START_DT <= TO_DATE(:V2,'MM/DD/YYYY HH24:MI:SS')
            AND (T4.EFF_END_DT IS NULL
                 OR T4.EFF_END_DT >= TO_DATE(:V3,'MM/DD/YYYY HH24:MI:SS'))
            AND T1.PRI_LST_NAME LIKE :V4
            AND T4.CURCY_CD = :V5))
ORDER BY T1.BU_ID, T1.PRI_LST_NAME;

Execution plan and statistics before the new index was added:

Execution Plan
----------------------------------------------------------
 0      SELECT STATEMENT Optimizer=RULE
 1    0   NESTED LOOPS (OUTER)
 2    1     NESTED LOOPS
 3    2       NESTED LOOPS
 4    3         TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST_BU'
 5    4           INDEX (RANGE SCAN) OF 'S_PRI_LST_BU_M1' (NON-UNIQUE)
 6    3         TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST'
 7    6           INDEX (UNIQUE SCAN) OF 'S_PRI_LST_P1' (UNIQUE)
 8    2       INDEX (UNIQUE SCAN) OF 'S_PARTY_P1' (UNIQUE)
 9    1     TABLE ACCESS (BY INDEX ROWID) OF 'S_PAYMENT_TERM'
10    9       INDEX (UNIQUE SCAN) OF 'S_PAYMENT_TERM_P1' (UNIQUE)

Statistics
----------------------------------------------------------
    364  recursive calls
      1  db block gets
  41755  consistent gets
      0  physical reads
      0  redo size
 754550  bytes sent via SQL*Net to client
  27817  bytes received via SQL*Net from client
    341  SQL*Net roundtrips to/from client
      4  sorts (memory)
      0  sorts (disk)
   5093  rows processed

New index created:

create index S_PRI_LST_X2 on S_PRI_LST (CURCY_CD, EFF_END_DT, EFF_START_DT)
  STORAGE (INITIAL 512K NEXT 512K MINEXTENTS 1 MAXEXTENTS UNLIMITED
  PCTINCREASE 0 FREELISTS 7 FREELIST GROUPS 7 BUFFER_POOL DEFAULT)
  TABLESPACE INDX NOLOGGING PARALLEL 4;

As the difference in statistics shows, the new index reduced consistent gets from 41,755 to 27,698, a reduction of about one third.
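The improvement can be quantified from the two statistics listings (41,755 consistent gets without the index, 27,698 with it):

```shell
# Percentage reduction in consistent gets after adding S_PRI_LST_X2.
awk 'BEGIN { printf "%.1f%%\n", (41755 - 27698) / 41755 * 100 }'
```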


Execution Plan
----------------------------------------------------------
 0      SELECT STATEMENT Optimizer=RULE
 1    0   SORT (ORDER BY)
 2    1     NESTED LOOPS
 3    2       NESTED LOOPS
 4    3         NESTED LOOPS (OUTER)
 5    4           TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST'
 6    5             INDEX (RANGE SCAN) OF 'S_PRI_LST_X2' (NON-UNIQUE)
 7    4           TABLE ACCESS (BY INDEX ROWID) OF 'S_PAYMENT_TERM'
 8    7             INDEX (UNIQUE SCAN) OF 'S_PAYMENT_TERM_P1' (UNIQUE)
 9    3         TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST_BU'
10    9           INDEX (RANGE SCAN) OF 'S_PRI_LST_BU_U1' (UNIQUE)
11    2       INDEX (UNIQUE SCAN) OF 'S_PARTY_P1' (UNIQUE)

Statistics
----------------------------------------------------------
      0  recursive calls
      0  db block gets
  27698  consistent gets
      0  physical reads
      0  redo size
 754550  bytes sent via SQL*Net to client
  27817  bytes received via SQL*Net from client
    341  SQL*Net roundtrips to/from client
      1  sorts (memory)
      0  sorts (disk)
   5093  rows processed

Similarly, the three other new indexes added were:

• create index S_CTLG_CAT_PROD_F1 on ORAPERF.S_CTLG_CAT_PROD (CTLG_CAT_ID ASC)
    PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE (INITIAL 5120K NEXT 5120K MINEXTENTS 2 MAXEXTENTS UNLIMITED
    PCTINCREASE 0 FREELISTS 47 FREELIST GROUPS 47 BUFFER_POOL DEFAULT)
    TABLESPACE INDX_5120K NOLOGGING;

• create index S_PROG_DEFN_X1 on ORAPERF.S_PROG_DEFN (NAME, REPOSITORY_ID)
    STORAGE (INITIAL 512K NEXT 512K MINEXTENTS 1 MAXEXTENTS UNLIMITED
    PCTINCREASE 0 BUFFER_POOL DEFAULT)
    TABLESPACE INDX_512K NOLOGGING;

• create index S_ESCL_OBJECT_X1 on ORAPERF.S_ESCL_OBJECT (NAME, REPOSITORY_ID, INACTIVE_FLG)
    STORAGE (INITIAL 512K NEXT 512K MINEXTENTS 1 MAXEXTENTS UNLIMITED
    PCTINCREASE 0 BUFFER_POOL DEFAULT)
    TABLESPACE INDX_512K NOLOGGING;

The last two indexes are for assignment manager tests. No inserts occurred on the base tables during these tests.

7.6.9 Rollback Segment Tuning

An incorrect number or size of rollback segments will cause poor performance. The right number and size of rollback segments depend on the application workload. Most OLTP workloads require several small rollback segments. The number of rollback segments should be equal to or greater than the number of concurrent active transactions in the database during peak load; in the Siebel tests this was about 80 during a 10,000 user test.

The size of each rollback segment should be approximately equal to the size in bytes of a user transaction. For the Siebel workload, 100 rollback segments of 20 Mbytes each, with an extent size of 1 Mbyte, were found suitable. Note: if the rollback segments are sized larger than the application requires, valuable space in the database cache is wasted.

Oracle's newer UNDO segments feature, which can be used instead of rollback segments, was not tested with the Siebel application during this project.

7.6.10 Database Connectivity Using Host Names Adapter

It has been observed that high-end Siebel test runs with Oracle as the back end perform better when client-to-Oracle-server connectivity uses the hostnames adapter feature. This Oracle feature provides an alternate method of connecting to the Oracle database server without using the tnsnames.ora file.

Set the GLOBAL_DBNAME to something other than the ORACLE_SID in the listener.ora file on the database server, as shown below. Bounce the Oracle listener after making this change.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver)(PORT = 1521))
      )
    )
    (DESCRIPTION =
      (PROTOCOL_STACK =
        (PRESENTATION = GIOP)
        (SESSION = RAW)
      )
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver)(PORT = 2481))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /export/pspp/oracle)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = oramst.dbserver)
      (ORACLE_HOME = /export/pspp/oracle)
      (SID_NAME = oramst)
    )
  )


On the Siebel applications server, go into the Oracle client installation and delete (or rename) the tnsnames.ora and sqlnet.ora files. You no longer need these as Oracle now connects to the database by resolving the name from the /etc/hosts file.

As root edit the file /etc/hosts on the client machine (that is, the Siebel applications server in this case) and add an entry like the following:

<ip.address of database server> oramst.dbserver

The name oramst.dbserver should match whatever you provided as the GLOBAL_DBNAME in the listener.ora file. This becomes the connect string used to reach this database from any client.

7.6.11 High I/O with Oracle Shadow Processes Connected to Siebel

The disk on which the Oracle binaries were installed was close to 100% busy during the peak load of 1000 concurrent users. This problem was diagnosed as the well-known oraus.msb problem: on Oracle clients that use OCI (Oracle Call Interface), the OCI driver makes thousands of calls to translate messages from the oraus.msb file. Oracle documents this problem under bug ID 2142623.

The Sun workaround for this problem is to cache the oraus.msb file in memory, translating the file accesses and system calls into user-level memory operations. The caching solution is dynamic; no code changes are needed. With Siebel, this workaround reduced the calls: the 100% busy disk condition went away, a 4% reduction in CPU was observed, and transaction response times improved by 11%.

This problem is reported to have been fixed in Oracle 9.2.0.4. Details on how to implement the Sun workaround by using LD_PRELOAD to load an interpose library are available at http://developers.sun.com/solaris/articles/oci_cache.html.
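On Solaris the workaround is applied through the runtime linker rather than by relinking Oracle. A sketch of the environment setup, assuming the interpose library from the article above has been built as /export/pspp/lib/libociucache.so (both the path and the library name here are illustrative, not the article's actual names):

```shell
# Preload the interpose library that caches oraus.msb in memory
# (library name and path are illustrative)
LD_PRELOAD=/export/pspp/lib/libociucache.so
export LD_PRELOAD

# Then start the Siebel server from this shell so every OM process
# inherits the preloaded library.
```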

7.7 Siebel Database Connection Pooling

The database connection pooling feature built into the Siebel server software improves performance. A users-to-database-connections ratio of 20:1 has been proven to give good results with Siebel 7.5 and Oracle 9.2.0.2. In a 2000 user Call Center test this reduced CPU usage at the Siebel server by about 3%, as fewer connections were made from the Siebel server to the database. Siebel memory per user was 33% lower and Oracle memory per user was 79% lower, because 20 Siebel users share the same Oracle connection.

Siebel anonymous users do not use connection pooling. If the anonymous user count is set too high (that is, higher than the recommended 10 to 20%), tasks are wasted, because MaxTasks is a number inclusive of real users. Since each anonymous session holds its own one-to-one database connection, a high count also increases memory and CPU usage on both the database server and the applications server.

How to Enable Connection Pooling

Set the following Siebel parameters at the server level via the Siebel thin client GUI or srvrmgr:

MaxSharedDbConns integer full <number of connections to be used>
MinSharedDbConns integer full <number of connections to be used>
MaxTrxDbConns integer full <number of connections to be used>

Then bounce the Siebel server. For example, if you are set up to run 1000 users, the value for <number of connections to be used> is 1000/20 = 50. Set all three parameters to the same value (50) to direct Siebel to share a single database connection among 20 Siebel users or tasks.

srvrmgr:SIEBSRVR> change param MaxTasks=1100, MaxMTServers=20, MinMTServers=20, MinSharedDbConns=50, MaxSharedDbConns=50, MinTrxDbConns=50 for comp esalesobjmgr_enu
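The arithmetic behind those parameter values can be sketched with a small helper (the function is ours, not part of Siebel; the 20:1 ratio is the one recommended above):

```python
import math

def pooled_db_connections(concurrent_users, users_per_connection=20):
    """Value to use for MinSharedDbConns/MaxSharedDbConns/MinTrxDbConns
    given a target user count and a connection-sharing ratio."""
    return math.ceil(concurrent_users / users_per_connection)

# 1000 users at the proven 20:1 ratio -> 50 shared connections
print(pooled_db_connections(1000))  # 50
```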

How to Check if Connection Pooling is Enabled

During the steady state, log in to dbserver and run:

ps -eaf | grep NO | wc -l

This should return around 50 for this example; if it returns 1000, then connection pooling is not in effect.

7.8 Tuning Sun Java System Directory Server (LDAP)

To prevent connections from timing out, change the following parameter on the LDAP directory server: idletimeout = 15 seconds. Also increase the cache entries parameter for LDAP from its default to 25,000. To change these parameters:

1. Stop the LDAP Server.

2. Change the value associated with cache entries in slapd.conf.

3. Change the value associated with idletimeout in slapd.conf to 15.

4. Restart the LDAP Server.
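In slapd.conf the two changes might look like the following fragment (directive spellings vary across Directory Server releases, so treat these names as illustrative and confirm them against your version's documentation):

```
# slapd.conf fragment (directive names illustrative)
idletimeout 15
cachesize 25000
```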


8 Performance Tweaks with No Gains

This section is important because it sheds light on various myths of performance tuning. Sometimes a tunable that seems certain to yield a performance gain does not, yet it ends up being credited as if it did. These are tunables that mostly help other applications in different scenarios, or that are default settings already in effect. This section lists some of the tuning parameters that provided no gain when tested.

Please note: These observations are specific to the workload, architecture, software versions, and so on used during the project at the Sun ETC labs. The workload is described in Chapter 4. The outcome of certain tunables may vary when implemented with a different workload on a different architecture or configuration.

Changing the mainwin address MW_GMA_VADDR=0xc0000000

Changing this to other values did not seem to make a big difference to performance. This value is set in the siebenv file.

Solaris Kernel Parameter stksize

The default value is 16K (0x4000) on sun4u (Ultra) architecture machines booted in 64-bit mode (the default). Increasing it to 24K (0x6000) with the following settings in the /etc/system file did not provide any performance gains during the tests:

set rpcmod:svc_default_stksize=0x6000
set lwp_default_stksize=0x6000

Siebel Server Parameters

1. Recycle Factor. Enabling this did not provide any performance gains. The default is disabled.
2. SISSPERSISSCONN. This parameter changes the multiplexing ratio between the Siebel server and the web server; its default value is 20. Varying this value did not result in any performance gains for the specific modules tested in this project with the PSPP standard workload (defined in Chapter 4).

Resonate Cache Size (RES_PERSIST_BLOCK_SIZE)

This environment variable is set inside the Resonate startup script. It was raised to as high as 240 Kbytes with no difference in behavior.

Sun Java System Web Server maxprocs

When changed from the default of 1, this parameter starts more than one ns-httpd (web server) process. No gain was measured with a value greater than 1; it was better to use a new web server instance.

Database Connection Pooling with Siebel Server Components

Enabling this for the server component batch workload degraded performance to the point that server processes could not start. Some of the server component modules connect to the database using ODBC, which does not support connection pooling.


Oracle Database Server

1. Larger than required shared_pool_size. For a 10,000 Siebel user benchmark, a value of 400 Mbytes was more than sufficient. Too large a value for shared_pool_size wastes valuable database cache memory.
2. Large SGA. A larger than required SGA size will not improve performance, whereas too small an SGA will degrade performance.
3. Large RBS. Rollback segments larger than the application requires waste space in the database cache. It is better to make the application commit more often.


9 Scripts, Tips, and Tricks for Diagnosing Siebel on the Sun Platform

We found the following tips to be very helpful for diagnosing performance and scalability issues while running Siebel on the Sun platform.

9.1 To Monitor Siebel Open Session Statistics

You can use the following URLs to monitor the amount of time that a Siebel end user transaction is taking within your Siebel enterprise. This data is updated in near real time.

Call Center  http://webserver:port/callcenter_enu/_stats.swe?verbose=high
eSales       http://webserver:port/esales_enu/_stats.swe?verbose=high
eService     http://webserver:port/eservice_enu/_stats.swe?verbose=high
eChannel     http://webserver:port/prmportal_enu/_stats.swe?verbose=high

Table 10.1

The stat page provides a lot of diagnostic information. Watch out for any rows that are in bold. They represent requests that have been waiting for over 10 seconds.

9.2 To List the Parameter Settings for a Siebel Server

Use the following server command to list all parameter settings for Siebel server:

srvrmgr> list params for server servername comp component show PA_ALIAS, PA_VALUE

Parameters of interest are MaxMTServers, MinMTServers, MaxTasks, MinSharedDbConns, MaxSharedDbConns, and MinTrxDbConns.

9.3 To Find Out All OMs Currently Running for a Component

Use the following server command to find out all OMs that are running in a Siebel enterprise, sorted by Siebel component type:

$> srvrmgr /g gateway /e enterprise /u sadmin /p sadmin -c "list tasks for comp component show SV_NAME, CC_ALIAS, TK_PID, TK_DISP_RUNSTATE" | grep Running | sort -k 3 | uniq -c | sort -k 2,2 -k 1,1

The following output came from one of the runs:

 47 siebelapp2_1 eChannelObjMgr_enu 19913 Running
132 siebelapp2_1 eChannelObjMgr_enu 19923 Running
133 siebelapp2_1 eChannelObjMgr_enu 19918 Running
158 siebelapp2_1 eChannelObjMgr_enu 19933 Running
159 siebelapp2_1 eChannelObjMgr_enu 19928 Running
118 siebelapp2_1 eSalesObjMgr_enu 19943 Running
132 siebelapp2_1 eSalesObjMgr_enu 19948 Running
156 siebelapp2_1 eSalesObjMgr_enu 19963 Running
160 siebelapp2_1 eSalesObjMgr_enu 19953 Running
160 siebelapp2_1 eSalesObjMgr_enu 19958 Running
169 siebelapp2_1 eServiceObjMgr_enu 19873 Running
175 siebelapp2_1 eServiceObjMgr_enu 19868 Running
178 siebelapp2_1 eServiceObjMgr_enu 19883 Running
179 siebelapp2_1 eServiceObjMgr_enu 19878 Running
179 siebelapp2_1 eServiceObjMgr_enu 19888 Running
 45 siebelapp2_1 SCCObjMgr_enu 19696 Running
 45 siebelapp2_1 SCCObjMgr_enu 19702 Running
 51 siebelapp2_1 SCCObjMgr_enu 19697 Running
104 siebelapp2_1 SCCObjMgr_enu 19727 Running

Run states are Running, Online, Shutting Down, Shutdown, and Unavailable. Running tasks should be evenly distributed across servers, with totals close to MaxTasks.

9.4 To Find Out the Number of Active Servers for a Component

Use the following server command to find out the number of active MTS Servers for a component:

srvrmgr> list comp component for server servername show SV_NAME, CC_ALIAS, CP_ACTV_MTS_PROCS, CP_MAX_MTS_PROCS

The number of active MTS servers should be close to the number of Max MTS servers.

9.5 To Find Out the Tasks for a Component

Use the following server command to find the tasks for a component:

srvrmgr> list task for comp component server servername order by TK_TASKID

Ordering by task id places the most recently started task at the bottom. It is a good sign if the most recently started tasks (that is, those started in the past few seconds or minutes) are still running. Otherwise, further investigation is required.

9.6 To Set Detailed Trace Levels on the Siebel Server Components (Siebel OM)

Log in to srvrmgr and execute the following commands:

change evtloglvl taskcounters=4 for comp sccobjmgr
change evtloglvl taskcounters=4 for comp eserviceobjmgr

change evtloglvl taskevents=3 for comp sccobjmgr
change evtloglvl taskevents=3 for comp eserviceobjmgr

change evtloglvl mtwaring=2 for comp sccobjmgr
change evtloglvl mtwaring=2 for comp eserviceobjmgr
change evtloglvl set mtInfraTrace = True

9.7 To Find Out the Number of GUEST Logins for a Component

Use the following server command to find out the number of GUEST logins for a component:


$ srvrmgr /g gateway /e enterprise /s server /u sadmin /p sadmin /c "list task for comp component" | grep Running | grep GUEST | wc -l

9.8 To Calculate the Memory Usage for an OM

1. We use the following script, pmem_sum.sh, to calculate the memory usage for an OM:

#!/bin/sh

if [ $# -eq 0 ]; then
    echo "Usage: pmem_sum.sh <pattern>"
    exit 1
fi

WHOAMI=`/usr/ucb/whoami`

PIDS=`/usr/bin/ps -ef | grep $WHOAMI" " | grep $1 | grep -v "grep $1" | grep -v pmem_sum | \
    awk '{ print $2 }'`

for pid in $PIDS
do
    echo 'pmem process :' $pid
    pmem $pid > `uname -n`.$WHOAMI.pmem.$pid
done

pmem $PIDS | grep total | awk 'BEGIN { FS = " " } {print $1,$2,$3,$4,$5,$6}
{tot+=$4} {shared+=$5} {private+=$6} END {print "Total memory used:", tot/1024 "M by "NR" procs. Total Private mem: "private/1024" M Total Shared mem: " shared/1024 "M Actual used memory:" ((private/1024)+(shared/1024/NR)) "M"}'

2. To use it, type the following:

pmem_sum.sh siebmtshmw

9.9 To Find the Log File Associated with a Specific OM

1. Check the server log file for the creation of the multithreaded server process:

ServerLog Startup 1 2003-03-19 19:00:46 Siebel Application Server is ready and awaiting requests
…
ServerLog ProcessCreate 1 2003-03-19 19:00:46 Created multithreaded server process (OS pid = 24796) for Call Center Object Manager (ENU) with task id 22535
…

2. The log file associated with the preceding OM is SCCObjMgr_enu_24796.log.

1021 2003-03-19 19:48:04 2003-03-19 22:23:20 -0800 0000000d 001 001f 0001 09 SCCObjMgr_enu 24796 24992 111 /export/pspp/siebsrvr/enterprises/siebel2/siebelapp1/log/SCCObjMgr_enu_24796.log 7.5.2.210 [16060] ENUENU…


9.10 To Produce a Stack Trace for the Current Thread of an OM

1. Find out the current thread number on which the OM is running (assuming the pid is 24987):

% > cat SCCObjMgr_enu_24987.log
1021 2003-03-19 19:51:30 2003-03-19 22:19:41 -0800 0000000a 001 001f 0001 09 SCCObjMgr_enu 24987 24982 93 /export/pspp/siebsrvr/enterprises/siebel2/siebelapp1/log/SCCObjMgr_enu_24987.log 7.5.2.210 [16060] ENUENU…

2. The thread number for the preceding example is 93.

3. Use pstack to produce the stack trace:

$ > pstack 24987 | sed -n '/lwp# 93/,/lwp# 94/p'

----------------- lwp# 93 / thread# 93 --------------------

7df44b7c lwp_mutex_lock (c00000c0)
7df40dc4 mutex_lock_kernel (4ea73a00, 0, 7df581b8, 7df56000, 0, c00000c0) + c8
7df41a64 mutex_lock_internal (4ea73a00, 7df581ac, 0, 7df56000, 1, 0) + 44c
7e3c430c CloseHandle (11edc, 7e4933a8, c01f08c8, 7ea003e4, c1528, 4ea73a98) + a8
7ea96958 __1cKCWinThread2T6M_v_ (7257920, 2, c1538, 1d, 0, 0) + 14
7ea97768 __SLIP.DELETER__B (7257920, 1, 7ebc37c0, 7ea00294, dd8f8, 4a17f81c) + 4
7ea965f0 __1cMAfxEndThread6FIi_v_ (7257ab8, 7257920, 0, 1, 1, 0) + 58
7edd2c6c __1cVOSDSolarisThreadStart6Fpv_0_ (7aba9d0, 1, c01f08c8, 51ecd, 1, 0) + 50
7fb411bc __1cUWslThreadProcWrapper6Fpv_I_ (7aba9e8, 7e4933a8, c01f08c8, c01f08c8, 0, ffffffff) + 48
7ea9633c __1cP_AfxThreadEntry6Fpv_I_ (51ecc, ffffffff, 1, 7ea9787c, 4000, 4a17fe10) + 114
7e3ca658 __1cIMwThread6Fpv_v_ (1, 7e4a6d00, 7e496400, c0034640, c01f0458, c01f08c8) + 2ac
7df44970 _lwp_start (0, 0, 0, 0, 0, 0)

----------------- lwp# 94 / thread# 94 --------------------

9.11 To Show System-Wide Lock Contention Issues Using lockstat

1. You can use lockstat to find out many things about lock contention. One interesting question is, "What is the most contended lock in the system?"

2. The following shows the system lock contention during one of the 4600 user runs with large latch values and double ramp-up time.

# lockstat sleep 5

Adaptive mutex spin: 17641 events in 4.998 seconds (3529 events/sec)

Count indv cuml rcnt spin  Lock           Caller
-------------------------------------------------------------------------------
 3403  19%  19% 1.00   51  0x30017e123e0  hmestart+0x1c8
 3381  19%  38% 1.00  130  service_queue  background+0x130
 3315  19%  57% 1.00  136  service_queue  background+0xdc
 2142  12%  69% 1.00   86  service_queue  qenable_locked+0x38
  853   5%  74% 1.00   41  0x30017e123e0  hmeintr+0x2dc
…
    1   0% 100% 1.00    5  0x300267b75f0  lwp_unpark+0x60
    1   0% 100% 1.00   18  0x3001d9a79c8  background+0xb0
-------------------------------------------------------------------------------

Adaptive mutex block: 100 events in 4.998 seconds (20 events/sec)

Count indv cuml rcnt    nsec  Lock           Caller
----------------------------------------------------------------------------
   25  25%  25% 1.00   40179  0x30017e123e0  hmeintr+0x2dc
    8   8%  33% 1.00  765800  0x30017e123e0  hmestart+0x1c8
    6   6%  39% 1.00  102226  service_queue  background+0xdc
    5   5%  44% 1.00   93376  service_queue  background+0x130
…
    1   1% 100% 1.00   74480  0x300009ab000  callout_execute+0x98
----------------------------------------------------------------------------

Spin lock spin: 18814 events in 4.998 seconds (3764 events/sec)

Count indv cuml rcnt  spin  Lock                   Caller
----------------------------------------------------------------------------
 2895  15%  15% 1.00  2416  sleepq_head+0x8d8      cv_signal+0x38
  557   3%  18% 1.00  1184  cpu[10]+0x78           disp_getbest+0xc
  486   3%  21% 1.00  1093  cpu[2]+0x78            disp_getbest+0xc
…
    1   0% 100% 1.00  1001  turnstile_table+0xf68  turnstile_lookup+0x50
    1   0% 100% 1.00  1436  turnstile_table+0xbf8  turnstile_lookup+0x50
    1   0% 100% 1.00  1618  turnstile_table+0xc18  turnstile_lookup+0x50
----------------------------------------------------------------------------

Thread lock spin: 33 events in 4.998 seconds (7 events/sec)

Count indv cuml rcnt  spin  Lock               Caller
----------------------------------------------------------------------------
    2   6%   6% 1.00   832  sleepq_head+0x8d8  setrun+0x4
    2   6%  12% 1.00   112  cpu[3]+0xb8        ts_tick+0xc
    2   6%  18% 1.00   421  cpu[8]+0x78        ts_tick+0xc
…
    1   3%  97% 1.00     1  cpu[14]+0x78       turnstile_block+0x20c
    1   3% 100% 1.00   919  sleepq_head+0x328  ts_tick+0xc
----------------------------------------------------------------------------

R/W writer blocked by writer: 73 events in 4.998 seconds (15 events/sec)

Count indv cuml rcnt    nsec  Lock           Caller
----------------------------------------------------------------------------
    8  11%  11% 1.00  100830  0x300274e5600  segvn_setprot+0x34
    5   7%  18% 1.00   87520  0x30029577508  segvn_setprot+0x34
    4   5%  23% 1.00   96020  0x3002744a388  segvn_setprot+0x34
…
    1   1%  99% 1.00  152960  0x3001e296650  segvn_setprot+0x34
    1   1% 100% 1.00  246960  0x300295764e0  segvn_setprot+0x34
----------------------------------------------------------------------------

R/W writer blocked by readers: 40 events in 4.998 seconds (8 events/sec)

Count indv cuml rcnt    nsec  Lock           Caller
----------------------------------------------------------------------------
    4  10%  10% 1.00   54860  0x300274e5600  segvn_setprot+0x34
    3   8%  18% 1.00   55733  0x3002744a388  segvn_setprot+0x34
    3   8%  25% 1.00  102240  0x3001c729668  segvn_setprot+0x34
…
    1   2%  98% 1.00   48720  0x3002759b500  segvn_setprot+0x34
    1   2% 100% 1.00   46480  0x300295764e0  segvn_setprot+0x34
----------------------------------------------------------------------------

R/W reader blocked by writer: 52 events in 4.998 seconds (10 events/sec)

Count indv cuml rcnt    nsec  Lock           Caller
----------------------------------------------------------------------------
    5  10%  10% 1.00  131488  0x300274e5600  segvn_fault+0x38
    3   6%  15% 1.00  111840  0x3001a62b940  segvn_fault+0x38
    3   6%  21% 1.00  139253  0x3002792f2a0  segvn_fault+0x38
…
    1   2%  98% 1.00   98400  0x3001e296650  segvn_fault+0x38
    1   2% 100% 1.00  100640  0x300295764e0  segvn_fault+0x38
----------------------------------------------------------------------------

Lockstat record failure: 5 events in 4.998 seconds (1 events/sec)

Count indv cuml rcnt  Lock           Caller
----------------------------------------------------------------------------
    5 100% 100% 0.00  lockstat_lock  lockstat_record
----------------------------------------------------------------------------

9.12 To Show the Lock Statistics of an OM Using plockstat

1. The syntax is: plockstat [ -o outfile ] -p pid. The program grabs a process and shows the lock statistics upon exit or interrupt.

2. The following shows the lock statistics of an OM during one of the 4600 user runs with large latch values and double ramp up time:

$> plockstat -p 4027
^C

----------- mutex lock statistics -----------
    lock  try_lock  sleep         avg sleep  avg hold
   count     count   fail  count  time usec  time usec  location: name
    2149         0      0      1       5218        142  siebmtshmw: __environ_lock
    2666         0      0      0          0          3  [heap]: 0x9ebd0
     948         0      0      0          0          1  [heap]: 0x9f490
     312         0      0      2        351         88  [heap]: 0x9f4c8
     447         0      0      0          0          2  [heap]: 0x9f868
     237         0      0      0          0        101  [heap]: 0x9f8a0
    2464         0      0      1       4469          2  [heap]: 0xa00f0
       1         0      0      0          0         11  [heap]: 0x17474bc0
…
     219         0      0      0          0          2  libsscassmc: m_cacheLock+0x8
      41        41      0      0          0          2  0x79a2a828
  152295         0      0     15      11407          1  libthread: tdb_hash_lock
    2631         0      0     10     297603        468  libc: _time_lock
 1807525         0      0  16762      59752         14  libc: __malloc_lock

----------- condvar statistics -----------
  cvwait  avg sleep  tmwait  timout  avg sleep  signal  brcast
   count  time usec   count   count  time usec   count   count  location: name
       0          0      41      40    4575290       0       0  [heap]: 0x2feec30
       8   16413463       0       0          0       8       0  [heap]: 0x305fce8
      20    7506539       0       0          0      20       0  [heap]: 0x4fafbe8
      16    6845818       0       0          0      16       0  [heap]: 0x510a8d8
…
      12    8960055       0       0          0      12       0  [heap]: 0x110f6138
      13   10375600       0       0          0      13       0  [heap]: 0x1113e040

----------- readers/writer lock statistics -----------
  rdlock  try_lock  sleep         avg sleep  wrlock  try_lock  sleep         avg sleep  avg hold
   count     count   fail  count  time usec   count     count   fail  count  time usec  time usec  location: name
     382         0      0      0          0       0         0      0      0          0          0  [heap]: 0x485c2c0
  102100         0      0      0          0       0         0      0      0          0          0  libsscfdm: g_CTsharedLock

9.13 To "Truss" an OM

1. Modify siebmtshw:

#!/bin/ksh

. $MWHOME/setmwruntime

MWUSER_DIRECTORY=${MWHOME}/system

LD_LIBRARY_PATH=/usr/lib/lwp:${LD_LIBRARY_PATH}

#exec siebmtshmw $@
truss -l -o /tmp/$$.siebmtshmw.trc siebmtshmw $@

2. After you start up the server, this wrapper creates the truss output in a file named pid.siebmtshmw.trc in /tmp.

9.14 How to Trace the SQL Statements for a Particular Siebel Transaction

If response times are high, or if you think that the database is a bottleneck, you can check how long the SQL queries are taking to execute by running a SQL trace on a LoadRunner script. The SQL trace is run through Siebel and tracks all of the database activity and how long things take to execute. If execution times are too high, there is a problem with the database configuration, which most likely is contributing to high response times. To run a SQL Trace on a particular script, follow these instructions:

1. Configure the Siebel environment to the default settings (that is, the component should have only one OM, and so on).

2. Open the LoadRunner script in question in the Virtual User Generator.
3. Place a breakpoint at the end of Action 1 and before Action 2. This will stop the user at the breakpoint.
4. Run a user.


5. Once the user has stopped at the breakpoint, enable SQL tracing via srvrmgr:

change evtloglvl ObjMgrSqlLog=4 for comp <component>

6. Press Play, which will resume the user.
7. Wait until the user has finished.
8. Under the $SIEBEL_SERVER_HOME/enterprises/<enterprise>/<server> directory, there will be a log for the component that is running. This log contains detailed information on the database activity, including how long the SQL queries took to execute. Search for high execution times (greater than 0.01 seconds).

9. Once you are done, disable SQL tracing via srvrmgr:

change evtloglvl ObjMgrSqlLog=0 for comp <component>
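Sifting the component log for slow statements can be scripted. The sketch below assumes, purely for illustration, that an SQL event line ends with its execution time in seconds; the real ObjMgrSqlLog layout varies by Siebel release, so adjust the pattern to your actual log format:

```python
import re

# Hypothetical pattern: a log line whose last field is an execution
# time such as "... Execute ... 0.0312". Adapt to the real format.
TIME_AT_END = re.compile(r'(\d+\.\d+)\s*$')

def slow_sql_lines(lines, threshold=0.01):
    """Yield (seconds, line) for log lines whose trailing time exceeds threshold."""
    for line in lines:
        m = TIME_AT_END.search(line)
        if m and float(m.group(1)) > threshold:
            yield float(m.group(1)), line.rstrip()

sample = [
    "SQL Execute SELECT ... FROM S_CONTACT 0.0040",
    "SQL Execute SELECT ... FROM S_OPTY 0.0312",
]
for secs, line in slow_sql_lines(sample):
    print(secs, line)
```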

9.15 Changing the Database Connect String

vi $ODBCINI and edit the ServerName field.

1. srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin /s <server name>

2. At the srvrmgr prompt, verify and change the value of the DSConnectString parameter:
   i. list params for named subsystem serverdatasrc
   ii. change param DSConnectString=<new value> for named subsystem serverdatasrc

9.16 Enabling/Disabling Siebel Components

Disabling a Component

1. Bring up the srvrmgr console:
   srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin /s <server name>
2. Disable the component:
   disable comp <component name>
3. List the components and verify their status:
   list components

Enabling a Component

Disabling a component may disable the component definition, so you may need to enable the component definition.

1. Bring up the srvrmgr console at the enterprise level (just do not use the "/s" switch).
2. srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin
3. Enable the component definition: enable compdef <component name>
   (Note: This enables the component definition at all active servers; you may need to disable the component at servers where you don't need the component.)


4. Bring up the srvrmgr console at the server level:
   srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin /s <server name>
5. Enable the component definition at the server level:
   enable compdef <component name>
6. Bounce the gateway and all active servers.

Note: Sometimes the component may not be enabled even after following these steps. In such cases you may need to enable the component group at the enterprise level before enabling the actual component:

enable compgrp <component group name>


10 Appendix A: Transaction Response Times

These are the different transactions executed by the OLTP workload of 10,000 Siebel users. The figures here are averages across all 10,000 users over a one-hour steady state, executing multiple iterations with a 30 second average think time between transactions. The average of all of the transaction averages was 0.167 seconds.
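The per-module averages in the table can be reproduced from its Average column. For example, averaging the seven eService2 transactions yields the 0.13914286 figure shown in the table:

```python
# Average response times (seconds) of the seven eService2 transactions,
# taken directly from the Average column of the table.
eservice2_avgs = [0.14, 0.085, 0.324, 0.193, 0.126, 0.064, 0.042]

mean = sum(eservice2_avgs) / len(eservice2_avgs)
print(round(mean, 8))  # 0.13914286
```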

Transaction Name                          Min     Average  Max
eSvc2_Query_Branch                        0.094   0.14     3.703
eSvc2_Mail_And_Fax                        0.063   0.085    3.156
eSvc2_Login_Page                          0.203   0.324    4.578
eSvc2_Email                               0.078   0.193    5.563
eSvc2_Contact_Us_3                        0.063   0.126    3.859
eSvc2_Contact_Us                          0.047   0.064    4.313
eSvc2_Branch_Locator                      0.031   0.042    3.422
Eservice2 Avg                                     0.13914286
eSvc1_Query_Product                       0.031   0.052    2.672
eSvc1_Product_Button                      0.016   0.033    0.125
eSvc1_OK_Product                          0.016   0.025    0.266
eSvc1_My_SR                               0.047   0.084    4.031
eSvc1_My_Account                          0.063   0.088    3.813
eSvc1_Logout                              0.031   0.093    3.609
eSvc1_Login_Page                          0.203   0.351    3.984
eSvc1_Login                               0.266   0.446    3.75
eSvc1_Enter_Info_And_Submit               0.047   0.098    3.453
eSvc1_Drilldown_Submit_SR                 0.063   0.168    4.344
eSvc1_Drilldown_SR                        0.078   0.115    2.969
eSvc1_Continue                            0.047   0.086    3.359
eSrevice2_Contact_Us_2                    0.031   0.06     3.797

                                                  0.13069231
eSls3_338LogOut                           0.063   0.1      0.563
eSls3_337UserProfile                      0.031   0.049    0.156
eSls3_336MyAccount                        0.094   0.13     1.438
eSls3_335DDOrder                          0.141   0.239    2.922
eSls3_334MyOrders                         0.094   0.128    0.203
eSls3_333MyAccount                        0.094   0.125    0.859
eSls3_332ConfirmOrder                     0.531   0.676    1.625
eSls3_331Continue                         0.266   0.428    2.484
eSls3_330EditShippingDeatils              0.078   0.113    2.016
eSls3_329EditShipping                     0.078   0.098    0.25
eSls3_328EnterCreditInfo                  0.406   0.589    2.875
eSls3_327CheckOut                         0.219   0.341    2.578
eSls3_326GotoCart                         0.203   0.243    0.625
eSls3_325ChangeQuantAddtocart             0.281   0.453    3.172
eSls3_324DDproduct                        0.406   0.503    3.328
eSls3_323BackFromcompare                  0       0        0
eSls3_322Pick4ProdCompare                 0.344   0.413    3.125
eSls3_321Color_RedSize_MedManuf_Acme      0.531   0.6      0.797


eSls3_320SelectProdFamily                 0.109   0.181    2.281
eSls3_319ParamSearch                      0.078   0.101    0.328
eSls3_318Home                             0.188   0.251    3.203
eSls3_317AddtoCart                        0.203   0.283    3.438
eSls3_316DDProdBabyHoodPolice             0.125   0.148    0.219
eSls3_315NextRecs                         0.094   0.097    0.172
eSls3_314NextRecs                         0.094   0.141    2.828
eSls3_313DDSubCategory1.1                 0.172   0.226    3.031
eSls3_312DDCategory1                      0.125   0.178    2.609
eSls3_311CatalogTab                       0.141   0.17     0.359
eSls3_310AddtoCart                        0.281   0.402    2.5
eSls3_309DDProdBabyHood                   0.109   0.134    0.266
eSls3_308NextRecs                         0.078   0.097    0.188
eSls3_307NextRecs                         0.094   0.1      0.516
eSls3_306DDSubCategory1.1                 0.156   0.196    2.047
eSls3_305DDCategory1                      0.109   0.144    0.375
eSls3_304CatalogTab                       0.109   0.146    1.875
eSls3_302Login                            0.406   0.595    3.922
eSls3_301ClickLogin                       0.047   0.073    0.156
eSls3_300StartApp                         0.25    0.373    3.313

                                                  0.24378947
eSls2_224LogOut                           0.047   0.119    2.734
eSls2_223EmptyCart                        0.063   0.129    2.203
eSls2_222GotoCart                         0.141   0.237    2.734
eSls2_221AddtoCart                        0.188   0.3      3.172
eSls2_220DDProdBabyHood                   0.109   0.147    1.453
eSls2_219NextRecs                         0.094   0.111    2.719
eSls2_218NextRecs                         0.094   0.106    2.375
eSls2_217DDSubCategory1.2                 0.156   0.189    3.391
eSls2_216DDCategory1                      0.328   0.387    2.484
eSls2_215CatalogTab                       0.109   0.187    3.547
eSls2_214Addressbook                      0.031   0.054    2.031
eSls2_213MyAccount                        0.094   0.111    1.344
eSls2_212UserProfile                      0.031   0.056    2.563
eSls2_211MyAccount                        0.094   0.127    2.922
eSls2_210MyOrders                         0.078   0.119    2.719
eSls2_209MyAccount                        0.125   0.165    3.172
eSls2_208AddtoCart                        0.203   0.427    3.219
eSls2_207DDProdBing                       0.125   0.176    3.672
eSls2_206NextRecs                         0.078   0.104    3.672
eSls2_205NextRecs                         0.078   0.105    3.578
eSls2_204DDSubCategory1.3                 0.141   0.155    1.641
eSls2_203DDCategory1                      0.109   0.159    3.422
eSls2_202Login                            0.375   0.536    3.641
eSls2_201ClickLogin                       0.047   0.077    2.125
eSls2_200StartApp                         0.234   0.331    4.063

                                                  0.18456
eSls1_118DDProdBingle                     0.125   0.163    3.297
eSls1_117NextRecs                         0.078   0.096    2.922
eSls1_116NextRecs                         0.078   0.101    3.109


eSls1_115DDSubCategory1.3                 0.125   0.157    3.469
eSls1_114DDCategory1                      0.109   0.16     3.844
eSls1_113CatalogTab                       0.125   0.149    3.266
eSls1_112DDProdBinge                      0.125   0.161    3.859
eSls1_111NextRecs                         0.078   0.099    3.469
eSls1_110NextRecs                         0.078   0.106    3.438
eSls1_109DDSubCategory1.3                 0.125   0.155    3.531
eSls1_108DDCategory1                      0.109   0.16     3.766
eSls1_107CatalogTab                       0.109   0.149    3.047
eSls1_106DDProdBing                       0.125   0.159    3.266
eSls1_105NextRecs                         0.078   0.106    3.297
eSls1_104NextRecs                         0.078   0.103    3.203
eSls1_103DDSubCategory1.3                 0.141   0.163    3.5
eSls1_102DDCategory1                      0.109   0.164    3.703
eSls1_101CatalogTab                       0.109   0.134    3.469
eSls1_100StartApp                         0.219   0.348    3.594

                                                  0.14910526
eChannel3_ResetStates                     0       0.006    1.969
eChannel3_GoToSRSolutionView              0.094   0.152    5.766
eChannel3_GoToServiceTab                  0.156   0.246    6.328
eChannel3_ExecuteQueryBySRNum             0.078   0.152    4.547
eChannel3_EnterDetailsAndSave             0.219   0.388    6.656
eChannel3_DrilldownSR                     0.125   0.218    6.469
eChannel3_CreateNewActivity               0.078   0.124    4.875
eChannel3_ClickQueryButton                0.094   0.166    4.313
eChannel3_ChangeStatusSubStatusAndSave    0.188   0.28     6.188

                                                  0.19244444
eChannel2_132_SaveQuote                   0.156   0.296    8.422
eChannel2_131_NewQuote                    0.125   0.185    4.172
eChannel2_130_OpportunityQuoteView        0.125   0.2      3.813
eChannel2_129_OpportunityAttachmentView   0.109   0.175    4.125
eChannel2_128_SaveActivity                0.125   0.257    4.25
eChannel2_127_NewActivity                 0.109   0.15     2.141
eChannel2_126_OpportunityActivityView     0.125   0.192    4.25
eChannel2_125_PickSalesTeam               0.141   0.271    5.828
eChannel2_124_NewSalesTeam                0.047   0.085    3.875
eChannel2_123_OpportunitySalesTeamView    0.109   0.193    3.781
eChannel2_122_SaveProduct2                0.203   0.345    4.641
eChannel2_121_PickProduct2                0.094   0.132    2.203
eChannel2_120_QueryForProduct2            0.031   0.117    2.969
eChannel2_119_ClickOnProductField2        0.016   0.021    0.484
eChannel2_118_NewProduct2                 0.094   0.153    4.703
eChannel2_117_SaveProduct1                0.203   0.364    6.141
eChannel2_116_PickProduct1                0.094   0.137    1.688
eChannel2_115_QueryForProduct1            0.031   0.061    2.5
eChannel2_114_ClickOnProductField         0.031   0.037    1.656
eChannel2_113_NewProduct1                 0.094   0.139    4.125
eChannel2_112_OpptyProductView            0.109   0.175    3.984
eChannel2_111_SaveContact                 0.156   0.293    6.469
eChannel2_110_NewContact                  0.156   0.305    5.578

Performance Tuning Siebel Software on the Sun Platform Page 69
eChannel2_109_NewContactOppty 0.078 0.118 2.719
eChannel2_108_DrilldownOnOppty 0.156 0.267 5.094
eChannel2_107_SaveOppty 0.172 0.341 5.891
eChannel2_106_PickAccount 0.125 0.22 5.516
eChannel2_105_QueryForAccount 0.031 0.071 2.813
eChannel2_104_ClickOnQueryForAccount 0.016 0.02 0.578
eChannel2_103_ClickOnAccountField 0.109 0.189 5.453
eChannel2_102_NewOppty 0.391 0.706 8.313
eChannel2_101_OpportunityScreen 0.156 0.264 5.844
eChannel2_100_ResetStates 0 0.014 0.359
eChannel2 avg 0.19675758

eChannel1_SearchForContact 0.047 0.087 2.953
eChannel1_SearchForAccount 0.047 0.09 4.234
eChannel1_SaveSR 0.094 0.19 4.078
eChannel1_SaveContact 0.109 0.216 4.031
eChannel1_SaveAddress 0.078 0.174 5.234
eChannel1_SaveActivity 0.109 0.217 4.828
eChannel1_SaveAccount 0.141 0.276 4.797
eChannel1_ResetStates 0 0.005 0.469
eChannel1_LookInContact 0.016 0.029 0.703
eChannel1_LookInAccount 0.031 0.042 1.719
eChannel1_GoContacts 0.219 0.326 7.375
eChannel1_GoAccounts 0.156 0.25 6.547
eChannel1_DrilldownOnAccount 0.141 0.303 6.172
eChannel1_CreateSR 0.094 0.153 4.563
eChannel1_CreateContact 0.063 0.118 3.156
eChannel1_CreateAddress 0.063 0.085 2.359
eChannel1_CreateActivity 0.094 0.128 2.766
eChannel1_CreateAccount 0.078 0.115 5.734
eChannel1_CloseFind 0.078 0.114 2.328
eChannel1_ClickNewInContactMVG 0.109 0.236 5.25
eChannel1_ClickFind 0.109 0.168 4.578
eChannel1_AccountTeamView 0.109 0.15 3.109
eChannel1_AccountSRView 0.109 0.169 3.531
eChannel1_AccountRevenueView 0.109 0.169 4.547
eChannel1_AccountQuoteView 0.109 0.186 4.391
eChannel1_AccountOrderView 0.109 0.19 3.953
eChannel1_AccountAssetView 0.094 0.16 4.5
eChannel1_AccountAddressView 0.094 0.152 4.234
eChannel1_AccountActivityView 0.109 0.165 3.375
eChannel1 avg 0.1607931

CC3_2solutionView 0.047 0.092 4.844
CC3_2solutionSRview 0.031 0.058 2.781
CC3_2setStatusAndSaveSR 0.156 0.284 5.938
CC3_2setStatusAndSaveActivity 0.094 0.132 4.844
CC3_2serviceScreen 0.078 0.123 4.172
CC3_2saveSolution 0.063 0.136 4.438
CC3_2ResetStates 0 0.005 0.641
CC3_2relatedSRview 0.063 0.116 5.063
CC3_2newSolutionInMVG 0.047 0.115 5.156


CC3_2newSolution 0.047 0.136 3.281
CC3_2goSolution 0.047 0.096 4.609
CC3_2backToActivitiesView 0.047 0.075 3.688
CC3_2activitiesView 0.078 0.121 4.781
CC3_2 avg 0.11453846

CC2_2serviceScreen 0.172 0.259 5.391
CC2_2searchSR 0.047 0.088 4.25
CC2_2saveSR 0.172 0.339 6.563
CC2_2saveActivityPlan 0.563 0.931 7.984
CC2_2ResetStates 0 0.004 0.469
CC2_2queryContact 0.047 0.067 3.422
CC2_2productPicklist 0.047 0.134 2.156
CC2_2OpenBinocular 0.031 0.034 1.078
CC2_2okProduct 0.016 0.031 2.859
CC2_2okContact 0.047 0.098 5.328
CC2_2newSR 0.031 0.064 4.5
CC2_2newActivityPlan 0.031 0.054 3.688
CC2_2goProduct 0.063 0.088 3.984
CC2_2goContact 0.063 0.121 5.375
CC2_2contactMVG 0.109 0.226 5.734
CC2_2closeBinocular 0 0.002 0.594
CC2_2activityPlanView 0.047 0.1 4.5
CC2_2 avg 0.15529412

CC1_ResetStates 0 0.005 0.594
CC1_183_NavigateBackToOpptyQuoteView 0.141 0.213 5.453
CC1_182_SaveSalesOrder 0.063 0.135 4.141
CC1_181_NewSalesOrder 0.359 0.617 6.344
CC1_180_QuoteOrderView 0.25 0.351 6.766
CC1_179_UpdateOppty 0.141 0.258 6.359
CC1_178_Reprice 0.031 0.055 3.234
CC1_177_SelectDiscountAndSaveQuote 0.172 0.288 5.516
CC1_176_PickPriceList 0.031 0.067 3.328
CC1_175_GoQueryForPriceList 0.063 0.106 4.141
CC1_174_QueryForPriceList 0.047 0.076 2.609
CC1_173_BringUpPriceListMVG 0.297 0.423 4.203
CC1_172_DrilldownOnQuote 0.281 0.403 5.797
CC1_171_SaveQuote 0.094 0.208 6.406
CC1_170_AutoQuote 0.141 0.293 6.313
CC1_169_OpptyQuoteView 0.063 0.12 4.391
CC1_168_SaveProduct2 0.063 0.131 3.781
CC1_167_PickProduct2 0.016 0.033 1.016
CC1_166_GoQueryForProduct2 0.063 0.112 4.172
CC1_165_QueryForProduct2 0.031 0.116 1.25
CC1_164_NewProduct2 0.016 0.035 1.281
CC1_163_SaveProduct1 0.063 0.137 3.875
CC1_162_PickProduct1 0.016 0.028 3.406
CC1_161_GoQueryForProduct1 0.063 0.234 5
CC1_160_QueryForProduct1 0.047 0.145 2.266
CC1_159_NewProduct1 0.016 0.041 3.156
CC1_158_OpptyProductView 0.047 0.093 3.969


CC1_157_DrilldownOnOppty 0.172 0.256 4.797
CC1_156_SaveOppty 0.109 0.236 4.656
CC1_155_PickAccount 0.047 0.082 3.797
CC1_154_GoQueryForAccount 0.078 0.137 5.125
CC1_153_QueryForAccount 0.25 0.381 5.25
CC1_152_BringUpAccountMVG 0.109 0.229 2.891
CC1_151_NewInOpptyMVG 0.359 0.599 7.375
CC1_150_NewOppty 0.078 0.209 4.063
CC1_149_ContactOpptyView 0.063 0.115 3.438
CC1_148_SaveContact 0.078 0.176 4.969
CC1_147_NewContact 0.016 0.031 4.328
CC1_146_ContactScreen 0.125 0.204 5.141
CC1_145_CloseFind 0 0.002 0.547
CC1_144_SearchForContact 0.031 0.062 3.75
CC1_143_ClickFind 0.031 0.04 1.328
cc1 avg 0.17814286
total avg 0.16775095
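Raw per-transaction rows like these are easy to post-process. The short Python sketch below parses a few sample rows and recomputes a block average. It assumes (this layout is inferred, not stated in the appendix) that each row is a transaction name followed by three response-time columns in seconds, and that the "avg" summary lines are the mean of the middle column.

```python
# Sketch: parse Appendix A-style rows and recompute a block average.
# Assumption (inferred from the data): each row is
#   <transaction name> <col1> <col2> <col3>
# in seconds, and the "avg" summary lines are the mean of col2.

def parse_rows(text):
    """Turn 'name n1 n2 n3' lines into (name, n1, n2, n3) tuples."""
    rows = []
    for line in text.strip().splitlines():
        name, c1, c2, c3 = line.split()
        rows.append((name, float(c1), float(c2), float(c3)))
    return rows

def block_average(rows):
    """Mean of the middle response-time column across a block."""
    return sum(r[2] for r in rows) / len(rows)

sample = """\
eChannel3_ResetStates 0 0.006 1.969
eChannel3_GoToSRSolutionView 0.094 0.152 5.766
eChannel3_GoToServiceTab 0.156 0.246 6.328
"""

print(round(block_average(parse_rows(sample)), 4))  # mean of 0.006, 0.152, 0.246
```

Running the same aggregation over a full block reproduces the per-scenario averages shown above (for example, the nine eChannel3 rows average to 0.19244444).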


11 Appendix B: Database Objects Growth During the Test

SIEBEL OBJECT NAME     TYPE    GROWTH IN BYTES
S_DOCK_TXN_LOG         TABLE   1,177,673,728
S_EVT_ACT              TABLE   190,341,120
S_DOCK_TXN_LOG_P1      INDEX   96,116,736
S_DOCK_TXN_LOG_F1      INDEX   52,600,832
S_ACT_EMP              TABLE   46,202,880
S_SRV_REQ              TABLE   34,037,760
S_AUDIT_ITEM           TABLE   29,818,880
S_OPTY_POSTN           TABLE   28,180,480
S_ACT_EMP_M1           INDEX   25,600,000
S_EVT_ACT_M1           INDEX   23,527,424
S_EVT_ACT_M5           INDEX   22,519,808
S_ACT_EMP_U1           INDEX   21,626,880
S_ACT_CONTACT          TABLE   21,135,360
S_EVT_ACT_U1           INDEX   18,391,040
S_ACT_EMP_M3           INDEX   16,850,944
S_EVT_ACT_M9           INDEX   16,670,720
S_EVT_ACT_M7           INDEX   16,547,840
S_AUDIT_ITEM_M2        INDEX   16,277,504
S_ACT_EMP_P1           INDEX   15,187,968
S_AUDIT_ITEM_M1        INDEX   14,131,200
S_EVT_ACT_F9           INDEX   13,852,672
S_ACT_CONTACT_U1       INDEX   13,361,152
S_ORDER_ITEM           TABLE   13,066,240
S_REVN                 TABLE   12,943,360
S_CONTACT              TABLE   12,779,520
S_SRV_REQ_M7           INDEX   12,492,800
S_SRV_REQ_M2           INDEX   11,960,320
S_ACT_EMP_F1           INDEX   11,804,672
S_SRV_REQ_U2           INDEX   10,731,520
S_SRV_REQ_U1           INDEX   10,444,800
S_DOC_QUOTE            TABLE   10,321,920
S_QUOTE_ITEM           TABLE   9,666,560
S_SRV_REQ_M9           INDEX   8,970,240
S_ACT_CONTACT_F2       INDEX   8,716,288
S_OPTY                 TABLE   8,396,800
S_ACT_CONTACT_P1       INDEX   8,183,808
S_AUDIT_ITEM_F2        INDEX   7,987,200
S_ORDER                TABLE   7,987,200
S_SRV_REQ_F13          INDEX   7,905,280
S_SRV_REQ_P1           INDEX   7,872,512
S_SRV_REQ_M10          INDEX   7,823,360
S_RESITEM              TABLE   7,798,784
S_AUDIT_ITEM_P1        INDEX   7,634,944
S_REVN_U1              INDEX   7,454,720
S_SRV_REQ_M8           INDEX   7,331,840


S_SRV_REQ_M3           INDEX   7,208,960
S_SRV_REQ_F6           INDEX   7,135,232
S_SRV_REQ_F1           INDEX   7,086,080
S_REVN_M1              INDEX   7,004,160
S_OPTY_U1              INDEX   6,676,480
S_REVN_U2              INDEX   6,676,480
S_EVT_ACT_F11          INDEX   6,602,752
S_OPTY_TERR            TABLE   6,569,984
S_SRV_REQ_M6           INDEX   6,471,680
S_SRV_REQ_F2           INDEX   5,611,520
S_DOC_ORDER            TABLE   4,972,544
S_SR_RESITEM           TABLE   4,972,544
S_ORG_EXT              TABLE   4,833,280
S_ACCNT_POSTN          TABLE   4,341,760
S_OPTY_CON             TABLE   4,136,960
S_DOC_QUOTE_BU         TABLE   4,096,000
S_PARTY                TABLE   4,096,000
S_SRV_REQ_M5           INDEX   4,055,040
S_POSTN_CON            TABLE   3,932,160
S_SRV_REQ_F7           INDEX   3,768,320
S_SRV_REQ_M4           INDEX   3,563,520
S_OPTY_BU              TABLE   3,194,880
S_REVN_M3              INDEX   3,162,112
S_CONTACT_M13          INDEX   3,153,920
S_OPTY_U2              INDEX   3,072,000
S_REVN_U3              INDEX   3,031,040
S_OPTY_BU_M9           INDEX   2,990,080
S_OPTY_BU_P1           INDEX   2,949,120
S_PARTY_M2             INDEX   2,949,120
S_ORDER_BU_M2          INDEX   2,867,200
S_OPTY_BU_U1           INDEX   2,744,320
S_ORG_EXT_F1           INDEX   2,629,632
S_CONTACT_M11          INDEX   2,621,440
S_CONTACT_M21          INDEX   2,621,440
S_CONTACT_M14          INDEX   2,621,440
S_PARTY_M3             INDEX   2,621,440
S_CONTACT_F6           INDEX   2,580,480
S_OPTY_V2              INDEX   2,580,480
S_RESITEM_M4           INDEX   2,547,712
S_REVN_F6              INDEX   2,539,520
S_ORDER_M5             INDEX   2,498,560
S_PARTY_M4             INDEX   2,498,560
S_REVN_M2              INDEX   2,498,560
S_POSTN_CON_M1         INDEX   2,498,560
S_CONTACT_M12          INDEX   2,416,640
S_OPTY_BU_M1           INDEX   2,416,640
S_CONTACT_M22          INDEX   2,416,640
S_OPTY_BU_M2           INDEX   2,334,720
S_OPTY_BU_M5           INDEX   2,334,720
S_OPTY_BU_M6           INDEX   2,334,720


S_OPTY_BU_M8           INDEX   2,334,720
S_ORDER_POSTN          TABLE   2,334,720
S_OPTY_BU_M7           INDEX   2,334,720
S_CONTACT_M9           INDEX   2,293,760
S_CONTACT_X            TABLE   2,293,760
S_EVT_ACT_M8           INDEX   2,252,800
S_OPTY_BU_M4           INDEX   2,211,840
S_RESITEM_U2           INDEX   2,138,112
S_RESITEM_M5           INDEX   2,097,152
S_RESITEM_M6           INDEX   2,097,152
S_ORDER_BU             TABLE   2,088,960
S_REVN_F3              INDEX   2,088,960
S_DOC_QUOTE_U1         INDEX   2,048,000
S_RESITEM_U1           INDEX   2,023,424
S_OPTY_BU_M3           INDEX   1,966,080
S_REVN_P1              INDEX   1,966,080
S_REVN_F4              INDEX   1,966,080
S_DOC_QUOTE_U2         INDEX   1,925,120
S_SR_RESITEM_U1        INDEX   1,892,352
S_POSTN_CON_M2         INDEX   1,843,200
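Growth figures like the ones above can be collected by snapshotting segment sizes before the run and diffing afterwards. The following is a sketch only: it assumes the Siebel schema owner is SIEBEL (adjust for your installation), that the session can query DBA_SEGMENTS, and the baseline table name is made up for illustration.

```sql
-- Before the test: record a baseline of segment sizes.
CREATE TABLE seg_baseline AS
SELECT segment_name, segment_type, SUM(bytes) AS bytes
FROM   dba_segments
WHERE  owner = 'SIEBEL'
GROUP  BY segment_name, segment_type;

-- After the test: report growth per object, largest first.
SELECT s.segment_name,
       s.segment_type,
       SUM(s.bytes) - b.bytes AS growth_in_bytes
FROM   dba_segments s
JOIN   seg_baseline b
  ON   b.segment_name = s.segment_name
 AND   b.segment_type = s.segment_type
WHERE  s.owner = 'SIEBEL'
GROUP  BY s.segment_name, s.segment_type, b.bytes
ORDER  BY growth_in_bytes DESC;
```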


12 Appendix C: Oracle statspack Report

This is one of the Oracle statspack reports collected during the one-hour steady state of the test. It provides an efficient way to identify where time is spent inside the Oracle database during the Siebel tests.
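For reference, a report like this is produced by snapping the statspack repository before and after the measurement window and then running the reporting script between the two snapshot IDs. A minimal SQL*Plus sketch, assuming the statspack (PERFSTAT) schema is already installed:

```sql
-- Run as the PERFSTAT user in SQL*Plus.
EXECUTE statspack.snap;      -- begin snapshot (Snap Id 7 in this report)

-- ... let the one-hour steady state run ...

EXECUTE statspack.snap;      -- end snapshot (Snap Id 8)

-- Generate the report; the script prompts for the begin and end
-- Snap Ids and for an output file name.
@?/rdbms/admin/spreport.sql
```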

STATSPACK report for

DB Name       DB Id       Instance   Inst Num  Release     Cluster  Host
------------  ----------  ---------  --------  ----------  -------  --------
ORAMST        3597051609  oramst     1         9.2.0.2.0   NO       siebdb

             Snap Id  Snap Time           Sessions  Curs/Sess  Comment
             -------  ------------------  --------  ---------  -------------------
Begin Snap:        7  14-Aug-04 00:29:10     1,043       57.2  5000-and-server-use
  End Snap:        8  14-Aug-04 01:52:31       639       77.8  5000-and-server-use
   Elapsed:               83.35 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
        Buffer Cache:    2,912M   Std Block Size:        8K
    Shared Pool Size:      480M       Log Buffer:   10,240K

Load Profile
~~~~~~~~~~~~
                            Per Second    Per Transaction
                        --------------    ---------------
         Redo size:         874,784.17           7,640.12
     Logical reads:          95,274.57             832.10
     Block changes:           5,344.32              46.68
    Physical reads:           1,632.28              14.26
   Physical writes:             632.79               5.53
        User calls:           3,171.71              27.70
            Parses:           1,243.43              10.86
       Hard parses:               0.10               0.00
             Sorts:             385.85               3.37
            Logons:               0.92               0.01
          Executes:           1,662.82              14.52
      Transactions:             114.50

  % Blocks changed per Read:   5.61    Recursive Call %:   22.22
 Rollback per transaction %:   0.22       Rows per Sort:   18.68

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   99.99       Redo NoWait %:  100.00
            Buffer  Hit   %:   98.64    In-memory Sort %:  100.00
            Library Hit   %:  100.02        Soft Parse %:   99.99
         Execute to Parse %:   25.22         Latch Hit %:   99.95
Parse CPU to Parse Elapsd %:   95.76     % Non-Parse CPU:   97.30

 Shared Pool Statistics        Begin    End
                               ------  ------
             Memory Usage %:    67.68   68.34
    % SQL with executions>1:    37.08   33.60
  % Memory for SQL w/exec>1:    62.79   58.60

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                  % Total
Event                                         Waits    Time (s)  Ela Time
--------------------------------------  -----------  ----------  --------
CPU time                                                 11,550     72.04
log file sync                               573,210       2,143     13.37
direct path read                            280,947         627      3.91
direct path write                           154,781         533      3.32

Performance Tuning Siebel Software on the Sun Platform Page 76

Page 77: Perf Tune Siebel Sun

log file parallel write                     550,101         515      3.21
          -------------------------------------------------------------
Wait Events for DB: ORAMST  Instance: oramst  Snaps: 7-8
-> s  - second
-> cs - centisecond (100th of a second)
-> ms - millisecond (1000th of a second)
-> us - microsecond (1000000th of a second)
-> ordered by wait time desc, waits desc (idle events last)

                                                   Total Wait  Avg wait  Waits
Event                           Waits    Timeouts    Time (s)      (ms)   /txn
----------------------------  -----------  ---------  ----------  --------  -----
log file sync                     573,210        283       2,143         4    1.0
direct path read                  280,947          0         627         2    0.5
direct path write                 154,781          0         533         3    0.3
log file parallel write           550,101    546,565         515         1    1.0
db file sequential read         6,451,104          0         294         0   11.3
db file parallel write             27,046          0         152         6    0.0
SQL*Net more data to client     2,006,251          0         152         0    3.5
control file parallel write         1,639          0          38        23    0.0
enqueue                             8,971          0          10         1    0.0
buffer busy waits                  37,083          0           9         0    0.1
latch free                          5,753      5,685           6         1    0.0
log file switch completion             51          0           3        54    0.0
db file scattered read              3,064          0           1         0    0.0
LGWR wait for redo copy            13,244          0           1         0    0.0
control file sequential read          743          0           0         0    0.0
SQL*Net break/reset to clien           28          0           0         1    0.0
log file single write                   4          0           0         1    0.0
buffer deadlock                       211        211           0         0    0.0
log file sequential read                4          0           0         0    0.0
SQL*Net message from client    13,127,091          0   4,983,503       380   22.9
SQL*Net more data from clien    2,197,829          0         285         0    3.8
SQL*Net message to client      13,126,685          0          22         0   22.9
          -------------------------------------------------------------
Background Wait Events for DB: ORAMST  Instance: oramst  Snaps: 7-8
-> ordered by wait time desc, waits desc (idle events last)

                                                   Total Wait  Avg wait  Waits
Event                           Waits    Timeouts    Time (s)      (ms)   /txn
----------------------------  -----------  ---------  ----------  --------  -----
log file parallel write           550,105    546,569         515         1    1.0
db file parallel write             27,046          0         152         6    0.0
control file parallel write         1,639          0          38        23    0.0
log file sync                         825          0           2         2    0.0
LGWR wait for redo copy            13,244          0           1         0    0.0
db file scattered read                176          0           0         2    0.0
SQL*Net more data to client         3,320          0           0         0    0.0
db file sequential read               995          0           0         0    0.0
control file sequential read          691          0           0         0    0.0
enqueue                                12          0           0         1    0.0
direct path write                      54          0           0         0    0.0
log file single write                   4          0           0         1    0.0
buffer busy waits                      24          0           0         0    0.0
direct path read                       54          0           0         0    0.0
log file sequential read                4          0           0         0    0.0
rdbms ipc message               1,186,256    554,100      28,351        24    2.1
SQL*Net message from client         9,948          0       4,869       489    0.0
smon timer                             16         16       4,501    ######    0.0
SQL*Net more data from clien        3,845          0           1         0    0.0
SQL*Net message to client           9,948          0           0         0    0.0


13 References

Solaris 8 Software Developer Collection: Multithreaded Programming Guide
http://docs.sun.com/app/docs/doc/806-5257?q=multithreading
This book is very useful for understanding the Solaris OS threads implementation; developers should read it to take full advantage of the threading features. It is also helpful for performance tuning engineers.

Solaris 8 Reference Manual Collection: mallocctl(3MALLOC) - MT hot memory allocator
http://docs.sun.com/app/docs/doc/806-0627/6j9vhfn1i?q=mtmalloc&a=view
Details on the MTmalloc library, which is part of the standard Solaris OS.

Sun Java System (formerly known as iPlanet) Web Server 6.0 Performance Tuning, Sizing, and Scaling Guide
http://docs.sun.com/source/816-5690-10/perf6.htm

Solaris Tunable Parameters Reference Manual
http://docs.sun.com/app/docs/doc/816-0607?q=kernel+tuning

Siebel PSPP (Platform Sizing and Performance Program) Benchmark Web Site
http://www.siebel.com/crm/performance-benchmark.shtm
This is the official repository of certified benchmark results from all hardware vendors.

Sun Fire E2900-E25K Servers Benchmarks
http://www.sun.com/servers/midrange/sunfire_e2900/benchmarks.html
Industry-leading benchmark results.

Siebel SupportWeb (Siebel product documentation)
http://supportweb.siebel.com/default.asp?lf=support_search.asp&rf=search/sea-search.asp&tf=tsupport_search.asp


Acknowledgements

Thanks to Mark Farrier, Francisco Casas, Farinaz Farsai, Vikram Kumar, Santosh Hasani, Sanjay Agarwal, Harsha Gadagkar, and others at Siebel Systems for working with Sun. Thanks to Scott Anderson, George Drapeau, and Lester Dorman for their role as management and sponsors from Sun. Thanks to Diane Kituara, Sherill Ellis, and George Drapeau for their valuable feedback on the content of this paper. Thanks to the Sun MDE team members Kesari Mandyam, Giri Mandalika, and Devika Gollapudi for their contributions. Thanks also to the engineers across Sun Microsystems who provided subject matter expertise: Ravindra Talashikar (PAE), Stephen Johnson (Network Storage), and Dileep Kumar (Sun Java System Web Server).

About the Author

Khader Mohiuddin is a Staff Engineer in the Market Development Engineering organization at Sun Microsystems, Inc. As the engineering lead for the Sun-Siebel Alliance, he works on optimizing Siebel CRM applications on the Sun platform and on joint technology adoption projects. He has been at Sun for four years, helping ISVs adopt Sun's latest technologies. Before that he worked at Oracle as a developer and Senior Performance Engineer for five years, and at AT&T Bell Labs, New Jersey, for three years.

About Sun Microsystems, Inc.

Since its inception in 1982, a singular vision — “The Network Is The Computer™” — has propelled Sun Microsystems, Inc. (Nasdaq: SUNW) to its position as a leading provider of industrial-strength hardware, software, and services that make the Net work. Sun can be found in more than 170 countries and on the World Wide Web at http://sun.com/.
