Perf Tuning Training v3


    Tuning Hierarchy: The Pyramid

    Server/OS level tuning: Disk (iostat), RAM (vmstat), CPU (top, ps), Network (netstat)

    Instance tuning: Optimizer parameter settings, metadata statistics, cursor management, PGA/SGA sizing

    Object tuning: Row packing (block size), concurrency (ASSM, INITRANS), structure (B-tree/bitmap indexes, IOTs, clusters), partitioning

    SQL tuning: Object statistics, execution plans and hints, stored outlines, parallel query, rewriting SQLs

    Application tuning: Fixing design issues, application partitioning, bulk processing (removing iterations), feature use (pipelined functions)


    SQL Tuning

    What is query optimization?

    Query optimization is of great importance for the performance of a relational database, especially for the execution of complex SQL statements. A query optimizer determines the best strategy for performing each query.

    The query optimizer chooses, for example, whether or not to use indexes for a given query, and which join

    techniques to use when joining multiple tables. These decisions have a tremendous effect on SQL performance,

    and query optimization is a key technology for every application, from operational systems to data warehouse

    and analysis systems to content-management systems.

    What does Oracle provide for query optimization?

    Oracle's optimizer consists of four major components:

    SQL transformations: Oracle transforms SQL statements using a variety of sophisticated techniques during

    query optimization. The purpose of this phase of query optimization is to transform the original SQL

    statement into a semantically equivalent SQL statement that can be processed more efficiently.

    Execution plan selection: For each SQL statement, the optimizer chooses an execution plan (which can be

    viewed using Oracle's EXPLAIN PLAN facility or via Oracle's v$sql_plan view). The execution plan

    describes all of the steps when the SQL is processed, such as the order in which tables are accessed, how

    the tables are joined together and whether tables are accessed via indexes. The optimizer considers many

    possible execution plans for each SQL statement, and chooses the best one.
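    For illustration, a plan can be inspected with the EXPLAIN PLAN facility and the DBMS_XPLAN package (a minimal sketch using the EMP demo table that appears in the later examples):

    EXPLAIN PLAN FOR
    SELECT ENAME FROM EMP WHERE DEPTNO = 10;

    -- Display the plan just explained from PLAN_TABLE
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);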


    Cost model and statistics: Oracle's optimizer relies upon cost estimates for the individual operations that

    make up the execution of a SQL statement. In order for the optimizer to choose the best execution plans, the optimizer needs the best possible cost estimates. The cost estimates are based upon in-depth knowledge about the I/O, CPU, and memory resources required by each query operation, statistical information about the database objects (tables, indexes, and materialized views), and performance information regarding the hardware server platform. The process for gathering these statistics and performance information needs to be both highly efficient and highly automated.

    Dynamic runtime optimization: Not every aspect of SQL execution can be optimally planned ahead of time. Oracle thus makes dynamic adjustments to its query-processing strategies based on the current database

    workload. The goal of dynamic optimizations is to achieve optimal performance even when each query may not be able to obtain the ideal amount of CPU or memory resources.

    Oracle additionally has a legacy optimizer, the rule-based optimizer (RBO). This

    optimizer exists in Oracle Database 10g Release 2 solely for backwards

    compatibility. Beginning with Oracle Database 10g Release 1, the RBO is no

    longer supported. The vast majority of Oracle's customers today use the cost-based

    optimizer.

    SQL TRANSFORMATIONS

    There are many possible ways to express a complex query using SQL. The style of SQL submitted to the database is typically that which is simplest for the end user to write or for the application to generate. However, these hand-written or machine-generated formulations of queries are not necessarily the most efficient SQL for executing the queries. For example, queries generated by applications often have conditions that are extraneous and can be removed. Or, there may be additional conditions that can be inferred from a query and should be added to the SQL statement. The purpose of SQL transformations is to transform a given SQL statement into a semantically-equivalent SQL statement (that is, a SQL statement which returns the same results) which can provide better performance.


    All of these transformations are entirely transparent to the application and end users; SQL transformations occur

    automatically during query optimization. Oracle has implemented a wide range of SQL transformations. These

    broadly fall into two categories:

    Heuristic query transformations: These transformations are applied to incoming SQL statements whenever

    possible. These transformations always provide equivalent or better query performance, so that Oracle knows that

    applying these transformations will not degrade performance.

    Cost-based query transformations: Oracle uses a cost-based approach for several classes of query

    transformations. Using this approach, the transformed query is compared to the original query, and Oracle's

    optimizer then selects the best execution strategy.

    Heuristic query transformations

    Simple view merging

    Perhaps the simplest form of query transformation is view merging. For queries containing views, the reference to the view

    can often be removed entirely from the query by merging the view definition with the query. For example, consider

    a very simple view and query:

    CREATE VIEW TEST_VIEW AS

    SELECT ENAME, DNAME, SAL FROM EMP E, DEPT D

    WHERE E.DEPTNO = D.DEPTNO;

    SELECT ENAME, DNAME FROM TEST_VIEW WHERE SAL > 10000;

    Without any query transformations, the only way to process this query is to join all of the rows of EMP to all of the rows of

    the DEPT table, and then filter the rows with the appropriate values for SAL.


    With view merging, the above query can be transformed into:

    SELECT ENAME, DNAME FROM EMP E, DEPT D

    WHERE E.DEPTNO = D.DEPTNO

    AND E.SAL > 10000;

    When processing the transformed query, the predicate SAL > 10000 can be applied before the join of the EMP and DEPT tables. This transformation can

    vastly improve query performance by reducing the amount of data to be joined. Even in this very simple example, the benefits and importance of query

    transformations are apparent.

    Complex view merging

    Many view-merging operations are very straightforward, such as the previous example. However, more complex views, such as views containing GROUP BY or DISTINCT operators, cannot be as easily merged. Oracle provides several sophisticated techniques for merging even complex views. Consider a view with a GROUP BY clause. In this example, the view computes the average salary for each department:

    CREATE VIEW AVG_SAL_VIEW AS

    SELECT DEPTNO, AVG(SAL) AVG_SAL_DEPT FROM EMP

    GROUP BY DEPTNO

    A query to find the average salary for each department in Oakland:

    SELECT DEPT.NAME, AVG_SAL_DEPT

    FROM DEPT, AVG_SAL_VIEW WHERE DEPT.DEPTNO = AVG_SAL_VIEW.DEPTNO AND DEPT.LOC = 'OAKLAND'

    can be transformed into:

    SELECT DEPT.NAME, AVG(SAL) FROM DEPT, EMP

    WHERE DEPT.DEPTNO = EMP.DEPTNO

    AND DEPT.LOC = 'OAKLAND'

    GROUP BY DEPT.ROWID, DEPT.NAME


    The performance benefits of this particular transformation are immediately apparent: instead of having to

    group all of the data in the EMP table before doing the join, this transformation allows for the EMP data to be

    joined and filtered before being grouped.

    Subquery flattening

    Oracle has a variety of transformations that convert various types of subqueries into joins, semi-joins, or anti-joins.

    As an example of the techniques in this area, consider the following query, which selects those

    departments that have employees that make more than 10000:

    SELECT D.DNAME FROM DEPT D WHERE D.DEPTNO IN

    (SELECT E.DEPTNO FROM EMP E WHERE E.SAL > 10000)

    There are a variety of possible execution plans that could be optimal for this query. Oracle will consider the

    different possible transformations, and select the best plan based on cost.

    Without any transformations, the execution plan for this query would be similar to:

    OPERATION OBJECT_NAME OPTIONS

    SELECT STATEMENT

    FILTER

    TABLE ACCESS DEPT FULL

    TABLE ACCESS EMP FULL


    With this execution plan, all of the EMP records satisfying the subquery's conditions will be scanned for every single row in the DEPT table. In

    general, this is not an efficient execution strategy. However, query transformations can enable much more efficient plans. One possible plan for

    this query is to execute the query as a semi-join. A semijoin is a special type of join which eliminates duplicate values from the inner table of the

    join (which is the proper semantics for this subquery). In this example, the optimizer has chosen a hash semi-join, although Oracle also supports sort-merge and nested-loop semi-joins:

    OPERATION OBJECT_NAME OPTIONS

    SELECT STATEMENT

    HASH JOIN SEMI

    TABLE ACCESS DEPT FULL

    TABLE ACCESS EMP FULL

    Since SQL does not have a direct syntax for semi-joins, this transformed query cannot be expressed using standard SQL. However, the transformed

    pseudo-SQL would be:

    SELECT DNAME FROM EMP E, DEPT D

    WHERE D.DEPTNO = E.DEPTNO

    AND E.SAL > 10000;

    Another possible plan is that the optimizer could determine that the DEPT table should be the inner table of the join. In that case, it will execute

    the query as a regular join, but perform a unique sort of the EMP table in order to eliminate duplicate department numbers:

    OPERATION OBJECT_NAME OPTIONS

    SELECT STATEMENT

    HASH JOIN

    SORT UNIQUE

    TABLE ACCESS EMP FULL

    TABLE ACCESS DEPT FULL

    The transformed SQL for this statement would be:

    SELECT D.DNAME FROM (SELECT DISTINCT DEPTNO FROM EMP WHERE SAL > 10000) E, DEPT D

    WHERE E.DEPTNO = D.DEPTNO;

    Subquery flattening, like view merging, is a fundamental optimization for good query performance.


    Transitive predicate generation

    In some queries, a predicate on one table can be translated into a predicate on another table due to the tables' join

    relationship. Oracle will deduce new predicates in this way; such predicates are called transitive predicates. For example, consider a query that seeks to

    find all of the line-items that were

    shipped on the same day as the order date:

    SELECT COUNT(DISTINCT O_ORDERKEY) FROM ORDERS, LINEITEM

    WHERE O_ORDERKEY = L_ORDERKEY

    AND O_ORDERDATE = L_SHIPDATE

    AND O_ORDERDATE BETWEEN '1-JAN-2002' AND '31-JAN-2002'

    Using transitivity, the predicate on the ORDER table can also be applied to the LINEITEM table:

    SELECT COUNT(DISTINCT O_ORDERKEY) FROM ORDERS, LINEITEM

    WHERE O_ORDERKEY = L_ORDERKEY

    AND O_ORDERDATE = L_SHIPDATE

    AND O_ORDERDATE BETWEEN '1-JAN-2002' AND '31-JAN-2002'

    AND L_SHIPDATE BETWEEN '1-JAN-2002' AND '31-JAN-2002'

    The existence of new predicates may reduce the amount of data to be joined, or enable the use of additional

    indexes.


    Common subexpression elimination

    When the same subexpression or calculation is used multiple times in a query, Oracle will only evaluate

    the expression a single time for each row. Consider a query to find all employees in Dallas that are

    either Vice Presidents or have a salary greater than 100000:

    SELECT * FROM EMP, DEPT

    WHERE

    (EMP.DEPTNO = DEPT.DEPTNO AND LOC = 'DALLAS' AND SAL > 100000)

    OR (EMP.DEPTNO = DEPT.DEPTNO AND LOC = 'DALLAS' AND JOB_TITLE = 'VICE PRESIDENT')

    The optimizer recognizes that the query can be evaluated more efficiently when

    transformed into:

    SELECT * FROM EMP, DEPT

    WHERE EMP.DEPTNO = DEPT.DEPTNO AND

    LOC = 'DALLAS' AND

    (SAL > 100000 OR JOB_TITLE = 'VICE PRESIDENT');

    With this transformed query, the join predicate and the predicate on LOC only

    need to be evaluated once for each row of DEPT, instead of twice for each row.


    Predicate pushdown and pullup

    A complex query may contain multiple views and subqueries, with many predicates that are applied to these views

    and subqueries. Oracle can move predicates into and out of views in order to generate new, better performing queries.

    A single-table view can be used to illustrate predicate push-down:

    CREATE VIEW EMP_AGG AS

    SELECT

    DEPTNO,

    AVG(SAL) AVG_SAL

    FROM EMP

    GROUP BY DEPTNO;

    Now suppose the following query is executed:

    SELECT DEPTNO, AVG_SAL FROM EMP_AGG WHERE DEPTNO = 10;

    Oracle will push the predicate DEPTNO=10 into the view, and transform the query into the following SQL:

    SELECT DEPTNO, AVG(SAL) FROM EMP WHERE DEPTNO = 10

    GROUP BY DEPTNO;

    The advantage of this transformed query is that the DEPTNO=10 predicate is

    applied before the GROUP-BY operation, and this could vastly reduce the

    amount of data to be aggregated.

    Oracle has many other sophisticated techniques for pushing WHERE-clause

    conditions into a query block from outside, pulling conditions out of a query

    block, and moving conditions sideways between query blocks that are joined.


    Cost-based query transformations

    Materialized view rewrite

    Precomputing and storing commonly-used data in the form of a materialized view can greatly speed up query processing. Oracle can

    transform SQL queries so that one or more tables referenced in a query can be replaced by a reference to a materialized view. If the

    materialized view is smaller than the original table or tables, or has better available access paths, the transformed SQL statement

    could be executed much faster than the original one.

    For example, consider the following materialized view:

    CREATE MATERIALIZED VIEW SALES_SUMMARY

    ENABLE QUERY REWRITE

    AS SELECT SALES.CUST_ID, TIME.MONTH, SUM(SALES_AMOUNT) AMT

    FROM SALES, TIME WHERE SALES.TIME_ID = TIME.TIME_ID

    GROUP BY SALES.CUST_ID, TIME.MONTH;

    This materialized view can be used to optimize the following query:

    SELECT CUSTOMER.CUST_NAME, TIME.MONTH, SUM(SALES.SALES_AMOUNT)

    FROM SALES, CUSTOMER, TIME

    WHERE SALES.CUST_ID = CUSTOMER.CUST_ID

    AND SALES.TIME_ID = TIME.TIME_ID

    GROUP BY CUSTOMER.CUST_NAME, TIME.MONTH;

    The rewritten query would be:

    SELECT CUSTOMER.CUST_NAME, SALES_SUMMARY.MONTH, SALES_SUMMARY.AMT

    FROM CUSTOMER, SALES_SUMMARY

    WHERE CUSTOMER.CUST_ID = SALES_SUMMARY.CUST_ID;

    In this example, the transformed query is likely much faster for several reasons:

    the sales_summary table is likely much smaller than the sales table and the

    transformed query requires one less join and no aggregation.


    OR-expansion

    This technique converts a query with ORs in the WHERE-clause into a UNION ALL of several queries without ORs. It can be highly

    beneficial when the ORs refer to restrictions on different tables. Consider the following query to find all the shipments that went either from or to Oakland.

    SELECT * FROM SHIPMENT, PORT P1, PORT P2

    WHERE SHIPMENT.SOURCE_PORT_ID = P1.PORT_ID

    AND SHIPMENT.DESTINATION_PORT_ID = P2.PORT_ID

    AND (P1.PORT_NAME = 'OAKLAND' OR P2.PORT_NAME = 'OAKLAND')

    The query can be transformed into:

    SELECT * FROM SHIPMENT, PORT P1, PORT P2

    WHERE SHIPMENT.SOURCE_PORT_ID = P1.PORT_ID

    AND SHIPMENT.DESTINATION_PORT_ID = P2.PORT_ID

    AND P1.PORT_NAME = 'OAKLAND'

    UNION ALL

    SELECT * FROM SHIPMENT, PORT P1, PORT P2

    WHERE SHIPMENT.SOURCE_PORT_ID = P1.PORT_ID

    AND SHIPMENT.DESTINATION_PORT_ID = P2.PORT_ID

    AND P2.PORT_NAME = 'OAKLAND' AND P1.PORT_NAME != 'OAKLAND'

    Note that each UNION ALL branch can have different optimal join orders. In the first branch, Oracle could take advantage of the

    restriction on P1 and drive the join from that table. In the second branch, Oracle could drive from P2 instead. The resulting plan can be

    orders of magnitude faster than for the original version of the query, depending upon the indexes and data for these tables. This query

    transformation is by necessity cost-based because this transformation does not improve the performance for every query.


    Star transformation

    A star schema is a modeling strategy commonly used for data marts and data warehouses. A star schema

    typically contains one or more very large tables, called fact tables, which store transactional data, and a larger number of smaller lookup tables, called dimension tables.

    Oracle supports a technique for evaluating queries against star schemas known as the star transformation. This

    technique improves the performance of star queries by applying a transformation that adds new subqueries

    to the original SQL. These new subqueries will allow the fact tables to be accessed much more efficiently using

    bitmap indexes.

    The star transformation is best understood by examining an example. Consider the following query that returns the sum of the sales of beverages by state in the third quarter of 2001. The fact table is

    SALES. Note that the time dimension is a snowflake dimension since it consists of two tables, DAY and QUARTER.

    SELECT STORE.STATE, SUM(SALES.AMOUNT)

    FROM SALES, DAY, QUARTER, PRODUCT, STORE

    WHERE SALES.DAY_ID = DAY.DAY_ID AND DAY.QUARTER_ID =

    QUARTER.QUARTER_ID

    AND SALES.PRODUCT_ID = PRODUCT.PRODUCT_ID

    AND SALES.STORE_ID = STORE.STORE_ID

    AND PRODUCT.PRODUCT_CATEGORY = 'BEVERAGES'

    AND QUARTER.QUARTER_NAME = '2001Q3'

    GROUP BY STORE.STATE


    The transformed query may look like:

    SELECT STORE.STATE, SUM(SALES.AMOUNT) FROM SALES, STORE

    WHERE SALES.STORE_ID = STORE.STORE_ID

    AND SALES.DAY_ID IN

    (SELECT DAY.DAY_ID FROM DAY, QUARTER

    WHERE DAY.QUARTER_ID = QUARTER.QUARTER_ID

    AND QUARTER.QUARTER_NAME = '2001Q3')

    AND SALES.PRODUCT_ID IN

    (SELECT PRODUCT.PRODUCT_ID FROM PRODUCT

    WHERE PRODUCT.PRODUCT_CATEGORY = 'BEVERAGES')

    GROUP BY STORE.STATE

    With the transformed SQL, this query is effectively processed in two main phases. In the first phase,

    all of the necessary rows are retrieved from the fact table using the bitmap indexes. In this case, the

    fact table will be accessed using bitmap indexes on day_id and product_id, since those are the two

    columns which appear in the subquery predicates. In the second phase of the query (the join-back

    phase), the dimension tables are joined back to the data set from the first phase. Since, in this

    query, the only dimension-table column which appears in the select-list is store.state, the store table is the only table which needs to be joined. The existence of the subqueries containing PRODUCT,

    DAY, and QUARTER in the first phase of the query obviated the need to join those tables in the second

    phase, and the query optimizer intelligently eliminates those joins.


    Predicate pushdown for outer-joined views

    Typically, when a query contains a view that is being joined to other tables, the views can be merged in order to

    better optimize the query. However, if a view is being joined using an outer join, then the view cannot be merged.

    In this case, Oracle has specific predicate pushdown operations which will allow the join predicate to be pushed into

    the view; this transformation allows for the possibility of executing the outer join using an index on one of the

    tables within the view. This transformation is cost-based because the index access may not be the most effective.


    ACCESS PATH SELECTION

    The object of access path selection is to decide on the order in which to join the tables in a query, what join

    methods to use, and how to access the data in each table. All of this information for a given query can be viewed

    using Oracle's EXPLAIN PLAN facility or using Oracle's v$sql_plan view. Oracle's access path selection algorithms

    are particularly sophisticated because Oracle provides a rich set of database structures and query

    evaluation techniques. Oracle's access path selection and cost model incorporate a complete understanding of

    each of these features, so that each feature can be leveraged in the optimal way.

    Oracle's database structures include:

    Table Structures

    Tables (default heap organized)

    Index-organized tables

    Nested tables

    Clusters

    Hash Clusters

    Index Structures

    B-tree Indexes

    Bitmap Indexes

    Bitmap Join Indexes

    Reverse-key B-tree indexes

    Function-based B-tree indexes

    Function-based bitmap indexes

    Domain indexes


    Partitioning Techniques

    Range Partitioning

    Hash Partitioning

    Composite Range-Hash Partitioning

    List Partitioning

    Composite Range-List Partitioning

    Oracle's access techniques include:

    Index Access Techniques

    Index unique key look-up

    Index max/min look-up

    Index range scan

    Index descending range scan

    Index full scan

    Index fast full scan

    Index skip scan

    Index and-equal processing

    Index joins

    Index B-tree to bitmap conversion

    Bitmap index AND/OR processing

    Bitmap index range processing

    Bitmap index MINUS (NOT) processing

    Bitmap index COUNT processing

    Join Methods

    Nested-loop inner-joins, outer-joins, semi-joins, and anti-joins

    Sort-merge inner-joins, outer-joins, semi-joins, and anti-joins

    Hash inner-joins, outer-joins, semi-joins, and anti-joins

    Partition-wise joins


    Join ordering

    When joining a large number of tables, the space of all possible execution plans can be extremely large and it

    would be prohibitively time consuming for the optimizer to explore this space exhaustively. With a 5-table query, the total number of execution plans is in the thousands and thus the optimizer can consider most

    possible execution plans. However, with a 10-table join, there are over 3 million join orders and, typically, well

    over 100 million possible execution plans. Therefore, it is necessary for the optimizer to use intelligence in its

    exploration of the possible execution plans rather than a brute-force algorithm.

    Adaptive search strategy

    Oracle's optimizer uses many techniques to intelligently prune the search space. One notable technique is that

    Oracle uses an adaptive search strategy. If a query can be executed in one second, it would be considered

    excessive to spend 10 seconds for query optimization. On the other hand, if the query is likely to run for minutes

    or hours, it may well be worthwhile spending several seconds or even minutes in the optimization phase in the

    hope of finding a better plan. Oracle utilizes an adaptive optimization algorithm to ensure that the optimization

    time for a query is always a small percentage of the expected execution time of the query, while devoting extra

    optimization time for complex queries.


    COST MODEL AND STATISTICS

    A cost-based optimizer works by estimating the cost of various alternative execution plans and choosing the plan

    with the best (that is, lowest) cost estimate. Thus, the cost model is a crucial component of the optimizer, since the accuracy of the cost model directly impacts the optimizer's ability to recognize and choose the best execution

    plans. The cost of an execution plan is based upon careful modeling of each component of the execution plan.

    The cost model incorporates detailed information about all of Oracle's access methods and database structures,

    so that it can generate an accurate cost for each type of operation. Additionally, the cost model relies on

    optimizer statistics, which describe the objects in the database and the performance characteristics of the

    hardware platform.

    Optimizer statistics

    When optimizing a query, the optimizer relies on a cost model to estimate the cost of the operations involved in

    the execution plan (joins, index scans, table scans, etc.). This cost model relies on information about the

    properties of the database objects involved in the SQL query as well as the underlying hardware

    platform. In Oracle, this information, the optimizer statistics, comes in two flavors: object-level statistics and

    system statistics.

    Object-level statistics

    Object-level statistics describe the objects in the database. These statistics track values such as the number of

    blocks and the number of rows in each table, and the number of levels in each b-tree index. There are also

    statistics describing the columns in each table. Column statistics are especially important because they are used

    to estimate the number of rows that will match the conditions in the WHERE-clauses of each query. For every

    column, Oracle's column statistics include the minimum and maximum values, and the number of distinct values.

    Additionally, Oracle supports histograms to better optimize queries on columns which contain skewed data distributions.
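    As a sketch of how object-level statistics are gathered and inspected (table and column names follow the earlier EMP examples):

    -- Gather table, index, and column statistics for EMP
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'EMP',
        method_opt => 'FOR ALL COLUMNS SIZE AUTO');
    END;
    /

    -- Inspect the resulting column-level statistics
    SELECT COLUMN_NAME, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE, HISTOGRAM
    FROM USER_TAB_COL_STATISTICS
    WHERE TABLE_NAME = 'EMP';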


    System statistics

    System statistics describe the performance characteristics of the hardware platform. The optimizer's cost model

    distinguishes between CPU costs and I/O costs. However, the speed of the CPU varies greatly between different systems and, moreover, the ratio between CPU and I/O performance also varies greatly. Hence, rather

    than relying upon a fixed formula for combining CPU and I/O costs, Oracle provides a facility for gathering

    information about the characteristics of an individual system during a typical workload in order to

    determine the best way to combine these costs for each system. The information collected includes CPU-speed

    and the performance of the various types of I/O (the optimizer distinguishes between single-block, multi-block,

    and direct-disk I/Os when gathering I/O statistics).
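    A sketch of collecting workload system statistics (the 60-minute capture window is illustrative):

    -- Capture system statistics over a representative 60-minute workload
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60);

    -- The collected values (CPU speed, single- and multi-block read times) are stored in SYS.AUX_STATS$
    SELECT PNAME, PVAL1 FROM SYS.AUX_STATS$ WHERE SNAME = 'SYSSTATS_MAIN';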

    User-defined statistics

    Oracle also supports user-defined cost functions for user-defined functions and domain indexes. Customers who

    are extending Oracle's capabilities with their own functions and indexes can fully integrate their own access

    methods into Oracle's cost model. Oracle's cost model is modular, so that these user-defined statistics are

    considered within the same cost model and search space as Oracle's own built-in indexes and functions.

    Statistics management

    The properties of the database tend to change over time as the data changes, either due to transactional activity

    or due to new data being loaded into a data warehouse. In order for the object-level optimizer statistics to

    stay accurate, those statistics need to be updated when the underlying data has changed. The problem of

    gathering the statistics poses several challenges for the DBA:

    Statistics gathering can be very resource intensive for large databases.


    Determining which tables need updated statistics can be difficult. Many of the tables may not have changed very

    much and recalculating the statistics for those would be a waste of resources. However, in a database with

    thousands of tables, it is difficult for the DBA to manually track the level of changes to each table and which tables require new statistics.

    Determining which columns need histograms can be difficult. Some columns may need histograms, others not.

    Creating histograms for columns that do not need them is a waste of time and space. However, not creating

    histograms for columns that need them could lead to bad optimizer decisions. Oracles statistics-gathering

    routines address each of these challenges.

    Automatic statistic gathering

    In Oracle Database 10g Release 2 the recommended approach to gathering statistics is to allow Oracle to

    automatically gather the statistics. Oracle will gather statistics on all database objects automatically and

    maintains those statistics in a regularly scheduled maintenance job (GATHER_STATS_JOB). This job gathers statistics

    on all objects in the database that have missing or stale statistics (more than 10% of the rows have changed

    since the object was last analyzed). The GATHER_STATS_JOB is created automatically at database creation time and is managed by the Scheduler. The Scheduler runs this job when the maintenance window is opened. By default, the

    maintenance window opens every night from 10 P.M. to 6 A.M. and all day on weekends. Automated statistics

    collection eliminates many of the manual tasks associated with managing the query optimizer, and significantly

    reduces the chances of getting poor execution plans because of missing or stale statistics. The GATHER_STATS_JOB

    can be stopped completely using the DBMS_SCHEDULER package.
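    For example, as a privileged user (a sketch; GATHER_STATS_JOB is the job name used in 10g):

    -- Stop automatic statistics gathering entirely
    BEGIN
      DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
    END;
    /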


    Parallel sampling

    The basic feature that allows efficient statistics gathering is sampling. Rather than scanning (and sorting) an

    entire table to gather statistics, good statistics can often be gathered by examining a small sample of rows. The speed-up due to sampling can be dramatic, since sampling not only reduces the amount of time to scan a table, but also

    reduces the amount of time to process the data (since there is less data to be sorted and analyzed).
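    A sketch of sampling-based gathering (the SALES table name is illustrative; AUTO_SAMPLE_SIZE lets Oracle choose the sample, and DEGREE requests parallel gathering):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'SALES',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- sample rather than scan everything
        degree           => 4);                           -- gather in parallel
    END;
    /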

    Monitoring

    Another key feature for simplifying statistics management is monitoring. Oracle keeps track of how many changes

    (inserts, updates, and deletes) have been made to a table since the last time statistics were collected. Those

    tables that have changed sufficiently to merit new optimizer statistics are marked automatically

    by the monitoring process. When the DBA gathers statistics, Oracle will only gather statistics on those tables

    which have been significantly modified.
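    The tracked modification counts can be inspected as follows (a sketch; in-memory monitoring data is flushed to the dictionary periodically, or on demand as shown):

    -- Flush in-memory monitoring data to the dictionary
    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    -- See how many changes each table has accumulated since its last analyze
    SELECT TABLE_NAME, INSERTS, UPDATES, DELETES, TRUNCATED
    FROM USER_TAB_MODIFICATIONS;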

    Automatic histogram determination

    Oracle's statistics-gathering routines also implicitly determine which columns require histograms. Oracle makes

    this determination by examining two characteristics: the data distribution of each column, and the frequency with which the column appears in the WHERE-clause of SQL statements. For columns which are both highly

    skewed and commonly appear in WHERE clauses, Oracle will create a histogram.


    Dynamic sampling

    Unfortunately, even accurate statistics are not always sufficient for optimal query execution. The optimizer

    statistics are by definition only an approximate description of the actual database objects. In some cases, these static statistics are incomplete. Oracle addresses those cases by supplementing the static object-level

    statistics with additional statistics that are gathered dynamically during query optimization. There are primarily

    two scenarios in which the static optimizer statistics are inadequate: Correlation. Often, queries have complex

    WHERE-clauses in which there are two or more conditions on a single table. Here is a very simple example:

    SELECT * FROM EMP

    WHERE JOB_TITLE = 'VICE PRESIDENT'

    AND SAL < 40000

    With only per-column statistics, the optimizer assumes the two conditions are independent and multiplies their individual selectivities. This simple approach is incorrect in this case: JOB_TITLE and SAL are correlated, since employees with a job_title

    of Vice President are much more likely to have higher salaries. Although the simple approach indicates that this

    query should return 2% of the rows, this query may in actuality return zero rows. The static optimizer statistics,

    which store information about each column separately, do not provide any indication to the optimizer of which

    columns may be correlated.

    Transient data: Some applications will generate some intermediate result set that is temporarily stored in a table.

    The result set is used as a basis for further operations and then is deleted or simply rolled back. It can be very

    difficult to capture accurate statistics for the temporary table where the intermediate result is stored since the

    data only exists for a short time and might not even be committed. There is no opportunity for a DBA to gather

    static statistics on these transient objects.

    Oracle's dynamic sampling feature addresses these problems. While a query is being optimized, the optimizer may notice

    that a set of columns may be correlated or that a table is missing statistics. In those cases, the optimizer will

    sample a small set of rows from the appropriate table(s) and gather the appropriate statistics on-the-fly.
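    Dynamic sampling can also be requested explicitly (a sketch; the level and alias are illustrative):

    -- Raise the dynamic sampling level for the whole session
    ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING = 4;

    -- Or request it for a single statement via a hint
    SELECT /*+ DYNAMIC_SAMPLING(e 4) */ *
    FROM EMP e
    WHERE JOB_TITLE = 'VICE PRESIDENT' AND SAL < 40000;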


    Optimization cost modes

    A cost-based optimizer works by estimating the cost for various alternative execution plans and picking the one

    with the lowest cost estimate. However, the notion of "lowest cost" can vary based upon the requirements of a given application. In an operational system, which displays only a handful of rows to the end-user at a time, the

    lowest cost execution plan may be the execution plan which returns the first row in the shortest amount of

    time. On the other hand, in a system in which the end-user is examining the entire data set returned by a query,

    the lowest cost execution plan is the execution plan which returns all of the rows in the least amount of time.

    Oracle provides two optimizer modes: one for minimizing the time to return the first N rows of a query, and

    another for minimizing the time to return all of the rows from a query. The database administrator can

    additionally specify the value for N. In this way, Oracle's query optimizer can be easily tuned to meet the

    specific requirements of different types of applications.
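    These modes correspond to the OPTIMIZER_MODE setting, for example (a sketch):

    -- Optimize for returning the first 10 rows quickly (operational systems)
    ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;

    -- Optimize for total throughput over the whole result set (reporting and batch)
    ALTER SESSION SET OPTIMIZER_MODE = ALL_ROWS;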


    Hints

    Hints are directives added to a SQL query to influence the execution plan. Hints are most commonly used to

    address the rare cases in which the optimizer chooses a suboptimal plan. Hints are useful tools not just to remedy an occasional suboptimal plan, but also for users who want to experiment with access paths, or simply

    have full control over the execution of a query.

    Hints are designed to be used only in unusual cases to address performance issues; hints should be rarely used

    and in fact the over-use of hints can detrimentally affect performance since these hints can prevent the

    Optimizer from adjusting the execution plans as circumstances change (tables grow, indexes are added, etc) and

    may mask problems such as stale optimizer statistics.
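    For illustration, a hint directing the optimizer to a particular index looks like this (the index name is hypothetical):

    SELECT /*+ INDEX(e EMP_DEPTNO_IDX) */ ENAME
    FROM EMP e
    WHERE DEPTNO = 10;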

    Histograms

    Oracle uses two types of histograms for column statistics: height-balanced histograms and frequency histograms.

    In a height-balanced histogram, the column values are divided into bands so that each band contains

    approximately the same number of rows. The useful information that the histogram provides is where in the

    range of values the endpoints fall.

    In a frequency histogram, each value of the column corresponds to a single bucket of the histogram. Each bucket

    contains the number of occurrences of that single value. Frequency histograms are automatically created

    instead of height-balanced histograms when the number of distinct values is less than or equal to the number of

    histogram buckets specified.
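    A sketch of explicitly requesting a histogram on a skewed column (254 is the maximum number of buckets):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'EMP',
        method_opt => 'FOR COLUMNS SAL SIZE 254');  -- build a histogram on SAL
    END;
    /

    -- Inspect the histogram buckets
    SELECT ENDPOINT_NUMBER, ENDPOINT_VALUE
    FROM USER_TAB_HISTOGRAMS
    WHERE TABLE_NAME = 'EMP' AND COLUMN_NAME = 'SAL';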


    Common join techniques in Oracle:

    Nested Loop join:

    The optimizer uses nested loop joins when joining a small number of rows, with a good driving condition

    between the two tables.

    The outer loop is the driving row source. It produces a set of rows for driving the join condition. The row

    source can be a table accessed using an index scan or a full table scan.

    The inner loop is iterated for every row returned from the outer loop, ideally by an index scan. If the access

    path for the inner loop is not dependent on the outer loop, then you can end up with a Cartesian product.

    Hash join:

    Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources

    to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find

    the joined rows.

    This method is best used when the smaller table fits in available memory. However, if the hash table grows

    too big to fit into the memory, then the optimizer breaks it up into different partitions. As the partitions

    exceed allocated memory, parts are written to temporary segments on disk.

    After the hash table build is complete, the second, larger table is scanned.

    Sort-merge join:

    Sort merge joins can be used to join rows from two independent sources. Hash joins generally perform

    better than sort merge joins. On the other hand, sort merge joins can perform better than hash joins if no

    sort operation is required because the row sources are already sorted.

    In a merge join, there is no concept of a driving table. First, both inputs are sorted on the join key, and the

    sorted lists are merged together to fetch the desired rows.
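    For experimentation, the join method can be forced with hints (a sketch; each hint names the table whose join method is being requested):

    SELECT /*+ USE_NL(d) */    e.ENAME, d.DNAME FROM EMP e, DEPT d WHERE e.DEPTNO = d.DEPTNO;  -- nested loop
    SELECT /*+ USE_HASH(d) */  e.ENAME, d.DNAME FROM EMP e, DEPT d WHERE e.DEPTNO = d.DEPTNO;  -- hash join
    SELECT /*+ USE_MERGE(d) */ e.ENAME, d.DNAME FROM EMP e, DEPT d WHERE e.DEPTNO = d.DEPTNO;  -- sort-merge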


    Common join techniques in Oracle (continued):

    Cartesian Joins:

    A Cartesian join is used when one or more of the tables does not have any join conditions to any other

    tables in the statement. The optimizer joins every row from one data source with every row from the other

    data source.

    Semi Join:

    A semi-join returns rows that match an EXISTS subquery without duplicating rows from the left side of the

    predicate when multiple rows on the right side satisfy the criteria of the subquery. A semi-join could be

    nested-loop, sort-merge, or hash semi-join. Even if the second table contains two matches for a row in the first table, only one copy of the row will be

    returned. This is generally used with IN or EXISTS constructs.

    Anti Join

    An anti-join between two tables returns rows from the first table where no matches are found in the second

    table. An anti-join is essentially the opposite of a semi-join.

    Anti-joins are generally written using the NOT EXISTS or NOT IN constructs. These two constructs differ in

    how they handle nulls.
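    Sketches of queries that typically become a semi-join and an anti-join:

    -- Semi-join candidate: departments that have at least one employee
    SELECT D.DNAME FROM DEPT D
    WHERE EXISTS (SELECT 1 FROM EMP E WHERE E.DEPTNO = D.DEPTNO);

    -- Anti-join candidate: departments with no employees
    SELECT D.DNAME FROM DEPT D
    WHERE NOT EXISTS (SELECT 1 FROM EMP E WHERE E.DEPTNO = D.DEPTNO);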


    Optimizer features

    Cursor Sharing

    Oracle automatically notices when applications send similar SQL statements to the database. In evaluating

    whether statements are similar or identical, Oracle considers SQL statements issued directly by users and applications as well as recursive SQL statements issued internally by a DDL statement.

    One of the first stages of parsing is to compare the text of the statement with existing statements in the shared

    pool to see if the statement can be shared. If the statement differs textually in any way, then Oracle does not

    share the statement.

    Statements that are identical, except for the values of some literals, are called similar statements. Similar

    statements pass the textual check in the parse phase when the CURSOR_SHARING parameter is set to SIMILAR or

    FORCE. Textual similarity does not guarantee sharing. The new form of the SQL statement still needs to go through the remaining steps of the parse phase to ensure that the execution plan of the preexisting statement is

    equally applicable to the new statement.

    Setting CURSOR_SHARING to EXACT allows SQL statements to share the SQL area only when their texts match

    exactly. This is the default behavior. Using this setting, similar statements cannot be shared; only textually exact

    statements can be shared.

    Setting CURSOR_SHARING to either SIMILAR or FORCE allows similar statements to share SQL by auto-binding, i.e.

    replacing literals with system-generated bind variables in the SQL. The difference between SIMILAR and FORCE is that SIMILAR forces similar statements to share the SQL area without deteriorating execution plans. Setting

    CURSOR_SHARING to FORCE forces similar statements to share the executable SQL area, potentially deteriorating

    execution plans. CURSOR_SHARING=SIMILAR works best when there are histograms present on columns

    with skewed values, because SIMILAR ensures that only literals which are safe to replace will be auto-bound. The presence of a

    histogram reflects skew in a column, and Oracle considers such literals unsafe to auto-bind, to avoid generating

    a sub-optimal but sharable plan.
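    For example (a sketch), with the setting below, two textually different statements can share one cursor because their literals are replaced with system-generated binds:

    ALTER SESSION SET CURSOR_SHARING = FORCE;

    SELECT ENAME FROM EMP WHERE DEPTNO = 10;
    SELECT ENAME FROM EMP WHERE DEPTNO = 20;
    -- Both are parsed as: SELECT ENAME FROM EMP WHERE DEPTNO = :"SYS_B_0"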


    Bind peeking

    The query optimizer peeks at the values of user-defined bind variables on the first invocation of a cursor. This

    feature lets the optimizer determine the selectivity of any WHERE clause condition, as well as if literals have

    been used instead of bind variables. On subsequent invocations of the cursor, no peeking takes place, and

    the cursor is shared, based on the standard cursor-sharing criteria, even if subsequent invocations use

    different bind values. When bind variables are used in a statement, it is assumed that cursor sharing is

    intended and that different invocations are supposed to use the same execution plan.

    If different invocations of the cursor would significantly benefit from different execution plans, then bind

    variables may have been used inappropriately in the SQL statement. Bind peeking works for a specific set of

    clients, not all clients. Up until 11g adaptive cursor sharing, the implementation of bind variable peeking could cause performance problems, and many shops would disable bind variable peeking by opening an SR on

    MOSC and getting permission to set _optim_peek_user_binds=false.

    How histograms and bind variable peeking can cause an unstable plan

    Suppose we have a table T with 10000 rows, where 9900 rows have N=0 and 100 rows have N from 1 to 100. The

    histogram tells Oracle about this data distribution.

    The above is known as a skewed data distribution, and a histogram can help Oracle to choose the right plan,

    depending on the value.

    That is, Oracle is able to choose an index range scan when N=1 (returns only one row) and do an FTS when N=0

    (returns 9900 rows): perfect execution plans given the conditions.
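    A sketch of building such a table and its histogram (the names match the description above; the padding column is illustrative):

    CREATE TABLE t AS
    SELECT CASE WHEN ROWNUM <= 9900 THEN 0 ELSE ROWNUM - 9900 END AS n,
           RPAD('x', 100, 'x') AS pad
    FROM DUAL CONNECT BY LEVEL <= 10000;   -- 9900 rows with n=0, 100 rows with n=1..100

    CREATE INDEX t_n_idx ON t (n);

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'T',
        method_opt => 'FOR COLUMNS N SIZE 254');  -- the histogram captures the skew
    END;
    /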

    Imagine the following query: select * from t where n=:n;


    We used a bind variable in place of a literal. On a hard parse, Oracle will peek at the value you've used for :n, and

    will optimize the query as if you've submitted the same query with this literal instead. The problem is that, in

    10g, bind variable peeking happens only on a hard parse, which means that all following executions will use the same plan, regardless of the bind variable value. Both queries would use an index range scan if the example

    started with exec :n:=1 instead of exec :n:=0. Now, if 99% of your queries are not interested in querying the above

    table with :N:=0, then you want the plan with an index range scan because it will provide optimal performance

    most of the time, while resulting in suboptimal performance for only 1% of executions. With the histogram in

    place, one day you will be unlucky enough to have Oracle hard parse the query with a bind variable value of

    0, which will force everyone else to use an FTS (as was demonstrated above), which in turn will result in abysmal

    performance for 99% of executions.

    What is creating all those histograms?

    It is a default behavior. With the declaration of the RBO's obsolescence in 10g, we were also presented with a default

    gather-stats job in every 10g database. This job runs with a whole bunch of AUTO parameters, one of which is

    METHOD_OPT. The SIZE part of this parameter value controls histogram collection. METHOD_OPT equal to

    AUTO means Oracle determines the columns on which to collect histograms based on the data distribution and the workload of the columns.

    Because of the above, there is no single answer, since it involves knowing your data and applying your domain

    knowledge.


    How to fix a misbehaving query due to incorrect peeking

    Don't hurry to flush your shared pool or unset the hidden parameter. First, as a result of such a complete brain cleaning, your instance will have to hard parse everything it had in the library cache, causing tons of redundant work. Second, and

    this is much more important, these hard parses might well result in an incorrect peeking for some other queries.

    So you might end up in a worse situation than you were in before.

    The right way is to get rid of the histogram by collecting stats on the required table, with METHOD_OPT => 'FOR COLUMNS

    X SIZE 1' and NO_INVALIDATE => FALSE. This will cause all dependent cursors to be invalidated immediately after stats

    have been gathered.
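    Applied to the earlier table T and its column N, the call would look like this (a sketch):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname       => USER,
        tabname       => 'T',
        method_opt    => 'FOR COLUMNS N SIZE 1',  -- SIZE 1 drops the histogram on N
        no_invalidate => FALSE);                  -- invalidate dependent cursors right away
    END;
    /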

    Sometimes, however, you don't have enough time to understand what caused a problem (or you simply don't have

    time to regather the stats) and, if probability theory is on your side (chances for a good peeking are much higher), all you have to do to invalidate dependent cursors is to create a comment on the table.


    Optimizer Plan Stability - Stored Outlines

    Optimizer Plan Stability ensures predictable SQL performance and is particularly beneficial for third-party software

    vendors that distribute packaged applications. The vendors are able to guarantee that the same access paths are being

    used, regardless of the environment in which their applications are running. Optimizer plan stability allows administrators

    to use OEM's Stored Outline Editor or SQL UPDATE statements to influence the optimizer to use a higher-performance

    access path, which is once again frozen in the Stored Outline.

    Oracle preserves the execution plans in objects called Stored Outlines. You can create a Stored Outline for one or more

    SQL statements and group Stored Outlines into categories. Grouping Stored Outlines allows you to control which category

    of outlines Oracle uses. As a result, you can toggle back and forth between multiple outline categories and, therefore,

    multiple access paths for the same statement.

    An outline consists primarily of a set of hints that is equivalent to the optimizer's results for the execution plan generation

    of a particular SQL statement. When Oracle creates an outline, plan stability examines the optimization results using the

    same data used to generate the execution plan. Oracle stores outline data in the OL$, OL$HINTS, and OL$NODES tables.

    Basically, the OL$ table holds "header" information about the stored outline, such as the name, the category, information to

    identify which statement the outline applies to, and some other info. There is one row per each outline in this table. The

    OL$HINTS table has one row for each hint that Oracle saves for the stored outline. The tables are "joined" on two columns,

    OL_NAME and OL_CATEGORY. Data in this table actually defines the plan that will be applied to the corresponding SQL

    statement. Unless you remove them, Oracle retains outlines indefinitely.

    The only effect outlines have on caching execution plans is that the outline's category name is used in

    addition to the SQL text to identify whether the plan is in cache. Oracle can automatically create outlines for all SQL

    statements, or you can create them for specific SQL statements. In either case, the outlines derive their input from the

    optimizer. Oracle creates stored outlines automatically when you set the initialization parameter

    CREATE_STORED_OUTLINES to true. When activated, Oracle creates outlines for all compiled SQL statements. You can

    create stored outlines for specific statements using the CREATE OUTLINE statement.
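    As a minimal sketch (the outline name and statement are illustrative), an outline for a specific statement can be created like this:

    CREATE OR REPLACE OUTLINE emp_dept_ol
    FOR CATEGORY test_outln
    ON SELECT ENAME, DNAME FROM EMP E, DEPT D WHERE E.DEPTNO = D.DEPTNO;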


    Example: Tweak access path of Vendor code using outline

    We have a statement in a vendor application (package procedure proc1) that selects from a collection type, and because of

    a wrong cardinality assumption the optimizer uses the wrong plan. Here is the statement:

    SELECT a.data, b.data2

    FROM table1 a,

    table2 b,

    (SELECT /*+ NO_MERGE */ column_value

    FROM TABLE (CAST (coll_test_type (1, 2, 3) AS coll_test_type)) d) c

    WHERE a.id1 = c.column_value and a.id2=b.id2;

    We can hint the optimizer with the proper cardinality value like this:

    SELECT a.data, b.data2

    FROM table1 a,

    table2 b,

    (SELECT /*+ NO_MERGE CARDINALITY(d 3) */ column_value

    FROM TABLE (CAST (coll_test_type (1, 2, 3) AS coll_test_type)) d) c

    WHERE a.id1 = c.column_value and a.id2=b.id2;

    But in our case we do not have access to the statement code (it's in a wrapped package), so what we will do is the following:

    1. Create a stored outline for the original statement by running our wrapped package (Outline 1).

    2. Create a stored outline for the hinted statement (Outline 2).

    3. "Swap" the hints between Outline 1 and Outline 2.

    This way we will get an outline for our original statement that carries the hints that make it use the execution plan as if it were hinted the way we need.


    Let's start with creating the stored outline for our SQL "hidden" in the wrapped package wrapped_pkg, procedure proc1.

    SQL> alter session set create_stored_outlines=test_outln;

    Session altered.

    SQL> execute wrapped_pkg.proc1;

    PL/SQL procedure successfully completed.

    SQL> alter session set create_stored_outlines=false;

    Session altered.

    Please note that in order to create a stored outline, you need to grant the CREATE ANY OUTLINE privilege directly (not through a role) to the owner of wrapped_pkg. In the script above we've created the stored outline in the TEST_OUTLN category. Let's see what we have in the OL$ table:

    SQL> set long 4000

    SQL> select ol_name, sql_text from outln.ol$ where category='TEST_OUTLN';

    OL_NAME

    ------------------------------

    SQL_TEXT

    --------------------------------------------------------------------------------

    SYS_OUTLINE_031028174923638

    SELECT a.data, b.data2 FROM table1 a, table2 b,

    (SELECT /*+ NO_MERGE */ column_value

    FROM TABLE (CAST (coll_test_type (1, 2, 3) AS coll_test_type))) c

    WHERE a.id1 = c.column_value and a.id2=b.id2

    SQL> spool off


    As you see, Oracle has created the outline named SYS_OUTLINE_031028174923638 for our

    statement. Now let's create the outline for a hinted version of the statement:

    SQL> alter session set create_stored_outlines=test_outln;

    Session altered.

    SQL> SELECT a.data, b.data2 FROM table1 a, table2 b,

    2 (SELECT /*+ NO_MERGE CARDINALITY(d 5)*/ column_value

    3 FROM TABLE (CAST (coll_test_type (1, 2, 3) AS coll_test_type)) d) c

    4 WHERE a.id1 = c.column_value and a.id2=b.id2;

    SQL> alter session set create_stored_outlines=false;

    Session altered.

    The new stored outline is named SYS_OUTLINE_031028201739552. Now we only need to replace the hints of the original outline with

    those of the newly created one.

    SQL> delete outln.ol$hints where ol_name='SYS_OUTLINE_031028174923638';

    22 rows deleted.

    SQL> update outln.ol$hints set ol_name='SYS_OUTLINE_031028174923638' where ol_name='SYS_OUTLINE_031028201739552';

    21 rows updated.

    SQL> commit;


    As you can see, the number of hints is different and we need to correct this in the OL$ table:

    SQL> select ol_name, hintcount from outln.ol$ where category='TEST_OUTLN';

    OL_NAME HINTCOUNT

    ------------------------------ ----------

    SYS_OUTLINE_031028174923638 22

    SYS_OUTLINE_031028201739552 21

    SQL> update outln.ol$ set hintcount=21 where ol_name='SYS_OUTLINE_031028174923638';

    1 row updated.

    SQL> commit;

    Commit complete.

    Now everything is ready to test whether the outline will be used when we turn the usage of outlines on:

    SQL> select ol_name, flags from outln.ol$;

    OL_NAME FLAGS

    ------------------------------ ----------

    SYS_OUTLINE_031028174923638 0

    SYS_OUTLINE_031028201739552 0


    SQL> connect scott/tiger

    Connected.

    SQL> alter session set use_stored_outlines=test_outln;

    Session altered.

    SQL> execute wrapped_pkg.proc1;

    PL/SQL procedure successfully completed.

    SQL> connect outln/outln

    Connected.

    SQL> select ol_name, flags from ol$;

    OL_NAME FLAGS

    ------------------------------ ----------

    SYS_OUTLINE_031028174923638 1

    SYS_OUTLINE_031028201739552 0

    The FLAGS column in the OL$ table is set to 1 when the outline is used for the first time. It corresponds to the USED column of the DBA_OUTLINES

    view. Also, the OUTLINE_CATEGORY column in V$SQL will show which outline category was used for the SQL statement.

    The last thing to check is to actually peek at V$SQL_PLAN and see if the plan is the one that we wanted.

    One way to check the plan is using the below statement -

    select * from table(dbms_xplan.display_cursor(null,null,'OUTLINE'));


    Partitioning

    Partition optimizations

    Partitioning is an important feature for manageability. Oracle supports partitioning of both tables and

    indexes. While Oracle supports a variety of partitioning methods, range partitioning (or composite partitioning

    range/hash or range/list) is perhaps the most useful partitioning method, particularly in data warehousing where

    rolling windows of data are common. Range partitioning on date ranges can have enormous benefits for the

    load/drop cycle in a data warehouse, both in terms of efficiency and manageability.

    However, partitioning is useful for query processing as well, and Oracle's optimizer is fully partition-aware. Again,

    date range partitioning is typically by far the most important technique since decision support queries usually

    contain conditions that constrain to a specific time period. Consider the case where two years' worth of sales

    data is stored partitioned by month. Assume that we want to calculate the sum of the sales in the last three

    months. Oracle's optimizer will know that the most efficient way to access the data is by scanning the partitions

    for the last three months. That way, exactly the relevant data is scanned. A system lacking range partitioning

    would either have to scan the entire table 8 times more data -- or use an index, which is an extremely

    inefficient access method when the amount of data that needs to be accessed is large. The ability of the

    optimizer to avoid scanning irrelevant partitions is known as partition pruning.

    Basics of Partitioning

    Partitioning allows a table, index or index-organized table to be subdivided into smaller pieces. Each piece of the
    database object is called a partition. Each partition has its own name, and may optionally have its own storage
    characteristics. From the perspective of a database administrator, a partitioned object has multiple pieces that
    can be managed either collectively or individually. However, from the perspective of the application, a
    partitioned table is identical to a non-partitioned table; no modifications are necessary when accessing a
    partitioned table using SQL DML commands. Database objects (tables, indexes, and index-organized tables) are
    partitioned using a 'partitioning key', a set of columns that determines in which partition a given row will reside.
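    To make this concrete, a minimal sketch of a range-partitioned table follows; the table name, columns and
    partition bounds are hypothetical:

    -- Hypothetical sales table, range partitioned by month on sale_date
    CREATE TABLE sales
    ( sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER(10,2) )
    PARTITION BY RANGE (sale_date)
    ( PARTITION sales_jan2007 VALUES LESS THAN (TO_DATE('01-FEB-2007','DD-MON-YYYY')),
      PARTITION sales_feb2007 VALUES LESS THAN (TO_DATE('01-MAR-2007','DD-MON-YYYY')),
      PARTITION sales_max     VALUES LESS THAN (MAXVALUE) );

    A query with a predicate on sale_date can then be pruned to the matching partitions only.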


    Partitioning

    Basic Partitioning Strategies

    Oracle Partitioning offers three fundamental data distribution methods that control how the data is actually
    going to be placed into the various individual partitions, namely:

    Range: The data is distributed based on a range of values of the partitioning key (for a date column as the
    partitioning key, the 'January-2007' partition contains rows with partitioning-key values between '01-JAN-2007'
    and '31-JAN-2007'). The data distribution is a continuum without any holes, and the lower boundary of a
    range is automatically defined by the upper boundary of the preceding range.

    List: The data distribution is defined by a list of values of the partitioning key (for a region column as the
    partitioning key, the 'North America' partition may contain the values 'Canada', 'USA', and 'Mexico'). A special
    'DEFAULT' partition can be defined to catch all values for a partition key that are not explicitly defined by any of
    the lists.

    Hash: A hash algorithm is applied to the partitioning key to determine the partition for a given row. Unlike the

    other two data distribution methods, hash does not provide any logical mapping between the data and any

    partition.

    Using the above-mentioned data distribution methods, a table can be partitioned either as a single-level or a
    composite partitioned table:

    Single (one-level) Partitioning: A table is defined by specifying one of the data distribution methodologies, using
    one or more columns as the partitioning key.

    Composite Partitioning: A combination of two data distribution methods is used to define a composite
    partitioned table.
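    As an illustration, a minimal sketch of a composite range-hash partitioned table (all names are hypothetical):

    -- Range partitioned by quarter; each range partition is subdivided
    -- into 4 hash subpartitions on customer_id
    CREATE TABLE orders
    ( order_id    NUMBER,
      customer_id NUMBER,
      order_date  DATE )
    PARTITION BY RANGE (order_date)
    SUBPARTITION BY HASH (customer_id) SUBPARTITIONS 4
    ( PARTITION ord_q1_2007 VALUES LESS THAN (TO_DATE('01-APR-2007','DD-MON-YYYY')),
      PARTITION ord_q2_2007 VALUES LESS THAN (TO_DATE('01-JUL-2007','DD-MON-YYYY')) );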


    Partitioning

    Indexing strategy on partitioned tables

    Local Indexes: A local index is an index on a partitioned table that is coupled with the underlying partitioned
    table, 'inheriting' the partitioning strategy from the table. Consequently, each partition of a local index
    corresponds to one - and only one - partition of the underlying table. The coupling enables optimized partition
    maintenance; for example, when a table partition is dropped, Oracle simply has to drop the corresponding index
    partition as well. No costly index maintenance is required. Local indexes are most common in data warehousing
    environments. A local index is non-prefixed if it is not partitioned on a left prefix of the index columns.

    Global Partitioned Indexes: A global partitioned index is an index on a partitioned or non-partitioned table that is
    partitioned using a different partitioning key or partitioning strategy than the table. Global partitioned indexes
    can be partitioned using range or hash partitioning and are uncoupled from the underlying table. For example, a
    table could be range partitioned by month and have twelve partitions, while an index on that table could be
    range partitioned using a different partitioning key and have a different number of partitions. Global partitioned
    indexes are more common for OLTP than for data warehousing environments. A global partitioned index is
    prefixed if it is partitioned on a left prefix of the index columns.

    Global Non-Partitioned Indexes: A global non-partitioned index is essentially identical to an index on a
    non-partitioned table. The index structure is not partitioned and is uncoupled from the underlying table. In data
    warehousing environments, the most common usage of global non-partitioned indexes is to enforce primary key
    constraints. OLTP environments, on the other hand, mostly rely on global non-partitioned indexes. Oracle
    additionally provides a comprehensive set of SQL commands for managing partitioned tables. These include
    commands for adding new partitions and for dropping, splitting, moving, merging, truncating, and optionally
    compressing partitions.
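    To illustrate the three index types, a hedged sketch against the hypothetical sales table from the earlier example:

    -- Local index: one index partition per table partition, maintained automatically
    CREATE INDEX sales_date_ix ON sales (sale_date) LOCAL;

    -- Global partitioned index: partitioned independently of the table
    -- (a global range-partitioned index must end with a MAXVALUE partition)
    CREATE INDEX sales_amt_gix ON sales (amount)
    GLOBAL PARTITION BY RANGE (amount)
    ( PARTITION amt_low VALUES LESS THAN (1000),
      PARTITION amt_max VALUES LESS THAN (MAXVALUE) );

    -- Global non-partitioned index, e.g. to support a primary key
    CREATE UNIQUE INDEX sales_id_ux ON sales (sale_id);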


    Partitioning

    Partition-wise joins, GROUP-BYs and sorts

    Certain operations can be conducted on a partition-wise basis when those operations involve the partitioning
    key of a table. For example, suppose that a sales table is range partitioned by date. When a query requests
    sales records ordered by date, Oracle's optimizer realizes that each partition can be sorted independently (on a
    partition-wise basis) and the results simply concatenated afterwards. Sorting each partition
    separately is much more efficient than sorting the entire table at one time. Similar optimizations are made for
    join and GROUP-BY operations.

    Partition-wise joins can be applied when two tables are being joined together, and at least one of these tables is

    partitioned on the join key. Partition-wise joins break a large join into smaller joins of 'identical' data sets for the

    joined tables. 'Identical' here is defined as covering exactly the same set of partitioning key values on both sides

    of the join, thus ensuring that only a join of these 'identical' data sets will produce a result

    and that other data sets do not have to be considered.
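    A full partition-wise join requires both tables to be partitioned identically on the join key. A minimal sketch
    with hypothetical tables:

    -- Both tables hash partitioned on customer_id with the same partition count,
    -- so the join below can be executed as 8 smaller pair-wise joins
    CREATE TABLE customers ( customer_id NUMBER, cust_name VARCHAR2(50) )
    PARTITION BY HASH (customer_id) PARTITIONS 8;

    CREATE TABLE cust_orders ( order_id NUMBER, customer_id NUMBER, amount NUMBER )
    PARTITION BY HASH (customer_id) PARTITIONS 8;

    SELECT c.cust_name, SUM(o.amount)
    FROM   customers c, cust_orders o
    WHERE  c.customer_id = o.customer_id
    GROUP  BY c.cust_name;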

    11g partitioning enhancements

    Extended Composite Partitioning allows the following composite partitioning schemes:

    Range-Hash (available since 8i)

    Range-List (available since 9i)

    Range-Range

    List-Range

    List-Hash

    List-List


    Partitioning

    11g partitioning enhancements

    Interval Partitioning - Interval partitioning is an extension of range partitioning, where the system is able to

    create new partitions as they are required. The PARTITION BY RANGE clause is used in the normal way to
    identify the transition point for the partition, then the new INTERVAL clause is used to calculate the range for
    new partitions when the values go beyond the existing transition point.

    CREATE TABLE interval_tab

    ( id NUMBER, code VARCHAR2(10), description VARCHAR2(50), created_date DATE )

    PARTITION BY RANGE (created_date) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))

    ( PARTITION part_01 values LESS THAN (TO_DATE('01-NOV-2007','DD-MON-YYYY')) );

    Provided we insert data with a created_date value less than '01-NOV-2007', the data will be placed in the existing
    partition and no new partitions will be created. But if we add data beyond the range of the existing partition, a
    new partition is created automatically.

    System Partitioning - System partitioning gives you all the advantages of partitioning, but leaves the decision
    of how the data is partitioned to the application layer. The PARTITION clause is used to name the target
    partition for a row during INSERT, and partition-extended syntax can also be used in other DML and in SELECT queries.

    Reference Partitioning - This allows tables related by foreign keys to be logically equi-partitioned. The child
    table is partitioned using the same partitioning key as the parent table without having to duplicate the key
    columns. Partition maintenance operations performed on the parent table are reflected on the child table,
    but no partition maintenance operations are allowed on the child table. This is achieved by the PARTITION BY
    REFERENCE (FK constraint name) clause for the child table.

    Virtual Column-Based Partitioning - Virtual columns are not physically stored in the table, but are derived
    from data in the table. These virtual columns can be used in the partitioning key in all basic partitioning
    schemes. For example, we can create a table that is list partitioned on a virtual column that represents the
    first letter of the username column of the table (using the GENERATED ALWAYS AS clause).
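    A minimal sketch of the virtual-column case just described (table and partition names are hypothetical):

    -- 11g: list partition on a virtual column derived from the first letter of username
    CREATE TABLE app_users
    ( user_id      NUMBER,
      username     VARCHAR2(30),
      first_letter VARCHAR2(1) GENERATED ALWAYS AS (UPPER(SUBSTR(username,1,1))) VIRTUAL )
    PARTITION BY LIST (first_letter)
    ( PARTITION p_a_m   VALUES ('A','B','C','D','E','F','G','H','I','J','K','L','M'),
      PARTITION p_n_z   VALUES ('N','O','P','Q','R','S','T','U','V','W','X','Y','Z'),
      PARTITION p_other VALUES (DEFAULT) );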


    Initialization parameters - Performance factors

    Before changing any initialization parameter for improving performance:

    Understand database / application type OLTP/DW/Hybrid

    Analyze advisories/performance diagnostics

    Understand application SLAs

    Identify bottlenecks

    Analyze impact and test application before implementation

    Running Oracle Database with an underscore (hidden) parameter makes you different from the rest of the
    world, and this is not how Oracle Database was tested and intended to be run in the first place (think bugs).
    So extra caution is needed before changing any hidden parameters.
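    If you do need to inspect hidden parameters, a commonly used sketch reads them from the X$ fixed tables
    (this requires connecting as SYS, since underscore parameters are not shown in V$PARAMETER by default):

    -- Current values of hidden (underscore) parameters
    SELECT a.ksppinm  parameter,
           b.ksppstvl current_value
    FROM   x$ksppi a, x$ksppcv b
    WHERE  a.indx = b.indx
    AND    a.ksppinm LIKE '\_%' ESCAPE '\'
    ORDER  BY 1;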


    Initialization parameters - Performance factors

    db_cache_size
    - Default buffer cache

    db_keep_cache_size / db_recycle_cache_size
    - Isolating frequently/sparingly used objects in memory (see the KEEP pool sketch after this list)

    db_nK_cache_size
    - Handling tablespaces with a different block size

    pga_aggregate_target and workarea_size_policy
    - Automatic sizing of the *_area_size parameters

    cursor_sharing
    - Sharing SQLs that differ only in literals

    sga_max_size and sga_target

    shared_pool_size and shared_pool_reserved_size
    - Avoid hard parsing and shared pool latch contention where possible

    statistics_level
    - Includes advisories, including db_cache_advice
    - Collect statistics automatically for diagnosing performance issues
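    As a sketch of the KEEP pool usage mentioned above (the table name and pool size are hypothetical):

    -- Size the KEEP pool, then direct a small, hot lookup table into it
    ALTER SYSTEM SET db_keep_cache_size = 64M;
    ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);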


    Initialization parameters - Performance factors

    optimizer_index_cost_adj
    - The smaller the value, the cheaper index access appears to the optimizer

    optimizer_index_caching
    - Makes the optimizer assume the given percentage of index blocks is cached

    optimizer_mode
    - Change depending on response-time/throughput requirements

    optimizer_max_permutations
    - 80,000 in 9i; hidden in 10g with a value of 2,000

    db_file_multiblock_read_count
    - Read more blocks in a single IO, as supported by the OS platform
    - Auto-tuned in 10g Release 2

    parallel_automatic_tuning and parallel_adaptive_multi_user
    - Handle parallel queries intelligently and automatically

    optimizer_dynamic_sampling
    - Default 0 in 9.0, 1 in 9.2, 2 in 10g


    Initialization parameters - Performance factors

    optimizer_features_enable
    - Preserve optimizer behavior after an upgrade

    optimizer_secure_view_merging
    - Merge views / inline views considering or ignoring security

    fast_start_parallel_rollback and recovery_parallelism
    - Recover in parallel during crash recovery

    query_rewrite_enabled and query_rewrite_integrity
    - true, false, force; enables rewriting a query to use a materialized view in place of accessing the actual table
    - Required for function-based indexes before 9.2.0.4
    - Mainly a DW requirement; enforced/stale_tolerated/trusted data integrity levels

    A few hidden parameters important to know
    - Numerous undocumented parameters affect performance; use them only in the recommended way
    - _optimizer_cost_model (IO/CPU)
    - _optimizer_cost_based_transformation
    - _hash_join_enabled
    - _unnest_subquery
    - _gby_hash_aggregation_enabled


    Instance Level Tuning

    ASMM

    In the 10g version, ASMM (Automatic Shared Memory Management) was introduced to relieve DBAs from
    sizing some parts of the SGA by themselves.

    When enabled, it lets Oracle decide the right size for some components of the SGA:

    Database buffer cache (default pool)

    Shared pool

    Large pool

    Java pool

    From 10gR2, the streams pool is included

    They are called auto-tuned parameters.

    The main objectives justifying this new functionality are:

    - Distribute the available memory depending on the current workload. The MMAN process takes
    regular memory snapshots to evaluate the needs and thereby the dispatching of the usable memory.

    - Enhance memory usage depending on the activity, and avoid memory errors like ORA-04031.

    The ASMM is driven by one init parameter: SGA_TARGET.

    When set to 0, ASMM is disabled and you run with the old method, so you need to define the auto-tuned
    parameters above by yourself. The default value for SGA_TARGET is 0, so ASMM is disabled by default.


    Instance Level Tuning

    The conditions to enable the ASMM mechanism are:

    STATISTICS_LEVEL=TYPICAL or ALL

    When you use a value greater than 0 for SGA_TARGET, the ASMM is enabled and the memory will be spread

    between all components: auto-tuned and manual parameters.

    The SGA_TARGET value will therefore define the memory size sharable between auto-tuned and manual

    parameters.

    The manual parameters are:

    Log buffer

    Other buffer caches (KEEP/RECYCLE, other block sizes)

    Streams pool (new in Oracle Database 10g)

    Fixed SGA and other internal allocations

    Resizing SGA_TARGET

    SGA_TARGET is a dynamic parameter and can be changed through Enterprise Manager or with the ALTER SYSTEM command.

    SGA_TARGET can be increased up to the value of SGA_MAX_SIZE. It can be reduced until any one of the auto-tuned
    components reaches its minimum size (either a user-specified minimum or an internally determined minimum).
    If you increase the value of SGA_TARGET, the additional memory is distributed according to the auto-tuning
    policy across the auto-tuned components. If you reduce the value of SGA_TARGET, the memory is taken away by
    the auto-tuning policy from one or more of the auto-tuned components. Therefore any change in the value of
    SGA_TARGET affects only the sizes of the auto-tuned components.
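    For example, assuming the instance was started with a sufficiently large SGA_MAX_SIZE and an spfile:

    -- Grow the memory shared by the auto-tuned components
    ALTER SYSTEM SET sga_target = 2G;

    -- Check how the auto-tuned components are currently sized
    SELECT component, current_size/1024/1024 size_mb
    FROM   v$sga_dynamic_components
    WHERE  current_size > 0;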


    Instance Level Tuning

    Shared Pool Tuning

    Oracle keeps SQL statements, packages, object information and many other items in an area in the SGA known as

    the shared pool. This sharable area of memory is managed as a sophisticated cache and heap manager rolled into

    one. It has 3 fundamental problems to overcome:

    - The unit of memory allocation is not a constant - memory allocations from the pool can be anything from a

    few bytes to many kilobytes

    - Not all memory can be 'freed' when a user finishes with it (as is the case in a traditional heap manager), as
    the aim of the shared pool is to maximize sharability of information. The information in the memory may be
    useful to another session - Oracle cannot know in advance whether the items will be of any use to anyone else
    or not.

    - There is no disk area to page out to, so this is not like a traditional cache where there is a file backing store.
    Only "recreatable" information can be discarded from the cache, and it has to be re-created when it is next
    needed.

    Issues faced with a poorly tuned Shared Pool

    Latch contention for the library cache latch

    Latch contention for the shared pool latch

    High CPU parse times

    High numbers of reloads in V$LIBRARYCACHE

    Lots of parse calls

    Frequent ORA-04031 errors


    Instance Level Tuning

    Reducing the load on the Shared Pool

    Parse Once / Execute Many

    By far the best approach to use in OLTP type applications is to parse a statement only once and hold

    the cursor open, executing it as required. This results in only the initial parse for each statement

    (either soft or hard). Obviously there will be some statements which are rarely executed and so maintaining

    an open cursor for them is a wasteful overhead.

    Eliminating Literal SQL

    If you have an existing application it is unlikely that you could eliminate all literal SQL but you should be prepared

    to eliminate some if it is causing problems. By looking at the V$SQLAREA view it is possible to see which

    literal statements are good candidates for converting to use bind variables.
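    A common sketch for spotting such candidates groups V$SQLAREA rows on a leading substring of the statement
    text; statements that appear many times differing only in literals bubble to the top (the threshold of 20 is arbitrary):

    -- Many cached copies of statements sharing the same first 40 characters
    -- usually indicate literal SQL that should use bind variables
    SELECT SUBSTR(sql_text, 1, 40) sql_start,
           COUNT(*)                copies
    FROM   v$sqlarea
    GROUP  BY SUBSTR(sql_text, 1, 40)
    HAVING COUNT(*) > 20
    ORDER  BY copies DESC;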

    NB: Note the use of parameter CURSOR_SHARING discussed earlier. Judicious use of this parameter helps Oracle

    to convert literal SQLs to autobind wherever appropriate, making them sharable and thus decreasing load.

    Avoid Invalidations

    Certain commands will change the state of cursors to INVALID. These commands directly modify the context of
    the objects associated with cursors: TRUNCATE, ANALYZE or DBMS_STATS.GATHER_XXX on tables or indexes,
    and grant changes on underlying objects. The associated cursors will stay in the SQL area, but the next time
    they are referenced they must be fully reloaded and reparsed, so global performance will be impacted.


    Instance Level Tuning

    Flushing the SHARED POOL

    On systems which use a lot of literal SQL, the shared pool is likely to fragment over time such that the degree of
    concurrency which can be achieved diminishes. Flushing the shared pool will often restore performance for a
    while, as it can cause many small chunks of memory to be coalesced. There is likely to be an interim dip in
    performance after the flush, as the act of flushing may remove sharable SQL from the shared pool that must
    then be reparsed; the flush itself does nothing to address the root cause of shared pool fragmentation.
    The command to flush the shared pool is:

    ALTER SYSTEM FLUSH SHARED_POOL;

    Contrary to reports elsewhere, items kept in the shared pool using DBMS_SHARED_POOL.KEEP will NOT be
    flushed by this command. Any items (objects or SQL) actually pinned by sessions at the time of the
    flush will also be left in place.

    NB: Flushing the shared pool will flush any cached sequences, potentially leaving gaps in the sequence range.
    DBMS_SHARED_POOL.KEEP('sequence_name','Q') can be used to KEEP sequences, preventing such gaps.
    Also, with a large shared pool, flushing may cause a momentary halt of the system, especially when the instance is
    very active.

    Investigate Shared Pool demand

    1) If using SGA_TARGET, to gauge how ASMM is working, issue a query like the following:

    select to_char(end_time, 'dd-Mon-yyyy hh24:mi') end_tm, component, oper_type, initial_size, target_size, final_size
    from V$SGA_RESIZE_OPS
    --where component='shared pool'
    order by end_tm;

    NOTE: You can use the line 'where component='shared pool' to limit the analysis to just the Shared Pool. If you

    see indications of problems growing and shrinking memory, this can be a sign that SGA_TARGET /

    SGA_MAX_SIZE settings are too small.


    Instance Level Tuning

    Investigate Shared Pool demand

    2) To figure out activity for soft vs. hard parses

    select

    to_char(100 * sess / calls, '999999999990.00') || '%' cursor_cache_hits,

    to_char(100 * (calls - sess - hard) / calls, '999990.00') || '%' soft_parses,

    to_char(100 * hard / calls, '999990.00') || '%' hard_parses

    from

    ( select value calls from v$sysstat where name = 'parse count (total)' ),

    ( select value hard from v$sysstat where name = 'parse count (hard)' ),

    ( select value sess from v$sysstat where name = 'session cursor cache hits' );

    3) Invalidations and reloads (from the 'Library Cache Activity' section of the AWR/Statspack report)

    Version count (from the 'SQL ordered by Version Count' section of the AWR/Statspack report)

    4) Check V$SHARED_POOL_RESERVED view (due to Bug 3669074, use Oracle provided query prior to 10.2 version)

    5) There is a fixed table called x$ksmlru that tracks allocations in the shared pool that cause other objects in the

    shared pool to be aged out. This fixed table can be used to identify what is causing the large allocation.

    To monitor this fixed table, just run the following (connected as SYS):

    SELECT * FROM X$KSMLRU WHERE ksmlrsiz > 0;


    Instance Level Tuning

    Investigate Shared Pool demand

    6) You can run this query to build trend information on memory usage in the SGA.

    Remember, the 'free' class in this query is not specific to the Shared Pool, but is across the SGA.

    NOTE: It is not recommended to run queries on X$KSMSP when the DB is under load. Performance of the

    database will be impacted.

    SELECT KSMCHCLS CLASS, COUNT(KSMCHCLS) NUM, SUM(KSMCHSIZ) SIZ,

    To_char( ((SUM(KSMCHSIZ)/COUNT(KSMCHCLS)/1024)),'999,999.00')||'k' "AVG SIZE"

    FROM X$KSMSP GROUP BY KSMCHCLS;

    a) If 'free' memory (column SIZ) is low (less than 5 MB or so) you may need to increase shared_pool_size and
    shared_pool_reserved_size. You should expect 'free' memory to increase and decrease over time.
    Seeing trends where 'free' memory decreases consistently is not necessarily a problem, but seeing consistent
    spikes up and down could be a problem.

    b) If 'freeable' or 'perm' memory (column SIZ) continually grows, then it is possible you are seeing a memory
    bug (in cases where ORA-04031 is hitting).

    c) If the 'freeabl' and 'recr' memory classes (column SIZ) are always huge, this indicates that you have a lot of
    cursor info stored that is not being released.

    d) If 'free' memory (column SIZ) is huge but you are still getting ORA-04031 errors, the problem is likely reloads
    and invalidations in the library cache causing fragmentation.


    Instance Level Tuning

    Automatic PGA Tuning

    The Program Global Area (PGA) resides in the process-private memory of the server process. It contains global
    variables, data structures and control information for a server process. An example of such information
    is the run-time area of a cursor.

    The performance of complex long-running queries, typical in a DSS environment, depends to a large extent on the
    memory available in the Program Global Area (PGA), part of which is known as the work area.

    The size of a work area can be controlled and tuned. Generally, bigger work areas can significantly improve the
    performance of a particular operator at the cost of higher memory consumption. Ideally, the size of a work area
    is big enough that it can accommodate the input data and the auxiliary memory structures allocated by its
    associated SQL operator. This is known as the optimal size of a work area (e.g. an in-memory sort). When the size
    of the work area is smaller than optimal (e.g. a disk sort), the response time increases, because an extra pass is
    performed over part of the input data. This is known as the one-pass size of the work area. Under the one-pass
    threshold, when the size of a work area is far too small compared to the input data size, multiple passes over the
    input data are needed. This can dramatically increase the response time of the operator. This is known as the
    multi-pass size of the work area.

    Automatic PGA tuning removes the requirement for the DBA to manually set values for the different PGA work
    area components, e.g. SORT_AREA_SIZE, HASH_AREA_SIZE, BITMAP_MERGE_AREA_SIZE, and
    CREATE_BITMAP_AREA_SIZE.


    Instance Level Tuning

    How to enable automatic PGA Tuning

    Starting with Oracle9i, an option is provided to completely automate the management of PGA memory.

    Administrators merely need to specify the maximum amount of PGA memory available to an instance using a

    newly introduced initialization parameter PGA_AGGREGATE_TARGET.

    The database server automatically distributes this memory among various active queries in an intelligent manner

    so as to ensure maximum performance benefits and the most efficient utilization of memory.

    The automatic SQL execution memory management feature is enabled by setting the parameter

    WORKAREA_SIZE_POLICY to AUTO and by specifying a size for PGA_AGGREGATE_TARGET in the initialization file.
    These two parameters can also be set dynamically using the ALTER SYSTEM command. In the absence of either of
    these parameters, the database will revert to manual PGA management mode.
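    For example (the sizes here are illustrative only):

    -- Both parameters are dynamic
    ALTER SYSTEM SET workarea_size_policy = AUTO;
    ALTER SYSTEM SET pga_aggregate_target = 2G;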

    How To Tune PGA_AGGREGATE_TARGET

    1) The preferred and easiest way of monitoring and setting the pga_aggregate_target parameter is the
    'PGA Memory Advisory' section in the AWR and Statspack reports. When you go down or up the advisory section
    from the row with 'Size Factr' = 1.00, you get estimations for disk usage - column 'Estd Extra W/A MB
    Read/Written to Disk' - for bigger or smaller settings of pga_aggregate_target. The lower the disk usage figure
    in this column, usually the better.

    Your first goal is to find a setting of pga_aggregate_target at which the number in the column 'Estd Extra W/A MB
    Read/Written to Disk' does not substantially reduce any more. In other words, further increases of
    pga_aggregate_target won't give any more benefit.


    Instance Level Tuning

    2) What's the best value of PGA_AGGREGATE_TARGET - Rule of Thumb

    If we have an Oracle instance configured on a system with 16 GB of physical memory, then the suggested
    PGA_AGGREGATE_TARGET value to start with, for an OLTP system, is (16 GB * 80%) * 20% ~= 2.5 GB, and for a
    DSS system is (16 GB * 80%) * 50% ~= 6.5 GB.

    In the above equation, we assume that 20% of the memory will be used by the OS; in an OLTP system 20% of
    the remaining memory will be used for PGA_AGGREGATE_TARGET, and the remaining memory goes to the Oracle
    SGA and non-Oracle process memory. So make sure that you have enough memory for your SGA and
    also for non-Oracle processes.

    3) The second step in tuning PGA_AGGREGATE_TARGET is to monitor performance using the available PGA
    statistics and see if PGA_AGGREGATE_TARGET is undersized or oversized. Several dynamic performance views
    are available for this purpose; the main ones are V$PGASTAT and V$SQL_WORKAREA_HISTOGRAM.

    over allocation count (from V$PGASTAT):

    Over-allocating PGA memory can happen if the value of PGA_AGGREGATE_TARGET is too small to accommodate
    the untunable PGA memory part plus the minimum memory required to execute the work area workload. When
    this happens, Oracle cannot honor the initialization parameter PGA_AGGREGATE_TARGET, and extra PGA memory
    needs to be allocated. over allocation count is the number of times the system was detected in this state since
    database startup. This count should ideally be equal to zero.
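    A minimal sketch for checking the key V$PGASTAT figures:

    -- 'over allocation count' should ideally be zero; 'cache hit percentage'
    -- indicates how much work-area processing ran optimally in memory
    SELECT name, value, unit
    FROM   v$pgastat
    WHERE  name IN ('aggregate PGA target parameter',
                    'total PGA allocated',
                    'over allocation count',
                    'cache hit percentage');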


    Instance Level Tuning

    4) PGA advice views: By examining the V$PGA_TARGET_ADVICE and V$PGA_TARGET_ADVICE_HISTOGRAM views,
    you will be able to determine how key PGA statistics will be impacted if you change the value of
    PGA_AGGREGATE_TARGET. To enable automatic generation of the PGA advice performance views, make sure the
    following parameters are set: PGA_AGGREGATE_TARGET, and STATISTICS_LEVEL set to TYPICAL (the default) or
    ALL. The following query shows statistics for all nonempty buckets:

    SELECT LOW_OPTIMAL_SIZE/1024 low_kb,(HIGH_OPTIMAL_SIZE+1)/1024 high_kb,

    optimal_executions, onepass_executions, multipasses_executions

    FROM v$sql_workarea_histogram

    WHERE total_executions != 0;

    The result of the query might look like the following:

    LOW_KB HIGH_KB OPTIMAL_EXECUTIONS ONEPASS_EXECUTIONS MULTIPASSES_EXECUTIONS
    ------ ------- ------------------ ------------------ ----------------------
         8      16             156255                  0                      0
        16      32                150                  0                      0
        32      64                 89                  0                      0
        64     128                 13                  0                      0
       128     256                 60                  0                      0
       256     512                  8                  0                      0
       512    1024                657                  0                      0
      1024    2048                551                 16                      0

    In the last row, 551 work areas used an optimal amount of memory, while 16 ran in one-pass mode and none ran
    in multi-pass mode.


    Wait events

    "WAIT EVENT" Defined

    At any given moment, every Oracle process is either busy servicing a request or waiting for something specific
    to happen. By busy we mean that the process wishes to use the CPU. For example, a dedicated server
    process might be searching the buffer cache to see if a certain data block is in memory. This process would be
    said to be busy and not waiting. The ARC0 process, meanwhile, might be waiting for LGWR to signal that an
    online redo log needs archiving. In this case, the ARC0 process is waiting. There are more than 800 wait events
    in Oracle 10g.

    Wait Event Interface

    It is very beneficial that Oracle is diligent about tracking wait event information and making it available to

    DBAs. We call this the wait event interface. By querying v$ views, we can see what events processes are

    waiting on to an amazing level of detail. For example, we might learn that a dedicated server process has been

    waiting 30 milliseconds for the operating system to read eight blocks from data file 42, starting at block 18042.

    We can also see summary information of how much time each Oracle process has spent waiting on each type

    of wait event for the duration of the process. In addition, we can direct an Oracle process to write detailed wait

    event data to a trace file for later analysis using TKPROF.
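    As a quick sketch of the interface in action:

    -- What is each session waiting on right now?
    SELECT sid, event, wait_time, seconds_in_wait, state
    FROM   v$session_wait
    ORDER  BY seconds_in_wait DESC;

    -- Time spent per wait event since instance startup
    SELECT event, total_waits, time_waited
    FROM   v$system_event
    ORDER  BY time_waited DESC;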

    Some common wait events

    Some of the common wait events are db file sequential read, db file scattered read, log file sync, latch free,
    enqueue, db file parallel write, latch: cache buffers chains, direct path write, log file switch completion,
    enq: SQ - contention, SQL*Net more data to client, SQL*Net more data from client, etc.


    Wait events

    WHY WAIT EVENT INFORMATION IS USEFUL

    Using the wait event interface, you can get insights into where time is being spent. If a report takes four hours

    to complete, for example, the wait event interface will tell you how much of that four hours was spent waiting

    for disk reads caused by full table scans, disk reads