
Tuning Methodology

Quick Things to Check For
Check Disk I/O
Improper PGA Setup
Modify init.ora Parameters
SQL Code Tuning
Collect Schema Statistics
Redo Log Switches
Large Full Table Scans
Small Full Table Scans and Index Scans
Many Indexes on Data Buffer Cache
Check for Skewed Indexes (Unbalanced)
Tuning Database Buffer Cache
Fragmentation on DB Objects
Size of LOG_BUFFER
Size of SHARED_POOL_SIZE
Allocate Files Properly (check waits on them)
Checking Active Statements
Use IPC for Local Connections
Check Undo Parameters
Detect High SQL Parse
Monitor Open and Cached Cursors
Detect Top 10 Queries in SQL Area
Allocate Objects into Multiple Block Buffers (another web page)
Check for Indexes Not Used and HOT Tables
Detect and Resolve Buffer Busy Waits
Show Percentage of a Table in the Data Buffer
Testing Procedures or Packages for Performance
Using PGA Advice Utility
Check Sorts
Optimizing Indexes (creating 32k block size)

Quick Things to Check For

My goal is to quickly identify and correct performance problems. Here is a summary of the things that I look at first:

1 - Install STATSPACK first, and get hourly snaps working (see the STATSPACK sketch after this checklist).
2 - Get an SQL access report (or plan9i.sql), an spreport during peak times, and statspack_alert.sql output.


3 - Look for "silver bullet fixes":

    partial schema statistics (using dbms_stats)
    missing indexes
    optimizer_index_cost_adj=15      # 10-15 for OLTP systems, 50 for DW; this adjusts the optimizer to favor index access
    optimizer_index_caching=85       # depending on RAM for index caching, around 85
    optimizer_mode=first_rows        # for OLTP
    parallel_automatic_tuning=TRUE   # parallelizes full-table scans; because parallel full-table scans are very fast, the CBO will give a higher cost to index access and be friendlier to full-table scans
    hash_area_size too small (too many nested loop joins)

4 - Fully utilize server RAM - On a dedicated Oracle server, use all extra RAM for db_cache_size, less the PGAs and a 20% RAM reserve for the OS.
5 - Get the bottlenecks - See STATSPACK top 5 wait events - OEM performance pack reports - TOAD reports
6 - Look for buffer busy waits resulting from table/index freelist shortages
7 - See if large-table full-table scans can be removed with well-placed indexes
8 - If tables are low volatility, seek an MV that can pre-join/pre-aggregate common queries. Turn on automatic query rewrite.
9 - Look for non-reentrant SQL (literal values inside SQL from v$sql) - If so, set cursor_sharing=force
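For step 1, a minimal sketch of getting STATSPACK installed and snapping hourly, assuming a typical 9i/10g layout where the scripts live under $ORACLE_HOME/rdbms/admin (adjust paths and the snap level to your environment):

-- Run as SYSDBA; creates the PERFSTAT schema and the STATSPACK packages
@?/rdbms/admin/spcreate.sql

-- Schedule hourly snapshots through DBMS_JOB (run as PERFSTAT)
@?/rdbms/admin/spauto.sql

-- Take a manual snapshot, then build a report between two snap IDs
exec statspack.snap;
@?/rdbms/admin/spreport.sql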

Non-Use of Bind Variables

A quick method of seeing whether code is being reused (a key indicator of proper bind variable usage) is to look at the values of reusable and non-reusable memory in the shared pool. A SQL script for determining this comparison of reusable to non-reusable code is shown here:

ttitle 'Shared Pool Utilization'
spool sql_garbage
select 1 nopr, to_char(a.inst_id) inst_id, a.users users,
       to_char(a.garbage,'9,999,999,999') garbage,
       to_char(b.good,'9,999,999,999') good,
       to_char((b.good/(b.good+a.garbage))*100,'9,999,999.999') good_percent
  from (select a.inst_id, b.username users,
               sum(a.sharable_mem+a.persistent_mem) Garbage,
               to_number(null) good
          from sys.gv_$sqlarea a, dba_users b
         where (a.parsing_user_id = b.user_id and a.executions<=1)
         group by a.inst_id, b.username
        union
        select distinct c.inst_id, b.username users,
               to_number(null) garbage,
               sum(c.sharable_mem+c.persistent_mem) Good
          from dba_users b, sys.gv_$sqlarea c
         where (b.user_id=c.parsing_user_id and c.executions>1)
         group by c.inst_id, b.username) a,
       (select a.inst_id, b.username users,
               sum(a.sharable_mem+a.persistent_mem) Garbage,
               to_number(null) good
          from sys.gv_$sqlarea a, dba_users b
         where (a.parsing_user_id = b.user_id and a.executions<=1)
         group by a.inst_id, b.username
        union
        select distinct c.inst_id, b.username users,
               to_number(null) garbage,
               sum(c.sharable_mem+c.persistent_mem) Good
          from dba_users b, sys.gv_$sqlarea c
         where (b.user_id=c.parsing_user_id and c.executions>1)
         group by c.inst_id, b.username) b
 where a.users=b.users
   and a.inst_id=b.inst_id
   and a.garbage is not null
   and b.good is not null
union
select 2 nopr, '-------' inst_id, '-------------' users,
       '--------------' garbage, '--------------' good,
       '--------------' good_percent
  from dual
union
select 3 nopr, to_char(a.inst_id,'999999'),
       to_char(count(a.users)) users,
       to_char(sum(a.garbage),'9,999,999,999') garbage,
       to_char(sum(b.good),'9,999,999,999') good,
       to_char(((sum(b.good)/(sum(b.good)+sum(a.garbage)))*100),'9,999,999.999') good_percent
  from (select a.inst_id, b.username users,
               sum(a.sharable_mem+a.persistent_mem) Garbage,
               to_number(null) good
          from sys.gv_$sqlarea a, dba_users b
         where (a.parsing_user_id = b.user_id and a.executions<=1)
         group by a.inst_id, b.username
        union
        select distinct c.inst_id, b.username users,
               to_number(null) garbage,
               sum(c.sharable_mem+c.persistent_mem) Good
          from dba_users b, sys.gv_$sqlarea c
         where (b.user_id=c.parsing_user_id and c.executions>1)
         group by c.inst_id, b.username) a,
       (select a.inst_id, b.username users,
               sum(a.sharable_mem+a.persistent_mem) Garbage,
               to_number(null) good
          from sys.gv_$sqlarea a, dba_users b
         where (a.parsing_user_id = b.user_id and a.executions<=1)
         group by a.inst_id, b.username
        union
        select distinct c.inst_id, b.username users,
               to_number(null) garbage,
               sum(c.sharable_mem+c.persistent_mem) Good
          from dba_users b, sys.gv_$sqlarea c
         where (b.user_id=c.parsing_user_id and c.executions>1)
         group by c.inst_id, b.username) b
 where a.users=b.users
   and a.inst_id=b.inst_id
   and a.garbage is not null
   and b.good is not null
 group by a.inst_id
 order by 1,2 desc
/
spool off
ttitle off
set pages 22

An example report is:

Date: 03/25/05                                               Page: 1
Time: 17:51 PM           Shared Pool Utilization              SYSTEM
                            whoville database

users                Non-Shared SQL      Shared SQL  Percent Shared
-------------------- --------------  -------------- --------------
WHOAPP                  532,097,982       1,775,745           .333
SYS                       5,622,594       5,108,017         47.602
DBSNMP                      678,616         219,775         24.463
SYSMAN                      439,915       2,353,205         84.250
SYSTEM                      425,586          20,674          4.633
                     --------------  -------------- --------------
5                       541,308,815       9,502,046          1.725

As you can see, the majority owner in this application, WHOAPP, is showing only 0.3 percent reusable code by memory usage and is tying up an amazing 530 megabytes with non-reusable code! Let's look at a database with good reuse statistics. Look at this one:

Date: 11/13/05                                               Page: 1
Time: 03:15 PM           Shared Pool Utilization            PERFSTAT
                            dbaville database

users                Non-Shared SQL      Shared SQL  Percent Shared
-------------------- --------------  -------------- --------------
DBAVILLAGE                9,601,173      81,949,581          89.513
PERFSTAT                  2,652,827         199,868           7.006
DBASTAGER                 1,168,137      35,468,687          96.812
SYS                          76,037       5,119,125          98.536
                     --------------  -------------- --------------
4                        13,498,174     122,737,261          90.092

Notice how the two application owners, DBAVILLAGE and DBASTAGER, show 89.513 and 96.812 percent reuse by memory footprint for code.

So what else can we look at regarding code reuse? The above reports give us a gross indication; how about something with a bit more usability to correct the situation? The V$SQLAREA and V$SQLTEXT views give us the capability to look at the current code in the shared pool and determine if it is using, or not using, bind variables.

set lines 140 pages 55 verify off feedback off
col num_of_times heading 'Number|Of|Repeats'
col SQL heading 'SubString - &&chars Characters'
col username format a15 heading 'User'


@title132 'Similar SQL'
spool rep_out\&db\similar_sql&&chars
select b.username,
       substr(a.sql_text,1,&&chars) SQL,
       count(a.sql_text) num_of_times
  from v$sqlarea a, dba_users b
 where a.parsing_user_id=b.user_id
 group by b.username, substr(a.sql_text,1,&&chars)
having count(a.sql_text)>&&num_repeats
 order by count(a.sql_text) desc;
spool off
undef chars
undef num_repeats
clear columns
set lines 80 pages 22 verify on feedback on
ttitle off

This simple script determines, based on the first x characters (input when the report is executed), the number of SQL statements that are identical up to those first x characters. This shows us the repeating code in the database and helps us track down the offending statements for correction. An example output:

Date: 02/23/05                                               Page: 1
Time: 10:20 AM                Similar SQL                     SYSTEM
                           whoville database

                                                                                                                               Number Of
User            SubString - 120 Characters                                                                                       Repeats
--------------- ------------------------------------------------------------------------------------------------------------ ----------
WHOAPP          SELECT Invoices."INVOICEKEY", Invoices."CLIENTKEY", Invoices."BUYSTATUS", Invoices."DEBTORKEY", Invoices."INPUTTRANSKEY"   1752
WHOAPP          SELECT DisputeCode.DisputeCode , DisputeCode.Disputed , InvDispute."ROWID" , DisputeCode."ROWID" FROM InvDispute , Disp     458
WHOAPP          SELECT Transactions.PostDate , Payments.PointsAmt , Payments.Type_ AS PmtType , Payments.Descr , Payments.FeeBasis , Pay    449
SYS             SELECT SUM(Payments.Amt) AS TotPmtAmt , SUM(Payments.FeeEscrow) AS TotFeeEscrow , SUM(Payments.RsvEscrow) AS TotRsvEscro    428
WHOAPP          SELECT SUM(Payments.Amt) AS TotPmtAmt, SUM(Payments.FeeEscrow) AS TotFeeEscrow, SUM(Payments.RsvEscrow) AS TotRsvEscrow     428
WHOAPP          SELECT Transactions.BatchNo , Payments.Amt , Payments."ROWID" , Transactions."ROWID" FROM Payments , Transactions WHERE     396
WHOAPP          INSERT INTO Payments (PaymentKey, AcctNo, Amt, ChargeAmt, Descr, FeeBasis, FeeEarned, FeeEscrow, FeeRate, FeeTaxAmt, Hol    244
WHOAPP          SELECT Clients.Name , Clients.ClientNo , Invoices.InvNo , Invoices.ClientKey AS InvClientKey , Transactions.ClientKey AS    244
SYS             SELECT COUNT(*) AS RecCount , INVOICES."ROWID" , TRANSACTIONS."ROWID" , PROGRAMS."ROWID" FROM INVOICES , TRANSACTIONS ,     232

Using a substring from the above SQL, the V$SQLTEXT view can be used to pull an entire listing of the code.

The proper fix for non-bind variable usage is to re-write the application to use bind variables. This of course can be an expensive and time-consuming process, but ultimately it provides the best fix for the problem. However, what if you can't change the code? Oracle has provided the CURSOR_SHARING initialization parameter, which will automatically replace the literals in your code with bind variables. The settings for CURSOR_SHARING are EXACT (the default), FORCE, and SIMILAR.

· EXACT – The statements have to match exactly to be reusable

· FORCE – Always replace literals

· SIMILAR – Perform literal peeking and replace when it makes sense

We usually suggest the use of the SIMILAR option for CURSOR_SHARING.
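As a hedged example of changing the setting, on most 9i/10g systems the parameter can be altered dynamically at the system or session level; test at session level first if you can:

-- System-wide, for an spfile-based instance
ALTER SYSTEM SET cursor_sharing = 'SIMILAR' SCOPE=BOTH;

-- Or try the effect on a single session before committing to it
ALTER SESSION SET cursor_sharing = 'FORCE';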

Improper Index Usage

You will be happy to know that starting with Oracle9i there is a new view that keeps the explain plans for all current SQL in the shared pool. This view, appropriately named V$SQL_PLAN, allows DBAs to determine exactly what statements are using full table scans and, more importantly, how often the particular SQL statements are being executed.

col object_name format a28
col rows|blocks|pool a30
set pages 55
set linesize 140
set trims on
ttitle 'Full Table - Index Scans'
spool Full_Table-Index_Scans.txt
select sp.object_name,
       (select executions
          from v$sqlarea sa
         where sa.address = sp.address
           and sa.hash_value = sp.hash_value) no_of_full_scans,
       (select trim(lpad(nvl(trim(to_char(num_rows)),' '),10,' ')||' | '||
               lpad(nvl(trim(to_char(blocks)),' '),10,' ')||' |'||buffer_pool)
          from dba_tables
         where table_name = sp.object_name
           and owner = sp.object_owner) "rows|blocks|pool",
       (select sql_text
          from v$sqlarea sa
         where sa.address = sp.address
           and sa.hash_value = sp.hash_value) sqltext
  from v$sql_plan sp
 where operation IN ('TABLE ACCESS','INDEX')
   and options in ('FULL','FULL SCAN','FAST FULL SCAN','SKIP SCAN','SAMPLE FAST FULL SCAN')
   and object_owner IN ('XGUARD935')
   and rownum < 60
 order by 2 desc, 3 desc;
spool off
set pages 20
ttitle off

Notice that I didn't limit myself to just full table scans; I also looked for expensive index scans. The report shows:

Fri Aug 24                                                               page 1
                           Full Table - Index Scans

OBJECT_NAME                  NO_OF_FULL_SCANS rows|blocks|pool
---------------------------- ---------------- ---------------------------------
SQLTEXT
--------------------------------------------------------------------------------
LOOKUP_WORKTYPE                        956170 17 | 5 | DEFAULT
SELECT WORKTYPEID FROM LOOKUP_WORKTYPE WHERE WORKTYPECODE = :B1

ROUTINGNUMBER                          294118 520 | 5 | DEFAULT
SELECT ROUTINGNUMBERID, ROUTINGNUMBER, BANKID, CENTERID FROM ROUTINGNUMBER WHERE BANKID = :B1

EXCHANGEITEMEXCEPTION                   39421 72280 | 1566 | DEFAULT
SELECT COUNT(1) FROM EXCHANGEITEMQUERY EIQU, EXCHANGEITEMEXCEPTION EIEX WHERE :B1 = EIQU.EXCHANGEITEMID AND EIQU.EXCHANGEITEMQUERYID=EIEX.EXCHANGEITEMQUERYID AND EIEX.REMOVED = 0

ANDOR                                    3454 20 | 5 | DEFAULT
SELECT ANDORID, EXCEPTIONID, ISAND, LEFTID, RIGHTID FROM ANDOR ORDER BY EXCEPTIONID, ANDORID

EXCEPTIONS                               3377 97 | 60 | DEFAULT
SELECT E.EXCEPTIONID, EXCEPTIONNAME, DESCRIPTION, EXCEPTIONCODE, E.CENTERID, E.BANKID, E.CUSTOMERID, E.ACCOUNTID, DATASOURCEID, DATAFIELDID, INEQUALITYID, CONSTRAINTDATASOURCEID, CONSTRAINTDATAVALUE, D.DEFINITIONID, DEFINITIONATTRIBUTEID, E.ACTIVESTATUSID, E.APPLICATIONID, ISUSERDEFINED FROM EXCEPTIONS E, DEFINITION D WHERE E.APPLICATIONID = :B1 AND E.EXCEPTIONID = D.EXCEPTIONID (+) ORDER BY E.EXCEPTIONNAME, D.DEFINITIONID

X937USERRECORD                           3317 0 | 1 | DEFAULT
INSERT INTO X937USERRECORD_ARCH SELECT * FROM X937USERRECORD WHERE OUTJOBID = :B1

UN_CENTERNAME                            1679
SELECT CENTERID, CENTERNAME, ACTIVESTATUSID AS CENTERACTIVESTATUSID, COMMENTS AS CENTERCOMMENTS, ITEMSETTINGID AS CENTERITEMSETTINGID, CENTERCODE, EXPORTSTATUSID AS CENTEREXPORTSTATUSID, EXPORTTIME AS CENTEREXPORTTIME, GLACCOUNTNUMBER, NULL AS BANKID FROM CENTER ORDER BY CENTERNAME

MACHINE                                  1481 3 | 5 | DEFAULT
SELECT M.MACHINEID, MACHINENAME, IPADDRESS, S.SERVICEID, SERVICENAME, APPLICATIONID FROM SERVICE S, MACHINE M, PROCESS P WHERE S.SERVICEID = P.SERVICEID AND M.MACHINEID = P.MACHINEID ORDER BY MACHINENAME, SERVICENAME

Notice that instead of trying to capture the full SQL statement I just grab the hash value. I can then use the hash value to pull the interesting SQL statements using SQL similar to:


select sql_text
  from v$sqltext
 where hash_value = &hash
 order by piece;

Once I see the SQL statement I use SQL similar to this to pull the table indexes:

set lines 132
col index_name form a30
col table_name form a30
col column_name format a30
select a.table_name, a.index_name, a.column_name, b.index_type
  from dba_ind_columns a, dba_indexes b
 where a.table_name = upper('&tab')
   and a.table_name = b.table_name
   and a.index_owner = b.owner
   and a.index_name = b.index_name
 order by a.table_name, a.index_name, a.column_position;
set lines 80

Once I have both the SQL and the indexes for the full-scanned table, I can usually quickly decide whether any additional indexes are needed or an existing index should be used. In some cases there is an existing index that could be used if the SQL were rewritten. In that case I will usually suggest the SQL be rewritten. An example extract from a SQL analysis of this type is shown here:

SQL> @get_it
Enter value for hash: 605795936

SQL_TEXT
----------------------------------------------------------------
DELETE FROM BOUNCE WHERE UPDATED_TS < SYSDATE - 21

SQL> @get_tab_ind
Enter value for tab: bounce

TABLE_NAME   INDEX_NAME                 COLUMN_NAME    INDEX_TYPE
------------ -------------------------- -------------- ----------
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  MAILING_ID     NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  RECIPIENT_ID   NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  JOB_ID         NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  REPORT_ID      NORMAL
BOUNCE       BOUNCE_PK                  MAILING_ID     NORMAL
BOUNCE       BOUNCE_PK                  RECIPIENT_ID   NORMAL
BOUNCE       BOUNCE_PK                  JOB_ID         NORMAL

As you can see here, there is no index on UPDATED_TS.
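If that delete is frequent and selective enough to justify it, one possible fix is an index on the filtered column. This is only a sketch: the index name is hypothetical, and whether it pays off depends on how many rows fall inside the 21-day window versus the size of the table.

-- Hypothetical index supporting DELETE ... WHERE UPDATED_TS < SYSDATE - 21
CREATE INDEX bounce_updated_ts_idx ON bounce (updated_ts);

-- Refresh statistics so the CBO can consider the new index
EXEC dbms_stats.gather_table_stats(user, 'BOUNCE', cascade => TRUE);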

SQL> @get_it
Enter value for hash: 3347592868


SQL_TEXT
----------------------------------------------------------------
SELECT VERSION_TS, CURRENT_MAJOR, CURRENT_MINOR, CURRENT_BUILD,
CURRENT_URL, MINIMUM_MAJOR, MINIMUM_MINOR, MINIMUM_BUILD,
MINIMUM_URL, INSTALL_RA_PATH, HELP_RA_PATH
FROM CURRENT_CLIENT_VERSION

Here there is no WHERE clause, hence a FTS is required.

SQL> @get_it
Enter value for hash: 4278137387

SQL_TEXT
----------------------------------------------------------------
SELECT STATUS FROM DB_STATUS WHERE DB_NAME = 'ARCHIVE'

SQL> @get_tab_ind
Enter value for tab: db_status

Improper Memory Configuration

In this section we will discuss two major areas of memory: the database buffer area and the shared pool area. The PGA areas are discussed in a later section.

The Database Buffer Area

Anything that goes to users or gets into the database must go through the database buffers. Gone are the days of a single buffer area (the default): now we have 2K, 4K, 8K, 16K, and 32K buffer areas, plus keep and recycle buffer pools on top of the default area. Within these areas we have the consistent read, current read, free, exclusive current, and many other types of blocks that are used in Oracle's multi-block consistency model. The V$BH view (and its parent, the X$BH table) are the major tools used by the DBA to track block usage; however, you may find that the data in the V$BH view can be misleading unless you also tie in block size data.

set pages 50
ttitle80 'All Buffers Status'
spool All_Buffers_Status.txt
select '32k '||status as status, count(*) as num
  from v$bh
 where file# in (select file_id from dba_data_files
                  where tablespace_name in
                        (select tablespace_name from dba_tablespaces where block_size=32768))
 group by '32k '||status
union
select '16k '||status as status, count(*) as num
  from v$bh
 where file# in (select file_id from dba_data_files
                  where tablespace_name in
                        (select tablespace_name from dba_tablespaces where block_size=16384))
 group by '16k '||status
union
select '8k '||status as status, count(*) as num
  from v$bh
 where file# in (select file_id from dba_data_files
                  where tablespace_name in
                        (select tablespace_name from dba_tablespaces where block_size=8192))
 group by '8k '||status
union
select '4k '||status as status, count(*) as num
  from v$bh
 where file# in (select file_id from dba_data_files
                  where tablespace_name in
                        (select tablespace_name from dba_tablespaces where block_size=4096))
 group by '4k '||status
union
select '2k '||status as status, count(*) as num
  from v$bh
 where file# in (select file_id from dba_data_files
                  where tablespace_name in
                        (select tablespace_name from dba_tablespaces where block_size=2048))
 group by '2k '||status
union
select status, count(*) as num
  from v$bh
 where status='free'
 group by status
order by 1
/
spool off
ttitle off

As you can see, we will need to be the SYS user to run it. An example report would be:

Date: 12/13/05                                   Page: 1
Time: 10:39 PM       All Buffers Status          PERFSTAT
                      whoville database

STATUS           NUM
--------- ----------
32k cr          2930
32k xcur       29064
8k cr           1271
8k free            3
8k read            4
8k xcur       378747
free           10371

As you can see, while there are free buffers, only 3 of them are available to the 8k default area and none are available to our 32k area. The free buffers are actually assigned to a keep or recycle pool area (hence the null value for the block size) and are not available for normal usage.

So, if you see buffer busy waits, db block waits and the like, and you run the above report and see no free buffers, it is probably a good bet you need to increase the number of available buffers for the area showing no free buffers. You should not immediately assume you need more buffers because of buffer busy waits, as these can be caused by other problems such as row lock waits, ITL waits and other issues. Luckily, Oracle10g has made it relatively simple to determine if we have these other types of waits:

-- Crosstab of object and statistic for an owner
col "Object" format a20
set numwidth 12
set lines 132
set pages 50
@title132 'Object Wait Statistics'
spool rep_out\&&db\obj_stat_xtab
select * from
(select DECODE(GROUPING(a.object_name), 1, 'All Objects', a.object_name) AS "Object",
        sum(case when a.statistic_name = 'ITL waits'
                 then a.value else null end) "ITL Waits",
        sum(case when a.statistic_name = 'buffer busy waits'
                 then a.value else null end) "Buffer Busy Waits",
        sum(case when a.statistic_name = 'row lock waits'
                 then a.value else null end) "Row Lock Waits",
        sum(case when a.statistic_name = 'physical reads'
                 then a.value else null end) "Physical Reads",
        sum(case when a.statistic_name = 'logical reads'
                 then a.value else null end) "Logical Reads"
   from v$segment_statistics a
  where a.owner like upper('&owner')
  group by rollup(a.object_name)) b
where (b."ITL Waits">0 or b."Buffer Busy Waits">0)
/
spool off
clear columns
ttitle off

This is an object statistic cross-tab report based on the V$SEGMENT_STATISTICS view. The cross-tab report generates a listing showing the statistics of concern as headers across the page rather than listings going down the page, and summarizes them by object. This allows us to easily compare total buffer busy waits to the number of ITL or row lock waits. This ability to compare the ITL and row lock waits to buffer busy waits lets us see which objects may be experiencing contention for ITL lists, which may be experiencing excessive locking activity, and, through comparisons, which are highly contended for without the row lock or ITL waits. An example of the output of the report, edited for length, is shown here:

Date: 12/09/05                                                       Page: 1
Time: 07:17 PM             Object Wait Statistics                    PERFSTAT
                               whoville database

                  ITL   Buffer Busy   Row Lock      Physical         Logical
Object          Waits         Waits      Waits         Reads           Reads
-------------- ------ ------------- ---------- ------------- ---------------
BILLING             0         63636      38267       1316055       410219712
BILLING_INDX1       1         16510         55        151085        21776800
...
DELIVER_INDX1    1963         36096      32962       1952600        60809744
DELIVER_INDX2      88         16250       9029      18839481       342857488
DELIVER_PK       2676         99748      29293      15256214       416206384
DELIVER_INDX3    2856        104765      31710       8505812       467240320
...
All Objects     12613      20348859    1253057    1139977207     20947864752

In the above report the BILLING_INDX1 index has a large number of buffer busy waits, but we can't account for them from the ITL or row lock waits. This indicates that the index is being constantly read and the blocks then aged out of memory, forcing waits as they are re-read for the next process. On the other hand, almost all of the buffer busy waits for the DELIVER_INDX1 index can be attributed to ITL and row lock waits. In situations where there are large numbers of ITL waits we need to consider increasing the INITRANS setting for the table to remove this source of contention. If the predominant wait is row lock waits, then we need to determine if we are properly using locking and cursors in our application (for example, we may be overusing SELECT ... FOR UPDATE type code). If, on the other hand, all the waits are unaccounted-for buffer busy waits, then we need to consider increasing the number of database block buffers we have in our SGA. As you can see, this object wait cross-tab report can be a powerful addition to our tuning arsenal. By knowing how our buffers are being used and seeing exactly what waits are causing our buffer wait indications, we can quickly determine if we need to tune objects or add buffers, making sizing buffer areas fairly easy. But what about the Automatic Memory Manager in 10g? It is a powerful tool for DBAs with systems that have a predictable load profile; however, if your system has rapid changes in user and memory loads then AMM is playing catch-up and may deliver poor performance as a result. In the case of memory it may be better to hand the system too much rather than just enough, just in time (JIT).
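For the ITL-dominated waits, a hedged sketch of raising INITRANS. The table name DELIVER is assumed from the index names in the report, and the value 10 is only illustrative; INITRANS affects newly formatted blocks, so a rebuild (or move) is what applies it to existing blocks.

-- Affects newly formatted table blocks only
ALTER TABLE deliver INITRANS 10;

-- Rebuilding the index applies the new setting to its existing blocks
ALTER INDEX deliver_indx1 REBUILD INITRANS 10;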


As many companies have found when trying the JIT methodology in their manufacturing environment, it only works if things are easily predictable. The AMM is utilized in 10g by setting two parameters, SGA_MAX_SIZE and SGA_TARGET. The Oracle memory manager will size the various buffer areas as needed within the range between the base settings (SGA_TARGET) and SGA_MAX_SIZE, using the SGA_TARGET setting as an "optimal" and SGA_MAX_SIZE as a maximum, with the manual settings used in some cases as a minimum size for the specific memory component.
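A hedged example of setting AMM up with headroom (the sizes are illustrative only; SGA_MAX_SIZE is static, so the change only takes effect after an instance restart):

ALTER SYSTEM SET sga_max_size  = 2G    SCOPE=SPFILE;   -- maximum, needs a restart
ALTER SYSTEM SET sga_target    = 1536M SCOPE=BOTH;     -- the "optimal" working size
-- Optional: a manual setting acts as a floor under a critical component
ALTER SYSTEM SET db_cache_size = 512M  SCOPE=BOTH;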

Check Disk I/O

Disk stress will show up on the Oracle side as excessive read or write times. Filesystem stress is shown by calculating the I/O timings as shown here:

rem Purpose: Calculate IO timing values for datafiles
col name format a65
col READTIM/PHYRDS heading 'Avg|Read Time' format 9,999.999
col WRITETIM/PHYWRTS heading 'Avg|Write Time' format 9,999.999
set lines 132 pages 45
start title132 'IO Timing Analysis'
spool rep_out\&db\io_time
select f.FILE#, d.name, PHYRDS, PHYWRTS, READTIM/PHYRDS, WRITETIM/PHYWRTS
  from v$filestat f, v$datafile d
 where f.file#=d.file#
   and phyrds>0 and phywrts>0
union
select a.FILE#, b.name, PHYRDS, PHYWRTS, READTIM/PHYRDS, WRITETIM/PHYWRTS
  from v$tempstat a, v$tempfile b
 where a.file#=b.file#
   and phyrds>0 and phywrts>0
 order by 5 desc;
spool off
ttitle off
clear col

An example of the output:

Date: 11/20/05                                                      Page: 1
Time: 11:12 AM              IO Timing Analysis                      PERFSTAT
                              whoraw database

FILE# NAME              PHYRDS  PHYWRTS READTIM/PHYRDS WRITETIM/PHYWRTS
----- -------------- --------- -------- -------------- ----------------
   13 /dev/raw/raw19     77751   102092     76.8958599       153.461829
   33 /dev/raw/raw35     32948    52764     65.7045041       89.5749375
    7 /dev/raw/raw90    245854   556242     57.0748615       76.1539869
   54 /dev/raw/raw84    208916   207539     54.5494409       115.610912
   40 /dev/raw/raw38      4743    27065     38.4469745       47.1722889
   15 /dev/raw/raw41      3850     7216     35.6272727       66.1534091
   12 /dev/raw/raw4     323691   481471     32.5510193       100.201424
   16 /dev/raw/raw50     10917    46483     31.9372538       74.5476626
   18 /dev/raw/raw24      3684     4909     30.8045603       71.7942554
   23 /dev/raw/raw58     63517    78160     29.8442779       84.4477866
    5 /dev/raw/raw91    102783    94639     29.1871516       87.8867909

As you can see, we are looking at an example report from a RAW configuration using single disks. Notice how both read and write times exceed even the rather large good-practice limit of 10-20 milliseconds for a disk read. However, in my experience reads should not exceed 5 milliseconds, and usually, with modern buffered reads, 1-2 milliseconds. Oracle is more tolerant of write delays since it uses a delayed write mechanism, so 10-20 milliseconds on writes will normally not cause significant Oracle waits; however, the smaller you can get read and write times, the better!

For the money, I would suggest RAID0/1 or RAID1/0, that is, striped and mirrored. It provides nearly all of the dependability of RAID5 and gives much better write performance. You will usually take at least a 20 percent write performance hit using RAID5. For read-only applications RAID5 is a good choice, but in high-transaction/high-performance environments the write penalties may be too high.

Table 1 shows how Oracle suggests RAID should be used with Oracle database files.

RAID   Type of RAID             Control File   Database File      Redo Log File   Archive Log File
-----  -----------------------  -------------  -----------------  --------------  ----------------
0      Striping                 Avoid          OK                 Avoid           Avoid
1      Shadowing                Best           OK                 Best            Best
1+0    Striping and Shadowing   OK             Best               Avoid           Avoid
3      Striping with static     OK             OK                 Avoid           Avoid
       parity
5      Striping with rotating   OK             Best if RAID0-1    Avoid           Avoid
       parity                                  not available

Table 1: RAID Recommendations (From Metalink NOTE: 45635.1)

Improper PGA Setup

Oracle provides AWRRPT or statspack reports to track and show the number of sorts. Unfortunately, hashes are not so easily tracked. Oracle tracks disk and memory sorts, the number of sort rows, and other sort-related statistics. Hashes, on the other hand, can usually only be tracked through execution plans for cumulative values, and through various views for live values. After 9i the parameter PGA_AGGREGATE_TARGET was provided to allow automated setting of the sort and hash areas. For currently active sorts or hashes the following script can be used to watch the growth of temporary areas.


column now format a14
column operation format a15
column dt new_value td noprint
set feedback off
select to_char(sysdate,'ddmonyyyyhh24miss') dt from dual;
set lines 132 pages 55
@title132 'Sorts and Hashes'
spool rep_out\&&db\sorts_hashes&&td
select sid, work_area_size, expected_size, actual_mem_used, max_mem_used, tempseg_size,
       to_char(sysdate,'ddmonyyyyhh24miss') now,
       operation_type operation
  from v$sql_workarea_active;
spool off
clear columns
set lines 80 feedback on
ttitle off

Example output from this report.

Date: 01/04/06                                                      Page: 1
Time: 01:27 PM              Sorts and Hashes                            SYS
                            whoville database

      Work Area Expected  Actual Mem  Max Mem Tempseg
 SID       Size     Size        Used     Used    Size Now             Operation
---- ---------- -------- ----------- -------- ------- --------------- ---------------
1176    6402048  6862848           0        0         04jan2006132711 GROUP BY (HASH)
 582     114688   114688      114688   114688         04jan2006132711 GROUP BY (SORT)
 568    5484544  5909504      333824   333824         04jan2006132711 GROUP BY (HASH)
1306    3469312  3581952     1223680  1223680         04jan2006132711 GROUP BY (HASH)

As you can see, the whoville database had no hashes going to disk at the time the report was run. We can also look at the cumulative statistics in the v$sysstat view for cumulative sort data.

Date: 12/09/05                                   Page: 1
Time: 03:36 PM          Sorts Report             PERFSTAT
                         sd3p database

Type Sort                 Number Sorts
-------------------- -----------------
sorts (memory)              17,213,802
sorts (disk)                       230
sorts (rows)             3,268,041,228

Another key indicator that hashes are occurring is excessive I/O to the temporary tablespace when there are few or no disk sorts. The PGA_AGGREGATE_TARGET is the target total amount of space for all PGA memory areas. However, only 5%, or a maximum of 200 megabytes, can be assigned to any single process. The limit for PGA_AGGREGATE_TARGET is 4 gigabytes (supposedly); however, you can increase the setting above this point. The 200 megabyte limit is set by the _pga_max_size undocumented parameter; this parameter can be reset, but only under the guidance of Oracle support. But to what size should PGA_AGGREGATE_TARGET be set? The AWRRPT report in 10g provides a sort histogram which can help in this decision.

PGA Aggr Target Histogram                        DB/Inst: OLS/ols  Snaps: 73-74
-> Optimal Executions are purely in-memory operations

    Low    High
Optimal Optimal    Total Execs  Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
     2K      4K      1,283,085      1,283,085            0            0
    64K    128K          2,847          2,847            0            0
   128K    256K          1,611          1,611            0            0
   256K    512K          1,668          1,668            0            0
   512K   1024K         91,166         91,166            0            0
     1M      2M            690            690            0            0
     2M      4M            174            164           10            0
     4M      8M             18             12            6            0
-------------------------------------------------------------

In this case we are seeing 1-pass executions, indicating disk sorts are occurring, with the maximum size being in the 4M to 8M range. For an 8M sort area the PGA_AGGREGATE_TARGET should be set at 320 megabytes (sorts get 0.5*(0.05*PGA_AGGREGATE_TARGET)). For this system the setting was at 160, so 4 megabytes was the maximum sort size; as you can see, we were seeing 1-pass sorts in the 2-4M range as well, even at 160M. By monitoring the real-time or live hashes and sorts and looking at the sort histograms from the AWRRPT reports you can get a very good idea of the needed PGA_AGGREGATE_TARGET setting. If you need larger than 200 megabyte sort areas you may need to get approval from Oracle support through the iTAR process to set the _pga_max_size parameter to greater than 200 megabytes.
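Following that arithmetic, a minimal sketch of applying the 320M figure derived above (workarea_size_policy must be AUTO for the target to be honored; the value is only a starting point to re-check against the histogram):

-- per-process sort area = 0.5 * (0.05 * pga_aggregate_target)
-- => to allow an 8M in-memory sort: pga_aggregate_target >= 8M / 0.025 = 320M
ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE=BOTH;
ALTER SYSTEM SET pga_aggregate_target = 320M SCOPE=BOTH;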

Modify init.ora Parameters

- For OLTP systems the parameter DB_FILE_MULTIBLOCK_READ_COUNT is set to values 8 - 16, while in decision support systems it is set to higher values. This parameter determines the maximum number of database blocks read in one I/O operation during a full table scan. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance.

- OPTIMIZER_INDEX_COST_ADJ
This initialization parameter is a percentage value representing a comparison between the relative cost of physical I/O requests for indexed access and full table scans. The default value of 100 indicates to the cost-based optimizer that indexed access is 100% as costly (i.e., equally costly) as FULL table scan access. Usually it's around 15 for an OLTP system and 50 for DW systems. The smaller the value, the cheaper the cost of index access. I usually start with 20. Query to suggest its value:

col c1 heading 'Average Waits for|Full Scan Read I/O' format 9999.999
col c2 heading 'Average Waits for|Index Read I/O' format 9999.999
col c3 heading 'Percent of| I/O Waits|for Full Scans' format 9.99
col c4 heading 'Percent of| I/O Waits|for Index Scans' format 9.99
col c5 heading 'Starting|Value|for|optimizer|index|cost|adj' format 999


select a.average_wait c1,
       b.average_wait c2,
       a.total_waits /(a.total_waits + b.total_waits) c3,
       b.total_waits /(a.total_waits + b.total_waits) c4,
       (b.average_wait / a.average_wait)*100 c5
  from v$system_event a, v$system_event b
 where a.event = 'db file scattered read'
   and b.event = 'db file sequential read';

Here is the listing from this script:

                                                                      Starting
                                                                         Value
                                                                           for
                                                                     optimizer
                                         Percent of      Percent of      index
 Average Waits for Average Waits for      I/O Waits       I/O Waits       cost
Full Scan Read I/O    Index Read I/O for Full Scans for Index Scans        adj
------------------ ----------------- -------------- --------------- ---------
             1.473              .289            .02             .98        20

As you can see, the suggested starting value for optimizer_index_cost_adj may be too high because 98% of data waits are on index (sequential) block access. How can we "weight" this starting value for optimizer_index_cost_adj to reflect the reality that this system has only 2% waits on full-table scan reads (a typical OLTP system with few full-table scans)? As a practical matter, we never want an automated value for optimizer_index_cost_adj to be less than 1, nor more than 100.

Another one:

col a1 head "avg. wait time|(db file sequential read)"
col a2 head "avg. wait time|(db file scattered read)"
col a3 head "new setting for|optimizer_index_cost_adj"
select a.average_wait a1,
       b.average_wait a2,
       round( ((a.average_wait/b.average_wait)*100) ) a3
  from (select d.kslednam EVENT, s.kslestim / (10000 * s.ksleswts) AVERAGE_WAIT
          from x$kslei s, x$ksled d
         where s.ksleswts != 0 and s.indx = d.indx) a,
       (select d.kslednam EVENT, s.kslestim / (10000 * s.ksleswts) AVERAGE_WAIT
          from x$kslei s, x$ksled d
         where s.ksleswts != 0 and s.indx = d.indx) b
 where a.event = 'db file sequential read'
   and b.event = 'db file scattered read';

- OPTIMIZER_INDEX_CACHING
This initialization parameter represents a percentage value, ranging between the values of 0 and 99. The default value of 0 indicates to the CBO that 0% of the database blocks accessed using indexed access can be expected to be found in the buffer cache of the Oracle SGA. This implies that all index accesses will require a physical read from the I/O subsystem for every logical read from the buffer cache, also known as a 0% hit ratio on the buffer cache. This parameter applies only to the CBO's calculations of accesses for blocks in an index, not for the blocks in the table related to the index. It should be set to 90.

- Set the OPTIMIZER_FEATURES_ENABLE = 9.2.0

- OPTIMIZER_MODE = first_rows (for OLTP systems). This setting returns the first rows faster.
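Taken together, a hedged OLTP starting point for these parameters might look like the following init.ora fragment. The values are only illustrative; validate them with the queries above before applying them to a real system.

# Illustrative OLTP starting values -- adjust per the wait-event queries above
db_file_multiblock_read_count = 16
optimizer_index_cost_adj      = 20
optimizer_index_caching       = 90
optimizer_features_enable     = 9.2.0
optimizer_mode                = first_rows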

SQL Code Tuning

If the SQL hash value (SHV) corresponding to the SQL statement is not found in the library cache during the soft parse, the server process must perform a hard parse on the statement. During this operation, the execution plan for the statement must be determined and the result must be stored in the library cache. This is a computationally expensive step. The hard parse is usually accompanied by latch contention on the shared pool and library cache latches. In OLTP the aim is to parse once, execute many times. Ideally the soft parse ratio should be > 95%; if it falls significantly lower than 80% then we need to investigate.
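A quick way to check the instance-wide soft parse ratio is a minimal sketch against the cumulative v$sysstat counters (so it reflects activity since instance startup, not just the current moment):

select round(100 * (1 - hard.value / total.value), 2) soft_parse_pct
  from v$sysstat total, v$sysstat hard
 where total.name = 'parse count (total)'
   and hard.name  = 'parse count (hard)';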

-- The following query is useful for detecting programs that are performing excessive hard parses.
spool excessive_hard_parses.txt
SELECT /*+ RULE */
       substr(s.program,1,20) program,
       COUNT(*) users,
       SUM(t.value) parses,
       SUM(t.value)/COUNT(*) parses_per_session,
       SUM(t.value)/(SUM(sysdate-s.logon_time)*24) parses_per_hour
  FROM v$session s, v$sesstat t
 WHERE t.statistic# = 153
   AND s.sid = t.sid
 GROUP BY s.program
HAVING SUM(t.value)/COUNT(*) > 2.0
 ORDER BY parses_per_hour DESC;
spool off

The query produces several parse metrics aggregated by program name. The parses column indicates the total hard parse count, parses_per_session is the average number of parses for all sessions running the program, and parses_per_hour is the average number of parses per hour for all sessions running the program. Search for high numbers in the parses_per_hour column. The term high is relative: for OLTP programs, numbers below 10 are reasonable; for batch programs, higher values are acceptable. Any programs with values higher than 10 should be investigated further.

For programs that are suspect, query the library cache to identify the SQL statements being executed using the following query. Run this query as many times as are required to get a reasonable sample.

SELECT /*+ RULE */ t.sql_text
  FROM v$sql t, v$session s
 WHERE s.sql_address = t.address
   AND s.sql_hash_value = t.hash_value
   AND s.sid = &SID;

-- Identifying unnecessary parse calls at system level
spool unnecessary_parse_calls_system_level.txt
select parse_calls, executions, substr(sql_text, 1, 300)
  from v$sqlarea
 where command_type in (2, 3, 6, 7)
 order by 3;


spool off

Check for statements with a lot of executions. It is bad to have the PARSE_CALLS value in the above statement close to the EXECUTIONS value. The previous query will fire only for DML statements (to check on other types of statements use the appropriate command type number). Also ignore recursive calls (dictionary access), as they are internal to Oracle.

-- Identifying unnecessary parse calls at session level
spool unnecessary_parse_calls_sess_level.txt
select b.sid,
       substr(c.username,1,12) username,
       substr(c.program,1,15) program,
       substr(a.name,1,20) name,
       b.value
  from v$sesstat b, v$statname a, v$session c
 where a.name in ('parse count (hard)', 'execute count')
   and b.statistic# = a.statistic#
   and b.sid = c.sid
   and c.username not in ('SYS','SYSTEM')
 order by sid;
spool off

Identify the sessions involved with a lot of re-parsing (VALUE column). Query these sessions from V$SESSION and then locate the program that is being executed, resulting in so much parsing.

select a.parse_calls, a.executions, substr(a.sql_text, 1, 100)
  from v$sqlarea a, v$session b
 where b.schema# = a.parsing_schema_id
   and b.sid = &sid
 order by 1 desc;

As stated earlier, excessive parsing will result in higher than optimal CPU consumption. However, the greater impact is likely to be contention for resources in the shared pool. If many small statements are hard parsed, shared pool fragmentation is likely to result. As the shared pool becomes more fragmented, the amount of time required to complete a hard parse increases. As the process of executing many unique statements continues, resource contention worsens. The critical resources will likely be memory in the library cache and the various latches associated with the shared pool. There are several straightforward methods to detect contention. The following query shows a list of events on which sessions are waiting to complete before continuing. Since v$session_wait contains one row for each session, the query will return the total number of sessions waiting for each event. The view contains real-time data, so it should be run repeatedly to detect possible problems.

SELECT /*+ RULE */ SUBSTR(event,1,30) event, COUNT(*)
  FROM v$session_wait
 WHERE wait_time = 0
 GROUP BY SUBSTR(event,1,30), state;

If the latch free event appears continuously, then there is latch resource contention. The following query can be used to determine which latches have contention. Since v$latchholder contains one row for each session, the query will return the total number of sessions waiting for each latch. The view contains real-time data, so it should be run repeatedly.

SELECT /*+ RULE */ name, COUNT(*)
  FROM v$latchholder
 GROUP BY name;


If library cache or shared pool latches appear continuously with any frequency, then there is contention.

Latch Contention Analysis

When an Oracle session needs to place a new SQL statement in the shared pool, it has to acquire a latch, or internal lock. Under some circumstances, contention for these latches can result in poor performance. This does not happen frequently, but it is worth checking. Set db_block_lru_latches to a higher number if you are experiencing a high number of misses or sleeps.

spool latch_content_analysis.txt
clear breaks
clear computes
clear columns
column name heading "Latch Type" format a25
column pct_miss heading "Misses/Gets (%)" format 999.99999
column pct_immed heading "Immediate Misses/Gets (%)" format 999.99999
ttitle 'Latch Contention Analysis Report' skip
select n.name,
       misses*100/(gets+1) pct_miss,
       immediate_misses*100/(immediate_gets+1) pct_immed
  from v$latchname n, v$latch l
 where n.latch# = l.latch#
   and n.name in ('%cache buffer%','%protect%');
spool off

The Quick Fix

Correcting the offending software may require days or weeks. However, if performance is poor, there are some things that can be done to improve performance until the source of the problem can be corrected.

1. Increase the size of the shared pool. For minor contention problems, an increase of 20% should be suitable. For more severe problems, consider incremental increases of 50% until performance improves. If the host system has limited memory and the buffer cache hit rate is above 90%, consider reducing the size of the buffer cache to increase the size of the shared pool. A buffer cache hit ratio of 80-85% with reduced latch contention will likely produce better database performance than a higher buffer cache hit ratio with high latch contention.

2. Consider reducing the value of the optimizer_max_permutations parameter if the cost-based optimizer is being used and the database is using Oracle Enterprise Server Version 8.0 or higher. This parameter controls the maximum number of execution plans that the optimizer will develop to identify the one with the lowest cost. The default value is 80,000, but values of 100 to 1,000 usually produce execution plans identical to those produced when a higher value is used. Since hard parses account for a significant amount of the CPU consumed on short-running SQL statements, one of the artifacts of high hard parse counts is high CPU consumption. Reducing the value of optimizer_max_permutations will help mitigate the problem.

3. Flush the shared pool periodically. This will reduce memory fragmentation in the shared pool, which will reduce the elapsed time of the hard parse. The frequency depends upon the size of the shared pool and the severity of the problem. For mild problems, consider flushing twice each day. For severe problems, it may be necessary to flush the shared pool every few hours.

4. Pin frequently used PL/SQL functions and packages in the shared pool. When a program calls a method within a package, the entire package must be loaded into the shared pool. If the shared pool is highly fragmented and there is considerable latch contention, a significant amount of clock time may be required to load large packages into memory. Pinning packages and functions will improve the response time when they are accessed.
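For items 1 and 3, the statements themselves are simple; the size below is only an illustration of "current setting plus 20-50%", and the flush should be scheduled for a quiet period:

-- Item 1: enlarge the shared pool (illustrative size)
ALTER SYSTEM SET shared_pool_size = 300M SCOPE=BOTH;

-- Item 3: periodic flush, run from a privileged account
ALTER SYSTEM FLUSH SHARED_POOL;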

spool frequently_used_reloaded_objects.txt
-- To view a list of frequently used and re-loaded objects
set linesize 200
select loads, executions,
       substr(owner, 1, 15) "Owner",
       substr(namespace, 1, 20) "Type",
       substr(name, 1, 100) "Text"
  from v$db_object_cache
 where owner not in ('SYS','SYSTEM','PERFSTAT','WMSYS','XDB')
 order by loads desc;
spool off

-- To pin a package in memory
exec dbms_shared_pool.keep('standard', 'p');

spool pinned_objects.txt
-- To view a list of pinned objects
select substr(owner, 1, 15) "Owner",
       substr(namespace, 1, 20) "Type",
       substr(name, 1, 42) "Text"
  from v$db_object_cache
 where kept = 'YES'
   and owner not in ('SYS','SYSTEM')
 order by 1,3;
spool off

It is straightforward to verify that an application is using bind variables using the Oracle trace facility and tkprof, the application profiler. Tkprof produces a list of all SQL statements executed along with their execution plans and some performance statistics. These metrics are aggregated for each unique SQL statement. Verify that excess parsing is not occurring. Below is an example of a query that was parsed once for each execution. Notice that in the count column, the number of parses is equal to the number of executions. The Parse row indicates the number of hard parses that occurred for the statement. In the ideal case, the statement would be parsed once and executed many times.

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse       27      0.02       0.00          0          0          0          0
Execute     27      0.00       0.00          0          0          0          0
Fetch      108      0.03       0.00          0        189          0         81
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total      162      0.05       0.00          2        189          0         81

Once the application has been corrected, the size of the shared pool should be reevaluated to determine if it could be reduced to its original size. If shared pool flushes were employed as a temporary remedy, try to reduce the number of flushes to perhaps once per day. Excessive shared pool flushes will also result in performance degradation.


Collect Schema and DB Statistics

It is CRITICAL for Oracle to have accurate statistics. More information HERE. Examples:

-- For one table and all its indexes
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'LABTEST',
    tabname          => 'DIEGO',
    partname         => null,
    estimate_percent => 10,    -- or DBMS_STATS.AUTO_SAMPLE_SIZE
    degree           => 3,
    cascade          => true);
END;
/

-- For a full schema
BEGIN
  dbms_stats.gather_schema_stats(
    ownname          => 'LABTEST',
    estimate_percent => 10,
    granularity      => 'ALL',
    method_opt       => 'FOR ALL COLUMNS',   -- or method_opt => 'FOR ALL COLUMNS SIZE AUTO'
    degree           => DBMS_STATS.DEFAULT_DEGREE,
    options          => 'GATHER AUTO',
    cascade          => TRUE);
END;
/

Redo Log Switches

Check the alert log file to see the frequency of redo log switches. If you see errors there, or the switches are too frequent (ideally once every 30 minutes), then (see the example after this list):
1 - Increase the redo log files
2 - Add more groups
3 - Modify LOG_CHECKPOINT_TIMEOUT=0 and duplicate the value on LOG_CHECKPOINT_INTERVAL
4 - Modify archive_lag_target = 1800, so it will force the generation of archive log files every 30 minutes
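The script below counts switches per day; if it confirms the problem, items 2 and 4 can be applied with statements like the following sketch (group number, file paths and the 512M size are hypothetical and must match your own layout):

-- Item 2: add another multiplexed group (illustrative names/size)
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/PROD/redo04a.log', '/u02/oradata/PROD/redo04b.log') SIZE 512M;

-- Item 4: force an archive log roughly every 30 minutes
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;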

spool redo_log_switches.txt
set pages 100
column d1 form a20 heading "Date"
column sw_cnt form 99999 heading 'Number|of|Switches'
column Mb form 999,999 heading "Redo Size"
column redoMbytes form 999,999,9999 heading "Redo Log File Size (Mb)"
break on report
compute sum of sw_cnt on report
compute sum of Mb on report
var redoMbytes number;
begin
  select max(bytes)/1024/1024 into :redoMbytes from v$log;
end;
/
print redoMbytes
select trunc(first_time) d1,
       count(*) sw_cnt,
       count(*) * :redoMbytes Mb
  from v$log_history
 group by trunc(first_time);
spool off

Check for Large Table Full Scans

spool large_table_scans.txt
-- Find large table scans
SELECT substr(table_owner,1,10) Owner,
       substr(table_name,1,15) Table_Name,
       size_kb,
       statement_count,
       reference_count,
       substr(executions,1,4) Exec,
       substr(executions * reference_count,1,8) tot_scans
  FROM (SELECT a.object_owner table_owner,
               a.object_name table_name,
               b.segment_type table_type,
               b.bytes / 1024 size_kb,
               SUM(c.executions) executions,
               COUNT(DISTINCT a.hash_value) statement_count,
               COUNT(*) reference_count
          FROM sys.v_$sql_plan a, sys.dba_segments b, sys.v_$sql c
         WHERE a.object_owner (+) = b.owner
           AND a.object_name (+) = b.segment_name
           AND b.segment_type IN ('TABLE', 'TABLE PARTITION')
           AND a.operation LIKE '%TABLE%'
           AND a.options = 'FULL'
           AND a.hash_value = c.hash_value
           AND b.bytes / 1024 > 1024
           AND a.object_owner != 'SYS'
         GROUP BY a.object_owner, a.object_name, a.operation,
                  b.bytes/1024, b.segment_type
         ORDER BY 4 DESC, 1, 2);
spool off

spool recent_full_table_scans.txt
-- Recent full table scans
-- Should be run as SYS user
set verify off
col object_name form a30
col o.owner form a15
PROMPT Column flag in x$bh table is set to value 0x80000 when
PROMPT a block was read by a sequential scan.
SELECT o.object_name, o.object_type, o.owner, count(*)
  FROM dba_objects o, x$bh x
 WHERE x.obj = o.object_id
   AND o.object_type = 'TABLE'
   AND standard.bitand(x.flag,524288) > 0
   AND o.owner <> 'SYS'
having count(*) > 2
 group by o.object_name, o.object_type, o.owner
 order by 4 desc;
spool off

spool unused_indexes.txt
-- Do these tables contain indexes?
-- This query creates a mini "unused indexes" report that you can use to ensure that
-- any large tables that are being scanned on your system have the proper indexing scheme.
SELECT DISTINCT substr(a.object_owner,1,10) table_owner,
       substr(a.object_name,1,15) table_name,
       b.bytes / 1024 size_kb,
       d.index_name
  FROM sys.v_$sql_plan a, sys.dba_segments b, sys.dba_indexes d
 WHERE a.object_owner (+) = b.owner
   AND a.object_name (+) = b.segment_name
   AND b.segment_type IN ('TABLE', 'TABLE PARTITION')
   AND a.operation LIKE '%TABLE%'
   AND a.options = 'FULL'
   AND b.bytes / 1024 > 1024
   AND b.segment_name = d.table_name
   AND b.owner = d.table_owner
   AND b.owner != 'SYS'
 ORDER BY 1, 2;
spool off

spool physical_IO.txt
-- How much physical I/O, etc., a large table scan causes on a system.
-- It displays I/O and some wait metrics that can give a DBA more insight into what Oracle
-- is doing behind the scenes to access the object.
-- Solution: Create indexes, force use with hints
SELECT DISTINCT substr(a.object_owner,1,8) table_owner,
       substr(a.object_name,1,15) table_name,
       b.bytes / 1024 size_kb,
       substr(c.tablespace_name,1,10) Tablespace,
       substr(c.statistic_name,1,27) Statistic_Name,
       substr(c.value,1,5) Value
  FROM sys.v_$sql_plan a, sys.dba_segments b, sys.v_$segment_statistics c
 WHERE a.object_owner (+) = b.owner
   AND a.object_name (+) = b.segment_name
   AND b.segment_type IN ('TABLE', 'TABLE PARTITION')
   AND a.operation LIKE '%TABLE%'
   AND a.options = 'FULL'
   AND b.bytes / 1024 > 1024
   AND b.owner = c.owner
   AND b.owner != 'SYS'
   AND b.segment_name = c.object_name
 ORDER BY 1, 2;
spool off

Solution
Create indexes and force their use with hints.
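As a hedged illustration, using the ROUTINGNUMBER query from the report earlier in this page (the index name here is hypothetical):

-- Hypothetical supporting index
CREATE INDEX routingnumber_bankid_idx ON routingnumber (bankid);

-- Force its use if the CBO still prefers the full scan
SELECT /*+ INDEX(r routingnumber_bankid_idx) */
       routingnumberid, routingnumber, bankid, centerid
  FROM routingnumber r
 WHERE bankid = :b1;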


Check for Small Table and Index Full Scans

spool Object_Access.txt
-- You detect this by watching 'db file scattered read' in the top 5 wait events
set heading on
set feedback on
set linesize 120

ttitle 'Full Table Scans and Counts| |The "K" indicates that the table is in the KEEP Pool.'
select substr(p.owner,1,10) owner,
       substr(p.name,1,30) name,
       t.num_rows,
--     ltrim(t.cache) ch,
       decode(t.buffer_pool,'KEEP','Y','DEFAULT','N') K,
       s.blocks blocks,
       sum(a.executions) nbr_FTS
  from dba_tables t, dba_segments s, v$sqlarea a,
       (select distinct address, object_owner owner, object_name name
          from v$sql_plan
         where operation = 'TABLE ACCESS'
           and options = 'FULL') p
 where a.address = p.address
   and t.owner = s.owner
   and t.table_name = s.segment_name
   and t.table_name = p.name
   and t.owner = p.owner
   and t.owner not in ('SYS','SYSTEM')
having sum(a.executions) > 1
 group by p.owner, p.name, t.num_rows, t.cache, t.buffer_pool, s.blocks
 order by sum(a.executions) desc;

column nbr_scans format 999,999,999
column num_rows format 999,999,999
column tbl_blocks format 999,999,999
column owner format a15
column table_name format a25
column index_name format a25

ttitle 'Index full scans and counts'
select p.owner, d.table_name, p.name index_name,
       seg.blocks tbl_blocks, sum(s.executions) nbr_scans
  from dba_segments seg, v$sqlarea s, dba_indexes d,
       (select distinct address, object_owner owner, object_name name
          from v$sql_plan
         where operation = 'INDEX'
           and options = 'FULL SCAN') p
 where d.index_name = p.name
   and s.address = p.address
   and d.table_name = seg.segment_name
   and seg.owner = p.owner
   and seg.owner not in ('SYS','SYSTEM')
having sum(s.executions) > 9
 group by p.owner, d.table_name, p.name, seg.blocks
 order by sum(s.executions) desc;

ttitle 'Index range scans and counts'
select p.owner, d.table_name, p.name index_name,
       seg.blocks tbl_blocks, sum(s.executions) nbr_scans
  from dba_segments seg, v$sqlarea s, dba_indexes d,
       (select distinct address, object_owner owner, object_name name
          from v$sql_plan
         where operation = 'INDEX'
           and options = 'RANGE SCAN') p
 where d.index_name = p.name
   and s.address = p.address
   and d.table_name = seg.segment_name
   and seg.owner = p.owner
   and seg.owner not in ('SYS','SYSTEM')
having sum(s.executions) > 9
 group by p.owner, d.table_name, p.name, seg.blocks
 order by sum(s.executions) desc;

ttitle 'Index unique scans and counts'
select p.owner, d.table_name, p.name index_name,
       sum(s.executions) nbr_scans
  from v$sqlarea s, dba_indexes d,
       (select distinct address, object_owner owner, object_name name
          from v$sql_plan
         where operation = 'INDEX'
           and options = 'UNIQUE SCAN') p
 where d.index_name = p.name
   and s.address = p.address
having sum(s.executions) > 9
 group by p.owner, d.table_name, p.name
 order by sum(s.executions) desc;

spool off

Solution
Check whether those accesses are acceptable. If so, pin those tables and indexes. Example: alter table/index .... storage (buffer_pool keep);
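A hedged sketch using the small LOOKUP_WORKTYPE table from the earlier report (the index name is hypothetical, and the KEEP pool must be sized first via buffer_pool_keep / db_keep_cache_size):

ALTER TABLE lookup_worktype STORAGE (BUFFER_POOL KEEP);
ALTER INDEX lookup_worktype_pk STORAGE (BUFFER_POOL KEEP);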

Check for Many Indexes on the Data Buffer Cache

Query the X$BH and user_indexes tables.

spool indexused_on_data_buffer_cache.txt
-- Solution: Adjust parameters OPTIMIZER_INDEX_COST_ADJ=15 and OPTIMIZER_INDEX_CACHING=85 with the % of indexes on the data buffer cache
/* Recently used indexes */
/* Should be run as SYS user */
set serverout on size 1000000
set verify off
column owner format a20 trunc
column segment_name format a30 trunc
select distinct b.owner, b.segment_name
  from x$bh a, dba_extents b
 where b.file_id = a.dbarfil
   and a.dbablk between b.block_id and b.block_id+blocks-1
   and segment_type = 'INDEX'
   and b.owner = upper('&OWNER')
/
spool off


Solution
Adjust the parameter OPTIMIZER_INDEX_COST_ADJ (around 15 for OLTP) and set OPTIMIZER_INDEX_CACHING to the percentage of index blocks found in the data buffer cache (around 85).
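A minimal sketch of applying these values dynamically, assuming an spfile is in use (otherwise drop SCOPE and edit init.ora); the values are the ones suggested above, so adjust them to what the index-caching query reports on your system:

-- Favor index access and tell the CBO how much of the indexes is cached
alter system set optimizer_index_cost_adj = 15 scope=both;
alter system set optimizer_index_caching  = 85 scope=both;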

Check for skewed Indexes (Unbalanced)
Another performance issue could be that your indexes are skewed (unbalanced); this happens when you have a lot of DML activity on your tables. In order to check that, perform the following steps:
1- Analyze your indexes with compute (or estimate if you have more than 100,000 rows in your table):
   analyze index xxxxxxx compute statistics;

2- Run the following query to see the BLEVEL of the indexes and whether you need to rebuild them. If the BLEVEL is higher than 3, you should rebuild the index.
spool Unbalanced_Indexes.txt
-- If the blevel is higher than 3, you should rebuild it
select substr(table_name,1,15) "Table Name",
       substr(index_name,1,20) "Index Name",
       blevel,
       decode(blevel, 0,'OK BLEVEL', 1,'OK BLEVEL', 2,'OK BLEVEL', 3,'OK BLEVEL',
                      null,'?????????', '***BLEVEL HIGH****') OK
from   dba_indexes
where  owner = UPPER('&OWNER')
order by 1,2;
spool off

3- Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table:
   analyze index xxxxxxxxx validate structure;

4- The INDEX_STATS view will hold information for one index at a time: it will never contain more than one row. Therefore you need to query this view before you analyze the next index.
select name "INDEXNAME",
       HEIGHT,
       DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
       (LF_ROWS-DISTINCT_KEYS)*100/decode(LF_ROWS, 0, 1, LF_ROWS) DISTINCTIVENESS
from   index_stats;

The PCT_DELETED column shows what percent of leaf entries (index entries) have been deleted and remain unfilled. The more deleted entries exist on an index, the more unbalanced the index becomes. If the PCT_DELETED is 20% or higher, the index is a candidate for rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value is higher than 10%. Leaving indexes with a high PCT_DELETED without rebuild might cause excessive redo allocation on some systems.
The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average. For example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in (10000-9000) x 100 / 10000 = 10. This shows a good distribution of values. If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 / 10000 = 99.98. This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild but good candidates for bitmapped indexes.


The following PL/SQL code will analyze your indexes, rebuild those that are unbalanced, and report how many were rebuilt. Run it as the owner of the indexes.
declare
   pMaxHeight       integer := 3;
   pMaxLeafsDeleted integer := 20;

   cursor csrIndexStats is
      select name, height, lf_rows as leafRows, del_lf_rows as leafRowsDeleted
      from index_stats;
   vIndexStats csrIndexStats%rowtype;

   cursor csrGlobalIndexes is
      select index_name, tablespace_name
      from user_indexes
      where partitioned = 'NO';

   cursor csrLocalIndexes is
      select index_name, partition_name, tablespace_name
      from user_ind_partitions
      where status = 'USABLE';

   vCount integer := 0;

begin
   dbms_output.enable(100000);

   /* Working with Global/Normal indexes */
   for vIndexRec in csrGlobalIndexes
   loop
      execute immediate 'analyze index ' || vIndexRec.index_name || ' validate structure';

      open csrIndexStats;
      fetch csrIndexStats into vIndexStats;
      if csrIndexStats%FOUND then
         if (vIndexStats.height > pMaxHeight) or
            (vIndexStats.leafRows > 0 and vIndexStats.leafRowsDeleted > 0 and
             (vIndexStats.leafRowsDeleted * 100 / vIndexStats.leafRows) > pMaxLeafsDeleted) then
            vCount := vCount + 1;
            dbms_output.put_line('Rebuilding index ' || vIndexRec.index_name || '...');
            execute immediate 'alter index ' || vIndexRec.index_name ||
                              ' rebuild online parallel nologging compute statistics' ||
                              ' tablespace ' || vIndexRec.tablespace_name;
         end if;
      end if;
      close csrIndexStats;
   end loop;

   dbms_output.put_line('Global indexes rebuilt: ' || to_char(vCount));
   vCount := 0;

   /* Local indexes */
   for vIndexRec in csrLocalIndexes
   loop
      execute immediate 'analyze index ' || vIndexRec.index_name ||
                        ' partition (' || vIndexRec.partition_name || ') validate structure';

      open csrIndexStats;
      fetch csrIndexStats into vIndexStats;
      if csrIndexStats%FOUND then
         if (vIndexStats.height > pMaxHeight) or
            (vIndexStats.leafRows > 0 and vIndexStats.leafRowsDeleted > 0 and
             (vIndexStats.leafRowsDeleted * 100 / vIndexStats.leafRows) > pMaxLeafsDeleted) then
            vCount := vCount + 1;
            dbms_output.put_line('Rebuilding index ' || vIndexRec.index_name || '...');
            execute immediate 'alter index ' || vIndexRec.index_name ||
                              ' rebuild partition ' || vIndexRec.partition_name ||
                              ' online parallel nologging compute statistics' ||
                              ' tablespace ' || vIndexRec.tablespace_name;
         end if;
      end if;
      close csrIndexStats;
   end loop;

   dbms_output.put_line('Local indexes rebuilt: ' || to_char(vCount));
end;
/

Fragmentation on DB Objects
Another performance problem may be DB fragmentation. Run the following to detect it:
REM Segments that are fragmented and level of fragmentation
REM It counts number of extents
set heading on
set termout on
set pagesize 66
set line 132
select substr(de.owner,1,8)            "Owner",
       substr(de.segment_type,1,8)     "Seg_Type",
       substr(de.segment_name,1,25)    "Segment_Name",
       substr(de.tablespace_name,1,15) "Tblspace_Name",
       count(*)                        "Frag NEED",
       substr(df.name,1,40)            "DataFile_Name"
from   sys.dba_extents de, v$datafile df
where  de.owner not in ('SYS','SYSTEM')
  and  de.file_id = df.file#
  and  de.segment_type = 'TABLE'
group by de.owner, de.segment_name, de.segment_type, de.tablespace_name, df.name
having count(*) > 4
order by count(*) asc;

Tuning buffer cache
Step 1: Identify how frequently data blocks are accessed from the buffer cache (a.k.a. Block Buffer Hit Ratio).
Oracle maintains the dynamic performance view V$BUFFER_POOL_STATISTICS with overall buffer usage statistics. This view maintains the following counts


every time a data block is accessed either from the block buffers or from the disk:

NAME – Name of the buffer pool
PHYSICAL_READS – Number of physical reads
DB_BLOCK_GETS – Number of reads for INSERT, UPDATE and DELETE
CONSISTENT_GETS – Number of reads for SELECT

DB_BLOCK_GETS + CONSISTENT_GETS = Total Number of reads

Based on the above statistics we can calculate the percentage of data blocks being accessed from memory rather than from disk (the block buffer hit ratio). The following SQL statement will return the block buffer hit ratio:

SELECT NAME, 100 - round((PHYSICAL_READS / (DB_BLOCK_GETS + CONSISTENT_GETS))*100, 2) HitRatio
FROM   V$BUFFER_POOL_STATISTICS;

NAME                   HITRATIO
-------------------- ----------
DEFAULT                   96.82

Before measuring the database buffer hit ratio, it is very important to check that the database is running in a steady state with a normal workload and that no unusual activity has taken place. For example, when you run a SQL statement just after database startup, no data blocks have been cached in the block buffers. At this point, Oracle reads the data blocks from disk and caches the blocks in memory. If you run the same SQL statement again, then most likely the data blocks will still be present in the cache, and Oracle will not have to perform disk IO. If you run the same SQL statement multiple times you will get a higher buffer hit ratio. On the other hand, if you either run SQL statements that rarely query the same data, or run a select on a very large table, the data blocks may not be in the buffer cache and Oracle will have to perform disk IO, thereby lowering the buffer hit ratio.

A hit ratio of 95% or greater is considered to be a good hit ratio for OLTP systems. The hit ratio for DSS (Decision Support System) may vary depending on the database load. A lower hit ratio means Oracle is performing more disk IO on the server. In such a situation, you can increase the size of the database block buffers to increase the database performance. You may have to increase the physical memory on the server if the server starts swapping after increasing the block buffers.

Step 2: Identify frequently used and rarely used data blocks. Cache frequently used blocks and discard rarely used blocks.

If you have a low buffer hit ratio and you cannot increase the size of the database block buffers, you can still gain some performance advantage by tuning the block buffers and efficiently caching the data blocks that will provide maximum benefits. Ideally, we should cache data blocks that are either frequently used in SQL statements, or data blocks used by performance-sensitive SQL statements (a SQL statement whose performance is critical to the system performance). An ad-hoc query that scans a large table can significantly degrade overall database performance. A SQL on a large table may flush out frequently used data blocks from the buffer cache to store data blocks from the large table. During peak time, ad-hoc queries that select data from large tables or from tables that are rarely used should be avoided. If we cannot avoid such queries, we can limit the impact on the buffer cache by using the RECYCLE buffer pool.

A DBA can create multiple buffer pools in the SGA to store data blocks efficiently. For example, we can use the RECYCLE pool to cache data blocks that are rarely used in the application. Typically, this will be a small area in the SGA to store data blocks for the current SQL statement / transaction that we do not intend to hold in memory after the transaction is completed. Similarly, we can use the KEEP pool to cache data blocks that are frequently used by the application. Typically, this will be big enough to store data blocks that we want to always keep in memory. By storing data blocks in KEEP and RECYCLE pools you can store frequently used data blocks separately from the rarely used data blocks, and control which data blocks are flushed from the buffer cache. Using the RECYCLE pool, we can also prevent a large table scan from flushing frequently used data blocks. You can create the RECYCLE and KEEP pools by specifying the following init.ora parameters:

DB_KEEP_CACHE_SIZE = <size of KEEP pool>DB_RECYCLE_CACHE_SIZE = < size of RECYCLE pool>
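These parameters can also be changed dynamically when an spfile is in use; a minimal sketch with purely illustrative sizes (choose sizes based on the working set you intend to cache):

-- Illustrative sizes only; tune to your workload
alter system set db_keep_cache_size    = 64M scope=both;
alter system set db_recycle_cache_size = 32M scope=both;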


When you use the above parameters, the total memory allocated to the block buffers is the sum of DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, and DB_CACHE_SIZE.

Step 3: Assign tables to the KEEP / RECYCLE pool. Identify the buffer hit ratio for the KEEP, RECYCLE, and DEFAULT pools. Adjust the initialization parameters for optimum performance.

By default, data blocks are cached in the DEFAULT pool. The DBA must configure the table to use the KEEP or the RECYCLE pool by specifying the BUFFER_POOL keyword in the CREATE TABLE or the ALTER TABLE statement. For example, you can assign a table to the recycle pool by using the following ALTER TABLE SQL statement.

ALTER TABLE <TABLE NAME> STORAGE (BUFFER_POOL RECYCLE)
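A hedged example with hypothetical schema and table names, including a check of the current assignment:

-- Hypothetical objects: a history table to RECYCLE, a lookup table to KEEP
alter table app.sales_history storage (buffer_pool recycle);
alter table app.country_codes storage (buffer_pool keep);

-- Confirm which pool each table is assigned to
select table_name, buffer_pool
from   dba_tables
where  owner = 'APP';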

The DBA can take help from application designers in identifying tables that should use the KEEP or RECYCLE pool. You can also query X$BH to examine the current block buffer usage by database objects (you must log in as SYS to query X$BH).

spool tables_to_RECYCLE_Pool.txt
-- The following query returns a list of tables that are rarely used and can be assigned to the RECYCLE pool.
Col owner format a14
Col object_name format a36
Col object_type format a15
SELECT o.owner, object_name, object_type, COUNT(1) buffers
FROM   SYS.x$bh, dba_objects o
WHERE  (tch = 1 OR (tch = 0 AND lru_flag < 8))
  AND  obj = o.object_id
  AND  o.owner = upper('&OWNER')
GROUP BY o.owner, object_name, object_type
ORDER BY buffers;
spool off

spool tables_to_KEEP_Pool.txt
-- The following query will return a list of tables that are frequently
-- used by SQL statements and can be assigned to the KEEP pool.
Col owner format a14
Col object_name format a36
Col object_type format a15
SELECT o.owner, object_name, object_type, COUNT(1) buffers
FROM   SYS.x$bh, dba_objects o
WHERE  tch > 10
  AND  lru_flag = 8
  AND  obj = o.object_id
  AND  o.owner = upper('&OWNER')
GROUP BY o.owner, object_name, object_type
ORDER BY buffers;
spool off

Once you have set up the database to use KEEP and RECYCLE pools, you can monitor the buffer hit ratio by querying V$BUFFER_POOL_STATISTICS and V$DB_CACHE_ADVICE to adjust the buffer pool initialization parameters.
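A per-pool variant of the hit-ratio query from Step 1 can be used for this monitoring; a minimal sketch, not a definitive threshold test:

-- Hit ratio broken down by buffer pool (KEEP / RECYCLE / DEFAULT)
SELECT name,
       physical_reads,
       db_block_gets + consistent_gets total_gets,
       100 - round((physical_reads / (db_block_gets + consistent_gets))*100, 2) hit_ratio
FROM   v$buffer_pool_statistics
WHERE  db_block_gets + consistent_gets > 0;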


Step 4: Identify the amount of memory needed to maintain the required performance.

Oracle 9i maintains block buffer advisory information in V$DB_CACHE_ADVICE. This view contains simulated physical reads for a range of buffer cache sizes. The DBA can query this view to estimate the buffer cache requirement for the database. The cache advisory can be activated by setting the DB_CACHE_ADVICE initialization parameter.

DB_CACHE_ADVICE = ON

There is a minor overhead associated with cache advisory collection. Hence, it is not advisable to collect these statistics in production databases until there is a need to tune the buffer cache. The DBA can turn on DB_CACHE_ADVICE dynamically for the duration of a sample workload period and collect advisory statistics.
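A minimal sketch of turning the advisory on for a sample period and then reading the simulated results:

-- Enable the advisory during a representative workload window
alter system set db_cache_advice = on;

-- After the sample period, review simulated physical reads per candidate cache size
select size_for_estimate, buffers_for_estimate,
       estd_physical_read_factor, estd_physical_reads
from   v$db_cache_advice
where  name = 'DEFAULT'
  and  advice_status = 'ON'
order by size_for_estimate;

-- Turn it off again when finished
alter system set db_cache_advice = off;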

Conclusion

Using this methodical approach, a DBA can easily identify the problem areas and tune the database block buffers. The DBA can create the following buffer pools to efficiently cache data blocks in the SGA:

1. KEEP: Cache tables that are very critical for system performance. Typically, lookup tables are very good candidates for the KEEP pool. The DBA should create the KEEP pool large enough to maintain a 99% buffer hit ratio on this pool.

2. RECYCLE: Cache tables that are not critical for system performance. Typically, a table containing historical information that is either rarely queried or used by a batch process is a good candidate for the RECYCLE pool. The DBA should create the RECYCLE pool large enough to finish the current transaction.

3. DEFAULT: Cache tables that do not belong to either KEEP or RECYCLE pool.

The DBA can set up OEM jobs, Oracle STATSPACK, or custom monitoring scripts to monitor the production database block buffer efficiency, and to identify and tune the problem areas.

Check Size of LOG_BUFFER
Bigger is better and reduces I/O. Check ML 147471.1, item 4.
Check for contention on the 'redo allocation' and 'redo copy' latches.
Using that query, check whether 'redo log space requests' is near 0; if it is high, processes had to wait for space in the buffer.
If you see contention on the 'redo allocation' latch, then increase LOG_PARALLELISM.
If you see contention on the 'redo copy' latch, then increase _LOG_SIMULTANEOUS_COPIES (default is 2 times the # of CPUs).
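The referenced note carries its own diagnostic query; as a rough, hedged substitute you can read the statistic and latch counters directly:

-- Waits for space in the log buffer (ideally near 0)
select name, value
from   v$sysstat
where  name = 'redo log space requests';

-- Miss rates on the redo latches
select name, gets, misses, immediate_gets, immediate_misses
from   v$latch
where  name in ('redo allocation', 'redo copy');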

Check Size of SHARED_POOL_SIZE Variable
Usually we want this variable to be around 250-300MB.
Using V_$SGASTAT, check whether you see a large value under "shared pool free memory"; if so, reduce it. You don't want to have a big space full of SQL statements that are not re-used. If you have that, then Oracle is going to take too long to find those statements in memory.
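A minimal sketch of that check (free memory is reported per pool, in bytes):

-- How much of the shared pool is currently free
select pool, name, round(bytes/1024/1024) mb
from   v$sgastat
where  pool = 'shared pool'
  and  name = 'free memory';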


Allocate Files properly (Tuning buffer busy waits by file)
Check for Buffer Busy Waits.
This view (based on X$KCBWAIT) reports the number of times an instance has had buffer busy waits on different classes of blocks since the instance was started.
Oracle also provides a companion view called X$KCBFWAIT, which duplicates the function of X$KCBWAIT but summarises the waits by file id.

SPOOL file_wait.txt
SET linesize 180
SET pagesize 9000
COLUMN filename FORMAT a40         HEAD "File Name"
COLUMN file#    FORMAT 99          HEAD "F#"
COLUMN ct       FORMAT 999,999,999 HEAD "Waits"
COLUMN time     FORMAT 999,999,999 HEAD "Time"
COLUMN avg      FORMAT 999.999     HEAD "Avg Time"
SELECT indx+1 file#,
       b.name filename,
       count ct,
       time,
       time/(DECODE(count,0,1,count)) avg
FROM   x$kcbfwait a, v$datafile b
WHERE  indx < (select count(*) from v$datafile)
  AND  a.indx+1 = b.file#
order by ct desc
/
spool off

Checking ACTIVE Statements
spool Active_Statements.txt
set linesize 110
-- Extracting the active SQL a user is executing
select sesion.sid,
       substr(sesion.username,1,15) username,
       substr(optimizer_mode,1,10) opt_mode,
       hash_value,
       address,
       cpu_time,
       elapsed_time,
       sql_text
from   v$sqlarea sqlarea, v$session sesion
where  sesion.sql_hash_value = sqlarea.hash_value
  and  sesion.sql_address    = sqlarea.address
  and  sesion.username is not null;

-- I/O being done by an active SQL statement
select sess_io.sid,
       sess_io.block_gets,
       sess_io.consistent_gets,
       sess_io.physical_reads,
       sess_io.block_changes,
       sess_io.consistent_changes
from   v$sess_io sess_io, v$session sesion
where  sesion.sid = sess_io.sid
  and  sesion.username is not null;

-- If by chance the query shown earlier in the V$SQLAREA view did not show your full SQL text
-- because it was larger than 1000 characters, this V$SQLTEXT view should be queried
-- to extract the full SQL. It is a piece by piece of 64 characters by line,
-- that needs to be ordered by the column PIECE.
-- SQL to show the full SQL executing for active sessions
select sesion.sid,
       sql_text
from   v$sqltext sqltext, v$session sesion
where  sesion.sql_hash_value = sqltext.hash_value
  and  sesion.sql_address    = sqltext.address
  and  sesion.username is not null
order by sqltext.piece;
spool off

Use IPC for local connections
When a process is on the same machine as the server, use the IPC protocol for connectivity instead of TCP. Inter-Process Communication on the same machine does not have the overhead of packet building and deciphering that TCP has. I've seen a SQL job that runs in 10 minutes using TCP on a local machine run as fast as one minute using an IPC connection.
You can set up your tnsnames file like this on a local machine so that local connections will use IPC connections first and TCP connections second.
PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(Key = IPCKEY))
      (ADDRESS = (PROTOCOL = TCP)(HOST = MYHOST)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = PROD))
  )
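For the IPC address to be usable, the listener must also expose an IPC endpoint with the same key. A hedged sketch of a matching listener.ora entry, reusing the placeholder KEY and host values from above:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = IPCKEY))
      (ADDRESS = (PROTOCOL = TCP)(HOST = MYHOST)(PORT = 1521))
    )
  )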

Check undo parameters
When you are working with UNDO, there are two important things to consider:
   The size of the UNDO tablespace
   The UNDO_RETENTION parameter.
To get information on your current settings you can use the following query:
set serveroutput on
DECLARE
  tsn    VARCHAR2(40);
  tss    NUMBER(10);
  aex    BOOLEAN;
  unr    NUMBER(5);
  rgt    BOOLEAN;
  retval BOOLEAN;
BEGIN
  retval := dbms_undo_adv.undo_info(tsn, tss, aex, unr, rgt);
  dbms_output.put_line('UNDO Tablespace is: ' || tsn);
  dbms_output.put_line('UNDO Tablespace size is: ' || TO_CHAR(tss));

  IF aex THEN
    dbms_output.put_line('Undo Autoextend is set to: TRUE');
  ELSE
    dbms_output.put_line('Undo Autoextend is set to: FALSE');
  END IF;

  dbms_output.put_line('Undo Retention is: ' || TO_CHAR(unr));

  IF rgt THEN
    dbms_output.put_line('Undo Guarantee is set to: TRUE');
  ELSE
    dbms_output.put_line('Undo Guarantee is set to: FALSE');
  END IF;
END;
/

There are two ways to proceed to optimize your resources.
You can choose to allocate a specific size for the UNDO tablespace and then set the UNDO_RETENTION parameter to an optimal value according to the UNDO size and the database activity. If your disk space is limited and you do not want to allocate more space than necessary to the UNDO tablespace, this is the way to proceed.
If you are not limited by disk space, then it would be better to choose the UNDO_RETENTION time that is best for you (for FLASHBACK, etc.), and allocate the appropriate size to the UNDO tablespace according to the database activity.
This tip helps you get the information you need, whichever method you choose.
spool Check_Undo_Parameters.txt
set serverout on size 1000000
set feedback off
set heading off
set lines 132
declare
  cursor get_undo_stat is
     select d.undo_size/(1024*1024) "C1",
            substr(e.value,1,25) "C2",
            (to_number(e.value) * to_number(f.value) * g.undo_block_per_sec) / (1024*1024) "C3",
            round((d.undo_size / (to_number(f.value) * g.undo_block_per_sec))) "C4"
       from (select sum(a.bytes) undo_size
               from v$datafile a, v$tablespace b, dba_tablespaces c
              where c.contents = 'UNDO'
                and c.status = 'ONLINE'
                and b.name = c.tablespace_name
                and a.ts# = b.ts#) d,
            v$parameter e,
            v$parameter f,
            (select max(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec
               from v$undostat) g
      where e.name = 'undo_retention'
        and f.name = 'db_block_size';
begin
  dbms_output.put_line(chr(10)||chr(10)||chr(10)||chr(10) || 'To optimize UNDO you have two choices :');
  dbms_output.put_line('====================================================' || chr(10));
  for rec1 in get_undo_stat loop
    dbms_output.put_line('A) Adjust UNDO tablespace size according to UNDO_RETENTION :' || chr(10));
    dbms_output.put_line(rpad('ACTUAL UNDO SIZE ',61,'.')|| ' : ' || TO_CHAR(rec1.c1,'999999') || ' MEGS');
    dbms_output.put_line(rpad('OPTIMAL UNDO SIZE WITH ACTUAL UNDO_RETENTION (' || ltrim(TO_CHAR(rec1.c2/60,'999999')) || ' MINUTES) ',61,'.') || ' : ' || TO_CHAR(rec1.c3,'999999') || ' MEGS');
    dbms_output.put_line(chr(10));
    dbms_output.put_line('B) Adjust UNDO_RETENTION according to UNDO tablespace size :' || chr(10));
    dbms_output.put_line(rpad('ACTUAL UNDO RETENTION ',61,'.') || ' : ' || TO_CHAR(rec1.c2/60,'999999') || ' MINUTES');
    dbms_output.put_line(rpad('OPTIMAL UNDO RETENTION WITH ACTUAL UNDO SIZE (' || ltrim(TO_CHAR(rec1.c1,'999999')) || ' MEGS) ',61,'.') || ' : ' || TO_CHAR(rec1.c4/60,'999999') || ' MINUTES');
  end loop;
  dbms_output.put_line(chr(10)||chr(10));
end;
/

select 'Number of "ORA-01555 (Snapshot too old)" encountered since the last startup of the instance : ' || sum(ssolderrcnt)
from   v$undostat;
spool off

Detect High SQL Parse Calls
One of the first things that an Oracle DBA does when checking the performance of any database is to check for high-use SQL statements. The script below will display all SQL whose execution count is less than twice the number of parse calls, i.e. statements that are parsed almost as often as they are executed. The output from this script is a good starting point for detailed SQL tuning. This query can also be modified to display the most frequently executed SQL statements that reside in the library cache.
prompt **********************************************************
prompt SQL High parse calls
prompt **********************************************************
select sql_text, parse_calls, executions
from   v$sqlarea
where  parse_calls > 300
  and  executions < 2*parse_calls
  and  executions > 1;


This script is great for finding non-reusable SQL statements that contain embedded literals. As you may know, non-reusable SQL statements place a heavy burden on the Oracle library cache. When cursor_sharing=FORCE, Oracle8i will rewrite the SQL, replacing the literal values with host variables. This is a great “silver bullet” for systems where the literal SQL cannot be changed.
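A minimal sketch of enabling it dynamically; test first, since FORCE can change execution plans:

-- Instance-wide
alter system set cursor_sharing = FORCE;

-- Or just for one session while testing
alter session set cursor_sharing = FORCE;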

Monitor Open and Cached Cursors
Open cursors take up space in the shared pool, in the library cache. To keep a renegade session from filling up the library cache, or clogging the CPU with millions of parse requests, we set the parameter OPEN_CURSORS.
OPEN_CURSORS sets the maximum number of cursors each session can have open, per session. For example, if OPEN_CURSORS is set to 1000, then each session can have up to 1000 cursors open at one time. If a single session has OPEN_CURSORS # of cursors open, it will get an ora-1000 error when it tries to open one more cursor.
The default value for OPEN_CURSORS is 50, but Oracle recommends that you set this to at least 500 for most applications. Some applications may need more, e.g. web applications that have dozens to hundreds of users sharing a pool of sessions. Tom Kyte recommends setting it around 1000.
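OPEN_CURSORS is a dynamic parameter, so it can be raised without a restart; a hedged sketch (the value is the one recommended above, and SCOPE=BOTH assumes an spfile):

alter system set open_cursors = 1000 scope=both;

-- Confirm the new setting
show parameter open_cursors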

If SESSION_CACHED_CURSORS is not set, it defaults to 0 and no cursors will be cached for your session. (Your cursors will still be cached in the shared pool, but your session will have to find them there.) If it is set, then when a parse request is issued, Oracle checks the library cache to see whether more than 3 parse requests have been issued for that statement. If so, Oracle moves the session cursor associated with that statement into the session cursor cache. Subsequent parse requests for that statement by the same session are then filled from the session cursor cache, thus avoiding even a soft parse. (Technically, a parse can't be completely avoided; a "softer" soft parse is done that's faster and requires less CPU.)

The obvious advantage to caching cursors by session is reduced parse times, which leads to faster overall execution times. This is especially so for applications like Oracle Forms applications, where switching from one form to another will close all the session cursors opened for the first form. Switching back then opens identical cursors. So caching cursors by session really cuts down on reparsing.
There's another advantage, though. Since a session doesn't have to go looking in the library cache for previously parsed SQL, caching cursors by session results in less use of the library cache and shared pool latches. These are often points of contention for busy OLTP systems. Cutting down on latch use cuts down on latch waits, providing not only an increase in speed but an increase in scalability.
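A hedged sketch of enabling the cache: at session level the parameter is dynamic, while the instance-wide value normally goes in the init.ora/spfile and takes effect at the next restart; 100 is only an illustrative value.

-- Per session, e.g. to test the effect on a parse-heavy application
alter session set session_cached_cursors = 100;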

This will give the number of currently opened cursors, by session:
--total cursors open, by session
select a.value, s.username, s.sid, s.serial#
from   v$sesstat a, v$statname b, v$session s
where  a.statistic# = b.statistic#
  and  s.sid = a.sid
  and  b.name = 'opened cursors current';

If you're running several N-tiered applications with multiple webservers, you may find it useful to monitor open cursors by username and machine:
--total cursors open, by username & machine
select sum(a.value) total_cur, avg(a.value) avg_cur, max(a.value) max_cur,
       s.username, s.machine
from   v$sesstat a, v$statname b, v$session s
where  a.statistic# = b.statistic#
  and  s.sid = a.sid
  and  b.name = 'opened cursors current'
group by s.username, s.machine
order by 1 desc;

The best advice for tuning OPEN_CURSORS is not to tune it. Set it high enough that you won't have to worry about it. If your sessions are running close to the limit you've set for OPEN_CURSORS, raise it. If you set OPEN_CURSORS to a high value, this doesn't mean that every session will have that number of cursors open. Cursors are opened on an as-needed basis.

To see if you've set OPEN_CURSORS high enough, monitor v$sesstat for the maximum opened cursors current. If your sessions are running close to the limit, up the value of OPEN_CURSORS.
select max(a.value) as highest_open_cur, p.value as max_open_cur
from   v$sesstat a, v$statname b, v$parameter p
where  a.statistic# = b.statistic#
  and  b.name = 'opened cursors current'
  and  p.name = 'open_cursors'
group by p.value;

HIGHEST_OPEN_CUR MAX_OPEN_CUR
---------------- ------------
            1953         2500

Monitoring the session cursor cache
v$sesstat also provides a statistic to monitor the number of cursors each session has in its session cursor cache.

--session cached cursors, by session
select a.value, s.username, s.sid, s.serial#
from   v$sesstat a, v$statname b, v$session s
where  a.statistic# = b.statistic#
  and  s.sid = a.sid
  and  b.name = 'session cursor cache count';

You can also see directly what is in the session cursor cache by querying v$open_cursor. v$open_cursor lists session cached cursors by SID, and includes the first few characters of the statement and the sql_id, so you can actually tell what the cursors are for.

select c.user_name, c.sid, sql.sql_text
from   v$open_cursor c, v$sql sql
where  c.sql_id = sql.sql_id
  and  c.sid = &sid;

Tuning SESSION_CACHED_CURSORS
If you choose to use SESSION_CACHED_CURSORS to help out an application that is continually closing and reopening cursors, you can monitor its effectiveness via two more statistics in v$sesstat. The statistic "session cursor cache hits" reflects the number of times that a statement the session sent for parsing was found in the session cursor cache, meaning it didn't have to be reparsed and your session didn't have to search through the library cache for it. You can compare this to the statistic "parse count (total)"; subtract "session cursor cache hits" from "parse count (total)" to see the number of parses that actually occurred.


select cach.value cache_hits, prs.value all_parses, prs.value-cach.value sess_cur_cache_not_used
from   v$sesstat cach, v$sesstat prs, v$statname nm1, v$statname nm2
where  cach.statistic# = nm1.statistic#
  and  nm1.name = 'session cursor cache hits'
  and  prs.statistic# = nm2.statistic#
  and  nm2.name = 'parse count (total)'
  and  cach.sid = &sid
  and  prs.sid = cach.sid;

Enter value for sid: 947
old   8: and cach.sid= &sid and prs.sid= cach.sid
new   8: and cach.sid= 947 and prs.sid= cach.sid

CACHE_HITS ALL_PARSES SESS_CUR_CACHE_NOT_USED
---------- ---------- -----------------------
       106        210                     104

Monitor this in conjunction with the session cursor cache count.

--session cached cursors, for a given SID, compared to max
select a.value curr_cached, p.value max_cached, s.username, s.sid, s.serial#
from   v$sesstat a, v$statname b, v$session s, v$parameter2 p
where  a.statistic# = b.statistic#
  and  s.sid = a.sid
  and  a.sid = &sid
  and  p.name = 'session_cached_cursors'
  and  b.name = 'session cursor cache count';

Detect Top 10 Queries in SQL Area
spool top10_sqlarea.txt
/*
This script queries the SQL area ordered by the average cost of the statement.
The "Avg Cost" row is basically the No. of Buffer Gets per Rows processed.
Where no rows are processed, all Buffer Gets are reported for the statement.
The script also lists any potential candidates for converting to stored
procedures by running a case insensitive query.
*/
set pagesize 66 linesize 132
set echo off

column executions     heading "Execs"         format 99999999
column rows_processed heading "Rows Procd"    format 99999999
column loads          heading "Loads"         format 999999.99
column buffer_gets    heading "Buffer Gets"
column disk_reads     heading "Disk Reads"
column elapsed_time   heading "Elapsed Time"
column cpu_time       heading "CPU Time"
column sql_text       heading "SQL Text"      format a120 wrap
column avg_cost       heading "Avg Cost"      format 99999999
column gets_per_exec  heading "Gets Per Exec" format 99999999
column reads_per_exec heading "Read Per Exec" format 99999999
column rows_per_exec  heading "Rows Per Exec" format 99999999

break on report
compute sum of rows_processed on report
compute sum of executions on report
compute avg of avg_cost on report
compute avg of gets_per_exec on report
compute avg of reads_per_exec on report
compute avg of rows_per_exec on report

PROMPT
PROMPT Top 10 most expensive SQL by Elapsed Time...
PROMPT
select rownum as rank, a.*
from (select elapsed_Time, executions, buffer_gets, disk_reads, cpu_time, hash_value, sql_text
        from v$sqlarea
       where elapsed_time > 20000
       order by elapsed_time desc) a
where rownum < 11;

PROMPT
PROMPT Top 10 most expensive SQL by CPU Time...
PROMPT
select rownum as rank, a.*
from (select elapsed_Time, executions, buffer_gets, disk_reads, cpu_time, hash_value, sql_text
        from v$sqlarea
       where cpu_time > 20000
       order by cpu_time desc) a
where rownum < 11;

PROMPT
PROMPT Top 10 most expensive SQL by Buffer Gets by Executions...
PROMPT
select rownum as rank, a.*
from (select buffer_gets, executions,
             buffer_gets/decode(executions,0,1,executions) gets_per_exec,
             hash_value, sql_text
        from v$sqlarea
       where buffer_gets > 50000
       order by buffer_gets desc) a
where rownum < 11;

PROMPT
PROMPT Top 10 most expensive SQL by Physical Reads by Executions...
PROMPT
select rownum as rank, a.*
from (select disk_reads, executions,
             disk_reads/decode(executions,0,1,executions) reads_per_exec,
             hash_value, sql_text
        from v$sqlarea
       where disk_reads > 10000
       order by disk_reads desc) a
where rownum < 11;


PROMPT
PROMPT Top 10 most expensive SQL by Rows Processed by Executions...
PROMPT
select rownum as rank, a.*
from (select rows_processed, executions,
             rows_processed/decode(executions,0,1,executions) rows_per_exec,
             hash_value, sql_text
        from v$sqlarea
       where rows_processed > 10000
       order by rows_processed desc) a
where rownum < 11;

PROMPT
PROMPT Top 10 most expensive SQL by Buffer Gets vs Rows Processed...
PROMPT
select rownum as rank, a.*
from (select buffer_gets,
             lpad(rows_processed || decode(users_opening + users_executing, 0, ' ','*'),20) "rows_processed",
             executions, loads,
             (decode(rows_processed,0,1,1)) * buffer_gets/decode(rows_processed,0,1,rows_processed) avg_cost,
             sql_text
        from v$sqlarea
       where decode(rows_processed,0,1,1) * buffer_gets/decode(rows_processed,0,1,rows_processed) > 10000
       order by 5 desc) a
where rownum < 11;

rem Check to see if there are any candidates for procedures or
rem for using bind variables. Check this by comparing the UPPER-cased SQL text.
rem This may be a candidate application for using the init.ora parameter
rem CURSOR_SHARING = FORCE|SIMILAR

select rownum as rank, a.*
from (select upper(substr(sql_text, 1, 65)) sqltext, count(*)
        from v$sqlarea
       group by upper(substr(sql_text, 1, 65))
      having count(*) > 1
       order by count(*) desc) a
where rownum < 11;

prompt Output spooled to top10_sqlarea.txt
spool off

If you want to see the full text of the sql statement, you can run the following query:
select v2.sql_text, v2.address
from   v$sqlarea v1, v$sqltext v2
where  v1.address = v2.address
  and  v1.sql_text like 'SELECT COUNT(*) FROM DEPT%'
order by v2.address, v2.piece;

The next query returns the SQL text from a hash value that must be determined from each v$sqlarea row in question.
select sql_text
from   v$sqltext
where  hash_value = &hash_value


order by piece;

Check for Indexes not Used and HOT Tables
If you want to know whether an index has ever been used since instance startup, or how heavily a specific table is used, the solution is quite easy. Simply query V$SEGMENT_STATISTICS to see if there has ever been a physical read on the index in question. Queries similar to the following can help:
select index_name
from   all_indexes
where  owner = 'FRAUDGUARD'
  and  index_name not in (select object_name
                            from v$segment_statistics
                           where owner = 'FRAUDGUARD'
                             and statistic_name = 'physical reads');

If you get no rows, that means that all your indexes have been used.
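As an alternative, not part of the original script: from Oracle 9i you can also ask the database to track index usage directly with index monitoring. A hedged sketch with a hypothetical index name; V$OBJECT_USAGE must be queried as the index owner:

-- Start tracking usage for one index
alter index fraudguard.fd_doc_ix monitoring usage;

-- ... let the application run for a representative period ...

-- USED = 'YES' means at least one execution plan touched the index
select index_name, monitoring, used, start_monitoring
from   v$object_usage
where  index_name = 'FD_DOC_IX';

-- Stop tracking when done
alter index fraudguard.fd_doc_ix nomonitoring usage;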

Next, we'll determine the top 10 tables that have incurred the most physical I/O operations.
select table_name, total_phys_io
from (select owner||'.'||object_name as table_name,
             sum(value) as total_phys_io
        from v$segment_statistics
       where owner = 'FRAUDGUARD'
         and object_type = 'TABLE'
         and statistic_name in ('physical reads','physical reads direct',
                                'physical writes','physical writes direct')
       group by owner||'.'||object_name
       order by total_phys_io desc)
where rownum <= 10;

TABLE_NAME                       TOTAL_PHYS_IO
------------------------------- --------------
FG83_DEV.FLOWDOCUMENT_ARCH             1011844
FG83_DEV.FLOWDOCUMENT                   697512
FG83_DEV.FLOWFIELD_ARCH                  21423
FG83_DEV.USERACTIVITYLOG_ARCH            13987
FG83_DEV.FLOWDATA                        13607
FG83_DEV.USERACTIVITYLOG                 12334
FG83_DEV.SIGNATURES                       8992
FG83_DEV.PROCESSLOG                       4764
FG83_DEV.EXCEPTIONITEM_ARCH                399
FG83_DEV.USERLEVELPERMISSION               276

The query above is restricted to a single application schema, which keeps data dictionary tables out of the results. It should now be clear exactly which table experiences the most physical I/O operations. Appropriate actions can now be taken to isolate this potential hotspot from other highly active database segments.

If you've ever dealt with wait events, you may have seen the 'buffer busy waits' event. This event occurs when one session is waiting on another session to read the buffer into the cache, or some other session is changing the buffer. This event can often be seen when querying


V$SYSTEM_EVENT. If I query my database, I have approximately 13 million waits on this specif ic event.

select event,total_waits from v$system_event where event='buffer busy waits';

EVENT                                    TOTAL_WAITS
---------------------------------------- -----------
buffer busy waits                           12976210

The big question is to determine which segments are contributing to this overall wait event. Querying V$SEGMENT_STATISTICS can help us determine the answer.

select substr(segment_name,1,30) segment_name, object_type, total_buff_busy_waits
from (select owner||'.'||object_name as segment_name,
             object_type,
             value as total_buff_busy_waits
        from v$segment_statistics
       where statistic_name in ('buffer busy waits')
       order by total_buff_busy_waits desc)
where rownum <= 10;

SEGMENT_NAME                        OBJECT_TYPE   TOTAL_BUFF_BUSY_WAITS
----------------------------------- ------------- ---------------------
WEBMAP.SDE_BLK_1103                 TABLE                      10522135
WEBMAP.SDE_BLK_804                  TABLE                       1176185
SRTM.SDE_BLK_1101                   TABLE                        651175
WEBMAP.SDE_BLK_804_UK               INDEX                        100242
SYS.DBMS_LOCK_ALLOCATED             TABLE                         64695
NED.SDE_BLK_1002                    TABLE                         48582
WEBMAP.BTS_ROADS_MD                 TABLE                         27068
WEBMAP.SDE_BLK_1103_UK              INDEX                         25707
ARCIMS.SDE_LOGFILE_DATA_IDX1        INDEX                         24618
NED.SDE_BLK_62                      TABLE                         14710

From the query above, we can see that one specific table contributed 10.5 million, or approximately 80%, of the total waits.

If you ever want to know why access to a specific table (for example, EMP) is slow, one of the first actions would be to run:
select statistic_name, value
from   v$segment_statistics
where  owner = 'SCOTT'
  and  object_name = 'EMP';

STATISTIC_NAME                                                        VALUE
---------------------------------------------------------------- ----------
logical reads                                                          17653
buffer busy waits                                                       1744
db block changes                                                       16234
physical reads                                                          1110
physical writes                                                          516
physical reads direct                                                      0
physical writes direct                                                     0
global cache cr blocks served                                              0
global cache current blocks served                                         0
ITL waits                                                                  0
row lock waits                                                             6

From the above query we can see that EMP is forever being modified and rarely just being selected. Those modifications run into problems because of the high number of buffer busy waits (users trying to access the same blocks). Perhaps if that table had a higher PCTFREE the problem would disappear, or maybe this is a case for ASSM (Automatic Segment Space Management).
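A hedged sketch of the PCTFREE change mentioned above (30 is only an illustrative value; a new PCTFREE affects only blocks formatted after the change, so existing rows need a reorganization such as ALTER TABLE ... MOVE, which in turn requires rebuilding the table's indexes):

-- Leave more free space per block so concurrent updates spread across more blocks
alter table scott.emp pctfree 30;

-- Optionally re-organize existing blocks to apply the new setting
-- alter table scott.emp move;
-- alter index scott.pk_emp rebuild;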

Detect and Resolve Buffer Busy Waits
Whenever multiple insert or update tasks access a table, it is possible that Oracle may be forced to wait to access the first block in the table. The first block is called the segment header, and the segment header contains the freelist for the table. The number of freelists for any table should be set to the high-water mark of concurrent inserts or updates.
The script below will tell you if you have waits for table or index freelists. If so, you need to identify the table and add additional freelists. You can add freelists with the ALTER TABLE command; an example follows the script output below.
The procedure for identifying the specific table associated with a freelist wait or a buffer busy wait is complex, but it is fully described in the book “Oracle High-Performance Tuning with STATSPACK”.

column s_v   format 999,999,999 heading 'Total Requests' new_value tnr
column count format 99999990    heading 'count'          new_value cnt
column proc  heading 'Ratio of waits'

PROMPT Current v$waitstat freelist waits...
PROMPT
set heading on;
prompt - This displays the total current waits on freelists
select class, count
from   v$waitstat
where  class = 'free list';

prompt - This displays the total gets in the database
select sum(value) s_v
from   v$sysstat
where  name IN ('db block gets', 'consistent gets');

PROMPT - Here is the ratio
select &cnt/&tnr * 100 proc from dual;


Current v$waitstat freelist waits...
- This displays the total current waits on freelists
CLASS                   COUNT
------------------ ----------
free list                   0

- This displays the total gets in the database
Total Num of Requests
---------------------
            140318872

- Here is the ratio
Ratio in %
----------
         0
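If the ratio above (or the per-segment checks later in this section) points at a table short on freelists, they can be added with ALTER TABLE as noted above. A hedged sketch with a hypothetical table; FREELISTS only applies to segments in manually managed (non-ASSM) tablespaces:

-- Raise the number of freelists towards the peak number of concurrent inserts/updates
alter table app.orders storage (freelists 4);

-- Check the current setting
select table_name, freelists
from   dba_tables
where  owner = 'APP'
  and  table_name = 'ORDERS';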

Please note that freelist contention can also be manifested as a buffer busy wait. This is because the block is already in the buffer, but cannot be accessed because another task has the segment header. The section below describes how to capture the block address associated with a wait. As we discussed, Oracle does not keep an accumulator to track individual buffer busy waits. To see them, you must create a script to detect them and then schedule the task to run frequently on your database server.
vi get_busy.ksh
#!/bin/ksh
# First, we must set the environment . . . .
export ORACLE_SID=proderp
export ORACLE_HOME=`cat /var/opt/oracle/oratab|grep \^$ORACLE_SID:|cut -f2 -d':'`
export PATH=$ORACLE_HOME/bin:$PATH
export SERVER_NAME=`uname -a|awk '{print $2}'`
typeset -u SERVER_NAME

# sample every 10 seconds
SAMPLE_TIME=10
while true
do
   #*************************************************************
   # Test to see if Oracle is accepting connections
   #*************************************************************
   $ORACLE_HOME/bin/sqlplus -s /<<! > /tmp/check_$ORACLE_SID.ora
   select * from v\$database;
   exit
!
   #*************************************************************
   # If not, exit immediately . . .
   #*************************************************************
   check_stat=`cat /tmp/check_$ORACLE_SID.ora|grep -i error|wc -l`;
   oracle_num=`expr $check_stat`
   if [ $oracle_num -gt 0 ]
   then
      exit 0
   fi

   rm -f /export/home/oracle/statspack/busy.lst

   $ORACLE_HOME/bin/sqlplus -s perfstat/perfstat<<!> /tmp/busy.lst
   set feedback off;
   select sysdate, event, substr(tablespace_name,1,14), p2
   from v\$session_wait a, dba_data_files b
   where a.p1 = b.file_id;
!
   var=`cat /tmp/busy.lst|wc -l`

   echo $var
   if [[ $var -gt 1 ]]; then
      echo "**********************************************************************"
      echo "There are waits"
      cat /tmp/busy.lst|mailx -s "Prod block wait found"\
      dpafumi at yahoo com
      echo "**********************************************************************"
      exit
   fi

   sleep $SAMPLE_TIME
done

As we can see from this script, it probes the database for buffer busy waits every 10 seconds. When a buffer busy wait is found, it mails the date, tablespace name, and block number to the DBA. Here is an example of a block alert e-mail:

SYSDATE   SUBSTR(TABLESP      BLOCK
--------- -------------- ----------
28-DEC-00 APPLSYSD            25654

Here we see that we have a block wait condition at block 25654 in the applsysd tablespace. The procedure for locating this block is beyond the scope of this tip, but complete directions are in Chapter 10 of Oracle High Performance Tuning with STATSPACK.


One of the most confounding problems with Oracle is the resolution of buffer busy wait events. Buffer busy waits are common in an I/O-bound Oracle system, as evidenced by any system with read (sequential/scattered) waits in the top-five waits of the Oracle STATSPACK report, like this:

Top 5 Timed Events
                                                            % Total
Event                               Waits      Time (s)    Ela Time
--------------------------- ------------- ------------- -----------
db file sequential read             2,598         7,146       48.54
db file scattered read             25,519         3,246       22.04
library cache load lock               673         1,363        9.26
CPU time                            2,154           934        7.83
log file parallel write            19,157           837        5.68

The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes). Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.
In order to look at system-wide wait events, we can query the v$system_event performance view. This view, shown below, provides the name of the wait event, the total number of waits and timeouts, the total time waited, and the average wait time per event.

spool Wait_Events.txt
select substr(event,1,25) event, total_waits, total_timeouts, time_waited, average_wait
from   v$system_event
where  event like '%wait%'
order by 2 desc;
spool off

EVENT                       TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
--------------------------- ----------- -------------- ----------- ------------
buffer busy waits                636528           1557      549700   .863591232
write complete waits               1193              0       14799   12.4048617
free buffer waits                  1601              0         622   .388507183

If you want to see all the events, you can try with:
set pages 999
set lines 90
column c1 heading 'Event|Name'             format a30
column c2 heading 'Total|Waits'            format 999,999,999
column c3 heading 'Seconds|Waiting'        format 999,999
column c4 heading 'Total|Timeouts'         format 999,999,999
column c5 heading 'Average|Wait|(in secs)' format 99.999
ttitle 'System-wide Wait Analysis|for current wait events'


select event            c1,
       total_waits      c2,
       time_waited/100  c3,
       total_timeouts   c4,
       average_wait/100 c5
from   sys.v_$system_event
where  event not in (
       'dispatcher timer',
       'lock element cleanup',
       'Null event',
       'parallel query dequeue wait',
       'parallel query idle wait - Slaves',
       'pipe get',
       'PL/SQL lock timer',
       'pmon timer',
       'rdbms ipc message',
       'slave wait',
       'smon timer',
       'SQL*Net break/reset to client',
       'SQL*Net message from client',
       'SQL*Net message to client',
       'SQL*Net more data to client',
       'virtual circuit status',
       'WMON goes to sleep')
  AND  event not like 'DFS%'
  and  event not like '%done%'
  and  event not like '%Idle%'
  AND  event not like 'KXFX%'
order by c2 desc;

Wed Feb 14                                                            page 1
                          System-wide Wait Analysis
                           for current wait events

                                                                     Average
Event                                 Total  Seconds        Total       Wait
Name                                  Waits  Waiting     Timeouts  (in secs)
------------------------------ ------------ -------- ------------ ---------
db file sequential read                 812        7            0      .010
control file parallel write             645        3            0      .000
control file sequential read            378        4            0      .010
log file parallel write                 213        0          127      .000
db file scattered read                  111        2            0      .020
wakeup time manager                      61    1,874           61    30.720
direct path read                         27        0            0      .000
rdbms ipc reply                          10        2            0      .180
db file parallel write                    8        0            4      .020
direct path write                         8        0            0      .000
buffer busy waits                         7        0            0      .000
log file sequential read                  4        0            0      .000
log file single write                     4        0            0      .000
LGWR wait for redo copy                   2        0            0      .000
log file sync                             2        0            0      .010
library cache load lock                   2        0            0      .000
instance state change                     2        0            0      .000
reliable message                          1        0            0      .070
refresh controlfile command               1        0            0      .050
control file heartbeat                    1        4            1     4.100

The type of buffer that causes the wait can be queried using the v$waitstat view. This view lists the waits per buffer type for buffer busy waits, where COUNT is the sum of all waits for the class of block, and TIME is the sum of all wait times for that class:

select * from v$waitstat;

CLASS                   COUNT       TIME
------------------ ---------- ----------
data block            1961113    1870278
segment header          34535     159082
undo header            233632      86239
undo block               1886       1706

Buffer busy waits occur when an Oracle session needs to access a block in the buffer cache, but cannot because the buffer copy of the data block is locked. This buffer busy wait condition can happen for either of the following reasons:

   The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete.
   Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.

Because buffer busy waits are due to contention between particular blocks, there's nothing you can do until you know which blocks are in conflict and why the conflicts are occurring. Tuning therefore involves identifying and eliminating the cause of the block contention.
The v$session_wait performance view, shown below, can give some insight into what is being waited for and why the wait is occurring.

SQL> desc v$session_wait
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------
 SID                                                NUMBER
 SEQ#                                               NUMBER
 EVENT                                              VARCHAR2(64)
 P1TEXT                                             VARCHAR2(64)
 P1                                                 NUMBER
 P1RAW                                              RAW(4)
 P2TEXT                                             VARCHAR2(64)
 P2                                                 NUMBER
 P2RAW                                              RAW(4)
 P3TEXT                                             VARCHAR2(64)
 P3                                                 NUMBER
 P3RAW                                              RAW(4)
 WAIT_TIME                                          NUMBER
 SECONDS_IN_WAIT                                    NUMBER
 STATE                                              VARCHAR2(19)

The columns of the v$session_wait view that are of particular interest for a buffer busy wait event are:

   P1 - The absolute file number for the data file involved in the wait.
   P2 - The block number within the data file referenced in P1 that is being waited upon.
   P3 - The reason code describing why the wait is occurring.

Here's an Oracle data dictionary query for these values:

select p1 "File #", p2 "Block #", p3 "Reason Code"
from   v$session_wait
where  event = 'buffer busy waits';

If the output from repeatedly running the above query shows that a block or range of blocks is experiencing waits, the following query should show the name and type of the segment:

select owner, segment_name, segment_type
from   dba_extents
where  file_id = &P1
  and  &P2 between block_id and block_id + blocks - 1;

Once the segment is identified, the v$segment_statistics performance view facilitates real-time monitoring of segment-level statistics. This enables a DBA to identify performance problems associated with individual tables or indexes, as shown below.

select object_name, statistic_name, value
from   V$SEGMENT_STATISTICS
where  object_name = 'SOURCE$';

OBJECT_NAME STATISTIC_NAME                 VALUE
----------- ------------------------- ----------
SOURCE$     logical reads                  11216
SOURCE$     buffer busy waits                210
SOURCE$     db block changes                  32
SOURCE$     physical reads                 10365
SOURCE$     physical writes                    0
SOURCE$     physical reads direct              0
SOURCE$     physical writes direct             0
SOURCE$     ITL waits                          0
SOURCE$     row lock waits

We can also query dba_data_files to determine the file_name for the file involved in the wait, by using the P1 value from v$session_wait for the file_id.

SQL> desc dba_data_files
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FILE_NAME                                          VARCHAR2(513)
 FILE_ID                                            NUMBER
 TABLESPACE_NAME                                    VARCHAR2(30)
 BYTES                                              NUMBER
 BLOCKS                                             NUMBER
 STATUS                                             VARCHAR2(9)
 RELATIVE_FNO                                       NUMBER
 AUTOEXTENSIBLE                                     VARCHAR2(3)
 MAXBYTES                                           NUMBER
 MAXBLOCKS                                          NUMBER
 INCREMENT_BY                                       NUMBER
 USER_BYTES                                         NUMBER
 USER_BLOCKS                                        NUMBER

Interrogating the P3 (reason code) value from v$session_wait for a buffer busy wait event will tell us why the session is waiting. The reason codes range from 0 to 300 and can be decoded, as shown in Table A.

Table A

Code  Reason for wait
----  ------------------------------------------------------------------------------
-     A modification is happening on a SCUR or XCUR buffer but has not yet completed.
0     The block is being read into the buffer cache.
100   We want to NEW the block, but the block is currently being read by another session (most likely for undo).
110   We want the CURRENT block either shared or exclusive, but the block is being read into cache by another session, so we have to wait until its read() is completed.
120   We want to get the block in current mode, but someone else is currently reading it into the cache. Wait for the user to complete the read. This occurs during buffer lookup.
130   Block is being read by another session, and no other suitable block image was found, so we wait until the read is completed. This may also occur after a buffer cache assumed deadlock. The kernel can't get a buffer in a certain amount of time and assumes a deadlock. Therefore it will read the CR version of the block.
200   We want to NEW the block, but someone else is using the current copy, so we have to wait for that user to finish.
210   The session wants the block in SCUR or XCUR mode. If this is a buffer exchange or the session is in discrete TX mode, the session waits for the first time and the second time escalates the block as a deadlock, so it does not show up as waiting very long. In this case, the statistic "exchange deadlocks" is incremented, and we yield the CPU for the "buffer deadlock" wait event.
220   During buffer lookup for a CURRENT copy of a buffer, we have found the buffer but someone holds it in an incompatible mode, so we have to wait.
230   Trying to get a buffer in CR/CRX mode, but a modification has started on the buffer that has not yet been completed.
231   CR/CRX scan found the CURRENT block, but a modification has started on the buffer that has not yet been completed.

Reason codes
As I mentioned at the beginning of this article, buffer busy waits are prevalent in I/O-bound systems. I/O contention, resulting in waits for data blocks, is often due to numerous sessions repeatedly reading the same blocks, as when many sessions scan the same index. In this scenario, session one scans the blocks in the buffer cache quickly, but then a block has to be read from disk. While session one awaits the disk read to complete, other sessions scanning the same index soon catch up to session one and want the same block currently being read from disk. This is where the buffer busy wait occurs: waiting for the buffer blocks that are being read from disk. The following rules of thumb may be useful for resolving each of the noted contention situations:

   Data block contention - Identify and eliminate HOT blocks from the application by changing PCTFREE and/or PCTUSED values to reduce the number of rows per data block. Check for repeatedly scanned indexes. Since each transaction updating a block requires a transaction entry, increase the INITRANS value.
   Freelist block contention - Increase the FREELISTS value. Also, when using Parallel Server, be certain that each instance has its own FREELIST GROUPs.
   Segment header contention - Again, increase the number of FREELISTs and use FREELIST GROUPs, which can make a difference even within a single instance.
   Undo header contention - Increase the number of rollback segments.

The following STATSPACK script is very useful for detecting those times when the database has a high level of buffer busy waits.
prompt ***********************************************************
prompt Buffer Busy Waits may signal a high update table with too
prompt few freelists. Find the offending table and add more freelists.
prompt ***********************************************************
prompt
column buffer_busy_wait format 999,999,999
column mydate heading 'yr. mo dy Hr.'

select to_char(snap_time,'yyyy-mm-dd HH24') mydate,
       new.name,
       new.buffer_busy_wait-old.buffer_busy_wait buffer_busy_wait
from   perfstat.stats$buffer_pool_statistics old,
       perfstat.stats$buffer_pool_statistics new,
       perfstat.stats$snapshot sn
where  snap_time > sysdate-&1
  and  new.name <> 'FAKE VIEW'
  and  new.snap_id = sn.snap_id
  and  old.snap_id = sn.snap_id-1
  and  new.buffer_busy_wait-old.buffer_busy_wait > 1
group by to_char(snap_time,'yyyy-mm-dd HH24'), new.name,
         new.buffer_busy_wait-old.buffer_busy_wait;

Show the percentage of a table in the data buffer
In Oracle9i we have the multiple block size feature, and separate, independent data buffers can be created for 2k, 4k, 8k, 16k and 32k block sizes.
The following script interrogates the v$bh view and counts the number of data blocks in the buffer on a segment-by-segment basis. Note that the script then joins to the dba_segments view in order to count the number of data blocks in the segment and compare it to the buffer. The script is a multi-step process, and rather than make the query complex with in-line views or subqueries, it has been broken down into separate queries using a temporary table to hold the intermediate results. The following query is extremely useful for showing the percentage of each table's data blocks that are held within the data buffer caches.
set pages 999
set lines 80
ttitle 'Contents of Data Buffers'

drop table t1;

create table t1 as
select o.object_name object_name,
       o.object_type object_type,
       count(1)      num_blocks
from   dba_objects o, v$bh bh
where  o.object_id = bh.objd
  and  o.owner not in ('SYS','SYSTEM')
group by o.object_name, o.object_type
order by count(1) desc;

column c1 heading "Object|Name"                                format a30
column c2 heading "Object|Type"                                format a12
column c3 heading "Number of|Blocks"                           format 999,999,999,999
column c4 heading "Percentage|of object|data blocks|in Buffer" format 999

select object_name c1,
       object_type c2,
       num_blocks  c3,
       (num_blocks/decode(sum(blocks), 0, .001, sum(blocks)))*100 c4
from   t1, dba_segments s
where  s.segment_name = t1.object_name
  and  num_blocks > 10
group by object_name, object_type, num_blocks
order by num_blocks desc;

drop table t1;


Wed Oct 23 page 1

Contents of Data Buffers

                                                   Percentage
                                                    of object
Object                       Object     Number of data blocks
Name                         Type          Blocks   in Buffer
--------------------------- ------- ------------- -----------
MTL_DEMAND_INTERFACE         TABLE          38,745         100
FND_CONCURRENT_REQUESTS      TABLE          16,636          88
WIP_TRANSACTIONS             TABLE          14,777         100
WIP_TRANSACTION_ACCOUNTS     TABLE          13,390          33
CRP_RESOURCE_HOURS           TABLE           7,806         100
SO_LINES_ALL                 TABLE           7,576         100
ABC_EDI_LINES                TABLE           7,041         100
BOM_INVENTORY_COMPONENTS     TABLE           6,882          46
MTL_SYSTEM_ITEMS             TABLE           4,747          63
WIP_TRANSACTION_ACCOUNTS_N1  INDEX           3,996          38
MTL_ITEM_CATEGORIES          TABLE           3,390         100
RA_CUSTOMER_TRX_LINES_ALL    TABLE           3,264         100
MRP_FORECAST_DATES           TABLE           3,082          99
RA_CUSTOMER_TRX_ALL          TABLE           2,739          97
WIP_OPERATIONS               TABLE           2,311          34
SO_PICKING_LINES_ALL         TABLE           2,006         100
MTL_DEMAND_INTERFACE_N10     INDEX           1,482          76
BOM_OPERATION_RESOURCES      TABLE           1,456          45
ABC_EDI_ERRORS               TABLE           1,427         100
ABC_EDI_HEADERS              TABLE           1,188         100

Testing Procedures or Packages for Performance
-- before.sql
set echo off
set timing off
set recsep off
column CPU   noprint new_value before_cpu
column READS noprint new_value before_reads
select s_cpu.value CPU,
       sum(s_reads.value) READS
from   sys.v_$session  se,
       sys.v_$statname n_cpu,
       sys.v_$statname n_reads,
       sys.v_$sesstat  s_cpu,
       sys.v_$sesstat  s_reads
where  n_reads.name in ('db block gets', 'consistent gets')
  and  n_cpu.name = 'CPU used by this session'
  and  n_cpu.statistic# = s_cpu.statistic#
  and  n_reads.statistic# = s_reads.statistic#
  and  s_cpu.sid = se.sid
  and  s_reads.sid = se.sid
  and  se.audsid = userenv('SESSIONID')
group by s_cpu.value
/
column CPU clear
column READS clear

This will display nothing but blank lines, but it collects the values before your PL/SQL runs; immediately after your PL/SQL, run this:

-- after.sql
set echo off
set timing off
set recsep off
column CPU   print format 999999
column READS print format 9999999999999
select s_cpu.value - &&before_cpu - 97 CPU,
       sum(s_reads.value) - &&before_reads - 10 READS
from   sys.v_$session  se,
       sys.v_$statname n_cpu,
       sys.v_$statname n_reads,
       sys.v_$sesstat  s_cpu,
       sys.v_$sesstat  s_reads
where  n_reads.name in ('db block gets', 'consistent gets')
  and  n_cpu.name = 'CPU used by this session'
  and  n_cpu.statistic# = s_cpu.statistic#
  and  n_reads.statistic# = s_reads.statistic#
  and  s_cpu.sid = se.sid
  and  s_reads.sid = se.sid
  and  se.audsid = userenv('SESSIONID')
group by s_cpu.value
/
column CPU clear
column READS clear
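A typical run looks like the following; the package and procedure names are hypothetical, and both scripts must be run in the same SQL*Plus session, since they compare that session's own CPU and read statistics:

SQL> @before.sql
SQL> exec my_pkg.my_procedure     -- the code being measured (hypothetical name)
SQL> @after.sql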

Check Sorts
spool sorts.txt
-- The ratio of sorts (disk) to sorts (memory) should be < 5%.
-- Increase the size of SORT_AREA_SIZE if the ratio is higher than 5%.
-- Increments of 10% should be fine.
select disk.value "Disk", mem.value "Mem", (disk.value/mem.value)*100 "Ratio"
from   v$sysstat mem, v$sysstat disk
where  mem.name = 'sorts (memory)'
  and  disk.name = 'sorts (disk)';
spool off

Optimizing Indexes
Move Indexes to a 32k Block Size
Create a 32k block cache in the SPFILE:
db_32k_cache_size = 32M

Create a Tablespace using 32K Blocks:
CREATE TABLESPACE "TS_32K_INDEXES"
  LOGGING
  DATAFILE '/oradata/SID/TS_32K_IND.dbf' SIZE 100M
  BLOCKSIZE 32768
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;
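Existing indexes can then be moved into the new tablespace with a rebuild; a hedged sketch with a hypothetical index name:

-- Relocate an index into the 32k-block tablespace
alter index app.orders_ix rebuild tablespace TS_32K_INDEXES;

-- Verify where the index now lives
select index_name, tablespace_name
from   dba_indexes
where  owner = 'APP'
  and  index_name = 'ORDERS_IX';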
