COLLABORATE 11 Copyright ©2011 by Barbara Matthews

Clean Up That Mess! A Mother’s Guide to Managing Your E-Business Suite Clutter

Barbara Matthews

Introduction

If you’ve noticed that your Applications seem to be slowly grinding along, perhaps you need to review your overall approach to managing your Applications data. Are you purging data that ought to be purged? Is your Concurrent Manager configuration all that it could be? Have you LOOKED at your Workflow tables lately? This paper will describe many of the dust bunnies I’ve found in assorted E-Business Suite closets, and what I recommend you do to clean sweep your way to performance improvements.

As we tackle our clean sweep, I’ll start by telling you what I won’t be talking a lot about: your real business data. Your GL transactions, your Work In Process data, your Accounts Receivables data… your functional super users should be worrying about that. They should be considering how long to hold onto GL data, and whether to carry it all forward when you upgrade to Release 12, or archive it elsewhere. In this paper, all I worry about is the metadata about your E-Business Suite. If there’s a big table owned by the APPS user, that’s something I’m interested in. But your business data? Nope, I’ll give you a couple of pointers, but for the most part, this paper is about the data about your data, not your data.

There Are Many Ways to Tweak Your Concurrent Manager Configuration

I never cease to be amazed at the unique configurations that E-Business Suite customers choose for their Concurrent Manager setups. The Concurrent Manager offers a plethora of choices for how you can set it up. This flexibility can help during a performance crisis, but in general, you should fall back to a more standard configuration once the crisis is averted.

When I See Pound Signs, I Want to Scream

When I review customers’ environments, I run a set of scripts that check for assorted issues. My general rule of thumb is, if the results are so large that my report can’t fit the answer in the field, then there’s a problem. Take a look at this example, which shows a worrisome detail – look at how long the concurrent requests are waiting to get to run:

Concurrent Manager Performance History

                                              TOTAL      AVG.        WAITED      AVG.
CONCURRENT MANAGER                    COUNT   HOURS      HOURS        HOURS      WAIT
-----------------------------------  ------  --------  --------  ------------  --------
Long Running Manager                     77       .79       .01    4384255.11  ########
Quick Manager                           216     10.10       .05   14827306.97  ########
Standard Manager                        397      7.36       .02   29284205.96  ########

There are a number of things that could cause Concurrent Programs to wait for long periods before they get to run. First, if a Concurrent Manager queue is set up to run a small number of programs at a time, then requests may wait a long time to get their turn.
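
If you’d like to check this in your own environment, here is a sketch of the kind of query that produces a report like the one above. It assumes the standard join between FND_CONCURRENT_REQUESTS, FND_CONCURRENT_PROCESSES and FND_CONCURRENT_QUEUES_VL; verify the column names in your release before relying on it.

-- Sketch: average wait and run time per Concurrent Manager, from completed requests.
-- Assumes the standard FND tables/columns; run as the APPS user and test first.
SELECT q.user_concurrent_queue_name                                          manager,
       COUNT(*)                                                              requests,
       ROUND(AVG((r.actual_completion_date - r.actual_start_date) * 24), 2)  avg_run_hours,
       ROUND(AVG((r.actual_start_date - r.requested_start_date) * 24), 2)    avg_wait_hours
FROM   fnd_concurrent_requests  r,
       fnd_concurrent_processes p,
       fnd_concurrent_queues_vl q
WHERE  r.controlling_manager  = p.concurrent_process_id
AND    p.concurrent_queue_id  = q.concurrent_queue_id
AND    p.queue_application_id = q.application_id
AND    r.phase_code           = 'C'   -- completed requests only
GROUP  BY q.user_concurrent_queue_name
ORDER  BY avg_wait_hours DESC;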

Figure 1 – Choose Concurrent Manager: Define and then query up your Concurrent Managers. You can click on the Work Shifts button to see how many Concurrent Programs can run at the same time for a Concurrent Manager.

Figure 2 – You can change the number of Concurrent Requests that can run at the same time in the Work Shifts screen by changing the Processes field.

Other factors that can influence wait time are the Cache Size and the Sleep Seconds for each Concurrent Manager.

A second possibility for this long wait time is that perhaps the Purge Concurrent Requests and/or Manager Data Concurrent Program is not being run, so the response time for all Concurrent Request activities, including querying the status of a request, submitting a request, and starting to run the request, will be very slow. In the case of the example above, both situations were true.

One way to separate longer running Concurrent Programs from the rest is to create a special queue to run long-running reports. You could use a Work Shift to ensure that those reports only run during off-peak hours. In the example with the large wait times, I would expect the average hours to run to be longer for the Long Running Manager queue than any of the other queues, and I would also expect the waited hours to be longer, because one long running job will end up waiting behind other long running jobs.

You can see that for this example, something is terribly wrong – the Long Running Manager is running jobs faster, on average, than the other managers, and all of the queues are waiting for a horribly unreasonable amount of time. What could the issue be? Further investigation showed that this environment has 18 Concurrent Programs defined to Run Alone, something I’ll discuss next, and those Run Alone programs are run frequently, so the Concurrent Manager isn’t running the way anyone would expect it to run.

(Think Carefully Before You) Run Alone

You can set a program up to Run Alone. What an interesting option! Pick a report, any report, and set it to Run Alone. All other submitted programs will wait in pending status until this one report completes. I’ve had a company call in a complete panic because they thought their concurrent manager was broken. It turns out, someone had selected this option for a program that ran quite frequently.
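
To see whether anyone has quietly checked that box in your environment, a query along these lines can help. I’m assuming here that the Run Alone checkbox is stored in the RUN_ALONE_FLAG column exposed through FND_CONCURRENT_PROGRAMS_VL; confirm the column name in your release before trusting the result.

-- Sketch: list Concurrent Programs defined to Run Alone.
-- Assumes FND_CONCURRENT_PROGRAMS_VL exposes a RUN_ALONE_FLAG column; verify first.
SELECT user_concurrent_program_name
FROM   fnd_concurrent_programs_vl
WHERE  run_alone_flag = 'Y'
ORDER  BY 1;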

Figure 3 – As System Administrator, choose Concurrent: Program: Define, query up a program, then click Run Alone. A simple click here can bring your concurrent manager to a grinding halt… Consider this an option to use judiciously and with extra caution.

Making Concurrent Programs Incompatible with Each Other

If you’re trying to avoid having other programs conflict with a particular program, then you can also set programs up to be incompatible with each other. Those reports will never run at the same time.

Figure 4 – You can set up programs so that they are incompatible with one another. Click the Incompatibilities button to set another program to be incompatible with this one.

Figure 5 – Now the Analyze Impact Data Concurrent Program and the Analyze All Index Column Statistics Concurrent Program will never run at the same time.

You can use the Incompatible setting for cases where deadlocking is occurring for two or more concurrent programs, but you should be judicious about using this option as well – you must understand the data that the programs are targeting, and be sure that they really are fighting over the right to update the same data. Your DBA can spot deadlocking in the database’s alert log, and occasionally you can tell that programs are deadlocking by the error that the losing program reports in its Concurrent Request status.
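
If you suspect deadlocking, one hedged way to look for the losing programs is to scan the completion text of errored requests. This is only a sketch; the full ORA-00060 detail is usually in the request log file or the alert log rather than in the completion text.

-- Sketch: errored requests whose completion text mentions a deadlock.
-- 'E' is assumed to be the Error status code; confirm against your lookups.
SELECT r.request_id,
       p.user_concurrent_program_name,
       r.actual_completion_date,
       r.completion_text
FROM   fnd_concurrent_requests    r,
       fnd_concurrent_programs_tl p
WHERE  r.concurrent_program_id  = p.concurrent_program_id
AND    r.program_application_id = p.application_id
AND    p.language               = 'US'
AND    r.status_code            = 'E'
AND    UPPER(r.completion_text) LIKE '%DEADLOCK%'
ORDER  BY r.actual_completion_date DESC;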

Raise the Priority of a Concurrent Program

You can give one or more Concurrent Programs a higher priority than others. This is a reasonable option to use for a small subset of programs that are critical. However, you might also consider creating a new Concurrent Manager for critical programs like these. Deassign them from their current Concurrent Manager, including the Standard Manager, and assign them to your Critical Concurrent Manager. If you find that you are setting up more than, say, 5 programs with a higher priority, stop and rethink what you are trying to achieve and consider creating a separate Concurrent Manager.

Figure 6 – You can change the priority of a Concurrent Program so it runs before other programs, or after. The default priority is 50; if you want a Concurrent Program to always run sooner, set its Priority to a value less than 50.

If you’re in a bind, right smack dab in the middle of a crisis, you can raise the priority of a given Concurrent Request. Just find it in the Concurrent Manager: Administer screen and change the priority. You can’t change the priority if it’s already running; that would be silly, wouldn’t it? Also, if your Concurrent Program is a recurring scheduled program, then it will resubmit with that raised priority, so make sure you take a look every now and again to make sure that the programs that you expect to have a higher priority actually do.
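
One way to keep an eye on this is to query pending requests whose priority is not the default of 50. I’m assuming the priority is carried in the PRIORITY column of FND_CONCURRENT_REQUESTS; check the column name in your release.

-- Sketch: pending requests carrying a non-default priority.
-- Assumes FND_CONCURRENT_REQUESTS has a PRIORITY column (default 50); verify first.
SELECT r.request_id,
       p.user_concurrent_program_name,
       r.priority,
       r.requested_start_date
FROM   fnd_concurrent_requests    r,
       fnd_concurrent_programs_tl p
WHERE  r.concurrent_program_id  = p.concurrent_program_id
AND    r.program_application_id = p.application_id
AND    p.language               = 'US'
AND    r.phase_code             = 'P'   -- pending
AND    r.priority              <> 50
ORDER  BY r.priority, r.requested_start_date;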

(Should You) Let the Standard Manager Do All The Work?

You can run everything under the Standard Manager. This is actually, for the most part, the default, seeded by Oracle. Yes, there are lots of other Concurrent Managers defined by Oracle, but if you look, you’ll see that the bulk of Concurrent Requests end up running under the Standard Manager. All Concurrent Programs seeded by Oracle can run under the Standard Manager, as well as under specific managers to which they are assigned.

Consider setting up one more queue – the Quick Manager. Track the performance of your Concurrent Requests over time, and Include Concurrent Programs that always run quickly (say, in under a minute) in the Quick Manager. Don’t forget to Exclude them from any other manager where they are currently set up, as well as from the Standard Manager. The Quick Manager doesn’t have any extra fire power under the hood; it simply pulls those quick running programs out of the other queues and lets them run first and fast, in their own queue. Why let those fast jobs get stuck behind the slow ones, after all?
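
To pick candidates for the Quick Manager, you can mine your request history for programs that run often and consistently finish quickly. The query below is only a sketch built on the standard FND_CONCURRENT_REQUESTS columns; the one-minute average and the 20-run minimum are arbitrary examples you should adjust.

-- Sketch: frequently run programs that average under a minute (Quick Manager candidates).
SELECT p.user_concurrent_program_name,
       COUNT(*)                                                                runs,
       ROUND(AVG((r.actual_completion_date - r.actual_start_date) * 1440), 2)  avg_minutes
FROM   fnd_concurrent_requests    r,
       fnd_concurrent_programs_tl p
WHERE  r.concurrent_program_id  = p.concurrent_program_id
AND    r.program_application_id = p.application_id
AND    p.language               = 'US'
AND    r.phase_code             = 'C'
AND    r.actual_start_date      IS NOT NULL
GROUP  BY p.user_concurrent_program_name
HAVING COUNT(*) >= 20
AND    AVG((r.actual_completion_date - r.actual_start_date) * 1440) < 1
ORDER  BY runs DESC;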

Should the Standard Manager do all the work? You have lots of knobs that you can turn to make the Standard Manager work well if it does do all the work. If you let the Standard Manager do all the work, then make sure that it has an appropriate number of workers assigned to do that work. This means you have to look at not only how quickly jobs run through the Standard Manager – so you would want to tune your worst performing programs to address that issue – but you also need to look at how long Concurrent Programs are waiting to run. If they are waiting too long (I’ve shown an example earlier), then you should either create additional queues and let the programs run there, or increase the number of programs that can run at the same time in the Standard Manager.
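
A quick way to see how far behind you are right now is to count the pending backlog. This is a sketch only; the meanings of the status codes vary slightly by release, so translate them against your own lookups rather than taking my word for them.

-- Sketch: snapshot of the current pending backlog, by status code.
SELECT r.status_code,
       COUNT(*)                      pending_requests,
       MIN(r.requested_start_date)   oldest_requested_start
FROM   fnd_concurrent_requests r
WHERE  r.phase_code = 'P'             -- pending
GROUP  BY r.status_code
ORDER  BY pending_requests DESC;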

Of course there’s a tradeoff. Your server and database can handle a certain load. If you exceed it, then you’ll make the database, and likely the server, thrash. That’s why you probably wouldn’t set up the Standard Manager to run, say, 100 jobs at once, or 1000 jobs at once. You need to observe performance and come up with a balance that works best for your environment. Then tune your slow-running code, apply performance patches for Oracle’s slow-running code, and determine if additional hardware could help – would more memory or CPU or faster I/O help? Can you afford it? Are there other features that can improve performance? Are you upgraded to the latest version of the RDBMS? Oracle 11gR2 has the fastest optimizer; if you’re not already on it, consider if it is worthwhile to upgrade sooner rather than later.

Don’t Forget the Online Users

Really, if there are kings and queens in your world, it should be the online users. They’re the ones with their fingers hovering over the keypad, wishing the computer would run just a little bit faster. If you make the Concurrent Manager run like the wind, you must make sure that it doesn’t hog all of your computer’s resources and slow down the online users. Your best gauge for online performance is the complaints that come into your Help Desk.

Other Performance Tips

Are You Pinning?

Really? Do I have to tell you that? Pinning objects into memory allows them to be accessed from memory, which is faster than accessing them from disk. Oracle has been kind enough to create a pinning tool for you called PIND. If you aren’t pinning objects, see MOS Doc. ID: 301171.1, "Toolkit for dynamic marking of Library Cache objects as Kept (PIND)" for more details.

Gather Schema Statistics Options

Let’s start with some definitions from MOS Doc. ID: 556466.1, “Definition of Parameters Used in Gather Schema Statistics Program” and MOS Doc. ID: 419728.1, “How To Gather Statistics On Oracle Applications 11.5.10(and above) - Concurrent Process,Temp Tables, Manually.”

Schema Name

You may enter ALL to analyze every defined App schema.

Estimate Percent

Percentage of rows to estimate. If left empty, it defaults to 10%. The valid range is 0-99. A higher percentage is more accurate, but takes longer to run. If the objects you are gathering statistics for do not change often, or their data is very similar, you may choose a lower number. However, if the data changes frequently, a larger value is recommended to provide a more accurate representation of your data.

Degree

Enter the Degree of parallelism. If not entered, it will default to min(cpu_count, parallel_max_servers). You can modify the degree of parallelism to see if performance can be improved.

Backup Flag

If the value is 'NOBACKUP' then it won't take a backup of the current statistics and should run quicker. If the value is 'BACKUP' then it does an export_table_stats prior to gathering the statistics.

Restart Request Id

Enter the request id that should be used for recovering gather_schema_stats if this request should fail. You may leave this parameter null.

Gather Options

As of 11.5.10, FND_STATS.GATHER_SCHEMA_STATS introduced a new parameter called OPTIONS that, if set to GATHER AUTO, allows FND_STATS to automatically determine the tables for which statistics should be gathered based on the change threshold. The Modifications Threshold can be adjusted by the user by passing a value for modpercent, which by default is equal to 10. GATHER AUTO uses a database feature called Table Monitoring, which needs to be enabled for all the tables. A procedure called ENABLE_SCHEMA_MONITORING has been provided to enable monitoring on all tables for a given schema or all Applications schemas.

Modifications Threshold

Choose the percentage of change that will be used by the Gather Options parameter. The default value is 10%.

Invalidate Dependent Cursors

The invalidate option controls whether cursors in the shared pool are invalidated after gathering statistics. The default behavior invalidates cursors; you can use this option to avoid that when gathering statistics on a busy system and the new statistics are not immediately required.

Figure 7 – You can experiment with the Degree field to see if raising the degree of parallelism can improve performance

You can choose Gather Auto rather than Gather for the Gather Options parameter. This may decrease the amount of time spent analyzing objects by only analyzing those that have changed by a defined percentage (the Modifications Threshold).
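
You can try the same options from SQL*Plus before changing the scheduled Concurrent Program. The block below is a sketch based on the FND_STATS.GATHER_SCHEMA_STATS parameters described in the MOS notes above; parameter names can differ between releases, so check the FND_STATS package specification in your instance first.

-- Sketch: gather statistics for one schema with GATHER AUTO and a 10% change threshold.
-- Parameter names assumed from the 11.5.10+ FND_STATS spec; verify before running.
BEGIN
  fnd_stats.gather_schema_stats(
    schemaname       => 'AR',            -- or 'ALL' for every Applications schema
    estimate_percent => 10,
    degree           => 4,
    options          => 'GATHER AUTO',
    modpercent       => 10,
    invalidate       => 'Y');
END;
/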

Figure 8 – You can experiment with Gather Auto to see if you can improve performance of the Gather Schema Statistics Concurrent Program by only analyzing objects that have changed by a certain percentage (Modifications Threshold). This might allow you to run the Gather Schema Statistics Concurrent Program more frequently, and for a shorter period of time.

Figure 9 – If you choose GATHER AUTO for Gather Options, you also have to choose a Modifications Threshold

Purging Applications Data

Oracle provides a number of programs that purge or delete data from the E-Business Suite. Your functional super users need to determine what can be purged from your company’s business data, but it helps if you can provide a list of what programs are available. The Concurrent Programs available to you will vary depending on what release of the E-Business Suite you are running, and what patches you have applied. You should share this information with your functional super users to see if there are any opportunities for cleanup.

You can also provide a list of concurrent programs that are already being run. You should periodically review this list to determine if there are additional purges that could be run. After applying significant patchsets (like RUP 7 for Release 11i, or a new RUP for Release 12), you should look again to see if there are new purges available.
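
Both lists are easy to pull from the data dictionary. Here is a sketch that lists the enabled Concurrent Programs whose names suggest they purge or delete something; it simply pattern-matches the program name, so it will miss purges with other names and catch a few false positives.

-- Sketch: enabled Concurrent Programs that look like purge/delete programs.
SELECT user_concurrent_program_name,
       concurrent_program_name
FROM   fnd_concurrent_programs_vl
WHERE  enabled_flag = 'Y'
AND   (UPPER(user_concurrent_program_name) LIKE '%PURGE%'
       OR UPPER(user_concurrent_program_name) LIKE '%DELETE%')
ORDER  BY user_concurrent_program_name;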

And purging isn’t the only alternative – you could also archive data – removing data from tables and storing it in historical tables that are still accessible, to improve performance. You’ll likely need to work with your functional users to make those kinds of decisions, and you may need to consider third party vendor alternatives.

For our purposes, we’ll concentrate on the metadata in the E-Business Suite that can be purged. This is data about your Applications. It is data that helps you manage the E-Business Suite, but needs to be purged regularly to maintain optimal performance. When I look at an E-Business Suite environment, I always look to see what their largest tables are. This gives me an indicator of whether they are purging, and it also tells me what their most-used applications are.
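
A simple segment-size query is enough for that first look. This sketch lists the thirty largest tables in the database; run it as a DBA (or a user with access to DBA_SEGMENTS) and adjust the ROWNUM limit to taste.

-- Sketch: the 30 largest tables by allocated space.
SELECT *
FROM  (SELECT owner,
              segment_name,
              ROUND(bytes / 1024 / 1024) size_mb
       FROM   dba_segments
       WHERE  segment_type = 'TABLE'
       ORDER  BY bytes DESC)
WHERE ROWNUM <= 30;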

Different customers have different results, of course, but the following tables may grow large. The good news is, there are programs that can clean them up and help control their size:

FND_CONCURRENT_REQUESTS – Most customers run the Purge Concurrent Requests and/or Manager Data concurrent program daily and delete records older than 7-30 days. Generally, Oracle recommends no more than 25,000 records in this table if possible, because the amount of data in the table will affect users’ performance when they query the concurrent manager or submit new jobs.
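
A quick sanity check against that guideline is a simple count, along with the age of the oldest request still in the table:

-- Sketch: how many requests are being kept, and how far back they go.
SELECT COUNT(*)          total_requests,
       MIN(request_date) oldest_request
FROM   fnd_concurrent_requests;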

One way to control the size of the fnd_concurrent_requests table without giving up data that might be useful is to create an additional run of the Purge, included in a Request Set that also runs the current daily purge, to delete data from programs that run very frequently (more than 500 times in a month), whose Concurrent Manager details are not important to save for more than a few days.
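
To find those very frequently run programs, you can count recent submissions by program. This sketch uses the request submission date over the last 30 days and the 500-run threshold mentioned above; adjust both to suit your environment.

-- Sketch: programs submitted more than 500 times in the last 30 days.
SELECT p.user_concurrent_program_name,
       COUNT(*) runs_last_30_days
FROM   fnd_concurrent_requests    r,
       fnd_concurrent_programs_tl p
WHERE  r.concurrent_program_id  = p.concurrent_program_id
AND    r.program_application_id = p.application_id
AND    p.language               = 'US'
AND    r.request_date           > SYSDATE - 30
GROUP  BY p.user_concurrent_program_name
HAVING COUNT(*) > 500
ORDER  BY runs_last_30_days DESC;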

As customers approach upgrading to Release 12, it may be worthwhile to consider creating a history table for the FND_CONCURRENT_REQUESTS data. Would it be helpful to know what Concurrent Programs have been run in the last year, and what parameters were passed? This might aid in creating test plans for the upgrade.

FND_LOGINS – The “Purge Signon Audit data” Concurrent Program should be run at least weekly. There is a bug with the Concurrent Program setup with Release 11i that requires that the request be run via a script using the concsub command.

FND_LOBS – FND_LOBS stores information about all LOBs managed by the Generic File Manager (GFM). Each row includes the file identifier, name, content-type, and actual data. Each row also includes the dates the file was uploaded and when it will expire, the associated program name and tag, and the language and Oracle characterset. My Oracle Support Doc. ID: 829235.1, “FAQ – Performance considerations for FND_LOBS” describes how to maintain FND_LOBS. We recommend running the concurrent program “Purge Obsolete Generic File Manager Data” on a periodic basis, perhaps once a week.
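
Before and after running the purge, it is worth sizing FND_LOBS and seeing how much of it has already expired. This is a sketch based on the columns described above (PROGRAM_NAME, EXPIRATION_DATE, FILE_DATA); it can take a while on a large table because it measures every LOB.

-- Sketch: FND_LOBS volume and expired rows, by uploading program.
SELECT program_name,
       COUNT(*)                                                    total_rows,
       SUM(CASE WHEN expiration_date < SYSDATE THEN 1 ELSE 0 END)  expired_rows,
       ROUND(SUM(dbms_lob.getlength(file_data)) / 1024 / 1024)     approx_mb
FROM   fnd_lobs
GROUP  BY program_name
ORDER  BY approx_mb DESC;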

SELF SERVICE SESSIONS – the concurrent program “Purge Inactive Sessions” removes data from the ICX_SESSIONS, ICX_SESSION_ATTRIBUTES, ICX_TRANSACTIONS, ICX_TEXT, ICX_CONTEST_RESULTS_TEMP, ICX_FAILURES, ICX_REQUISITIONER_INFO and FND_SESSION_VALUES tables. There is an entry for every time someone logs into the self-service web applications. We recommend scheduling this program to run daily, during off-hours.
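
To gauge how much the purge has to do, you can look at how old the sessions in ICX_SESSIONS are. The sketch below assumes the LAST_CONNECT column; confirm the column name in your release.

-- Sketch: ICX_SESSIONS rows by month of last connection.
SELECT TO_CHAR(last_connect, 'YYYY-MM') last_connect_month,
       COUNT(*)                         sessions
FROM   icx_sessions
GROUP  BY TO_CHAR(last_connect, 'YYYY-MM')
ORDER  BY 1;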

WORKFLOW – The Purge Obsolete Workflow Runtime Data Concurrent Program should be run daily. The System Administrator should set up the “Purge Obsolete Workflow Runtime Data” concurrent program for items with a Persistence Type of Temporary, and should schedule this program to run daily, choosing an appropriate Age for data retention, and leaving Item Type and Item Key blank.
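
To see whether the Workflow purge is keeping up, you can count runtime items by item type and persistence type. This is a sketch against WF_ITEMS and WF_ITEM_TYPES; an item with a null END_DATE is assumed to be still open and therefore not purgeable.

-- Sketch: Workflow runtime items by item type, with how many are still open.
SELECT wi.item_type,
       wit.persistence_type,
       COUNT(*)                                               total_items,
       SUM(CASE WHEN wi.end_date IS NULL THEN 1 ELSE 0 END)   open_items
FROM   wf_items      wi,
       wf_item_types wit
WHERE  wi.item_type = wit.name
GROUP  BY wi.item_type, wit.persistence_type
ORDER  BY total_items DESC;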

FND_LOG_MESSAGES – You may need to apply one-off Patch 8989384, “FNDLGPRG Not Purging Old Records In the FND_LOG_MESSAGES Table.” In Release 11i, run "Purge Debug Log and System Alerts", and in Oracle Applications Release 12, run "Purge Logs and Closed System Alerts."

This program purges all messages up to the specified date, except messages for active transactions (new or open alerts, active ICX sessions, concurrent requests, etc.). This program should be scheduled to run daily and by default purges messages older than 7 days. Affected tables are FND_EXCEPTION_NOTES, FND_OAM_BIZEX_SENT_NOTIF, FND_LOG_METRICS, FND_LOG_UNIQUE_EXCEPTIONS, FND_LOG_EXCEPTIONS, FND_LOG_MESSAGES, FND_LOG_TRANSACTION_CONTEXT, and FND_LOG_ATTACHMENTS.
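
To confirm the purge is actually keeping FND_LOG_MESSAGES under control, a monthly count is usually enough. The sketch below assumes the TIMESTAMP column on FND_LOG_MESSAGES; if you want to know who is doing the logging, group by MODULE instead.

-- Sketch: FND_LOG_MESSAGES volume by month.
SELECT TO_CHAR(timestamp, 'YYYY-MM') log_month,
       COUNT(*)                      messages
FROM   fnd_log_messages
GROUP  BY TO_CHAR(timestamp, 'YYYY-MM')
ORDER  BY 1;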

WSH.WSH_EXCEPTIONS – If this table grows very large, see MOS Doc. ID: 358994.1, “The Table WSH_EXCEPTIONS is Extremely Large – Unable to Purge”, for details about how to purge this table. Also see MOS Doc. ID: 842728.1, “Sample API To Purge WSH_EXCEPTIONS Using WSH_EXCEPTIONS_PUB.”

APPLSYS.FND_ENV_CONTEXT – If this table is very large, see MOS Doc ID: 419990.1, which describes a bug in Release 11.5.9 where the concurrent manager purge misses this table.

PERFSTAT – A very large PERFSTAT (statspack) table and index, while not necessarily a bad thing, is worth checking to see if the data is being used, or if a smaller amount of data could be saved. The following query shows how old the data is:

SELECT to_char(snap_time,'YYYY MON') snapdate,
       count(*)
FROM   perfstat.stats$snapshot
GROUP  BY to_char(snap_time, 'YYYY MON')
ORDER  BY 1;

This can be purged with $ORACLE_HOME/rdbms/admin/sppurge.sql or sptrunc.sql.

Make Sure Your Interface Tables Hold Only What They Should

You should review your interface tables before upgrading to make sure that they do not have unnecessary data in them. You might call this a bit of a nag on my part. Everyone knows, after all, that the interface tables have to be cleaned out as part of the upgrade to Release 12. The reason I bring it up is that I have seen an instance with data left in interface tables from a prior implementation that just never got cleaned up. It makes good sense to review your interface tables to make sure that they don’t have unexpected data in them.
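
A row count across your interface tables is a fast way to spot leftovers. The tables below are just common examples (which ones matter depends on the products you run), so extend the list to match your own interfaces; rows that are genuinely in flight will of course show up too.

-- Sketch: leftover rows in a few common interface tables (extend for your site).
SELECT 'GL_INTERFACE'               table_name, COUNT(*) row_count FROM gl_interface
UNION ALL
SELECT 'AP_INVOICES_INTERFACE',                 COUNT(*)           FROM ap_invoices_interface
UNION ALL
SELECT 'RA_INTERFACE_LINES_ALL',                COUNT(*)           FROM ra_interface_lines_all
UNION ALL
SELECT 'MTL_TRANSACTIONS_INTERFACE',            COUNT(*)           FROM mtl_transactions_interface;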

Conclusion

Now, before you start changing your Concurrent Manager configuration, or kicking off purge programs that you’ve never run before, I have one last pearl of wisdom:

"You have brains in your head. You have feet in your shoes. You can steer yourself any direction

you choose. You're on your own. And you know what you know. And YOU are the one who'll

decide where to go..." (Dr. Seuss)

You have a test environment for good reasons. Take a look at my recommendations, and then research them on My Oracle Support, and test them on your test environment. One of the interesting things I’ve found as I’ve looked at different customers’ environments is that often they have a perfectly rational reason for not doing things the way everyone else does. So take a look, research, test, and feel free to let me know how things go.