
Best-Practice Document

Monitoring for BW Data Staging

SAP AG
Dietmar-Hopp-Allee 16
D-69190 Walldorf

DATE: April 2011

SOLUTION MANAGEMENT PHASE: Operations; Implementation; Operations & Continuous Improvement
SAP SOLUTION: SAP NetWeaver Business Warehouse (BW)
TOPIC AREA: Application management; Business process operations; Technical operations
SOLUTION MANAGER AREA: System Monitoring; System Administration; Business Process Monitoring

© 2010 SAP AG


Table of Contents

1 Management Summary

1.1 Goal of Using This Service

1.2 Alternative Practices

1.3 Staff and Skills Requirements

1.4 System Requirements

1.5 Duration and Timing

2 Best-Practice Document

2.1 Preliminary Tasks

2.2 Procedure

2.2.1 Monitor Data Staging

2.2.2 Performance

2.2.3 Data Management

2.2.4 Data Consistency

3 Further Information

3.1 Overview of Jobs and Tasks


1 Management Summary

1.1 Goal of Using This Service

Use this service to obtain the correct procedures for monitoring your BW system in the area of data staging, uploads and process chains.

1.2 Alternative Practices

You can get SAP experts to deliver this Best Practice if you order the Solution Management Optimization (SMO) service “System Administration Workshop” in the scope of a MaxAttention or Safeguarding Support engagement.

1.3 Staff and Skills Requirements

For optimal benefit you should make sure that you have experience in the following areas:

Knowledge of standard SAP application and database monitors

Practical experience with the SAP BW Data Warehousing Workbench

Experience with monitoring of data uploads

Experience in job scheduling in an SAP environment

1.4 System Requirements

This document refers to SAP NetWeaver 7.x BW systems.

1.5 Duration and Timing

Monitoring tasks have to be performed in a BW system on an ongoing basis. The time required for the monitoring tasks depends on the objects being used and the data volume being loaded.


2 Best-Practice Document

2.1 Preliminary Tasks

Before applying the recommendations in this best-practice document, ensure that you perform the following preliminary tasks and checks in the system:

Read this Best Practice completely before implementing any of the recommendations.

We strongly advise against making changes directly in the production environment; first test any changes in a test system.

2.2 Procedure

2.2.1 Monitor Data Staging

Frequency: Daily

Please evaluate the monitoring strategy for the production BW system. If there are reports or data that need to be available at a certain agreed time, the upload process for this data needs to be analyzed, and it should be decided whether it is necessary to monitor the jobs during the night or at the weekend.

Uploads and reports should be assigned a business priority status. A monitoring strategy appropriate to the priority can then be defined. For example, a high-priority report could be the “Daily Logistics Report”; a priority period could be the period-end closing.

When the monitoring is performed in the morning, one hour before the query users start working, the strategy is cost-effective and often sufficient for many uploads. However, it may cause severe delays in data and report availability if priority uploads need to be restarted. In the case of failed extractions, restarting the extraction processes in the morning also puts an extra load on the source system during online working hours.

A successful BW monitoring strategy requires close co-operation between the application and basis teams. In BW, compared to SAP ERP, these two areas of expertise are more closely linked and the teams are interdependent. As BW uploads involve extractions in other systems, it is also necessary to ensure that communication and monitoring between the BW and source system teams is as seamless as possible.

If the monitoring is performed by basis staff, further training in BW will be necessary, or vice versa. Close co-operation could also mean assigning priorities to certain BW jobs and providing documentation that details the correct course of action and the contact persons if a job fails.

Automation methods include adding notification processes (available as standard) to critical Process Chains.

The most important BW monitors are accessible via a central point of entry in the Data Warehousing Workbench (transaction RSA1) – Administration – Monitors:


The monitor “BI CCMS” is discussed in detail in the Best Practice Document “Monitoring for BW Basis (ABAP and Java)”; the monitors “BW Accelerator”, “Precalculation Server”, and “Aggregates” are discussed in detail in the Best Practice Document “Monitoring for BW Reporting”.

The following monitors and tools are important for data staging, depending on the specific scenario you need to monitor:

2.2.1.1 Workload Monitor ST03


The workload monitor in transaction ST03 is useful for general and BW-specific performance overviews. In transaction ST03, choose “Expert Mode”, “BI Workload”, and “Load Data”. The BW upload part of transaction ST03 gives an overview of the different uploads (InfoPackage or DTP uploads) and process chains. The most important performance indicators for the upload, such as runtime and data volume, are summarized here. The different stages of the upload process, like source system extraction or transformations, are visible, and possible bottlenecks can be identified here.

2.2.1.2 Process Chain Monitor

For the application-side analysis of the processes executed via process chains, you can use the Process Chain Monitor included in the Data Warehousing Workbench – Administration – Monitors – Process Chains. Here you have an overview of the progress of your process chains and of which process chains are currently running:


By double-clicking a certain process chain, you jump to transaction RSPC and see an overview of the progress of the processes of this chain and the process steps executed so far for each process. From here you can jump to more detailed monitors of single processes (Upload Monitor, transaction RSMO; Application Log, transaction SLG1) or to the job log of the processes (transaction SM37).

2.2.1.3 BW Job Overview in RSM37

The BW job overview in transaction RSM37 eases the administration of the background jobs “behind” BW process chains and other BW jobs by making the involved objects transparent. Transaction RSM37 allows you to display background jobs together with their BW context (depending on the BW job):

Process Chain / Process Variant

DTP / InfoPackage

InfoSource / DataSource

Request ID

In this example, the administrator would like to get an overview of all process chain jobs that finished, were cancelled, or are still active on June 5th, 2008:

This selection leads to the following output list:


The above output list has to be customized to show only jobs belonging to process chains. Change the layout and filter on the selection value “chain”:


The BW job overview in transaction RSM37 is available as of SAP NetWeaver 7.0 BW Support Package (ABAP) 13 = SAP NetWeaver 7.0 Support Package Stack 12. Please see SAP Note 1035318 “No overview of jobs and their program variants”.
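Such job status checks can also be scripted as part of an automated monitoring strategy. The following is only a hedged sketch, not part of the SAP standard or of this Best Practice: it assumes the Python library pyrfc, an RFC user with the required authorizations, and the generic function module RFC_READ_TABLE to read the batch job status table TBTCO (the table behind SM37). The connection data, the job name pattern BI_PROCESS%, and the date are placeholders to adapt to your landscape.

from pyrfc import Connection

# Hypothetical connection data for the BW system - replace with your own.
conn = Connection(ashost="bwhost", sysnr="00", client="100",
                  user="MONITOR", passwd="secret")

# Read job name, status, and start date/time of BW process chain jobs
# (background jobs named BI_PROCESS*) for one day from table TBTCO.
result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="TBTCO",
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": f} for f in ("JOBNAME", "STATUS", "STRTDATE", "STRTTIME")],
    OPTIONS=[{"TEXT": "JOBNAME LIKE 'BI_PROCESS%' AND STRTDATE = '20080605'"}],
)

# Job status codes as shown in SM37: F = finished, A = cancelled, R = active.
for row in result["DATA"]:
    jobname, status, date, time = (part.strip() for part in row["WA"].split("|"))
    print(jobname, status, date, time)

A script along these lines could, for example, raise a notification whenever cancelled (status A) process chain jobs are found outside the regular monitoring window.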

2.2.1.4 Upload Monitor

To get an overview of all upload processes running on your BW system, use the Upload Monitor (Data Warehousing Workbench – Administration – Monitors – Load Data). The detail monitor of each request (upload process) shows the progress of the upload and the single process steps. The Upload Monitor provides all necessary information on the times spent in the different processes during the load (for example, extraction time, transfer, posting to the PSA, processing transformations, writing to the fact tables).

If you encounter upload performance problems, try to identify within the Upload Monitor where the problem resides (for example, in the transformations):

If the extraction from an SAP source system consumes significant time, use the extractor checker (transaction RSA3) in the source system for further analysis.

If the PSA upload times are unacceptable, check the system for I/O contention caused by the high number of writes (disk layout and striping, DB I/O) and check the PSA partitioning configuration.

If the upload bottleneck resides in the transformations, review your coding for performance problems. You can use the debug feature in the monitor, an SQL trace (transaction ST05), or an ABAP performance trace (transaction SE30).

If monitoring during an upload is possible, transaction SM50 can be used to view the running processes.

2.2.1.4.1 Decision Tree Upload Performance


[Figure: decision tree for upload performance analysis. Starting from the request monitor (transaction RSMO or RSRQ): a slow extraction points to the source system and is analyzed there with transaction RSA3; a slow transformation calls for checking customer exits and simulating the update; slow database access to the PSA or the cube calls for checking whether indexes were dropped, the size of the dimension tables, the DB parameterization, and possible hardware bottlenecks. Transaction ST03 supports the overall analysis.]

2.2.1.5 Change Run Monitor

For problems with change runs, you can use the Change Run Monitor (Data Warehousing Workbench – Administration – Monitors – Change Run). The Change Run Monitor displays all characteristics that are involved in the current change run. It also provides a list of InfoCubes and their associated aggregates that have already been adjusted, as well as those aggregates that still have to be adjusted by the change run. The change method, that is, the way in which the aggregate data is modified, is also displayed. There are three change methods: Delta, Delta Rollup, and Reconstruction.


2.2.1.5.1 Decision Tree Change Run Monitoring


[Figure: decision tree for change run monitoring. Identify long-running change runs in your process chains and check the application log (transaction SLG1) and the job log (transaction SM37) to identify the critical part. If many or large aggregates are processed, review your aggregate design. If the change run mode (R, D) is not reasonable, check the DELTALIMIT setting. If database performance is poor, check the DB settings, indexes, number of partitions, and so on.]

2.2.1.6 Real-Time Data Acquisition

For problems with real-time data acquisition (RDA), you can use the Monitor for Real-Time Data Acquisition (Data Warehousing Workbench – Administration – Monitors – Real-Time Data Acquisition). Here you see all daemons that control RDA runs and their current status: active or inactive. For each daemon you can see which InfoPackage, data transfer process, and data target belong to it and in which periodicity the data is fetched. You can also see when the last upload happened and how many records the last upload and the currently open request contain.


2.2.1.7 Open Hub Monitor

In SAP NetWeaver 7.x BW systems, an Open Hub destination can be set up as a data target for data transfer processes; in this case the monitoring is fully integrated with the normal upload monitors.

Open Hub processing via InfoSpokes is obsolete as of SAP NetWeaver 7.30 BW. In SAP NetWeaver 7.0 BW systems, Open Hubs that are still processed via InfoSpokes can be monitored via the Open Hub Monitor (transaction RSBMO2).

2.2.1.8 DataStore Objects Overview

The status overview of DataStore objects can be accessed via the Data Warehousing Workbench – Administration – Monitors – DataStore Objects. It provides an overview of the loading and activation status of the last request loaded into each DataStore object:

2.2.1.9 BW Delta Queue Maintenance

The BW delta queue can be monitored with the BW Delta Queue Maintenance (transaction RSA7). The BW delta queue of the BW system itself can be accessed via the Data Warehousing Workbench – Administration – Monitors – BW Delta Queue. Please note, however, that this link does not include the BW delta queues in the connected source systems; those have to be accessed directly via transaction RSA7 in each source system.

It is especially important to monitor these queues before release or Support Package upgrades to ensure that the BW delta queues are empty.


2.2.2 Performance

2.2.2.1 Compression

Frequency: Daily

InfoCubes should be compressed regularly. Uncompressed cubes increase the data volume and have a negative effect on query and aggregate build performance. If too many uncompressed requests are allowed to build up in an InfoCube, this can eventually cause unpredictable and severe performance problems. Basically, the F-fact table is optimized for writing (upload) and the E-fact table is optimized for reading (queries).

A well-run and high-performing BW system is not possible without regular compression of InfoCubes and aggregates. A regular compression strategy should be defined with the business data owners. In line with the requirements of the business process, data in the F-fact table should be compressed regularly. For more information, refer to the documentation in the SAP Help Portal.

Technical description

During the upload of data, a full request is always inserted into the F-fact table. Each request gets its own request ID and partition (DB dependent), which is contained in the 'package' dimension. This feature enables you, for example, to delete a request from the F-fact table after the upload. However, it may result in several entries in the fact table with the same values for all characteristics except the request ID. This increases the size of the fact table and the number of partitions (DB dependent), which unnecessarily decreases the performance of your queries. During compression these records are summarized into one entry with the request ID '0'. Note that once the data has been compressed, some functions are no longer available for this data (for example, it is not possible to delete the data for a specific request ID).
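The effect of this summarization can be pictured with a small, purely conceptual sketch. It is not SAP code: the rows and key figures are invented, and the compression logic is reduced to "drop the request ID and add up identical characteristic keys".

from collections import defaultdict

# Invented rows of an F-fact table: (request_id, characteristic key, key figure).
f_fact = [
    (4711, ("2011-04", "MAT-1"), 100.0),
    (4712, ("2011-04", "MAT-1"),  50.0),  # same characteristics, later request
    (4712, ("2011-04", "MAT-2"),  30.0),
]

# Compression: the request ID is set to 0 and rows with identical
# characteristic values are summarized into one entry.
e_fact = defaultdict(float)
for _request_id, key, value in f_fact:
    e_fact[(0, key)] += value

for (request_id, key), value in sorted(e_fact.items()):
    print(request_id, key, value)
# 0 ('2011-04', 'MAT-1') 150.0
# 0 ('2011-04', 'MAT-2') 30.0

After this step the individual requests 4711 and 4712 can no longer be distinguished, which is exactly why request-based deletion is no longer possible for compressed data.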

Real-time InfoCubes in a Planning environment

You should compress your InfoCubes regularly, especially the real-time InfoCubes. In a planning process, a request is closed when the open request contains 50,000 records or when the InfoCube is switched manually between loading and planning mode.

For real-time InfoCubes that are used for planning, compression has a further advantage (on Oracle databases). The F-fact table has a B-tree index and is partitioned by request ID, whereas the E-fact table has a bitmap index and is partitioned according to your settings (time characteristics). Read accesses to the E-fact table are faster than those to the F-fact table because B-tree indexes are favorable for data write processes but not for read processes. Please check SAP Note 217397 for details.

The planning function “Delete” is used to remove data from the selected planning package. The records are not directly deleted from the InfoCube; instead, the system creates additional records with offsetting values. The original and the offsetting records are deleted when the InfoCube is compressed.

2.2.2.2 Report SAP_INFOCUBE_DESIGNS

Frequency: Monthly


Running the report SAP_INFOCUBE_DESIGNS shows the database tables of an InfoCube, the number of records in these tables, and the ratio of the dimension table size to the fact table size. If dimension tables are too large, they can cause badly performing table joins at the database level. Because the data volume grows and the data distribution changes over time, this check should be executed regularly.
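The ratio the report works with can be illustrated with a short sketch. This is only an illustration of the arithmetic: the table names, record counts, and the 10% threshold below are invented for the example and are not constants of SAP_INFOCUBE_DESIGNS.

# Illustration of the dimension-to-fact-table ratio; all numbers are invented.
fact_table_rows = 5_600_000

dimension_rows = {
    "/BIC/DEXAMPLE1": 1_800_000,   # hypothetical dimension table names
    "/BIC/DEXAMPLE2": 40_000,
}

THRESHOLD_PERCENT = 10  # illustrative rule of thumb, not an SAP constant

for table, rows in dimension_rows.items():
    ratio = 100.0 * rows / fact_table_rows
    verdict = "review (line item dimension or split?)" if ratio > THRESHOLD_PERCENT else "ok"
    print(f"{table}: {rows} rows = {ratio:.0f}% of the fact table -> {verdict}")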

When transaction data is loaded, dimension IDs have to be generated in the dimension table for the new entries. If you have a large dimension, this number range operation can negatively affect performance.

In the InfoCube star schema, a dimension table can be omitted if the InfoObject is defined as a line item. This makes the SQL-based queries simpler, and in many cases the database optimizer can choose better execution plans. However, this also has a disadvantage: you cannot include additional characteristics in a dimension that is marked as a line item at a later date; this is only possible with normal dimensions. If a dimension table has more than one characteristic with high granularity, consider placing these characteristics into separate dimension tables.

Example:

A line item is an InfoObject, for example an order number, for which only one or a few facts are listed in the fact table of the InfoCube.

Guidelines How to Limit the Number of Records in Dimension Tables

1. If an InfoObject has almost as many distinct values as there are entries in the fact table, define the dimension of the InfoObject as a line item dimension. If defined in this manner, the system writes the data directly to the fact table (a field with the data element RSSID, which points directly to the SID table of the InfoObject, is written to the fact table) instead of creating a dimension table that has almost as many entries as the fact table.

2. Group only related characteristics into one dimension. Combining unrelated characteristics can use too much disk space and cause performance problems (for example, 10,000 customers and 10,000 materials may result in up to 100,000,000 dimension records; see the short sketch after this list).

3. Avoid characteristics with high granularity, meaning many distinct values compared with the number of entries in the fact table.

4. If you cannot avoid characteristics with high granularity and most of your queries do not use these characteristics, create an aggregate that stores summarized information. Do not use characteristics with high granularity in this aggregate.

Please note that the line item flag can have a negative performance impact on F4 help usage when the setting 'Only Values in InfoProvider' is used (transaction RSD1, tab 'Business Explorer').

5. It is also worthwhile to run the checks in transaction RSRV, for example RSRV – All Elementary Tests – Transaction Data – Entries Not Used in the Dimension of an InfoCube.
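The worst case behind guideline 2 is simply the product of the distinct values of the characteristics grouped into one dimension, as this small calculation (using the invented figures from the example) shows:

# Worst-case cardinality of a dimension that combines unrelated characteristics.
customers, materials = 10_000, 10_000
print(customers * materials)  # up to 100,000,000 dimension records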

Implementation

When creating the dimensions as part of InfoCube maintenance, flag the relevant dimension as a line item. You can assign this dimension to exactly one InfoObject. To check which InfoObject has the highest cardinality in a dimension, you can look at the fields of the dimension table highlighted in report SAP_INFOCUBE_DESIGNS. For example: transaction DB02 => Detailed Analysis (Tables and Indexes section) => enter the dimension table name => check for the field with the most distinct values.

The table below shows an example list of InfoCubes with large dimension tables:

INFOCUBE    DIMENSION TABLE    NO. OF ROWS IN DIMENSION    % ENTRIES IN DIM COMPARED TO F-TABLE
YCCA_C11    /BIC/DYCCA_C114    1.796.665                   32
YCS_C01     /BIC/DYCS_C011     855.039                     53

2.2.3 Data Management

2.2.3.1 Archive Data

Frequency: Depending on the retention period of your data.

Without archiving, unused data is kept in the database, and DataStore objects and InfoCubes can grow unrestricted. This can lead to a deterioration of general performance.

Establish a BW data archiving project to identify the InfoCubes and DataStore objects that can be archived. Data storage also needs to be planned and estimated to determine the data storage requirements and the cost involved. The benefits of BW archiving include:

Reduction of online disk storage

Improvement in BW query performance

Increased data availability as rollup, change runs and backup times will be shorter

Reduced hardware consumption during loading and queries

An archiving strategy should include the following points and tasks:

Periodical archiving of objects

Estimation of required data storage

Identification of InfoCube and DataStore objects that can be archived

Mapping of Data targets to archiving objects

Validation of archived data

Read capability from archived data

Retention period of archived and deleted data

Time to adjust or rebuild aggregates

Timings for locking the data target whilst deletion is taking place.

For more information refer to the SAP Notes 643541 and 653393.

Consider the usage of Near Line Storage. Near Line Storage is recommended for data that you may still need. Storing historical data in Near Line Storage reduces the data volume of the InfoProviders; however, the data is still available for queries. As Near Line Storage is a third-party tool, it might be necessary to schedule further periodic jobs for this tool. Further information about Near Line Storage is available in the SAP documentation:

http://help.sap.com/ – SAP NetWeaver Release – SAP NetWeaver by Key Capability – Information Integration by Key Capability – Business Intelligence – Data Warehousing – Data Warehouse Management – Information Lifecycle Management – Data Archiving Process


2.2.3.2 Delete Persistent Staging Area (PSA) data

Frequency: Weekly

Determine a defined data retention period for the data in the PSA tables. This will depend on the type of data involved and on the data upload strategy. As part of this policy, establish a safe method to delete the data periodically. If PSA data is not deleted on a regular basis, the PSA tables grow unrestricted. Very large tables increase the cost of data storage and the downtime for maintenance tasks, and degrade the performance of data uploads. Request deletion is integrated into process chains.

2.2.3.3 Delete Change Log data

Frequency: Weekly

Please note that only change log requests that have already been updated can be deleted, and that after the deletion it is no longer possible to reconstruct requests for subsequent data targets using the DataStore change log. Change log requests can be deleted via process chains:

2.2.3.4 Delete DTP Temporary Storage

Frequency: Weekly


In case of a problem, the Data Transfer Process (DTP) can be resumed from the temporary storage, and you can view and verify the data in the temporary storage for troubleshooting. The deletion of the temporary storage can be set from DTP Maintenance – Goto – Settings for DTP Temporary Storage – Delete Temporary Storage:

Here you can choose for each DTP:

Here you can choose for each DTP:

For which steps you want to have a temporary storage

The level of detail for the temporary storage


The retention time of the temporary storage.

2.2.3.5 Archive / Delete Administration Tables

Frequency: Weekly

SAP Note 706478 (and the referenced sub-notes) provides an overview of administrative basis tables that may increase considerably in size and cause problems if their entries are not regularly archived or deleted. Growing administration tables increase the total size of the system and negatively impact performance, for example when monitoring requests.

Affected tables are (among others):

Application Log Tables (BAL*)

IDoc Tables (EDI*)

BW Monitoring Tables for requests and process chains

Job Tables (TBTCO, TBTCP)

Archive or delete entries in these tables as described in the SAP Note.

2.2.3.6 Delete BW Statistic Tables

Frequency: Quarterly

The BW statistics tables RSDDSTAT* and the planning statistics tables UPC_STATISTIC* have to be cleaned up regularly. Please follow SAP Notes 211940, 195157, 179046, and 366869 before deleting or archiving.

For the BW statistics tables RSDDSTAT*, records older than x days can be deleted in transaction RSA1. To delete data, call transaction RSA1, choose 'Tools' – 'Settings for BW Statistics', and select 'Delete Statistical Data':


The time period for which data should be deleted can now be entered. Please read SAP Note 309955 for information on usage and errors in the BW statistics.

For the BPS statistics tables UPC_STATISTIC*, records older than x days can be deleted in transaction BPS_STAT0:


2.2.3.7 Delete tRFC Queues

Frequency: Weekly

In BW and all connected source systems, check all outbound tRFC queues (transaction SM58) from time to time to see whether they contain old, unsent data packages or Info-IDocs. The reason for such leftover entries could be that they have already been processed in BW and therefore cannot be processed again, or that they contain old requests that cannot be sent following a system copy or a change of the RFC connection.

You should then delete these tRFC requests from the queue, not only to reduce the data volume in your system but primarily to prevent these old entries from accidentally being sent to BW data targets again:

In the RFC destination (transaction SM59) from the source system to BW, the connection should be set up to prevent a terminated transfer from being restarted automatically and, most importantly, to prevent the periodic automatic activation of incorrectly sent data:


2.2.4 Data Consistency

2.2.4.1 Schedule RSRV Checks

Frequency: Weekly, important checks daily.

Use transaction RSRV to analyze the important BW objects in the system. This transaction also provides repair functionality for some tests. These tests only check the intrinsic technical consistency of BW objects, such as foreign key relations of the underlying tables, missing indexes, and so on. They do not analyze any business logic or the semantic correctness of data.

Missing table indexes or inconsistencies in master or transactional data can have a negative impact on your system performance or lead to missing information in your BW reporting.

You can schedule the tests in RSRV to run regularly in the background by defining a specific test package for your core business process and needs. Weekly checking (for example, at the weekend) should generally be adequate, but important checks (for example, for missing table indexes) can also be performed more frequently (daily, for example).

Another option is to start these checks based on an event (for example tests are triggered after data loading).


If you do so, make sure that the application log is checked regularly for the results and that the recommended corrections are made in time.

For further information, please read SAP Note 619760, which contains the latest news about the RSRV checks. The detailed description of each test includes information as to whether repair functionality is available for the check. When the results of the checks are displayed in the application log, you can double-click a message. You will then see the message again on the right-hand side with additional buttons for long texts and details (scroll to the right side), if applicable.


3 Further Information

3.1 Overview of Jobs and Tasks

The table below lists all the jobs and tasks discussed in this Best Practice.

AREA                   JOB / TASK                                FREQUENCY

Monitor Data Staging   Workload Monitor ST03                     Daily
                       Process Chain Overview                    Daily
                       BW Job Overview in RSM37                  Daily
                       Upload Monitor                            Daily
                       Change Run Monitor                        Daily
                       Real-Time Data Acquisition                Daily (when RDA is used)
                       Open Hub Monitor                          Daily
                       DataStore Objects Overview                Daily
                       BW Delta Queue Maintenance                Daily

Performance            Compression                               Daily
                       Report SAP_INFOCUBE_DESIGNS               Monthly

Data Management        Archive Data                              Depending on the retention period of your data
                       Delete PSA Data                           Weekly
                       Delete Change Log Data                    Weekly
                       Delete DTP Temporary Storage              Weekly
                       Archive / Delete Administration Tables    Weekly
                       Delete BW Statistic Tables                Quarterly
                       Delete tRFC Queues                        Weekly

Data Consistency       Schedule RSRV Checks                      Weekly, important checks daily


© Copyright 2011 SAP AG. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors. Microsoft, Windows, Excel, Outlook, and PowerPoint are registered trademarks of Microsoft Corporation. IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, System z9, z10, z9, iSeries, pSeries, xSeries, zSeries, eServer, z/VM, z/OS, i5/OS, S/390, OS/390, OS/400, AS/400, S/390 Parallel Enterprise Server, PowerVM, Power Architecture, POWER6+, POWER6, POWER5+, POWER5, POWER, OpenPower, PowerPC, BatchPipes, BladeCenter, System Storage, GPFS, HACMP, RETAIN, DB2 Connect, RACF, Redbooks, OS/2, Parallel Sysplex, MVS/ESA, AIX, Intelligent Miner, WebSphere, Netfinity, Tivoli and Informix are trademarks or registered trademarks of IBM Corporation. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Adobe, the Adobe logo, Acrobat, PostScript, and Reader are either trademarks or registered trademarks of Adobe Systems Incorporated in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation. UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group. Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems, Inc. HTML, XML, XHTML and W3C are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology. Java is a registered trademark of Sun Microsystems, Inc. JavaScript is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape. SAP, R/3, SAP NetWeaver, Duet, PartnerEdge, ByDesign, Clear Enterprise, SAP BusinessObjects Explorer and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries. Business Objects and the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, and other Business Objects products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP France in the United States and in other countries. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary. The information in this document is proprietary to SAP. No part of this document may be reproduced, copied, or transmitted in any form or for any purpose without the express prior written permission of SAP AG. This document is a preliminary version and not subject to your license agreement or any other agreement with SAP. This document contains only intended strategies, developments, and functionalities of the SAP® product and is not intended to be binding upon SAP to any particular course of business, product strategy, and/or development. Please note that this document is subject to change and may be changed by SAP at any time without notice. SAP assumes no responsibility for errors or omissions in this document. 
SAP does not warrant the accuracy or completeness of the information, text, graphics, links, or other items contained within this material. This document is provided without a warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. SAP shall have no liability for damages of any kind including without limitation direct, special, indirect, or consequential damages that may result from the use of these materials. This limitation shall not apply in cases of intent or gross negligence. The statutory liability for personal injury and defective products is not affected. SAP has no control over the information that you may access through the use of hot links contained in these materials and does not endorse your use of third-party Web pages nor provide any warranty whatsoever relating to third-party Web pages.