Essbase Student Guide I

Upload: vivekawa

Post on 16-Oct-2014



TABLE OF CONTENTS

1. DESIGNING OUTLINES

CHAPTER OBJECTIVES

MULTIDIMENSIONALITY OVERVIEW

ESSBASE OVERVIEW

MULTIPLE APPLICATIONS WITH ONE ENVIRONMENT

PRODUCTION ENVIRONMENT

CREATING APPLICATIONS AND DATABASES

CREATING DIMENSIONS AND MEMBERS

DIMENSION HIERARCHIES

ESSBASE TERMINOLOGY FOR DIMENSIONS AND HIERARCHIES

MEMBER PROPERTIES

DESIGNING TIME DIMENSIONS

DYNAMIC TIME SERIES MEMBERS

DESIGNING SCENARIO DIMENSIONS

VARIANCE ANALYSIS IN SCENARIOS

INDEX-BASED MEMBERS

DESIGNING ACCOUNTS DIMENSIONS

IMPLEMENTING ACCOUNTS DIMENSION PROPERTIES

DESIGN CONSIDERATIONS FOR DATA DESCRIPTOR DIMENSIONS

DESIGN CONSIDERATIONS FOR BUSINESS VIEW DIMENSIONS

2. BUILDING LOAD RULES

CHAPTER OBJECTIVES

DESIGNING LARGE DIMENSIONS WITH LABEL OUTLINES

CREATING DIMENSION BUILD LOAD RULES

PROCESS FOR DIMENSION BUILD LOAD RULES

OUTLINE LOADING

SETTING THE LOAD METHOD

LOADING BY GENERATIONS

LOADING BY LEVELS

LOADING BY PARENT-CHILD

CREATING ATTRIBUTE DIMENSIONS

GENERAL GUIDELINES

BUILDING ATTRIBUTE DIMENSIONS

LOADING DATA

LOCKING AND SENDING WITH A SPREADSHEET

CREATING A DATA LOAD RULE

HANDLING DATA VALUES ON LOAD

3. SPREADSHEET REPORTING

CHAPTER OBJECTIVES

INSTALLING THE ESSBASE SPREADSHEET ADD-IN

INSTALLING THE TOOLBAR

CONNECTING TO ESSBASE

LABEL PLACEMENT RULES OVERVIEW

EXPLORING DATA WITH BASIC RETRIEVE OPERATIONS

MANAGING STYLE OPTIONS

BUILDING REPORTS WITH MEMBER SELECTION

REPLICATING REPORTS WITH CASCADE

4. CREATING BASIC CALCULATIONS

CHAPTER OBJECTIVES

DESIGNING CALCULATIONS IN THE OUTLINE

UNARY OPERATORS

USING FORMULAS IN AN OUTLINE

DESIGNING CALCULATION SCRIPTS

DATA STORAGE AND CALCULATION EFFICIENCY OVERVIEW

DATA BLOCK FUNDAMENTALS

DATA BLOCK CREATION

ORGANIZING CALCULATIONS

LEVERAGING HIERARCHY INTELLIGENCE

SCRIPTING MEMBER SET FUNCTIONS

DYNAMIC CALC AND STORE MEMBERS

DESIGN CONSIDERATIONS

OVERVIEW OF DYNAMIC CALCULATION ORDER

DENSE DIMENSION GUIDELINES

SPARSE DIMENSION GUIDELINES

5. DESIGNING AND OPTIMIZING ADVANCED CALCULATIONS

CHAPTER OBJECTIVES

CREATING AND TESTING CALCULATION SCRIPTS

TESTING IN A PILOT ENVIRONMENT

CORRECTING CALCULATION BEHAVIORS

AGGREGATING MISSING VALUES

ANALYZING EXPECTED VERSUS CORRECT BEHAVIOR

BACK CALCULATIONS

MANIPULATING DATA SETS

CLEARBLOCK OR CLEARDATA

DATACOPY

NORMALIZING DATA

DEVELOPING A NORMALIZATION TABLE

OPTIMIZING CALCULATION PERFORMANCE

DESIGNING FOR FEWER BLOCKS

FEWER SPARSE DIMENSIONS WITHIN A DATABASE

FEWER SPARSE MEMBERS WITHIN A DATABASE

FEWER DIMENSIONS OVERALL

DESIGNING WITH SMALLER BLOCKS

FEWER DENSE DIMENSIONS WITHIN A DATABASE

FEWER STORED MEMBERS WITHIN A DENSE DIMENSION

DESIGNING FOR DENSER BLOCKS

DESIGNING FOR FEWER CALCULATION PASSES

6. AGGREGATE STORAGE DATABASES

APPENDIX OBJECTIVES

AGGREGATE STORAGE OVERVIEW

ACCESSING A WIDER APPLICATION SET

COMBINING BLOCK AND AGGREGATE STORAGE

DELIVERING END-TO-END SUPPORT

PERSONNEL BENEFITS

IT BENEFITS

AGGREGATE STORAGE KERNEL OVERVIEW

CREATING MEMBER FORMULAS

ANALYZING DATA WITH HYPERION VISUAL EXPLORER

MDX OVERVIEW

IDENTIFYING DIMENSIONS AND MEMBERS

Defining the MDX Data Model: Tuples and Sets


1. Designing Outlines

Chapter Objectives

Upon completion of this chapter, you will be able to:

• Describe multidimensionality
• Describe Essbase
• Create applications and databases
• Create dimensions and members
• Design time dimensions
• Design scenario dimensions
• Design accounts dimensions
• Describe design considerations for data descriptor dimensions
• Describe design considerations for business view dimensions

Multidimensionality Overview

A multidimensional database is an extended form of a two-dimensional data array, such as a spreadsheet, generalized to encompass many dimensions.

You need a database that makes it easy to:

• Analyze information from different perspectives
• Roll hierarchies in all directions
• Find a fast answer to the two most frequently asked questions:

• Where did that number come from?
• Why did that number change?

You can use multidimensionality to:

• Analyze the same business information from different perspectives
• Let different users easily analyze the information that they want to see in a large database, knowing that they are all working from the same source data
• Allow data storage and analysis to occur at different levels of detail
• Set the foundation for drilling down
• Conceptualize the way analysts have been thinking for decades

Essbase Overview

For years, business managers and financial analysts have been asking questions and thinking about their budgets, sales data, and nearly every other piece of business information in multidimensional terms. They might want to know how a particular product sells in one market versus another, or which customer actually nets them the highest profit. With Essbase® Analytic Services, multidimensional analysis is easy to do.

Essbase is a multidimensional database application tool that lets you continually analyze and compare aspects of your business. An Essbase database:

• Works with multidimensional data and rollup hierarchies within dimensions
• Obtains its information from other systems
• Deals with some level of summarized data, not transactions
• Can be adapted to many different reporting and analysis environments

Multiple Applications with One Environment

Most traditional database products are application-specific, and the proliferation of application-specific packages multiplies the support burden for training and development.

With Essbase as your application development environment, you gain the following benefits:

• You can build multiple databases using the same tool.
• Individual applications are better integrated with each other.
• Only one environment needs to be supported for development, deployment, training, and technical support.


Production Environment


Creating Applications and Databases

You create an application on the server. Each application can contain one or more databases. When creating a database, keep the following design considerations in mind:

• A database within an application may have the same name as the application to which it belongs, or a different name.
• A database has only one outline associated with it. (The outline has the same name as the database and is a pictorial view of your database structure.)
• Typically, a database has other objects associated with it, including calculation scripts, report scripts, and load rules.

The conventional advice is to create only one database per application:

• If you have trouble with one database, other databases are not affected because they are in a different application.
• The application log (which records information about the databases the application contains) is easier to read.

Creating Dimensions and Members

An outline contains multiple dimensions reflecting such database elements as time frame, scenarios, accounts, and products. The first member under the database member defines a dimension. You may have as many dimensions as you want; however, as a rule:

• Most Essbase databases have six to eight dimensions.
• Four or five dimensions is on the low side.
• Ten or eleven dimensions is on the high side.

Databases with few dimensions generally calculate quickly. Databases with many dimensions incorporating many members and complex hierarchies generally take a long time to calculate and have large storage requirements.

Dimension Hierarchies

Each dimension contains a hierarchy of members. The hierarchy reflects many different elements within the database:

• Member names (or labels) that define possible data intersections
• Consolidation path, or rollup of lower-level to higher-level members
• Drill-through path that users follow when navigating through the data

Dimensions may have a few members or many members. It is not uncommon for hierarchies to contain thousands of members with as many as twelve levels in the principal rollup path.

Members
You define Essbase database structures and relationships between data elements by creating and moving members in the outline.

The following moving rules apply:

• Moving a member by any method also moves all of the member's descendants.
• Deleting a member also deletes all of the member's descendants.
• Moving a member and its descendants prompts a restructure operation when the file is saved.
• Restructuring maintains the integrity of underlying data storage structures.

Outline Detail in Reporting
The outline associated with a database should reflect the reporting requirements and structures needed by end users. On a spreadsheet report, each dimension in the outline must display either in summary form or with detail from drilling down on the report.

The order of members in a dimension's hierarchy determines the sequence for drilling down in reports. Changing a member's place in a dimension's hierarchy updates where it displays (or where its data is summarized) in the reporting structure after restructuring the database and recalculating the rollup.

Using the Essbase Shared Member feature, you can design multiple rollup schemes for the same information in the same dimension.


Essbase Terminology for Dimensions and Hierarchies

The Essbase terms for hierarchies are categorized as generation, level, or genealogy; they facilitate your understanding of the syntax of other Essbase objects and functionality where special language is used. For example:

• Calculation script syntax for writing formulas: @ANCESTVAL
• Load rule methods for maintaining outline members: using generation, level, or parent/child references
• Syntax for writing report scripts: <IDESCENDANTS member name
• Security filter syntax for assigning access: @IDESCENDANTS
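To make the shared language concrete, here are minimal sketches of each syntax; the member names (Market, East) and the formula itself are hypothetical illustrations, not part of this guide's sample database:

```
/* Calculation script: @ANCESTVAL fetches a value from a member's
   generation-1 ancestor in the Product dimension */
"Alloc Share" = Sales / @ANCESTVAL(Product, 1, Sales);

/* Report script: select Market and all of its descendants */
<IDESCENDANTS "Market"

/* Security filter member specification */
@IDESCENDANTS("East")
```

All three use the same genealogy vocabulary (ancestor, descendant) even though they belong to different Essbase object types.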

Generations
You count generations from the top of the outline to the bottom: generation 1, generation 2, generation 3, and so on.

The database name at the top of the outline is generation 0. Each dimension name is generation 1. Generation counting is important when you load data by the generation load rule method or when you write a calculation script that references members at specific generations.

Levels
You count levels from the bottom of the outline to the top: level 0, level 1, level 2, and so on.

The lowest point in a hierarchy is level 0. Level 0 members are also called leaf nodes. Level 0 is typically, though not necessarily, the level where data is loaded.


Level counting is important when you load data by the level load rule method or when you write a calculation script that references members at specific levels.

Genealogy
Genealogy relationships in a hierarchy are defined as follows:

Genealogy      Description
Member         A name at any level in the hierarchy, including dimension names
Leaf node      Any member at level 0
Parent         A member at the next level immediately above another member in the hierarchy
Child          A member at the next level immediately below another member in the hierarchy
Siblings       Members with the same parent
Ancestors      All members in the hierarchy directly above another member
Descendants    All members in the hierarchy below another member

Member Properties

You can specify a broad variety of settings for each member that define the member’s storage characteristics and other rollup and reporting behaviors. You can define the following important properties for members:

• Aliases
• Consolidation operators
• Data storage
• User-defined attributes (UDAs)
• Attribute dimensions

Aliases
Aliases are names that can be used in place of the main member name. Aliases are commonly used for storing descriptions (for example, account or cost center names) and for providing alternative naming conventions where organization sectors use different terminology or a foreign language. Like the member names, aliases can be used for:

• Spreadsheet reporting
• Calculation script formula references
• Data loading references in data source files
• Report script references

You can create and maintain up to ten alias tables for a database. Companies use alias tables to incorporate different naming conventions between functional departments and to capture foreign language differences.

Consolidation Operators
How a member rolls up in a hierarchy depends on the mathematical operation, called the consolidation operator, that is assigned to the member. Consolidation operators are also called unary operators in Essbase practice and documentation.

Consolidation operators are set for members in the Member Properties dialog box. Consolidators include:

• Add, subtract, multiply, and divide
• % (computes a percentage)
• ~ (tilde; causes the member to be ignored during consolidation)

You use consolidation operators to specify rollup math in large business view dimensions such as products and customers. You also use consolidation operators for building models in the Accounts (also known as Measures) dimension involving units, rates, and dollars and complex activity driver relationships.
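As a sketch of how unary operators drive rollup math, consider this hypothetical Accounts fragment (operators shown in parentheses; the member names are illustrative):

```
Measures
  Profit (+)
    Margin (+)
      Sales (+)
      COGS (-)
    Total Expenses (-)
  Margin % (~)
```

Here Margin consolidates as Sales minus COGS, Profit as Margin minus Total Expenses, and the tilde excludes Margin % from the Measures rollup.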

Data Storage
By default, new members added to an outline automatically store associated data. However, some members are created for purposes other than storage. The following table describes the six storage types, which are set in the Member Properties dialog box:


User-Defined Attributes
User-defined attributes (UDAs) are special flags you can use for reporting and calculations. Using UDAs is a way to avoid setting up additional dimensions where the member identification information is not hierarchical.

UDAs are specific flags set up for a given dimension and can be assigned to individual members. When working with UDAs, keep these rules in mind:

• Multiple UDAs can be set up within a dimension.
• A given member within a dimension may be assigned one or more UDAs, or no UDA at all.
• UDAs may be loaded to specific members using load rules.
• You can use UDAs for special functions in calculation scripts and spreadsheet reporting.
• Think of UDAs generally as a way to group members in a dimension.

Here are some additional uses for UDAs:

• In security filters, UDAs can be used with the @UDA function to give access to users based on a specific UDA. This has the effect of pushing the maintenance to the outline and simplifying filter definitions.
• In partitioning, UDAs can be used in the area definition.
• In VBA, you can define a selection so that a drop-down list box is filled with members that have a specific UDA.
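In a calculation script, the same @UDA function can scope a calculation to flagged members. A minimal sketch, assuming a hypothetical UDA named "New Product" on the Product dimension:

```
/* Recalculate Measures only for products carrying the
   hypothetical UDA "New Product" */
FIX (@UDA(Product, "New Product"))
    CALC DIM (Measures);
ENDFIX
```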

Attribute Dimensions


Attribute dimensions are like UDAs in that they help assign characteristics to given members in an outline. Examples of characteristics are product sizes or colors, customer regions, and product package types. Unlike UDAs, attribute dimensions can be hierarchical; after they are requested in a report, they behave like standard dimensions.

When working with attribute dimensions, keep the following information in mind:

• Attribute dimensions add no overhead in terms of database size. They are dynamic dimensions with no storage requirements.
• Calculation of attribute dimensions is deferred until they are requested in a report. Furthermore, there is built-in functionality to enhance dynamic calculations. By default, you can sum, average, and count members, request minimum or maximum values, or use any combination.
• Attribute dimensions are drillable, so a report can show the specified attribute dimension across all other dimensions in the model. By default, attribute dimensions are not shown until they are explicitly requested.
• Attribute dimensions can be of different types (Text, Numeric, Boolean, and Date). Each type has built-in functionality in terms of enhanced retrieval filtering and calculations.
• A given member within a dimension may be assigned one or more attributes, or no attributes at all.
• Attribute dimensions may be created and loaded to specific members using load rules.

Designing Time Dimensions

With few exceptions, Essbase databases contain a time frame that you can define to any level. For example:

• Financial applications in Essbase tend to track month, quarter, and year total financial impacts.
• Order and production tracking databases often define time frames to the daily level for flash order reporting and shop floor production analysis.

Single and Multidimensional Time Designs
The Time dimension generally does not include many members, nor are the design choices complex. There are two standard approaches to Time dimension design:

• Generic model (multidimensional)
• Fiscal year crossover model (single dimensional)


Generic Model
In the generic model, months, quarters, and years have no fiscal year identification. The fiscal year ID is incorporated as a separate dimension or in the Scenario dimension.

The generic model is the default design to use if other analysis or calculation requirements do not force the crossover model.

Crosstab reporting (where different time elements display as row and columns on a report) requires the generic model.

Fiscal Year Crossover Model
In the fiscal year crossover model, months, quarters, and years have the fiscal year identification hardwired into the member names. The crossover model is typically used when analysis or calculation requires a continuum for the time frame.

Range-related calculation operations (for example, using @PRIOR, @NEXT and @AVGRANGE) usually require the fiscal year crossover model for correct calculations between fiscal years.

History-to-date accumulations—especially for Dynamic Time Series calculations—require the fiscal year crossover model.
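For instance, with hypothetical crossover member names such as Dec_FY04 followed immediately by Jan_FY05 in the outline, a range function can walk the time continuum across the fiscal year boundary:

```
/* Month-over-month change; @PRIOR reads the preceding member of
   the tagged Time dimension, even across the FY04/FY05 boundary */
"Sales Change" = Sales - @PRIOR(Sales);
```

In the generic model, by contrast, @PRIOR would have no prior member to read when the cursor sits on the first month of the year.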

Dynamic Time Series Members

The most common special calculations in the Time dimension are period-to-date calculations. Essbase uses dynamic time series member calculations to automatically perform such calculations without monthly maintenance.

Use built-in dynamic time series member calculations for period-to-date calculations such as history, year, quarter, and month-to-date accumulations:

• Dynamic time series members are calculated when initiated by a spreadsheet request. Values are not stored.
• Dynamic time series members require the time-related dimension to be tagged Time.
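As an illustration of how these members are requested, a spreadsheet user combines the reserved dynamic time series name with a period in parentheses (the period names below assume a standard January-to-December Time dimension):

```
Y-T-D(Mar)    year-to-date accumulation through March
Q-T-D(Feb)    quarter-to-date accumulation through February
```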

You typically tag the dimension that incorporates your time frame as the Time dimension. You tag the first member in your Time dimension with the Time dimension type.

When you tag a Time dimension, you realize the following benefits:

• Range operators used in calculation scripts operate by default over the tagged Time dimension periods (for example, months in a series). Range operators include such functions as @PRIOR, @NEXT, and @AVGRANGE.
• Dynamic time series member calculations are enabled.


Designing Scenario Dimensions

Although the Scenario dimension usually has few members and a minimum hierarchy, its impact on design and calculation issues is substantial.

For financial applications, you define the different data sets that are the foundation of the planning and control process in the Scenario dimension. For example:

• Actuals: Monthly downloads from the general ledger of actual financial results
• Budget: Data for setting standards, derived from the annual planning process
• Forecast: Monthly or quarterly updated estimates of financial performance

Underlying processes often drive how scenarios are defined:

• Track sequential versions of data sets: For example, scenario versions Budget V.1, Budget V.2, and Budget V.3 house successive versions of the budget. Each new version begins with a copy of the prior one, created using the DATACOPY calculation script command.
• Track steps in the internal buildup of a data set with contributions from different functional areas of the company: For example, scenario-controlled steps in the process might be Field Sales Forecast, Marketing Adjusted Forecast, and Financial Adjusted Forecast, where each successive forecast version incorporates the input of a new functional group building from the prior one.
• Track what-if alternatives: For example, scenario versions High Estimate, Best Estimate, and Low Estimate represent different assumption sets about revenue and costs that share the same underlying data sources and modeling structures.
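The version-copy step described above can be sketched in a one-line calculation script (the scenario names match the Budget V.1/V.2 example and are otherwise hypothetical):

```
/* Seed the next budget version from the prior one before new input */
DATACOPY "Budget V.1" TO "Budget V.2";
```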

Variance Analysis in Scenarios

You typically compare and compute the most important variances between data sets in the Scenario dimension. With multiple data sets incorporated under one database umbrella, it is easy to do the variance analysis:

• The comparisons are valid because each scenario uses the same account, cost center, product, and other structures incorporated in the outline dimensions.
• The variance calculations are quick and efficient using the Essbase calculator capabilities.

• When you use the @VAR and @VARPER calculation functions with Expense Reporting flags on accounts, better/worse sign conventions are easily controlled.
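For example, variance members in the Scenario dimension might carry formulas like the following (the member names Actual and Budget are illustrative):

```
/* Sign handling (better/worse) follows each account's
   Expense Reporting tag */
Variance     = @VAR(Actual, Budget);
"Variance %" = @VARPER(Actual, Budget);
```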

There are two ways to build variance calculations into an outline:

• Shared member rollups
• Formulas on outline members

Shared Member Rollups
Shared members are a unique design element in Essbase that allows sharing of input data and calculated results between members in a dimension.

The shared member assumes all attributes of the main member (its value, aliases, time balance flags, and so on) except the main member's consolidation operator.

Shared members have their own consolidation operator, independent of the main member whose value they share. This lets you build complex models with computation dependencies between members.

Formulas on Outline Members

Formulas do not need a calculation script; they can be associated with members directly in the outline. Formulas in outlines are executed as follows:

If the data storage type for the member is Store Data, the formula is executed in order during a CALC ALL execution or a CALC DIM on the member's dimension.

If the data storage type for the member is Dynamic Calc or Dynamic Calc and Store, the formula is executed when a user retrieves the member in a spreadsheet.
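For example, a ratio account is commonly tagged Dynamic Calc and given an outline formula so that it is computed only at retrieval time. The member names here are illustrative; % is the standard Essbase percent operator.

```
/* Hypothetical Margin% member, storage type Dynamic Calc.
   Evaluated when retrieved, not during CALC ALL. */
"Margin%" = Margin % Sales;
```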

Index-Based Members

The Label Only and Shared Member storage types are index-based. Members with these storage types are actually pointers to other stored members.

Label Only members have an index pointer to the first child with an Add (+) consolidation operator, or to the first child if all children have an Ignore (~) consolidation operator. In practice, this behavior can vary somewhat.

Shared members have an index pointer to the main member whose name they bear. A shared member generally appears in the outline after the main member to which it points through the index.

Designing Accounts Dimensions

The Accounts dimension (often called Measures) is generally the most complex dimension. Your business model resides in Accounts, and the most complicated and potentially time-consuming calculations occur here.

For financial applications, the main calculations occur in Accounts. Therefore, you find the following typical structures in this dimension:

Natural class accounts that define the profit and loss structure or subsets of it, such as accounts that build up to gross margin.

Balance and cash flow accounts and associated metrics, such as average assets and return on net assets

Unit, rate, and dollar calculations, especially where such calculations involve activity driver relationships between members in the dimension

Metrics and analysis calculations of all types, such as Profit%, Margin%, Sales Per Employee, Cost Per Transaction, % Mix, % Commissions

Outline Calculation Overview

The order of calculation within each dimension depends on the relationships between members in the database outline. Each branch of a dimension is calculated in turn, in top-down order. Within each branch, level 0 values are calculated first, followed by their level 1 parent value. The calculation continues in this manner until all levels in the first branch are calculated; it then moves on to the second branch and repeats the process.
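A sketch of this order on a hypothetical two-branch dimension:

```
Product          6th: calculated after both branches complete
    Bolts        3rd: level 1 parent of the first branch
        Lightbolt    1st: level 0 values come first
        Heavybolt    2nd
    Nuts         5th: level 1 parent of the second branch
        Hexnut       4th
```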

Maximize as much as possible the use of unary calculations (consolidation operators) for model building in the Accounts dimension. The underlying design principle is to use unary calculations where possible and to minimize the use of formulas in the outline or in calculation scripts:

Unary calculations execute faster than formulas in the outline and formulas in calculation scripts.

The unary calculation construction provides visibility when drilling down in the Accounts hierarchy to see where a number originates. Formulas obscure visibility of the calculations.

Use formulas (either in the outline or in a calculation script) where you cannot do the calculation with unary operators, such as when incorporating an if-then condition or where a unary operator hierarchy path would confuse users.

Implementing Accounts Dimension Properties

If you tag your Accounts dimension with the Accounts dimension type, then a number of additional features are enabled for that dimension:

Time balance accounting for identified members in the Accounts dimension is enabled. Time balance accounting affects the aggregation sequence of flagged members in the Accounts dimension (for example, balance sheet accounts) across the dimension defined as the Time dimension type.

Expense reporting for use with @VAR and @VARPER calculation functions is enabled, thus allowing control over better/worse sign conventions for variance calculations.

With Calc All and Calc Dim functions, Essbase automatically implements a correct accounts-first calculation order.

When the Accounts dimension is flagged and meets certain other conditions, two-pass calculations can be calculated with only one pass on data blocks.

The time balance accounting properties also require that you have a Time dimension enabled in your outline.

Design Considerations for Data Descriptor Dimensions

In an Essbase outline, dimensions can typically be grouped into one of two categories: data descriptor or business view dimensions. Time, Scenario, and Accounts dimensions all fall into the data descriptor category.

From a design perspective, the data descriptor dimensions share the following characteristics:

They define the data in its essentials. Most Essbase databases contain these three dimensions.

Initial development of the outline is often manual. For example, members are manually created, moved, and specified to reflect specialized calculations and models. Maintenance is typically a manual process as well.

Variances are typically calculated in these dimensions.

They are calculation-intensive. They include models, formulas, mathematically driven data relationships, and so on.

Data is often, though not always, dense, thus driving sparse and dense settings and data block construction.

Design Considerations for Business View Dimensions

All dimensions other than the data descriptor dimensions fall under the category of business view dimensions. Examples of business view dimensions include products, customers, channels, and regions.

Business view dimensions provide users a specific cut of the data: the multidimensional richness of analysis that extends beyond the simpler information incorporated in the data descriptor dimensions.

From a design perspective, business view dimensions share the following characteristics:

The design choice is substantially driven by the company’s industry and business practices.

They often have hundreds or thousands of members rolled up through many levels in a hierarchy. Thus, initial development of the outline and subsequent maintenance is usually automated through load rules.

They are not typically calculation-intensive. Calculations are usually simple aggregated rollups of branches in the dimension hierarchy. There are very few (or no) complex models, member formulas, or variances.

Business views often incorporate complex alternative rollups using shared members. For this reason, load rules (especially parent/child load rules) are typically used to facilitate the construction and maintenance of shared members.

Data is often, though not always, sparse, thus driving Essbase sparse/dense settings and data block construction.

2. Building Load Rules

Chapter Objectives

Upon completion of this chapter, you will be able to:

• Design large dimensions with label outlines
• Create dimension build load rules
• Set the dimension build load method
• Manipulate column fields
• Create attribute dimensions
• Create data load rules

Designing Large Dimensions with Label Outlines

In this section, we provide you with development methodologies for planning how to build rules to load members into outlines.

Defining Hierarchies

Some Essbase dimensions, such as those related to product codes and customer numbers, contain hundreds or even thousands of members. It is more efficient to build, change, or remove these dimensions dynamically using a data source and a rules file than it is to add or change each member manually in the Outline Editor.

Planning the Dimensions

A product dimension may have thousands of members and many potential levels in the hierarchy. How do you plan this? The answer is to develop a label outline for planning purposes before starting to load the members into the outline. Label outlines help you build a vision of the forest before you populate it with the trees. The following steps summarize the process for creating a label outline:

1) Open an empty outline on a client machine.

2) Create an outline member name for the dimension to be analyzed.

3) Starting from the top of the dimension, add a name for each level (for example, Product Family).

4) Starting from the top at a generation parallel to the name hierarchy, add a business example of the named level (for example, Lightbolt).

5) Continue adding layers of names with parallel examples until you reach the bottom of the hierarchy.

6) Use a finalized version of the label outline as a specification sheet or sign-off document.
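A finished label outline might look like the following sketch, pairing a naming branch with a parallel branch of business examples (all names hypothetical):

```
Product
    Product Family           <- level names, top-down
        Product Line
            SKU
    Bolts                    <- parallel business examples
        Lightbolt Series
            LB-100
```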

Creating Dimension Build Load Rules

A data source can contain only dimension names, member names, alias names, or data values; it cannot contain miscellaneous text. Data sources must be complete, ordered, and correctly formatted so that Essbase understands them.

Before loading data or building dimensions, you must format the data source so that it maps to the multidimensional database into which you are loading. You can format your data source by altering it or, more typically, by transforming the data with a rules file during loading.

Rules files do not change the original data source. They perform operations on the data as the data source is loaded, such as rejecting invalid records, scaling data values, or dynamically building new dimensions.

You must use data load rules to load SQL data and to build dimensions and members dynamically.

Data load rules are stored in rules files with a .rul file extension.

Batch Maintenance

You can use load rules to automate the loading and maintenance processes for outlines. For example, you can:

Load many members at one time in a batch process.
Load complex hierarchies with multiple rollup paths by using shared members.
Sort hierarchy members when loading or maintaining structures.
Automate the reorganization of existing hierarchies and member relationships.
Maintain hierarchies by adding new members and deleting old ones.

Troubleshooting

Load rules provide tools for interpreting and troubleshooting source files. For example, you can:

Associate source data hierarchies with dimensions.
Define hierarchy levels by generation, level, or parent-child methods.
Set up alternative naming conventions (member names and aliases).
Move columns that are in the wrong place in the hierarchy.
Ignore rows and columns with extraneous information.
Duplicate, parse, and concatenate information to construct hierarchies.
Add prefixes or suffixes to member names to provide clarity.
Select or reject member names by using complex alpha or numeric criteria.
Replace specific member names or character sequences in a source with alternate sequences.
Eliminate leading or trailing spaces in source formats.

Process for Dimension Build Load Rules

The following steps summarize the process for loading members into an outline:

1. Associate the rule with an outline.

2. Open a data file or SQL data source.

3. Set the data source properties.

4. Set the view to Dimension Build Fields.

5. If required, set up the new dimension.

6. Select the dimension build method.

7. If required, format the file.

8. Associate fields with dimensions and field types.

9. Validate the dimension build rule.

10. Save the dimension build rule.

Step 1: Associate the Rule with an Outline

If you do not associate your rule with an outline, you cannot successfully construct the load rule or validate it for errors. The association with an outline is not saved; you must reassociate each time you open the saved load rule.

Step 2: Open a Data File or SQL Data Source

You need to see a sample source file containing the outline members to be loaded into the Essbase outline (for example, the file's contents and order of columns). This information provides a frame of reference for building the load rule. The source file you use to build the rule should be the final source file, or in exactly the same format as the final source file that you use when updating members.

You can open data sources such as text files, spreadsheet files, and SQL data sources in the Data Prep Editor.

Step 3: Set the Data Source Properties

To apply the rules, Essbase interprets the source in a columnar format. By choosing the file's delimiter type, you allow Essbase to interpret the source's column organization. The delimiter choices are:

Comma
Tab
All whitespace
Custom (Use a single character as the custom delimiter.)
Column width (Use column width for data sources with fixed-width columns. Enter the width of the column as five digits or fewer.)

Step 4: Set the View to Dimension Build Fields

When developing a load rule, you work in one of the following view modes:

Dimension build fields view: Automate loading or maintenance of members in the outline.

Data load fields view: Load actual data (for example, units and dollars) to existing members in a database.

Step 5: Set Up the New Dimension

Essbase must know the dimension name and its properties. If a dimension has not already been set up in the outline, you can create the dimension in the new load rule.

Step 6: Select the Dimension Build Method

Essbase provides three principal methods for loading outline members:

Generation
Level
Parent-child

Step 7: Format the File

Essbase provides the following options to format the columns in data sources:

Move
Split
Join
Create using join
Create using text

Step 8: Associate Fields with Dimensions and Field Types

You must specify the following information so that Essbase knows what to do with each column:

The dimension in the outline to which the field value belongs
The data type (for example, member name for a specific level or generation, alias information, attribute, or formula)

Step 9: Validate the Dimension Build Rule

We recommend that you validate your load rule before saving it. Although not all errors are caught during validation, an invalid load rule typically results in an incorrect build.

Step 10: Save the Dimension Build Rule

You can reuse one rules file with multiple data sources. It is a good idea to save when you create a load rule or modify an existing one.

Outline Loading

Loading new members into an outline or updating an existing outline occurs by matching the source file with the load rule and outline and then initiating the load procedure.

The load rule does not modify the source file during the loading procedure. It interprets the source file content according to the underlying rules that you specify.

You can initiate a load procedure by one of the following methods:

Outline Editor
Data Load dialog box
ESSCMD or MaxL (in batch mode)
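In batch mode, for example, a dimension build can be scripted in MaxL. This is a sketch: the application, database, and file names are hypothetical, while the import statement form is standard MaxL.

```
/* Build dimensions from a local text source through a
   server-side rules file; rejected records go to an error file. */
import database Sales.Main dimensions
    from local text data_file 'products.txt'
    using server rules_file 'prodbld'
    on error write to 'dimbuild.err';
```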

Error Tracking

You can use the following methods to identify errors:

Load Rule Validation: Select Options/Validate to check for syntax and logic errors in the load rules.

Source Files: Check error conditions in the source files (for example, duplicate use of an alias).

The following guidelines will help you track source file errors generated on loading:

Open Dimbuild.err in Notepad, a spreadsheet, or any word processor to view source file errors. The default location of the Dimbuild.err file on the client is EAS/Client. You may change the location of this file to any available path and file name.

Change the error file name to match the name of the rule that you are testing.

Define the error file type as .txt to make this ASCII text file easier to access from a text editor.

Copy the file name and path from the Data Load dialog box before you initiate the load. If you do have errors, you can paste the path into the system run prompt and easily open the error file.

Load the error file after revising the rule or correcting the source error. (The error file contains a comment describing the error and the complete record rejected. All comments are ignored during the load process.)

Server Error Messages Help File

An error message is often accompanied by an error code in the application or server log. Essbase provides an HTML help file that lists error codes encountered during data loads and calculations. This documentation contains an explanation of each error message and a possible solution.

Setting the Load Method

Essbase load rules provide several methods for loading and maintaining outline members. You can select one of the following methods to automatically load members based on the format of the source file:

Build Method                  Description

Use Generation References     Columns representing hierarchy pathways are organized top-down and left to right.

Use Level References          Columns representing hierarchy pathways are organized bottom-up and left to right.

Use Parent-Child References   Columns representing parent-child relationships are organized by pairs from left to right, building up hierarchy elements.

Loading by Generations

Loading and maintaining outlines using the generation method assumes that source files are organized top-down, left to right; or they are ordered as such by applying rules that interpret source file columns into that organization.

Loading Hierarchies by Generation

Generation numbers refer to consolidation levels within a dimension. Generation 1 is the highest level in a dimension's hierarchy in the outline. Generations below the dimension names are numbered 2, 3, 4, and so on, counting top-down.

To structure files for generation method loading, organize member columns in the source file, or move columns into an order of highest to lowest, left to right, in their hierarchical order. Each row in the source file represents a distinct pathway of drill-down relationships.
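For example, a generation-formatted source for a hypothetical Product dimension, with columns running generation 2, 3, 4 from left to right and one drill-down path per record:

```
Bolts,Lightbolt Series,LB-100
Bolts,Lightbolt Series,LB-200
Bolts,Heavybolt Series,HB-100
Nuts,Hexnut Series,HN-100
```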

Loading Shared Members by Generation

Shared members can be loaded under the generation method using a tag called duplicate generation in the field type choices. The duplicate generation method is not commonly used because the outline structure requires top-down symmetry in generation number between the main member and the shared member.

Loading by Levels

Loading and maintaining outlines using the level method assumes that source files are naturally organized bottom-up, left to right; or they are ordered as such after applying rules that move source file columns into that organization.

Loading Hierarchies by Level

Levels also refer to the branches within a dimension; however, levels reverse the numerical ordering that Essbase uses for generations. Levels count up from the leaf member toward the root. The root level number varies depending on the depth of the branch.

To structure files for level method loading, organize members in the source file lowest to highest, left to right, in their hierarchical order; or move columns into this organization within the load rule itself.

Each row in the source file represents a distinct pathway of relationships drilling up from the bottom.
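The same hypothetical records in level format, ordered leaf first (level 0, 1, 2 from left to right):

```
LB-100,Lightbolt Series,Bolts
LB-200,Lightbolt Series,Bolts
HB-100,Heavybolt Series,Bolts
HN-100,Hexnut Series,Nuts
```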

Loading Shared Members by Level

Shared members can be loaded under the level method using a tag called duplicate level in the field type choices.

Like the generation method, loading shared members using levels also requires symmetry. In this case, the shared member must be at the same level as the main member.

Since many sharing structures share at the zero level, the symmetry requirement is not as onerous as with the generation method.

Loading by Parent-Child

Loading and maintaining outlines using the parent-child method assumes that extracts can be made from source files in sorted parent-child paired relationships.

Loading Hierarchies Using Parent-Child

Use the parent-child references build method with data sources in which each record specifies the name of the new member and the name of the parent to which you want to add it.

When structuring files for parent-child loading, you should:

Organize members in the source file left to right in parent-child pairs.

Ensure that all combinations of parent-child relationships that you want to set up in the outline are represented. (Any given member may display in the parent or the child column.)

Ensure that the parent-child pairs are sorted in order of precedence. In some cases, Essbase can determine the precedence without sorting; in other cases, unsorted files may produce an incorrect result.
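The same hypothetical hierarchy expressed as sorted parent-child pairs, with each parent introduced before its children:

```
Product,Bolts
Bolts,Lightbolt Series
Lightbolt Series,LB-100
Lightbolt Series,LB-200
Product,Nuts
Nuts,Hexnut Series
Hexnut Series,HN-100
```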

Loading Shared Members by Parent-Child

Parent-child is the most flexible method for loading and maintaining shared members.

If you do not select the Do Not Share option, parent-child loading automatically sets up shared members. The principal requirement is that a matching main member already exists in the outline with a different parent than the shared member currently being loaded.

Selecting Do Not Share turns off all sharing with parent-child loading. The Do Not Share check box is not selectable for the generation or level loading methods.

Creating and maintaining shared members solves a variety of sharing issues that are difficult to manage with the generation or level methods:

If Do Not Share is not selected, sharing with parent-child is automatic without any special setup requirements.

The parent-child method allows sharing at asymmetrical levels and generations. The other methods do not.

The parent-child method enables building asymmetrical hierarchies for alternate rollup paths above the shared members in one pass. The other methods do not.

The parent-child method enables sharing with members at upper levels. That sort of sharing is difficult with the other methods.

Manipulating Column Fields

Your source file may not always have columns in the correct sequential order, or you may need to do other column manipulations, such as duplicating, parsing, or concatenating columns to create uniqueness in member names.

You can use the following functions to manipulate column fields:

Function            Description

Move                Changes the sequential order of columns. For dimension building, columns must display in a specific order.

Split               Parses fields. Also segregates columns where the source file has variable field lengths without field delimiters.

Join                Joins together two or more columns. Often used to create uniqueness in member names.

Create Using Join   Joins together two or more columns, except that a new column or set of columns is created. Also duplicates a single column.

Create Using Text   Enables you to enter any text, including a blank space, and that text displays for all records in the source file.
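For example, Join is often used to make member names unique when codes repeat across regions. A sketch with hypothetical source columns:

```
Source columns:  100 | East        100 | West
After Join:      100East           100West
```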

Creating Attribute Dimensions

Attribute dimensions are powerful tools for reporting and calculating additional data. Typically, an attribute is a concrete characteristic of a member in a standard dimension; for example, size, inception date, or any other characteristic that does not change over time.

Attribute Dimension Rules

Attribute dimensions are not like standard dimensions. Standard dimensions can have multiple relationships across other dimensions; data can be stored and viewed across every intersection of all dimensions. For example, a color dimension that relates to a product dimension results in the ability to store and view data such that any product can have multiple colors associated with it.

With attribute dimensions, all relationships with the associated base dimension are one to one. For example, a color attribute dimension can be tied to a product dimension such that each product can have only one color associated with it.

Attribute dimensions are not UDAs. Although there are many similarities, attribute dimensions are very different. Attribute dimensions provide much richer reporting capabilities.

General Guidelines

Follow these general rules when creating attribute dimensions:

Base dimensions must be sparse. Base dimensions are the dimensions associated with the attribute dimension.

Attribute dimensions do not have consolidation symbols or formulas. All calculations are done across the base dimension.

Although attribute dimensions can have a multitiered hierarchy, you must associate the level 0 members (bottom level members) of attribute dimensions with base dimension members.

Base dimension members associated with attribute dimensions must be at the same level. This can be any level, but it must be the same across the base dimension.

Do not tag shared members in the base dimension with attribute dimension members. Shared members automatically inherit their respective stored member attributes.

Attribute Calculations

By default, dynamic attribute calculations are available through the Attribute Calculations dimension. This dimension behaves like other attribute dimensions in that it is not automatically displayed in a report until you explicitly request it.

Boolean and Date

You can change the Boolean values from the default of TRUE/FALSE to another value, such as YES/NO. In addition, you can change the date format for members from the default of mm-dd-yyyy to dd-mm-yyyy. In either case, all attribute members must be in the exact format chosen.

Numeric Ranges for Numeric Attributes

A numeric attribute can represent a single value or a range of values. Ranges can be used for report filtering and calculations. The default setting is Tops of Ranges. In a size dimension, for example, this would result in:

Values between 1 and 300 are associated with attribute member 300.
Values between 300 and 500 fall under attribute member 500.
Values between 500 and 1000 are under attribute member 1000.

If you change the option selection to Bottoms of Ranges:

Values less than 500 fall under the 300 attribute.
Values between 500 and 1000 have the 500 attribute.
Any values greater than 1000 fall under the 1000 attribute.

Design Considerations

Attribute dimensions are another way to group like information together. Standard dimensions, UDAs, and alternate hierarchies are ways to achieve similar results, but each choice has a different impact on database size, performance, and reporting. Understanding the impact of each choice is one of the keys to good database design.

Building Attribute Dimensions

There are a variety of ways to build the attribute dimension:

Manually in Outline Editor
Generation, level, or parent-child references
Automatically, when you associate base dimension members with specific attributes

Building Attribute Dimensions Using Rules Files

You can use rules files to build attribute dimensions dynamically, to add and delete members, and to establish or change attribute associations.

Besides the manual method, this is the only way to build a multitiered attribute dimension. The load is very similar to creating a normal dimension. This is a multistep process:

1. Create the attribute dimension.

2. Declare the build method to be used to load the attribute.

3. Assign each field a dimension property.

4. Run the load.

Associating Base Dimension Members with Specific Attributes

Once you have built an attribute dimension and associated it with a base dimension, you can add attributes to the members of the base dimension. You can accomplish this either by modifying the member properties in the Outline Editor or by running a load rule.

If you are associating the attributes through a load rule, the setting for the attribute column is usually:

The attribute dimension name for the Field Type.
The base dimension in the Dimension text box.
The generation or level number corresponding to where attributes are being assigned in the base dimension.

Using Numeric Ranges

For numeric attributes, you can optionally place attribute members within ranges by modifying the range settings when you select the field type. The range settings are only selectable for numeric attributes.

Loading Data

The concepts for creating a data load rule are the same as for a dimension build load rule, but the mechanics and interfaces differ.

Methods for Loading Data

There are three ways to load data:

Free-form loading without rules
Structured loading with rules
Locking and sending with a spreadsheet

Free-Form Loading Without Rules

Free-form loading enables manual or batch process loading with ESSCMD or MaxL instead of load rules.

Free-form loading has certain requirements that must be met: Your file structures need to be formatted with precise organization of headers and row/column information similar to the rules used in spreadsheet reporting.

This methodology is not commonly used in a production environment.

Structured Loading with Rules

Loading data with load rules lets you deal with unstructured formats and source problems. Loading with rules is easily implemented manually with the Administration Console, or in a batch production environment with ESSCMD or MaxL.

Unlike lock and send loading from a spreadsheet, there are no fundamental restrictions on the size of files or number of records that can be loaded using load rules.
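A batch data load can likewise be scripted in MaxL. As before, the application, database, and file names in this sketch are hypothetical; the statement form is standard MaxL.

```
/* Load data from a local text source through a server-side
   data load rules file; rejected records go to an error file. */
import database Sales.Main data
    from local text data_file 'sales.txt'
    using server rules_file 'salesld'
    on error write to 'dataload.err';
```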

Locking and Sending with a Spreadsheet

Just as users can receive information in a spreadsheet using the Essbase Spreadsheet Add-in, they can also send data to an Essbase database from this same interface.

This methodology is commonly used for interactive applications. For example, a user develops budget information, sends it to Essbase, and then retrieves a returned result that incorporates an intermediary calculation process.

Loading to Essbase from a spreadsheet is executed by sequentially selecting Lock and then Send from the Essbase menu. Formatting for Lock and Send from the spreadsheet follows spreadsheet retrieve rules.

When Lock is selected, all data blocks represented on the spreadsheet are locked; that is, other users may access the same information on a read-only basis, but may not change any data.

When Send is chosen, the data on the spreadsheet is written to the data blocks represented and then those data blocks are unlocked.

What Load Rules Do for Data

Like load rules for loading and maintaining members in an outline, you use load rules with the same interfaces to load data from text, Excel, or Lotus files.

Different rules can be quickly developed to address the file format issues of different sources. Essbase databases typically draw information from multiple sources.

Use load rules for managing the following types of data loading activities:

Loading a large number of records at one time in a single batch process
Matching record elements to members in the outline
Overwriting existing loaded or calculated values and aggregate values during data load
Managing header information, such as ignoring extraneous headers or setting up special-purpose headers for label identification
Selecting or rejecting records on data load and screening on key words or values by field
Manipulating date and data formats
Updating unrecognized new members on the fly without creating error conditions
Creating uniqueness in member names of source data by adding prefixes or suffixes or by replacing member name elements
Cleaning up source file leading and trailing white space
Scaling incoming data automatically

Creating a Data Load Rule

When you open a data load rule, you perform the following tasks:

1) On the Options menu, associate an outline.

2) On the File menu, open a data source file.

3) Set the data file delimiter if the delimiter is not tab.

4) On the View menu, select Data Load Fields.

Associating Fields with Dimension Members

Data load rules, just like rules for dimension building, rely on a column organization of the source file. So that Essbase knows what to do with each column, you must match each column of information, which contains either a member label or a data element, to a specific dimension or a member within a dimension.

Maximizing Data Load

Loading data generally does not take a long time. The following guidelines help ensure that you do not have performance problems.

Data load time is highly affected by the organization of the source file. The source file should be organized so that Essbase does not have to revisit data blocks, the basic unit of storage, more than once.

For maximum efficiency and to minimize passes on data blocks, structure your source file as follows:

• For the labels identifying the data points, place sparse dimensions to the left and dense dimensions to the right.
• Sort the columns left to right.
• Use simple load rules. Load rules interpret the source files; highly complex rules that involve substantial interpretation (for example, rules that contain select and reject screens, complex column moves, or splits and joins) take longer to load than simple rules.

Selecting Data Load Parameters

Essbase provides various format manipulation options. These manipulations may be used in load rules for dimension building, for data loading, or for both.

Convert Case

Selecting Convert Case converts all incoming member names from lowercase to uppercase or vice versa. By default, the original case is retained.

Prefix and Suffix

You can add a specific prefix or suffix to incoming member names. This feature is typically used where you need to make member names unique, distinct from names in other columns.


Format Controls

The following controls enable you to format your data file columns:

Data Field: When the column is the only data field in the data source file, select this check box; that is, the other columns and/or headers identify the record completely to all dimensions.

Drop Leading and Trailing Whitespace: Leading and trailing whitespace in source files causes errors that prevent loading. Leave this check box selected unless such whitespace is part of the member identification.

Convert Spaces To Underscores: This feature is not commonly used. It is a carryover from Essbase 2.x, where member names could not include blank spaces.

Scale: You may change the scale of incoming data by selecting this option and entering a multiplier. For example, 1,234,567 is converted to 1,234.567 when multiplied by .001.

Replace With: You may create complex replace functions on member names. You typically use this feature to resolve member uniqueness problems, strip unwanted identifier information, or substitute member name identifications to accommodate naming differences between source systems and Essbase.
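The field manipulations described above can be pictured as simple text and numeric transformations. The following Python sketch is purely illustrative, not Essbase's implementation; the function names and parameters are invented for the example.

```python
def transform_member(name, prefix="", suffix="", replacements=None, to_upper=False):
    """Apply load-rule-style manipulations to one incoming member name."""
    name = name.strip()                          # drop leading/trailing whitespace
    for old, new in (replacements or {}).items():
        name = name.replace(old, new)            # "Replace With" substitutions
    if to_upper:
        name = name.upper()                      # Convert Case
    return prefix + name + suffix                # Prefix/Suffix for uniqueness

def scale_value(raw, multiplier):
    """Scale incoming data; a multiplier of .001 turns 1,234,567 into 1,234.567."""
    return raw * multiplier
```

For example, `transform_member("  cola ", prefix="Mkt_", to_upper=True)` yields `"Mkt_COLA"`, combining whitespace cleanup, case conversion, and a uniqueness prefix in one pass.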

Handling Data Values on Load

You can clear existing data from the database before loading new values. By default, Essbase overwrites the existing values in the database with the new values in the data source.

You have choices about how values are aggregated on data load. For example:

Incoming records can be added to or subtracted from the existing values in an Essbase database. If you load weekly values, you can add them to the existing values to create monthly values in the database.

The add and subtract options make it more difficult to recover if the database crashes while loading data, although Essbase lists the number of the last row committed in the application event log file. However, you can set the Commit Row value to 0 (a Database Transaction setting) so that Essbase views the entire load as a single transaction and commits the data only when the load is complete. Using this setting has a negative impact on data load performance.

Handling Aggregations on Load

There are three options for handling aggregations on load:

Overwrite Existing Values: Incoming data steps on data already contained in the database. Use this option when you want to replace existing records with new records that have identical identifiers for each dimension. Each new record overwrites the values of the previous record with the same member labels.

Add To Existing Values: Incoming data is aggregated with values already contained in the database. If you use this option when loading records into a database with existing records that have identical identifiers for each dimension, each new record is added to the aggregated values of all previous records with the same member labels.

Subtract From Existing Values: This option behaves much like Add To Existing Values, except that values are subtracted. If you use this option when loading records into a database with existing records that have identical identifiers for each dimension, each new record is subtracted from the aggregated values of all previous records with the same member labels.
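The three modes can be sketched as follows, treating the database as a plain dictionary keyed by member-label tuples. This is a hypothetical model for illustration only; it is not how Essbase stores data blocks.

```python
def load_record(db, key, value, mode="overwrite"):
    """Load one record into db under one of the three aggregation modes."""
    if mode == "overwrite":
        db[key] = value                        # new record steps on the old value
    elif mode == "add":
        db[key] = db.get(key, 0.0) + value     # e.g. weekly loads roll up to monthly
    elif mode == "subtract":
        db[key] = db.get(key, 0.0) - value
    else:
        raise ValueError("unknown mode: " + mode)
```

For example, loading four weekly Sales records for January with mode "add" leaves the monthly total in the database, while "overwrite" would keep only the last week's value.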

Selecting and Rejecting Records

By default, Essbase accepts all records in a source file. You can set up screens, however, to select or reject records using specific alpha or numeric criteria.

Booleans on Multiple Fields

If you have select or reject criteria on multiple column fields, by default all conditions must be met before a record is selected or rejected. That is, when several fields have select or reject criteria, the default Boolean operator is AND.
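The default AND behavior across fields can be sketched like this; the field names and criteria are invented for the example, not taken from any real rules file.

```python
def record_selected(record, criteria):
    """A record passes only when every field criterion is met (AND logic)."""
    return all(test(record[field]) for field, test in criteria.items())

# Hypothetical screen: keep only East-region records above 1,000 units.
screen = {
    "Region": lambda v: v == "East",
    "Units":  lambda v: v > 1000,
}
```

A record failing any single field criterion is rejected, mirroring the default AND combination of select/reject screens.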

Capturing New Members During Data Load

In normal production mode, new members being added to an outline are identified and loaded with a dimension build load rule (or rules, for multiple business view dimensions) that is executed just prior to a data load.


There may be cases, however, where new members are not captured by this normal procedure; for example, where extracts for the outline update are pulled on a schedule prior to preparation of the data file extract.

During a data load, if a member of a record is not in the outline, Essbase considers the record to be in error and the record is not loaded.

For those cases, you can set up a safety net procedure so that unrecognized members encountered during a data load are identified and placed into specific locations in the outline.

When the source is out of sync with separate dimension build inputs, you can use the following dimension build Add As options to capture new-member exceptions during a data load. Otherwise, members in the data that are not already in the outline cause an error condition.

Add as a sibling of a member with matching string: Compares the new member to existing members in the outline using string matching; for example, matching account or part number sequences with callouts embedded in the numbering scheme.

Add as a sibling of lowest level: Assigns the new member to the lowest level in a hierarchy. Useful if you are loading to a flat dimension list.

Add as a child of: Assigns the new member as a child of another member in the outline; for example, capturing new members under a member called "unrecognized product" or "unrecognized customer."


3. Spreadsheet Reporting

Chapter Objectives

Upon completion of this chapter, you will be able to:

• Install the Essbase Spreadsheet Add-in and toolbar
• Connect to Essbase
• Retrieve data
• Manage worksheet options
• Build reports with the member selection tool
• Create reports with Query Designer
• Replicate reports with Cascade
• Protect Excel formulas during retrieval

Installing the Essbase Spreadsheet Add-in

The Essbase Spreadsheet Add-in is a powerful way to analyze your data in a familiar spreadsheet environment. It is installed on a user’s personal computer or client machine.

The appearance of the Essbase menu on the Microsoft Excel menu bar indicates that the Essbase Spreadsheet Add-in is installed. If it is installed but the Essbase menu is not displayed, then you need to add it to the menu bar (Tools > Add-Ins).

You can use the Tools > Add-Ins menu to turn the add-in on and off, to regain Excel functionality in non-Essbase workbooks.

The add-in file is Essexcln.xll and is located in the Essbase\bin folder. In Tools > Add-Ins it displays as Essbase OLAP Server DLL.

Installing the Toolbar

The Essbase Spreadsheet Add-in comes with its own custom toolbar that incorporates most of the commands on the Essbase menu. The toolbar enables your users to perform actions by clicking a button, instead of searching through menus.

When the Essbase Spreadsheet Add-in is installed on your computer, the installation program adds a file named Esstoolb.xls to the Essbase\Client\Sample directory. By opening this file and enabling its built-in macros, you install the toolbar in Excel.


Connecting to Essbase

Users log on to the Essbase Server after starting Excel or Lotus 1-2-3 on their client machines. Keep the following guidelines in mind:

You can have only one database connection from any individual worksheet in a workbook. That is, you cannot retrieve data from more than one database to a single worksheet.

Within a single workbook, you may have different database connections on different worksheets.

Database connections are persistent until you disconnect them or until you exit the Essbase Spreadsheet Add-in. Merely closing the Excel file does not end established connections.

You can view all current established connections in the Disconnect dialog box.

If you have at least one established connection, the system does not prompt you to connect again on subsequent worksheets, but uses the last requested connection.

Retrieving Data

Upon any retrieve action from the Essbase menu, such as Retrieve, Zoom In, Keep Only, or Pivot, Essbase initiates a label-scanning process that runs left to right, top to bottom.

The scanning looks for labels on the spreadsheet to match with members in the outline. The header section of the worksheet is scanned first, then the row/column section.

When at least one label is matched for each dimension with members in the outline, Essbase can place data, assuming that the labels follow the placement rules described in the following sections.

Label Placement Rules Overview

The following list describes general rules for placing labels on a worksheet so that Essbase may properly place data during a retrieve operation. If the rules are not followed, an error message describes the error condition. When you close the error message, the requested retrieve action is not performed.

Labels on the worksheet must match outline members or their aliases exactly. Look out for trailing white space. A space is interpreted as a valid character.

Worksheet labels are not case-sensitive unless case sensitivity is set by the database designer in the outline (Settings > Case Sensitive Members).

We do not recommend setting members as case-sensitive.


Any dimension may appear in the header or row/column sections in any combination with other dimensions.

All dimensions, except attribute dimensions, must be represented in the header or row/column section before Essbase encounters a data point.

Rows or columns that contain header or row/column labels for retrieving can be hidden. For example, labels for retrieval need not be displayed on the visible report format. To hide a row, select it, and then select Format > Row > Hide.

Except for a few minor restrictions, labels on the spreadsheet that are not displayed in the outline do not prevent you from retrieving data. For example, your own special labeling may display anywhere on the sheet. Essbase alerts you to such extraneous members with a warning dialog box upon a retrieve action. The unknown member message may be turned on and off on the Global tab (Essbase > Options).

Labels that look like a number or date may be interpreted by Excel as a number or date rather than as a member name. For example, 0198 is read by Excel as the numeric value 198. Precede such typed values with an apostrophe so that they are interpreted as text and thereby as a valid member name from the outline.

When attribute dimensions are included in a report (they are not included by default), they behave like regular dimensions in terms of navigation. However, when you drill on a level 0 attribute dimension member, a different set of rules applies: the associated base dimension expands and displays the base members with that attribute.
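The matching rules above (exact match including trailing spaces, case-insensitive unless the designer sets otherwise) can be sketched as follows. The function is a hypothetical illustration, not part of any Essbase API.

```python
def match_label(label, outline_members, case_sensitive=False):
    """Return the outline member matching a worksheet label, or None.

    A trailing space is a valid character, so "Sales " does not match "Sales".
    """
    for member in outline_members:
        if label == member or (not case_sensitive and label.lower() == member.lower()):
            return member
    return None
```

Note that the label is never stripped: a stray trailing space makes the lookup fail, which is exactly the error condition the placement rules warn about.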

Header Rules

The following rules apply to the scanning process for the header section of the worksheet.

• Only one member from a given dimension may be displayed. Header members define all the data on the page for that dimension.
• If a dimension is represented in the header section, then members from the same dimension cannot be displayed in a row or column.
• Any dimension (excluding attribute dimensions) not found on the worksheet during the label-scanning process is placed into the header section as the dimension name (generation 1) member.
• Retrieving into a blank sheet places all nonattribute dimensions at generation 1 onto the sheet as headers. However, the first dimension appearing in the outline displays as a row.
• Header member names may display in any order. They may also be stacked in multiple rows. Stacking header members results in a new header placement when you drill across columns.
• Zooming on a header member causes it to become a new row. Zooming on a header member while pressing the Alt key causes it to become a new column.


Row/Column Rules

Follow these rules for the placement of row and column labels:

• Rows and columns must be below the page header section, starting on a separate row.
• There must be at least one row and one column on a report. In some formats, Essbase may interpret one of the page headers as a column header.
• Column headings must be on a row of their own, prior to the row headings. All members from a given dimension that is a column header must be displayed on the same row.
• Row headings must be in a column of their own, but row headings may be in a column that sits between column headers. All members from a given dimension that is a row header must be displayed in the same column.
• Row and column headings must contain members from only one dimension.
• Rows and columns may be nested:
  • There is no limit to the number of levels of nesting, in any combination for rows and columns, up to the total number of dimensions in the database.
  • Nesting can be done with asymmetrical headers for columns by stacking member names; that is, the nesting relationships are explicitly placed one on top of another.

Exploring Data with Basic Retrieve Operations

The following options are the basic retrieve operations of Essbase, all of which (except Flashback) initiate the label-scanning process:


Identifying Conditions for Poor Retrieval Performance

Normally, retrieve operations are quick, usually taking only a second or two even for a complicated report that requires many data blocks to be brought into memory.

Retrieve performance may be negatively affected by server, network, or database design conditions such as:


Poor Design: Data block size is too large and block density is low, causing memory limitations when building complex reports that require many blocks.

Dynamic Calculations: Requesting a report that includes a large number of members that are dynamically calculated. This includes attribute dimensions.

Transparent Partitions: Data is being requested from one database by another, especially over a network.

Heavy Server Traffic: Too many users are attempting to access the server at one time.

Competing Operations: Memory-intensive operations, such as calculations, are occupying the server.

Retrieve performance may also be negatively affected by client or user-generated conditions, such as a large-area retrieve. This happens when a user performs a retrieve operation after selecting a whole worksheet or more than one column, and Essbase attempts to retrieve into the entire selected area. Press the Esc key to interrupt such a retrieve.

Managing Worksheet Options

You use the Options command on the Essbase menu to control indentation, zoom, aliases, messaging, styles, and other feature sets in an Essbase spreadsheet.

Managing Global Options

Global options are specific to the client machine and are set by the individual user. These option settings apply to all worksheets and workbooks that a user opens.

Mouse Actions

The mouse actions options enable the left and right mouse functionality for zoom and pivot actions and for access to linked objects. Enabling mouse actions for Essbase disables some Excel functionality, such as shortcut menus. Enabling Limit to Connected Sheets means that Essbase disables Excel functionality only on worksheets currently connected to Essbase. This option is available only with Essbase version 7.1.2 and higher.

Mode

Navigate Without Data lets you develop reports using zoom, pivot, and keep-only actions without retrieving from the Essbase Server (no data displays).

Managing Zoom Options


Zoom options are specific to individual worksheets. Each worksheet may have zoom settings of its own, and these settings are saved with the Excel workbook.

Zoom In

The Zoom In options set the zoom behavior when you select Zoom In on the Essbase menu or when you double-click a member name. Eight settings affect Zoom In:

Next Level: The default setting. Goes to the next level in the hierarchy. For example, zooming from a member goes to the member's children.

All Levels: Sets the zoom to drill down on all descendants of the selected member. Be careful not to zoom to all levels on a dimension with thousands of members.

Bottom Level: Sets the zoom to drill down to level 0 members in relation to the selected member. This is a useful feature when you want to quickly see the source data, which is usually loaded at the bottom of a hierarchy.

Sibling Level, Same Level, and Same Generation: Horizontal zooming. Crosses a dimension's hierarchy from the selected member rather than drilling down vertically.

Formula: Enables drilling down based on member formulas in the outline. The drill results in a list of the members that make up the formula.

Include Selection: Retains the parent member on the report along with its children.

Within Selected Group: If there are duplicate member names on a report, affects only members of a specific group. For example, an asymmetrical report may contain Current Year and Budget Sales figures for Quarter 1. With this option selected, you can drill down to the January value for Current Year while leaving the Budget value at the Quarter 1 level.

Remove Unselected Groups: Works in tandem with Within Selected Group. When selected, it removes unselected groups from the report. For example, if you are working on an asymmetrical report that contains both Current Year and Budget information and you drill into Current Year Quarter 1, the Budget values are removed from the report.

Zoom Out


The Zoom In settings do not affect Zoom Out. In all cases, Zoom Out goes up to the next level from the selected member.

Managing Style Options

Style options are specific to individual worksheets. Each sheet may have style settings of its own. Style settings are saved with the Excel workbook. There are three categories:

Member Styles: Lets you set font characteristics (font, size, style, color, background, and so on) for parent members, child members, shared members, members with formulas, and members with a dynamic calc storage setting. Style settings apply to members of all dimensions. Parent style settings apply to all non-level 0 members; child style settings apply only to level 0 members.

Dimension Styles: Lets you set different font characteristics for each dimension in your database. When used together with Member Styles, the member style takes precedence.

Data Cell Styles: Lets you set font characteristics on data cells themselves (not member labels) to distinguish Linked Objects, Read Only, and Read/Write cells. A style must be set for Linked Objects if linked objects are used; otherwise, users do not know which cell to select to view a linked object.

Linked Objects

Essbase is ideally suited for loading, storing, and calculating numeric data. However, a text capability is provided: notes can be associated with the data points at specific member combinations, and other files, such as a Word document, PowerPoint slides, or even another linked Essbase database, can be associated with a data point. This feature set is called Linked Objects.

When you create a Linked Object in a data cell, the spreadsheet user can double-click that data cell to view the object to which it is linked. However, to gain the full benefit of using Linked Objects, you need to assign a unique style to Linked Objects in the Essbase > Options dialog box. When a user accesses a worksheet containing a Linked Object, the style is displayed, alerting the user to its presence.


Managing Display Options

Display options are specific to individual worksheets. Each sheet may have settings of its own.

Indentation

Three levels of indentation are provided:

• Totals
• Subitems
• None

The user cannot override Essbase indentation settings on retrieval sheets.

Where indentation (and potentially other formatting requirements) are rigid, use the backsheet method: the formatted sheet is linked by formulas to an unformatted Essbase retrieval sheet.

Replacement

The Replacement options enable you to enter your own nomenclature for #Missing and #NoAccess labels. For example, you can set #Missing to display as a dash (-) or as N/A.

Suppress

The Suppress options enable you to suppress #Missing rows, zero rows, or underscore (_) characters in member names.

Suppress #Missing Rows and Suppress Zero Rows functionality is defined by row: an entire row is not displayed only when all of its values are #Missing or zero, respectively.

Suppress #Missing Rows and Suppress Zero Rows have no memory: previously suppressed rows do not reappear if they subsequently contain values. For those cases, use a template retrieve with report script syntax.
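The by-row semantics can be sketched as below. This is illustrative Python, with None standing in for #Missing; it is not how the add-in is implemented.

```python
def suppress_rows(rows, suppress_missing=True, suppress_zero=False):
    """Drop a row only when every cell is #Missing (None) or every cell is zero."""
    kept = []
    for label, values in rows:
        if suppress_missing and all(v is None for v in values):
            continue                      # entire row is #Missing
        if suppress_zero and values and all(v == 0 for v in values):
            continue                      # entire row is zero
        kept.append((label, values))
    return kept
```

A row with even one real value survives, which matches the rule that suppression applies only to rows that are missing or zero in their entirety.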

Aliases

The Aliases options enable you to show reports using aliases rather than member names.


There are two alias options:

Use Aliases: Aliases are used according to the alias table that is set. Members with no aliases default to the member name.

Use Both Member Names and Aliases: This option places both the member name and the alias on a report for row dimensions. This may be distracting where you have multiple nested rows. Member names are duplicated where there is no alias. This feature is not functional for columns.

Cell

The Cell options enable you to control the display of certain cell characteristics:

Adjust Columns: When selected, automatically adjusts column width to the widest number, member name, or user-entered label. Leave this option selected when creating reports, and clear it when fixed-format reports are finalized.

Repeat Member Labels: Fills in outer nested member names (or aliases) in rows and columns when more than one dimension is used. This feature is useful when creating report formats for export to targets that require full row identification of data labels.

Building Reports with Member Selection

The Member Selection dialog box provides direct access to your database outline for copying member names:

• By Member Name
• By Generation Name
• By Level Name
• By Dynamic Time Series

For any of the options, select the dimension to be placed on the worksheet from the Dimension drop-down list. When you select the Time-related dimension, the By Dynamic Time Series view option becomes selectable.

For any of the four options, you may select a “subset” rule so that you can qualify member lists by wild card search criteria and UDAs.

You can save and later recall any complex member selection rule set.

Selecting by Member Name

The By Member Name option for member selection enables you to copy members, taking a vertical slice of a dimension's hierarchy starting from an anchor member.

When you create a member list, you can also specify:

• Member Display: Members are displayed down the worksheet if you select the Place Down the Sheet check box. Otherwise, members are displayed across the worksheet.
• Appearance of Shared Members: Shared members are not copied to the worksheet if you select the Suppress Shared Member check box.

Selecting by Generation and Level Name

The By Generation Name and By Level Name options for member selection enable you to copy members, taking a horizontal slice of a dimension's hierarchy defined by the generation or level name or number.

Creating Reports with the Query Designer

If you prefer more structure when building a report, Essbase Query Designer handles much of the work for you. In addition to making it easy to construct new reports, the interface includes an array of tools for filtering and sorting reports based on column data values.

Query Designer enables you to quickly and easily create spreadsheet reports using point-and-click and drag-and-drop interfaces. It provides a set of intuitive dialog boxes that step you through the process of placing member labels on the spreadsheet.

Query Designer is the only part of the Essbase Spreadsheet Add-in that enables you to create reports based on data values as opposed to member labels. This is accomplished through filtering and sorting tools embedded in the interface, so that you can apply specific, user-defined criteria to the data. You can use filtering and sorting to:

• Create top or bottom lists (for example, the top ten customers based on total sales)
• Identify members with a variance within specified ranges (for example, actual versus budget greater than 10%)
• Identify members with values within specific ranges (for example, unit sales between 100,000 and 500,000)
• Sort members according to ascending or descending values (for example, ranking members based on total sales)

Like other reports you create, reports generated with Query Designer are reusable. In addition, Query Designer defines a query, which can itself be saved for future use. When you run the query, it can generate the final report in one of two ways:

Excel Workbook: The Excel workbook report can be saved just like any other report. It is not necessary to run the query again if you want to use the report in a subsequent time period. Simply change the member names in the Excel worksheet, connect, and retrieve.

Report Script: You can save report scripts along with the query. Report scripts are used by system administrators to create data extracts for data load files and other system maintenance tasks. Report scripts are also used to create large reports unsuitable for spreadsheet production.

Filtering for Top/Bottom N with Query Designer

Top/Bottom N is a type of filter analysis that is created from the Navigation panel under Data Filtering. Top/Bottom N enables you to filter row dimension members to a specific top or bottom number of row members based on the values of one column header.

When using Top/Bottom N filters, you should consider these issues:

Top N and Bottom N row members may display simultaneously on the same report.

Column references for setting the top/bottom criteria include stacked columns where multiple nested dimensions are included in the member specifications.

You should perform top/bottom analysis on row members that are all at the same generation or level. Otherwise, intermediate subtotals skew the results.

You should perform top/bottom analysis on row members that are defined by a member selection macro using generation, level, UDA, or similar references where a dynamically refreshed pool of members, not an explicit list, is the basis for the filtering.
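The idea behind a Top/Bottom N filter can be sketched as follows, assuming rows all at the same level and one column used for ranking. The function and data shapes are invented for illustration; they are not part of Query Designer.

```python
def top_bottom_n(rows, column, top_n=0, bottom_n=0):
    """rows: list of (member, {column: value}) pairs, all at the same level.

    Returns the top_n highest and bottom_n lowest rows by the given column.
    """
    ranked = sorted(rows, key=lambda r: r[1][column], reverse=True)
    result = ranked[:top_n]             # top N by descending value
    if bottom_n:
        result += ranked[-bottom_n:]    # bottom N may display on the same report
    return result
```

Note that mixing levels would skew this ranking with subtotals, which is why the guideline above restricts the analysis to rows at one generation or level.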

Filtering Values with Query Designer

With Query Designer, you can filter row dimension members on one or more criteria based on the values of one column header. When using value filters, you should consider these issues:


You can apply multiple value filters using And/Or logic. This feature lets you specify value ranges for screening in addition to specific value thresholds.

Column references for setting the value filter criteria include stacked columns where multiple nested dimensions are included in the member specifications.

You should perform value screening on row members that are all at the same generation or level. Otherwise, intermediate subtotals skew the results.

You should perform value screening on row members that are defined by a member selection macro using generation, level, UDA, or similar references where a dynamically refreshed pool of members, not an explicit list, is the basis for the filtering.

For Essbase 6.5 and later, Query Designer enables you to use negative numbers as filtering criteria.
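And/Or combinations of value criteria can be pictured with a small helper like this. It is an illustrative sketch, not part of Query Designer; the range values are made up.

```python
def passes_filters(value, tests, logic="and"):
    """Combine several value criteria with AND or OR logic."""
    results = [test(value) for test in tests]
    return all(results) if logic == "and" else any(results)

# Hypothetical range screen: unit sales between 100,000 and 500,000.
in_range = [lambda v: v >= 100_000, lambda v: v <= 500_000]
```

AND logic expresses a range (both bounds must hold), while OR logic expresses alternative thresholds, which is how multiple value filters let you screen on ranges as well as single cutoffs.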

Sorting with Query Designer

With data sorting, you can sort row dimension members in ascending or descending order based on the values of a column header. You set up sorting on the Navigation panel. When sorting values, you should consider the following issues:

You can set up multiple sorts for specific row members based on different column headers.

Column references for setting the sorting criteria include stacked columns where multiple nested dimensions are included in the member specifications.

You should sort on row members that are all at the same generation or level. Otherwise, intermediate subtotals skew the results.

You should sort on row members that are defined by a member selection macro using generation, level, UDA, or similar references, where a dynamically refreshed pool of members, not an explicit list, is the basis for the sort.

Replicating Reports with Cascade

With cascade reporting, you can create one standard report complete with precise styles, color coding, and number formats. You can then replicate this report format to multiple cost centers, regions, product lines, or other business view elements.

Cascade creates multiple workbooks or sheets within a workbook that replicate your standard format. It retrieves on each replicated sheet, and indexes the sheet references.

Protecting Formulas During Retrieval


One continuing litmus test for Essbase spreadsheet reporting has been how Essbase interacts with formulas on a worksheet. You enable formula preservation on the Mode tab of the Essbase > Options dialog box. Near-total compatibility exists between Essbase retrieve functions and worksheet formulas. The exceptions are:

• Suppress #Missing or Zero Rows: If either of these display options is selected, none of the formula preservation options are selectable.
• Pivot: If any formula preservation options are selected, you cannot pivot dimensions on your worksheet.

Formula Behavior

When formula preservation options are not selected, an Essbase retrieve deletes formulas in a spreadsheet: formulas of any type on the worksheet are overwritten with a null. To keep formulas on a worksheet during a retrieve operation, you must preserve them.

The Formula Preservation options are as follows:

Retain on Retrieval: Preserves formulas on a simple retrieve operation, but formulas are deleted without warning on any other retrieve operation, such as zoom.

Retain on Keep and Remove Only: Enabling Retain on Retrieval activates the other retrieve preservation options in layers, the first layer being the option to preserve formulas when using Keep Only and Remove Only.

Retain on Zooms: Enabling Retain on Retrieval also activates Retain on Zooms, which in turn enables the Formula Fill check box. Retain on Zooms preserves the previous formulas on row-based zoom in and zoom out operations. Formula Fill fills in the appropriate formula for newly inserted members.

Retain on Zooms with Formula Fill is intended to work correctly only on row zoom operations. Column zoom operations with these options selected produce unexpected results.


4. Creating Basic Calculations

Chapter Objectives

Upon completion of this chapter, you will be able to:

• Design calculations in the outline
• Design calculation scripts
• Describe the relationship between data storage and calculation efficiency
• Organize calculation scripts
• Return correct calculation results
• Control the top-down calculator
• Leverage hierarchy intelligence
• Generate member lists
• Create variables
• Optimize the outline with dynamic calculations

Designing Calculations in the Outline

Based on your experience with Essbase, you have created outlines, performed basic aggregations, and looked at spreadsheet and advanced reporting options. Now we focus on the Essbase calculator.

Perhaps the most distinctive aspect of Essbase is its ability to quickly and dynamically calculate data based on the needs of the user. We are used to spreadsheets, where we must hard-code formulas and calculate each value individually. With Essbase, values are calculated all at once based on their structure in the outline.

Calculating in the outline takes place in two ways:

Unary or consolidation operators
Member formulas

Unary Operators

The first and most efficient way to calculate in the outline is through the use of unary operators. Unary operators are addition, subtraction, multiplication, and division operators that determine how data values aggregate to a parent level.

Where possible, you should maximize the use of unary calculations (consolidation operators) for model building in the Accounts dimension:

Unary operator calculations are faster than formulas in the outline or in calculation scripts.


Unary calculation construction provides visibility for drilling down in the Accounts hierarchy so that you can see where the number originated. Formulas can obscure visibility into the calculations.

You can assign six unary operators to dimension members:

Operator Description

+ Default operator. When a member has the + operator, Essbase adds that member to the result of previous calculations performed on other members.

- When a member has the - operator, Essbase multiplies the member by -1 and then adds the product to the result of previous calculations performed on other members.

* When a member has the * operator, Essbase multiplies the member by the result of previous calculations performed on other members.

/ When a member has the / operator, Essbase divides the member into the result of previous calculations performed on other members.

% When a member has the % operator, Essbase divides the member into the sum of previous calculations performed on other members. The result is multiplied by 100.

~ When a member has the ~ operator, Essbase does not use it in the consolidation to its parent.

Order of Operations
It is important to understand how Essbase calculates members with different operators. When you are using addition and subtraction operators, the order of members in the outline is irrelevant. However, when you use any other operator, you need to consider the member order and its impact on the consolidation.

When siblings have different operators, Essbase calculates data in top-down order. The following example illustrates a top-down calculation:

Sample Rollup
Parent1
  Member1 (+) 10
  Member2 (+) 20
  Member3 (-) 25
  Member4 (*) 40
  Member5 (%) 50
  Member6 (/) 60
  Member7 (~) 70

Essbase calculates Member1 through Member4 as follows:

(((Member1 + Member2) + (-1) * Member3) * Member4) = X
(((10 + 20) + (-25)) * 40) = 200

If the result from Members 1-4 is X, then Member5 consolidates as follows:

(X / Member5) * 100 = Y
(200 / 50) * 100 = 400

If the result of Member5 is Y, then Member6 consolidates as follows:

Y / Member6 = Z
400 / 60 = 6.67

Essbase ignores Member7 in the consolidation.
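The top-down consolidation above can be verified with a short sketch. The following Python function is an illustration of the operator semantics described in the table, not part of Essbase itself:

```python
def consolidate(members):
    """Apply Essbase unary operators to (operator, value) pairs in
    top-down (outline) order and return the parent value."""
    result = 0.0
    for op, value in members:
        if op == '+':
            result += value
        elif op == '-':
            result -= value
        elif op == '*':
            result *= value
        elif op == '/':
            result /= value              # divide member INTO the running result
        elif op == '%':
            result = result / value * 100
        elif op == '~':
            pass                         # excluded from the consolidation
    return result

parent1 = consolidate([
    ('+', 10), ('+', 20), ('-', 25),     # (10 + 20) - 25 = 5
    ('*', 40),                           # 5 * 40 = 200
    ('%', 50),                           # 200 / 50 * 100 = 400
    ('/', 60),                           # 400 / 60 = 6.67
    ('~', 70),                           # ignored
])
print(round(parent1, 2))  # 6.67
```

Note that reordering the members would change the result for any operator other than + and -, which is exactly why member order matters in the outline.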

Associating Formulas with Outline Members
The second option for calculating within the outline is through the use of member formulas. You should use formulas when you cannot calculate with unary operators (for example, incorporating an if-then condition) or when a unary operator hierarchy path would confuse users. You can directly associate formulas with members in the outline.

Calculating Formulas in the Outline
Formulas on outline members are calculated by the following methods:

If the data storage type for the member is Store Data, the formula is executed in outline order during a CALC ALL execution, a CALC DIM on the member's dimension, or by invoking the member calculation directly (for example, "Unit Mix").

If the data storage type for the member is Dynamic Calc or Dynamic Calc and Store, the formula is executed when a user retrieves the member into a spreadsheet.

Overview of Default Outline Calculation Order
When you perform a default calculation (CALC ALL) on a database, Essbase calculates the dimensions in this order:

1) Dimension tagged Accounts if it is dense.

2) Dimension tagged Time if it is dense.


3) Dense dimensions in outline order.

4) Dimension tagged Accounts if it is sparse.

5) Dimension tagged Time if it is sparse.

6) Sparse dimensions in outline order.

7) Secondary calculation pass on members tagged two-pass calculation.

If the Accounts dimension has no member formulas, the default Essbase behavior is to calculate all dense dimensions in the order they appear in the outline, followed by all sparse dimensions in the order they appear in the outline.
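The seven-step default order amounts to a stable sort over the outline. The sketch below (with hypothetical dimension names, not from any particular database) orders dimensions the way a default CALC ALL would:

```python
def default_calc_order(dimensions):
    """Order dimensions per the Essbase default (CALC ALL) sequence:
    dense Accounts, dense Time, other dense (outline order), then
    sparse Accounts, sparse Time, other sparse (outline order).
    Each dimension is a (name, tag, is_dense) tuple in outline order."""
    def rank(dim):
        name, tag, dense = dim
        if dense:
            return 0 if tag == 'accounts' else 1 if tag == 'time' else 2
        return 3 if tag == 'accounts' else 4 if tag == 'time' else 5
    # sorted() is stable, so outline order is preserved within each group
    return [name for name, _, _ in sorted(dimensions, key=rank)]

outline = [                      # hypothetical outline, in outline order
    ('Product',  None,       False),
    ('Time',     'time',     True),
    ('Measures', 'accounts', True),
    ('Market',   None,       False),
    ('Scenario', None,       True),
]
print(default_calc_order(outline))
# ['Measures', 'Time', 'Scenario', 'Product', 'Market']
```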

Using Formulas in an Outline

The issues for using a formula in an outline rather than a formula in a calculation script are complex. Depending on the exact system resources and the needs of users, your calculation strategy can change. However, there are a few basic guidelines you should follow.

Use formulas in outlines when:

The member calculations are straightforward, without complex order-of-calculation dependencies (for example, calculation order within the outline organization is acceptable for dependencies).

The member requires a two-pass calculation operation and is flagged as such, and no back calculation of other members must be performed in a calculation script after the main CALC ALL or CALC DIM statement.

The member requires a two-pass calculation operation and is flagged as such, and all other conditions are met for accomplishing a Two For The Price Of One calculation.

The member's data storage type is Dynamic Calc or Dynamic Calc and Store. Formulas for Dynamic Calc members cannot be executed in a calculation script.

Designing Calculation Scripts

Outside of calculating in the outline, the other option for calculating in Essbase is calculation scripts. Every database needs at least one calculation script to roll up (aggregate) unary operators and execute formulas in the outline. Input of data does not execute any calculation process.


The default calculation script for a new database is a CALC ALL statement.

Basic Calculation Script Sections
The three most basic functions of calculation scripts include:

Section Description

Housekeeping: A range of commands used to copy and clear data or perform other housekeeping functions on specific sectors of the database (for example, DATACOPY Bud Version 1 TO Budget Version 2).

Roll-Up: A calculation script with the CALC ALL or CALC DIM functions on specific dimensions rolls up input data in the outline hierarchy according to the outline's consolidation operators and formulas on members in the outline. The default calculation script, unless reset to an alternative script, is equivalent to the CALC ALL command.

Member Formulas: Data is created or modified for specific members according to formulas associated with the member, either in the outline or in a calculation script.

Driven by Process
Calculation requirements are driven by processes that typically include multiple interim steps. For this reason, most databases are associated with multiple calculation scripts; one calculation script typically does not meet all calculation requirements for a single database.

The following example demonstrates how specific calculation scripts track a typical month-end close process using Essbase:


Script 1: Roll up data for the new month after the first data load from the general ledger.
Script 2: Recalculate the database, focusing on calculations that need updating for new allocation rates and adjusting entries.
Script 3: Calculate variances from budget and forecast, percentages, and analytical metrics.

Calculation Script Interface
Essbase reads a calculation script as text and then performs the calculation operations according to the script instructions.

Note: You can write a calculation script in any word processor and save the file as type CSC (Essbase calculation script file extension). The script executes as long as there are no syntax errors.

The benefits of writing a calculation script through Essbase Calculation Script Editor are as follows:

You can select outline member names from the member list, so that you avoid typing and introducing errors.

You can access the calculation script formula commands with syntax parameter guides.

You can use the Find Members functionality to locate member names you may have forgotten.

The calculation script editor provides robust error checking. All scripts are also validated on the server prior to execution.

Data Storage and Calculation Efficiency Overview

The engine powering Essbase is the data block. Understanding data blocks is fundamental to writing calculation scripts that compute numbers correctly and complete within a reasonable period of time. Here, we look at basic data block construction, including a detailed analysis of how blocks are created and calculated during a data load and CALC DIM rollup.

Dense and Sparse Settings
You need to think about your data in sparse and dense terms. A methodology for thinking this through is to compare dimensions to each other in pairs, asking questions about the density or sparsity of data in the members of one dimension versus the other.

Perspective 1: Think About the Data
Data is typically dense in a dimension when there is a high percentage of members that have data out of all possible combinations after the database is loaded and rolled up.

For example, compare one product at level 0 in the Product dimension to members in the Accounts dimension:

If the product has unit sales, what is the probability that it also has a price? Answer: high.

If the product has units and a price, what is the probability that it also has dollar sales after a rollup calculation? Answer: high.

If the product has sales, what is the probability that it also has a cost of sales, gross margin, and other account measures? Answer: high.

In this example, you can think of data in the Accounts dimension versus the product as being dense. For example, if you have data in one key account, such as units, you probably have data in most, or at least many, of the other accounts.

A similar series of questions about the same product versus the Time dimension might result in the same conclusion. For example, Time is dense because if you sell the product in one month, you are likely to sell in other months, quarters, and for the year total.

Data is typically sparse in a dimension when there is a low percentage of members that have data out of all possible combinations after the database is loaded and rolled up.

For example, compare a level 0 product in the Product dimension to members in the Customer dimension. Assuming there are 3,000 products and 3,000 customers:

What is the probability that any given customer will buy the level 0 product? Answer: low.

Picture a matrix of 3,000 customers by 3,000 products. What is the percentage of intersecting cells where there is a data point (for example, unit sales) out of the total number of cells? Answer: low.
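To put a number on that intuition: assuming, hypothetically, that about 45,000 of the product-customer intersections actually hold data, the populated percentage is tiny:

```python
products, customers = 3_000, 3_000
total_cells = products * customers      # 9,000,000 possible intersections
populated = 45_000                      # hypothetical count of cells with data

print(f"{populated / total_cells * 100:.1f}%")  # 0.5% -- clearly sparse
```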

Perspective 2: What Are the Essbase Settings?
Each dimension in an Essbase database is either dense or sparse. How such settings are made drives the construction of data blocks. The data block is the fundamental unit of storage of data in Essbase. Data block construction, as determined by sparse and dense settings, has a major impact on calculation times, storage requirements, and other variables in Essbase performance.

There is normally a correspondence between your analysis of the data and the Essbase settings. Dense and sparse settings for each dimension are typically set after benchmarking the calculation and storage performance of alternative setting combinations.

Note: By default, Essbase configures your dense and sparse settings automatically, but you can change them manually on the Properties tab of the Outline Editor.

What is the final criterion for dense and sparse settings? Answer: the combination of settings that produces the lowest overall calculation time with acceptable storage configurations.

Data Block Fundamentals

A data block is the basic unit of storage in Essbase. Blocks contain the actual data of your database and are stored in page (.pag file type) files located on one or more server drives. Data blocks contain cells.

A cell is created for every intersection of stored members in the dense dimensions.

Stored members are those defined in the outline with storage types: Store Data, Dynamic Calc and Store, and Never Share.

All other storage types (for example, Dynamic Calc, Label Only, and Shared Member) are not stored members and therefore do not take up space (cells) in a data block.

Each cell, as defined by the stored dense members, takes up 8 bytes of storage space on disk (before compression) and in memory. The size of a data block can be calculated as the product of the stored members in the dense dimensions multiplied by 8 bytes.

A data block is created in its entirety as long as at least one cell has a data value that has been input or calculated. All data blocks are the same size in bytes (before compression), regardless of the number of cells that have data values versus #Missing.

All data blocks move from disk to memory (for calculation, restructure, or reporting) and back to disk in their entirety.

Data blocks in a database can be big or small, depending on which dimensions are defined as dense on the Properties tab in the Outline Editor. The more dimensions defined as dense, the larger the data block. Conversely, changing a dense dimension to sparse reduces the block's size.

The greater the number of stored members in a dense dimension, the larger the data block. Conversely, reducing the number of stored members in a dense dimension reduces the block's size.
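The block-size arithmetic is easy to sketch. Assuming hypothetical counts (40 stored Accounts members and 17 stored Time members as the dense dimensions), the size works out as:

```python
from math import prod

def block_size_bytes(stored_dense_member_counts):
    """Uncompressed block size: product of stored members in the
    dense dimensions, at 8 bytes per cell."""
    return prod(stored_dense_member_counts) * 8

# Hypothetical database: Accounts (40 stored members) and Time
# (17 stored members: 12 months + 4 quarters + year total) tagged dense.
print(block_size_bytes([40, 17]))  # 680 cells * 8 = 5440 bytes per block
```

Adding a third dense dimension multiplies the block size by that dimension's stored-member count, which is why dense tagging decisions dominate block size.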

Data Block Creation

A data block is created for each unique combination of stored members in sparse dimensions when there is at least one data value in a cell within the block.


The information identifying the data block (the block’s name as defined by the combination of sparse members) is stored in index (.ind file type) files on the Server drives.

Data blocks are created under any one of four conditions:

Condition Description

Data Load Data blocks are created when data is loaded to sparse member combinations that did not previously exist. The blocks are created upon input, whether loaded using a source file and load rule or loaded from a spreadsheet using lock and send.

Sparse Rollup Data blocks are created when sparse dimensions are rolled up as specified in the outline using a CALC ALL, CALC DIM, or other calculation function.

Datacopy Data blocks are created when the DATACOPY command is executed in a calculation script.

Member Formula Data blocks may be created under certain circumstances as a result of a member formula (for example, Units = 10).

Storage and Calculation Efficiency
Efficiency is based on two principles:

Data blocks are created only for combinations of sparse members where there is data (data is stored and calculated only for actual combinations).

For combinations of sparse members where there is no data, no data block is created. Therefore, there is no requirement for disk space for storage or calculation time for calculations.

Sample Calculation Process
Put simply, a rollup is an aggregation. The Essbase calculator uses the unary operators to calculate the values for each parent member. In the materials that follow, you trace the development of a CALC DIM calculation script and how its execution builds data blocks for the Sales database.

Example Assumptions
Assumptions for this example are as follows:

Months roll up to quarters, and quarters roll up to Year Tot in the Time dimension. The Year Tot dimension is dense.

Units multiplied by rates equals dollars is the form of calculations in the Accounts dimension. The Accounts dimension is dense.

The Customer, Product, and Scenario dimensions are sparse dimensions. Scenario is not included in the CALC DIM statement because no members in the Scenario dimension require rollup.

The completed calculation script reads:

CALC DIM (Accounts, Time, Customer, Product);

Step 1: After Data Input

Units and rates are input for each month for product and customer combinations. All data values are Current Year.

Upon input, a data block is created for each intersection of the individual products and customers with the Current Year scenario (such as IBM for Light 365A, Current Year; IBM for Light 520A, Current Year; and so on).

Each data block created upon input contains units and rates by month, but dollars are not calculated because no calculation script has been executed at this stage.


Step 2: CALC DIM (Accounts)

Dollars are calculated in the outline from units and rates for each data block.

Quarter and Year Tot data are still #Missing because the Year Tot dimension has not yet been rolled up.

No new data blocks are created. Cells are being filled out in the original data blocks that were created on input.

Step 3: CALC DIM (Year Tot)

Year Tot and Quarters are rolled up from months for all accounts. The Year Tot dimension cells are now filled in.

No new data blocks are created. Cells are being filled out in the original data blocks that were created on input.


Step 4: CALC DIM (Customer)

New upper-level data blocks are now created in the Customer dimension, reflecting aggregations at higher levels in the sparse hierarchy. The additional blocks were created by summing in the Customer dimension:

• All account data for units, rates, and dollars
• All time data for months, quarters, and Year Tot

Step 5: CALC DIM (Product)

New upper-level data blocks are now created in the Product dimension, reflecting aggregations at higher levels in the hierarchy. The additional blocks were created by summing in the Product dimension:

• All account data for units, rates, and dollars
• All time data for months, quarters, and Year Tot

Interpreting Data Block Statistics
Essbase provides substantial statistical information about your database and data blocks, so you need to understand how block statistics are calculated and their meaning.

Blocks Parameter Group
The Blocks parameter group includes a list of block statistics. The following table presents the Blocks parameter measures in their order of appearance on the list.


Measure Description

Number of existing blocks: Total number of data blocks in the database at its current state of calculation. For any given calculation without FIX statements, this number reflects how many blocks are moved from storage to memory for calculation. Fewer existing blocks generally means a lower calculation time because fewer blocks are cycled through memory.

Block size (B): Computed as the product of the stored members in the dense dimensions multiplied by 8 bytes. This is an important statistic because sizes that are too large may adversely affect calculation, storage, and retrieval efficiency.

Potential number of blocks: Computed as the product of the stored members in the sparse dimensions. In most databases, this number is very large and has no real meaning except as to its order of magnitude.

Existing level 0 blocks: Number of data blocks at level 0. Level 0 is not necessarily the same as blocks created upon input if data is loaded at upper levels.

Existing upper-level blocks: Number of data blocks at upper levels. Upper-level blocks include all combinations of upper-level sparse members plus combinations with mixed upper-level and level 0 sparse members. In most databases, the ratio of upper- to lower-level blocks is very high.

Block density (%): From a sampling of existing data blocks, the percentage of cells in blocks that have a value versus the total number of cells in the block. One minus this number is the percentage of cells with #Missing. This is an important statistic and a key measure of storage and calculation efficiency in your database. Dense-sparse configurations should maximize block density.

Percentage of maximum blocks existing: Number of existing blocks divided by the potential number of blocks. This is a measure of the sparsity of your database. It is not uncommon for this number to be very small (less than 1%). It is the nature of multidimensional databases that there are many combinations without data.

Compression ratio: Compression efficiency when blocks are stored on disk, normally from using the bitmap encoding compression technique. Because #Missing cells are typically compressed with this method, it directly relates to block density. Compression is set on the Storage tab (Database > Settings).

Average clustering ratio: Fragmentation level of the data (.pag) files. The maximum value, 1, indicates no fragmentation.
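Block density can be illustrated directly. This Python sketch (with hypothetical sample data; None stands in for #Missing) averages the populated-cell percentage over a sample of blocks, the way the statistic is described above:

```python
def block_density(sampled_blocks):
    """Average percentage of cells with a value (non-#Missing)
    across a sample of data blocks."""
    ratios = [
        sum(v is not None for v in block) / len(block)
        for block in sampled_blocks
    ]
    return sum(ratios) / len(ratios) * 100

# Hypothetical sample: two 8-cell blocks; None represents #Missing
sample = [
    [10.0, None, 12.5, None, None, 7.0, None, None],  # 3/8 populated
    [None, 4.0, None, None, None, None, None, 5.0],   # 2/8 populated
]
print(f"{block_density(sample):.2f}%")  # 31.25% dense, so 68.75% #Missing
```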


Organizing Calculations

In this section, we look at the architecture of calculation scripts, the logical flow of calculation script sections, and how calculation order and sequential construction of data blocks drive correct answers with optimal performance.

The following calculation script has a standard design:

/* Calculation Script Purpose: Normalizes actuals for intercompany adjustments, allocations, push-downs of rates, and other adjustments to set up data for "apples to apples" melding of "in month" actuals with "out month" forecast information. Back-calculates rates. */

SET AGGMISSG OFF;
SET UPDATECALC OFF;

/* Fix on specific actual/forecast scenario and "in months" of actuals. */
FIX ("FY95 DEC FCST", "01", "02", "03")

/* Allocate costs by volume and gross sales. First roll up volume, sales, and other impacted accounts. */
FIX (Volume, "Gross Sales", "Interco Sales", "Fixed R & D", "Fixed Corp X Charge", "Fixed IS X Charge", "Fixed Mktg Admin", "Fixed Mfg Admin")
CALC DIM (Org_Structure);
ENDFIX

/*Sets up intercompany eliminations in Org Structure dimension.*/

FIX ("Interco Sales")
"Interco Elimination LA" = "Interco Gross LA";
"Interco Elimination HC" = "Interco Gross HC";
ENDFIX

FIX ("Interco Raw Material")
"Interco Elimination LA" = "Interco Gross LA"->"Interco Sales";
"Interco Elimination HC" = "Interco Gross HC"->"Interco Sales";
ENDFIX

FIX ("Interco Tax Rebate")
"Interco Gross LA" = ("Interco Gross LA"->"Interco Sales") * 0.0909;
"Interco Gross HC" = ("Interco Gross HC"->"Interco Sales") * 0.0909;
ENDFIX

/* Push down commissions and adjust allocation accounts for later application of allocation percents. */


(
Commissions = @ANCESTVAL (Org_Structure, 6, Commissions) * ("Gross Sales" / @ANCESTVAL (Org_Structure, 6, "Gross Sales"));
"% of Sales" = ("Gross Sales" / @ANCESTVAL (Org_Structure, 6, "Gross Sales"));
"Alloc R & D" = "Direct R & D" - "Fixed R & D";
"Alloc Corp X Charge" = "Corp X Charge" - "Fixed Corp X Charge";
"Alloc IS X Charge" = "IS X Charge" - "Fixed IS X Charge";
"Alloc Mktg Admin" = "Div Mktg Admin" - "Fixed Mktg Admin";
"Alloc Mfg Admin" = "Div Mfg Admin" - "Fixed Mfg Admin";
)

/* Allocate division-level expenses to target level using percent table. */
(
"Div Gen Mgr Admin" = @ANCESTVAL (Org_Structure, 6, "Div Gen Mgr Admin") * "% Div Gen Admin";
"Alloc Mktg Admin" = @ANCESTVAL (Org_Structure, 6, "Alloc Mktg Admin") * "% Alloc Marketing/Advertising";
"Alloc Mfg Admin" = @ANCESTVAL (Org_Structure, 6, "Alloc Mfg Admin") * "% Alloc Mfg Admin";
"Div Sales Admin" = @ANCESTVAL (Org_Structure, 6, "Div Sales Admin") * "% of Sales";
"Div F & A Admin" = @ANCESTVAL (Org_Structure, 6, "Div F & A Admin") * "% Div Gen Admin";
"Div HR Admin" = @ANCESTVAL (Org_Structure, 6, "Div HR Admin") * "% Div Gen Admin";
"Div Other Admin" = @ANCESTVAL (Org_Structure, 6, "Div Other Admin") * "% Div Gen Admin";
"Alloc Corp X Charge" = @ANCESTVAL (Org_Structure, 6, "Alloc Corp X Charge") * "% Alloc Corp X Charge";
"Alloc IS X Charge" = @ANCESTVAL (Org_Structure, 6, "Alloc IS X Charge") * "% Alloc IS X Charge";
"Alloc R & D" = @ANCESTVAL (Org_Structure, 6, "Alloc R & D") * "% Alloc R & D";
)

/* Sequence to roll up all data, which creates most of the upper-level blocks. After the calculation sequences above, all data is now "normalized". */
CALC DIM (Accounts, Year, Org_Structure, Region, DistiCenter);


/* Back-calculate all rates from volume and dollar inputs, specifically for upper levels. */
(
"Price Per SC" = "Gross Sales" / Volume;
"List Price" = "Price Per SC" * "Stat Factor";
"Quantity Discount Perc" = "Quantity Discount" / "Gross Sales";
"Quantity Discount Rate" = "Quantity Discount" / Volume;
"Cash Discount Perc" = "Cash Discount" / ("Gross Sales" - "Quantity Discount");
"Cash Discount Rate" = "Cash Discount" / Volume;
"Unsaleables Perc" = Unsaleables / ("Gross Sales" - "Cash Discount");
"Reduced Revenue Case Rate" = "Reduced Revenue Case Rate Dollars" / Volume;
"Price Variance Per SC" = "Price Variance" / Volume;
"Raw Material Per SC" = "Raw Material At STD" / Volume;
"WAVG Packer Fee Rate" = "Packer Fees" / Volume;
"Delivery Per SC" = Delivery / Volume;
"Commissions Pct" = Commissions / ("Gross Sales" - "Quantity Discount");
)

/* ENDFIX for the original FIX, which defines the scenario and "in month" actuals. */
ENDFIX

Like the sample script above, a typical calculation script is divided into five sections:

Housekeeping
Baseline Fix
Normalization
Main Rollup
Back Calculation


Housekeeping Section
SET housekeeping commands prepare the Essbase calculator to properly process the next sequence of calculations. The following table includes typical commands in the Housekeeping section.

Command Description

SET AGGMISSG ON | OFF: Overrides the default database setting for aggregating missing values.

SET UPDATECALC ON | OFF: Overrides the server .CFG setting for intelligent calculation block marking.

SET CLEARUPDATESTATUS AFTER | ONLY | OFF: Overrides the server .CFG setting for intelligent calculation block marking.

SET CACHE HIGH | DEFAULT | LOW | OFF | ALL: Sets calculator cache levels according to the essbase.cfg file setting.

SET MSG ERROR | WARN | INFO | SUMMARY | DETAIL | NONE: Sets the level of calculation message information in the application log.

Several important housekeeping commands involve manipulation of data sets. The Essbase calculator provides a variety of data manipulation commands:

Command Description

CLEARBLOCK ALL | UPPER | NONINPUT | DYNAMIC: Clears previously input or upper-level data, or stored dynamic calculations.

CLEARDATA mbrName: Clears specific members or member combinations. The data blocks remain.

DATACOPY mbrName1 TO mbrName2: Copies focused or complete data sets from one set of members to another.

Baseline Fix Section
The Baseline Fix section is found near the top of most calculation scripts and defines the script's specific focus (typically defined by subsets of the Time or Scenario dimensions). In the Baseline Fix, you answer the following question: What are you trying to do?

There are a number of things to consider when creating a baseline fix. Focus on the type of data you are working with, as defined by:

Scenario: Actual versus budget versus forecast. The baseline fix often includes a scenario reference because scenarios typically differ in calculation requirements.

Time Frame: Current fiscal year only, future time periods, and so on. The baseline fix often includes a time qualification (especially for accounting applications, where data is calculated only for the current time frame).

Normally, the Accounts dimension and business views are not in the Baseline Fix section, except for organizational units that calculate their subset of a database separately from other organizational elements.

Craft a Fix statement that focuses on the type of data you need:

/* Fix on specific actual/forecast scenario and "in months" of actuals. */
FIX ("FY95 DEC FCST", "01", "02", "03")

/*Commands go here.*/

ENDFIX

Calculation scripts are generally organized to reflect specific steps in a process. The Baseline Fix statement is usually the indicator for script segregation. Separate calculation processes might be defined by different baseline fixes.

Note: It is typical to have multiple calculation scripts associated with a single database. In most circumstances, there is no inherent efficiency in combining scripts. Calculation scripts can be run automatically in the correct sequence using ESSCMD or MaxL.

Normalization Section
The Normalization section of the calculation script focuses on preparing data for the CALC DIM or CALC ALL rollup. It answers the following question: What needs cleaning up here?

Input data may need to be manipulated or normalized before rollup. There are various circumstances that require you to normalize data. For example:

To allocate upper-level inputs to lower levels (charging departments for use of IS facilities)

To make special adjustments to input information (adjusting entries during month-end close)

To construct a consistent complement of unit, rate, and dollar information (pushing down prices consistently across all business views)

To fix other anomalous accounts

You may need to do normalization calculations in two phases before the main rollup:

Focused rollups
Actual normalization calculations

Focused Rollups
The focused rollup typically includes setting bases for use in later allocations or adjustments. For example:

To sum up units used later in building an allocation rate

To sum up dollars of data subsets that are to be allocated in later calculations

Focused rollups are typically wrapped in FIX statements (each of which means a separate pass on data blocks).

Write the FIX statements as tightly as possible to build only the data blocks needed for the calculation dependency.

FIX (Volume, "Gross Sales", "Interco Sales", "Fixed R & D", "Fixed Corp X Charge", "Fixed IS X Charge", "Fixed Mktg Admin", "Fixed Mfg Admin")
CALC DIM (Org_Structure);
ENDFIX

Normalization Calculations
There is a broad range of normalization calculations. The Normalization section of a calculation script typically includes the most lines of code. To minimize passes on data blocks, organize normalization routines by category (for example, by type of calculation).

Group like types of allocations together. Group special adjustments together.

As appropriate, wrap each category of normalization calculation together. Use FIX statements that further focus calculations only on the data blocks needed (for example, use the @LEVMBRS function to focus allocations only on level 0 members).

Main Rollup Section
The Main Rollup section is generally performed with a CALC ALL or CALC DIM statement on all dimensions. The majority of data blocks are built at this time.


You are ready for rollup when normalization calculations are complete:

• Allocations are calculated.
• Adjustments are made.
• Units, rates, and dollars are at consistent levels.

The main rollup should typically calculate your dimensions in default calculation order (that is, dense dimensions first, then sparse):

CALC DIM (Accounts, Year, Org_Structure, Region, DistiCenter);

Back Calculation Section

Upper-level rates, percentages, and certain dependency-type analysis calculations may force you to set up a Back Calculation section in the calculation script. With units, rates, and dollars, there are two combinations of inputs:

• Units and rates
• Units and dollars

Regardless of the input, rates must be back-calculated for upper levels after the main rollup because:

• Rates in the Accounts dimension are aggregated at upper levels.
• Units * rates do not equal dollars after rollup.

The back calculation synchronizes the units * rates = dollars relationships for upper levels.

Calculating percentages or any other ratio is similar to calculating upper level rates:

• Percentages are aggregated across dimensions at upper levels.
• Percentages need to be recalculated after the numerator and denominator values are summed up:

"Price Per SC" = "Gross Sales" / Volume;
"List Price" = "Price Per SC" * "Stat Factor";
"Quantity Discount Perc" = "Quantity Discount" / "Gross Sales";
"Quantity Discount Rate" = "Quantity Discount" / Volume;
"Cash Discount Perc" = "Cash Discount" / ("Gross Sales" - "Quantity Discount");
"Cash Discount Rate" = "Cash Discount" / Volume;
"Unsaleables Perc" = Unsaleables / ("Gross Sales" - "Cash Discount");
"Reduced Revenue Case Rate" = "Reduced Revenue Case Rate Dollars" / Volume;

Controlling the Top-Down Calculator

In contrast to spreadsheet formulas where every calculated intersection needs a formula, the Essbase calculator has a top-down approach to calculation.

Note: Unless otherwise restricted by a FIX or IF command, every member formula in a calculation script (every line of code) is executed everywhere in the database (on each and every data block).

For example, a formula on a member in the Accounts dimension:

"Gross Margin %" = "Gross Margin" / "Net Sales";

is executed for every possible combination of members in the database where the underlying data exists. This top-down calculation capability allows you to accomplish a large amount of computing with few lines of code to write and debug.

Note: Be careful: Most of the time you do not want any given calculation script executing on all parts of your database.

A major element of calculation script drafting is understanding how to focus your calculations.

With the Essbase calculator, there are three principal methods for focusing calculations. You can use each method to accomplish the same focused calculation. Knowing how and when to use each one is a special skill.

1) Using FIX...ENDFIX where calculations within the FIX scope are restricted to the FIX argument parameters. For example:

FIX (Budget)
"Gross Margin %" = "Gross Margin" / "Net Sales";
ENDFIX


2) Using IF...ENDIF where calculations within the IF scope are restricted to the IF argument parameters. For example:

"Gross Margin %"
(IF (@ISMBR (Budget))
"Gross Margin %" = "Gross Margin" / "Net Sales";
ENDIF)

3) Using the cross-dimensional operator, which allows explicit definition of member relationships in a formula. For example:

"Gross Margin %"
("Gross Margin %"->Budget = "Gross Margin"->Budget / "Net Sales"->Budget;)

Focusing with FIX

FIX is one of three principal methods in Essbase for focusing the scope of calculations. FIX can be used only in calculation scripts, not in member formulas in the outline.

FIX may have one or several arguments that restrict the calculation scope:

Arguments may be member names or functions and may follow in any order:

FIX (Actuals, @DESCENDANTS ("Family Total"))

is the same as:

FIX (@DESCENDANTS ("Family Total"), Actuals)

Arguments separated by commas indicate AND logic. More complex focusing can be done with arguments using AND/OR operators:

FIX ((@CHILD(East) AND @UDA(Market, "New Mkt")) OR @UDA(Market, "Big Mkt"))

All calculations in the FIX...ENDFIX statements are executed according to the restrictions in the arguments:

In this example, the CALC DIM statement and the formula for list price are restricted to Actuals for February:

FIX (Actuals, Feb)
CALC DIM (Accounts, Product, Customer);
("List Price" = "Net Sales" / Units;)
ENDFIX


You can nest FIX statements within FIX statements without limitation:

FIX (Actuals, Feb)
FIX ("Net Sales")
CALC DIM (Product, Customer);
ENDFIX
("List Price" = "Net Sales" / Units;)
ENDFIX

You cannot calculate members within a FIX statement when the FIX statement itself excludes them from the calculation. In this example, "Year Tot" cannot be calculated because the FIX restricts the calculation to Jan:

FIX (Actual, "Jan")
CALC DIM (Accounts, "Year Tot", Customer, Product);
ENDFIX

Preprocessing functions are supported within FIX statements. For example, you might want to fix on an upper-level member and its children, but exclude the descendants of one child.

/* FIX on all members that are level 0 in Product, but not a descendant of LIGHTBOLT */
FIX (@REMOVE(@LEVMBRS(Product, 0), @DESCENDANTS(LIGHTBOLT)))

Focusing with IF

IF, ELSE, ELSEIF is the second of three principal methods in Essbase for focusing the scope of calculations. IF statements can be used in both calculation scripts and member formulas in the outline.

IF may have one or several arguments that restrict the calculation scope:

Arguments may be Booleans with member names or Booleans that reference functions. Arguments may follow in any order and include AND/OR operators:

IF (@ISMBR (Actual) AND @ISDESC ("Family Total"))

is the same as:

IF (@ISDESC ("Family Total") AND @ISMBR (Actual))

All calculations in the IF...ENDIF statements are executed according to the restrictions in the arguments. Additional ELSE or ELSEIF statements may be included.

In this example, the formula for list price and unit cost are restricted to Actuals for descendants of Family Total:

"List Price"
(IF (@ISMBR (Actual) AND @ISDESC ("Family Total"))
"List Price" = "Net Sales" / Units;
"Unit Cost" = "Cost of Sales" / Units;
ENDIF)

In this example, the formula adds additional calculations with a conditional ELSEIF statement:

"List Price"
(IF (@ISMBR (Actual) AND @ISDESC ("Family Total"))
"List Price" = "Net Sales" / Units;
"Unit Cost" = "Cost of Sales" / Units;
ELSEIF (@ISMBR (Budget) AND @ISDESC ("Family Total"))
"List Price" = ("Net Sales"->Actual / Units->Actual) * 1.1;
"Unit Cost" = ("Cost of Sales"->Actual / Units->Actual) * 1.1;
ENDIF)

Note: IF statements are incorporated into and follow the syntax rules of calculation member blocks.

Comparing FIX and IF

With some exceptions, you may execute the same commands with FIX...ENDFIX and IF...ENDIF statements. The guidelines for when to use FIX versus IF are principally performance-related. Using IF when you should use FIX causes major degradation in calculation performance.

FIX Statements

FIX is index-driven. Its arguments are evaluated without bringing all data blocks into memory. Only those data blocks required by the FIX statement arguments are touched. For example:

Scenario is a sparse dimension. Therefore, in the following script, only the data blocks marked as Actual are brought into memory for calculation. Blocks for Forecast and Budget are bypassed.

FIX (Actual)
CALC DIM (Accounts, "Year Tot", Product, Customer);
("List Price" = "Net Sales" / Units;)
ENDFIX

The most important conclusions in the context of understanding passes on data blocks are as follows:

Each FIX statement triggers a separate pass on data blocks as specified in its arguments. Every time you see a FIX statement, think about whether your purpose is being served.


FIX is a powerful tool for controlling which blocks are actually touched. Given that you need to make a pass, use FIX to pass only those blocks absolutely essential to your calculation purpose. Essbase calculates everything unless you focus on specific data blocks using FIX.

The best opportunities for reducing the number of blocks passed occur in the Normalization and Back Calculation sections of your calculation script. Here are some guidelines:

In the Normalization section, use FIX precisely for focused rollups. Only subsets of upper-level data blocks across sparse business views need be built for allocation bases or aggregating totals to allocate.

As an alternative to FIX for creating focused rollups for allocation aggregations, use functions for computing subset descendancies. For example:

• @IDESCENDANTS ("Net Sales"); computes only a subset of accounts used in the allocation.

• @IDESCENDANTS ("Family Total"); computes only a subset of products used in the allocation.

With allocations and pushdowns, it is typical that calculations are required only at level 0. FIX using such functions as @GENMBRS or @LEVMBRS.
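As a sketch, a price pushdown restricted to level 0 products might look like the following. The use of Budget and @PARENTVAL here is illustrative, not taken from the guide's own pushdown code:

    /* Push the parent's list price down to level 0 products only */
    FIX (Budget, @LEVMBRS(Product, 0))
    "List Price" = @PARENTVAL(Product, "List Price");
    ENDFIX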

IF Statements

Unlike FIX, which controls the flow of calculations, IF is not index-driven. IF statements are interpreted formulas.

With IF statements, all blocks are brought into memory when the IF logic is applied. With such conditional logic, however, blocks are brought into memory only once, even though multiple conditions may be applied.

Year Tot is a dense dimension and Product is sparse. In the following script, all data blocks are brought into memory only once, even though each month has a different calculation requirement. Because the Year Tot dimension is dense, each month’s List Price is a separate cell in each data block.

"List Price"
(IF (@ISMBR (Jan))
"List Price" = 100;
ELSEIF (@ISMBR (Feb))
"List Price" = 105;
ELSEIF (@ISMBR (Mar))
"List Price" = 110;
ENDIF)

Two important conclusions about IF statements:

• Each IF statement triggers a pass on all data blocks unless otherwise restricted by a previous FIX statement.
• IF is efficient for focusing calculations with conditional logic on members in dense dimensions.

There are important restrictions in the use of IF:

IF statements must be executed within a calculation member block, and you may execute only member formulas. Rollup functions such as CALC ALL, CALC DIM, and AGG cannot be used with IF.

Focusing with the Cross-Dimensional Operator

The cross-dimensional (crossdim) operator (->) is the third method for focusing the scope of calculations.

Where FIX and IF are typically used to focus a series of member formulas (several at one time), the crossdim operator causes the focus to occur within a single member formula. The syntax of the crossdim operator is as follows:

The crossdim operator is a minus sign (-) followed by a greater-than sign (>): ->.

Crossdim connects members of dimensions (for example, Actual->Units->Jan). No spaces are allowed between member names and ->.

The order of members has no bearing. Actual->Units operates the same as Units->Actual.

A crossdim statement may have only one member from each dimension. There may be as many members in the statement as there are dimensions.

With just a few exceptions, the crossdim statement can be used anywhere that a regular member name is used.

For example:

DATACOPY Units->Actual TO Units->Budget;
@ANCESTVAL (Product, 2, Units->Actual);
IF (@ISMBR(Units->Actual))

Used in formulas, the crossdim operator focuses the calculation to specific member combinations. The following examples illustrate how the crossdim operator works in different situations:


Forecast units to equal budget units plus 10%:

FIX (Forecast)
Units = Units->Budget * 1.1;
ENDFIX

The calculation is performed for all forecast products, customers, and time periods. Think through what members are not included in the crossdim operator as well as what members are included.

Forecast units to equal budget units plus 10% for January, Lightbolt 365 A, IBM:

FIX (Forecast)
Units = Units->Budget->Jan->"Lightbolt 365 A"->IBM * 1.1;
ENDFIX

Leveraging Hierarchy Intelligence

Most database products with hierarchical rollups have no intelligence in the aggregation process. They have no knowledge of where the calculator is processing or relationships between members in the hierarchy.

The Essbase calculator knows where it is operating in the calculation cycle. Because of this knowledge, you may cause specific calculations to occur with generation, level, or member name references using special syntax that Essbase provides. The syntax falls into three categories: member set functions, Booleans, and relationship functions.

The Essbase calculator incorporates a broad range of functions that reference the relationships between members within a hierarchy or generation/level references:

Function        Description
@ANCESTORS      Any member higher in a hierarchy
@CHILDREN       Any member directly lower in a hierarchy
@DESCENDANTS    Any member anywhere lower in a hierarchy
@SIBLINGS       Any member with the same parent
@PARENT         The member next up in a hierarchy
@GEN            Any member at a specific generation
@LEV            Any member at a specific level

The following table contains the types of relationship categories that incorporate the relationship language functionality:


Category                 Description                                 Functions

Member Sets              Return a list of members to be acted        @ANCESTORS
                         upon. These functions are most typically    @CHILDREN
                         used with FIX statements.                   @DESCENDANTS
                                                                     @IDESCENDANTS

Booleans                 Ask whether a condition about a             @ISANCEST
                         relationship is true or not. Booleans       @ISCHILD
                         are always used with IF statements.         @ISDESC
                                                                     @ISIDESC
                                                                     @ISGEN

Relationship Functions   Return a value of the relationship          @ANCESTVAL
                         member relative to the member being         @MDANCESTVAL
                         calculated.                                 @SANCESTVAL
                                                                     @PARENTVAL
                                                                     @GEN
                                                                     @CURGEN

Scripting Member Set Functions

Member set commands create a list of members (a set) that is acted upon by another function which incorporates the member set reference in its own syntax.

The member set name describes the hierarchical relationship of the member list. For example:

@DESCENDANTS ("Net Sales")   Returns the descendants of Net Sales
@CHILDREN ("Quarter 1")      Returns Jan, Feb, and Mar
@SIBLINGS (Performance)      Returns Value and Unrecognized Product, Performance's siblings

Use member sets in formulas and FIX commands where a subset of members is to be calculated, or as stand-alone member formulas.

FIX (@CHILDREN ("Cash Discount"))
CALC DIM (Products, Customers);
ENDFIX

-or-

@IDESCENDANTS ("Gross Sales");

Member set commands are also used in the Essbase security system for setting up filters that control a user's access to subsets of the database outline, and in partition area definitions.

Scripting Booleans

Like member set functions, Booleans reference relationships in the hierarchy. A Boolean operates only in the context of an IF, ELSE, or ELSEIF statement, defining the IF condition and returning TRUE or FALSE for calculations on a data block or member cells.

If the condition defined by the Boolean is true, the commands following the IF statement are executed. If the condition is false, they are not executed.

Booleans incorporate a broad range of hierarchy relationships, level and generation references, and other member characteristics such as account type and UDAs.

@ISCHILD (mbrName)
@ISICHILD (mbrName)
@ISMBR (mbrName | rangeList | mbrList)
@ISACCTYPE (FIRST | LAST | AVERAGE | TWOPASS | EXPENSE)
@ISLEV (dimName, level)
@ISUDA (dimName, UDA)
@ISSAMEGEN (mbrName)
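For example, a Boolean can restrict a formula to level 0 members carrying a particular UDA. This sketch reuses the Market dimension and "New Mkt" UDA from the earlier FIX example; the Marketing Adjustment member and the 2% factor are hypothetical:

    "Marketing Adjustment"
    (IF (@ISLEV(Market, 0) AND @ISUDA(Market, "New Mkt"))
    "Marketing Adjustment" = "Gross Sales" * 0.02;
    ENDIF)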

Scripting Relationship Functions

Boolean and member set functions are used in the control of the flow of calculations. Relationship functions, by contrast, reference values of other members in the outline in relation to the member currently being calculated. The referenced value is then used in a member formula on the right side of an equation.

Relationship functions return values of members used in calculating formulas. For example:

@PARENTVAL (Product, Units) returns the number of units for the parent of the member being calculated in the Product dimension. This might be used in a product mix calculation such as:

"% Units" = Units / @PARENTVAL (Product, Units);

@ANCESTVAL (Product, 2, "Net Sales") returns the Net Sales value for the ancestor, at generation 2, of the member being calculated in the Product dimension. This might be used in a calculation for defining an allocation rate based on Net Sales:

AllocRate = "Net Sales" / @ANCESTVAL (Product, 2, "Net Sales");

The values returned are relative to the current member being calculated.


The references can be based on a single dimension with @ANCESTVAL or on multiple dimensions with @MDANCESTVAL. The references can also be based on shared member relationships using @SANCESTVAL:

@SANCESTVAL (Product, 3, Units);

Generating Member Lists

It is often the case that you need to generate a list of members when performing a calculation. For example, if you are performing a special allocation to the level 0 members of a specific product line, it does not make sense to create multiple IF statements to focus on a small group. You can use special commands to generate that list for you. There are two categories of calculation script functions that help you define a list:

• Member set functions
• Range functions

Special Member Set Function Examples

The simplest way to compose a list is by creating a comma-delimited list of member names.

@CURRMBR is a powerful function that can save a lot of explicit coding:

• The syntax is @CURRMBR(dimName).

• This function returns the member of the specified dimension that is currently being calculated.

• Important restrictions are that @CURRMBR cannot be used in a FIX statement or on the left side of an equation.

The @UDA function is often used to group members. By creating UDAs in the outline, references using the UDAs help prevent excessive coding of member names. The syntax is: @UDA(dimName, UDAname)
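For example, assuming markets are tagged with the "Big Mkt" UDA as in the earlier FIX example, an aggregation can be restricted to just those members:

    /* Roll up accounts only for markets tagged "Big Mkt" */
    FIX (@UDA(Market, "Big Mkt"))
    CALC DIM (Accounts);
    ENDFIX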

The @MATCH function is used to generate lists based on string pattern matches.

• The syntax is @MATCH (mbrName|genName|levName, "pattern")
• If you specify a mbrName, then the search is on the member and its descendants. Otherwise, you can specify a generation or level name.
• A question mark (?) substitutes one occurrence of any character. You can use a question mark anywhere in the pattern.
• An asterisk (*) substitutes any number of characters. You can use an asterisk only at the end of the pattern.
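For example, assuming the Lightbolt products from earlier examples share a common naming prefix, a calculation can be focused on just those members:

    /* All members under Product whose names start with "Lightbolt" */
    FIX (@MATCH(Product, "Lightbolt*"))
    CALC DIM (Accounts);
    ENDFIX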


Lists from Multiple Dimensions

Most of the member set commands are limited to generating lists of members from the same dimension, but there are many functions that accept multidimensional lists as arguments; for example, the math category of functions.

Since most member set commands are restricted to a single dimension, consider using the cross-dimensional operator in comma-delimited lists if you need multiple dimensions. For example:

"Average" = @AVG (SKIPMISSING,
    Nov->Units->"Prior Year", Dec->Units->"Prior Year",
    Jan->Units->"Current Year");

Here the argument for the @AVG function is an expList, which we generate with a list of cross-dimensional members.

You can also use @RANGE to generate lists that cross a member or member combination from one or more dimensions with a range list from another dimension.
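For example, @RANGE can cross Units with the months of Quarter 1 to build an expList for a math function (the Q1 Avg Units member is hypothetical):

    /* Average Units over Jan, Feb, and Mar */
    "Q1 Avg Units" = @AVG(SKIPMISSING, @RANGE(Units, @CHILDREN("Quarter 1")));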

Range Functions

Range functions generate lists of members similar to the member set commands and then typically perform some operation on the generated list. You can sum, average, determine min or max values in a list, reference members by position (back or forward), and shift values across the list. When working with range functions, you should consider these parameters:

• The default ranges in most of the range functions are the level 0 members of the dimension tagged as Time.

• The ranges specified must all be from the same dimension. This means that you cannot use the cross-dimensional operator or the @RANGE function in the range list.
• Most of the range functions accept as range lists a valid member name, a comma-delimited list of member names, member set functions, and range functions.

• It is common to use the @CURRMBRRANGE function as an argument to the range category of functions.

@CURRMBRRANGE

The @CURRMBRRANGE function is often used to generate the range list for the range functions.

The syntax is as follows: @CURRMBRRANGE(dimName,{GEN|LEV},genLevNum, [startOffset], [endOffset])


It can also be used as an argument for expLists in the Math and Statistical categories of functions.
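A common use is a rolling average, where the range is built relative to the month currently being calculated. This sketch assumes the dimension tagged as Time is named Year; the Rolling Avg Units member is hypothetical:

    /* Three-month rolling average: the current month and the two prior months */
    "Rolling Avg Units" = @AVGRANGE(SKIPMISSING, Units,
        @CURRMBRRANGE(Year, LEV, 0, -2, 0));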

@AVGRANGE

The @AVGRANGE function is used to average values over the specified range. The syntax is:

@AVGRANGE(SKIPNONE | SKIPMISSING | SKIPZERO | SKIPBOTH, mbrName [, rangeList])
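For example, to average Net Sales over the months of Quarter 1 while skipping #MISSING values (the Q1 Avg Sales member is hypothetical):

    "Q1 Avg Sales" = @AVGRANGE(SKIPMISSING, "Net Sales", @CHILDREN("Quarter 1"));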

@SUMRANGE

The @SUMRANGE function is used to sum values over the specified range.

The syntax is similar to AVGRANGE without the need to specify how to treat missing or zero data:

@SUMRANGE(mbrName [, rangeList])
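For example, a year-to-date total through March might be calculated as follows (the YTD Sales member is hypothetical; Jan:Mar is a range list):

    "YTD Sales" = @SUMRANGE("Net Sales", Jan:Mar);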

@PRIOR and @NEXT

The @PRIOR and @NEXT functions are used to reference prior or future values, usually an account value with time as the range list.

The syntax for these functions is as follows:

@PRIOR (mbrName [, n, rangeList])

@NEXT (mbrName [, n, rangeList])
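A classic use is carrying balances across months, where each month's opening value is the prior month's closing value. The inventory members are hypothetical; with no rangeList specified, the range defaults to the level 0 members of the dimension tagged as Time:

    "Opening Inventory" = @PRIOR("Closing Inventory");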

@PRIORS and @NEXTS

The @PRIORS and @NEXTS functions return the nth previous or future cell value in the sequence rangeList from the mbrName, with the ability to skip #MISSING, zero, or both #MISSING and zero values.

The syntax for these functions is as follows:

@PRIORS (SKIPNONE | SKIPMISSING | SKIPZERO | SKIPBOTH, mbrName [, n, rangeList])

@NEXTS (SKIPNONE | SKIPMISSING | SKIPZERO | SKIPBOTH, mbrName [, n, rangeList])

Optimizing the Outline with Dynamic Calculations

The dynamic calculation storage options provide more flexibility about how and under what circumstances to perform calculations and store data. The benefits of dynamic calculation are reduced batch calculation times and reduced hard drive storage requirements for very large databases.

Dynamic Calculation Storage Options

Dynamic calculation options allow members in the outline to be calculated on retrieval when requested by users, rather than during the batch calculation process. There are two types of dynamic calculation settings:

Setting                  Description

Dynamic Calc             Values are calculated on retrieval and discarded;
                         they are not retained in the database.

Dynamic Calc and Store   Values are calculated on retrieval and stored in
                         the database after calculation. Any subsequent
                         retrieval of these cells reflects the originally
                         calculated values.

Dynamic Calc Members

The baseline characteristics of members with a Dynamic Calc setting are as follows:

To be tagged Dynamic Calc, a member must be at an upper level in a dimension’s hierarchy or have an outline formula associated with it.

The value for a Dynamic Calc member is calculated according to the consolidation operators of its children or its own outline formula.

During a batch rollup process (executing a CALC ALL or CALC DIM command from a calculation script), Dynamic Calc members are not calculated. Dynamic Calc members cannot appear in a calculation script on the left-hand side of a member formula.

During a batch rollup process, Dynamic Calc members are calculated only as needed to derive the values of stored members that depend on them. Since Dynamic Calc members are not stored, members tagged as such in a dense dimension do not occupy cells within the data block.

Since Dynamic Calc members are not stored, members tagged as such in a sparse dimension do not cause the creation of a data block.

Dynamic Calc members are skipped during data load. No error message is generated during data load if there is an attempt to load to a Dynamic Calc member.

Dynamic Calc and Store Members

The baseline characteristics of members with a Dynamic Calc and Store setting are as follows:

The value for the member is calculated according to the consolidation operator or member formula in the outline for the member. The value is calculated only when requested by a user from a spreadsheet retrieval. After retrieval, the value is stored within the data block. During a batch calculation process (such as executing a CALC ALL or CALC DIM command from a calculation script), Dynamic Calc and Store members are bypassed.

Dynamic Calc and Store members in a dense dimension occupy cells within the data block whether or not the member has been retrieved. They do not reduce block size and only marginally reduce batch calculation time. For these reasons, tagging upper-level members in dense dimensions as Dynamic Calc and Store is not recommended.

Since Dynamic Calc and Store members are stored, members tagged as such in a sparse dimension cause the creation of a data block when a retrieval is requested. The calculation penalty on retrieval time, however, is paid only upon the first retrieval. Subsequent retrievals are as fast as for a regular stored member.

During a batch calculation, if the intelligent calculator discovers that the children of a Dynamic Calc and Store member are changed or recalculated, it marks the data block of the parent Dynamic Calc and Store member as requiring calculation, which occurs upon the next retrieval request. At that time, the block is marked calculated and the results are stored.

Two commands can be used in batch calculation scripts to cleanse previously calculated Dynamic Calc and Store members:

• CLEARBLOCK DYNAMIC removes data blocks that are Dynamic Calc and Store.
• CLEARDATA marks Dynamic Calc and Store members as noncalculated, thus forcing recalculation upon the next retrieval.
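For example, a batch script might cleanse previously stored dynamic values before the main rollup. The FIX on Actual is illustrative; CLEARBLOCK DYNAMIC honors an enclosing FIX scope:

    /* Remove stored Dynamic Calc and Store blocks for Actual, then roll up */
    FIX (Actual)
    CLEARBLOCK DYNAMIC;
    ENDFIX
    CALC ALL;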

Dynamic Calc and Store members are skipped during data load (you cannot load data to a Dynamic Calc and Store member). No error message is generated during data load.

Note: Intermediary blocks that may be required to calculate a Dynamic Calc and Store member (even though tagged themselves as Dynamic Calc and Store) are calculated but not stored.

Design Considerations

Dynamic Calc and Store members provide benefits similar to Dynamic Calc members: setting sparse members to this storage type potentially reduces batch calculation time.

In most cases, consider using Dynamic Calc before Dynamic Calc and Store.

Consider Dynamic Calc and Store for members in sparse dimensions with complex formulas or calculations. Formulas that include index functions (@ANCESTVAL, @PARENTVAL), range operators (@AVGRANGE), and certain financial functions (@IRR) are good candidates, because these operations on sparse dimensions are fundamentally inefficient. Shifting the calculation load from the batch operation, where all member combinations would be calculated, to retrieval-based calculation would probably reduce the overall calculation load.

Do not use Dynamic Calc and Store for upper-level members of dense dimensions. There are no benefits related to reducing block size because cells for the members are reserved within the block, whether or not they are calculated.

Overview of Dynamic Calculation Order

The most important factor to understand in working with dynamic calculation is the order of calculation of dynamic members. The normal order of calculation in a batch process is as follows:

1. The dimension tagged Accounts (if dense).

2. The dimension tagged Time (if dense).

3. Dense dimensions in outline or CALC DIM statement order.

4. The dimension tagged Accounts (if sparse).

5. The dimension tagged Time (if sparse).

6. Sparse dimensions in outline order or CALC DIM statement order.

7. Second calculation pass on members tagged for two-pass calculation.

Note: If the Accounts dimension has no member formulas, the default Essbase behavior is to calculate all dense dimensions in the order they appear in the outline, followed by all sparse dimensions in the order they appear in the outline.

The principal implications of this calculation order for batch processes are as follows:

Account members are calculated first only if Accounts is dense. If it is sparse, you may run into percentage calculation problems.

Passes on data blocks are minimized because dense dimensions are calculated before sparse dimensions.

Two-pass members are always calculated correctly because their calculation occurs after all dimensions are rolled up.

Upon retrieval, the calculation order for Dynamic Calc members (stored and non-stored) is as follows:

1) Dimension tagged Accounts (if sparse)

2) Dimension tagged Time (if sparse)

3) Sparse dimensions in outline order

4) Dimension tagged Accounts (if dense)

5) Dimension tagged Time (if dense)

6) Dense dimensions in outline order

7) Members tagged Dynamic and tagged with two-pass calculation

The principal implication of this calculation order for dynamic calculation and other members is that you may receive different results, because calculation order differs between batch and dynamic calculation.

When you change the storage attributes, make sure to test if calculations are still correct.

The calculation order is sparse rather than dense in dynamic calculations because blocks must be virtually created before they are filled up.

Dynamic calculation in combination with the two-pass calculation tag gives you extra control to make sure calculations work correctly.

Use two-pass calculation when you want a member to be calculated last. Typically this is a percentage calculation, because otherwise it is summed across other dimensions, leading to incorrect results.

Do not confuse the two-pass calculation tag in combination with Dynamic with a stand-alone two-pass calculation. Two-pass calculation on its own is used in batch calculations only.

Dense Dimension Guidelines

The performance trade-offs when assigning dynamic calculation to upper-level and formula members in dense dimensions tend to be favorable under specific conditions for specific reasons.

When a user retrieves information to create a report, an entire data block with the relevant information is brought into memory. Once in memory, the calculation of dynamic members is relatively efficient because:

The values of stored members whose cells are used to calculate the dynamic member are usually all within a single block that is brought into memory for the dynamic calculation.

No additional read and write time on additional data blocks is necessary for each incremental dynamic member that needs calculating, because all dynamic members are usually associated with the same data block and are dependent on the same stored members.

Assigning members Dynamic Calc within a dense dimension reduces data block size. Smaller block size potentially improves performance because:

Within a range, smaller blocks move into and out of memory faster than bigger blocks.

You can define an additional dimension as dense that would otherwise be sparse, thus potentially reducing the overall number of blocks to be moved in and out of memory for a given batch calculation.

Note: Do not assign Dynamic Calc status to any dense member whose children must be multiplied or divided with a rate. The results are incorrect because of the dynamic calculation order of sparse first, dense second.

Sparse Dimension Guidelines

The following guidelines describe when and how to use dynamic calculations when the focus is on members in sparse dimensions. Basic batch calculation performance is improved by assigning Dynamic Calc to sparse members for the following reasons:

• Batch rollup calculations create all combinations of sparse member data blocks where data exists, whether or not users ever retrieve such blocks. The ratio of upper-level to lower-level blocks in most databases is very high.
• Assigning Dynamic Calc to upper-level members in sparse dimensions eliminates the creation of many potential data blocks, thus reducing the initial rollup calculation time and any subsequent passes on data blocks.

Note: Tagging sparse members as dynamic leads to more calculation order issues. Because dynamic calculation order is sparse then dense, you may have difficulty obtaining the correct values for percentages on sparse dimensions. Although the two-pass calculation tag can help resolve many issues, there are greater levels of complexity to address.

The dynamic calculation penalty on retrieval across sparse dimensions is affected by three principles:


Fan Out: Calculation time on dynamic calculation members is affected by the fan out of the member's children. The fewer the children, the fewer blocks that must be brought into memory to perform the dynamic calculation. Making members with many children dynamic may result in unacceptable retrieval times.

Stackup within Dimensions: A dynamic member with many descendants that are also dynamic may result in a stackup of sequential dynamic calculations that could significantly increase the retrieval and calculation time.

Sandwich: Avoid sandwich situations where members of different storage types sit between each other in the hierarchy. Stored members sandwiched between dynamic members may result in incorrect calculations in certain circumstances. Dynamic members sandwiched between stored members may also cause data integrity issues. Sandwich situations where stored members are dependent on dynamic calculation members can seriously impact batch calculation performance.


5. Designing and Optimizing Advanced Calculations

Chapter Objectives

Upon completion of this chapter, you will be able to:

• Create and test calculation scripts
• Correct calculation behavior
• Manipulate data with calculation scripts
• Normalize data
• Allocate data
• Optimize calculation performance

Creating and Testing Calculation Scripts

When drafting calculation scripts, you should avoid developing and testing them on real-world databases. Because of the complexities of calculating in a multidimensional environment, you should develop and test scripts incrementally. That means writing line by line with frequent test cycles.

Developing and testing calculation scripts on full blown, real-world databases has two fundamental problems:

Cycle times (the turnaround for testing a segment of code) are substantially increased when testing on full databases. Long calculation times frustrate incremental development and testing, which is necessary in the Essbase environment where calculation dependencies are complex and multidimensional impacts are not immediately obvious.

Real-world data is often more difficult to audit than test data. During initial development of scripts, you should focus on the technical accuracy of calculation formulas rather than trying out prescribed control totals. It is typically easier to trace calculations and dependencies with contrived test data than real-world data.

The calculation script process has two phases:

Prototype Phase: Scripts are developed and tested for baseline accuracy.
Pilot Phase: Scripts are tested for performance and capture of exception conditions.


Note: After the outline is completed, you should separate the calculation script development process from the data load and rollup testing process.

Developing a Prototype

The recommended process for creating a prototype script consists of the following tasks:

1) Create test input data.

2) Create audit sheets.

3) Implement a draft and test cycle.

The objective of the prototype phase is to create a calculation script that correctly calculates dependencies and values for baseline calculations. You want the prototype phase to be fast and efficient, with low test cycle times, and easy to audit for complex calculations.

Creating Test Input Data

To check your results, you should create test data that is simple, easy to load, and easy to audit. It is not efficient to create prototype calculation scripts using a full or even partial set of actual data.

Creating Audit Sheets

When drafting and testing calculation scripts, you must be able to audit results easily. To accomplish this, set up in your testing workbook one or more audit sheets separate from input sheets.

Implementing a Draft and Test Cycle

After setting up input and audit worksheets, you draft and test your prototype calculation script by using a cyclical test procedure.

Testing in a Pilot Environment

When developing the prototype script, you confirmed that calculations and dependencies were working correctly on test data. During the pilot phase, you test the prototype script against real-world data and make any other necessary changes.


The initial step is to do a complete load of input data and then execute the prototype calculation script. Be sure to check database statistics before and after the calculation. You can write a log file with the statistics using ESSCMD or MaxL.
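As one hedged sketch of capturing those statistics, a MaxL session can spool the output of a statistics query to a file. The application and database name (sample.basic) and the log file name are assumptions for illustration, not from this guide:

```
/* Hypothetical MaxL sketch: spool database statistics to a log file
   before (and again after) running the calculation.
   sample.basic and the file name are illustrative assumptions. */
spool on to 'stats_before_calc.log';
query database sample.basic get dbstats data_block;
spool off;
```

Run the same query after the calculation to compare block counts and density.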

You typically make two types of adjustments to the prototype script during pilot testing:

Performance
Exception trapping

Performance

During prototype testing, the database is typically too small for you to accurately assess performance impacts. During the pilot phase, you often modify the prototype script to address performance issues. For example:

Focus calculations on specific data blocks using FIX statements.
Revise sparse and dense settings to reflect final calculation requirements.
Revise dynamic calculation (versus batch calculation) approaches to reflect final calculation requirements.
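Focusing a calculation with FIX might be sketched as follows. The member and dimension names (Budget, Jan:Mar, Accounts, Product, Market) are assumptions in the style of a typical sample outline, not this guide's example database:

```
/* Hypothetical sketch: restrict the batch rollup to one scenario
   and one quarter, then roll up the remaining dimensions. */
FIX ("Budget", Jan:Mar)
   CALC DIM (Accounts, Product, Market);
ENDFIX
```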

Exception Trapping

The prototype calculation script typically addresses known mainstream or baseline calculation requirements (how to calculate specific allocations or metrics). Testing this prototype script against real-world data typically results in errors that represent exception conditions that require further refinement of the calculation script. For example:

Adjust allocation algorithms because actual data may come in at different levels than was assumed during the prototype calculation script phase.

Add IF and ELSEIF logic to handle such exception conditions as zero values in calculated values.

Add DATACOPY commands to create data blocks.
Turn off intelligent calculation if calculations do not seem to be doing anything. The data may be marked as clean; if intelligent calculation is turned on, your calculation is ignored.

When possible, automate testing with MaxL scripts or VBA in the Spreadsheet Add-in to avoid testing errors.

Mimic your production environment. Follow the sequence you would ordinarily follow in production. For example, upload from the G/L, make outline modifications, load into Essbase, and then run the calculation.
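Two of the refinements above, the zero-value guard and disabling intelligent calculation, might be sketched as follows. Member names such as Actual, Price, Revenue, and Units are illustrative assumptions:

```
/* Hypothetical sketch of two pilot-phase refinements. */
SET UPDATECALC OFF;   /* ignore clean/dirty block status so the calculation always runs */

FIX ("Actual")
   "Price" (
      IF ("Units" == 0 OR "Units" == #Missing)
         "Price" = #Missing;            /* guard against divide-by-zero */
      ELSE
         "Price" = "Revenue" / "Units";
      ENDIF
   )
ENDFIX
```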


Correcting Calculation Behaviors

In this topic, you learn how the Essbase calculator works and you learn to recognize aberrations that may affect calculated results.

Data Blocks

To truly understand calculation performance, you must first understand how your calculation script interacts with the data blocks in your database. Seemingly small changes to your scripts or to the database structure can profoundly affect calculation performance:

You change the Scenario dimension from dense to sparse and your calculation time improves by over 30%.

You add one back-calculation formula to your calculation script, and your calculation time doubles.

In the first example, the fundamental structure of all your data blocks changed; they all became much smaller, improving performance.

In the second example, the revised calculation script required the calculator to process many more blocks than it did originally, which adversely affected performance.

When developing calculation scripts, you can use the following methods to address calculation performance:

Block visualization
Pass tracking
Block minimizing

Block Visualization

To understand what is occurring in a calculation, use a series of dependent formulas.

Visualize one data block moving from disk to memory, then analyze how the calculations would be applied to that single data block.

Visualize the balance of blocks, qualified by whatever FIX statements you may be using, moving from disk to memory. The same series of calculations that were performed on the first block are performed on the subsequent blocks.

Pass Tracking

You can use pass tracking after completing a calculation script or at key points during the script development.

Stop and count the number of passes you are making on data blocks.


If necessary, mark the passes on the script. The principle is simple: Two passes on a given set of data blocks take twice as long as one pass.

Ask the purpose of each pass and think through the performance implications: Is the pass necessary? Can it be combined with another member block pass? Can the pass be efficiently eliminated by converting to a dynamic calculation storage type?

Block Minimizing

If you have identified the specific passes on your data blocks, you can use block minimizing to analyze on what subset of blocks you are passing. Ask yourself three questions:

Are calculations being performed on a larger subset than necessary?
Can a FIX statement be used to better focus on the relevant data blocks?
Did a previous CALC DIM command create more blocks in a calculation dependency chain than were necessary?

Aggregating Missing Values

Aggregate missing values is a special calculation setting in Essbase that speeds up calculations by eliminating redundant aggregations in the rollup process. The default behavior of aggregate missing values is set at the database level.

Managing the Settings

With aggregate missing values selected or set on in a calculation script, calculation performance is significantly enhanced during the rollup process:

In a data block, aggregations on cells that can be performed in two ways are summed only once.


Between data blocks (on sparse dimensions) totals that can be calculated by two pathways aggregating from other data block combinations are calculated only once.

The calculator also has an algorithm that attempts to compute through the shortest path (using the fewest number of blocks to compute the new total).

Upper-Level Step-Ons

The normal setting is for aggregate missing values to be selected at the database level. You then use SET AGGMISSG OFF in specific calculation scripts as conditions require.

You can set aggregate missing values off when you need to protect data loaded at upper levels.

Essbase rolls up #Missing values from children over upper-level inputs or previously calculated values during a CALC DIM process. To prevent such upper-level step-ons, set aggregate missing values off in a calculation script, or load to leaf node members.
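In a calculation script, that protection might be sketched as follows (the dimension names Product and Market are illustrative assumptions):

```
/* Hypothetical sketch: preserve data loaded at upper levels
   by turning off aggregate missing values before the rollup. */
SET AGGMISSG OFF;
CALC DIM (Product, Market);
```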

Loading to Leaf Nodes

Unlike most other hierarchical products, with Essbase you can load and calculate data at upper levels in a hierarchy. However, there are additional issues of expected versus correct calculation behavior.

If you need to load to an upper level across a business view dimension and do not want your input stepped on (you choose not to allocate the value down to level 0), then use the following standard Essbase practices:

Set up a leaf node member or members called No X (where X is the dimension name). For example, No Product, No Customer, or No Region.

The requirement is that the member be a leaf node. That can be accomplished with a generation 2 member. The leaf node need not be buried at a high generation number with the mass of lower level input members.

Load to the leaf node No X member, which is now out of harm’s way during the rollup process.

Analyzing Expected Versus Correct Behavior

Step through the calculations in the following figures to understand why they show incorrect results.

document.doc Confidential Page 96 of 124© Adaequare, Inc


An apparently valid upper-level data load followed by a simple CALC DIM rollup returns an obviously incorrect answer.

Run a calculation script: CALC DIM (Accounts, Time).


The completed calculation script (CALC DIM (Product); Price = Revenue / Units) results in the wrong answer.

The numbers are incorrect because Essbase calculates absolutely and literally through each dimension:

In the natural order of calculation, accounts are rolled up first regardless of the level of input. Sparse dimensions are then rolled up for each account.
Lower-level rollups of accounts across the business view dimensions step on the upper-level inputs:
• Unary operators for Gross Margin work only in the Accounts dimension.
• You cannot aggregate accounts at multiple levels across a business view dimension.

The example situation occurs frequently when you are working with multilevel inputs.

Calculating Accounts First

It may appear that many expected versus correct Essbase calculation issues result from a practice of calculating the Accounts dimension first, followed by other dimensions. However, if you do not calculate Accounts first, the situation becomes more difficult.

The Accounts dimension typically includes rates and percentages as well as units and dollars. For example:

Unit prices
Costs
Input percentage factors
• Discount rate
• Allocation rate
Many calculated metrics
• Sales per employee
• Cost per transaction
• Gross margin %
• Profit %
Similar analytics used especially in financial analysis

Note: If the Accounts dimension includes multiplying and dividing with rates or percentages (and most do), then you must calculate Accounts first to obtain the correct numbers. It is a straightforward issue of calculation order.

Examine the following example:

Setup

In this simple model, Units * Rates = Dollars. Units and Rates are inputs for Jan, Feb, and Mar. Months sum to Qtr 1.

Accounts First, Time Second

Calculating Accounts first, month values for dollars are correctly computed.

Calculating Time second, quarter values for dollars and units are correct. Rate is incorrect, but this value is corrected in the back calculation.

Time First, Accounts Second

Calculating Time first, units are correctly summed. Rates are incorrectly summed.

Calculating Accounts second, month values for dollars are correct, but the quarter value for dollars is incorrect. The back calculation does not correct the summing of rates either.
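The Accounts-first order in this example, including the back calculation that corrects the quarter-level rate, might be sketched as follows. Member names (Units, Rate, Dollars) follow the setup above; the assumption that Dollars carries the outline formula Units * Rate is illustrative:

```
/* Hypothetical sketch of the Accounts-first calculation order. */
CALC DIM (Accounts);           /* Dollars = Units * Rate computed for each month */
CALC DIM (Time);               /* Jan, Feb, and Mar roll up to Qtr 1 */
"Rate" = "Dollars" / "Units";  /* back calculation corrects the summed Qtr 1 rate */
```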


Note: The need to calculate the Accounts dimension first may be problematic if Accounts is set to sparse. Calculating a sparse dimension before a dense dimension causes an additional pass on data blocks and therefore a higher calculation time. Also, if Accounts is set as sparse, then by default the CALC DIM or CALC ALL commands calculate dense dimensions before Accounts.

Calculating Dense First

In a CALC ALL command, Essbase naturally calculates the dense dimensions first, and then the sparse dimensions. In a CALC DIM command, Essbase also calculates the dense dimensions first, regardless of the order in which you state the dimensions.

The simple reason for this is that calculation order delivers the shortest path in terms of the number of passes (reads and writes to and from disk) on data blocks.

Consider the following cases.

In case 1, dense is calculated first and sparse second (the normal order):

Assume 1,000 blocks are created upon data load. (Total reads and writes = 1,000)
With dense calculated first, the 1,000 input blocks are read, then written back, to perform the dense calculations. (Total reads and writes = 2,000)
For the calculations on the sparse dimensions, the same 1,000 blocks are read again, with 10,000 blocks (all upper-level blocks created across the sparse dimensions) written back. (Total reads and writes = 11,000)
The result: Dense calculations are performed only on the input blocks. The upper-level sparse blocks are then built from the "filled out" input blocks. This is the most economical way to calculate the database. (Grand total reads and writes = 14,000)

In case 2, sparse is calculated first and dense second:

The same 1,000 blocks are created upon data load. (Total reads and writes = 1,000)
With sparse calculated first, the 1,000 input blocks are read again and 10,000 blocks (all the upper-level blocks created across the sparse dimension) are written back. (Total reads and writes = 11,000)
For the calculations on the dense dimensions, all 10,000 blocks are read and then written back to disk. (Total reads and writes = 20,000)
The result: When sparse is calculated first, upper-level blocks are built prematurely. All have to be revisited to fill out the dense calculations, thus forcing passes on many additional blocks. This process requires more than twice the time of case 1, where dense is calculated first. (Grand total reads and writes = 33,000)

Although calculating dense first reduces the number of passes on the data block, consider the following issues:

Calculating dense dimensions first conflicts with calculating a sparse Accounts dimension first. Therefore, in most cases the Accounts dimension should be set as dense.

The higher the ratio of upper-level to level 0 blocks, the greater the penalty of calculating sparse before dense.

Back Calculations

The back calculation is typically the least economical process in the calculation scripts in terms of calculation time:

1) The back calculation occurs after all data blocks are built during the CALC DIM process. That takes time because you revisit many blocks.

2) Essbase takes more time to visit existing blocks than it does to create new blocks. The CALC DIM command typically builds new blocks, whereas the back calculation retouches blocks created during the CALC DIM.

Some categories of back calculations can be accomplished during the original pass on the blocks when they are being created. Here are the conditions:

Select two-pass calculation in the Database Properties dialog box (Actions > Edit properties). Make sure that the calculation script includes a CALC ALL or CALC DIM command incorporating all dimensions.

Make the calculation script the default calculation for the database. (Actions > Set > Set default calculation).

The formulas for members that require back calculation must be in the outline (not a calculation script) and marked as two-pass calculation in the Member Properties dialog box. This does not work for back-calculating upper-level rates on members that are also input accounts, because of the order of the required CALC DIM calculation in the outline.

Configure the Accounts and Time dimensions as described below:

Two-for-One Works:
• Accounts dense, no Time tag
• Accounts dense, Time dense
• Accounts dense, Time sparse, and no other dense dimensions

Two-for-One Does Not Work:
• Accounts sparse, no Time tag
• Accounts dense, Time sparse, and there are other dense dimensions
• Accounts sparse, Time sparse
• Accounts sparse, Time dense

Note: Although you can sometimes accomplish back calculations during the original pass on the data blocks, exercise caution when using this technique. It is very easy to produce erroneous data as the rollup process overwrites correctly calculated rates and percentages. Using a back calculation is safer and, in many cases, preferable.

Manipulating Data Sets

The calculator provides a variety of useful data manipulation commands:

Command: CLEARBLOCK ALL | UPPER | NONINPUT | DYNAMIC
Description: Clears previous input or upper-level data or stored dynamic calculations.

Command: CLEARDATA mbrName
Description: Clears specific members or member combinations. The data blocks remain.

Command: DATACOPY mbrName1 TO mbrName2
Description: Copies focused or complete data sets from one set of members to another.

CLEARBLOCK or CLEARDATA

To clear space in a database, you can use the CLEARBLOCK or CLEARDATA command.

Both commands reclaim space:

• CLEARDATA is more specific. You can clear members or member combinations by focusing the calculation using a FIX or by using the cross-dimensional operator. If the members are all sparse, then blocks are deleted. If any member in the member combination is dense, then only data cells are cleared.
• CLEARBLOCK deletes blocks of data: ALL, NONINPUT, and UPPER blocks. A CLEARBLOCK UPPER command does not reverse a CALC ALL calculation because upper levels of dense dimensions in level 0 blocks are not affected.
• CLEARBLOCK and CLEARDATA cannot be used in an IF statement, but they can be used in a FIX statement. CLEARBLOCK or CLEARDATA commands inside FIX statements behave as follows:
   • On dense members only: clear data cells.
   • On sparse members only: delete the sparse blocks defined in the FIX.
   • On a combination of sparse and dense members: clear data cells.
• Any CLEARDATA command can be written as a CLEARBLOCK by using a CLEARBLOCK ALL command inside a FIX statement.
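The equivalence in the last point might be sketched as follows (the Budget member is an illustrative assumption):

```
/* Hypothetical sketch: clearing Budget data two equivalent ways. */
CLEARDATA "Budget";      /* deletes Budget blocks if Budget is sparse;
                            clears only cells if Budget is dense */

FIX ("Budget")
   CLEARBLOCK ALL;       /* the same clear expressed as CLEARBLOCK inside a FIX */
ENDFIX
```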

DATACOPY

The DATACOPY command creates blocks or fills in data cells within existing blocks.

If the FROM or TO member combinations are all dense members, then blocks are not created. Data cells within blocks are filled.

If any of the members in the member combination of the FROM or TO portions of the DATACOPY include sparse members, blocks are created if they do not already exist.

DATACOPY is often used to create non-input blocks as destinations for allocations.

Allocations usually involve pushing data from upper levels to lower levels. If these lower-level blocks do not exist, it is common to use the DATACOPY command to create them before the allocation.

It is common to use the Scenario dimension in DATACOPY since level 0 blocks usually exist for actual data.

DATACOPY is often used to copy forecasts or budgets throughout the budget cycle:

During the budgeting process, you can freeze a budget and begin modifying a new budget to create a history of the budgeting cycle.

In a rolling forecast, it is common to copy actuals data and continuously revise the forecast.

DATACOPY can be used with the cross-dimensional operator:

The TO member combination cannot be more specific than the FROM member combination. For example:


DATACOPY "Current Year" TO "Prior Year"->Jan

would result in an error at runtime.

If the FROM member combination is more specific than the TO member combination, then the TO portion is filled in with the missing members at runtime. For example:

DATACOPY “Current Year”->Jan TO “Prior Year” would result in data being copied from Current Year for January to Prior Year for January. The January member for Prior Year is automatically assumed.

DATACOPY commands can be used inside FIX statements. They cannot be used inside IF statements.

FIX statements around the DATACOPY command effectively add the restricted members to both the FROM and TO portions of the command.

This is a convenient way to both restrict the command and avoid having to write multiple DATACOPY commands. Multiple DATACOPY commands can be less efficient. For example, say you want to copy data from the prior year to the current year only for January, February, and March. You can write this as three separate DATACOPY commands or as one DATACOPY command inside a FIX.
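The single-FIX alternative described above might be sketched as follows (Prior Year and Current Year are illustrative Scenario members):

```
/* Hypothetical sketch: one DATACOPY inside a FIX replaces three
   month-by-month DATACOPY commands for Jan, Feb, and Mar. */
FIX (Jan, Feb, Mar)
   DATACOPY "Prior Year" TO "Current Year";
ENDFIX
```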

Normalizing Data

Normalization calculations include push-downs, allocations, and intercompany eliminations. The bulk of calculation script code is often dedicated to normalization calculations, where the objective is to prepare specific members, usually in the Accounts dimension, for the main rollup.

Partitioning Calculations by Scenario

Scenarios are a big driver of modeling and calculation requirements in the Accounts dimension. In financial applications, data from scenario to scenario typically differ with respect to the form of input and calculations. For example:

For budget and forecast data, inputs are typically units and rates (for example, units shipped), whereas selling prices and sales dollars are forward-calculated.

For actual data, the calculations and analysis are typically the reverse. Inputs are dollars from the general ledger system and units from the order processing or other systems, and rates are back-calculated by the formula rate = dollars/units.


Data from scenario to scenario typically differ with respect to the level of input and calculations. For example:

Budget data often has:
• Much product detail (standard cost by SKU)
• Less customer detail (only top ten customers budgeted)
• Much overhead detail (salaries by person)

Actual data often has:
• Much product cost detail (standard cost by SKU)
• Much revenue detail by customer (ship-to and ship-from invoices)
• Less overhead detail (all salaries are lumped)

Because of the form and level of inputs and calculations between scenarios, different scenarios require different data load procedures (different load rules) and different calculation scripts.

Developing a Normalization Table

The most important Essbase structures are often driven and distinguished by scenario members. Such structures include:

Partitioned database architectures User access and security Complexity and detail of the Accounts dimension Type and level of data inputs Calculation script functionality

To understand the impact of scenarios and to document input and calculation requirements, you should develop a normalization table. A normalization table includes the following elements:

Row Axis: Accounts are listed on the row axis. Especially list accounts that are input members.
Column Axis: Scenarios are major sections on the column axis. Typical scenarios are actual, budget, and forecast.

For each account and scenario intersection, you need to define the following items on a nested column axis:

Data Type or Sourcing: Is the member created from a direct input, a formula calculation, or a CALC DIM rollup?

Input Level: For input accounts, at what generation or level is data being loaded?

Push to Level: For input accounts, to what generation or level does the input data need to be copied or allocated?

Methodology: For input accounts that must be pushed or allocated to another level, what methodology is to be used?


For more complex hierarchies and upper-level load and calculation situations, the normalization table may require the following items:

Version Control: Where the table is revised in subsequent rounds as new information is gained from data loading and testing of calculation scripts.

Multiple Table Formats: Where tables are created for each different business view dimension. This may be necessary when data is being input and calculated upon at different levels across multiple dimensions.

Multiple Staff Involvements: Where different scenarios are the domain of different staff groups. For example, when Accounting is best qualified to develop the normalization table for the Actuals scenario and Financial Planning handles the Budget scenario portion of the table.

Optimizing Calculation Performance

By implementing the following best-practice design choices, you can optimize calculation performance:

Fewer blocks
Smaller blocks
Denser blocks
Fewer calculation passes
Dynamic calculation

Designing for Fewer Blocks

Load, dense restructure, and retrieve times are lower across the board if fewer data blocks are being passed back and forth from disk to memory. Batch calculation times for rollups and two-pass calculations are also substantially reduced if fewer blocks are being calculated.

Design strategies that potentially reduce the number of data blocks reduce calculation times and improve performance of other Essbase processes. The following strategies reduce the number of blocks:

Fewer sparse dimensions
Fewer sparse members
Fewer dimensions overall

Fewer Sparse Dimensions Within a Database

If you define fewer dimensions as sparse (setting them as dense), you can reduce the number of blocks.

Without dynamic calculations, this strategy has limits because:
• Data blocks are larger; within a range, larger blocks move more slowly in and out of memory.
• Such blocks are potentially less dense, thus causing more calculation time on #Missing cells.

With dynamic calculations (defining upper-level dense members as dynamic), the strategy of defining fewer dense dimensions has more flexibility.

• You can dramatically reduce block size by defining dense members as dynamic, thus making room in the block size for defining just one more dimension as sparse.

Fewer Sparse Members Within a Database

When you include fewer members within a sparse dimension (particularly at upper-level summary points where most data blocks are created), you reduce the number of blocks.

Without dynamic calculations, this strategy has clear limits because reporting requirements for multiple levels and alternate rollups are compromised.

With dynamic calculations the strategy has some merit within limits. Fewer blocks are created, but retrieve performance may be compromised beyond acceptable limits if too many subsidiary blocks must be moved into memory for calculating the dynamic member.

Fewer Dimensions Overall

When you include fewer dimensions within a database, you have fewer sparse dimensions, and therefore fewer data blocks need to be built during the batch calculation.

Dimensions proliferate when too many user needs or too much functionality are incorporated into a single database. The best indicator is interdimensional irrelevance, where the outline includes dimensions that result in awkward combinations of members.

Without partitioning, reducing the number of dimensions by spinning them off into an additional unlinked database is a difficult decision:
• End users must connect to multiple databases for reporting.
• If the data needs to be transferred or updated from one cube to another (such as for a corporate consolidation), the mechanics are cumbersome and prone to error.

With partitioning, reducing the number of dimensions by spinning them off into an additional database is an easier decision because the transferred data and outline updates are more easily managed.

• If end-user transparency of links is important, you can use transparent partitions.

Designing with Smaller Blocks

The input and output traverse time of data blocks to and from memory affects calculation, restructure, and retrieval times. Larger blocks move slower; smaller blocks move faster.

Design strategies that potentially reduce block size result in overall performance improvement. There are two fundamental approaches:

Fewer dense dimensions
Fewer stored members in dense dimensions

Fewer Dense Dimensions Within a Database

When you define more dimensions as sparse (not setting them as dense), you reduce the block size.

This strategy has limitations. Data blocks that are smaller move faster in and out of memory. However:
• Too small a block size may result in an unacceptable proliferation of blocks.
• The smaller-size efficiency is consumed by the magnitude of block counts.

When you define more dimensions as sparse (not setting them as dense), you reduce the block size and also potentially increase block density.

This strategy has limitations. Ultimately, setting only one dimension as dense (the densest of all the dimensions) results in the highest block density of any combination.

In most cases, however, maximizing block density results in a vast, unacceptable proliferation of data blocks (except in very sparse databases).

Block density is important, but ultimately block size and block proliferation considerations overshadow sparse and dense settings that attempt to maximize this factor.

Setting dense members as Dynamic Calc not only reduces block size but also increases block density. The density impact and performance results of dynamic calculation settings vary between databases, depending on the form and level of inputs and the structure of the dense dimensions.

Fewer Stored Members Within a Dense Dimension

When you have fewer stored members within a dimension, you reduce the number of cells and therefore data block size.

Without dynamic calculations, eliminating members in dense dimensions reduces block size. However, eliminating members also reduces reporting flexibility. This approach is usually not possible for the Accounts dimension in financial applications, where reporting and analysis requirements demand the full account detail.

With dynamic calculation (non-store), defining upper-level members as dynamic can substantially reduce block size and thereby improve performance across the board.

Using dynamic time series for accumulation calculations such as year-to-date, quarter-to-date, and month-to-date reduces the number of otherwise stored cells in the Time dimension, thereby reducing block size.

Designing for Denser Blocks

Block density is the ratio of cells that contain data (cells that are not #Missing) to the total number of stored cells in a block. Denser blocks are better because the Essbase calculator calculates all cells within a data block regardless of need. The higher the block density, the less time is wasted calculating #Missing cells.

Designing for Fewer Calculation Passes

You cannot avoid a pass over the data blocks during the main rollup of your database (the CALC DIM or CALC ALL command). You may or may not be able to avoid an additional pass over the data blocks related to two-pass calculations and back calculation of average upper-level rates.

Use the following strategies to avoid additional passes on data blocks:

• Do not include two-pass or upper-level rate calculations
• Set two-pass members to dynamic calculation

Do Not Include Two-Pass or Upper-Level Rate Calculations

This approach, of course, may severely compromise reporting requirements.

Percentages and similar types of two-pass calculations can be calculated as required using spreadsheet formulas.


Upper-level rates with incorrect aggregated values would not be shown on reports.

Set Two-Pass Members to Dynamic Calculation

If Two For The Price Of One conditions cannot be met, marking formula members as both two-pass and dynamic calculation potentially avoids a second pass on the data blocks.

Use this strategy for two-pass percentage calculations and for mix calculations or allocations that use @ANCESTVAL or @PARENTVAL constructions.

For upper-level rates, set up shadow Average Rate members that are marked as two-pass and dynamic calculation.

To benefit from this strategy, all such members must be handled as two-pass/dynamic to avoid the second pass on the data blocks. For example, calculating even one of them in a back-calculation section of the calculation script (while the others are handled as two-pass/dynamic in the outline) still forces a second complete pass on the data blocks.
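As an illustrative sketch (the member names are hypothetical, not from this course database), a ratio member tagged both Two-Pass and Dynamic Calc might carry an outline formula such as the following, so that the ratio is recalculated at retrieval time from already-consolidated values rather than aggregated from its children:

/* Hypothetical outline formula on "Margin%", tagged Two-Pass and Dynamic Calc */
"Margin%" = "Margin" % "Sales";

Because the member is dynamic, no batch pass is required; because it is two-pass, the percentage is computed after Margin and Sales have been consolidated.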

Avoiding Complex Formulas on Stored Sparse Members

Certain formula constructions on sparse members may severely increase calculation time. Such inefficient constructions are member formulas on sparse members that include:

• Cross-dimensional operators (Performance->OEM->Units)
• Certain range operators, such as @AVGRANGE
• Financial functions, such as @IRR
• Index functions, such as @ANCESTVAL or @PARENTVAL

When used in a formula on a sparse dimension member, these functions cause Essbase to look through all possible sparse combinations, not just the blocks that actually exist. This is extremely time-consuming, especially if there are many sparse dimensions and members.

Fortunately, there is a single, very helpful strategy for dealing with this situation: Mark members as Dynamic Calc and Store.

Mark As Dynamic Calc and Store

Place formulas on members in sparse dimensions into the outline and mark such members Dynamic Calc and Store.

The calculation hit for the inefficient formulas occurs at the time of retrieval. For subsequent retrievals of the same data block, however, the calculation has been completed and stored. Therefore, retrieval is instantaneous. The inefficient calculation hit is avoided during the batch process for those member combinations that users never access.
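As a hedged sketch (member and dimension names are hypothetical), a sparse member whose formula uses an index function could be tagged Dynamic Calc and Store so that the expensive lookup runs only on first retrieval:

/* Hypothetical formula on a sparse member, tagged Dynamic Calc and Store */
"Market Share" = "Units" / @PARENTVAL("Geography", "Units");

On the first retrieval, the @PARENTVAL lookup is performed and the result is stored; subsequent retrievals of the same block return instantly.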


6. Aggregate Storage Databases

Appendix Objectives

This appendix contains an introduction to aggregate storage databases and multidimensional expressions (MDX). Upon completion of this appendix, you will be able to:

• Define aggregate storage databases
• Describe the aggregate storage kernel
• Describe MDX member formulas
• Describe Hyperion Visual Explorer
• Define MDX
• Identify parts of an MDX query
• Identify dimensions and members in MDX
• Select multiple members in MDX
• Describe the MDX query data model

Aggregate Storage Overview

Available in Essbase 7X, aggregate storage is an alternative to block storage. Aggregate storage databases efficiently support very sparse data sets of high dimensionality while allowing fast aggregations and query response time.

Aggregate storage is not a replacement for block storage. It is an alternative solution for the analytic needs of an organization. You can leverage the aggregate storage and block storage database types to create a powerful and flexible analysis platform.


Accessing a Wider Application Set

Whereas some models are ideally suited for block storage, other models present advanced design challenges. These applications require more complex implementation methodologies to support sparse data sets and fast aggregations on a single platform. Aggregate storage supports those requirements.

Ultimately, a complete Essbase implementation may contain a combination of both block storage and aggregate storage databases. Each database will leverage the appropriate storage type to fit the business requirement.


Combining Block and Aggregate Storage

When exploring the potential of aggregate storage, keep in mind that it is intended to provide a different storage option to suit a business requirement. The ultimate goal in any implementation is to derive and distribute analytic data as efficiently as possible.

Block storage databases are suited for databases that require:

• Custom procedural calculation scripts or allocations in member formulas
• Large-scale, direct write-back capability

Aggregate storage databases are suited for databases that require:

• Large dimensionality, dimension combinations, or members
• Small batch windows (calculation and load)

By implementing aggregate storage in the combined solution, you achieve significant analytic benefits:

Aggregate storage and block storage on a single platform make Essbase a functionally rich solution. By connecting both application types through a transparent partition, you can leverage the functionality of both models.

You gain deeper business insight through the increased detail available—more dimensions and members per dimension. Faster load and aggregation times provide near real-time access to data.

You can leverage each storage type individually as the business situation dictates and leverage the functionality associated with each storage type.


Delivering End-to-End Support

By implementing aggregate storage in the combined solution, you achieve significant personnel and IT benefits.

Personnel Benefits

A single solution across departments and divisions enables an organization to leverage the training and knowledge existing in IT and end-user communities.

• Hyperion and third-party tools perform identically in both formats.
• From an end-user perspective, the data source is seamless. There is no indicator, visual or otherwise, that exposes the storage type.
• With the exception of calculation scripts, fundamental database objects still exist in one form or another.
• New terminology is directed at the developer community and does not affect the end-user community.

IT Benefits

Hardware costs are lower due to fast calculation times and small disk footprints.

Fast update times reduce downtime for application maintenance, resulting in higher application availability.

Easier application tuning allows efficient data storage optimization to balance the application’s need for fast query response, extensive dimensionality, sophisticated analytics, and low data latency.


Aggregate Storage Kernel Overview

Aggregate storage databases and block storage databases do not share the same architecture. The database types differ in both concept and design. Aggregate storage databases are read-only entities that accept data at level 0 intersections.

Whereas block storage databases use dense and sparse dimensions, aggregate storage databases use algorithms to intelligently select aggregates based on populated data sets. The base assumption is that all database cells are equally likely to be queried. At retrieval, all queries are dynamic and leverage the nearest stored view for optimal performance. The architecture that supports aggregate storage is intended to support rapid aggregation, high dimensionality, and sparse data sets.

For physical storage, aggregate storage databases use tablespaces, whereas block storage databases use index files and page files.

Creating Member Formulas

When working with aggregate storage databases, you must write all member formulas using the MaxL data manipulation language (DML). MaxL DML is a version of the multidimensional expression language (MDX). Hyperion has added a series of Essbase-specific commands to the language specification and embedded the language in the MaxL shell.

As with block storage, you enter member formulas using the Formula Editor in Essbase Administration Services. The difficulty of conversion from the Essbase calculator language to MDX varies, depending on the complexity of the formula.


However, many commands and statements from calculation scripts have a counterpart in MDX.
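For example (a hedged sketch; the member names are hypothetical and not from this course database), a variance that a calculation script would express with @VAR could be written as an MDX member formula on a Variance member as a simple numeric expression:

/* MDX member formula for a hypothetical Variance member */
[Measures].[Actual] - [Measures].[Budget]

In aggregate storage outlines, the formula body is an MDX numeric expression attached to the member, rather than a procedural calculation script statement.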

Analyzing Data with Hyperion Visual Explorer

With the expansion of dimensions and members in an outline, it becomes increasingly difficult to create standard spreadsheet-based reports. More dimensions on a report equates to more complexity. Hyperion Visual Explorer presents data intersections in both graphical and numerical formats, which gives you the opportunity to derive patterns from highly complex data. Numeric outliers, for example, stand out clearly, and you can then bring the data directly into a spreadsheet report for distribution throughout your enterprise.

Note: Hyperion Visual Explorer is embedded in Hyperion Essbase Spreadsheet Add-in and activated with a separate Essbase license key.


MDX Overview

Multidimensional expressions (MDX) is a standard query language specification for online analytical processing (OLAP) data sources. Hyperion supports MDX through C API, Java API, MaxL interface, and XMLA.

The Hyperion Essbase version of MDX includes an ever-growing list of functions developed specifically for Essbase, such as the IsUDA function. For that reason, MDX for Essbase is also called MaxL data manipulation language (DML), a subset of MaxL. However, MaxL DML and standard MaxL (MaxL DDL) have little in common syntactically. MDX is used in Essbase for advanced data analysis on both block and aggregate storage databases or for outline member formulas in aggregate storage outlines.

In many ways, MDX is comparable to the native Essbase report script language. However, whereas MDX is capable of performing the same selecting and calculating functions (and many other functions) as a report script, the Essbase report script language also includes a set of report formatting options to control how results are represented. On the other hand, the focus of an MDX query is solely on analytical data retrieval, with the underlying API handling the resulting data structures.


Identifying the Parts of a Query

The SELECT, FROM, and WHERE clauses in an MDX query indicate different structural parts of the query. If you know SQL, those clauses may look familiar, but their meaning is different in MDX. Every query uses the SELECT . . . FROM . . . (WHERE . . .) structure.

SELECT Clause

The result of an MDX query on an OLAP cube is itself another cube, or hypercube. You can put any dimension (or combination of dimensions) on any axis of that result. You specify axes in the SELECT clause of an MDX query to state how your source cube's dimensions are laid out in your result grids. The sample query lays out two measures "on columns" and three time periods "on rows." The Hyperion implementation of MDX supports up to 64 different axes in a result grid.

Axis Framework

You signify that you are putting members on columns, rows, or other axes of a query result by using the wording "on columns," "on rows," "on pages," and so on. You can put the axis designations in any order; the sample query would display the same results if it were phrased as follows:

SELECT
  {Jan, Feb, Mar} ON ROWS,
  {[Net Sales], [Cost of Sales]} ON COLUMNS
FROM [Hyptek].[Hyptek]
WHERE ([North America], FY03)

You can also use numbers from 0 to 63 to specify the axis in the query. The first five axis numbers have corresponding aliases to conceptually match a typical printed report:

Columns     Axis(0)
Rows        Axis(1)
Pages       Axis(2)
Chapters    Axis(3)
Sections    Axis(4)

For axes beyond Axis(4), you must use numbers because they do not have names.
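For instance, the earlier sample query could be written with numbered axes instead of the aliases (a sketch using the same hypothetical Hyptek database):

SELECT
  {[Net Sales], [Cost of Sales]} ON AXIS(0),
  {Jan, Feb, Mar} ON AXIS(1)
FROM [Hyptek].[Hyptek]
WHERE ([North America], FY03)

AXIS(0) is equivalent to ON COLUMNS and AXIS(1) to ON ROWS.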

FROM Clause

The FROM clause names the database from which the data is being queried. The Hyperion XMLA provider supports only one database in the FROM clause. There are two ways to denote the name of a cube in the FROM clause:

Database, as in Hyptek
Application.Database, as in Hyptek.Hyptek

WHERE Clause

The WHERE clause defines a slicer, which is MDX terminology for a point of view (POV) for all other aspects of the query. If you do not specify a member for a given dimension in the SELECT clause or the WHERE clause, then MDX assumes a default of the top member of the missing dimension. Use of the WHERE clause is optional.

Identifying Dimensions and Members

You can identify dimensions by their tip members (the top member of the dimension), as in [Geography] and [Measures], or by using a function that returns a dimension. The member.Dimension and Dimension(member) constructions are synonymous, so both [North America].Dimension and Dimension([North America]) return [Geography].

You can identify members either by name alone, as in [North America] or [FY04], or with an ancestor name, separated by a dot, as in [Geography].[North America] or [Fiscal Year].[FY04]. It is good practice to use the dimension's tip member name in this construction, as it provides a clear statement about the dimension of the member.

Selecting Multiple Members

In many cases, your queries must return more information than the simple queries used in earlier examples. Instead of explicitly enumerating the many members required for a large query, you can use a function that selects multiple members at the same time. The relationship-based functions Children and Descendants are two of the most common selection functions.

Note: Some functions in MDX use a function-style notation (parentheses) and others use a dot notation without parentheses. Many functions allow you to use both types of notations synonymously, as with the Children function.

Children

The Children function is based on parent-child relationships in the outline hierarchy. It is a convenient way to obtain a range of members based on a common parent. The following query selects the children of 1st Half in the columns of a report and the children of Performance in the rows of a report. Note the use of both the dot notation and the function-style notation.


SELECT
  {[1st Half].Children} ON COLUMNS,
  {Children([Performance])} ON ROWS
FROM [Hyptek].[Hyptek]
WHERE ([FY04])

Descendants

The Descendants function, like the Children function, is based on outline relationships, but it is used to request members farther away than immediate children. Descendants is a more complex function than Children. You can use it to refer to all descendants of a particular member, to limit the selection to a specific generation or level with the optional layer argument, or to limit it to a certain number of steps down with the optional index argument.
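A sketch of a Descendants query against the same hypothetical Hyptek database, using the optional layer argument to limit the selection to level 0 of the Product dimension:

SELECT
  {[1st Half].Children} ON COLUMNS,
  {Descendants([Performance], [Product].Levels(0))} ON ROWS
FROM [Hyptek].[Hyptek]
WHERE ([FY04])

Without the layer argument, Descendants([Performance]) would return all descendants of Performance at every level.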

Defining the MDX Data Model: Tuples and Sets

Even though it starts with dimensions and members, the MDX data model is different from the cube data model. Understanding the MDX data model is key to using MDX well and understanding its syntax and operations.


The following terms are key for defining the differences between the MDX data model and the cube data model:

Term    Description
Tuple   Multidimensional abstraction for a member, as well as an arbitrary slice from a cube on one or more members. (In this case, it performs a similar function to the cross-dimensional operator in the Essbase calculation language.)
Set     Ordered sequence of tuples. It may be empty or it may contain duplicate tuples.

Defining Tuples

A tuple is a combination of members from one or more dimensions. When a tuple has more than one dimension, it has only one member from each dimension. A tuple can represent a slice of a cube, where the cube is sliced by the intersection of each member in the tuple. Essentially, each cell in the Essbase cube is defined by a tuple with one member from each dimension in the cube. Any single member on its own is also considered a tuple. Syntactically, any member name is a valid tuple:

[Fiscal Year].[FY03]

You can also wrap a single member name in parentheses to make a valid tuple:

([Product].[PERFORMANCE])

When you combine members from more than one dimension to form a tuple, you must enclose them in parentheses. For example, to create a tuple representing the slice of the cube where FY03 and PERFORMANCE intersect, you can use one of the following syntaxes:


([Fiscal Year].[FY03], [Product].[Performance])
([Product].[Performance], [Fiscal Year].[FY03])

Although both examples are syntactically correct, they are not exactly the same tuple in MDX because the order of the dimensions is important.

The term dimensionality expresses the number of dimensions in the tuple and identifies the dimensions. Valid tuples do not have zero dimensions or null member references. Keeping these rules in mind, you can build tuples directly into queries. In fact, all previous query examples contain single-dimension tuples in their SELECT clauses. The following example uses a multidimensional tuple in a query:

SELECT
  {([Fiscal Year].[FY04], [Scenario].[Actuals]),
   ([Fiscal Year].[FY03], [Scenario].[Actuals])} ON COLUMNS,
  {[Product].[Performance].Children} ON ROWS
FROM [Hyptek].[Hyptek]
WHERE ([Measures].[Units])

Defining Sets

A set in MDX is a sequence of tuples. A set can be empty, it can have just one tuple, or it can have more than one tuple. Duplicate tuples are allowed; when a set has more than one tuple, it can have duplicate tuples anywhere. There are no placeholders for empty tuples.

Every tuple in a set must have the same dimensionality; that is, every tuple must have the same set of dimensions, and the dimensions must be listed in the same order.

The simplest way to create a set is to use braces ({ }) to wrap one or more comma-separated tuples. For example:

Example                                              Description
{[Period].[Jan]}                                     Set of one tuple (one member)
{[Period].[Jan], [Period].[Feb], [Period].[Mar]}     Set of three tuples (one member each)
{([Period].[Jan], [Geography].[North America])}      Set of one tuple with two dimensions
{([Period].[Jan], [Geography].[North America]),
 ([Period].[Jan], [Geography].[South America])}      Set of two tuples with two dimensions each

Invalid Sets

The following sets are invalid because the tuples in the set do not have the same dimensionality.

Example                                              Description
{([Period].[Jan]),
 ([Period].[Feb], [Scenario].[Actuals])}             The tuples contain a different number of dimensions.
{([Period].[Jan], [Scenario].[Actuals]),
 ([Scenario].[Actuals], [Period].[Feb])}             The tuples have the same number of dimensions, but they are listed in a different order from tuple to tuple.
