
IBM TRIRIGA Application Platform
Version 3 Release 6.1

Application Building for the IBM TRIRIGA Application Platform: Performance Framework

IBM


Note

Before using this information and the product it supports, read the information in "Notices."

This edition applies to version 3, release 6, modification 1 of IBM® TRIRIGA® Application Platform and to all subsequent releases and modifications until otherwise indicated in new editions.

© Copyright International Business Machines Corporation 2011, 2019.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.


Contents

Chapter 1. Performance framework

Chapter 2. Data structures
   Architecture overview
   Fact tables
      Example fact table and associated dimensions
   Metrics structure
   ETL integration
      ETL integration architecture
      ETL integration process
      Prerequisite setup for ETL integration
      Defining and maintaining ETL transforms
      Running ETL transforms
      Customizing transform objects

Chapter 3. Metrics
   Metrics reports
   Key metrics
   Form metrics
      Data filtering
      Sub reports

Chapter 4. Hierarchy flattener
   Flat hierarchies
      Examples of flat hierarchies
   Hierarchy structure manager
      Accessing hierarchy structures
      Creating a data hierarchy
      Creating a form hierarchy

Chapter 5. Fact tables
   List of fact tables and metrics supported
   Facts that require special staging tables and ETLs
   Dependent ETLs

Notices
   Trademarks
   Terms and conditions for product documentation
   IBM Online Privacy Statement


Chapter 1. Performance framework

IBM TRIRIGA Workplace Performance Management and IBM TRIRIGA Real Estate Environmental Sustainability provide viable solutions to help corporations strategically plan, manage, evaluate, and improve processes that are related to facilities and real estate.

The IBM TRIRIGA performance framework is managed within TRIRIGA Workplace Performance Management and TRIRIGA Real Estate Environmental Sustainability, which include the following components:

• Data transform and fact table load services
• A metric builder that uses the Data Modeler
• A metric query engine
• Enhanced Report Manager for building metric reports
• Advanced portal features to render metric scorecards
• A series of prebuilt metrics, reports, and alerts that significantly improve the productivity of the many roles that are supported within TRIRIGA


Chapter 2. Data structures

TRIRIGA uses an extract, transform, and load (ETL) development environment as the mechanism for moving data from business object tables to fact tables. To present the metrics, reports, scorecards, and other performance measures, the data must be in the form of fact tables and flat hierarchy tables that the reporting tools can process.

Architecture overview

The source data for TRIRIGA Workplace Performance Management comes from the TRIRIGA application database, financial summary data that is imported from an external financial system, and building meter data that is imported from external building management systems.

Using ETL technology, the source data is loaded into fact tables. The fact tables and dimension tables are in the same database repository as the TRIRIGA applications. The fact tables store the numerical data, referred to as facts, that is used to calculate the TRIRIGA Workplace Performance Management metric values. Each row in a fact table references one or more related business objects, classifications, or lists that group and filter the facts. These references are called dimensions.

The metric query engine runs queries on the fact and dimension tables. Metric queries quickly recalculate metric values as the user moves up and down a hierarchical dimension.

The following diagram shows the distinct layers that make up this architecture and the flow of data between these layers:


Fact tables

Fact tables store the data that is used to calculate the metrics in metric reports. Fact tables are populated only through ETL transforms. To identify a business object as a fact table, from the Data Modeler set the Externally Managed flag in the fact table business object definition.

Each fact table is implemented in the IBM TRIRIGA Application Platform as a special business object that has some or all of the following elements:


Table 1. Fact tables

Hierarchical dimensions
   Each hierarchical dimension is a locator field to a business object that belongs to a hierarchical module (for example, a Building, Service Cost Code, or City). For each hierarchical dimension, a corresponding hierarchy structure supports metric reports. A hierarchical dimension can reference any or all business objects within a module. Be as specific as possible; targeting a specific business object improves the granularity of your reporting. Each hierarchical dimension must have a corresponding hierarchy structure defined. Hierarchy structures are used for drill paths in metric reports.

Non-hierarchical dimensions
   Each non-hierarchical dimension is either a list field or a locator field to a business object that belongs to a non-hierarchical module (for example, a Task or Person).

Numeric fact fields
   Numeric fact fields are standard numeric fields, including or excluding Unit of Measure (UOM) properties. Numeric fact fields can be characterized as one of the following types:
   • Additive – Can be summed across all dimensions.
   • Semi-additive – Can be summed only across some dimensions. For example, the total number of people for a building captured monthly cannot be summed quarterly, since doing so would not yield a total for the quarter, whereas it can be summed by geography. Therefore, this fact is non-additive over time.
   • Non-additive – Cannot be summed across any dimension. For example, a ratio is a non-additive fact, since you cannot sum a ratio. Also, fields that contain values from different grains are non-additive.

UOM fields
   Unit of measure (UOM) fields (except for Area fields) are captured in their local, entered UOM.

Area fields
   Area fields are captured in both Imperial (for example, square feet) and metric (for example, square meters) values.

Currency fields
   Currency fields are captured by using the base currency. No currency conversion occurs.

Current time period
   The time period dimension is a special dimension that is used to identify the date/time period for which a single fact record is applicable. This is most likely the time period when the data was captured. For cases when the time period dimension is not used as a drill path or filter, the triCapturePeriodTX field must be populated to indicate the dimension that is used to indicate the capture period. If this field exists, the corresponding business object for that dimension should contain a field that is named triCurrentBL, which is used to flag those dimension records that reflect the current period. These records are then used to filter the result set for the metric report.

Fiscal period
   The fiscal period classification is used by the ETL process to define the capture period for fact records. This is the primary time period dimension in metric reports. Because it is possible to have different fact tables that contain data that is based on different capture frequencies, with a single record for each level within the hierarchy, each level can be flagged as the current period. For example, if a year/quarter/month hierarchy is created in the fiscal period classification, it is possible to identify the current year, current quarter, and current month. A special ETL job type provides workflow to keep this data synchronized. As a general rule, all data in a single fact table should be captured at the same time period grain/level (year, quarter, month). If the time period grain/level is changed after data has been captured for a particular fact table, all data in that fact table must either be revised to the correct grain/level or truncated/removed.

Fact table business object
   To identify a business object as one that will have fact tables supporting it, select the Externally Managed radio button in the business object properties when creating the business object.

Tip: Do not delete or change any of the fact business objects, fact tables, or ETL scripts that are delivered with the standard TRIRIGA software. Instead, to change an existing one, copy it, rename the copy, and tailor the copy to your needs.

Example fact table and associated dimensions

The fact and dimension tables are built by using the star schema method of data warehouse design. They are stored in the same database repository as the TRIRIGA applications.

The following diagram shows an example of one of the preconfigured fact tables in TRIRIGA Workplace Performance Management:


The diagram shows the space fact with five facts, including space capacity, total headcount, remaining capacity, space area, and allocated area. The space fact also references seven dimensions, including space, building, location, geography, space class, building class, and building tenure. The dimensions in the fact table link the facts to the corresponding dimension tables. Some dimensions are hierarchical, such as location and geography, and others are not, such as space and building.

Flat hierarchy tables are used to identify the children of a selected business object. Flat hierarchy tables enable the metric query engine to browse hierarchical modules, business objects, and classifications.

The following table shows an example of a flat hierarchy that is based on geography:

Table 2. Geography Flat Hierarchy Example

SPEC_ID         Level Number   Level 1 SPEC_ID   Level 2 SPEC_ID   Level 3 SPEC_ID   Level 4 SPEC_ID
World           1              World             N/A               N/A               N/A
North America   2              World             North America     N/A               N/A
EMEA            2              World             EMEA              N/A               N/A
APAC            2              World             APAC              N/A               N/A
United States   3              World             North America     United States     N/A
Canada          3              World             North America     Canada            N/A
Nevada          4              World             North America     United States     Nevada
Texas           4              World             North America     United States     Texas


Using the example, if you wanted to identify all of the geographies that are children of North America, you first look up the North America SPEC_ID in the first column of the geography flat example table. Then, you might use the level number for North America, which is 2, to determine the filter column. By using the SPEC_ID and level number, you can identify all the geographies that are children, grandchildren, or any level below North America.
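For illustration, such a lookup might be expressed as the following SQL. This is a minimal sketch only: the flat hierarchy table name (T_GEOGRAPHY_FLAT) and its column names are hypothetical stand-ins for whatever the hierarchy flattener generates in your environment, and the literal 'North America' stands in for that record's actual SPEC_ID value.

-- Find every geography that is a child, grandchild, or lower-level
-- descendant of North America. Because North America is at level 2,
-- its SPEC_ID is used to filter the Level 2 column.
SELECT flat.SPEC_ID
FROM   T_GEOGRAPHY_FLAT flat
WHERE  flat.LEVEL_2_SPEC_ID = 'North America'
AND    flat.SPEC_ID <> 'North America'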

Metrics structure

The TRIRIGA Workplace Performance Management functionality focuses on capturing metric facts and enabling metric reporting.

Most TRIRIGA metrics are multidimensional, where the same metric provides a high-level summary view (for example, the Total Operating Cost/Area for the entire organization and portfolio) and, by drilling down through various dimensions or filters, a role-specific view (for example, Total Operating Cost/Area for the facilities that are managed by North American Operations).

Metrics measure process performance in a way that identifies actionable results. Typically, the measures are ratios, percentages, or scores. Metrics have targets, thresholds, action conditions, accountability, and action task functionality.

TRIRIGA Workplace Performance Management includes the following types of metrics, which are the Scorecard categories in the Key Metrics portal section:

Customer Metrics
   Measure customer satisfaction
Financial Metrics
   Measure financial performance
Portfolio Metrics
   Measure operational utilization and asset life cycle health
Process Metrics
   Measure process efficiency and effectiveness
Reporting and Analysis Metrics
   Analyze a specific performance metric

Additionally, TRIRIGA Real Estate Environmental Sustainability includes the following types of metrics:

Environmental
   Measure performance of environmental initiatives
Building meters
   Measure characteristics of a building as reported by meters and sensors

The following high-level process diagram depicts how metrics are defined, captured, and presented to the user:


The TRIRIGA database is the primary source of operational data to be loaded into fact tables. Optionally, you can extract data from other sources to be loaded into fact tables.

Each fact table contains the lowest level of aggregated data (such as Building level) for each metric category. For efficiency reasons, a fact table is a de-normalized (flattened) table containing data elements from multiple TRIRIGA tables.

The dimension table contains dimensions for each metric. Dimensions are stored in a separate table for efficiency. The fact table contains a key (Spec ID) for each dimension. The dimension table can be either a flat hierarchy table or a TRIRIGA business object table.

The Metric Processing Engine (Analytic/Reporting Tool) generates metrics using data stored in fact tables along with metric setup data and dimension data.
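Conceptually, such a metric query joins a fact table to a dimension's flat hierarchy table through the Spec ID key and aggregates the facts at the level the user has drilled to. The following is an illustrative sketch only: the fact table and fact column appear later in this chapter, but the dimension key column and flat hierarchy table name are hypothetical, and the real queries are generated by the metric query engine rather than written by hand.

-- Illustrative only: total assigned workers rolled up per level-2 geography.
-- GEOGRAPHYOBJID and T_GEOGRAPHY_FLAT are hypothetical names.
SELECT geo.LEVEL_2_SPEC_ID,
       SUM(fact.TRIFACTTOTALWORKERSASS) AS TOTAL_WORKERS
FROM   T_TRISPACEALLOCFACTOID fact,
       T_GEOGRAPHY_FLAT geo
WHERE  geo.SPEC_ID = fact.GEOGRAPHYOBJID
GROUP BY geo.LEVEL_2_SPEC_ID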

Metric data, along with notifications, actions, and alerts, is presented to users in a role-based portal in various forms (including reports, queries, and graphs) as defined in the metric setup table. A user can drill down into a specific object or drill path to further analyze the metric data presented to them.

Metric reporting is dependent on metric fact tables. These fact tables are implemented using the Data Modeler but are identified with a unique object type that signifies a metric object. Metric objects are populated using an ETL development environment, unlike all other object types, which are updated through the metadata layer. The scheduling of the ETL process is controlled from within the TRIRIGA system using the Job Scheduler.

ETL integration

TRIRIGA uses either of two ETL development environments, the Tivoli® Directory Integrator Configuration Editor or Pentaho Spoon, to generate transform XML files. These transforms, when run through the API, move data from source to destination tables.

ETL integration architecture

TRIRIGA uses two ETL environments to create the ETL scripts that populate the fact tables. The two ETL development environments are the Tivoli Directory Integrator Configuration Editor and the Pentaho Data Integration tool Spoon. The ETL development environments enable the creation of SQL queries that read data from the TRIRIGA business object tables and map and transform the results to the fact table fact and dimension columns.

The following diagram shows the flow of data between the source data, ETL development environment, and the TRIRIGA Workplace Performance Management data model layers:


ETL job items are the business objects that reference the ETL scripts that are used to populate the fact tables.

TRIRIGA Workplace Performance Management uses standard TRIRIGA Application Platform tools.

The following diagram shows the application platform tools:


ETL integration process

To move data from the source to destination tables, you run the ETL transform files that you developed in either Tivoli Directory Integrator Configuration Editor or Pentaho Spoon through the API.

There must be a transform for each fact table. Fact tables are populated only through ETL transforms and not through the TRIRIGA application.

The TRIRIGA Application Platform includes a workflow that runs on a schedule to load the fact tables. The workflow calls a platform Custom workflow task, which retrieves the latest transform XML from the Content Manager and uses either the Tivoli Directory Integrator or the Kettle API to run the transform.

The scheduled ETL process is available with the following licenses:

• Any IBM TRIRIGA Workplace Performance Management license
• An IBM TRIRIGA Real Estate Environmental Sustainability Manager license
• An IBM TRIRIGA Real Estate Environmental Sustainability Impact Manager license
• An IBM TRIRIGA Workplace Reservation Manager license
• An IBM TRIRIGA Workplace Reservation Manager for Small Installations license

There are three main processes:

• Setup, which involves creating the Transform business object/form/navigation item and the workflow itself.
• Creating/maintaining the Transform XML using an ETL development environment.
• Runtime, which is a scheduled workflow that executes a Custom workflow task to periodically run through the transforms to update the fact tables.

Restriction: Additional licenses may be required for the setup and maintenance processes, such as a license for the triJobItem Module. The licenses needed to create or edit the ETLs differ from the licenses needed for the runtime process.

The following diagram summarizes these processes for Pentaho Spoon ETL transforms:


The following diagram summarizes these processes for Tivoli Directory Integrator Configuration Editor ETL transforms:


Data is pulled by ETL scripts from business objects, including the Financial Summary business object, into which financial summary records are imported from spreadsheets or by customer-designed interfaces with a financial system.

Prerequisite setup for ETL integration

Within TRIRIGA Application Platform, a business object and a form manage the transforms. The source tables and destination fact table must be defined and the mappings understood to create the transformation.

There is a record for each fact table loaded through a transform. A binary field on the Transform business object pulls the transform XML file into the Content Manager. The form provides a way to upload/download the XML file so the transform XML can be easily maintained. TRIRIGA comes preconfigured with an ETL Job Item as the implementation of this business object or form.


Within the TRIRIGA Application Platform, a workflow runs on a schedule and calls a Custom workflow task for each fact table that needs to be loaded or updated. The Job Scheduler provides a mechanism that automatically calls the custom workflow task for ETL Job Items.

TRIRIGA ships all business objects, forms, and workflows required to support the as-delivered TRIRIGA Workplace Performance Management and TRIRIGA Real Estate Environmental Sustainability products.

Defining and maintaining ETL transforms

Use an ETL development environment to create a transformation to move data. During the transform, you can do calculations and use variables from TRIRIGA Application Platform and from the system.

Using ETLs with Pentaho Spoon

You can use Pentaho Spoon as an ETL development environment.

Overview of using Pentaho Spoon

You must first create the source and destination tables and establish the corresponding mappings. Next, you must identify variables that need to be passed into the transform and add these variables to the transform business object or form. Then, you can use Pentaho Spoon and the following steps to define and maintain transforms.

Tip: It might not be necessary to perform all of the following steps. The steps that are required depend on whether you are defining or maintaining a transform.

• Run the spoon.bat or kettle.exe file by opening Spoon. Select No Repository, as you do not need to use one.
• Either open an existing XML file that was downloaded to the file system by using the Transform Form, or use File > New > Transformation to create a new transform.
• Define the JNDI settings for the local database. Use TRIRIGA as the connection name. Set the database connection in the tool by using View > Database Connections > New. When the transform is run by the workflow, the connection is overwritten with the application server's connection information.
• Use the Design menu to lay out the transform as follows:
   – Extract rows from the tables by using Design > Input > Table Input.
   – Make sure that all row fields have values when used in a calculation with Design > Transform > Value Mapper.
   – Use Design > Transform > Calculator for calculations.
   – Provide the sequencing for the destination rows with Design > Lookup > Call DB Procedure by using the NEXTVAL stored database procedure.
   – Use Scripting > Modified JavaScript Value and other steps to transform data as necessary.
   – Identify the table to which the rows are output with Design > Output > Table Output.
   – Map the fields by using Generate mapping against this target step.
• Link steps by using View > Hops and lay out the transform, step by step.
• Test thoroughly by using execute and other available utilities. Testing makes sure that the process is accurate and that the expected rows are returned and transformed appropriately.
• Save the transform by using File > Save. Do not save to the repository. Instead, set the file type to XML and save with the .ktr file extension. If you do not set the file type, the default is Kettle transform, which saves an XML file with the .ktr file extension.

Installing Pentaho Spoon

You can install Pentaho Spoon as an ETL development environment. Use version 3.1, which is the version with which TRIRIGA integrates.

Procedure

1. Locate Pentaho Spoon version 3.1 at http://sourceforge.net/projects/pentaho/files/Data%20Integration/3.1.0-stable/pdi-open-3.1.0-826.zip.


2. Extract the files from the .zip file and keep the directory structures intact.
3. Review the most recent version of Pentaho Spoon and accompanying detailed documentation at http://kettle.pentaho.org/.

Setting up a local JNDI

You must define the local JNDI settings for your database by updating the properties file.

Procedure

1. From the pdi-open-3.1.0-826/simple-jndi directory, edit the jdbc.properties file and add the following properties:

   • LocalJNDI/type=javax.sql.DataSource
   • LocalJNDI/driver=oracle.jdbc.driver.OracleDriver
   • LocalJNDI/url=jdbc:oracle:thin:@localhost:1521:orcl
   • LocalJNDI/user=tridata2
   • LocalJNDI/password=tridata2

2. Update the information as appropriate, including the driver if you are using DB2 or SQL Server.
3. Save and close the file.

Creating transforms and database connections

You can create transforms and database connections for use between Pentaho Spoon and TRIRIGA.

Procedure

1. Run the spoon.bat file in the pdi-open-3.1.0-826 directory by opening the Spoon tool. Choose to run without a repository.
2. To create a new transform, right-click Transformations and select New.
3. In View mode, create your database connection. Right-click Database connections within Transformations and select New.
4. The Custom workflow task replaces the TRIRIGA connection with the application server JNDI settings. Configure the database connection as follows:
   • Connection Name: TRIRIGA
   • Connection Type: Oracle
   • Access: JNDI
   • Settings: JNDI Name: LocalJNDI
5. Select Test to make sure that the connection is set up correctly.
6. Save the database connection details.
7. Be sure to save the transform as an XML file, not in the repository. The extension for a Kettle transformation is .ktr. The default for Kettle transformation saves the file as .ktr.

Running a transform from Pentaho Spoon

You can run a transform that is either completed or is in the process of being completed.

Procedure

1. Save the transform and select Run.
2. Set variables, if necessary.
3. Select Preview to display the changes to the input stream as each step is run.


Selecting Spoon steps

You can use the Design mode to select the various Spoon step types and add them to a transform.

Procedure

1. To add a step to a transform, select Step type and drag the step in the left navigation onto your palette.
2. To link two steps, select View in the left navigation and double-click Hops.
3. Put in the From and To steps and select OK.
4. Alternatively, you can Ctrl+click two steps, right-click one of the steps, and select New hop.
5. To add a note to the transform, right-click the palette and select New Note.

Spoon example transform

You can download a copy of any of the existing .ktr scripts that are contained in an existing ETL job item to follow along in the step descriptions. The following shows an example of a Spoon transform.

Most of the as-delivered ETLs have the same flow as the example, but the specifics are different, for example, the database tables from which data is extracted and how the data is transformed.

The example transform includes the following items:

• Pulls input rows and fields from T_TRIORGANIZATIONALLOCATION org and T_TRISPACE space where org.TRILOCATIONLOOKUPTXOBJID = space.SPEC_ID.
• Uses IBS_SPEC.UPDATED_DATE to limit the rows that are selected, by using the date range that is passed in from the transform business object.
• Uses Value Mapper to make sure that there is a value in all rows for space.TRIHEADCOUNTNU, space.TRIHEADCOUNTOTHERNU, and org.TRIALLOCPERCENTNU; if not, sets it to 0.
• Uses the Calculator to set TRIFACTTOTALWORKERSASS to (space.TRIHEADCOUNTNU + space.TRIHEADCOUNTOTHERNU) * org.TRIALLOCPERCENTNU.
• Gets TRICREATEDBYTX and TRIRUNDA, passed in from the Transform BO, through the Get Variables step.
• Uses Add Constants to set the sequence name and increment so that it is available in the input stream for the sequencing step.
• Uses the DB Procedure NEXTVAL to set the SPEC_ID; set this step to use five threads for enhanced performance.
• Uses a JavaScript scripting step to determine whether the project was on time or not, and to calculate the duration of the project. Set this step to use three threads for better performance.
• Maps the fields to T_TRISPACEALLOCFACTOID.

Key things to consider as you build a transform include the following items:

• Test as you add each step to make sure that your transform is doing what you want.
• Transforms need to be developed in a defensive manner. For example, if you are making calculations that are based on specific fields, all rows must have a value in these fields, no empties. If not, the transform crashes. Use Value Mapper to make sure that all fields used in a calculation have a value.
• Dates are difficult to handle because the databases that TRIRIGA supports keep DATE and TIME in the date field. The date solution shows how to handle date ranges in SQL.
• Make sure to use JNDI settings and that your transform is database independent, especially if your solution needs to run on multiple database platforms (DB2, Oracle, and Microsoft SQL Server).
• Any attributes on the Transform business object are sent to the transform as a variable. There are a couple of exceptions: attributes of type Time or System Variable are ignored. You can use the variables in your SQL or pull them into the input stream by using Get Variables with the following syntax: ${VariableName}, where VariableName is the attribute name.
• Make sure to completely test and set up the transform before you use variables in the Table Input. It is challenging to test JavaScript, Table Input Preview, and Table Mapping. You can set variables in the transform with Edit > Set Environment Variables or in the Execute page Variable section. By using variables, more of the test functions within Spoon are made available.
• Test your connection before you use JNDI, before you run a search, or before you run a Spoon transform. The JNDI connection must be tested to avoid potential performance issues in Spoon.
• Consider adding an index. It can be key to performance because the ETLs pull data from the T tables in a manner that is different from the regular application.

The preceding items detail the transform as you configure the Spoon steps that are used. The items concentrate on the main steps that are used by the transforms that are delivered with TRIRIGA. Spoon provides other step types that you can use to manipulate your data; use the steps as necessary, depending on your transform needs.

Configuring Spoon input steps

You can use input steps to bring data into the transform.

About this task

Table input is the source of most of your data. By using the specified database connection, you can set up SQL to extract data from tables.

Procedure

1. Double-click a table input step to open up the information for the step.
2. Set the connection to TRIRIGA or the source database.
3. Enter your SQL into the SQL table.
4. Select OK to save the table input.
5. Select Preview to preview the data that the table input includes.
   If you are using variables in SQL, the variables must be set for the Preview to function. You must either hardcode the variable values while testing or select Edit > Set Environment Variables to set the variable values. The variables in SQL are ${triActiveStartDA_MinDATE} and ${triActiveEndDA_MaxDATE}.

Results

The SQL provided extracts input rows from T_TRIORGANIZATIONALLOCATION org and T_TRISPACE space, where org.TRILOCATIONLOOKUPTXOBJID = space.SPEC_ID. It uses dates from the transform business object to limit the data that is included.
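For reference, when you use Preview during testing you might temporarily hardcode the date range instead of the variables. The following is a minimal Oracle-flavored sketch based on the example query shown later in "Select field as date"; the literal dates are placeholders for your own test window, and the ${...} variables should be restored before uploading the transform.

-- Temporary hardcoded date range for Preview testing (Oracle syntax).
SELECT org.SPEC_ID ORG_SPEC_ID, space.TRIHEADCOUNTNU,
       space.TRIHEADCOUNTOTHERNU, spec.UPDATED_DATE
FROM T_TRIORGANIZATIONALLOCATION org, T_TRISPACE space, IBS_SPEC spec
WHERE org.TRILOCATIONLOOKUPTXOBJID = space.SPEC_ID
  and space.SPEC_ID = spec.SPEC_ID
  and spec.UPDATED_DATE >= to_date('20070701 00:00:00', 'YYYYmmdd hh24:mi:ss')
  and spec.UPDATED_DATE <= to_date('20070731 23:59:59', 'YYYYmmdd hh24:mi:ss')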

Configuring Spoon transform steps

You can use transform steps to change input data or add information to the input stream.

About this task

In the Spoon example transform, the Calculator, Add Constants, and Value Mapper steps are used. You can add a sequence through Spoon, but it is not database independent and does not work on SQL Server. Instead, you can use the provided DB Procedure.

Procedure

1. Use the Value Mapper step to ensure that fields have values or to set fields to different values. You can set values to a target field based on the values of a source field. If the target field is not specified, the source field is set instead of the target field. You must ensure that all the fields in a calculation have a value. If a null value is encountered during a calculation, the transform fails.
2. Double-clicking the Value Mapper opens the dialog to input the necessary information. In the Spoon example transform, it is used to set a field to 0 if it does not have a value.
3. Use the Add Constants step to add constants to the input stream and to set the values that are needed for the NEXTVAL DB Procedure.


4. You must have this step in all transforms that use the NEXTVAL DB Procedure. Set SEQ_NAME to SEQ_FACTSOID and INCR to 1.
5. Use the Calculator step to take fields and run a limited set of calculations. It provides a set of functions that are used on the field values. The Calculator step performs better than using JavaScript scripting steps.
6. The built-in calculations are limited. Select the Calculation column to show the list of available functions.

Configuring Spoon lookup steps

You can use lookup steps to extract extra data from the database into the data stream.

About this task

The Call DB Procedure step allows the transform to call a database procedure. Information flows through the procedure and back to the transform. You can create sequences for the fact table entries.

Procedure

1. Set up your DB procedure call to use NEXTVAL, to send in SEQ_NAME and INCR, and to output by using CURR_VALUE. A sketch of calling the procedure manually appears after these steps.
2. Determine how many instances of this lookup step to run. When you are testing the transform, running this step with five instances greatly helps with performance. For example, for 30,000 records the processing time reduces from 90 seconds down to 30 seconds.
3. Change the number of threads that run a step by right-clicking the step and selecting Change number of copies to start.

4. Tune the number of threads that are running the DB Procedure step.
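If you want to confirm that the procedure is reachable and increments as expected outside of Spoon, a quick manual call can help. This is a hedged sketch only, assuming an Oracle database and the parameters documented above (SEQ_NAME and INCR in, CURR_VALUE out); the delivered procedure's exact signature and behavior may differ in your environment.

-- Illustrative manual check of the NEXTVAL procedure from SQL*Plus.
SET SERVEROUTPUT ON
DECLARE
  curr_value NUMBER;
BEGIN
  -- assumed order: sequence name, increment, output value
  NEXTVAL('SEQ_FACTSOID', 1, curr_value);
  DBMS_OUTPUT.PUT_LINE('Next fact SPEC_ID: ' || curr_value);
END;
/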

Configuring Spoon job steps

Even though you are not creating jobs, you need to get Kettle variables and fields into the input stream, and you need to ensure that you can set an output field to a variable.

About this task

In the Spoon example transform, the triCreatedByTX and triRunDA variables are brought into the input stream. You also use Get Variables to pull in the ONTIME and DURATION variables so that you can set them during the JavaScript scripting steps.

Procedure

It is important, in case there is a failure during a transform, to time stamp when the transform is run. The example does this by using the triRunDA variable, which provides an avenue for rollback, even though the process does not have explicit steps for it.
When you are setting fields to values in the transform, they must be the same type; otherwise, the transform fails.

Configuring Spoon scripting steps

You can use scripting steps to implement JavaScript features.

About this task

You can use scripting steps for specific data manipulations on the input stream that cannot be done with the Calculator. You can calculate the duration or set values into the stream based on other values with an if/then/else clause. You can set values into the transform stream that are constants or are from a variable.

Procedure

1. Use JavaScript scripting if you need logic to set the values.


2. In the Spoon example transform, duration is calculated by subtracting two dates from each other. The duration then determines whether a plan was on time.
3. With the JavaScript scripting features, if you want information out of the Table Input rows, you must iterate to find the field you want. You cannot access the field directly, unless you alias the field in the Table Input step.

Example

The JavaScript scripting example details how to obtain and set the variables.

var actualEnd;
var actualStart;
var plannedEnd;
var plannedStart;
var duration;
var valueDuration;
var valueOnTime;

// loop through the input stream row and get the fields
// we want to play with
for (var i = 0; i < row.size(); i++) {
  var value = row.getValue(i);
  // get the value of the field as a number
  if (value.getName().equals("TRIACTUALENDDA")) {
    actualEnd = value.getNumber();
  }
  if (value.getName().equals("TRIACTUALSTARTDA")) {
    actualStart = value.getNumber();
  }
  if (value.getName().equals("TRIPLANNEDENDDA")) {
    plannedEnd = value.getNumber();
  }
  if (value.getName().equals("TRIPLANNEDSTARTDA")) {
    plannedStart = value.getNumber();
  }

  // these are the 'variables' in the stream that we want
  // to update with the duration and ontime setting,
  // so we want the actual Value class, not the value
  // of the variable
  if (value.getName().equals("DURATION")) {
    valueDuration = value;
  }
  if (value.getName().equals("ONTIME")) {
    valueOnTime = value;
  }
}

// calculate the duration in days
duration = Math.round((actualEnd - actualStart) / (60*60*24*1000));
// calculate the duration in hours
// duration = (actualEnd - actualStart) / (60*60*1000);

// set the duration into the 'variable' in the row
valueDuration.setValue(duration);

// determine ontime and set the value into the
// 'variable' in the row stream
if ((actualEnd == null) || (plannedEnd == null))
  valueOnTime.setValue("");
else if (actualEnd > plannedEnd)
  valueOnTime.setValue("no");
else
  valueOnTime.setValue("yes");

Select Test Script to make sure that the JavaScript compiles. The Test Script and Preview steps in Table Input cannot handle variables unless they are set. You can set variables in the transform by using Edit > Set Environment Variables. This makes more of the test functions within Pentaho Spoon available.

For example, you can use Edit > Set Environment Variables and set triActiveStartDA_MinDATE to to_date('20061201', 'YYYYmmdd').

If you are using column aliases when you are defining your query, you must use the same alias when you are looking up the column with getName.


The following example, in the table input step, shows the select option:

SELECT mainProject.triProjectCalcEndDA ActualEndDate, mainProject.triProjectActualStartDA ActualStartDate

If you are looking up the value for ActualEndDate, use the alias and not the column name from the database, as illustrated:

if (value.getName().equals("ActualEndDate")) {
  actualEnd = value.getNumber();
}

Configuring Spoon output steps

You can use output steps to write data back to the database.

About this task

Table output and table output mapping store information to a database. This information is then used in the fact tables. When you have all the information to save a transform, you can add output steps to the end of your transform and connect them to the last step.

Procedure

1. Double-click and add the connection information and the fact table you want to use as the output table.
2. When it is set up and the steps are connected, right-click the table output step.
3. Select Generate mapping against this target step.
4. Map the source fields to the target fields in the target database and select OK.
   Source fields include the additional fields added to the input stream. Ensure that you set the table mapping before you use variables in the table input step.
5. To complete your transform, drag the mapping step in between the last two steps.
6. If necessary, you can modify and add more fields to the mapping step.

Testing the transform

You can test the transform after you add each Spoon step, or at the end when all Spoon steps are complete. Testing after each step makes debugging easier. Before you can test the transform, you must first save the transform.

About this task

The variables section details the variables that are used in the transform. When you test the transform by using Spoon, you can set values to these variables. When the transform is run from within TRIRIGA, the variables form part of the transform business object. Set and save the values that are used into the smart object before the custom workflow task is called.

Procedure

1. Set the triRunDA variable to the date and time of the workflow run. It does not need to be an attribute on the transform business object. It is the Number representation of the run date and time. triRunDA does not have six formats of the date since it is generated dynamically by the custom workflow task. triRunDA is needed for setting the create date of the fact row.
2. triCreatedByTX is an attribute on the transform business object.
3. triActiveStartDA_MinDATE and triActiveEndDA_MaxDATE are the wrapped representations of triActiveStartDA and triActiveEndDA. During Spoon testing, if you are testing on Oracle or DB2, you must wrap them with to_date('the date you want', 'the format').
4. Click Launch to run the transform. If a step has an error, the step appears in red and the error is saved to the log file. You can access the log file through the log page.


Date solution

Several date variables require calculation and comparison when used by Pentaho Spoon. The date solution provides these calculations and comparisons.

There are three instances when the date solution is required:

1. Compare two dates. This comparison is used to determine whether a project is on time.
2. Calculate the duration between two dates in days. In some cases, this calculation is used to calculate the duration in hours.
3. Compare a date, such as a modified date or processed date, to a range of dates, such as the first day of the month and the last day of the month.

The first and second instances are solved by using JavaScript scripting steps.

The third instance is solved by using a date range in the table input.

There are two types of dates: dates that are stored as a Date in the database and dates that are stored as a Number in the database.

Tip: All TRIRIGA objects store Date and Date and Time fields as numbers in the database. Select a field as a number to interact with business object tables. Select a field as a date to interact with system platform table fields defined as date.

Select field as date

You can interact with system platform table fields defined as date by selecting a field as a date.

The following example code uses IBS_SPEC.UPDATED_DATE as the date field to determine whether a row is needed. triActiveStartDA and triActiveEndDA are the date range. These dates come from the triActiveStartDA and triActiveEndDA fields on the transform business object.

The IBS_SPEC table is not a TRIRIGA object. It is a system platform table that is used to track objects in TRIRIGA. It includes a field that changes every time an object in TRIRIGA is updated. The field is the UPDATED_DATE field, and in the database it is a date field, not a number field.

In the following example code, ${triActiveStartDA_MinDATE} and ${triActiveEndDA_MaxDATE} are used. These wrapped date fields fetch all records from 12:00 am on the start date to 11:59 pm on the end date.

SELECT org.SPEC_ID ORG_SPEC_ID, org.TRIORGANIZATIONLOOKUOBJID,
       space.CLASSIFIEDBYSPACESYSKEY, org.TRIALLOCPERCENTNU,
       org.TRIALLOCAREANU, space.TRIHEADCOUNTNU,
       space.TRIHEADCOUNTOTHERNU, spec.UPDATED_DATE
FROM T_TRIORGANIZATIONALLOCATION org, T_TRISPACE space, IBS_SPEC spec
WHERE org.TRILOCATIONLOOKUPTXOBJID = space.SPEC_ID
  and space.SPEC_ID = spec.SPEC_ID
  and spec.UPDATED_DATE >= ${triActiveStartDA_MinDATE}
  and spec.UPDATED_DATE <= ${triActiveEndDA_MaxDATE}
order by UPDATED_DATE

In Oracle or DB2, ${triActiveStartDA_MinDATE} displays like to_date('20070701 00:00:00', 'YYYYmmdd hh24:mi:ss') and ${triActiveEndDA_MaxDATE} displays like to_date('20070731 23:59:59', 'YYYYmmdd hh24:mi:ss').

In SQL Server, these dates look slightly different because of database specifics, but are set up to capture all the rows between the two dates.

Select field as number

You can interact with business object tables by selecting a field as a number.

Instead of using IBS_SPEC.UPDATED_DATE as the date determination field for the TRIRIGA date, this method compares the determination field directly to triActiveStartDA and triActiveEndDA, since they are all numbers in the database.

In the following example code, triCaptureDA is a field on T_TRISPACE.

SELECT org.SPEC_ID ORG_SPEC_ID, org.TRIORGANIZATIONLOOKUOBJID,
       space.CLASSIFIEDBYSPACESYSKEY, org.TRIALLOCPERCENTNU,
       org.TRIALLOCAREANU, space.TRIHEADCOUNTNU,
       space.TRIHEADCOUNTOTHERNU, space.TRICAPTUREDA
FROM T_TRIORGANIZATIONALLOCATION org, T_TRISPACE space, IBS_SPEC spec
WHERE org.TRILOCATIONLOOKUPTXOBJID = space.SPEC_ID
  and space.TRICAPTUREDA >= ${triActiveStartDA_Min}
  and space.TRICAPTUREDA <= ${triActiveEndDA_Max}
order by space.TRICAPTUREDA

Similar to the date fields, use the Min and Max variables to make sure that the start is 00:00:00 and the end is 23:59:59. For example, use these variables to make your search pick up a record on December 31st at 13:54 in the afternoon.

Date variables

For each Date or Date and Time attribute on the Fact Transform business object, the system creates six Kettle variables.

The following table summarizes these Kettle variables:

Table 3. Kettle variables

${triActiveStartDA}
   No suffix = the value in milliseconds since January 1, 2014, with no changes to the time. This variable is for fields that are represented as a number.

${triActiveStartDA_Min}
   Min = the value in milliseconds since January 1, 2014, with the time value set to 00:00:00 for the specified date. This variable is for fields that are represented as a number.

${triActiveStartDA_Max}
   Max = the value in milliseconds since January 1, 2014, with the time value set to 23:59:59 for the specified date. This variable is for fields that are represented as a number.

${triActiveStartDA_DATE}
   DATE = the wrapped value in date format, with no changes to the time. This variable is for fields that are represented as date in the database. For Oracle or DB2 it is wrapped and displays like: to_date('20070615 22:45:10', 'YYYYmmdd hh24:mi:ss'). For SQL Server it displays like: '20070615 22:45:10'.

${triActiveStartDA_MinDATE}
   MinDATE = the wrapped value in date format, with the time value set to 00:00:00. This variable is for fields that are represented as date in the database.

${triActiveStartDA_MaxDATE}
   MaxDATE = the wrapped value in date format, with the time value set to 23:59:59. This variable is for fields that are represented as date in the database.

When you specify the ${triActiveStartDA_Min} and ${triActiveStartDA_Max} variables to see a time period between two dates, you need to capture all the rows within the time period. You need to start at midnight and stop at 1 second before midnight. If you use only the date value, you might not get all the rows that you want, depending on the time on the variable. You must specify the minutes and seconds because both TRIRIGA databases store dates in a date time or number field.

The ${triActiveStartDA_MinDATE} and ${triActiveStartDA_MaxDATE} variables help with date comparisons.

For example, for triActiveStartDA whose value is 20070615 22:45:10,

triActiveStartDA_MinDATE =
   (Oracle) to_date('20070615 00:00:00', 'YYYYmmdd hh24:mi:ss')
   (SQL Server) '20070615 00:00:00'
triActiveStartDA_MaxDATE =
   (Oracle) to_date('20070615 23:59:59', 'YYYYmmdd hh24:mi:ss')
   (SQL Server) '20070615 23:59:59'

Moving ETL Scripts into TRIRIGA from Kettle

Once the transform is completed and tested, it must be uploaded to the TRIRIGA ETL job item.

Remember: Save the transform with the file type as XML and extension .ktr.

The following graphic describes the flow between the ETL environment and TRIRIGA.

Variables passed to Kettle

All variables that are passed to Kettle are of type String. Number variables are converted by the custom workflow task to type String. The TRIRIGA field types that are supported in Kettle are Text, Boolean, Date, Date and Time, Locators, and Numbers.

Table 4. Example fields on the ETL Job Item

Field name           Field label         Field type
triActiveEndDA       Active End Date     Date
triActiveStartDA     Active Start Date   Date
triBONamesTX         BO Names            Text
triControlNumberCN   Control Number      Control Number
triCreatedByTX       Created By          Text
triLocator           triLocator          Text
triModuleNamesTX     Module Names        Text
triNameTX            Name                Text
triTransformBI       Transform File      Binary


Table 5. The following are the variables passed to Kettle:

Variable that is passed to Kettle Description

triNameTX (Text)

triActiveStartDA (Number) date in milliseconds since January 1, 1970

triActiveStartDA_DATE (Date) wrapped if Oracle or DB2, time is whatever it was on the attribute

triActiveStartDA_MinDATE (Date) wrapped if Oracle or DB2, time is 00:00:00

triActiveStartDA_MaxDATE (Date) wrapped if Oracle or DB2, time is 23:59:59

triActiveStartDA_Min (Number) date in milliseconds since January 1, 1970, time is 00:00:00

triActiveStartDA_Max (Number) date in milliseconds since January 1, 1970, time is 23:59:59

triActiveEndDA (Number) date in milliseconds since January 1, 1970

triActiveEndDA_DATE (Date) wrapped if Oracle or DB2, time is whatever it was on the attribute

triActiveEndDA_MinDATE (Date) wrapped if Oracle or DB2, time is 00:00:00

triActiveEndDA_MaxDATE (Date) wrapped if Oracle or DB2, time is 23:59:59

triActiveEndDA_Min (Number) date in milliseconds since January 1, 1970, time is 00:00:00

triActiveEndDA_Max (Number) date in milliseconds since January 1, 1970, time is 23:59:59

triCreatedByTX (Text)

triRunDATE (Number) Run Date set by Custom workflow task

triLocator (Text – Locator) is a locator field that contains a reference to another business object. This variable contains the text value of that record's field

triLocator_IBS_SPEC (Text - Locator) contains the spec_id of the record in the triLocator field. You can use this spec_id to find information related to that record through other database tables

triControlNumberCN and triTransformBI are not passed to Kettle.

Important: Things to remember about variables:

• There are six variables for each Date, and Date and Time, type field. TRIRIGA wraps the value and hands it to Kettle in six different formats.

• Variables in Kettle are all strings. If you need a variable to be a number in the script, you need to use a conversion. You can set a number field, for example TRICREATEDDA, with a variable, for example triRunDATE. Kettle does some implicit conversions, but if you want to do any calculations with a variable, you must first convert the variable to a number (see the sketch after this list).


• For dates, you must use the correct representation. For example, you cannot include spec.UPDATED_DATE >= ${triCreatedDA} in your selection. spec.UPDATED_DATE is a date while triCreatedDA is a number. The results are inaccurate or the SQL fails.

• The attribute types supported to pass to Kettle are limited to Text, Boolean, Date, Date and Time, and Numbers. All other TRIRIGA data types are skipped (except Locators).

• For Locator fields, two variables are created, one for the text of the Locator and the other for the SPEC_ID of the linked record. You can use the SPEC_ID to find information that is related to that record through other database tables.
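
As an illustration of the conversion point above, a JavaScript step in Kettle can convert the string variables before they are used in arithmetic. This is a minimal sketch; the output field names createdMillis, runMillis, and ageMillis are assumptions used only for the example:

// Minimal sketch: Kettle variables arrive as strings, so convert before calculating.
// createdMillis, runMillis, and ageMillis are illustrative output field names.
var runDateStr = getVariable("triRunDATE", "0");                     // string value of the variable
var createdMillis = parseInt(getVariable("triCreatedDA", "0"), 10);  // date variable as a number
var runMillis = parseInt(runDateStr, 10);                            // run date as a number
var ageMillis = runMillis - createdMillis;                           // arithmetic is now safe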

Debugging ETL scripts in the application
To debug ETL scripts in the application, you must first set up logging and then trigger the RunETL Custom workflow task to view the log information.

Setting up logging
TRIRIGA provides debugging capabilities when ETL scripts run in the TRIRIGA application.

Procedure

1. In the Administrator Console, select the Platform Logging managed object. Then select the option to turn on ETL logging.

2. Select Category ETL > Transforms > Run Transform to turn on debug logging in the TRIRIGA platform code that processes ETL job items. Log messages are printed to server.log.

3. Select Category ETL > Transforms > Kettle to turn on debug logging in the Kettle transforms. Log messages are printed to the server.log.

4. Apply the changes. Now when an ETL Script runs, ETL related information will be put into the server log.

Important: Because of the large volume of information you may encounter in a log, set Pentaho Spoon logging to debug for only one execution of the ETL job item.

Debugging using ETL jobs
Once you have set up logging, you will need a way to trigger the RunETL Custom workflow task to see any information in the logs.

About this task
If you are using the ETL Job Item, then you can simply click the Run Process action on that form.

Procedure

Do not forget to fill the field values in the form that the ETL Script would expect. Only use the Run Process action for debugging purposes. For production, use the Job Scheduler instead. Note that Run Process will update tables in the database, so do not use this action in a production environment.

Example
The following shows a sample log output:

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triCalendarPeriodTX_SPEC_ID} = 3103902

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triCalendarPeriodTX} = \Classifications\Calendar Period\2010\Q4 - 2010\October - 2010

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) *** object field found = BoFieldImpl[name=triEndDA,id=1044,Section=BoSectionImpl[name=General,id=BoSectionId[categoryId=1,subCategoryId=1],Business Object=BoImpl


[name=triETLJobItem,id=10011948,module=ModuleImpl[name=triJobItem,id=22322]]]]

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triEndDA_MinDATE} = to_date('20101031 00:00:00','YYYYmmdd hh24:mi:ss')

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triEndDA_MaxDATE} = to_date('20101031 23:59:59','YYYYmmdd hh24:mi:ss')

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triEndDA_DATE} = to_date('20101031 00:00:00','YYYYmmdd h24:mi:ss')

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triEndDA} = 1288508400000

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triEndDA_Min} = 1288508400000

2011-01-21 14:01:27,125 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) Kettle variable set - ${triEndDA_Max} = 1288594799000

2011-01-21 14:02:10,595 INFO [SpaceFact](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) SpaceFact - Process Remove Nulls (LEGALINTEREST_SPEC_ID)'.0 ended successfully, processed 3282 lines. ( 76 lines/s)

2011-01-21 14:02:10,595 INFO [SpaceFact](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) SpaceFact - Process Remove Nulls ( REALPROPERTYUSE_SPEC_ID)'.0 ended successfully, processed 3282 lines. ( 76 lines/s)

2011-01-21 14:02:10,595 INFO [SpaceFact](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) SpaceFact - Process Remove Nulls ( REALPROPERTYTYPE_SPEC_ID)'.0 ended successfully, processed 3282 lines. ( 76 lines/s)

2011-01-21 14:02:10,595 INFO [SpaceFact](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) SpaceFact - Process Filter rows'.0 ended successfully, processed 3307 lines. ( 76 lines/s)

2011-01-21 14:02:10,595 INFO [SpaceFact](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) SpaceFact - Process Dummy (do nothing)'.0 ended successfully, processed 25 lines. ( 0 lines/s)

2011-01-21 14:02:10,595 INFO [SpaceFact](WFA:11325389 - 3070255 triProcessManual:38447392 IE=38447392) SpaceFact - Process Query for Space'.0 ended successfully, processed 0 lines. ( 0 lines/s)

Performance tuning tips
Use the following tips to improve performance of ETLs with Spoon.

Summary

1. When you are finished getting your ETL to do what you want it to do, take a baseline performance measurement.

2. Using Spoon, run the ETL against a database where you have thousands of rows added to your fact table.

3. Make sure that you are using a JNDI connection and running Spoon on the network where the database lives so that you do not have network latency. Do not run it through a VPN.

4. Get a completed list of your run. For example, from a run of the triSpacePeopleFact ETL.

Analysis

1. Run the JavaScript and DB Procedure (Get Next Spec ID) steps with multiple copies. Right-click the step and change the number of copies to start.


• For the triSpacePeopleFact ETL in the preceding example run, changing the JavaScript and DB Procedure (Get Next Spec ID) steps to three copies of each:

• Unaltered: 12.9, 12.7, 12.5, 12.6
• Three copies of each: 11.4, 11.6, 12.3, 12.2

2. Change the default row set size from 1000 to 10000. New transformations have this set automatically. Right-click the ETL and open the properties of the transform.

3. Analyze the run. Is there a bottleneck? Is there a step that is slower than the others? Possibly other steps can have multiple copies for better throughput.

4. Is the Data Input step a bottleneck? Will an index to the database help? If so, add an index and rerun. Is the performance better? Maybe use a Filter step instead of using the database to filter down the result set.

5. Analysis is an iterative process. Always have multiple copies of the JavaScript and DB Procedure (Get Next Spec ID) steps.

6. An ETL run with 300-800 rows per second is performing well and definitely in the acceptable performance range.

For the triSpacePeopleFact ETL, after initial development, substantial improvements were achieved by doing just Steps 1 and 2.

For the triSpaceFact ETL, substantial improvements were achieved by doing Steps 1, 2, and 4.

The following shows the triSpacePeopleFact ETL run with Steps 1 and 2:

Query for Space People: Time = 11.6 sec; Speed (r/s) = 743.6

The following shows the triSpaceFact ETL run with Steps 1 and 2:

Query for Space: Time = 313.9 sec; Speed (r/s) = 24.0

Notice that the Query for Space step, which is the Data Input step, is clearly a bottleneck at 24 rows per second.

Notice that the Query for Space People step is not a bottleneck like the Query for Space step. The triSpacePeopleFact ETL runs well without any modifications besides Steps 1 and 2, getting over 700 rows per second.

For Step 4 on the triSpaceFact ETL, look at the SQL for the Query for Space task. Notice in the SQL that there are SUMs. SUMs are expensive, especially since there are two of them and none of the fields are indexed.

Add an index to T_TRIORGANIZATIONALLOCATION.TRILOCATIONLOOKUPTXOBJID. It is only necessary to add an index to TRILOCATIONLOOKUPTXOBJID, even though TRISTATUSCL is in the WHERE clause of the SELECT SUM. TRISTATUSCL is a 1000 character field and made the index slow and not even viable on SQL Server.

CREATE INDEX IDX01_TRIORGALLOC ON T_TRIORGANIZATIONALLOCATION (TRILOCATIONLOOKUPTXOBJID) NOPARALLEL;

Rerun the ETL.

The following shows the triSpaceFact ETL run with Steps 1, 2, and 4.

Query for Space: Time = 3.2 sec; Speed (r/s) = 2378.3

Notice the change in the Data Input step rows per second (2378.3) and how long the ETL took to run (3.2 seconds for 7544 rows).

Important: Things to keep in mind while you are developing your ETLs:

• Avoid complex SQL and aggregate functions like COUNT, MIN, MAX, and SUM. If you need to use these functions, see whether an index helps out the Data Input step. Do not create an index on a field that is a large varchar; SQL Server can handle only indexes < 900 bytes.


• Avoid OR and NOT and using views (M_TableName in TRIRIGA databases) if possible.
• Use the Calculator step instead of JavaScript if that is possible. The JavaScript step can be expensive.
• Have only one JavaScript scripting step.

Using ETLs with IBM Tivoli Directory Integrator Configuration Editor
Tivoli Directory Integrator Configuration Editor is the ETL development environment that is included in Tivoli Directory Integrator. Configuration Editor lets you create, maintain, test, and debug ETL transforms, which Tivoli Directory Integrator calls configuration files; it builds on the Eclipse platform to provide a development environment that is both comprehensive and extensible.

Before you begin

Tivoli Directory Integrator must be installed and configured by the TRIRIGA system administrator. For instructions on how to configure TRIRIGA for Tivoli Directory Integrator, see the IBM TRIRIGA wiki.

System developers who are responsible for defining or maintaining transforms using Configuration Editor must have access to and experience working with TRIRIGA databases.

Before you can define and maintain ETL transforms using Tivoli Directory Integrator Configuration Editor, you must complete the following tasks:

• Create the source and destination tables
• Establish the corresponding mappings
• Identify the variables that need to be passed into the transform
• Add the variables to the transform business object or form

Installing Tivoli Directory Integrator Configuration Editor
To create or modify Tivoli Directory Integrator AssemblyLines, you must install Configuration Editor on the workstation that you will use for ETL development.

Procedure

1. Download the Tivoli Directory Integrator install package appropriate for your platform at http://www.ibm.com/software/howtobuy/passportadvantage/pao_customers.htm.

Option Description

TDI711_TAP340_Install_Wind.zip IBM Tivoli Directory Integrator V7.1.1 Installer for IBM TRIRIGA Application Platform V3.4.0 on Windows Multilingual

TDI711_TAP340_Install_Linux.tar IBM Tivoli Directory Integrator V7.1.1 Installer for IBM TRIRIGA Application Platform V3.4.0 on Linux Multilingual

TDI711_TAP340_Install_SOLIntl.tar IBM Tivoli Directory Integrator V7.1.1 Installer for IBM TRIRIGA Application Platform V3.4.0 on Solaris Intel Multilingual

TDI711_TAP340_Install_SOLSprc.tar IBM Tivoli Directory Integrator V7.1.1 Installer for IBM TRIRIGA Application Platform V3.4.0 on Solaris Sparc Multilingual

TDI711_TAP340_Install_AIX.tar IBM Tivoli Directory Integrator V7.1.1 Installer for IBM TRIRIGA Application Platform V3.4.0 on AIX Multilingual

2. Update to IBM Tivoli Directory Integrator V7.1.1 Fixpack 4 or later, which is available at http://www.ibm.com/support/docview.wss?uid=swg27010509.

3. From the TRIRIGA install directory, copy the JDBC driver for the database type your AssemblyLines will connect to into the Tivoli Directory Integrator directory TDI Install Directory/jars.

• For SQL Server, copy jtds-1.2.8.jar


• For Oracle, copy ojdbc6.jar
• For DB2, copy db2jcc4.jar

Changing the ports that are used by Tivoli Directory Integrator
You specify the ports that are used by Tivoli Directory Integrator during installation. In rare cases, you may need to change these port settings later.

About this task

To change the port on which TRIRIGA sends ETL transforms to Tivoli Directory Integrator to execute, change TDI_HTTP_SERVER_PORT in TRIRIGAWEB.properties.

To change the port that is used by the Tivoli Directory Integrator Agent to manage the Tivoli Directory Integrator server, complete the following tasks:

• Change TDI_SERVER_PORT in TRIRIGAWEB.properties
• Change api.remote.naming.port in TRIRIGA_Install_Directory/TDI_IE/TDISolDir/solution.properties
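
For example, the relevant entries look similar to the following sketch; the port values shown are illustrative only, not required settings:

# TRIRIGAWEB.properties (illustrative values)
TDI_HTTP_SERVER_PORT=1098
TDI_SERVER_PORT=1099

# TRIRIGA_Install_Directory/TDI_IE/TDISolDir/solution.properties (illustrative value)
api.remote.naming.port=1099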

Getting started with Tivoli Directory Integrator Configuration Editor
To get ready to define and maintain ETLs with Configuration Editor, you must first learn basic tasks, such as opening Configuration Editor, understanding the views, and creating a project, AssemblyLine, hook, script, or connector, and importing a configuration file.

Configuration Editor is started by using the ibmditk wrapper script. This script is in the Tivoli Directory Integrator installation directory. Choose a workspace folder to store your projects and files.

The workspace window of Configuration Editor displays the following views:

• The navigator (upper left) contains all of the projects and source files for server configurations and Tivoli Directory Integrator solutions. The navigator can also contain other files and projects, such as text files. Configuration Editor treats Tivoli Directory Integrator projects specifically, so other files and projects remain unaffected by the Configuration Editor.

• The servers view (lower left) shows the status for each of the servers that are defined in the TDI Servers project. You can define an unlimited number of servers. The server view provides a number of functions to operate on servers and their configurations. The Refresh button refreshes status for all servers in the view.

• The editor area (upper right) is where you open a document, such as an AssemblyLine configuration, to edit. This area is split vertically with an area that contains various views to provide other relevant information. Among the most important are the Problems view that shows potential problems with a Tivoli Directory Integrator component, the Error Log that shows errors that occur while you are developing solutions, and the Console view that shows the console log for running Tivoli Directory Integrator servers, for example, those that are started by Configuration Editor.

Common activities include the following basic tasks:

• To create a project, right-click File > New > Project.
• To create an AssemblyLine, select a project from the navigator and right-click File > New > AssemblyLine. An AssemblyLine is a set of components that are strung together to move and transform data. An AssemblyLine describes the route along which the data will pass. The data that is handled through that journey is represented as an Entry object. The AssemblyLine works with a single entry at a time on each cycle of the AssemblyLine. It is the unit of work in Tivoli Directory Integrator and typically represents a flow of information from one or more data sources to one or more targets. You must enter the name that you give the new AssemblyLine when you create the ETL job item that runs this AssemblyLine from TRIRIGA.

• To add an AssemblyLine hook, in the editor area, click Options > AssemblyLine Hooks. Enable the checkbox next to one of the hooks and click Close. After the hook has been added, you can select the hook and add JavaScript code.


• To add a script, in the editor area, select either the Feed or Data Flow folder. Right-click and select Add Component. Select the scripts filter, then select a component and click Finish. Select the script that was added to your AssemblyLine and add a user-defined block of JavaScript code.

• To add a connector, in the editor area, select either the Feed or Data Flow folder. Right-click and select Add Component. Select the connectors filter, then select a component and click Finish. Select the connector that was added to your AssemblyLine and specify the required configuration data.

• To import a configuration file, click File > Import, then select IBM Tivoli Directory Integrator and Configuration. Click Next, specify a configuration file, then click Finish. When prompted for the project name, enter a name and click Finish.

For more information on using Tivoli Directory Integrator, see the IBM Tivoli Directory Integrator Version 7.1.1 information center at http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/topic/com.ibm.IBMDI.doc_7.1.1/welcome.htm.

Tivoli Directory Integrator Agent
The Tivoli Directory Integrator Agent provides the interface to start and stop the Tivoli Directory Integrator runtime server from within TRIRIGA.

The scheduled ETL process is available with the following licenses:

• Any IBM TRIRIGA Workplace Performance Management license
• An IBM TRIRIGA Real Estate Environmental Sustainability Manager license
• An IBM TRIRIGA Real Estate Environmental Sustainability Impact Manager license
• An IBM TRIRIGA Workplace Reservation Manager license
• An IBM TRIRIGA Workplace Reservation Manager for Small Installations license

The Tivoli Directory Integrator server must be running for Tivoli Directory Integrator type ETL job items to run successfully. ETL job items fail if the Tivoli Directory Integrator server is not running, and an error is printed to the TRIRIGA server log.

To start or stop the Tivoli Directory Integrator server, go to the Agent Manager panel in the Administrator Console and start or stop the Tivoli Directory Integrator Agent. Once started, this agent runs on a schedule and monitors the Tivoli Directory Integrator server. If it finds the Tivoli Directory Integrator server not running, it attempts to restart it.

For more information on the Agent Manager panel, see the IBM TRIRIGA Application Platform 3 Administrator Console User Guide.

Tivoli Directory Integrator agent configuration
You can configure properties to check that the Tivoli Directory Integrator server is running.

The TDI_AGENT_SLEEPTIME property for the application server that runs the Tivoli Directory Integrator Agent controls how often the agent checks if the Tivoli Directory Integrator server is running and tries to restart it.

The TDI_SERVER_TIMEOUT property for the application server that runs the Tivoli Directory Integrator Agent controls how long to wait for a Tivoli Directory Integrator server start or stop command to complete before raising a failure.

These properties are defined in the TRIRIGAWEB.properties file.
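
For example, the entries look similar to the following sketch; the values are illustrative and the time units are an assumption, so confirm them against the comments in your TRIRIGAWEB.properties file:

# TRIRIGAWEB.properties (illustrative values; confirm units in your file)
TDI_AGENT_SLEEPTIME=5
TDI_SERVER_TIMEOUT=300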

Tivoli Directory Integrator agent logging
Issues with the Tivoli Directory Integrator Agent are logged in the server.log. To obtain additional information, turn on debug logging in the Tivoli Directory Integrator Agent by changing platform logging settings in the Admin Console on the server where the agent is running.

Additional logging configuration can be done by changing the log4j settings on the server where the agent is running.

For example, you can monitor problems with the Tivoli Directory Integrator server by following standard log4j conventions. If the Tivoli Directory Integrator Agent is unable to start the Tivoli Directory Integrator server, it writes an error to the following log4j category:


"com.tririga.platform.tdi.agent.TDIAgent.TDISTARTFAILED". By default, errors written to this category goto the TRIRIGA server log. You can configure different behavior, such as sending a mail notification, byconfiguring that category to write to a different log4j appender.

Running transforms from Tivoli Directory Integrator Configuration Editor
To run a transform that is either in process of development or completed, save the transform and then click Run from the toolbar.

About this task
A new window or tab is opened in the editor area with the same name as the AssemblyLine. This window will show the results from the execution of the AssemblyLine. You can also use the Debugger button to see the changes to the input stream as each step is performed.

Tivoli Directory Integrator Configuration Editor example transform
The following information provides an example of some of the steps involved in a Tivoli Directory Integrator transform.

You can download a copy of the 'Load Meter Item Staging Table' .xml configuration file that is contained in an existing ETL job item to follow along in the step descriptions. Many of the as-delivered ETLs follow the same flow except the specifics are different; for example, the database tables from which data is pulled and how the data is transformed.

This example transform does the following:

• Specifies the logging parameters that are used by this AssemblyLine, including Logger Type, File Path, Append, Date Pattern, Layout Pattern, Log Level, and Log Enabled.
• Establishes a database connection by using the AssemblyLine connection parameters as configured for DB-TRIRIGA.
• Retrieves the Energy Log Lag Time parameter from the Application Settings table.
• Uses a JDBC connector to retrieve records from the Asset Hourly Fact table.
• Uses an Input Attribute Map to map the database columns to internal Tivoli Directory Integrator work entry attributes.
• Uses Tivoli Directory Integrator hooks and scripts to perform data transforms.
• Uses a JDBC connector to insert or update records in the Environmental Meter Item staging table.
• Uses an Output Attribute Map to map internal work variables to the database columns in the table that is configured in the JDBC Output Connector.
• Uses the On Success hook to complete the following tasks:
  – Log processing statistics for the run
  – Set errCount and errMsg
• Uses the Tivoli Directory Integrator On Failure hook to complete the following tasks:
  – Log error messages
  – Set errCount and errMsg

The following sections provide more details about the key components of the Tivoli Directory Integrator transform. The discussion concentrates on the main components that are used by the TRIRIGA-delivered transform. Tivoli Directory Integrator provides many other components that you can use to manipulate your data depending on your transform needs.

Consider the following tips as you build a transform:

• Use the testing features that are available as you build your AssemblyLine to make sure that your transform is doing what you want.

• Transforms need to be developed in a defensive manner. For example, if you are doing calculations that use specific fields, all rows must have a value in these fields, no empties. If not, the transform crashes. Add verification logic to make sure all fields that are used in a calculation have a value.


• Check for null or undefined variables and return errors if the null or undefined variables are needed for your AssemblyLine to succeed. For example, if your AssemblyLine depends on the DB-OTHER variables (for example jdbcOthUrl, jdbcOthDriver) in order to make a connection to an external database, it must check that those variables are defined and handle the error case appropriately (see the sketch after this list).

• Dates are difficult because all of the databases that TRIRIGA supports keep DATE and TIME in the date field.

• Make sure to use the AssemblyLine Connections and make your transform database independent, especially if your solution needs to run on multiple database platforms (DB2, Oracle, and Microsoft SQL Server).

• Test your AssemblyLine Connections before running the Tivoli Directory Integrator transform. Otherwise, your transform might end with a database connection error.
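
As an illustration of the defensive checks described in this list, a Prolog or script hook might verify the optional DB-OTHER parameters before using them. This is a minimal sketch that reuses only calls shown elsewhere in this example:

// Minimal sketch: verify optional DB-OTHER parameters before connecting
var op = task.getOpEntry();
var jdbcOthURL = op.getString("jdbcOthURL");
var jdbcOthDriver = op.getString("jdbcOthDriver");
if (jdbcOthURL == null || jdbcOthDriver == null)
{
  task.logmsg("ERROR", "DB-OTHER connection parameters are not configured.");
  errCount = 1;
  system.abortAssemblyLine("DB-OTHER connection parameters missing");
}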

Configuring loggers
Configure the AssemblyLine loggers and specify the logging parameters that will be used by this assembly, such as Logger Type, File Path, Append, Date Pattern, Layout Pattern, Log Level, and Log Enabled.

About this task

Note: Adhere to the following log settings conventions so that TRIRIGA will handle your ETL logging correctly when the transform is run from TRIRIGA:

• When you set the file path, specify the relative path ../../log/TDI/log name where log name is the name of the AssemblyLine. For example, File Path = ../log/TDI/triLoadMeterData.log. This ensures the triLoadMeterData.log can be viewed from the TRIRIGA Admin console with the rest of the TRIRIGA logs.

• Set the Logger Type to DailyRollingFileAppender to be consistent with other TRIRIGA logs.

These log settings apply to logs that are created when the transform is run from Configuration Editor. Some of the values will be overridden when the transform is invoked from a TRIRIGA ETL Job Item. Log Level, Date Pattern, and Layout Pattern will be overridden with values specified in the TRIRIGA log4j .xml file in order to make Tivoli Directory Integrator logs consistent with TRIRIGA logs and configurable in the same way as TRIRIGA logs.

Procedure

In the editor area, click Options > Log Settings to configure the AssemblyLine loggers and specify the logging parameters that will be used by this assembly.

Example
In the example, we have configured a DailyRollingFileAppender log with the file path set to ../../log/TDI/triLoadMeterData.log. The AssemblyLine hooks can write data to this log by invoking a command similar to the following:

var methodName = "triLoadMeterData - LookupMeterItemDTO ";
task.logmsg("DEBUG", methodName + "Entry");

Initializing AssemblyLines
During AssemblyLine initialization, the script engine is started and the AssemblyLine's Prolog Hooks are invoked.

Procedure

1. In the editor area, create an AssemblyLine Hook by clicking Options > Assembly Line Hooks and enable the Prolog – Before Init checkbox.

2. Add the following JavaScript code to the script to establish a database connection and retrieve the Application Setting parameters:


var methodName = "triLoadMeterData - Prolog - Before Init - "; // By default, this AL is setup to run from TRIRIGA. To run this AL from TDI CE, // the runStandAlone flag should be set to 1 and the database connection // parameters should be specified below.var runStandAlone = 1;

// Retrieve database connection parameters passed into this AL from TRIRIGAif (runStandAlone == 0){ task.logmsg("DEBUG", methodName + "Set TRIRIGA db connection parameters"); var op = task.getOpEntry(); var jdbcTriURL = op.getString("jdbcTriURL"); var jdbcTriDriver = op.getString("jdbcTriDriver"); var jdbcTriUser = op.getString("jdbcTriUser"); var jdbcTriPassword = op.getString("jdbcTriPassword");

}else// Modify these database connection parameters if running directly from TDI{ task.logmsg("DEBUG", methodName + "StandAlone: Set default TRIRIGA db connection parameters"); var jdbcTriURL = "jdbc:oracle:thin:@1.1.1.1:1521:test"; var jdbcTriDriver = "oracle.jdbc.driver.OracleDriver"; var jdbcTriUser = "userid"; var jdbcTriPassword = "password";}

try{ triConn.initialize(new Packages.com.ibm.di.server.ConnectorMode("Iterator"));}catch (err){ task.logmsg("ERROR", methodName + "TRIRIGA Connection Failed"); task.logmsg("DEBUG", methodName + "Exception:" + err); dbConnectionFailed = true; errCount=1; system.abortAssemblyLine("TRIRIGA DB Connection Failed"); return;}triConn.setCommitMode("After every database operation (Including Select)");var conn2 = triConn.getConnection();conn2.setAutoCommit(true);task.logmsg("DEBUG", methodName + "TRIRIGA connection success");

// Get application settingstask.logmsg("DEBUG", methodName + "Get triALEnergyLogLagTimeNU from Application Settings");

var selectStmt1 = conn1.createStatement();var query = "select TRIALENERGYLOGLAGTIMEN from T_TRIAPPLICATIONSETTINGS";task.logmsg("DEBUG", methodName + "query:" + query);var rs1 = selectStmt1.executeQuery(query);var result = rs1.next();while (result){ try { energyLogLagTime = rs1.getString("TRIALENERGYLOGLAGTIMEN"); if (energyLogLagTime == null) energyLogLagTime=5 } catch (err) { task.logmsg("INFO", methodName + "Setting Default Values for Application Settings"); energyLogLagTime=5 } task.logmsg("INFO", methodName + "energyLogLagTime:" + energyLogLagTime); result = rs1.next();}rs1.close();selectStmt1.close();


3. Save the AssemblyLine hook.

Example
In our example, the following tasks are completed in the Prolog – Before Init hook:

• Retrieve Tivoli Directory Integrator transformation variables passed into the AssemblyLine. This only occurs when the AssemblyLine is run from an ETL job item. When the AssemblyLine is invoked from the Configuration Editor, the JDBC connection parameters need to be configured in the Prolog.
• Establish database connections.
• Retrieve AssemblyLine specific settings from the Application Settings record.

Data retrieval
Data enters the AssemblyLine from connected systems using Connectors and some sort of input mode.

In our example, we use a JDBC connector and an attribute map to feed data to our AssemblyLine.

Connector

Table input is the source of most of your data. This is where you set up a connector to pull data from tables by using the database connection parameters passed to the AssemblyLine during initialization.

In our example, the connector named JDBCConnectorToHourlyFact retrieves input rows from the Hourly Fact table. This component is added to the feed section of the AssemblyLine.

The Connection tab allows you to configure JDBC connection parameters. In the example, we use Advanced (Javascript) to retrieve the connection parameters passed to the AssemblyLine during initialization. In addition, the following SQL query is specified under the Advanced section in the SQL Select field to narrow the scope of the data retrieved from the database table.

select * from T_TRIASSETENERGYUSEHFACT where TRIMAINMETERBL in ('TRUE') and TRIMAINMETERPROCESSEDN = 0 order by TRIWRITETIMETX

Tip: To enable logging of SQL statements for a connector, enable the Detailed Log checkbox on the connector's Connection tab.

Attribute map

Use a Connector Attribute Map to ensure certain fields have a value or to set a field to a different value. In the Attribute Map, you can set values to a target field based on the values of a source field. If the target field is not specified, the field can be set to a default value instead of the target field.

Click the JDBCConnectorToHourlyFact connector, then click the Input Map tab to display the Attribute Map. In the example, we use the map to set default values for some of the fields.

Once the first Connector has done its work, the bucket of information (the "work entry", called, appropriately, "work") is passed along the AssemblyLine to the next Component.

Scripting
Scripts can be added to implement a user-defined block of JavaScript code.

In the example, scripting is used to add custom processing to the AssemblyLine. The following sections describe some of the different ways that scripting was used.

Validation

In our example, we perform field validation on our input data as shown in the script below:

var methodName = "triLoadMeterData - JDBCConnectorToHourlyFact - GetNext Successful ";

task.logmsg("DEBUG", methodName + "Entry");

rowsProcessed = parseInt(rowsProcessed) + 1;


var SPEC_ID = work.SPEC_ID.getValue();
task.logmsg("DEBUG", methodName + "SPEC_ID: " + SPEC_ID);

// Verify Required Fields and do not process the record if any values are missing
validData = true;
if (work.TRICAPTUREDDT.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRICAPTUREDDT is null.");
  validData = false;
}
if (work.TRICOSTNU.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRICOSTNU is null.");
  validData = false;
}
if (work.TRICOSTNU_UOM.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRICOSTNU_UOM is null.");
  validData = false;
}
if (work.TRIDIMASSETTX.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRIDIMASSETTX is null.");
  validData = false;
}
if (work.TRIENERGYTYPECL.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRIENERGYTYPECL is null.");
  validData = false;
}
if (work.TRIENERGYTYPECLOBJID.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRIENERGYTYPECLOBJID is null.");
  validData = false;
}
if (work.TRIMETERIDTX.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRIMETERIDTX is null.");
  validData = false;
}
if (work.TRIRATENU.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRIRATENU is null.");
  validData = false;
}
if (work.TRIRATENU_UOM.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRIRATENU_UOM is null.");
  validData = false;
}
if (work.TRIWRITETIMETX.getValue() == null)
{
  task.logmsg("ERROR", methodName + "TRIWRITETIMETX is null.");
  validData = false;
}

if (!validData)
{
  rowsNotValid = rowsNotValid + 1;
  task.logmsg("ERROR", methodName + "Record will NOT be processed.");
  var selectStmt1 = conn1.createStatement();
  var query = "update " + tableName + " set TRIMAINMETERPROCESSEDN = 3 where SPEC_ID = \'" + SPEC_ID + "\'";
  task.logmsg("DEBUG", methodName + "query:" + query);
  var count = selectStmt1.executeUpdate(query);
  task.logmsg("DEBUG", methodName + "Update count:" + count);
  selectStmt1.close();
  system.exitBranch();
}


Data filtering

In our example, there is a script that determines if a record should be processed. The script below includes a conditional statement to compare two date fields. If the condition is true, the system.exitBranch() method is called, which results in the current record being skipped.

// Should we process this record?
if (capturedDT < earliestDateUTC)
{
  task.logmsg("DEBUG", methodName + "Skip!");
  rowsSkipped = rowsSkipped + 1;
  // Set triMainMeterProcessedNU=2 flag in T_TRIASSETENERGYUSEHFACT table
  // indicating that we will not process this record
  var selectStmt1 = conn1.createStatement();
  var query = "update " + tableName + " set TRIMAINMETERPROCESSEDN = 2 where SPEC_ID = \'" + SPEC_ID + "\'";
  task.logmsg("DEBUG", methodName + "query:" + query);
  var count = selectStmt1.executeUpdate(query);
  task.logmsg("DEBUG", methodName + "Update count:" + count);
  selectStmt1.close();
  system.exitBranch();
}

Call DB procedure

Stored procedures can be called from JavaScript. Information can flow out through the procedure and information can flow back to the transform.

Our example does not implement a stored procedure call, but an example has been provided below. Here is an example of how to create sequences for the fact table entries. It calls a DB Procedure in either DB2, SQL Server, or Oracle called NEXTVAL. The following is an example of calling the NEXTVAL stored procedure:

// Stored procedure call
task.logmsg("DEBUG", methodName + "Call Stored Procedure");
var command = "{call NEXTVAL(?,?,?)}";
try
{
  cstmt = conn2.prepareCall(command);
  cstmt.setString(1, "SEQ_FACTSOID");
  cstmt.setInt(2, 1);
  cstmt.registerOutParameter(3, java.sql.Types.INTEGER);
  cstmt.execute();
  result = cstmt.getInt(3);
  task.logmsg("DEBUG", methodName + "Result:" + result);
  work.setAttribute("SPECID", result);
  cstmt.close();
}
catch (e)
{
  task.logmsg("DEBUG", "Stored Procedure call failed with exception:" + e);
}

Joining data from different data sources by using a lookup connector
A lookup can be performed that enables you to join data from different data sources. This action can be implemented using a custom script or using a lookup connector.

About this task
In our example, we use a script to perform a database lookup and pull additional data into the data stream.

Procedure

1. In the editor area, click Data Flow to create a script.
2. Click Add Component, select the Empty Script component, and click Finish.
3. Add the following JavaScript code to the script to perform a lookup on a record in the Meter Item DTO table.


var methodName = "triLoadMeterData - LookupMeterItemDTO ";

task.logmsg("DEBUG", methodName + "Entry");

// Lookup Meter Item DTO record
var recordFound = false;
var selectStmt1 = conn1.createStatement();
var rs1 = selectStmt1.executeQuery("select DC_SEQUENCE_ID, TRICOSTPERUNITNU, TRIQUANTITYNU from S_TRIENVMETERITEMDTO1 where TRIMETERIDTX = \'" + work.TRIMETERIDTX.getValue() + "\' and TRITODATEDA = " + TRITODATEDA );
var result = rs1.next();
while (result)
{
...
}
rs1.close();
selectStmt1.close();

Output of entries to a data source
During output, transformed data is passed along the AssemblyLine to another JDBC connector in some output mode, which outputs the data to the connected system. Since the connected system is record-oriented, the various attributes in work are mapped to columns in the record using an Output Attribute Map.

Connector

Table output is the target of most of your data. This is where you set up a JDBC connector to insert or update data into tables using the database connection parameters passed to the AssemblyLine during initialization.

In our example, the connector named JDBCConnectorToMeterItemDTO is used to store information in the TRIRIGA Environmental Meter Log staging table. This component is added to the end of your AssemblyLine once you have all of the information generated to save.

The Link Criteria tab on the connector is used to specify the criteria used to generate a SQL statement to update the output table. In our example, the criteria include the following entries:

• TRIMETERIDTX equals $TRIMETERIDTX
• TRITODATEDA equals $TRITODATEDA

Tip: The $ is used to indicate that the variable name that follows should be replaced with the internal work entry value.

Attribute map

The connector has an Output Attribute Map that is used to specify the mapping between internal work variables and the column names in the output table.

Select the 'JDBCConnectorToMeterItemDTO' component, then click the Output Map tab. The Output Map page opens. Use this page to map the source fields to the target fields in the target database. Notice that the source fields include the additional fields added to the input stream.

Propagating status to ETL job items
All scripts must contain both an On Success and On Failure hook that populates the errCount and errMsg work attributes to report status back to TRIRIGA when the transform is run from an ETL job item.

About this task

The errCount work attribute is the number of errors encountered when the AssemblyLine file ran. The errMsg work attribute is the error message that is written to the TRIRIGA log.

Procedure

1. To add AssemblyLine hooks, in the editor area, click Options > AssemblyLine Hooks.


2. Select the checkboxes next to the On Success and On Failure hooks and click Close.
3. Modify the On Success hook to set the errCount and errMsg work attributes by using the following code.

// Set errCount and errMessage for AL
if (task.getResult() == null)
{
  var result = system.newEntry();
  result.errCount = errCount;
  result.errMsg = 'Assembly line completed successfully.';
  task.setWork(result);
}
else
{
  work.errCount = errCount;
  work.errMsg = 'Assembly line completed successfully.';
}

4. Modify the On Failure hook to set the errCount and errMsg work attributes by using the following code.

// Set errCount and errMessage for triDispatcher AL
if (task.getResult() == null)
{
  var result = system.newEntry();
  result.errCount = 1;
  result.errMsg = 'Assembly line failed.';
  task.setWork(result);
}
else
{
  work.errCount = 1;
  work.errMsg = 'Assembly line failed.';
}

Transform testing in Tivoli Directory Integrator
Testing after each step makes debugging easier.

You need to save before running a transform; otherwise Tivoli Directory Integrator will invoke the AssemblyLine without the changes you have made since the last save.

As you develop the AssemblyLine you can test it by either running to completion or by stepping through the components one by one. There are two buttons to run the AssemblyLine. The Run in Console action starts the AssemblyLine and shows the output in a console view. The Debugger action runs the AssemblyLine with the debugger.

The process of starting an AssemblyLine goes through the following steps:

1. If the AssemblyLine contains errors, such as missing output maps, you will be prompted to confirm running the AssemblyLine with the following message: This AssemblyLine has one or more errors in it. Proceed with run?

2. The next check is whether the Tivoli Directory Integrator server is available. If the server is unreachable, you will see this message: Connection to server Default.tdiserver cannot be obtained.

3. Finally, Configuration Editor transfers the runtime configuration to the server and waits for the AssemblyLine to be started. In this step you will see a progress bar in the upper right part of the window. The toolbar button to stop the AssemblyLine is also grayed out as it hasn't started yet. Once the AssemblyLine is running, the progress bar will be spinning and you should start seeing messages in the log window. You can now stop the AssemblyLine by clicking the Stop action in the toolbar.

Moving ETL scripts into TRIRIGA from Tivoli Directory Integrator
Once the transform is completed and tested, it must be uploaded to the TRIRIGA ETL job item.

The following graphic describes the flow between the ETL environment and TRIRIGA.


Setting up the TRIRIGA database AssemblyLine connections
Before you upload the runtime configuration file to a TRIRIGA ETL job item record, you must configure the TRIRIGA database AssemblyLine connections with the parameters to be passed to the Tivoli Directory Integrator assembly during initialization.

Procedure

1. Click Tools > System Setup > General > Application Settings.
2. On the Environmental Settings tab, in the AssemblyLine Settings section, configure the database AssemblyLine connections.
a) Required: Configure DB-TRIRIGA as the connection to your TRIRIGA database. If this connection is not configured, Tivoli Directory Integrator ETL job items will not run.
b) Optional: Configure DB-OTHER if your Tivoli Directory Integrator AssemblyLines must connect to an external database. Because DB-OTHER is optional, TRIRIGA does not verify that it is set. Therefore the AssemblyLine must check that this connection is set before using it.

3. Click Test DB Connection to verify that the connection data is correct.

Retrieving the AssemblyLine parameters
After the TRIRIGA database AssemblyLine connections have been configured, modify the AssemblyLine to retrieve the parameters.

Procedure

1. Open the Prolog script that initializes the AssemblyLine.
2. Add the following statements to retrieve the TRIRIGA-DB connection parameters.

var op = task.getOpEntry();
var jdbcTriURL = op.getString("jdbcTriURL");
var jdbcTriDriver = op.getString("jdbcTriDriver");
var jdbcTriUser = op.getString("jdbcTriUser");
var jdbcTriPassword = op.getString("jdbcTriPassword");

3. Add the following statements to retrieve the OTHER-DB connection parameters.

var op = task.getOpEntry();
var jdbcOthURL = op.getString("jdbcOthURL");
var jdbcOthDriver = op.getString("jdbcOthDriver");
var jdbcOthUser = op.getString("jdbcOthUser");
var jdbcOthPassword = op.getString("jdbcOthPassword");

4. Save the changes.

Moving ETL scripts into TRIRIGA
After you have developed an ETL script in Tivoli Directory Integrator Configuration Editor, you must move it into TRIRIGA.

About this task
Tivoli Directory Integrator Configuration Editor automatically stores the runtime configuration file in the following location: Workspace/ProjectName/Runtime-ProjectName/ProjectName.xml.

Procedure

1. Create an ETL job item record with the Job Item Type field set to Tivoli Directory Integrator Transformation.

2. Set the Assembly Line Name field to the name of the AssemblyLine that you want to start after loading the configuration file.

3. Set the Transform File field by uploading the runtime config file that you created in Configuration Editor.

4. If your ETL transform requires any files as input, associate those files to the ETL job item as resource files. For example, if your ETL transform must process data from a spreadsheet file, you must associate that file as a resource file.
a) In the Resource File section, click Add.
b) Set the Resource Name to a name that is used to identify the resource file. This name will be part of the temporary resource file name that TRIRIGA sends to the ETL transform. This is useful if you associate more than one Resource File to an ETL job item because it enables the ETL transform to identify the multiple files that are sent to it by name.
c) To set the Resource File field, click the Upload Resource File icon next to the field, specify the file location, and click OK to upload this file to the Resource File field.

Variables passed to Tivoli Directory Integrator
TRIRIGA passes several input variables to Tivoli Directory Integrator ETLs.

To use any of these variables in an AssemblyLine, access them via the Tivoli Directory Integrator getOpEntry() interface. For example: var op = task.getOpEntry(); var jdbcTriURL = op.getString("jdbcTriURL");

TRIRIGA extracts field values from the ETL job item record and passes them as input parameters. The field types that are supported are Text, Boolean, Date, Date and Time, Locators, and Numbers. Any ETL job item field of these types is passed as a variable.

If an ETL job item has the following fields, the variables in the second table will be passed to Tivoli Directory Integrator:

Table 6. ETL job item fields

Field name Field label Field type

triActiveEndDA Active End Date Date

triActiveStartDA Active Start Date Date

triControlNumberCN Control Number Control Number

triCreatedByTX Created By Text

triLocator triLocator Text


triNameTX Name Text

triTransformBI Transform File Binary

The variables that are passed to Tivoli Directory Integrator are as follows:

Table 7. Variables passed to Tivoli Directory Integrator

Variable passed to Tivoli Directory Integrator Description

triNameTX (Text)

triActiveStartDA (Number) date in milliseconds since January 1, 1970

triActiveStartDA_DATE (Date) wrapped if Oracle or DB2, time is whatever it was on the attribute

triActiveStartDA_MinDATE (Date) wrapped if Oracle or DB2, time is 00:00:00

triActiveStartDA_MaxDATE (Date) wrapped if Oracle or DB2, time is 23:59:59

triActiveStartDA_Min (Number) date in milliseconds since January 1, 1970, time is 00:00:00

triActiveStartDA_Max (Number) date in milliseconds since January 1, 1970, time is 23:59:59

triActiveEndDA (Number) date in milliseconds since January 1, 1970

triActiveEndDA_DATE (Date) wrapped if Oracle or DB2, time is whatever it was on the attribute

triActiveEndDA_MinDATE (Date) wrapped if Oracle or DB2, time is 00:00:00

triActiveEndDA_MaxDATE (Date) wrapped if Oracle or DB2, time is 23:59:59

triActiveEndDA_Min (Number) date in milliseconds since January 1, 1970, time is 00:00:00

triActiveEndDA_Max (Number) date in milliseconds since January 1, 1970, time is 23:59:59

triCreatedByTX (Text)

triRunDATE (Number) Run Date set by Custom workflow task

triLocator (Text – Locator) is a locator field that contains a reference to another business object. This variable contains the text value of that record's field

triLocator_IBS_SPEC (Text - Locator) contains the spec_id of the record in the triLocator field. You can use this spec_id to find information related to that record through other database tables

triAssemblyLineNameTX The name of the main AssemblyLine in the ETL transform that you want to run


TRIRIGA also passes Resource File variables. A Resource File is a file that an ETL transform requires as input. For example, if an ETL transform must process data from a Comma Separated Value (.csv) file, it needs to know where and how to reference that file when it runs.

If an ETL job item has two associated Resource File records, each includes the following fields:

Table 8. Fields associated with Resource File records

Field name Field label Field type

triResourceNameTX Name that is used to identify the resource file Text

triResourceFileBI Contents of the resource file Binary

The variables that are passed to Tivoli Directory Integrator for these Resource File records are as follows:

Table 9. Variables passed to Tivoli Directory Integrator

Variable passed to Tivoli Directory Integrator Description

RESOURCE_1 Fully qualified file name of the resource file. File name contains value from triResourceNameTX field to help ETL transform identify it

RESOURCE_2 Fully qualified file name of the other resource file. File name contains value from triResourceNameTX field to help ETL transform identify it
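
As an illustration, an AssemblyLine hook might read the RESOURCE_1 variable and open the file it points to. This is a minimal sketch that uses Java I/O from the script; the processing inside the loop is left as a placeholder:

// Minimal sketch: read the temporary resource file passed as RESOURCE_1
var op = task.getOpEntry();
var resourceFile1 = op.getString("RESOURCE_1");
if (resourceFile1 != null)
{
  task.logmsg("DEBUG", "Processing resource file: " + resourceFile1);
  var reader = new java.io.BufferedReader(new java.io.FileReader(resourceFile1));
  var line = reader.readLine();
  while (line != null)
  {
    // process each line of the resource file here
    line = reader.readLine();
  }
  reader.close();
}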

Table 10. TRIRIGA also passes JDBC input variables that are defined in the Application Settings.

Variable passed to Tivoli Directory Integrator Description

jdbcTriURL TRIRIGA database driver URL

jdbcTriDriver TRIRIGA database driver name

jdbcTriUser TRIRIGA database user name

jdbcTriPassword TRIRIGA database password

jdbcOthURL Other database driver URL

jdbcOthDriver Other database driver name

jdbcOthUser Other database user name

jdbcOthPassword Other database password

Debugging ETL scripts in the application
To debug ETL scripts in the application, you must first set up logging and then trigger the RunETL Custom workflow task to view the log information.

Setting up logging
TRIRIGA provides debugging capabilities when ETL scripts run in the TRIRIGA application.

Procedure

1. In the Administrator Console, select the Platform Logging managed object. Then select the option to turn on ETL logging.

2. Select Category ETL > Transforms > Run Transform to turn on debug logging in the TRIRIGA platform code that processes ETL job items. Log messages are printed to server.log.


3. Select Category ETL > Transforms > Tivoli Directory Integrator to turn on debug logging in the Tivoli Directory Integrator AssemblyLines. Log messages are printed to the AssemblyLine log. Each AssemblyLine has its own log file.

4. Apply the changes. Now when an ETL Script runs, ETL related information will be put into the server log or AssemblyLine log.

Important: Because of the large volume of information you may encounter in a log, set Tivoli Directory Integrator logging to debug for only one execution of the ETL job item.

Debugging using ETL jobs
Once you have set up logging, you will need to trigger the RunETL Custom workflow task to see any information in the logs.

Procedure

If you are using the ETL Job Item, then you can simply click Run Process on that form.Do not forget to fill the field values in the form that the ETL Script would expect.

Note: Only use the Run Process action for debugging purposes. For production, use the Job Schedulerinstead. Note that Run Process will update tables in the database, so do not use this action in aproduction environment.

Example

The following shows a sample server log output:

2014-03-27 13:18:10,427 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: getETLVarsFromFields
...
2014-03-27 13:18:10,431 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: getETLVarsFromFields
2014-03-27 13:18:10,431 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: getETLVarsFromResourceFiles
2014-03-27 13:18:10,432 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: getETLVarsFromResourceFiles
2014-03-27 13:18:10,432 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Processing Job Item with Type = Tivoli Directory Integrator Transformation
2014-03-27 13:18:10,432 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: transformRecordTDI
2014-03-27 13:18:10,474 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.TDIRequest] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: init
2014-03-27 13:18:10,474 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.TDIRequest] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: init
2014-03-27 13:18:10,483 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) *** ETL Variable = triIdTX : triLoadMeterData
2014-03-27 13:18:10,483 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) *** ETL Variable = triAssemblyLineNameTX : triLoadMeterData
...
2014-03-27 13:18:10,483 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) *** ETL Variable = triNameTX : Load Meter Item Staging Table

2014-03-27 13:18:10,488 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.DataSourceConnectionInfoImpl] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: init
2014-03-27 13:18:10,495 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.DataSourceConnectionInfoImpl] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: init
2014-03-27 13:18:10,495 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.DataSourceConnectionInfoImpl] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: init
2014-03-27 13:18:10,496 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.DataSourceConnectionInfoImpl] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: init
2014-03-27 13:18:10,497 INFO [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Setting TDI log level to Debug.
2014-03-27 13:18:10,503 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.LogSettingsServiceImpl] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: getLogSettings
2014-03-27 13:18:10,503 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.LogSettingsServiceImpl] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Found DailyRollingFileAppender.
2014-03-27 13:18:10,503 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.LogSettingsServiceImpl] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: getLogSettings
2014-03-27 13:18:10,503 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.TDIRequest] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Entry: send
2014-03-27 13:18:14,396 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.TDIRequest] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: send
2014-03-27 13:18:14,396 INFO [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) RunETL request returned from TDI server version: 7.1.1.3 - 2013-12-06 running on host: i3650x3cr2
2014-03-27 13:18:14,396 INFO [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) RunETL: Prepare=2014/03/27 13:18:10.475 Start=2014/03/27 13:18:10.503 Stop=2014/03/27 13:18:14.396
2014-03-27 13:18:14,396 INFO [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) RunETL: Processing ended after 3 seconds.
2014-03-27 13:18:14,396 DEBUG [com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL] (WFA:221931 - 15290804 triProcessManual:305676189 IE=305676189) Exit: transformRecordTDI

The following shows a sample AssemblyLine log:

2014-03-27 13:18:11,062 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] CTGDIS967I AssemblyLine started by triLoadMeterData_1395951491025.
2014-03-27 13:18:11,063 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] CTGDIS255I AssemblyLine AssemblyLines/triLoadMeterData is started.
2014-03-27 13:18:11,073 DEBUG [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] CTGDIS089I Current statistics: Interval=0, Maximum Errors=0, Maximum Read=0
2014-03-27 13:18:11,073 DEBUG [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] CTGDIS069I Loading Connectors.

...
2014-03-27 13:18:14,384 DEBUG [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success Entry
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success Processing Success
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success -----------------------------------------------------------------------------

2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success Processing Summary
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success rowsProcessed = 360
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success stagingTableWriteRowSuccess = 360
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success stagingTableWriteRowFail = 0
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success rowsSkipped = 0
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success rowsNotValid = 0
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success dcJobsToReadyState = 0
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success createdDT = 1395951491088
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success finishedDT = 1395951494384
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success seconds = 3
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success secondsPerRecord = 0.01
2014-03-27 13:18:14,384 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success -----------------------------------------------------------------------------
2014-03-27 13:18:14,386 DEBUG [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] triLoadMeterData - On Success Exit
2014-03-27 13:18:14,386 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] CTGDIS116I Scripting hook of type onsuccess finished.
2014-03-27 13:18:14,386 INFO [org.apache.log4j.DailyRollingFileAppender.a9e2d096-4cdc-4f87-a57b-5e31323093d9] CTGDIS080I Terminated successfully (0 errors).

Performance tuning tips

Use the following information to improve performance.

When you are done getting your ETL to do what you want it to do, take a baseline performance measurement.

1. Using the Tivoli Directory Integrator Configuration Editor, run the ETL against a database where you will have thousands of rows added to your fact table.

2. Make sure you are using a database connection and running the Tivoli Directory Integrator transformation on the network where the database lives so you do not have network latency. Do not run it through a VPN.

3. Review the information logged for your run. For example, from a run of the triLoadMeterData ETL, review the triLoadMeterData.log file.

Analysis of the run:

1. Analyze the run. Is there a step that is slower than the others?

2. Is the data input step slow? Should an index be added to the database? If so, add an index and rerun. Is the performance better? Maybe a filter step should be used instead of using the database to filter down the result set.

Tips for developing ETLs:

• Avoid complex SQL and aggregate functions like COUNT, MIN, MAX, and SUM. If you need to use them, see if an index will help the Data Input step; a minimal index sketch follows these tips. Do not create an index on a field that is a large varchar; SQL Server can only handle indexes < 900 bytes.

• Avoid OR and NOT, and avoid using views (M_TableName in IBM TRIRIGA databases) if possible.
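For example, if the Data Input step filters a staging table by building and reading date, an index on those columns can help. The following is a minimal sketch only; the table and column names are hypothetical and must be replaced with the names that your own staging or source table uses:

-- Hypothetical staging table and columns; adjust to your environment.
-- Keep the key columns narrow so the index stays under the 900-byte
-- limit that SQL Server places on index keys.
CREATE INDEX IDX_METER_STAGE_BLDG_DATE
  ON MY_METER_STAGING (BUILDING_ID, READING_DATE);

After you add the index, rerun the transformation and compare the step timings in the log against your baseline.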

Running ETL transforms

Use the TRIRIGA Job Scheduler to run the ETL job items and job groups that are used to move data into the TRIRIGA fact tables or flattened hierarchy tables.

ETL job items, job groups, and job schedulers

ETL job items define an ETL transformation script or rebuild hierarchy process that is used to capture information from the TRIRIGA database, transform it, and load it into fact tables. ETL job groups are simply collections of ETL job items. Job schedulers define when ETL job items and job groups are to run. A job schedule must be activated to run the jobs that are associated with it.

The TRIRIGA Workplace Performance Management analytic/reporting tool calculates metrics that are used to generate reports and graphs by using queries against fact tables that are grouped by the data in flat hierarchy tables. ETL job items are used by the Job Scheduler to trigger the workflows that extract data from source tables, such as TRIRIGA business object tables, and load it into fact tables. ETL job items also can be used to update flat hierarchies.

There are three types of ETL job items in TRIRIGA Workplace Performance Management:

Kettle Transformation
    Extracts data from TRIRIGA business object tables and loads the data into TRIRIGA Workplace Performance Management fact tables.

Tivoli Directory Integrator Transformation
    Extracts data from TRIRIGA business object tables or tables from external sources and loads data into TRIRIGA Workplace Performance Management fact tables.

Rebuild Hierarchy
    Extracts data from TRIRIGA business object tables and loads the data into TRIRIGA Workplace Performance Management flattened-hierarchy tables.

Creating or modifying ETL job items

ETL job items define the scripts that capture information from the TRIRIGA database, transform it, and load it into fact tables or flattened hierarchy tables.

Procedure

1. Click Tools > System Setup > Job Scheduling > ETL Job Item.
2. Select an existing job item or click Add.
3. In the General section, enter an ID for the ETL job item.

Include the fact table name in the ID field. The status is supplied by the TRIRIGA system when the ETL job item is created.

4. Enter a name and description for the ETL job item. In the Details section, the Job Item Class field is set to ETL by the TRIRIGA system.

5. Select the Job Item Type.
6. If the job item type is Rebuild Hierarchy, complete the following steps.

a) Enter the hierarchy module name. If specified, it must be a module with a hierarchy defined in TRIRIGA, such as Location. When this ETL job item runs, all flat hierarchies for this module are rebuilt.

b) Enter the hierarchy name. When this ETL job item runs, the flat hierarchy for this business object is rebuilt. If specified, the hierarchy name takes precedence over the hierarchy module name.

c) The TRIRIGA system ignores information that is entered in the calendar period, fiscal period, and other dates in the Details section.

d) To rebuild all flat hierarchies of a specific module, specify the hierarchy module name and leave the hierarchy name blank.

e) To rebuild a single flat hierarchy, specify both the hierarchy module name and hierarchy name.
f) If both the hierarchy module name and hierarchy name are blank, or either contains All, all flat hierarchies are rebuilt.
7. If the job item type is Kettle Transformation or TDI Transformation, complete the following steps.

a) Specify the transform file by browsing for and selecting the file. The TRIRIGA system expects the Kettle transform file to be in .ktl or .xml format and the Tivoli Directory Integrator transform file to be in .xml format.

b) Optional: After you upload the transform file, it can be viewed by clicking View Content.
8. When the job item type is TDI Transformation, enter an assembly line name.
9. When the job item type is Kettle Transformation, complete the following steps.

a) Enter the module names. If more than one name is specified, they must be delimited with a comma.

• Each module name is converted into a variable for the transform file in the format ${Module.<moduleName>.ViewName} where <moduleName> is the module name.

• Each variable’s value that is passed into the ETL is the name of the View for that module. This variable’s value can be used if the ETL must know the name of a specific Module's view.

b) Enter the business object names. If you specify more than one business object name, the names must be delimited with a comma.

• Each business object name is converted into a variable for the transform file in the format ${BO.<boName>.TableName} where <boName> is the business object name.

• A business object is not guaranteed to be unique across the database unless the module name is included. If you use a business object that is not uniquely named, include the module name in the comma-separated list. Use the following syntax: <moduleName>::<boName>, where <moduleName> is the module name and <boName> is the business object name.

• A variable is provided to the transform file as ${BO.<moduleName>::<boName>.TableName}. Each variable’s value is the name of the table for that business object. This variable’s value can be used if the ETL must know the name of a specific Business Object's table.

10. If the job item type is Kettle Transformation or TDI Transformation, complete the following steps.
a) Typically, ETL job items are run under the control of one or more job schedules.
b) To unit test the ETL job item, set the date parameters in the Details section.
c) Whether the following date parameters are used depends on the ETL; some ETLs use the information and some do not. The date parameters are overwritten when an ETL job item is run by the Job Scheduler.

• Select the Calendar Period to pass a variable that contains the calendar period of the job.
• Select the Fiscal Period to pass a variable that contains the fiscal period of the job. The fiscal period is used by the Capture Period field in the TRIRIGA Workplace Performance Management fact tables.

• Select the Date to pass a variable that contains the date record of the job. The date record is used to stamp the Date Dimension field in the TRIRIGA Workplace Performance Management fact tables.

• Select the Date to pass a variable that contains the date of the job. Enter a date or click the Calendar icon and select a date.

• Select the Start Date to specify the date of the first data capture and to pass a variable that contains the start date of the job. Enter a date or click the Calendar icon and select a date.

• Select the End Date to specify the date of the last data capture and to pass a variable that contains the end date of the job. Enter a date or click the Calendar icon and select a date.

11. The Metrics section summarizes logging data for this ETL job item. The Average Duration is calculated based on the Total Duration and the # of Runs (Total Duration / # of Runs).

12. The Logs section shows the time and status from each time the ETL job item is run. This data is summarized in the Metrics section.

13. Click Create Draft.
14. Click Activate.

Results

The ETL job item record is created and ready to be included in a job group, a job schedule, or both.

What to do next

For unit testing, click Run Process to trigger the Kettle or Tivoli Directory Integrator transform or the Rebuild Hierarchy process that is specified in the ETL job item.

Adding or modifying job groups

To simplify job scheduling, use job groups to make collections of ETL job items to be run on the same schedule.

Procedure

1. Click Tools > System Setup > Job Scheduling > Job Group.
2. Select an existing job group or click Add.
3. In the Job Group form, enter a name and description for the job group.
4. Add or remove ETL job items from the job group in the Job Items section by using the Find or Remove actions.
5. Adjust the sequence of the job items.
6. Click Create.

Results

The job group record is available to include in a job schedule.

Creating or modifying job schedulers

Use job schedulers to schedule when TRIRIGA runs ETL job items and job groups. You can schedule jobs to run on an hourly, daily, weekly, or monthly basis.

Before you begin

Although TRIRIGA includes predefined job schedulers, no jobs are scheduled until you revise the predefined job schedulers or create a new job scheduler with a start date and end date and click Activate.

Procedure

1. Click Tools > System Setup > Job Scheduling > Job Scheduler.
2. Select an existing job scheduler or click Add.
3. Enter a name and description for the job scheduler.
4. In the Schedule section, select the Schedule Type frequency from the list presented.

• If you select the Daily schedule type, the Hourly and Every fields are displayed. Click the Hourly checkbox to schedule jobs to run hourly, then click the Every field to specify the number of hours between each scheduled job.

• If you select the Advanced schedule type, the Recurrence Pattern field displays in the Schedule section. Click Recurrence Pattern to open the Job Event form, which provides flexible options for scheduling jobs.

5. In the Schedule section, select the Run Historic Captures? check box to indicate that historic data is included in the metrics that are calculated from the results of the activities in this job schedule. When this job schedule is activated, this option instructs the job scheduler to generate and run jobs where the scheduled date is earlier than today. The parameters, such as start date, end date, and fiscal period, for the time frame or period are passed into each job item to simulate a historic capture. However, since the process is being run now, success is entirely dependent on how the ETL scripts use these parameters. Running historic captures is an advanced option that requires a complete understanding of how each script works.

6. Select the starting date for the first capture, the ending date for the last capture, and the capture lag from the end date of each capture, which is the amount of time that the system waits to start the workflows that process the job. Each data capture period is calculated by using the start date and the schedule type. The system triggers a workflow to run the ETL job items immediately after the capture lag from the end of each capture period. For example, if the start date is 01/01/2014, the schedule type is monthly, and the capture lag is one day, the following events are scheduled:

Table 11. Job runs scheduled from the beginning of the month

Job runs - Just after midnight on   Captured data start date   Captured data end date
02/01/2014                          01/01/2014                 01/31/2014
03/01/2014                          02/01/2014                 02/28/2014
04/01/2014                          03/01/2014                 03/31/2014

The End Date determines the last event to capture. Using the preceding example, if 03/15/2014 were the End Date, the last event would be scheduled as follows:

Table 12. Job runs scheduled from the middle of the month

Job runs - Just after midnight on   Captured data start date   Captured data end date
03/16/2014                          03/01/2014                 03/15/2014

• When the Reset Capture Period? check box is selected, it forces the system to ensure that the capture period is kept current every time a job runs.

• For example, if an activated job scheduler is configured with a schedule type of monthly, the system wakes up every month and runs the job items in the record. During the wake-up, if the Job Scheduler’s reset capture period is selected, the system ensures that the capture period is set correctly based on the wake-up date.

• When you specify job items, job groups, or both, and activate this job schedule record, the list of scheduled events for this Job Schedule shows in the Scheduled Jobs section.

7. In the Job Items section, use the Find or Remove actions to select the ETL job items and job groups to include in the schedule and click OK. The job items run in the order that is specified in the Sequence column.

• When this job schedule is activated, the Metrics section summarizes logging data for this job schedule.

• The Average Duration is calculated based on the Total Duration and the Number of Runs (Total Duration / Number of Runs).

• When this job schedule is activated, the Logs section shows the time and status from each time this job schedule is run.

8. Optional: Adjust the sequence of the job items.
9. Click Create Draft and then click Activate.

Results

The TRIRIGA system creates a set of scheduled jobs that are based on the schedule type and the start and end dates. These can be seen in the Scheduled Jobs section of the job schedule.

Customizing transform objects

TRIRIGA provides ETL job items and transform objects. Rather than defining a new transform object, you can customize an existing ETL job item transform object. If you use an existing transform object, then you must define or maintain the transform. However, you do not need to define or maintain the business objects, forms, or workflow tasks, as they are already defined.

Defining transform business objects, forms, and workflows

You can use the existing TRIRIGA ETL job item as an example when you define a new transform object. Defining a new transform object also requires that you define and maintain the business objects, forms, and workflows.

Before you begin

Create the source and destination tables (business objects) and establish the corresponding mappings.

Procedure

1. Create the transform business object.
a) Identify variables to be passed into the transform and add them to the transform business object. For example, time period.
b) Ensure that there is a binary field for the transform XML.

2. Create the transform form and provide a navigation item, or other method, to show the transform form.
3. Create the workflow that calls the ETL transform custom workflow task.

a) Set up the workflow to run on a schedule.
b) Iterate through all the transform business object records that must be run, calling the custom workflow task for each one.

Saving transform XML into the Content Manager

After you define the transform XML, you can save it in the file system and upload it into the TRIRIGA Content Manager. The transform XML can then be tested by using an ETL development environment.

Procedure

1. Open the navigation item for your transform business object. Edit an existing transform business object or add a new one.

2. Use the binary field to upload the transform XML into the TRIRIGA Content Manager.
3. Update other fields, if necessary.
4. Save the transform record.

Configuring workflow run time

After you upload the transform XML into the TRIRIGA Content Manager, you can set up the workflow to run on a schedule. The workflow iterates through all the transform business object records and calls the custom workflow task for each record.

Procedure

1. The workflow gets records for the transform business object.
2. The workflow determines the records to be sent to the custom workflow task.
3. The workflow iterates through the records, calling the custom workflow task for each record. The class name must be com.tririga.platform.workflow.runtime.taskhandler.ETL.RunETL.

4. The custom workflow task:
a) Loads the transform XML from the Content Manager into a temporary file.
b) Gathers all the fields on the business object and creates a variable to hand to the ETL tool. Special handling is needed for date and date-and-time formats.
c) Creates the ETL environment.
d) Sets the TRIRIGA connection to the local application server JNDI.
e) Runs the transform by using the ETL API.
f) Returns false if an error occurs during processing; otherwise, returns true to the workflow.

Running an ETL custom workflow task specification

The workflow iterates through all the transform business object records and calls the custom workflow task for each record. The custom workflow task includes a defined specification.

When the custom workflow task is called for each transform business object, the fields on it are processed as follows:

1. The triTransformBI field is required and holds the reference to the transform XML file that you want to run.

2. The triBONamesTX field, if present, is parsed as a comma-separated list of business object names. The custom workflow task creates variables of the form ${BO.<boName>.TableName}. For example, if the field contains triBuilding, there is a ${BO.triBuilding.TableName} variable available in the ETL script. This variable contains the actual database table name that stores triBuilding records. Since business object names might not be unique, you have the option of specifying the module by using the form <moduleName>::<boName>, which results in a corresponding ${BO.<moduleName>::<boName>.TableName} variable. For example, Location::triBuilding is available as the variable ${BO.Location::triBuilding.TableName} in the ETL script.

3. The triModuleNamesTX field, if present, is parsed as a comma-separated list of module names. The custom workflow task creates variables of the form ${Module.<moduleName>.ViewName}. A minimal sketch of how these variables might appear in a transform's SQL follows this list.
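As a rough sketch only, assuming the Location::triBuilding example above: the variable names follow the documented pattern, but the selected columns are illustrative assumptions rather than a guaranteed part of the table layout.

-- Read from the table resolved from the triBONamesTX variable
-- (hypothetical columns; replace with the columns your transform needs).
SELECT SPEC_ID, TRIIDTX, TRINAMETX
FROM ${BO.Location::triBuilding.TableName}

-- Read from the view resolved from the triModuleNamesTX variable.
SELECT *
FROM ${Module.Location.ViewName}

The ETL tool resolves the ${...} variables to the actual table and view names before the queries run.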

Chapter 3. Metrics

A metric is an operational objective that you want to measure. All TRIRIGA metrics use the same metric technology and data but different metrics are intended for different purposes.

Metrics can be divided into the following two purposes:

Performance metrics
    Metrics that measure process performance to identify actions that can be taken for improvement. Typically, the measures are ratios, percentages, or scores. Performance metrics have targets, thresholds, action conditions, accountability, and action task functions.

Analysis and reporting metrics
    Metrics that are information for general reporting or further analysis of a related performance metric. This information is useful for dimensional analysis and navigation, so it uses the metric capabilities of the performance management application. Analysis and reporting metrics do not have targets, thresholds, action conditions, accountability, and action tasks. The key metrics values for the Results, Target, and Status fields are blank for this metric type.

For more information, see the IBM TRIRIGA 10 Workplace Performance Management User Guide.

Metrics reports

Metric reports use the dimensions and fact fields that are defined in the fact tables to represent a specific metric calculation. Metric reports show aggregated values for a single metric.

Collections of metric reports for each user’s role appear in the portal Home page and in the Performance Manager.

Do not edit or modify the as-delivered metric reports as you customize TRIRIGA software to your company’s environment. Instead, create a derivative metric report by copying an existing metric report, renaming it, and editing it, or by creating an entirely new metric report. View the metric reports at My Reports > System Reports with the Module filter set to triMetricFact, the Display Type filter set to Metric, and the Name filter set to Metric. To view the metric tabular reports that are related to metric graphic reports, set the Name filter to Related Reports.

Each metric report consists of the following elements:

Drill paths
    A special reporting feature that takes advantage of hierarchical dimensions, providing report users with the ability to move up and down a hierarchy.

Filters
    Provide report users with the ability to change or filter the data that is presented in a report.

Metric calculation
    The metric, which is the main focus of the report.

Related reports
    Display more, possibly non-metric, data for a metric report.

For more information, see the IBM TRIRIGA Application Platform 3 Reporting User Guide.

Key metrics

The Scorecard portal includes Key Metrics sections that collect multiple metrics into a single view.

Do not edit or modify the as-delivered key metrics as you customize TRIRIGA for your organization. Rather, create derivative key metrics by copying an as-delivered key metric, renaming it, and editing it. You can also create an entirely new key metrics portal section.

The process of using the Scorecard Builder, Portal Builder, and Navigation Builder to set up a portal is documented in the IBM TRIRIGA Application Platform 3 User Experience User Guide.

Tips:

• In the Queries tab of the Scorecard form, include all queries that are to appear in this Key Metrics portal section in the Selected Queries section.
• When you put the Scorecard in a navigation item, specify the Default Report.
• The response time that users experience while they move around in their Home portal and their Performance Manager can vary. The response time is directly related to the number of metrics that are included in the Scorecard and the amount of data behind each metric. The more precise the filters, the more performance improves.

Form metrics

An alternative method of displaying metrics is by using a form view.

Displaying metrics in a form is accomplished by defining a query section in a form, where the query referenced is a metric query. The data that the metric query selects can be filtered for display based on the parent record that the form is displayed in. Related-report switching allows the displayed queries to be exchanged based on user actions.

At design time, define a metric query with a $$RECORDID$$ and $$PARENT::[Section]::[Field]$$ filter. At runtime, the parent record the form is displayed in implicitly filters the data that the Metric Query Engine selects to display.

You can define other metric queries as related reports. At runtime, when the metric query is displayed within a form, the related-report switching control enables the user to exchange the displayed query.

Data filtering

Before metrics are displayed in a form, the data is filtered.

At runtime, before the query is run,

• The $$RECORDID$$ filter is replaced by the record ID of the parent record that the form is being displayed within.

• A $$PARENT::[Section]::[Field]$$ filter is resolved to the defined parent record field’s value.
• If a metric query is displayed outside of a parent record, any parent-sensitive filters, such as $$RECORDID$$ and $$PARENT::[Section]::[Field]$$, are ignored.

Define $$RECORDID$$ and $$PARENT::[Section]::[Field]$$ filters against fields that are defined as a drill path in the fact table. A metric that is filtered for a program or a classification, and that uses these filters, acts similarly to selecting from the drill path list. It also filters for the specified record and includes all children of that record.

When a $$PARENT::[Section]::[Field]$$ filter value is specified on a hierarchical filter, a null parent field is the equivalent of choosing the root node of the drill path. It also includes all of the records that match the root value. A non-hierarchical filter behaves much like a business object query filter and filters for records with a null value when the parent field value is null.

$$RECORDID$$ and $$PARENT::[Section]::[Field]$$ filters behave as if they are runtime filters except that their value is passed from the parent record.

If a metric query has $$RUNTIME$$ and $$PARENT::[Section]::[Field]$$ filters that are defined against the same field, when the query is displayed inside a form, the $$RUNTIME$$ filter control is not used. Instead, the value from the $$PARENT::[Section]::[Field]$$ is used as the filter value.

Sub reports

Form metrics can include both tabular and graphical sub reports.

A tabular sub report defines a Performance Manager’s related report. A non-tabular sub report represents a metric graph that can be switched within the form.

Both tabular and graphical reports can be added to the Sub Reports section of a metric query. At runtime, a Performance Manager displays only tabular reports as Related Reports, and a Metric in a form query section has only graphical reports as options in the Sub Report section swap controls.

The type of the sub report (for example, metric, query, or report) displays in the Sub Reports section of the Report Builder. For metric queries, the Tabular Output flag is available.

A metric query can display other metric queries as sub reports.

Chapter 4. Hierarchy flattener

The definition of a hierarchy in the TRIRIGA Application Platform provides flexibility for implementations with multiple permutations of data hierarchies, for example, organizations. However, reporting tools prefer a more structured data summary to simplify reporting and maximize performance.

The purpose of the hierarchy flattener tool is for an administrator to define a set of named structures that are used by the TRIRIGA Metric Reporting engine to quickly process the hierarchical data within the system. The resulting structure is referred to as a flat hierarchy, and these structures are used in metric reports.

Administrators use the job scheduler, with help from ETL job items, to run the hierarchy flattener process. System developers trigger the process by using a custom workflow task. In this case, the class name that is used to run the hierarchy flattener process is

com.tririga.platform.workflow.runtime.taskhandler.flathierarchy.RebuildFlatHierarchies

Triggering the process by using a custom workflow task is discussed in Application Building for the IBM TRIRIGA Application Platform 3.

Important: Whenever a hierarchy in TRIRIGA is changed, the hierarchy tree is promptly updated. For example, if you add a new triBuilding, the Location hierarchy is updated. However, the corresponding flat hierarchy for triBuilding is not updated until you rebuild it with a Rebuild Hierarchy Job Item. Therefore, it is important to schedule rebuild hierarchy job items at the same time as when you plan to capture TRIRIGA Workplace Performance Management data through fact tables. Rebuild hierarchy job items ensure that you keep information current.

Flat hierarchies

Hierarchy structure definitions depend on what the flattening is based on. The flat hierarchy can be based on the standard parent-child relationships for a specified module’s hierarchy. The flat hierarchy can also be based on specific levels in the module’s hierarchy, and their respective business objects.

Each hierarchy structure definition contains a single header record that identifies hierarchy name, module, and hierarchy type. The hierarchy name describes the hierarchy, and the module is the module that the hierarchy represents. The hierarchy type is used by the flattening process to understand how to flatten the data. There are two hierarchy types: Data and Form.

A data hierarchy is used to flatten the path of data based on the standard parent-child relationships for the specified module’s hierarchy. This hierarchy type has no named levels because TRIRIGA Application Platform applications allow different types of data to be represented at the same physical level in a module’s hierarchy. For example, a location hierarchy might have data for both property, building and floor, and for building and floor. Thus the first level in the hierarchy would contain a mixture of properties and buildings, and the second level would contain a mix of buildings and floors.

A form hierarchy is used to flatten the path of data based on the parent-child relationships for the specified module’s hierarchy and the business objects that represent levels. Only one business object can represent each level.

Each form hierarchy must specify explicit levels that contain the level number, the business object that the level represents, and the type. The type is used by the flattening process to understand how to find the data for the level. The type has three options: Find, Ignore, and Recurse.

• When the type value is Find, the system searches through the sublevels of the instance data for a particular thread until a record is found for the specified form. If no records are found, the remaining levels in the hierarchy definition are ignored and no more flat data is created for that thread. If a record is found, the system creates a flat data record for that node and proceeds to the next level in the definition. This mode provides the capability to collapse a tree to better align your business data.

• When the type value is Ignore, the system searches for the specified form, one level below the last parent. If a record is not found, the system creates a gap for this level and proceeds with the next level in the definition. If a record is found, the system creates a flat data record for that node and proceeds to the next level in the definition. This mode provides the capability to expand a tree to better align your business data. To facilitate the reporting process, the gaps must be given a name or label. Use the Gap Label value in the Hierarchy Structure Manager for this purpose.

• When the type value is Recurse, the system searches through the sublevels of the instance data for a particular thread until a record is found for the specified form. If no records are found, the remaining levels in the hierarchy definition are ignored and no more flat data is created for that thread. For each record found, the system creates a flat data record for that node before it proceeds to the next level in the definition.

Examples of flat hierarchies

You can reference flat hierarchy examples to better understand the structure of a flat hierarchy definition.

Sample flat hierarchy header records

The following table shows examples of flat hierarchies that are based on modules within hierarchies:

Table 13. Sample header records

Hierarchy name                    Module         Hierarchy type
Space Hierarchy                   Location       GUI
Land Hierarchy                    Location       GUI
City Hierarchy                    Geography      GUI
Full Location Hierarchy           Location       Data
Full Organization Hierarchy       Organization   Data
Internal Organization Hierarchy   Organization   GUI
External Organization Hierarchy   Organization   GUI

Sample flat hierarchy level records

The following table shows examples of flat hierarchies that are based on levels within hierarchies:

Table 14. Sample flat hierarchy level records

Hierarchy name                    Level number   Form         Type
Space Hierarchy                   1              Property     Ignore
Space Hierarchy                   2              Building     Find
Space Hierarchy                   3              Floor        Find
Space Hierarchy                   4              Space        Recurse
Internal Organization Hierarchy   1              Company      Find
Internal Organization Hierarchy   2              Division     Ignore
Internal Organization Hierarchy   3              Department   Recurse

Hierarchy structure manager

You can define hierarchies and level information by using the Hierarchy Structure Manager. The Hierarchy Structure Manager provides a single interface for creating, updating, and deleting flat hierarchies.

Accessing hierarchy structures

To add, modify, or delete hierarchies, access the hierarchy structures functionality.

Procedure

1. Click Tools > Builder Tools > Data Modeler.
2. Click Utilities.
3. Click Hierarchy Structures.

Creating a data hierarchy

A data hierarchy is used to flatten the path of data based on the standard parent-child relationships for the specified module’s hierarchy. When you create a data hierarchy, named levels are not required as different types of data can be represented at the same physical level in a module’s hierarchy.

Procedure

1. Click Create Hierarchy.
2. In the Name field, enter a name to describe what the hierarchy represents.
3. From the Module list, select the relevant module for the data hierarchy.
4. From the Hierarchy Type list, select Data.
5. Click Create.
6. Click Save, then click Close.

Creating a form hierarchy

A form hierarchy is used to flatten the path of data based on the parent-child relationships for the specified module’s hierarchy and the business objects that represent levels. When you create a form hierarchy, only one business object can represent each level.

Procedure

1. Click Create Hierarchy.
2. In the Name field, enter a name to describe what the hierarchy represents.
3. From the Module list, select the relevant module for the form hierarchy.
4. From the Hierarchy Type list, select Form.
5. Click Create. The Levels section displays. Enter information for the level 1 form.
6. From the Business Object list, select the relevant business object.
7. From the Form list, select the relevant form.
8. From the Type list, select Find.

The Gap Label is the label that is specified when Ignore is selected from the Type list and a record is not found.

9. Click Save.
10. Continue entering and saving information until all levels are defined.
11. Click Save, then click Close.

Chapter 5. Fact tables

Fact tables consist of the measurements, metrics, or facts of a business process. Fact tables store the data that is used to calculate the metrics in metric reports.

Fact table information is based on the as-delivered TRIRIGA Workplace Performance Management and TRIRIGA Real Estate Environmental Sustainability products. The implementation at your company might be different.

Each fact table has an ETL to load data and another to clear data. The names of the ETLs that clear data end with – Clear, for example Building Cost Fact – Clear. To view the ETLs, click Tools > System Setup > ETL Job Items.

List of fact tables and metrics supported

You can reference the list of fact tables and metrics that are supported in the TRIRIGA implementation at your company.

Accessing fact tables

Click Tools > Builder Tools > Data Modeler.

Locate triMetricFact and select it to reveal the list of fact table business objects.

Accessing metrics

Click Tools > Builder Tools > Report Manager > System Reports.

Filter by Business Object and Module to sort the metric information and obtain the relevant list of reports.

Tip: #FM# means that a metric is also a form metric in the system.

Facts that require special staging tables and ETLs

For most fact tables, the process to load the stored data to calculate metrics is simple. However, some fact tables require special staging tables and ETLs to assist with the loading process.

The following table shows the facts that require special staging tables and ETLs:

Table 15. Facts that require special staging tables and ETLs

Fact table name                 Fact table business object
Financial Summary               triFinancialSummary
Standard Hours                  triStandardHours
Standard Hours Details          triStandardHoursDetails
Asset Analytic Hourly Fact      triAssetAnalyticHFact
Asset Energy Use Daily Fact     triAssetEnergyUseDFact
Asset Energy Use Hourly Fact    triAssetEnergyUseHFact
Asset Energy Use Monthly Fact   triAssetEnergyUseMFact

Dependent ETLs

Some ETLs depend on other ETLs to assist with the loading process.

The following table shows the ETLs that depend on other ETLs:

Table 16. Dependent ETLs

Building Cost Fact Load ETL
    This ETL depends on the availability of data in the Financial Summary Table. The Financial Summary Table can be loaded either by backend integration with your financial system or by using the Offline Financial Summary Excel process. To facilitate the Offline Financial Summary Excel process, there is a special ETL to push the data from the Excel/Offline process to the Financial Summary Table. In the as-delivered TRIRIGA Workplace Performance Management, the special ETLs are named Load Financial Summary From Offline Staging and Clear Financial Summary From Offline Staging. If you are importing financial summary data with the Offline Financial Summary Excel process, you must first run the Load Financial Summary From Offline Staging ETL. You must then run the Building Cost Fact Load ETL.

Building Fact Load ETL
    This ETL depends on the availability of data in the Financial Summary Table. The Financial Summary Table can be loaded either by backend integration with your financial system or by using the Offline Financial Summary Excel process. To facilitate the Offline Financial Summary Excel process, there is a special ETL to push the data from the Excel/Offline process to the Financial Summary Table. In the as-delivered TRIRIGA Workplace Performance Management, the special ETLs are named Load Financial Summary From Offline Staging and Clear Financial Summary From Offline Staging. If you are importing financial summary data with the Offline Financial Summary Excel process, you must first run the Load Financial Summary From Offline Staging ETL. You must then run the Building Fact Load ETL.

Resource Fact Load ETL
    Dependent on Standard Hours Load ETL.

Standard Hours Load ETL
    Dependent on Standard Hours Details Load ETL.

Asset Daily Fact ETL, Asset Hourly Fact ETL, and Asset Monthly Fact ETL
    These ETLs depend on the availability of data in a staging table. The staging table types must be generic. The staging table must include specific fields. The staging table can be loaded by back-end integration with the building management system. The staging table can also be loaded by using an ETL to bring the data in from an external database. See the Integrated Service Management Library for more details, including sample staging tables and sample ETLs.

Notices

This information was developed for products and services offered in the US. This material might be available from IBM in other languages. However, you may be required to own a copy of the product or product version in that language in order to access it.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US

For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to actual people or business enterprises is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows: © (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Linux® is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other product and service names might be trademarks of IBM or other companies.



Terms and conditions for product documentation

Permissions for the use of these publications are granted subject to the following terms and conditions.

Applicability

These terms and conditions are in addition to any terms of use for the IBM website.

Personal use

You may reproduce these publications for your personal, noncommercial use provided that all proprietary notices are preserved. You may not distribute, display or make derivative work of these publications, or any portion thereof, without the express consent of IBM.

Commercial use

You may reproduce, distribute and display these publications solely within your enterprise provided that all proprietary notices are preserved. You may not make derivative works of these publications, or reproduce, distribute or display these publications or any portion thereof outside your enterprise, without the express consent of IBM.

Rights

Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either express or implied, to the publications or any information, data, software or other intellectual property contained therein.

IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not being properly followed.

You may not download, export or re-export this information except in full compliance with all applicable laws and regulations, including all United States export laws and regulations.

IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.

IBM Online Privacy Statement

IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user, or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering’s use of cookies is set forth below.

This Software Offering does not use cookies or other technologies to collect personally identifiable information.

If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent.

For more information about the use of various technologies, including cookies, for these purposes, see IBM’s Privacy Policy at http://www.ibm.com/privacy and IBM's Online Privacy Statement at https://www.ibm.com/privacy/details/us/en/ in the section entitled “Cookies, Web Beacons and Other Technologies.”


