
ODI Series – Renaming planning members

Well, the title pretty much sets up exactly what this blog is going to be about. It is an area I have not covered, though I did need to go back and check as I can never remember what I have and have not discussed, and it is an area that I believe requires some dedicated airtime.

Now there are a few different ways to approach renaming members. The method I am going to cover today involves going directly to the planning database tables, which understandably might not be favoured by some, so in the next blog, to be complete, I will go through another method which is much more convoluted but stays on a supported track.

There are occasions when updating member aliases is not enough and you have to resort to renaming members, and there has never been any functionality in the available tools (ODI, DIM, HAL & the Outline Load utility) to directly rename members. I am not sure why this functionality has not been provided yet, as it doesn’t seem a complex area to build into an adaptor and would definitely be beneficial; hopefully the ability to do so will be made available one day.

Let’s go through the first method. I must point out that if you plan on renaming entity members, please make sure there is no workflow process running for the members that are going to be updated. I also do not promote changing the underlying planning tables without experience in this area, so don’t hold me responsible if anything goes wrong; it is all at your own risk :)

In this example I want to rename the level 0 DV members, that is DV1000 – DV4000.

I have a source table that contains just two columns, one with the original member name and the other with the name I want to rename the member to, so nice and simple.
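If you want to follow along, a minimal sketch of that source table might look like this (the table and column names are my example; the datastore appears later as “MEMBER_UPDATE”):

-- hypothetical rename-pair source table (Oracle syntax)
CREATE TABLE MEMBER_UPDATE
(
  ORIGINAL_NAME VARCHAR2(80),  -- current member name in planning
  NEW_NAME      VARCHAR2(80)   -- name the member will be renamed to
);

-- one row per member to rename, e.g.
INSERT INTO MEMBER_UPDATE VALUES ('DV1000', 'DV1000_NEW');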


In the planning application’s relational database there are a few tables that we will need to use to be able to update the member names.

The first table, and the most important, is HSP_OBJECT. This is the core table in the planning architecture; nearly all other tables relate to it in some way, and the key to it all is the OBJECT_ID. This is not going to be a lesson on everything in the table, though; it is only about the columns we are interested in.

The columns that are important for this exercise are:

OBJECT_ID – Each object inside planning has a numerical id, so for each member there will be an object id.
OBJECT_NAME – In our case this holds all the member name information.
OBJECT_TYPE – This column is driven by an id and defines what the object is within planning, e.g. form, user, member…
OLD_NAME – This contains the previous name of an object, so if a member is changed in planning the OBJECT_NAME will hold the new name and OLD_NAME holds the previous name. Some would argue that I don’t need to touch this column, but I have seen instances where the value can be null, and just to be certain I am covering all angles I am including it.

The next table is HSP_OBJECT_TYPE; this table holds descriptive information about what an object in planning is. The table links to HSP_OBJECT by the OBJECT_TYPE column.


So by linking the two tables together you can make sure you are only retrieving information for the correct type; in this example all the members are part of a user defined dimension, so they will have an object_type of 50 associated with them. It is important that this table is used, as we only want to update member information and not another object, like a form, that could have the same name.
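As a sketch, the query behind that check looks something like this (the TYPE_NAME string is the one used in the interface filter later on; verify it against the rows in your own HSP_OBJECT_TYPE table):

-- retrieve only members of user defined dimensions
SELECT O.OBJECT_ID,
       O.OBJECT_NAME,
       O.OLD_NAME
FROM   HSP_OBJECT O
       INNER JOIN HSP_OBJECT_TYPE T
               ON O.OBJECT_TYPE = T.OBJECT_TYPE
WHERE  T.TYPE_NAME = 'User Defined Dimension Member';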

The final table is HSP_UNIQUE_NAMES. As you are probably well aware, member names/aliases all have to be unique within planning, and this table keeps a record of all the unique names to make sure you don’t add a new member that already exists.


The table links back to the HSP_OBJECT table by the important OBJECT_ID. When we update the member name we will also have to update the name in this table to keep everything in sync.
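In plain SQL terms, the rename against the core table boils down to something like the following (a sketch only, using my example MEMBER_UPDATE table; the interface built below generates the equivalent statement):

-- rename matching members, preserving the previous name in OLD_NAME
-- object_type 50 = user defined dimension member, as described above
UPDATE HSP_OBJECT O
SET    (O.OBJECT_NAME, O.OLD_NAME) =
       (SELECT M.NEW_NAME, M.ORIGINAL_NAME
        FROM   MEMBER_UPDATE M
        WHERE  M.ORIGINAL_NAME = O.OBJECT_NAME)
WHERE  O.OBJECT_TYPE = 50
AND    O.OBJECT_NAME IN (SELECT ORIGINAL_NAME FROM MEMBER_UPDATE);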

Now we have the source and targets sorted out we can move on to ODI.

You will need to set up a physical connection to your planning applications database.

I am not going into the details of setting up physical and logical schemas, if you don’t know how to do it then read back into my early blogs. I am also going to assume you have set up the schemas to your source data.

Once you have set up the connection, create a model using the logical schema created and reverse the required tables (HSP_OBJECT, HSP_OBJECT_TYPE, HSP_UNIQUE_NAMES).


Before we start building an interface we need to make sure the correct KMs have been imported; as the integration is updating the target, one of the UPDATE IKMs can be used.

There are a number of different UPDATE IKMs depending on the technology you are using. I am not going to go through them all, because I don’t feel performance will be an issue with the number of records you will be updating when dealing with planning; you can read the documentation to see the differences between each KM. In this example I am going to use the plain and simple “IKM SQL Incremental Update”.

When using UPDATE KMs you will need to have a primary key; there is no need to worry in this example, as OBJECT_ID is already defined as a primary key in the tables.

Ok, the first interface we require is one that will map our source records to the target (HSP_OBJECT) and update them accordingly.

Create a new interface and drag the HSP_OBJECT datastore on to the target.

On the source, drag in the datastore that contains the original and new member names; in my case it is called “MEMBER_UPDATE”.

Now drag HSP_OBJECT and HSP_OBJECT_TYPE on to the source.


Create a join between “MEMBER_UPDATE”.“Original Name” and “HSP_OBJECT”.“Object Name” (drag “Original Name” on to “Object Name”); this will make sure we only update the records that match by name.

Next create a join between “HSP_OBJECT”.”OBJECT_TYPE” and “HSP_OBJECT_TYPE”.”OBJECT_TYPE”


This will let us define which object types to return in the query ODI creates; to make that definition a filter is required, so drag TYPE_NAME on to the source area.

So this will only return the types of “User Defined Dimension Member”. If you want your interface to deal with more than one type of dimension member update, you can easily add more logic into the filter.
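For example, the filter expression could be widened along these lines (the extra TYPE_NAME strings are illustrative; check the exact values held in your HSP_OBJECT_TYPE table):

HSP_OBJECT_TYPE.TYPE_NAME IN ('User Defined Dimension Member', 'Entity', 'Account')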


Now we can perform the target mappings, which is straightforward.

OBJECT_ID is mapped to the source.
OBJECT_NAME is mapped to “New Name”, as we want this column to hold the new member name.
OLD_NAME is mapped to “Original Name”, as we want this column to be updated with the original member name.

In the flow tab I updated the following parameters for the IKM.

INSERT set to “No” as we are only interested in updating the target; we don’t want to insert any records.
SYNC_JRN_DELETE & FLOW_CONTROL set to “No” as we don’t require this functionality.


That is the first interface complete; we now need a similar interface that will update the HSP_UNIQUE_NAMES table.

Create a new interface; drag the HSP_UNIQUE_NAMES datastore to the target and source, as well as the source member information (“MEMBER_UPDATE”).

Create a join between “MEMBER_UPDATE”.”Original Name” and “HSP_UNIQUE_NAMES”.”OBJECT_NAME”, as this will focus on only the member names we want to update.

You will notice I have used the UPPER function (UCASE in some technologies), as the member information in HSP_UNIQUE_NAMES is all stored in uppercase, so I need to convert my source member information to uppercase before I map. I know in my example all the member names are already in uppercase, but they may not be in the future.

The target mapping is straightforward again.


OBJECT_ID is mapped to “HSP_UNIQUE_NAMES”."OBJECT_ID".
OBJECT_NAME is mapped to “MEMBER_UPDATE”.”New Name” and converted to uppercase.
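As with the first interface, here is a sketch of the SQL this second interface amounts to (again assuming my example MEMBER_UPDATE table):

-- bring the unique names table into line; names here are stored in uppercase
UPDATE HSP_UNIQUE_NAMES U
SET    U.OBJECT_NAME =
       (SELECT UPPER(M.NEW_NAME)
        FROM   MEMBER_UPDATE M
        WHERE  UPPER(M.ORIGINAL_NAME) = U.OBJECT_NAME)
WHERE  U.OBJECT_NAME IN (SELECT UPPER(ORIGINAL_NAME) FROM MEMBER_UPDATE);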

The IKM options are set up like the previous interface.

So I now have two interfaces that should map from the source member table and update the new member name information on both of the target datastores.

Both interfaces ran successfully with updates = 4, which is correct.


A quick query on the planning tables and all looks good as the new member names are in place.
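The check itself is nothing more than something like this (using the DV members from earlier; OLD_NAME should now hold the name each member had before the update):

SELECT O.OBJECT_NAME, O.OLD_NAME
FROM   HSP_OBJECT O
WHERE  O.OLD_NAME IN ('DV1000', 'DV2000', 'DV3000', 'DV4000');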

This is where the blog could end, but I am not so lucky: if you look in planning, the original member names are still there. Back with a vengeance is the planning cache in action; it still has the original names stored in it.

There are a few options. The easiest is just to restart planning and the planning cache will be refreshed, but I don’t want to take that route as it may not be convenient to restart the planning web app each time a member is renamed.

If you run a planning refresh, the planning cache for member information is also refreshed, so I could create an interface that loads one record that already exists into the planning application with the KM option “REFRESH_DATABASE” set to Yes. This is fine, but I want to be able to use my renaming interfaces again and again, and I don’t want to have to load a dummy record in each time.

Ok, I could call the cube refresh utility by creating a procedure and using an OS command.

What I need is an interface that just runs a refresh; is this possible? Well, after too much time spent digging around the java code trying to understand if it could be done, I have come up with a solution, and here is how it can be done.

Right click the KM “IKM SQL to Hyperion Planning” and select duplicate, I want to keep my original KM so that’s why I am creating a copy.


In the details tab delete the following commands, as they are not required.

“Lock journalized table”
“Load data into planning”
“Report Statistics”
“Cleanup journalized table”

Edit the command “Prepare for Loading”; you can rename it if you like. Most of the code stays pretty much the same; I have just imported a couple of extra classes from the planning adaptor and used them later in the code.

Here is the complete code if you want to copy and paste it in.

from java.util import HashMap
from java.lang import Boolean
from java.lang import Integer
from com.hyperion.odi.common import ODIConstants
# new imports
from com.hyperion.odi.planning import ODIPlanningWriter
from com.hyperion.odi.planning import ODIPlanningConnection

#
# Target planning connection properties
#
serverName = "<%=odiRef.getInfo("DEST_DSERV_NAME")%>"
userName = "<%=odiRef.getInfo("DEST_USER_NAME")%>"
password = "<%=odiRef.getInfo("DEST_PASS")%>"
application = "<%=odiRef.getInfo("DEST_CATALOG")%>"

# split "server:port" into its two parts
srvportParts = serverName.split(':',2)
srvStr = srvportParts[0]
portStr = srvportParts[1]

#
# Put the connection properties in a map and initialize the planning loader
#
targetProps = HashMap()
targetProps.put(ODIConstants.SERVER,srvStr)
targetProps.put(ODIConstants.PORT,portStr)
targetProps.put(ODIConstants.USER,userName)
targetProps.put(ODIConstants.PASSWORD,password)
targetProps.put(ODIConstants.APPLICATION_NAME,application)

print "Initializing the planning wrapper and connecting"

# load options; cubeRefresh is hard-coded on as the refresh is the whole point
dimension = "<%=snpRef.getTargetTable("RES_NAME")%>"
loadOrder = 0
sortParentChild = 0
logEnabled = <%=snpRef.getOption("LOG_ENABLED")%>
logFileName = r"<%=snpRef.getOption("LOG_FILE_NAME")%>"
maxErrors = 0
logErrors = <%=snpRef.getOption("LOG_ERRORS")%>
cubeRefresh = 1
errFileName = r"<%=snpRef.getOption("ERROR_LOG_FILENAME")%>"
errColDelimiter = r"<%=snpRef.getOption("ERR_COL_DELIMITER")%>"
errRowDelimiter = r"<%=snpRef.getOption("ERR_ROW_DELIMITER")%>"
errTextDelimiter = r"<%=snpRef.getOption("ERR_TEXT_DELIMITER")%>"
logHeader = <%=snpRef.getOption("ERR_LOG_HEADER_ROW")%>

# set the load options
loadOptions = HashMap()
loadOptions.put(ODIConstants.SORT_IN_PARENT_CHILD, Boolean(sortParentChild))
loadOptions.put(ODIConstants.LOAD_ORDER_BY_INPUT, Boolean(loadOrder))
loadOptions.put(ODIConstants.DIMENSION, dimension)
loadOptions.put(ODIConstants.LOG_ENABLED, Boolean(logEnabled))
loadOptions.put(ODIConstants.LOG_FILE_NAME, logFileName)
loadOptions.put(ODIConstants.MAXIMUM_ERRORS_ALLOWED, Integer(maxErrors))
loadOptions.put(ODIConstants.LOG_ERRORS, Boolean(logErrors))
loadOptions.put(ODIConstants.ERROR_LOG_FILENAME, errFileName)
loadOptions.put(ODIConstants.ERR_COL_DELIMITER, errColDelimiter)
loadOptions.put(ODIConstants.ERR_ROW_DELIMITER, errRowDelimiter)
loadOptions.put(ODIConstants.ERR_TEXT_DELIMITER, errTextDelimiter)
loadOptions.put(ODIConstants.ERR_LOG_HEADER_ROW, Boolean(logHeader))
loadOptions.put(ODIConstants.REFRESH_DATABASE, Boolean(cubeRefresh))

# connect to planning and set parameters
odiPC = ODIPlanningConnection(targetProps)
ODIPlanWrite = ODIPlanningWriter(odiPC)
ODIPlanWrite.beginLoad(loadOptions)
# run refresh, with or without security filters
odiPC.getWrapper().runCubeRefresh(Boolean("false"), Boolean(<%=odiRef.getOption("REFRESH_FILTERS")%>))
# clean up
ODIPlanWrite.endLoad()

print "Planning refresh completed"

I left in all the logging options; they won’t really tell you much other than that the connection to planning was successful.

I also added into the code the option to refresh filters or not, which takes me on to the KM options; some of them I removed because they are not required anymore.

I added the following option :- REFRESH_FILTERS

I deleted the following options :-

LOAD_ORDER_BY_INPUT
SORT_PARENT_CHILD
MAXIMUM_ERRORS_ALLOWED
REFRESH_DATABASE


To save you time on creating the KM, you can download it from here.

To use the refresh KM in an interface you just need to create an interface and set a staging area; I usually use the Sunopsis Memory Engine, as no processing is done in the staging area, it is just used so the interface will validate.

Drag any planning dimension datastore on to the target, it doesn’t matter which dimension, it is just being used to generate the correct connection details to the planning application.


In the IKM options you can define whether or not to refresh the security filters. I doubt you will see anything being generated in the error log, and I probably could have removed it.

Now you have an interface that can be used again and again for refreshing planning. Changing which planning app it refreshes is also simple: either drag a different datastore on to the target or just update the physical connection information in the Topology Manager.

To put it all together I created a package.

As you can see I added a variable that I set to define the planning dimension type of the members that will be renamed.


I updated the renaming interface to use the variable; this gives me the flexibility to quickly change the dimension I will be renaming members for, or just to include all the dimension types.
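With the variable in place, the filter from the first interface just references it in the usual ODI #VARIABLE style, along these lines (the variable name is my own example):

HSP_OBJECT_TYPE.TYPE_NAME = '#DIM_TYPE'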

The complete flow is :- Set Planning Dimension variable, Rename Members interface, Update unique names table interface and finally a planning refresh.

The interfaces can easily be used for different planning applications by either dragging a different datastore to the targets or by updating the physical connection information in the Topology Manager.

Executing the package will complete the process for renaming members, and a quick check in planning shows it was successful.

So in the end I have a solution that can be easily ported to any environment where I need to rename members.

As usual I thought this blog would be really quick to write up, it always seems quick in my mind until I go through it all step by step.


Next time I will go through the alternative method of achieving the same end game but without having to go directly to the planning tables.


ODI Series - Will the real ODI-HFM blog please stand up.

I have always been meaning to write up a few blogs on using ODI with HFM adaptors but I have never got round to it. I have seen more and more questions coming up lately on problems configuring and using the adaptor so I thought it would be a good time to go through the basics of what can be done.

Yes, I have already seen a one-off blog about ODI/HFM, but to be honest I was not impressed with the quality and depth of it, so I thought it was time to put my version together; I will let you decide which one to use.

I would first like to make it clear I am not an HFM consultant, so apologies if I use any incorrect terminology or don’t go about some of the methods in the correct way; no doubt I will be contacted as usual if I make any slip-ups.

I am not setting out to go into too much depth, just enough to get people up and running and gain a little knowledge of using the adaptors; my end game really is to be able to move metadata and data between essbase, planning and HFM, but we will see how it pans out.

In this first blog I will go through installing the HFM client, configuration in ODI and some pitfalls that may be encountered and hopefully finishing up with a successful reverse of an FM application.

I have written a number of articles in the past using the Hyperion adaptors and I am not going to replicate the detail I went into previously, so if you don’t understand something you may have to read back through my earlier blogs.

I will be using a two-machine architecture, one machine hosting HFM and EPM foundation (Workspace and Shared Services). I have created an FM application named SAMPLE that is based on the “Simple Demo” information that can be found in “Hyperion\products\FinancialManagement\Sample Apps\Simple Demo\” of the HFM installation.

The second machine will be hosting ODI; both machines are Windows based, as that is a prerequisite for HFM and also for the HFM client that has to be installed on the ODI machine.

The EPM version installed is 11.1.1.3

Ok, let’s start with the installation of the HFM client. The files you will need (based on 11.1.1.3) are:

Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition Release 11.1.1.3.0
Hyperion Enterprise Performance Management System Foundation Services Release 11.1.1.3.0 Part 1
Oracle Hyperion Financial Management, Fusion Edition Release 11.1.1.3.0


Extract all the files to the same location and execute “InstallTool.cmd” to launch the installer.

Select “Choose components individually”

Uncheck all, expand Financial Management and select “Financial Management Client”

It should not take too long to install, and after installation there is no need to run the configurator.

After installing the client I wanted to make sure I could connect to my HFM instance.

I opened up the FM configuration utility from the start menu > Oracle EPM System > Financial Management > Client Configuration

I added the server name of my HFM instance (HFMV11), enabled DCOM and that was all that was required to be done in the utility.

Next I opened up the FM desktop to make a connection to my SAMPLE FM application.


When logging in, make sure the Domain box is cleared and don’t enter the HFM server; if you do, it will not use Shared Services security and when you try to open the application it will generate a funky error.

I connected to the cluster and the SAMPLE application appeared in the list of applications; I clicked Open and no errors were received, so that is enough for me to believe the client is working successfully, so on to ODI.

Open the Topology Manager

This is where we will define the connection to the HFM server, define the application to use and associate this with a logical schema and context.


Enter a name for the connection, the cluster name and an account that has been provisioned with an FM admin role in Shared Services.

Do not click Test, as this is only for JDBC technologies; the HFM connection uses an API, so clicking it will just generate a driver error. I have lost count of the number of times people have clicked the Test button and thought there was a problem.

Manually enter the HFM application name into the Catalog and Work Catalog boxes; make sure you enter exactly the same name into both boxes, as I have also seen problems with integrations where this has not been done.


Select a context you want to use and associate a logical schema; you will need to manually type the name of the logical schema you want to use. If you don’t understand logical schemas and contexts then I suggest you have a quick read here.

That is all you need to do in the Topology manager for now so on to the Designer.

I have created a new project for the purpose of this HFM blog.

Once the project has been created, the HFM knowledge modules will need to be imported. There are currently five HFM knowledge modules:

IKM SQL to Hyperion Financial Management Data
IKM SQL to Hyperion Financial Management Dimension
LKM Hyperion Financial Management Data to SQL
LKM Hyperion Financial Management Members to SQL
RKM Hyperion Financial Management

All the xml based KMs are situated in OraHome_1\oracledi\impexp

I am only going to be using the RKM today, to reverse engineer the FM application into an ODI model. Right click the project and select Import Knowledge Modules, multi-select the KMs I have highlighted above and import.


In the models tab create a new Model, the model is basically going to be an ODI interpretation of the structure of the HFM application broken down into manageable Datastores.

Give the model a name, select “Hyperion Financial Management” as the technology and select the logical schema you have just defined; this will create the link to the physical connection information for the FM application.


In the reverse tab, select customized, then select the context and logical agent you want to use to handle the reversing; you can choose the local agent if you have not created another agent. If you want to understand how to create agents, look back at one of my earlier blogs.

You will need to select the RKM that was imported earlier. I have left the other options at their defaults for now and will go into them in more detail at a later stage. Click Reverse and go into the Operator; this is where you may encounter some issues.

You may receive an error “Error occurred while loading driver”; this is due to the agent not being able to see the ODI HFM driver (HFMDriver.dll / HFMDriver64.dll) in the \oracledi\drivers directory.

What sort of agent you are using determines what you need to do to resolve this issue.

If you are using the “Local Agent” you will need to add the driver location to the Windows environment variable (Path).

Once you have added the location, make sure you restart all the ODI components to pick up the driver or you will continue to receive the error message. If you are using an agent created as a Windows service you will have to update a configuration file; when the Windows service agent starts up it retrieves some of its parameter information from OraHome_1\oracledi\tools\wrapper\conf\snpsagent.conf


Edit the file and add the following line below “wrapper.java.library.path.1=….”

wrapper.java.library.path.1=./drivers

This means the DLLs in the drivers directory will be picked up. Restart the agent once the changes have been made.

Reverse again.

If you are using EPM version 11.1.1.3 or above like I am, then you will encounter the error “Error occurred in driver while connecting to Financial Management application [xxxxx] on [xxxxx] using user-name [xxxxx]..."

Don’t panic, it is a known bug.

8725017: IKM SQL to HFM - Is not compatible with Oracle's Hyperion Financial Management 11.1.1.3.0. NOTE: Oracle’s Hyperion Financial Management – Fusion Edition Release 11.1.1.3 is the only release supported for this patch.

There are currently a couple of solutions to fix the problem: you could download and install the following patch from “My Oracle Support”

Patch 8785892 - ORACLE DATA INTEGRATOR 10.1.3.5.2_01 ONE-OFF PATCH

Or you could install one of the following later patches, as they address another issue with using the HFM adaptor with ODI:

9201073 - Cannot load metadata that includes shared members into HFM 11.1.1.3


Patch 9327111: ORACLE DATA INTEGRATOR 10.1.3.5.6 CUMULATIVE PATCH
or
Patch 9285774: ORACLE DATA INTEGRATOR 10.1.3.5.5_01 ONE-OFF PATCH

I opted for patch 9285774; it is really simple to install: stop all the ODI components and the agent, extract the three files in the patch to \oracledi\drivers, overwriting the existing files, and start up the ODI components again.

Reverse once again and this time it should be successful.

Check the model and make sure all the Datastores have been created.


That concludes the first part and should get you up and running with the adaptor, in the next blog I will start looking in more detail at using the available FM KMs.


ODI Series - Loading HFM metadata

In the previous blog I went through getting up and running with the HFM ODI adaptor; today I am going to start looking at the different load methods available with the four other knowledge modules. Through the use of the knowledge modules you will be able to do the following :-

Load metadata and data
Extract data
Consolidate data
Enumerate members of member lists

The knowledge modules basically replicate the functionality that is available in the HFM desktop client; if you have ever used the client then you will be comfortable using the ODI adaptors.

The objective today is to load metadata from a flat file into the Account dimension of an HFM application. If you have loaded metadata using any of the other Hyperion adaptors then you will see no difference in using the HFM adaptor, and the process is straightforward; in fact the example I am going to go through couldn’t get much simpler.

You will need to make sure you have imported the KM - IKM SQL to Hyperion Financial Management Dimension.

With the KM you are able to load metadata into the Account, Currency, Custom 1-4, Entity and Scenario dimensions.

As I will be loading from a flat file, you will also need to have imported the KM – LKM File to SQL.

Currently I have the following hierarchy in my sample HFM application (this hierarchy was loaded from the Simple Demo files that are available in the HFM installation directory)


In my example I want to load the following expense members as children of Account.

If you have a look at the Account datastore in the Designer you will see all the available properties that can be set when loading in metadata.


I am not going to go through what each of the columns means, as this is all available in the HFM documentation; if you have a read here you will find the details for each of the properties on the Account dimension, and the documentation also contains information on the other dimensions.

Now, I created a file template with exactly the same column headers as the datastore columns; I find it easier to match the column headers because, with auto mapping enabled in ODI, it will set all your target mappings for you without you having to set them up manually. You don’t have to create a template with all the columns if you don’t want to; most of the time you might never use some of the columns, so it is up to you whether you include them in your file.

The file was populated with all the metadata information.
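Purely as an illustration, the first couple of lines of such a file might look like this (the property names shown are a hypothetical subset; copy the real header names from the reversed Account datastore, and the comma separator matches the delimited format described below):

Member,Parent,Description,AccountType,IsCalculated
Salaries,Account,Salaries expense,EXPENSE,N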

Once the file is completed, it is on to defining the file model and datastore in the Designer.

Create a new model of File technology and use a logical schema that points to the location where the file will be stored. I am using a logical schema set up in previous blogs, which just points to a directory where most of the flat files I use are stored.

Next a new DataStore is inserted under the model, the file is manually selected using the browse functionality as the file is sitting on the same machine as where ODI is installed.

If you looked at the image of the file I am using, you will have noticed it is a CSV with a header row, so I set the file format to Delimited, the heading (number of lines) to 1 and the field separator to comma.


Reversing the file returns and stores all the columns in the datastore against the header columns in the CSV file. After reversing you may need to change the length of the columns; I have just left them set to the default for the example.

I usually set flat file column types to String and then, if I need to convert them to a different type, I do it in the mappings of an interface; I do this because I find I have more success and encounter fewer issues when loading from files.

Right, on to creating the interface to load from the file into the HFM application.

I am just using the SUNOPSIS_MEMORY_ENGINE as the staging area; this is because the volume of data from the flat file is very small, and I would think that when loading metadata into HFM the volume would usually be pretty low.


You will always need to set a staging area, as the target (HFM) cannot be used to transform data because it has no SQL capability.

Drag the source file datastore on to the source of the diagram.

Drag the HFM Account datastore on to the target of the diagram; as the columns on my source match the target, all the columns are auto-mapped.

The warnings are just due to column length differences between source and target; if your source data is consistent then you don’t need to worry about it, otherwise you could substring the data in the target mapping or change the length in the source file datastore.


The default settings are kept for the LKM.

The IKM only has a few options

CLEAR_ALL_METADATA_BEFORE_LOAD - All dimension members and corresponding data, journals, and intercompany transactions in the application database are deleted.

REPLACE_MODE -

No = Merge - If a dimension member exists in the load file and in the application database, then the member in the database is replaced with the member from the load file. If the database has other dimension members that are not referenced in the load file, the members in the database are unchanged.

Yes = Replace - All dimension members in the application database are deleted and the members from the load file are put into the database.

In this example I am just doing a simple load for a few members so all the default settings are kept. I enabled the log and set the full path and filename.


After executing the scenario, check the Operator to make sure it was successful.

In the log file you will notice another log is generated; this log is the equivalent of what the HFM client produces and is a validation of the source data.


If the interface fails then this additional log is the place to check to find out why the validation failed. I have no idea how the numeric code in the log filename relates to anything; it doesn’t look like a timestamp, so maybe it is just there to make the file unique.

In the HFM application you should now be able to select the new members that have been loaded. So there we have it, a really simple example of loading metadata into HFM.

As I said in the first HFM blog I am going to keep it to the basics just to get people up and running with the different functionality available with the HFM adaptor.
