
Groovy, Game Changing Technology at Breakthru Beverage Group – Webinar Invitation

Join Us On March 22, 2018

On March 22, 2018, I am hosting a webinar featuring the work delivered at Breakthru Beverage Group in Chicago, Illinois. Breakthru had the traditional challenges, but it had some additional obstacles others don’t. With requirements of entering budget at any level, complex allocation and seeding logic, and the need for consolidated reporting in real time, we had to get creative. Welcome, Groovy Calculations! Groovy calculations were released in June of 2017, just in time to be a key resource to solve the previously stated problems. This application will highlight solutions like changing a product price at a consolidated level and seeing it allocated down to delivery channel, material group, and company. It will show how we consolidated and pushed the results to the P&L applications in seconds.

It is easy to participate; you can RSVP TODAY.

I will discuss the architecture, the challenges, and how we used Groovy to do things never before possible in Hyperion Planning.

Although we will be discussing the technology used, this is not a technical discussion on how to write Groovy calculations. If you are an administrator, user, or owner of PBCS, we will highlight challenges you are likely facing, and how to overcome them using Groovy at a more functional level. If you are looking to purchase or move to the cloud, this presentation will educate you on the possibilities now available with the new functionality of Groovy calculations.

Agenda

- Introduction: Setting the expectations and introducing the speakers
- Application Overview: Application, purpose, top down planning, and seeding
- Performance Challenges: Product updates, allocations, long wait times for consolidated reporting
- Real Time Reporting: How Groovy allowed us to overcome performance issues and enable real time consolidated reporting
- The Groovy 411: Live demo showing how Groovy Calculations solved performance issues
- More Than Performance: Live demo showing other enhancements Groovy provides, like user input validation
- Finishing Up: Q/A, review, and opportunities for next steps to set up an optimization assessment

The Official Invitation

Top Down and Bottom Up Planning at Breakthru Beverage Group

Planners are always looking for real time reporting and faster feedback. They are looking to make the forecasting and planning process faster by using historical trends and the ability to enter data at any level, enter growth factors, and drive the results down to the lowest level of the business. They want instant feedback on consolidated results.

Join this webcast and hear from the VP of Financial Planning & Analysis at Breakthru Beverage Group on how they are using Oracle Planning and Budgeting Cloud Service (PBCS) integrated with game changing technology, Groovy, to improve speed and performance across planning processes.

Leave this session with an understanding of how Breakthru Beverage:

1. Attained strategic benefits of building a driver based budget and forecasting application with the ability to seed product level data and apply growth rates at consolidated levels to effectively build a bottoms up plan.
2. Leveraged work force planning to include the ability to allocate people over multiple cost centers and companies.
3. Developed a technical architecture and strategy to allow this to happen and integrate with the higher level P&L in real time.

RSVP today and learn how you can take advantage of Groovy.

Exporting Data in PBCS With Business Rules

Introduction

If your environment is a cloud product, whether it be PBCS or ePBCS, one thing that is critical to understand is that the backups produced in the Migration area may not be what you think. Learning this after the fact may have negative consequences on your ability to restore data. In the migration, the Essbase Data section is a copy of the pag, ind, and otl files. When this is used to restore data, it restores the entire database. This includes data and metadata. This may be OK for many situations, but it won’t help you if

- only specific data is required to be restored
- specific data has changed and needs to be excluded from the restore
- corruption exists in the database and all data is required to be restored
- the pag files that hold the data are not readable
- the size of the backup is quite large, as it includes all data, and upper level data is normally exponentially larger than just level 0 data

Text Data Export

Business Rules can be written to export data to the Inbox/Outbox as a delimited file with a few formatting options. The entire database can be included. With fix statements, specific data can be isolated. So, forecast could be exported to one file, plan to another, and actuals to a third. Specific accounts, entities, and/or products can be isolated in cases when specific data was inadvertently changed or deleted. This file is a text file that can be opened in any text editor, Microsoft Excel, a database, or any other application that you use to view or manipulate text files.

Example Business Rule

[code sample not preserved]
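The embedded code sample did not survive this copy of the post. A minimal sketch of the kind of business rule described above looks like the following; the member names, scenario, and file name are placeholders, not from the original:

```
/* Sketch: export level 0 Forecast data to the cloud inbox as a comma-delimited file */
SET DATAEXPORTOPTIONS
    {
    DataExportLevel LEVEL0;        /* level 0 blocks only */
    DataExportDynamicCalc OFF;     /* skip dynamic calcs - much faster */
    DataExportColFormat ON;        /* columnar, readable by Excel/databases */
    DataExportOverwriteFile ON;    /* replace a prior export */
    };
FIX("Forecast", "Working", "FY18", @RELATIVE("Total Entity", 0))
    DATAEXPORT "File" "," "/u03/lcm/ForecastExport.txt";
ENDFIX
```

The FIX isolates the slice to export, exactly as described above; narrowing or widening it changes what lands in the file.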

Some Hints

There are a few things that you may encounter and be a little confused about, so the following are a few things that might help.

1. To see the data export, it must be exported to /u03/lcm/, which is the equivalent of your inbox. Any file name can be used.
2. Setting DataExportLevel to 0 will export the level 0 blocks, not the level 0 members. If there are any stored members in any of your dense dimensions, they will be exported unless the dimension is also in the fix to include ONLY level 0 members.
3. The fix statement works the same as a fix statement in any business rule, so the data to be exported can be easily defined.
4. In my experience, exporting dynamic calculated members drastically increases the time of the export.
5. The export options are all pretty logical. Some work in conjunction with each other and others are ignored depending on dependent setting values. These are documented for version 11.1.2.4 here.
6. This process can be automated with EPM Automate and include the download and time stamp of the backup for later use.
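Hint 6 above can be scripted. A rough sketch using EPM Automate from PowerShell follows; the rule name, file name, URL, and credentials are placeholders (the business rule is assumed to write ForecastExport.txt to the Outbox):

```powershell
# Placeholders - replace with your own values
$user  = "ServiceAdmin"
$pass  = "password"
$url   = "https://yourpod.pbcs.us2.oraclecloud.com"
$stamp = Get-Date -Format "yyyyMMdd_HHmm"

epmautomate login $user $pass $url
epmautomate runbusinessrule "ExportLevel0Data"    # the rule containing DATAEXPORT
epmautomate downloadfile "ForecastExport.txt"     # pull the export from the Outbox
Rename-Item "ForecastExport.txt" "ForecastExport_$stamp.txt"   # time stamp for later use
epmautomate logout
```

Scheduling this nightly gives a level 0 text backup alongside the Migration snapshot, which is the combination recommended in the conclusion below.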

Conclusion

There are benefits to both types of backups. My preference is to either run both nightly, or run just the Business Rule. By having both, the administrator has the option of restoring the data as needed, in the way that is most effective. Having both provides the ultimate flexibility. If space is an issue, exclude the data option in the Migration and just run the business rule.

From Oracle’s Documentation

DataExportLevel ALL | LEVEL0 | INPUT

- ALL—(Default) All data, including consolidation and calculation results.
- LEVEL0—Data from level 0 data blocks only (blocks containing only level 0 sparse member combinations).
- INPUT—Input blocks only (blocks containing data from a previous data load or grid client data-update operation). This option excludes dynamically calculated data. See also the DataExportDynamicCalc option.

In specifying the value for the DataExportLevel option, use these guidelines:

- The values are case-insensitive. For example, you can specify LEVEL0 or level0.
- Enclosing the value in quotation marks is optional. For example, you can specify LEVEL0 or “LEVEL0”.
- If the value is not specified, Essbase uses the default value of ALL.
- If the value is incorrectly expressed (for example, LEVEL 0 or LEVEL2), Essbase uses the default value of ALL.

Description

Specifies the amount of data to export.

DataExportDynamicCalc ON | OFF

- ON—(Default) Dynamically calculated values are included in the export.
- OFF—No dynamically calculated values are included in the report.

Description

Specifies whether a text data export excludes dynamically calculated data.

Notes:

Text data exports only. If DataExportDynamicCalc ON is encountered with a binary export (DATAEXPORT BINFILE …) it is ignored. No dynamically calculated data is exported.

- The DataExportDynamicCalc option does not apply to attribute values.
- If DataExportLevel INPUT is also specified and the FIX statement range includes sparse Dynamic Calc members, the FIX statement is ignored.

DataExportNonExistingBlocks ON | OFF

- ON—Data from all possible data blocks, including all combinations in sparse dimensions, are exported.
- OFF—(Default) Only data from existing data blocks is exported.

Description

Specifies whether to export data from all possible data blocks. For large outlines with a large number of members in sparse dimensions, the number of potential data blocks can be very high. Exporting Dynamic Calc members from all possible blocks can significantly impact performance.

DataExportPrecision n

n (Optional; default 16)—A value that specifies the number of positions in exported numeric data. If n < 0, 16-position precision is used.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on precision (accuracy). Depending on the size of a data value and number of decimal positions, some numeric fields may be written in exponential format; for example, 678123e+008. You may consider using DataExportPrecision for export files intended as backup or when data ranges from very large to very small values. The output files typically are smaller and data values more accurate. For output data to be read by people or some external programs, you may consider specifying the DataExportDecimal option instead.

Notes:

- By default, Essbase supports 16 positions for numeric data, including decimal positions.
- The DataExportDecimal option has precedence over the DataExportPrecision option.

Example

[code samples showing the initial data load values and exported data format not preserved]

DataExportDecimal n

Where n is a value between 0 and 16.

If no value is provided, the number of decimal positions of the data to be exported is used, up to 16 positions, or a value determined by the DataExportPrecision option if that is specified.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on legibility; output data is in straight text format. Regardless of the number of decimal positions in the data, the specified number is output. It is possible the data can lose accuracy, particularly if the data ranges from very large values to very small values, above and below the decimal point.

Notes:

- By default, Essbase supports 16 positions for numeric data, including decimal positions.
- If both the DataExportDecimal option and the DataExportPrecision option are specified, the DataExportPrecision option is ignored.

Example

[code samples showing the initial data load values and exported data format not preserved]
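The documentation examples were lost from this copy. As a sketch of how DataExportDecimal is used in practice (the members, decimal count, and path here are illustrative placeholders):

```
/* Sketch: force every exported value to two decimal places for legibility */
SET DATAEXPORTOPTIONS
    {
    DataExportDecimal 2;          /* 2 decimal positions, straight text, no exponents */
    DataExportLevel LEVEL0;
    DataExportOverwriteFile ON;
    };
FIX("Actual", "FY18")
    DATAEXPORT "File" "," "/u03/lcm/ActualsTwoDecimals.txt";
ENDFIX
```

Per the description above, a stored value such as 678.15678 would be written with two decimal places; swapping DataExportDecimal for DataExportPrecision would instead preserve up to 16 significant digits, possibly in exponential format.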

Output Format Options

DataExportColFormat ON | OFF

- ON—The data is output in columnar format.
- OFF—Default. The data is output in non-columnar format.

Description

Specifies if data is output in columnar format. Columnar format displays a member name from every dimension; names can be repeated from row to row, enabling use by applications other than Essbase tools. In non-columnar format, sparse members identifying a data block are included only once for the block. Non-columnar export files are smaller, enabling faster loading to an Essbase database.

Notes

Do not use the DataExportColFormat option in combination with the DataExportRelationalFile option, which already assumes columnar format for files destined as input files to relational databases.

Example

[code sample not preserved]

DataExportColHeader dimensionName

Description

Specifies the name of the dense dimension that is the column header (the focus) around which other data is referenced in the export file. Use the DataExportColHeader option only when you export data to a text file. For example, if from Sample Basic the Year dimension is specified, the output data starts with data associated with the first member of the Year dimension: Year. After all data for Year is output, it continues with the second member: Qtr1, and so on.

Notes

MaxL, ESSCMD, and Essbase exports do not provide a similar capability. With these methods, Essbase determines the focal point of the output data.

Exporting through Report Writer enables you to specify the header in the report script.

Example

[code sample not preserved]

Specifies Scenario as the page header in the export file. The Scenario dimension contains three members: Scenario, Actual, and Budget. All Scenario data is shown first, followed by all Actual data, then all Budget data.
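The lost snippet presumably matched the surviving description; a sketch consistent with it would be:

```
/* Sketch: make Scenario the header around which exported data is organized */
SET DATAEXPORTOPTIONS
    {
    DataExportColHeader "Scenario";
    };
```

This is only the option block; it would precede a FIX/DATAEXPORT pair like the earlier example.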

DataExportDimHeader ON | OFF

- ON—The header record is included.
- OFF—Default. The header record is not included.

Description

Use the DataExportDimHeader option to insert the optional header record at the beginning of the export data file. The header record contains all dimension names in the order as they are used in the file. Specifying this command always writes the data in “column format”.

Example

[code sample not preserved]

Specifying the DataExportDimHeader ON option while exporting Sample Basic writes the data in column format, with common members repeated in each row. The data begins with a dimension header, as shown in the first two rows of the example file below:

[code sample not preserved]

DataExportRelationalFile ON | OFF

- ON—The output text export file is formatted for import to a relational database.
  - Data is in column format; sparse member names are repeated. (The DataExportColFormat option is ignored.)
  - The first record in the export file is data; no column heading or dimension header is included, even if specified. (The DataExportColHeader and DataExportDimHeader options are ignored.)
  - Missing and invalid data is skipped, resulting in consecutive delimiters (commas) in the output. The optional “missing_char” parameter for DATAEXPORT is ignored.
- OFF—Default. The data is not explicitly formatted for use as input to a relational database.

Description

Using the DataExportRelationalFile option with DATAEXPORT enables you to format the text export file to be used directly as an input file for a relational database.

Example

[code sample not preserved]

Processing Options

DataExportOverwriteFile ON | OFF

- ON—The existing file with the same name and location is replaced.
- OFF—Default. If a file with the same name and location already exists, no file is output.

Description

Manages whether an existing file with the same name and location is replaced.

DataExportDryRun ON | OFF

- ON—DATAEXPORT and associated commands are run, without exporting data.
- OFF—Default. Data is exported.

Description

Enables running the calculation script data export commands to see information about the coded export, without exporting the data. When the DataExportDryRun option value is ON, the following information is written to the output file specified in the DATAEXPORT command:

- Summary of data export settings
- Info, Warning, and Error messages
- Exact number of blocks to be exported
- Estimated time, excluding I/O time.

Notes

- The DataExportDryRun option does not work with exports to relational databases.
- If you modify the script for reuse for the actual export, besides removing the DataExportDryRun option from the script you may want to change the name of the export file.

Example

[code sample not preserved]

Adventures in Groovy – Part 9: Ohio Valley OAUG Presentation

I was lucky enough to be invited to talk about the new Groovy Calculations in PBCS and ePBCS at the Ohio Valley OAUG meeting today. If you have read the Groovy series, you know how strongly I feel about the advancements in Hyperion Planning with the addition of Groovy Calculations. I want to share the presentation with a wider audience. This is a functional overview for those who are new to the concepts. It also introduces readers to developing their first Groovy Calculation, and provides some examples.

Updating EPM Automate Just Got Easier

Introduction

One of the challenges with EPM Automate has been eliminated this month. Although it was a minor issue, the need to update EPM Automate regularly was something that had to be considered monthly. Administrators of PBCS do not always have access to the on-premise footprint, like a Windows VM, that runs the automation. Even more frequently, access to the production VM is only available to IT staff, so updating that environment is more strict and has to be scheduled. That schedule doesn’t always sync up with the changes in PBCS.

Update Command

As of the 02.18 release, a new command is available. The “update” command will automatically download, and silently install, the newest version of the EPM Automate utility. Once logged in, execute the following command.

[code sample not preserved]

If you are a frequent visitor, you know I am a fan of PowerShell. All the automation I do with EPM Automate in the Windows environment utilizes this free scripting tool. This command has been added to all my new projects so there is no manual effort in keeping the utility current. This also eliminates any issues that pop up due to incompatibility issues with PBCS.

In my reusable scripts, this new function has been added.

[code sample not preserved]

During a nightly process, the function is referenced. If the request fails, the administrators are emailed.

[code sample not preserved]
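The original reusable function was lost from this copy. A sketch of the approach described is below; the function name, addresses, and SMTP server are my own placeholders, and the self-update command is named `upgrade` in the EPM Automate documentation (the post refers to it as “update”):

```powershell
function Update-EPMAutomate {
    # Assumes an epmautomate login has already been executed in this session
    $result = epmautomate upgrade    # self-update to the newest EPM Automate
    if ($LASTEXITCODE -ne 0) {
        # On failure, notify the administrators (placeholder SMTP details)
        Send-MailMessage -To "admins@example.com" -From "epm@example.com" `
            -Subject "EPM Automate update failed" `
            -Body ($result | Out-String) `
            -SmtpServer "smtp.example.com"
    }
}
```

Called at the top of a nightly process, this keeps the utility current with no manual effort, as described above.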

Summary

This is a welcome addition. Now, administrators and application owners don’t have to worry about using new features or keeping EPM Automate in sync with the active version of PBCS. As the great Forrest Gump would say – “One Less Thing.”

Supercharge PBCS with PowerShell

Last year I presented an in-depth overview on PowerShell and how it can be utilized in the Hyperion environment. I have been asked many times to share it. The presentation is a technical presentation and is meant to provide a strong introductory level foundation for anybody that wants to start using PowerShell to automate repetitive tasks. I have built a large library of shared functions that can be used to automate PBCS and ePBCS, and I plan to share pieces of this in future posts.

For now, anybody that is interested in learning PowerShell, or has used it and doesn’t know why some things work and others don’t, this might prove to be a valuable resource.

Ohio Valley OAUG – Getting Groovy in Louisville

I have been selected to speak at the OVOAUG on February 16, 2018. I have been there before, and it is a very nice group of people to engage with. If you are in the area, or would like to hear more about how Groovy in PBCS can change the landscape of performance, user interaction, improvement of data input, and reduced user frustration, please go to http://ohio.communities.oaug.org/ and register. I would love to see as many of you there as possible.

Here is the agenda. It is going to be a functional overview, but we will touch on how to start writing Groovy, and if you show up, I will be more than happy to talk before and/or after the session.

PBCS Data Map / Smart Push Has Data Volume Limits

Introduction

When moving data in PBCS with Data Maps or Smart Pushes, there are limits on how much data can be moved. The amount of data can be seen in the logs, and looks something like this.

Failure

Exporting data…
Exported data file(s) size is: 207.1 MB.
Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

Success

Exported data file(s) size is: 464.7 MB.
EXPORT elapsed time: 39584
IMPORTING – AppName: AreakFin
TRANSFORM elapsed time: 63634
IMPORTING elapsed time: 21166
TOTAL elapsed time: 124553

Prior to the February 2018 release, the following did not always hold true. If you are/were seeing inconsistencies, see Bug Report: Push Data failed. It also includes information about how the data cap works, as it is different between Data Maps and Smart Pushes, which is worth reading.

Data Movement Limits Identified

I got the following information from Oracle, and it is useful if you are using the data movement functionality. When these are developed, it is a good idea to evaluate the size and plan for growth. If the production data movements are nearing the thresholds, it is recommended to be proactive and try to reduce the POV that is used to move the data. If it can’t be reduced, one option is to split it into multiple pushes, which can be done with Smart Pushes on the Data Form save, or with Groovy. Groovy also allows you to further condense the POV by dynamically changing the POV based on the cells edited, which is the most productive and efficient way to handle these.

So, here is what was documented. The data limits imposed on the movement methods are below.

- There is not a cap when running a Data Map
- When executing the following, there is a cap of 100MB
  - Smart Push on a Data Form
  - Smart Push via a Groovy Calculation
  - Data Map via a Groovy Calculation

Summary

If you are not seeing this, I would recommend opening a ticket with Oracle to resolve. I will be writing a post explaining how to execute and override POVs in Smart Pushes and Data Maps with a Groovy Calculation in the near future, so look for an article in my Adventures in Groovy series.

Bug Report: Push Data failed. Error: Exported data size violates permissible amount: 100 MB

Introduction

Data Map Error:

Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

If you are confused, join the club. The results are inconsistent, as some data pushes are successful that are over the 100MB limit. So, why the following error?

Exporting data…
Exported data file(s) size is: 207.1 MB.
Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

Clarification

A point of clarification for those of you who are new to data maps and smart pushes. If you think they are the same thing, here is the clarification from Oracle, in my words.

- A Data Map is any data map executed from the Data Map area, whether it is through the UI, EPM Automate, or the REST API.
- A Smart Push is essentially any Data Map executed from a Data Form.

Although they seem like the same function, they have different logical areas in execution. My understanding is that a Data Map should never hit a cap on memory. A Smart Push does have a cap. Not only that, the way it was explained to me is that there is a hard cap on how much memory Smart Pushes can consume, and this is a global limit, not a limit per Smart Push. So, the reason you are experiencing inconsistent results with Smart Pushes is quite simple. The more Smart Pushes that are executed in a time window, the more memory is used. So, you may never have a problem in Test, or at night, but during UAT or in Prod, successful execution may be intermittent. The reason is that when these are run periodically, that limit may never be reached. Run multiple times by multiple people in short durations, they will cause the limit to be consumed.

This bug only applies to Data Maps.

The Problem

The same Data Map, executed twice, results in two different outcomes.

Failure

Exporting data…
Exported data file(s) size is: 207.1 MB.
Push Data failed. Error: Exported data size of data map that is being executed from groovy is more than permissible amount: 100 MB.

Success

Exported data file(s) size is: 464.7 MB.
EXPORT elapsed time: 39584
IMPORTING – AppName: AreakFin
TRANSFORM elapsed time: 63634
IMPORTING elapsed time: 21166
TOTAL elapsed time: 124553

So, if there is a cap at 100MB, what gives? If you have seen the following error, and wondered why the same Data Map sometimes runs and sometimes fails, it is related to Bug 27161430.

The Fix

Although support was difficult to navigate, I was lucky enough to be at an Oracle session in Virginia and talked to a developer. He immediately requested the ticket number and said flat out, this is a problem. I don’t want to name names, so a huge thank you to an unidentified developer at Oracle for giving me a few minutes and helping, because I don’t believe it would have been escalated to the development team otherwise.

The ticket was updated yesterday, and the fix is slated to be released in February. Although this is an internal bug, here are the details.

Bug 27161430 – PBCS: EXPORTED DATA SIZE OF DATA MAP THAT IS BEING EXECUTED FROM GROOVY IS MORE

Bug Report: Groovy SubstitutionVariable Class Not Functioning

If you have jumped into Groovy Calculations, one of the things you likely would try to do is grab a value for a sub var. Hopefully, you haven’t spent too much time before reading this. I wasted a ton of time trying to get this to work before I opened a ticket with Oracle. This class is NOT available yet and was inadvertently included in the public docs at https://docs.oracle.com/cloud/latest/epm-common/GROOV/. The development team told me they are going to remove it from the API docs.

Without it, the best way I have found to get this value is by adding it to a grid and pulling the dimension value from that column/row. For example, if your periods are in the columns and you need the value of a substitution variable that holds the current month, add the substitution variable to the first column for the variable that holds the current month of actuals, and hide the column so the users are not confused with its purpose/location. If you make use of getCellWithMember, and don’t pass it any parameters, it will pull the top left cell in the grid, even if it is hidden. Since this is a period member, use the getPeriodName method. If it is a custom dimension, getMemberName will provide what you need.

Here is an example. The grid’s first column is hidden, and the period is set to the substitution variable that represents the last month of actuals for the year.

[code sample not preserved]

The sCurMonth variable can be used where needed in the Groovy calculation to obtain the substitution variable value.
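The original snippet was lost from this copy. A sketch of the workaround described above follows; the grid layout is the one assumed in the paragraph (hidden first column holding the sub var’s period member), and the exact method names should be checked against the EPM Groovy API docs:

```groovy
// Sketch: runs as a Groovy calculation attached to the form whose hidden
// first column holds the substitution variable's period member.
Cell cell = operation.grid.getCellWithMembers()   // no args: top-left cell, even if hidden
String sCurMonth = cell.getPeriodName()           // period member name, e.g. the last month of actuals
// For a custom dimension, cell.getMemberName("DimensionName") would be used instead
```

sCurMonth can then be spliced into member formulas or POV overrides elsewhere in the calculation.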

Bug Report: EPM Automate

A bug with EPM Automate has been identified. This is not replicated on every version or client. Please pay attention to any EPM Automate updates installed. In the past, I was able to install the latest version without any issues. Currently, the install prompts users to uninstall the older version. In the past, this worked as expected, but now, when selected, this has no effect and the new EPM Automate is NOT installed, leaving you with the existing version. I noticed that this goes VERY fast, like nothing was updated. If you experience a similar, sub-second installation, you may have the same issue.

Oracle has assigned a bug number to this issue, but no release date has been assigned to a fix. The following is not a public bug.

Bug 25429167 : EPMAUTOMATE NO LONGER PROPERLY REMOVES OLD VERSION.

When you update EPM Automate, validate the install worked by running EPM Automate and checking the version number.

The version should generally reflect the date of download, if you download this from Oracle’s website. The version above signifies a release of December, 2017 (17.12).

If the version doesn’t change and shows a prior install version date, go to Control Panel, select Programs, and Uninstall a Program. Find EPM Automate and uninstall it. Once this is completed, install the newest version from Oracle’s website and you should be good to go.

Happy Holidays!