
OTech Magazine
Fall 2013

A brand new magazine
The art of content security
Pick the right integration infrastructure component
PL/SQL Function Statistics
Database 12c for Developers
Successfully combining UX and ADF
WebLogic on the Oracle Database Appliance
And more...


Editorial

A brand new magazine

This adventure started about half a year ago as just a crazy idea: ‘What about releasing a magazine?’ Loads of people declared me an idiot (a general type of remark about most of my ideas, so I didn’t really listen to those). But almost as many people thought it was simply cool.

And because of both groups of people this magazine is here.

And what an adventure it has been so far. In the beginning it was just about offering something else. I wanted to create something other than the usual Oracle Magazine (Oracle telling how good Oracle is) or the usual blog (consultants telling how good they are).

In the small world around the technology that binds us – Oracle – there are quite a lot of truly great personalities. The knowledge is extensive and I wanted to offer them a platform that really has something to offer. A magazine, I imagined.

People working with Oracle software want good information. Independent information. But independent and well-written information is pretty hard to come by. At least in the world that’s called Oracle.

When I started working on this magazine I had no idea if it would work – as a matter of fact I still don’t, you are the judge of that – but the enthusiastic response of a lot of highly regarded professionals in the Oracle scene made me work hard on this first issue.

This magazine is – for now – just fun and games. It started as a hobby. If this first edition catches on, there will be a second. If the second catches on, a third.

Without content there is no magazine. Therefore I would like to express my deepest gratitude to the authors of this magazine. Troy Allen, Lucas Jellema, Billy Cripe, Sander Haaksma, Marcel Maas, Simon Haslam, Peter Paul van de Beek, Michael Rosenblum and Lonneke Dikmans: thank you so much for joining me on this adventure!

Cheers!

Douwe Pieter van den Bos
September 2013



The Picture Desk


Contents

The art of content security – page 7
Database 12c for Developers – page 12
Blending BI and Social Competitive Intelligence for deep insight into your business – page 20
The Book Club – page 23
Successfully combining UX with ADF – page 25
WebLogic on the Oracle Database Appliance Virtual Platform – page 29
Pick the right integration infrastructure component – page 33
Oracle PL/SQL Function Statistics – page 37
Stop generating your UI, Start designing IT! – page 41


The Picture Desk


WebCenter Content

The lost art of content security
Troy Allen, TekStream Solutions

Over the years, I’ve had opportunities to work with many organizations, ranging from very small to some of the most recognizable brands in the world, and each one of them had the same requirements and questions: “I need to lock down our IP (intellectual property).” “We can’t have people digging through our files as they please.” “I only want my department to see our stuff, except for those others who need to see it.” “How can I restrict access to our files?” “Do I need a separate security model for each department?”

In most cases, businesses understand that they need to secure their information in some fashion, but have no idea where to begin at an enterprise level. Security tends to be left in the hands of department managers, which often leads to silos of information repositories and duplication of effort, and content, across the entire company. Additionally, organizations find themselves with an overkill of content security (making it too difficult to work with their repositories) or virtually no security at all (leaving the organization at risk for data loss and corruption).

Before someone can pick up paints, a brush, and a canvas to re-create a Picasso, they need to have a good idea of what they want to create and have an understanding of the tools they will use and how to mix and blend colors to get the desired results. Creating a content security strategy abides by the same requirements: know what needs to be accomplished and understand the tool and how security elements blend to get the desired results.

Every content management application on the market provides some level of security and a defined set of elements to control user access and permissions to content. This article focuses on the Oracle WebCenter Content application (with the principles applying also to WebCenter Records Management), but the overall strategies outlined here can be applied to other repository tools.

Learning to use the Brushes

Oracle WebCenter Content (WCC) utilizes Security Groups, Roles, Accounts and Access Control Lists (ACLs) to control contribution (the ability to add new or edit existing content), consumption (the ability to search for and utilize or view content), and management (the control of back-end processes of content, including designing and managing workflows) of content.

Security Groups act like storage containers within WCC. Content must be assigned to a Security Group, but it can only be assigned to one at a time. WCC utilizes Roles like a set of keys to grant users permissions to the storage containers, or Security Groups.

Roles provide users with specific permissions (Read, Write, Delete, and Administrate) to groups of content. Users can be assigned more than one Role, and Roles can grant permissions to more than one Security Group.

Many legacy WCC customers still only utilize Security Groups and Roles to secure their content and have faced situations where the number of Security Groups and Roles that have to be created becomes unmanageable, or they simply cannot get to the level of granularity that is required. (As a side note, Oracle recommended no more than 50 Security Groups prior to the WCC 11g release. From an operational standpoint, this is still a good best practice to keep in mind.)

In order to meet the demands of more complex security requirements, Accounts were introduced to provide granular control of content. If we visualize Security Groups as a filing cabinet, then Accounts would be the folders that are held within it. Sometimes you have content that isn’t in a file folder, but is in the filing cabinet drawer; hence, a piece of content is being submitted to a Security Group without having an Account applied to it. In physical filing cabinets, file folders can often contain more file folders, providing a hierarchy of storage – cabinet drawer, file folder, file folder, and then content.

WCC Accounts can be difficult to grasp at first, but make perfect sense once the proverbial “light bulb” turns on. Account structures are identical across Security Groups. To put it another way, file folders are organized the exact same way across all cabinet drawers. Another point to remember is that Accounts are hierarchical in nature.

Another way to think about Accounts is to visualize a set of stairs that you are walking down. The Account structure or “stairs” has a top level of “Employee”, as an example, with the next step down being “Marketing”. We can continue to add more steps down, or sub-accounts, such as Employee/Marketing/Creative/ArtDept. Any user set at the top of the “stairs”, or inserted into the top Account and given Read access, will have Read access all the way down the stairs or Account structure. The user would have Read access to Employee and all the accounts down to, and including, Employee/Marketing/Creative/ArtDept.

Continuing on the previous Account example, Employee, Employee/Marketing, Employee/Marketing/Creative, and Employee/Marketing/Creative/ArtDept would exist in both the Public and Secure Security Groups. In order to see content in Employee/Marketing under the Public Security Group, Bob would have to at least be assigned the Public_Consumer Role AND either Read to the Account Employee, or Read to the Account Employee/Marketing. Mary would likewise need the Secure_Consumer Role to the Secure Security Group AND at least Read to Employee, Employee/Marketing, or Employee/Marketing/Creative in order to see content in the Secure Security Group stored under the Account Employee/Marketing/Creative.

WCC evaluates the Role and Account assignments of each user to determine what the actual combined permission set is for any given content item.

When evaluating a user’s Roles, permissions between Roles that grant different access rights to the same Security Group will result in the user receiving the greatest permission between the two Roles.



WCC performs a similar operation when evaluating the permissions granted between Account assignments for a user. This becomes a bit more complex given that Accounts are hierarchical. For example, a user given Read and Write access for the Account Employee AND given Read access for the Account Employee/Marketing will actually have Read and Write access to Employee/Marketing. Since Employee is a higher level Account than Employee/Marketing, and the access rights granted are also greater than those for Employee/Marketing, Read Write access prevails.

If we changed permissions so that the user only has Read access to Account Employee and Read Write access to Account Employee/Marketing, then the user still retains the greater permission for the Account Employee/Marketing. However, the user’s access rights to the Account Employee do not change.

If you recall the correlation of Accounts to stairs, then you also need to visualize that you can only go one direction on them: down. An administrator can assign the user entry to an Account structure anywhere within its levels, but the permissions granted are ONLY valid for that entry point and below. A user assigned to Employee/Marketing with Read access would NOT be able to see content that was assigned to the higher level Account Employee.

As mentioned earlier on, Accounts are designed to provide a level of granularity to content security controls by providing a deeper level of control than Roles and Security Groups alone. Because every content item has to be assigned to one, and only one, Security Group, users must have at least one Role that grants access to that Security Group in order to see content. A user may have an Account assignment with Read, or greater, permissions, but may still not have access to the content if they cannot enter the Security Group it has been assigned to.

WCC evaluates the rights a user has been assigned to a Security Group through the user’s Role assignment. WCC then evaluates the permissions assigned to the user’s Account assignment. This evaluation produces the combined permission that the user ultimately has on the content item. If the user’s Role grants Read, Write, Delete, and Administrate and his or her Account assignment is Read and Write, then WCC intersects the two, resulting in Read Write access for this particular case.

WCC Access Control Lists (ACLs) provide another layer of controls for securing content. However, ACLs are counterintuitive in how they work compared to how most people expect them to work.

ACLs are assigned to individual users, to groups of users (utilizing WCC’s Aliases), or to defined WCC Roles. Any combination of these may be applied to control users across multiple levels.

Most users would expect that when they assign an ACL to content or a folder, the permission assignment would “overrule” the security normally granted to other users for that piece of content. ACLs actually work in the same fashion that WCC Roles and Security Groups work with WCC Accounts. The actual permission granted to a user for a piece of content or a folder when ACLs are applied is an intersection of the overall permissions evaluated from Roles, Accounts, and ACLs.

For example, Jill has Read Write access to all content assigned to the WCC Security Group Public through her WCC Role of Public_Contributor. Another user checks in a piece of content, assigns it to the Public Security Group, and then directly assigns “R” permission to Jill. The intersection between her WCC Role of Read Write and the applied ACL of Read is Read for this particular content item.

A more complex version, including Accounts, Roles, and ACLs, means that WCC has to evaluate all the credentials and determine the final security for the user on a specified content item or folder. A user with Read Write to a Security Group, Read Write Delete to an Account, and an ACL assignment of Read Write Delete Admin to a piece of content with the same Security Group and Account will receive Read Write permission to the document.
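To make these evaluation rules concrete, here is a minimal PL/SQL sketch (not WCC code; the permission strings are taken from the example above) that computes the effective permission as the intersection of the Role, Account and ACL grants:

declare
  l_role_perms    varchar2(4) := 'RW';   -- greatest grant across the user's Roles
  l_account_perms varchar2(4) := 'RWD';  -- greatest grant from the Account hierarchy
  l_acl_perms     varchar2(4) := 'RWDA'; -- directly assigned ACL entry

  -- keep only the permission letters that appear in both sets
  function intersect_perms(p1 varchar2, p2 varchar2) return varchar2
  is
    l_result varchar2(4);
  begin
    for i in 1 .. length(p1) loop
      if instr(p2, substr(p1, i, 1)) > 0 then
        l_result := l_result || substr(p1, i, 1);
      end if;
    end loop;
    return l_result;
  end;
begin
  dbms_output.put_line('Effective permission: '
    || intersect_perms(intersect_perms(l_role_perms, l_account_perms), l_acl_perms));
  -- prints: Effective permission: RW  (matching the outcome described above)
end;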

Painting the Pictures

An artist wanting to paint a picture of the ocean can do it in an infinite number of ways and styles. An administrator wanting to apply security to their enterprise content can do it in many different ways, based on several different approaches. The toughest part, for the artist and the administrator, is to determine exactly what they want to paint or what they want to secure, and how.

In terms of working with the “enterprise”, it can be difficult to determine what is needed since so many people and groups can be impacted. In my experience, a majority of organizations have two types of content, Company Public and Departmental Restricted. I also see a rising need for collaborative security (usually driven by organizations that are familiar with departmental collaboration and want to apply a group level security across an enterprise). A final model that will be discussed is the Exception model, which provides a structured approach to departmental security and managed exceptions to permit non-departmental access or restrictions within a department.

Company Public and Departmental Restricted (CPDR)

In a CPDR model, companies see the majority of their content as being company public domain, but should have controls for who can edit it. This model also addresses the need to have content that is restricted to departmental use. “What happens if an employee performs a search and finds a document that was presumed to be secure, or receives a link to such a document and is able to open the link and view it?” In most organizations, and for the majority of content, the answer is “as long as they cannot change it, it is okay.” It seems that many companies view the targeting of content as the same as content security (for this article, we will only focus on security). Occasionally, the answer is, “that would be very bad, that is a classified document or intended for a certain management level.” The second answer is usually given for legal, financial, human resources, or other controlled departments. The CPDR model allows for both answers by maintaining a minimum set of Security Groups and Roles and relying on two types of Account structures.

CPDR utilizes two primary Account structures: one for providing Read access to all public content while controlling who can edit it, and another for restricting both the consumption and contribution of content to a department structure. It also utilizes two primary Security Groups, for Public content and for Controlled or Restricted content, with Roles for each to control Read access, Read Write access, and Read Write Delete Admin access. By doing so, organizations can submit content for general consumption to the Security Group Public and assign an account under the Employee tree to provide global consumption but controlled contribution. Automatically assigning all employees the Public_Consumer Role and Read to the root of the Employee account structure makes this possible. Content that is specific to a particular department, and should only be accessed by that department, can be assigned either to the Public or Restricted Security Groups and then to the appropriate level within a Department Account structure. Utilizing the Restricted Security Group allows departments to have Departmental Public content as well as “hidden or restricted” departmental content that only a select few within each department can view.


The following outlines the elements of a CPDR modeled security implementation:

This model lends itself to large organizations taking an enterprise-wide view of their content. This model is highlighted by how it:

• Minimizes the overall number of Roles and Accounts (which relate to LDAP Groups) that must be managed
• Minimizes the overall number of Role and Account assignments typical users must be assigned
• Provides the ability to designate enterprise content or department specific content

Collaborative Security

Some content management applications are based on the concept that groups of users collaborate on content and need to add participants in a more fluid manner. While this strategy works well on a small scale, it rarely lends itself to supporting “published” or finalized content that needs to be shared across the enterprise. In most cases, organizations will have multiple instances or installs of these types of systems to support each department.

To make Collaborative Security work at the enterprise level, it is recommended that organizations start with the CPDR model as a foundation. By adding WCC ACLs (Access Control Lists) into the mix, companies can specify a security model to support the ad hoc assignment of permissions to users, groups of users, or even WCC Roles. As described earlier in this article, ACLs act as an additional filter on the security that has already been assigned to users. Based on that, additional Security Groups and Accounts may be required.

In most instances, adding a “Collaboration” Security Group will suffice, with a single Role of Collaboration that grants Read, Write, Delete, and Admin, without having to add any additional Accounts (unless the company wants to have strict limits on participation). All users should have the default Role of Collaboration assigned to them.

When a user creates a Folder for their collaboration efforts, they need to select the Collaboration Security Group, and then assign a specific set of users with defined ACL rights, groups of users with defined ACL rights, and/or WCC Roles with ACL rights. The same holds true for content items as well. The net result can be illustrated by the following example:

“All users of the Company-x repository have been assigned the LDAP Group Collaboration (which maps to the WCC Role of Collaboration), granting them Read, Write, Delete, and Admin (RWDA) privileges to all content and folders assigned to the Collaboration WCC Security Group. Bob, a project manager, needs to create a collaboration folder in the repository and wants to assign specific rights to users. He creates the folder and assigns it the WCC Security Group of Collaboration. Bob then assigns himself as a direct user with RWDA ACL permissions. (It is best practice for the creator of folders and content in the Collaboration model to assign themselves the RWDA permissions so that someone else cannot inadvertently override their access.) Bob also directly assigns Mary RW ACL permissions, Beth RWD permissions, and the Finance Group (controlled by WCC Aliases) R permission.

“Mary is new to Company-x and has not been assigned to the LDAP Group Collaboration yet. Because WCC evaluates the intersection of Roles to Roles, Roles to Accounts, Roles to ACLs, Accounts to ACLs, and Roles to Accounts to ACLs, Mary will not be able to access the folder or any of the content Bob created. Her permission intersection is null (since she has no rights to the Security Group Collaboration, there is nothing to intersect with her ACL permissions of RW). Beth on the other hand does have the LDAP Group Collaboration, and her intersection of permissions gives her RWD to the folder and its content. Assuming that the users assigned to the Finance Alias Group have the LDAP Group of Collaboration, their intersection of permissions is Read for the folder and its content.”

The following outlines the elements of a Collaboration modeled security implementation:

This model lends itself to large organizations taking an enterprise wide view of their content while supporting collaboration groups for content.

This model is highlighted by how it:

• Minimizes the overall number of Roles and Accounts (which relate to LDAP Groups) that must be managed
• Minimizes the overall number of Role and Account assignments typical users must be assigned
• Provides the ability to designate enterprise content or department specific content
• Provides flexibility for users to assign permissions to content and folders without having to extend or add Roles and Accounts for each new project or collaboration group

Exception Model

Some organizations have always managed their content by departments or divisions, with very little public sharing of content. In many ways, this is like the CPDR model minus the Account tree that provides global access with controlled contribution. In addition, many organizations take a very controlled and structured approach to assigning security privileges and do not permit the ad hoc assignments that ACLs bring to the table.

Managing exceptions comes from having folder structures in which a division or department expects their own people to see the structures and content, does not let other departments or divisions have access, but also hides folders within a department or division from some of that department or division’s own people. The model also assumes that the hierarchical nature of Accounts is being utilized for access to the folders.

A company might have a division that contains a folder for all management documents. The Account for that folder may be assigned to DIVX\MGT (division x\management account). It is expected that all managers will have access to the management folder and all of its subfolders and content EXCEPT for the Employee Files folder, which only certain people should be able to access. If ACLs were used, then a manager within Division X could assign specific users to that folder. The downside to doing that is that anyone on the ACL list with at least Read and Write permission could grant other users access to that folder and its content. The company in this example realizes that ACLs could cause serious issues with ad hoc granting of security, so they have decided to use the Exception model instead. To protect the Employee Files folder and all of its content, they have assigned it to an exception account E\DIVX\MGT\EFILES (E to denote an exception Account tree, DIVX for the division, MGT for the management level access, and EFILES to secure employee files). If the division had assigned it to the normal DIVX\MGT account of the parent folder, then anyone who could access the Management folder could see the Employee Files folder. By assigning the exception account, users must have specific rights to the account in order to see it at all.

Utilizing Exception Accounts will increase the number of Accounts that need to be managed and added to LDAP as groups, but it does provide a highly restricted security model with tight controls for granting permission to users.


The following outlines the elements of an Exception modeled security implementation:

This model lends itself to large organizations taking a strict departmental/division view of their content while supporting the ability to manage access exceptions. This model is highlighted by how it:

• Provides the ability to designate department specific content
• Provides flexibility to assign permissions for content and folders to specific LDAP groups to support exception access controls

Striking the Balance

Designing a security model to meet the needs of any organization is a balancing act between providing the right amount of access and permissions and limiting the amount of administration required to support it. While there are many different approaches, the three listed in this article are the most common models and seem to fit a wide variety of my clients’ requirements. My preference is to use either the CPDR model or the Collaboration model when possible. That said, there are times when the Exception model is the right approach.

The following matrix provides some guidelines on when to use which model:

Special Note for the Reader

In all the examples of Accounts utilized by this article, I have shown a full, or almost full, account name such as Employee/Marketing/Creative to illustrate the types of accounts and structures utilized by each model. This is not practical in a true implementation due to limitations of the dAccount field size in WCC. In most cases, an abbreviation method or numbering sequence is used to represent the accounts. For example, Employee would be the Account 01, the next level of Marketing would be 01 (and another entry at that level, like Finance, would be 02), Creative would be 01, and ArtDept would be 01. The full Account value for Employee/Marketing/Creative/ArtDept would be 01010101 and the Account value for Employee/Finance/Receivables/Management would be 01020101. It is common practice to provide a display name that makes sense to users and a storage value and LDAP group name based off of numbering or abbreviations to stay under the field size limits.

There are many different ways that organizations can model security, and each company will have its own specific requirements. For many, it is a daunting task and it may be difficult to determine where to start. Many firms have years of experience in designing, developing, and deploying security models, and companies engaging in modeling security for their content needs should seek a firm that has specific experience in the content management application at hand and in ensuring a security model that will fit the needs of the enterprise. Leveraging external resources brings their years of experience and best practices to your initiatives.

Troy Allen is Director WebCenter Solutions and Training at the Atlanta, USA based TekStream Solutions.


The Picture Desk


Database

Overview of Oracle Database 12c Application Development facilities
Lucas Jellema, AMIS Services

The initial release of Oracle Database 12c has been available since late June 2013. It is a long anticipated release – it took almost four years since the previous major database version – that is characterized first and foremost by the multitenant architecture with pluggable databases.

This major architectural change – the biggest one since Oracle V6 was the first to support parallelism – is impressive and potentially has great impact from the administration point of view. For database developers however, this mechanism is entirely transparent. The question for developers now becomes: what is the big news in this 12c release – what is in it for me? This article will introduce a number of features that are introduced or enhanced in Oracle Database 12c that are – or could/should be – of relevance for application development. Features that make things possible that were formerly either impossible or very hard to do efficiently, features that make life easier for developers, and features that currently may appear like a solution looking for a problem.

After reading this article, as a developer you should have a good notion of what makes 12c of interest to you and what functionality you probably should take a closer look at in order to benefit from 12c when it arrives in your environment.

To very succinctly list some highlights: SQL Pagination, Limit and Top-N Query; SQL Pattern Matching; In-line PL/SQL Functions; Flashback improvements; revised Default definition; Data Masking; Security enhancements and miscellaneous details.

SQL Translation

The use case: an application sends SQL to the Oracle Database that is less than optimal. Before 12c, we were able to use Plan Management to force the optimizer to apply an execution plan of our own design. This allowed interference in a non-functional way – typically to improve performance. The new 12c SQL Translation framework brings similar functionality at a more functional level. We can create policies that instruct the database to replace specific SQL statements received from an application with alternative SQL statements. These alternative statements can make use of the same bind parameters that are used in the original statement. The alternative statement is expected – obviously – to return a result set with the same structure as the statement it replaces. This mechanism allows us to make an application run on SQL that is optimized for our database – for example using optimized Oracle SQL for an application that only issues generic SQL, or using queries that contain additional join or filter conditions that make sense in our specific environment. Especially when 3rd party COTS applications are used, or when frameworks are applied in .NET, SOA, Java and other middleware applications that generate SQL for accessing the database, the SQL Translation framework is an option to ensure that only desirable SQL is executed.

A simple example of using the SQL Translation framework:

BEGIN
  DBMS_SQL_TRANSLATOR.REGISTER_SQL_TRANSLATION(
    profile_name    => 'ORDERS_APP_PROFILE',
    sql_text        => 'select count(*) from orders',
    translated_text => 'select count(*) from orders_south');
END;

The result of this statement is that a mapping is registered in the named ORDERS_APP_PROFILE that specifies that when the query ‘select count(*) from orders’ is submitted, the database will in fact execute the statement ‘select count(*) from orders_south’. This profile can have many such mappings associated with it. A profile is typically created for each application for which SQL statements need to be translated.

Before you can create such a profile, the schema needs to have been granted the create sql translation profile privilege. In the session in which we want a profile to be applied, we need to explicitly alter the session and set the sql_translation_profile. Finally, the 10601 system event must be set.
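These prerequisites can be scripted; a minimal sketch (the schema name is illustrative, and the event setting follows the blog posts referenced below):

grant create sql translation profile to orders_app;

alter session set sql_translation_profile = ORDERS_APP_PROFILE;
alter session set events = '10601 trace name context forever, level 32';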

See for example this blog article for more details on SQL Translation: https://blogs.oracle.com/dominicgiles/entry/sql_translator_profiles_in_oracle. This article is also very useful: http://kerryosborne.oracle-guy.com/2013/07/sql-translation-framework/.

SQL Pattern Matching

Analytical functions were introduced in Oracle SQL in the 8i release and extended in 9i and to a small extent in 10g and 11g (LISTAGG). These functions gave SQL the ability to determine the outcome of a result row using other result rows, for example using LAG and LEAD to explicitly reference other rows in the result set. This ability provided tremendous opportunities to calculate aggregates, compare rows, spot fixed row patterns and more in an elegant, efficient manner.

The 12c release adds SQL Pattern Matching functionality to complement the analytical functionality. It too is used to analyze multiple rows in the result set together, and specifically to spot occurrences of patterns across these rows. However, pattern matching goes beyond analytical functions in its ability to find ‘dynamic’ and ‘fuzzy’ patterns instead of only predefined, fixed patterns.

A simple example of this apparently subtle distinction, using a table with ‘color events’:

Fixed Pattern: find all occurrences of three subsequent records with the payload values ‘red’, ‘yellow’ and ‘blue’.

Variable Pattern: find all occurrences of subsequent records, starting with one or more ‘red’ records, followed by one or more ‘yellow’ records, followed by one or more ‘blue’ records.

Both patterns result in the famous color combination ‘red, yellow and blue’. However, the variable pattern is much more flexible. Using analytical functions, the fixed pattern is easily queried for. The variable pattern however is much harder – if doable at all. The new SQL Pattern Matching functionality is perfectly equipped to tackle this kind of challenge.

The SQL for this particular task would be like this:

SELECT *
FROM   events
MATCH_RECOGNIZE (
  ORDER BY seq
  MEASURES RED.seq        AS redseq
  ,        MATCH_NUMBER() AS match_num
  ALL ROWS PER MATCH
  PATTERN (RED+ YELLOW+ BLUE+)
  DEFINE RED    AS RED.payload    = 'red',
         YELLOW AS YELLOW.payload = 'yellow',
         BLUE   AS BLUE.payload   = 'blue'
) MR
ORDER BY MR.redseq, MR.seq;

The core of this statement is the PATTERN that is to be found. This pattern is in fact a regular expression that refers to occurrences labeled RED, YELLOW and BLUE. These occurrences are defined as ‘a record with a payload value of ‘red’, ‘yellow’ or ‘blue’ respectively’. The conditions used to define occurrences can be a lot more complex than these ones; they can for example include references to other rows in the candidate pattern, using keywords PREV and NEXT (similar in function to LAG and LEAD).

A somewhat more involved example uses a table of observations.

In the collection of observations, we try to find the longest sequence of the same observations – the longest stretch of A or B values. However, we have decided to allow for a single interruption. So AAABAAAA would count as a sequence with length 8, despite the interruption with a single B value. The sequence AAABBAAAA however is not a single sequence – it consists of three sequences: AAA, BB and AAAA.

The SQL statement for this challenge uses SQL Pattern Matching and can be written like this:

SELECT substr(section_category,1,1) cat
,      section_start
,      seq
FROM   observations
MATCH_RECOGNIZE (
  ORDER BY seq
  MEASURES SAME_CATEGORY.category   AS section_category
  ,        FIRST(SAME_CATEGORY.seq) AS section_start
  ,        COUNT(*)                 AS rows_in_section
  ,        seq                      AS seq
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO NEXT ROW -- a next row in the current match may be
                               -- the start of a next string
  PATTERN (SAME_CATEGORY+ DIFFERENT_CATEGORY{0,1} SAME_CATEGORY*)
  DEFINE SAME_CATEGORY      AS SAME_CATEGORY.category = FIRST(SAME_CATEGORY.category)
  ,      DIFFERENT_CATEGORY AS DIFFERENT_CATEGORY.category != SAME_CATEGORY.category
) MR
ORDER BY rows_in_section DESC

Note: the MATCH_RECOGNIZE syntax is virtually the same as the syntax used in CQL or Continuous Query Language. CQL is used in Oracle Event Processor (fka Complex Event Processor) to process a continuous stream of events to identify trends and patterns, find outliers and spot missing events.

This blog article gives an example of using SQL Pattern Matching to find the most valuable player in a football match: http://technology.amis.nl/2013/07/24/oracle-database-12c-find-most-valuable-player-using-match_recognize-in-sql/. A more general introduction to Pattern Matching in Oracle Database 12c is given in this article: http://technology.amis.nl/2013/06/27/oracle-database-12c-pattern-matching-through-match_recognize-in-sql/.

In-line PL/SQL Functions

In Oracle Database 9i, the select statement was changed quite dramatically: the WITH clause, through which inline views can be defined, was introduced – meaning that a select statement could start with WITH.

Inline views proved a very powerful instrument for SQL developers – making the creation of complex SQL queries much easier. In 12c, another big step is taken with the SQL statement through the introduction of the inline PL/SQL function or procedure.

An example:

WITH
  procedure increment( operand in out number
                     , incsize in number)
  is
  begin
    operand := operand + incsize;
  end;
  function inc(value number) return number
  is
    l_value number(10) := value;
  begin
    increment(l_value, 100);
    return l_value;
  end;
select inc(sal)
from   emp

Here we see a simple select statement (select inc(sal) from emp). The interesting bit is that the PL/SQL function INC is defined inside this very SQL statement. The DBA will never be bothered with a DDL script for the creation of this function INC; in fact, that function is available only during the execution of the SQL statement and does not require any administration effort. Another important aspect of inline PL/SQL functions: these functions do not suffer from the regular SQL <> PL/SQL context switch that adds so much overhead to interaction between SQL and PL/SQL. Inline PL/SQL functions are compiled ‘in the SQL way’ and therefore do not require the context switch. Note that by adding the PRAGMA UDF switch to any stand-alone PL/SQL program unit, we can also make it compile the SQL way, meaning that it can be invoked from SQL without context switch overhead. When such a program unit is invoked from regular PL/SQL units, those calls will suffer from a context switch.
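The stand-alone variant with PRAGMA UDF mentioned above could look like this (a minimal sketch, mirroring the inline INC function):

create or replace function inc(value number) return number
is
  pragma udf; -- compile 'the SQL way': no context switch when invoked from SQL
  l_value number(10) := value;
begin
  return l_value + 100;
end;
/
select inc(sal) from emp;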

Inline PL/SQL functions and procedures can invoke each other and themselves (recursively). Dynamic PL/SQL can be used – EXECUTE IMMEDIATE. The following statement is legal – if not particularly good programming:

WITH
  FUNCTION EMP_ENRICHER(p_operand varchar2) RETURN varchar2
  IS
    l_sql_stmt varchar2(500);
    l_job      varchar2(500);
  BEGIN
    l_sql_stmt := 'SELECT job FROM emp WHERE ename = :param';
    EXECUTE IMMEDIATE l_sql_stmt INTO l_job USING p_operand;
    RETURN ' has job '||l_job;
  END;
SELECT ename || EMP_ENRICHER(ename)
from   emp


Some details on Inline PL/SQL Functions are described in this blog article: http://technology.amis.nl/2013/06/25/oracle-database-12c-in-line-plsql-functions-in-sql-queries/.

Flashback for application developers

A major new feature in Oracle Database 9i was the introduction of the notion of flashback. Based on the UNDO data that has been leveraged in the Oracle Database since time immemorial to produce multi version read concurrency and long running query read consistency, flashback was both spectacular and quite straightforward. The past of our data as it existed at some previous point in time is still available in the database, ready to be unleashed. And unleashed it was, through Flashback Table and Database – for fine grained point in time recovery – as well as through Flashback Query and Flashback Versions (10g) in simple SQL queries. Before the 11g release of the database, the usability of flashback was somewhat limited for application developers because there really was not much guarantee as to exactly how much history would be available for a particular data set. Would we be able to go back in time for a week, a month or hardly two hours? It depended on that single big pile of UNDO data where all transactions dumped their undo stuff.

The Flashback Data Archive was introduced in the 11g release – touted as the Total Recall option. It made flashback part of database design: per table it can be specified whether and how much history should be retained. This makes all the difference: if the availability of history is assured, we can start to base application functionality on that fact.

A couple of snags still existed with the 11g situation:

• 11g Flashback Data Archive requires the Database Enterprise Edition (EE) with the Advanced Compression database option
• In 11g, the history of the data is kept but not the meta-history of the transactions, so the flashback data archive does not tell you who made a change
• In 11g, the start of time in your flashback data archive is the moment at which the table is associated with the archive; therefore: your history starts ‘today’

Now for the good news on Flashback in Oracle Database 12c – good news that comes in three parts:

1. As of 12c, Flashback will capture the session context of transactions. To set the user context level (determining how much user context is to be saved), use the DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL procedure. To access the context information, use the DBMS_FLASHBACK_ARCHIVE.GET_SYS_CONTEXT function; a short sketch follows this list. (The DBMS_FLASHBACK_ARCHIVE package is described in Oracle Database PL/SQL Packages and Types Reference.)

2. As of 12c, you can construct and manipulate the contents of the Flashback Data Archive. In other words: you can create your own history. This means that a flashback query can travel back in time to way beyond the moment you turned on the FDA. In fact, it can go to before the introduction of the Flashback feature in the Oracle Database and even before the launch of the Oracle RDBMS product. It is in your hands! History is imported and exported using DBMS_FLASHBACK_ARCHIVE procedures to create a temporary history table, which is later imported into the designated history table after loading it with the desired history data. The temporary history table can be loaded using a variety of methods, including Data Pump. Support is also included for importing user-generated history: if you have been maintaining history using some other mechanism, such as triggers, you can import that history into the Flashback Data Archive.

3. As of 12c, Flashback Data Archive is available in every edition of the database (XE, SE, SE One, EE).
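A minimal sketch of part 1, following the DBMS_FLASHBACK_ARCHIVE documentation (the versions query on EMP is illustrative):

begin
  dbms_flashback_archive.set_context_level(level => 'ALL'); -- capture full context
end;
/

select versions_xid
,      dbms_flashback_archive.get_sys_context(versions_xid, 'USERENV', 'SESSION_USER')
from   emp versions between scn minvalue and maxvalue;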

All of the above means that any application developer developing an application that will run against an Oracle Database 12c instance can benefit from flashback in queries. Fine grained flashback, based on flashback data archives defined per table, can be counted on. These archives can be populated with custom history data – for example taken from existing, custom journaling tables. Finally, flashback can be configured to keep track of the session context at the time of each transaction, to capture for example the client identifier of the real end user on whose behalf the transaction is executed.

The syntax for flashback queries and flashback versions queries is the same in 12c as in earlier releases.

SQL Temporal Validity aka Effective Data Modeling

The SQL:2011 standard – which Oracle helps create and uphold – introduced a fairly new concept called ‘temporal database’, associated with terms such as Valid Time and Effective Date. This concept is explained in some detail on Wikipedia: http://en.wikipedia.org/wiki/SQL:2011#Temporal_support. The short story is that a substantial number of records in our databases are somehow associated with time periods. Such records have a certain start date or time and a certain end timestamp. Between these two points in time, the record is valid or effective, and outside that period it is not. Examples are price, discount, membership, allocation, subscription, employment, life in general. In a temporal database, or one that supports temporal validity, the database itself is aware of the effective date: it knows when records are valid from a business perspective. This knowledge can be translated into more efficient execution plans, enforcement of constraints related to the time based validity of the data, and other business-validity related functionality.

Flashback queries based on transaction time return records as they existed in the database at the requested timestamp, regardless of what their logical status was at that time. Flashback queries based on valid date look at the valid time period for each record and use that to determine whether the record ‘logically existed’ at the requested timestamp.

Take a look at this example: table EMP has been extended with a FIREDATE column and an effective time period based on HIREDATE and FIREDATE:

CREATE TABLE EMP
( employee_number NUMBER
, salary          NUMBER
, department_id   NUMBER
, name            VARCHAR2(30)
, hiredate        TIMESTAMP
, firedate        TIMESTAMP
, PERIOD FOR employment (hiredate, firedate)
);

We can now execute the following flashback query, based on the effective date as indicated by the employment period, to find all employees that were active at June 1st 2013:

SELECT *
FROM   EMP AS OF PERIOD FOR employment TO_TIMESTAMP('01-JUN-2013 12.00.01 PM')

Just like we can go back in time in a session for transaction based flashback using the dbms_flashback package, we can do the same thing for effective time based flashback:

EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time( 'ASOF', TO_TIMESTAMP('29-JUL-13 12.00.01 PM') );

Any query executed in that session after this statement has been executed will only return data that is either not associated with a valid time period or that is valid on the 29th of July 2013.

A similar statement ensures that we will always see the records that are currently valid:

EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('CURRENT');

And the default of course is that we will always see all records, regardless of whether they are valid or not:

EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('ALL');

These first steps in the 12.1 release of the Oracle Database on the road towards full temporal database support are likely to be followed by a lot of additional functionality in upcoming releases. The SQL:2011 standard defines a number of facilities in SQL and around database design that are likely to make their way into the Oracle Database at some point in the not too distant future.

These could include:

• Valid time aware DML – update and deletion of application time rows with automatic time period splitting
• Temporal primary keys incorporating application time periods with optional non-overlapping constraints via the WITHOUT OVERLAPS clause
• Temporal referential constraints that take into account the valid time during which the rows exist: a child needs to have a valid master at any time during its own validity
• Application time tables queried using regular query syntax or using new temporal predicates for time periods including CONTAINS, OVERLAPS, EQUALS, PRECEDES, SUCCEEDS, IMMEDIATELY PRECEDES, and IMMEDIATELY SUCCEEDS
• Temporal aggregation – group or order by valid time
• Normalization – coalescing rows which are in adjacent or overlapping time periods
• Temporal joins – joins between tables with valid-time semantics based on ‘simultaneous validity’
• Use of valid time information for Information Lifecycle Management (ILM) to assess which records to move

The support for valid time modeling is potentially far reaching. If valid period related data is common in your database, it might be a good idea to study the theory and reference cases and keep a close watch on what Oracle’s next moves are going to be.

Inspecting the PL/SQL Call Stack

In Oracle Database 10g, the package dbms_utility offered two procedures (DBMS_UTILITY.FORMAT_ERROR_BACKTRACE and DBMS_UTILITY.FORMAT_CALL_STACK) that helped inspect the call stack during PL/SQL execution. This provides insight into the program units that have been invoked to get to the current execution (or exception) point. The output of these units is formatted for human consumption and is not very useful for automated processing.

In this 12c release, PL/SQL developers get a new facility that makes call stack information available in a more structured fashion that can be used programmatically. The new PL/SQL package UTL_CALL_STACK provides an API for inspecting the PL/SQL call stack.

The following helper procedure demonstrates how utl_call_stack can be accessed to get information about the current call stack:

procedure tell_on_call_stack
is
  l_prg_uqn UTL_CALL_STACK.UNIT_QUALIFIED_NAME;
begin
  dbms_output.put_line('==== TELL ON CALLSTACK ==== '
                       || UTL_CALL_STACK.DYNAMIC_DEPTH);
  for i in 1..UTL_CALL_STACK.DYNAMIC_DEPTH loop
    l_prg_uqn := UTL_CALL_STACK.SUBPROGRAM(i);
    dbms_output.put_line( l_prg_uqn(1)
      || ' line ' || UTL_CALL_STACK.UNIT_LINE(i)
      || ' '      || UTL_CALL_STACK.CONCATENATE_SUBPROGRAM( UTL_CALL_STACK.SUBPROGRAM(i) )
    );
  end loop;
end tell_on_call_stack;

When this helper procedure is used from a simple PL/SQL fragment that performs a number of nested calls:

create or replace package body callstack_demo
as
  function b( p1 in number, p2 in number) return number
  is
    l number := 1;
  begin
    tell_on_call_stack;
    return l;
  end b;

  procedure a( p1 in number, p2 out number)
  is
  begin
    tell_on_call_stack;
    for i in 1..p1 loop
      p2 := b(i, p1);
    end loop;
  end a;

  function c( p_a in number) return number
  is
    l number;
  begin
    tell_on_call_stack;
    a(p_a, l);
    return l;
  end c;
end callstack_demo;


The output gives insight into how the anonymous PL/SQL call to package CALLSTACK_DEMO was processed. The initial call from the anonymous block got to line 50 in procedure c. One level deeper, from line 51 in C, a call had been made to procedure A. One level deeper still, a call from line 40 in A had been made to B.

Package UTL_CALL_STACK contains several other units that help with call stack inspection. See for example this article: http://technology.amis.nl/2013/06/26/oracle-database-12c-plsql-package-utl_call_stack-for-programmatically-inspecting-the-plsql-call-stack/.

Default Column Value

Specifying a default value for a column has been possible in the Oracle Database for a very long time now. It’s fairly simple: you specify in the column definition which value the database should apply automatically to the column in a newly inserted record if the insert statement does not reference that particular column. When an application provides a NULL for the column, the default value is not applied; only when the column is missing completely from the insert statement will the default kick in.

One typical example of a default value is the assignment of a primary key value based on a database sequence. However, that particular use case was never supported by the Oracle Database because a default value could only be either a constant, a reference to a pseudo-function such as systimestamp or an application context.

In 12c, things have changed for the column default. We can now specify that a default value should be applied also when the insert statement provides NULL for a column. A column default can now also be based on a sequence – obviating the use of a before row insert trigger to retrieve the value from the sequence and assign it to the column. A column can even be created as an Identity column – meaning that the column is the primary key with its value automatically maintained (using an implicitly maintained system sequence). Finally – especially of interest to administrators – a column can be added to a table with a metadata-only default value; this means that the default value for the column is not explicitly set for every record, but is retrieved instead from the metadata definition; this means a huge savings in time and storage.

The syntax for creating a default that is applied when a NULL is inserted:

alter table emp
modify ( sal number(10,2) DEFAULT ON NULL 1000 )

And the syntax for basing the default value on a sequence:

alter table emp
modify ( empno number(5) DEFAULT ON NULL EMPNO_SEQ.NEXTVAL NOT NULL )
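An Identity column, as mentioned above, is declared like this (a minimal sketch; the table is illustrative):

create table projects
( id   number generated always as identity primary key
, name varchar2(100)
);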


Data Masking aka Data Redaction

Ideally, testing of applications is done using production-like data. However, generating such data is usually not a realistic option. Using the real production data in a testing environment seems the better alternative. However, this data set may contain sensitive data – financial, medical, personal – that for various reasons should not be visible outside the production environment. The Data Redaction feature in Oracle Database 12c supports policies that can be defined on individual tables. These policies specify how data in the table should be ‘redacted’ before being returned – in order to ensure that unauthorized users cannot view the sensitive data. Redaction is selective and on-the-fly – not interfering with the data as it is stored but only with the way the data is returned.

The data redaction process works as follows: a normal SQL statement is submitted and executed. When the results are prepared, the data redaction policies are applied and the actual results are transformed through conversion, randomizing and masking. This approach is very similar to the way Virtual Private Database (fine grained access policies) can be used to mask data (records or column values). Redaction can be conditional, based on different factors that are tracked by the database or passed to the database by applications, such as user identifiers, application identifiers, or client IP addresses. Redaction can apply to specific columns only – and act in specific ways on the values returned for those columns.

Here is an example of a redaction policy:

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'scott',
    object_name         => 'emp',
    column_name         => 'hiredate',
    policy_name         => 'partially mask hiredate',
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => 'm1d31YHMS',
    expression          => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''GOD''');
END;

This policy ensures that values from column HIREDATE are redacted for any user except GOD. The Month and Day parts of values for this column are all set to 1 and 31 respectively; the other date components – Year, Hour, Minute and Second – are all untouched. The query results from a query against table EMP will be masked. Data redaction seems most useful for ensuring that an export from the production database does not contain sensitive, unredacted data. A second use could be to ensure that application administrators can do their job in a production environment, working with all required records without being able to see the actual values of sensitive columns.

More details on Data Redaction are for example in this White Paper: http://www.oracle.com/technetwork/database/options/advanced-security/advanced-security-wp-12c-1896139.pdf. A straightforward example is in this blog article: http://blog.contractoracle.com/2013/06/oracle-12c-new-features-data-redaction.html.

SQL Pagination, Limit and Top-N Query

A common mistake made by inexperienced Oracle SQL developers is the misinterpretation of what [filtering on] ROWNUM will do. The assumption that the next query will return the top three earning employees is easily made:
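The naive query typically looks like this (a reconstruction; ROWNUM is assigned before the ORDER BY is applied, so this returns three arbitrary employees, merely sorted by salary afterwards):

select *
from   emp
where  rownum <= 3
order  by sal desc;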

Oracle does not have – at least not before 12c – a simple SQL syntax to return the first few records from an ordered row selection. You need to resort to inline views – for example like this:
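A reconstruction of the kind of inline view intended – sort first, then filter:

select *
from   ( select *
         from   emp
         order  by sal desc
       )
where  rownum <= 3;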

Perhaps this is not a big deal to you. Anyway, Oracle decided to provide a simple syntax in 12c SQL to return the first X records from a query, after the filtering and sorting have been completed. In our case, this statement would be used for the top-3 earning employees:

select *
from   emp
order  by sal desc
FETCH FIRST 3 ROWS ONLY;

Slightly more interesting I think is the simple support for row pagination that is introduced in this fashion. Many applications and services require the ability to query for records and then show the first set [or page] of maybe 20 records and then allow the next batch [or page] of 20 records to be returned. The new SQL syntax for retrieving a subset of records out of a larger collection looks like this:

select *
from   emp
order  by sal desc
OFFSET 20 ROWS FETCH NEXT 20 ROWS ONLY;

Here we specify to select all records from emp, sort them by salary in descending order and then return the 21st through 40th record (if that many are available). The syntax also supports fetching a certain percentage of records rather than a specific number. It does not have special support for ‘bottom-n’ queries. Note: checking the explain plan output for pagination or top-n queries is interesting: the SQL that gets executed uses familiar analytical functions such as ROW_NUMBER() to return the correct records – no new kernel functionality was added for this feature.


Security

Many improvements were introduced in this 12c release in the area of security. Some are primarily of interest to the administrator and others are quite relevant to application developers.

Capture privilege usage

One of these security related new features is 'capture privilege usage' – a facility through which you can inspect the privileges that are actually required by users to run applications. This feature is introduced to strengthen the security of the database by enforcing the principle of least privilege: it tells you which privileges are used in a certain period of time by a certain user. When you compare these privileges with the privileges that have actually been granted to the user, there may be some privileges that have been granted but are not actually required and should probably be revoked. Also see http://bijoos.com/oraclenotes/oraclenotes/2013/92
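A minimal sketch of how such a capture could be run, using the 12c DBMS_PRIVILEGE_CAPTURE package (the capture name is illustrative):

BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'app_usage_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('app_usage_capture');
END;
/

-- run the application workload for a representative period, then:

BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('app_usage_capture');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('app_usage_capture');
END;
/

The captured privileges can subsequently be inspected through data dictionary views such as DBA_USED_PRIVS.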

Invoker Rights View

In addition to the Invoker Rights package, which has been around for a long time already, there now finally also is an invoker rights view – although its specific syntax uses the term bequeath current_user:

create or replace view view_name (col1, col2, ...)
BEQUEATH CURRENT_USER
as
select ...
from table1 join table2 ...

This statement specifies that privileged users can reuse the view's SQL definition but only have the SQL applied to database objects owned by or granted explicitly to the user that invokes the view. Before 12c, anyone who had the select privilege on the view could query data from that view leveraging the select privileges of the view's owner on all objects referenced from the view.

Inherit Privileges

In the same area of invoker rights definitions, the database before 12c contained something of a loophole: when a user invokes an invoker rights package, anything done by the package is done using the authorizations of the invoking user – that is after all the whole idea. However, this means that the code in the package can do things based on the invoking user's privileges and channel results to the user who owns the invoker rights package:
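A minimal sketch of what such a package owner could do (the schema name APP_OWNER and procedure name are hypothetical; SPECIAL_TABLE and TAB_TABLE follow the example described below):

-- owned by APP_OWNER, who has granted public access on TAB_TABLE
create or replace procedure copy_data
  authid current_user
is
begin
  -- runs with the invoker's privileges, so SPECIAL_TABLE is read
  -- whenever the invoker is allowed to read it...
  insert into app_owner.tab_table
    select * from special_table;
  -- ...channeling that data into the owner's publicly readable TAB_TABLE
end;
/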

In this example, the owner of the invoker rights program unit has added code to the procedure that leverages the invoking user's select privilege on the special table to retrieve data that it then writes to its own TAB_TABLE on which it has granted public access. In previous releases, the invoking user had no control over who could leverage his or her access privileges when he or she ran an invoker's rights procedure.

Starting with 12c, invoker's rights procedure calls can only run with the privileges of the invoker if the procedure's owner has the INHERIT PRIVILEGES privilege on the invoker or if the procedure's owner has the INHERIT ANY PRIVILEGES privilege. This gives invoking users control over who has access to their privileges when they run invoker's rights procedures or query BEQUEATH CURRENT_USER views. Any user can grant or revoke the INHERIT PRIVILEGES privilege on themselves to the user whose invoker's rights procedures they want to run.
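A sketch of such a grant (the user names JOHN_D and APP_OWNER are illustrative):

-- JOHN_D allows APP_OWNER's invoker's rights code to run with his privileges
GRANT INHERIT PRIVILEGES ON USER john_d TO app_owner;

-- and this permission can be taken away again
REVOKE INHERIT PRIVILEGES ON USER john_d FROM app_owner;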

SYS_SESSION_ROLES

A new built-in namespace, SYS_SESSION_ROLES, allows you to determine if a specified role is enabled in the current session. For example, the following query determines if the HRM_ADMIN role is enabled for the current user:

SELECT SYS_CONTEXT('SYS_SESSION_ROLES', 'HRM_ADMIN')
FROM DUAL;

This query returns either ‘TRUE’ or ‘FALSE’.

Attach Roles to Program Units

In 12c, you can attach database roles to program units: functions, procedures, packages, and types. The role then becomes enabled during execution of the program unit (but not during compilation of the program unit). This feature enables you to temporarily escalate privileges in PL/SQL code without granting the role directly to the user. The benefit of this feature is that it increases security for applications and helps to enforce the principle of least privilege.

The syntax is quite straightforward:

GRANT hrm_admin TO PROCEDURE scott.process_salaries;

If the execute privilege on procedure process_salaries is granted to some user JOHN_D, then during a call to process_salaries by JOHN_D, an inspection using SYS_SESSION_ROLES into whether the role HRM_ADMIN is enabled would show that the role is indeed enabled – even though that role has not been granted to JOHN_D.

This blog article by Tom Kyte shows more details on this facility: http://tkyte.blogspot.nl/2013/07/12c-code-based-access-control-cbac-part.html.

White List on Program Units

In 12c, we can indicate through a white list which program units are allowed to access a certain package or procedure. If a white list is specified, only a program unit on the list for a certain object can access the object.

In the next figure, this has been illustrated. A, B, C as well as P, q, r and s are all PL/SQL program units in the same database schema. Units q and s have been associated with a white list. Unit s can only be invoked by object P and unit q is accessible only from P and r. This means for example that A, even though it is in the same schema as unit s, cannot invoke unit s. If it tried to do so, it would run into a PLS-00904: insufficient privilege to access object S error.

This white list mechanism can be used for example to restrict access to certain units in a very fine grained way. In the example above, it is almost like a 'blue module' is created in the schema, of which object P is the public interface and which contains private objects q and s that are for module-internal use only.

The syntax for adding a white list to a PL/SQL program unit consists of the keywords accessible by followed by a list of one or more program units.

create package s accessible by (p)
is
  procedure ...;
end s;

Note that the actual accessibility is checked at run time, not compile time. This means that you will be able to compile packages that reference program units on whose white lists they do not appear – they will simply fail to access those units at run time.

This blog article by Tom Kyte explains PL/SQL white lists very clearly: http://tkyte.blogspot.nl/2013/07/12c-whitelists.html.


Conclusion

The essence of the 12c release of the Oracle Database does not lie in application development facilities. Having said that, there is of course an interesting next step in the evolution of what database developers can do with the database. SQL and PL/SQL have evolved, allowing for more elegant, better performing and easier to write programs. Some facilities – for example SQL Temporal Validity and Flashback – are potentially far reaching and may lead to different designs of data models and applications.

The compilation in this article is obviously quite incomplete. I have mentioned some of the most striking – in my eyes – new and improved features. Some glaring omissions are in the next list – which is of course equally incomplete:

• Lateral Inline Views
• (Cross and Outer) Apply for joining with Collections
• VARCHAR2(32k)
• XQuery improvements and other XMLDB extensions
• Java in the Database: Java 6 or 7
• Export View as Table
• DICOM support for WebCenter Content
• New package dbms_monitor for fine grained trace collection
• DBMS_UTILITY.EXPAND_SQL_TEXT for full query discovery

Browsing through the Oracle Documentation on Oracle Database 12c - http://www.oracle.com/pls/db121/homepage - and browsing the internet for search terms such as 'oracle database 12c new feature sql pl/sql' are pretty obvious ways of getting more inspiration on what the next generation of Oracle's flagship product has to offer. Hopefully this article has contributed to that exploration as well.

Lucas Jellema is CTO of the Dutch based company AMIS.


The Picture Desk

Social Media

Blending BI and Social Competitive Intelligence for deep insight into your business

Billy Cripe, BloomThink

Business intelligence tells you what happened at work. Good business intelligence tells you what is happening now. Competitive intelli-gence tells you what your competitors and the market did. Good competitive intelligence tells you where they’re headed. Both BI and CI crunch big data to deliver answers to the questions, “what happened?” and “how did that happen?” Only a social and deep web competitive intelligence framework can answer the most important question, “so what?”

In order to be actionable, intelligence must answer the “so what?” ques-tion. The answer to “so what?” describes the impact of the information. It describes assumed and presupposed context. It fills in the rest of the statement that starts out, “we care about this because…”

Social competitive intelligence is a new discipline. It is emerging now and will continue to grow over the next decade. Some solutions exist already. But they and their marketing cousins – social media management software – are still largely focused on listening to and tracking social mentions and sentiment activity. While these are important, the solutions today are overly simplistic. They can list changes to competitor websites, report on where competitor PPC (pay per click) ads are run and measure generic brand sentiment. However, rather than exerting a contextualizing force on the already massive volumes of available business and social information, they add to the data tsunami. When your cup is already running over, it makes little sense to put it under a faster faucet.

The better solution is to put into place a framework that gathers, filters, synthesizes and analyzes social competitive intelligence and deep-web analytics (i.e. the web beyond Google indexes). Then use that framework as a lens through which to view your existing BI and CI data. Then you will be able to answer the all-important, “so what?” question.

Here are several real-world examples.

The Call Center Cost Hole

BI reporting shows increasing costs and increasing churn in your call center; overall a bad trend. If that were the only information you had, the "so what?" answer would be to kick up call center recruiting to fill the gaps and to set some stricter MBOs for call center managers on employee retention. However, with a social competitive intelligence framework in place, it is revealed that social media, blogs and discussion boards are full of blistering criticism of your call center escalation processes. The withering criticism is poisoning the work environment and making a tough job even more unpleasant. Viewed through this lens, the correct answer to "so what?" is not to step up recruiting. Rather, it is to fix the poisoned call center environment and re-engineer the escalation processes while empowering call center employees.

Not only does this save substantial time and money, it actually boosts net productivity by empowering knowledgeable employees and eliminating training and the ramp up to full productivity required for each new hire.

The Outside Expert

You are ready to launch a new product into an overseas market. But there are a host of regulatory issues to navigate. While you have plenty of "independent" research and case studies validating your approach, you still want an expert in your technology and the foreign market to help guide you through the approval process. The regulators don't look kindly on experts who are among your paid staff due to potential conflict of interest. You want an external expert but you want to avoid someone who regularly works for your competitors or who has expressed harsh opinions of your company or product in the past.

Traditional competitive intelligence will not provide expertise location like this. Traditional BI only tells you that your new market has a lot of potential. If that were the only information you had, the "so what?" answer would be to get some internal recommendations and do a Google search and hope the person is available and credible. But hope makes for a poor strategy, especially with something as big as a new foreign market launch.

With a social competitive intelligence framework in place, you are able to perform a social network analysis to first locate the influencers on the topic area, measure their credibility and influence relative to one another, and finally screen them for competitor interaction and engagement. This approach yields not only a deeper, more highly qualified "short list" of available experts, it also reveals a large and rich set of topic influencers who your team can target for engagement and awareness of your new product. Ultimately, this delivers not only the help navigating new regulatory processes in new markets, it also identifies a new set of up and coming influencers who will help your product remain successful after the initial splash.

The Competitor Customer List

Your internal BI tells you that sales are plateauing despite the fact that you have a better product with more features and a better history of quality. Your competitive intelligence tells you that competitors are facing similar slow-growth periods. It looks like the market is reaching saturation and new opportunities are small. If that were all the information you had, the answer to the "so what?" question would be to switch over your sales strategy from a hunting to a farming operation. Marketing would shift to promoting small incremental improvements and the grind of upgrades/maintenance/renewal would become the core of your revenue model.

However, with a social competitive intelligence framework in place you would reveal a gold mine of new accounts that you can hunt while dramatically boosting your competitive advantage. The framework would reveal your competitor's customer lists. First, realize that all customers – yours and your competitors' – are interested first in solving a business problem and only secondarily in staying with a particular vendor or service provider. Staying with a particular provider tends to be more a matter of convenience and trust than inherent and continued ability to deliver value. This means there is opportunity to knock out your competitor or at least to come alongside them and establish a beach head; but only if you know who they are and how to approach them. This is what a social competitive intelligence framework delivers.

That they are your competitor's customer means that at one time in the past, they got a better deal or had a better recommendation or were simply aware of your competitor at the time they needed a solution. In the B2B world, there are few things that lock in customers. Sure, they exist;



big computing platform and enterprise application decisions tend to have at least a 7 year life cycle. Similarly, being a Mac, Windows or Linux shop tends to be about corporate culture. But as the recent Samsung mobile vs Apple iPhone campaigns demonstrate, even the most loyal customers can switch to a completely different platform if the reason to switch is compelling.

A social competitive intelligence framework makes developing a target list of your competitor's customers easy. First, perform a social network analysis of your competitor. See who is commenting, following, liking and (re)tweeting about your competitor. Then filter that list by companies and contacts you'd like to target. Perform this analysis again around the time of your competitor's big events like conferences and trade shows. The cadence of social activity spikes during those times. Additionally, your competitor will trot out their favorite case studies and customer testimonials during that time to add credibility to their pitch. What they're doing for you is validating the customer need, interest and ability to pay. You just need to get them to switch or try out your product too. Finally, mine your competitor's website for their customer information. Companies routinely post logos and ROI or case studies online. Even if competitor brag sheets use unnamed customers, there will generally be enough information to make a very educated guess and narrow it down to only one or two possible companies (your potential customers!) in the area.

My company, BloomThink, recently performed a social competitive intelligence engagement designed to create a competitor customer list. During one trade show, the target competitor was demonstrating an unbranded intranet system. However, the layout, color scheme and look/feel of their demo perfectly matched an educational YouTube video posted at about the same time by a large local health care organization. The health care company was added to the "competitor customer target list". Only a social competitive intelligence framework and strategy could have revealed the connection that was publicly available but buried in a mountain of previously unrelated social data.

Conclusion

As the old saying goes, "text without context becomes pretext". No matter how good your BI data is alone, without the contextualizing force of a social competitive intelligence framework, it becomes justification for gut feelings, political game-playing and flights of fancy. That is no way to run a business.

Enterprises and especially CIOs, CMOs and Sales EVPs need to implement a social competitive intelligence framework that understands how to do the following:

1. Collect & Gather deep web and social information
2. Filter & Categorize information to keep what matters and cull what doesn't
3. Analyze & Synthesize that information with existing BI & CI data
4. Report & Act so that actionable intelligence can deliver meaningful business impact

Billy Cripe is the founder of the Minneapolis, USA based company BloomThink.


The Picture Desk

The Book Club

Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos

Porus Homi Havewala

Packt Publishing
ISBN: 978-1-84968-478-1
Published: December 2012

Rating:

Being in charge of the Oracle Enterprise Manager line of business for the ASEAN region, Mr. Havewala has certainly been close to the action when it comes to the topic of concern of this book.

Released in December 2012, the Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos book by Packt Publishing is one of the first titles on the subject of version 12c of the administration tool of choice for an Oracle environment.

The book is pretty well written and, because of its style, a pretty easy read, despite the in-depth topics it covers. This is something we rarely see in technical books, and both the author and the publisher can take this as a compliment.

Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos offers the reader some critical insights into how EM is supposed to help you handle the everyday chaos in your data center(s). With chapter titles like "Ease the Chaos with Performance Management" and "Ease the Chaos with Automated Provisioning" it's all about managing some sort of awful situation in your data centers.

And it works. It actually offers a lot of insight about what you can do with the help of Enterprise Manager. It does show us how to look further than just the basic elements of EM that we have used for some while now. It certainly helps in looking for those functions of EM that were introduced in 12c.

The downside is that it only shows us how it is supposed to work. The book barely goes beyond what might or should be possible. There are no real live examples, and all the examples that are worked out properly are taken from the demo grounds at Oracle. But hey, that's what you get when a business development manager writes a book.

All in all it is a pretty nice read. Especially for those of us who are still wondering where to position Oracle Enterprise Manager and want some insights in how all this is implemented. It does offer the reader some good and qualified information on the topic at hand: how do I create a more manageable environment in my data center?

Oracle WebLogic Server 12c Advanced Administration Cookbook

Dalton Iwazaki

Packt Publishing
ISBN: 978-1-84968-684-6
Published: June 2013

Rating:

A cookbook. What's up with that? The idea seems handy: short articles on how to do a specific job. And most of the time it works. It works fine. But what about when it covers a subject that has already been described at length? Well, then it might be just a tiny bit too much.

Last June Packt Publishing released a title on Oracle WebLogic Server 12c in their popular cookbook series. The book covers some 60+ 'recipes' that teach readers how to install, configure and run Oracle WebLogic Server 12c.

The chapters, or 'recipes' as Packt tends to call them, about installation and running truly remind us of the installation and configuration chapters in the official Oracle documentation. Does that mean that these recipes add nothing to the knowledge of the reader? Of course not: it is actually necessary in a book that tries to be complete about things, and it shows us that the Oracle documentation is correct on some points.

Some of the articles cover configuring for high-availability, troubleshooting and stability & performance. And this is where the value of the book kicks in. Because for a main product in the Oracle stack, WebLogic isn't always the easiest of systems to understand. If a book shows us where to look for stability and when trouble is under way, it pays off immediately.

So, does this book add something to the overall knowledge of the professional who works with Oracle WebLogic Server 12c? Definitely, even if it's just as a convenient reference.

Oracle Enterprise Manager 12c Administration Cookbook

Dhananjay Papde, Tushar Nath & Vipul Patel

Packt Publishing
ISBN: 978-1-84968-740-9
Published: March 2013

Rating:

Oracle Enterprise Manager is certainly gaining momentum, and the times when it was just a toolset to manage single database instances are definitely in the past. That also means that the product is gaining a larger fan base, as it probably should.

In this book a total of three authors wrote some 50 recipes on managing the Oracle stack using the latest version of Enterprise Manager. What you really notice right away is that it is not only about managing the Oracle database; there's also a bit about managing middleware as well.

The last chapter of the book is completely reserved for a description of the iPhone / iPad app for Oracle Enterprise Manager. This is all exciting stuff, but probably not the most interesting for in-depth administrators of an extensive Oracle stack.

What's really missing in this book is the entire 'Cloud Management' part of the latest edition of Enterprise Manager. Oracle's promise that the toolset is the perfect companion for the private or public cloud, and for the data centers managing those, is not seen anywhere in this book.

Because cloud management is the main focus of Oracle Enterprise Manager 12c, it is really a shame that it is not part of this book. If the authors had shared the focus of the other Packt title on Oracle Enterprise Manager, it would really have added to the overall reading experience.




The Picture Desk

Oracle ADF

Successfully combining UX with ADF

Marcel Maas, AMIS Services
Sander Haaksma, UX Company

It could just be the key to easy adaptation

This article is written based on our experience of working on ADF projects together with a User Experience Designer (UX designer) and his value in this team. In this article we will explain what UX is and how it is used in a big project to get more value out of our software. We will also explain how we have used UX and ADF in an Agile environment. In this article we are going to talk mainly about the front-end of software. When reading this article you should have a basic understanding of the scrum process; however, you need no knowledge of User Experience design.

What is UX?

In this part of our article we will explain what UX is and what the role of the UX designer in a software project is. A software project can be a web, mobile or a desktop application.

In almost any IT project a business analyst is responsible for understanding and translating the business needs into clear software specifications. The business analyst is the eyes and ears of the business; they make clear what solution should be built. So we have IT and business involved in our project together. But who takes care of the end users? Is there anybody who cares about them? This is where the UX designer comes in.

UX is an acronym for "user experience". It is almost always followed by the word "design." By the nature of the term, people who perform the work become "UX designers." But these designers aren't designing things in the same sense as a visual or interface designer. UX is the intangible design of a strategy that brings us to a solution. So what does this actually mean? This solution can be divided into several layers. Each of these layers describes a more detailed part of the system. A UX designer creates the total user experience by designing and thinking about each of the layers and validating the results with the end users of the system. In the following example the techniques we used for our solution are described. For this example we have used the "Elements of User Experience" developed by a renowned User Experience professional called Jesse James Garrett. Our "solution" is an ADF web application built on a BPM middle layer to create a case management system with workflow.

The Elements of User Experience

Surface
This is the visual style end users will see when they use the application. We have used the standard corporate design rules called "de Rijkshuisstijl" offered by the Dutch government for designing web applications.

Skeleton
Describes the interactive components needed on the pages, like buttons, list boxes etc. We have used component descriptions in wireframes to visualize the interactive components for the system. A component is a functional piece of software used on a screen, for example a search box.

Structure
Describes the pages needed in the application and their navigation flows. For each step in the process we have used flow diagrams and standard page layouts to structure the application screens.

Scope
Describes the scope of the project that needs to be built.

• We have used a process design for scoping the screens which were needed.
• A product backlog with user stories to scope and prioritize the needed functionality.

Strategy
Describes the underlying application strategy to align with business and user needs.

• For strategy we have used workshops to determine standards to be used throughout the whole application.
• We have talked to the future users to understand their needs.
• We have talked with the business to determine their business goals.

So we have gone over some background on what a UX designer is and what kind of work he does. But why do you need a UX designer in your ADF project?

• There is somebody who cares about your users and wants to make them happy. Happy employees are more productive and less often ill.
• A UX designer has the skills to test your application for usability issues early in the build process.
• A UX designer keeps asking critical questions about functionality, like "is this really needed for our users?". This can lead to less functionality to build.
• In a scrum project the UX designer helps to get user stories ready for inclusion in a sprint. Visualizing the software will save the developers time.
• A UX designer improves user acceptance by including the end users early in the design process and using their feedback to improve the product.
• A UX designer is an objective hub between the business, development and the end user. He tries to balance these to get the most usable and economical product possible.

But bear in mind that when you use a user centered design process in your project, you need to continually invest time into improving your application and not only into adding more functionality. So give the UX designer room to organize user sessions and to work this feedback back into the application.



A real world case

For the project in which I worked with a UX designer, the goal was to create a case management system using Oracle BPM and an optimized worklist application. Since a great deal of productivity could be gained by improving the screens and work processes that are being used by the end users, we flew in a UX designer. His assignment was to think up a really usable interface which would be smart and supportive to the user. We wanted to use contextually aware widgets to provide extra info to the end user at every step of a case, as well as define our own navigation for the application. We quickly realized we needed to create a new worklist application from scratch instead of using and customizing the BPM workspace. Therefore 2 ADF developers were hired, including me. For the realization of the BPM processes 2 specialists were hired, as well as a tester, a process analyst and a project manager. At our arrival it was already decided to use Scrum as the way of managing the project, which suited us well. At that time we knew little of the requirements of the system and had a limited bag of money at our disposal. So we settled for 2 week sprints and went to work on the first iteration. We made sure the UX designer and the BPM guys were always a sprint ahead of the ADF developers in terms of functionality, to make sure they could rely on tasks created by the BPM team and designs by the UX designer. The ADF team then only would have to focus on the technology. During the sprint the UX designer would have sessions with end users to define the UI, which would then be validated by the ADF team and eventually end up in one of the next sprints. At the time of writing we are still going strong and are almost ready for the first release. In the next few paragraphs we will dive into various parts of the process more deeply.

Converting wishes to screens

The starting point of a project is always the user's wishes, which in our case are a backlog of user stories. These stories provide a way of describing functionality from the perspective of the end user. The UX designer takes these stories as a starting point for determining the general structure and flow of the application. The stories themselves provide no hint as to how screens should look; they describe functionality. It is the "What", not the "How". The designer takes a step back and analyses the stories to find overlap and get a general feel for what the user wants. This is done by talking to the user him or herself. The general flow and structure of the application are hereby determined.

From here on out the designer takes a number of stories from the backlog and uses these to create a sketch of the screen and its components. These are functional components such as buttons, panels, images etc. These sketches are then discussed with the end user to validate them, and modified if needed until all stakeholders are satisfied with the result. The trick is not to drive a sketch to perfection but to specify just enough for a developer to start building what the user wants. This makes it possible to stay flexible when new insights arise.

Now the screens are ready to be implemented for real in a sprint. In the next paragraph we will dig into this a bit more.

After the screens have been created the designer hosts a usability lab to validate the usability of the screen and its components. A usability lab is a session where users are asked to complete tasks with the new software. During these sessions stakeholders observe the behavior of the test users and together decide which issues are important. The usability issues found during the lab will be logged as new user stories, prioritized, and added to the backlog, finding their way back into a sprint to improve functionality. These iterations greatly improve the usability of the product. Involvement of the business is absolutely necessary for this process to be a success.

Example screen and components wireframed in the tool Axure RP.

Successfully combining ADF with UX design

In the previous paragraph the process of designing screens from user stories was explained. However nothing was said about the implementation in ADF. In this part we will describe how to leverage the power of ADF and combine it with effective UX design.

The availability of a UX designer for creating screens saves a lot of time for a developer, because he no longer needs to think about and design the screens himself. However, when a UX designer is let loose on a product and gets to work as a lone wolf, usually the most beautiful and intuitive design is created. This design still needs to be implemented in a specific technology, which has its own pros and cons. This means it can take a lot of time to create specific components when the technology itself probably supports something similar in a different way.


ADF is no different. A UX design for ADF is only a help when there is communication between the UX designer and the developer. The UX designer can explain what the user and business want, and the developer can explain how the solution can best be implemented using ADF, leveraging its strengths. In this case the designer will come to the developer and, using the sketch, explain what is needed. The developer can then provide the ADF components that need to be used to make sure most of the functionality can be achieved by using standard components and patterns. This makes sure there is a balance between the user's needs, the cost of development and the technical feasibility. Luckily we don't need to think up everything. A lot of ways of interacting with users through ADF have already been thought out and tested by Oracle. They have bundled these ADF UX patterns and published them on the web. They can be found here: http://www.oracle.com/technetwork/topics/ux/applications/gps-1601227.html. Some of these design patterns have already been implemented in ADF components and others you can implement by yourself. This website can be a great help, however it is not strictly necessary.

The next step is to validate the design made by the UX designer against ADF's capabilities. We must try to design screens that are easily built using ADF standard components, because this saves us from developing components that look great but take twice as much time to create. For example: one goal of the UX designer was to only show input fields on the screen when they are actually needed. If they weren't, they would not be shown. Now this could easily be realized with ADF. But when we add validations to these input components and the validations fire while the component is hidden, strange things will happen. So we have modified the designs to always show required input fields, as well as fields with other validations. Sometimes we could see and avoid such problems beforehand, and some others we learned the hard way. As you can see, your own experience can really make a difference here, so think hard about the possible difficulties that could arise when implementing a screen design in ADF. Finding issues at this point and thinking up an alternative design which works just as well for the UX designer as for ADF can save you a lot of time later on.

So 80% of the functionality is realized in 20% of the time. Also, iteration is key here. The solution gets created, tested by users and improved if necessary. By using standard components and patterns, time is saved on development, which then can be used for improvements after receiving feedback.

To sum it all up

In this article you have read about UX design in combination with Oracle ADF. The question is whether it pays to hire or request a UX designer on your next project. The answer can be short: yes. I think that when interfacing with humans is involved, that is already enough of a reason to hire a UX designer. However it is not enough to hire a designer and let him go about his business. The designer must talk not only with the business end of your project but with the development team as well, to make sure the solution envisioned by the designer is feasible with the technology chosen to implement the design. In the case of ADF this is the main point. When a UX designer is paired with ADF developers and they mix their knowledge,

the greatest potential is unlocked. Try to create a UX design which uses design patterns that are already supported in ADF. In this case you get the best usability and design possible, which is then realized in a minimal amount of time. Another key point here is iteration. It does not matter whether you are doing an agile project or any other: validate your work with end users and improve your designs from there on. Agile projects are best suited for this, but you can implement it in other projects as well. Because you have included end users from very early on, they are more willing to adopt the product and there is less of a learning curve. When done right you get a happy customer and happy end users, because the application is easy to use and was made in less time and for less money.

Marcel Maas is Senior Oracle Developer at the Dutch company AMIS Services.

Sander Haaksma is UX Designer at the Dutch company UX Company.


The Picture Desk

Oracle Database Appliance

WebLogic on the Oracle Database Appliance Virtualized Platform

Simon Haslam, Veriton Limited

Ten years ago it was common to hear the adage that every Oracle product has a database in it somewhere, but even today, with Oracle's portfolio of thousands of products, there's probably still some truth in it! You would expect this to be the case for the Oracle Database Appliance (ODA) of course, but what is new is that it can now run virtual machines, and in particular those for WebLogic Server and Oracle Traffic Director (or OTD, Oracle's software load balancer with iPlanet heritage and acquired via Sun).

Recently I was fortunate enough to work with a favorite mid-sized customer considering the ODA as part of a hardware refresh, and got hands-on experience of the latest ODA X3-2 model during a Proof of Concept (POC). I have been sharing those experiences on my blog but here, in OTech Magazine, I'm combining them for the first time along with some additional analysis.

Hardware

First, let us step back for a moment. For those of you who have not come across it before, the ODA (apparently pronounced to rhyme with "Yoda") is the smallest of Oracle's Engineered Systems. Whilst not boasting exotic components, such as InfiniBand fabric, it is comparable in spirit to the other engineered systems like Exalogic and Exadata – you are buying not just hardware but also the software installation design and the maintenance approach.

Regarding the hardware itself, ODA consists of two 1U "pizza box" X3-2 servers connected to one or two small storage arrays (but sold as a package), including software for provisioning and applying updates.

Each of the servers has two Intel E5-2690 2.9GHz processors, giving 16 cores per server, and 256GB RAM. The storage array(s) are directly attached to the servers via SAS2 and have 24 2.5" bays, populated with twenty 900GB 10k RPM spinning disks and four 200GB SSDs. For networking each server has four on-board 10GbE ports (copper as you might expect) and a PCI card with dual 10GbE ports.

I have been specifying this sort of 2-socket server, especially for middleware, for a number of years now and still consider them to be the sweet spot in x86-64 sizing. Intel has recently announced a new version of this processor family that now has up to 12 cores and which, following a process shrink from 32nm to 22nm, promises even more performance. With virtualization I expect most, if not all, of my (mid-sized) customers could run their middleware production estates on a handful of, or maybe even just two, servers like these.

So that's the hardware. We could debate in particular whether the storage is sufficiently well-specified for Oracle enterprise deployments, but I've come to realize that, for the ODA's target customers, the performance is most probably "adequate" (as Rolls-Royce apparently used to say, in an understated manner, whenever asked by the motoring press!).

Virtualized Platform

Originally the ODA ran Oracle Linux in a purely physical manner, but since earlier in 2013 it has the ability to run virtual machines instead – 2 of which, called "ODA Base", run the database (typically RAC); what you do with the remainder of the resources is up to you. In this mode ODA runs Oracle VM on each server as the hypervisor and has a command line interface (CLI) – not OVMM – to manage these two hypervisors, and their associated VM repositories, called OAK CLI (i.e. oakcli). The SAS storage controllers are passed through directly to the ODA Base VM (also known as oakDom1), so are essentially connected to the I/O the same way as a physical host would be. This only leaves the mirrored (boot) disks in each server to provide space for the virtual machines, though it is supported to connect to NFS storage.

The rear of the ODA X3-2 is a mass of cables. Note this was the POC system and so several cables weren't connected – in production for this customer there would be another 9 redundant power and data cables to fit into the same space.

The installation and configuration of the ODA Virtualized Platform is straightforward - you re-image with the virtualized system, copy over the ODA Base template and run a utility called the ODA Appliance Manager… by this point you will then have a system to run databases or other virtual machines.



WebLogic and Oracle Traffic Director Provisioning

Now we reach the question most middleware administrators would be interested in – how do we get WebLogic onto the ODA?

As we've already discussed, each of the two nodes runs OVM and has a local repository for virtual machines. Using oakcli you can, according to Oracle, load any VM template and start virtual servers. Oracle currently supplies two special templates for ODA though – one for WebLogic and one for Oracle Traffic Director. Note that the WebLogic templates are currently available in 10.3.6 and 12.1.1 flavors.

Alongside the templates Oracle also provides a second tool: the WebLogic on ODA configuration utility. Using this you can very easily define a WebLogic domain and cluster, and optionally everything necessary to set up Oracle Traffic Director (OTD). You run this utility from oakDom1 and, as you might expect, it is very easy to use. For example you set up the domain and cluster details, like so:

The requested details for the WebLogic domain including size of the cluster

After configuring IP addresses for all the managed servers in your new cluster, finally you need to configure the public Virtual IP and a few other network details for your load balancer:

Load balancer configuration including the public Virtual IP used to connect to your applications

Note that the above screenshots were taken from the POC where I just asked for a temporary block of IP addresses to work with – in practice of course you would use hostnames conforming to proper naming conventions.

I asked for a 4 node cluster with OTD and, 41 minutes after clicking the "Install" button, all 8 VMs (5 WebLogic, 3 OTD), along with a new RAC database (curiously called "w0l0sint"), were up and running! "A new RAC database?" I hear you ask. Yes, this is created automatically for you and contains tables for JMS, JTA and WSM leasing. At the time of testing it also had a somewhat generous, depending on final use of course, 24GB SGA which you obviously may wish to lower a notch or two. I have been told that it can instead create a RAC One Node or single instance database for you (though I haven't yet investigated how that works).

What is the WebLogic Installation like?

This is the second important question administrators will ask and, looking into the generated VMs, I have to say I have been impressed by what I saw. Here are a few observations:

• The Oracle Linux installation has a very low RPM count, meaning fewer running, but unused, services and fewer potential exploits to worry about.
• The domain configuration has: a properly configured GridLink data source for the new RAC database, a leasing table configured for Whole Server Migration (WSM) with migratable targets defined, JDBC persistent stores for JMS, JTA transaction logs stored via JDBC, separate channels for administration and public traffic, and so on.
• The Oracle Traffic Director configuration was straightforward and appeared as I expected, including a single application VIP.

One point to note is that the default administrator account is called system and not weblogic.

The separate "external" network channel, used to separate public from administration traffic

That’s not to say there isn’t configuration work still to be done, most notably around SSL certificates, but I can imagine that for a typical Java EE application you could well be 80% of the way to production readiness, right out of the box.


Work in Progress?

Whilst I was pleasantly surprised with the initial build of WebLogic/Traffic Director on the ODA, there is still some development work to be done. For example, as yet there's no obvious automated patching strategy (WebLogic patching is currently outside the scope of the main ODA patch mechanism) and this is something that most customers, particularly those with internet facing systems, will be interested in (for example CPUs, PSUs and JDK updates).

Another well-known constraint is the restriction to one WebLogic domain and a single cluster per ODA. This may suit a few customers with a monolithic application but many will need more granularity than this, especially when you consider how many processor cores may be available to WebLogic. In addition, on a test ODA all but the most trivial applications would need several environments, all in their own domains. Note that one work-around would be to create your own VM template and use that for additional environments, but that of course increases complexity and administrative overhead.

I gather that both of these areas are being addressed by Oracle.

Further Considerations

So far we have only looked at WebLogic – what about other Fusion Middleware products? The logical place would be to run those on ODA too but that will require VM templates, and for most products they are not sufficiently up to date, so you will end up building your own.

Finally, we come to Oracle software licensing. One purely commercial restriction is that you are only allowed to run Oracle Database Enterprise Edition. This is not a topic I have space to debate here but the upshot is that if you want to run Standard Edition One, or Standard Edition (possibly with RAC), and don't think upgrading to Enterprise Edition features will save you enough cores to make it more cost effective, then you should rule out ODA right now.

Secondly, you will have read above several mentions of Oracle Traffic Director, which used to be an Exalogic-only product. Now OTD is included in WebLogic Enterprise Edition and WebLogic Suite licenses (providing you are running on ODA) but note that, if you are not able to use Named User Plus, this means that the OTD VMs need a total of 3 dedicated processor licenses by default.

The last point is that you are allowed to do sub-server licensing, i.e. only a certain number of cores for each product, as well as vary the licenses in use on ODA – both up and down. If you do reduce the licenses used by ODA though, this only means they are available for use on a different server – don't expect a refund from Oracle!

Parting Thoughts

This article, I hope, gives you some idea as to what the ODA Virtualized Platform is, and how WebLogic and Oracle Traffic Director can be run on it. It won't surprise any readers that a package like ODA won't suit all customers, or even all of those with smaller WebLogic and database requirements. The platform performance should be "good enough" for many, but a careful trade-off will need to be made between the amount of memory allocated to the database – memory which arguably compensates for the short-fall in storage capability – and that assigned to WebLogic VMs (and their JVM heaps). Likewise the decision on whether to deploy OTD will depend on the value of the product in your situation, as well as your feelings towards a separate (typically hardware) load balancer pair, which may cost less than the extra Oracle licenses for ODA.

What is clear, however, is that the overall ODA package including the WebLogic templates and provisioning software, if it meets with your domain requirements, is an incredibly quick and easy way to build a robust WebLogic platform.

Simon Haslam is Principal Consultant at the UK based company Veriton Limited.


The Picture Desk


Service Oriented Architecture

Pick the right integration infrastructure component

Peter Paul van de Beek, Bol.com

Whether consulting or in pre-sales, one of the most frequently asked questions is how to pick the right integration infrastructure component to solve the problem at hand. This article addresses that question. We will look at requirements for integration infrastructure components, compare several components from Oracle's Fusion Middleware (FMW) stack, and provide recommendations on selecting the proper component to fulfill the needs of the organization.

A growing number of organizations adopt the principles of Service Orientation to solve and implement integrations in heterogeneous application environments. Over the last decade a growing number of tools and integration infrastructure components became available to support the needs in this field. Maturity in the market grew.

Due to acquisitions and mergers the number of vendors decreased. However, the number of products didn't drop at the same rate. An example of this is the acquisition of BEA and Sun by Oracle. Looking at the product portfolio it is clear that not all products with an at least partial functional overlap disappeared or were merged.

Requirements for an integration layer

To determine requirements for integration infrastructure components we will look at their desired capabilities: the functions they have to perform. When diving into this we will focus on the capabilities of a Service Bus, since the Service Bus is often regarded as the core of an integration solution.

A Service Bus offers the most important capabilities to design and implement a future proof integration layer. The core functions are described by the VETRO pattern, coined by David Chappell in his book Enterprise Service Bus: Theory in Practice. VETRO is the acronym for:

• Validate
• Enrich
• Transform
• Route
• Operate

The Validate part offers capabilities to assess the correctness of messages before these are forwarded or handled in any other way. Examples of implementations of this function are XSD validations and Schematron (rule based validation for XML). Besides these there are tools that support coding and/or configuration of validations.

In the Enrich stage, data is added to the message in order to add additional value to the message or to make it understandable for the receiving end. In most cases this stage requires a data look-up of some sort in one or more other software components. An example is the case where article data that is required by a specific system is added to a message that by default contains just an identification.

During the Transform stage there will be one or more steps to manipulate the incoming message, creating a message that can be understood by the receiving component. This can involve protocol, data structure and/or content transformations. Examples are transformations of JSON to XML, transformation of a COTS application's XSD to the canonical format (XSD), and country codes to full names.

To transform the data structure XSLT or XPath can be used. If content is translated we call this Translation. As you can see from the example this reaches beyond language translations. In Oracle’s Mediator it is common to use Domain Value Mapping (DVM) for translation.

The Routing part ensures the message is delivered to the proper target service. In some cases this has to be done dynamically. An example that is often used to clarify this is the case where there is a business application per geographic territory. In that case messages will be routed to the right application for the territory to handle this message. Another scenario is where a change of the product price will only be sent to a specific application if a promotion is planned for this product in the future.

The part that is called Operate ensures that target services, or other ways of interacting with a target application or component, are called. This determines for example the operation of a web service that will be called.

Besides these functions, modern integration infrastructure components offer all kinds of capabilities to cater for business and non-functional requirements:

• Management – including monitoring and auditing of the integration platform. This includes tools to manage the volume that is handled by the platform. An example is message throttling, which allows you to set message processing rates, connection rates and time-out values.
• Endpoint virtualization – This is enabled by the routing capabilities of the integration infrastructure, but should not go unnoted since it is important in decoupling applications and components. By having web service clients call an endpoint on the Service Bus, instead of a more direct call to the application that implements the service, a virtualization layer is created. This layer enables changes on the web service without the clients even noticing. Just the Service Bus has to be adjusted.
• Non-functional requirements – This includes guaranteed message delivery, availability and transaction management. Queuing is one of the patterns most used here.

Overview of Oracle's integration infrastructure components

Part of the answer to the question of how to select the proper infrastructure lies in the history and background of the components. This paragraph gives an overview of the most used integration infrastructure components from Oracle's Fusion Middleware stack. Later on we will have a closer look at all the different components.

BPEL

One of the first products of what became Oracle's SOA Suite was the Oracle BPEL Process Manager. This is a product based on BPEL, a standards based executable notation format for processes. It became an Oracle product with the acquisition of Collaxa in 2004. Since SOA Suite 11g, BPEL Process Manager is one of the components within the Service Component Architecture (SCA).

A tool like BPEL Process Manager is aimed at orchestrating (web) services into (business) processes. The Activities as defined in the BPEL standard offer a variety of possibilities for data manipulation, flow logic, and controlling and handling faults and exceptions.



Service Bus and Mediator

In 2006 Oracle launched the first versions of their Service Bus. This product was a part of SOA Suite 10g and was called Oracle Enterprise Service Bus (OESB). With the release of SOA Suite 11g, the OESB part Routing Service became the Service Component Architecture (SCA) component Mediator. Since 11g, Mediator is the component to handle transformations and routing of (XML) messages within an SCA composite.

With the acquisition of BEA Systems in 2008 by Oracle, the AquaLogic Service Bus (ALSB) became available as an Oracle product. The ALSB product was adapted to the Oracle standards and is called Oracle Service Bus (OSB) from the 11g suite on.

OSB is the stand-alone enterprise-level Service Bus provided by Oracle. Oracle has enhanced this product with some convenient features from Mediator that made life easier for developers, administrators and application management alike. Examples include features like domain-value maps (DVM), cross-references (XRefs) and the development of JCA-compliant adapters for OSB in JDeveloper.

Oracle Data Integrator
Oracle Data Integrator (ODI) is a product that was developed by Sunopsis, a company that was acquired by Oracle in 2006. ODI is aimed at data integration. Using a declarative approach, a data integration and/or transformation can be developed. Unlike most other ETL (Extract Transform Load) tools that provide similar solutions, ODI uses an E-LT way of working. After the extract, data is loaded first and then transformed within the RDBMS. This leverages the transformation capabilities of the RDBMS without requiring an additional platform for transformation.
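To make the E-LT idea concrete, here is a minimal SQL sketch of the pattern (the staging and target tables are hypothetical; ODI generates comparable set-based statements from its declarative mappings):

-- Extract + Load: raw rows are first landed unchanged in a staging table.
-- Transform: the RDBMS then applies the transformation in one set-based
-- statement, instead of processing row by row on a separate ETL server.
MERGE INTO customers tgt
USING (SELECT cust_id,
              UPPER(TRIM(cust_name)) AS cust_name, -- example transformation
              country
       FROM   stg_customers) src
ON (tgt.cust_id = src.cust_id)
WHEN MATCHED THEN
  UPDATE SET tgt.cust_name = src.cust_name,
             tgt.country   = src.country
WHEN NOT MATCHED THEN
  INSERT (cust_id, cust_name, country)
  VALUES (src.cust_id, src.cust_name, src.country);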

Determine the fit of the integration components
To determine the fit of the integration components for a business case, we need to look into more than their origin; we also have to look into their purpose. In the coming paragraphs we will do this for each of them.

BPEL Process Manager
BPEL is used to orchestrate services into (business) processes. The building blocks of BPEL Process Manager are activities, as defined in the BPEL standard. These activities can be grouped into activities that: call web services, receive messages, manipulate data, perform process or flow logic, and manage event or error handling. Given the nature of these activities, BPEL is very well equipped to handle integrations that require complex data enrichment, transformation and/or routing. For the Enrichment part there are a number of ways, like (technology) adapters and flow control, to gather additional data or to manipulate the data.

Flow control, Assign and the possibility to include native Java allow an extensive range of types of Transformations. From BPEL processes it is very easy to call web services and other functional capabilities using Technology Adapters.

Besides that, BPEL Process Manager has a stateful engine, which means that the state of the process is persisted. This makes it very easy to see which messages were received and what actions were performed on each message. The downside is that a lot of data needs to be persisted. Because of this, BPEL is less suited to handle large volumes of messages compared to the other tools, and of course this persistence requires large storage capacity.

Another feature that sets it apart from the other infrastructure components discussed in this article is the easy way to implement interactions with humans in the integration. This is done with a Human Task component.

Mediator
Looking at the structure of a Routing Service in Mediator, it quickly becomes clear what the possibilities of the tool are: received messages can be filtered (to determine whether they should be routed along a defined path), transformations (XSLT) can be performed, and an endpoint of another web service (or an adapter) can be called. The Technology Adapters enable extensive (protocol) transformations to send messages to different types of applications and technologies. Compared to BPEL Process Manager, Mediator is a more lightweight and more message-oriented infrastructure component. It is less aimed at calling web services in an orchestration. Because of this, Mediator will especially be used in scenarios where less, or less complex, data enrichment is needed in the integration.

Mediator is very capable in scenarios where transformation of data structure or protocol is required. Its main use will be to perform these functions within the boundaries of a SCA composite or for messages entering a SCA composite. Mediator offers solid integration with the other components of Oracle's SOA Suite. This also goes for the application management part.

So Mediator will handle all parts of the VETRO pattern. However, it cannot do this in such an extensive way as BPEL or OSB can. Enrichment, Transformation and Routing can be achieved for all kinds of use cases, but there are better components to handle the most complex ones. As an example we will look into Transformation.

Transformation can be done using Domain Value Maps. However, the capabilities to achieve this by calling web services, or by other means, are not that extensive and/or developer friendly compared to some of the other components in this comparison.

Oracle Service Bus (OSB)
The global way of handling messages using the Oracle Service Bus (OSB) is shown in the next figure: a message is received using a Proxy Service. The message can then be exposed to several ways and methods of validation, enrichment, transformation and routing (VETR). In the figure these are called "Message Magic". After the magic is performed, a Business Service is used to call the endpoint of another web service providing the implementation of the needed logic.

One of the differences of OSB compared to Mediator is that OSB can store (intermediate) results in variables. These variables can be used to alter the content of, for example, the header of the message, or for routing purposes. Besides that, OSB is better equipped to create service compositions, in other words to create services out of other services.

Compared to BPEL, the services provided by OSB are stateless. Because of this, OSB is more appropriate for implementing synchronous services and handling larger volumes of messages.

OSB will be deployed to interface with external partners, both for providing and consuming services. Besides that, it will be used to create the connection between enterprise domains. In implementation terms it will connect SCA composites.

OSB offers what are called "Enterprise Level Management Capabilities". One can tell by looking at and comparing the possibilities for load balancing and result caching; these extend beyond those of the other infrastructure components discussed here. Other features worth mentioning are throttling capabilities (to handle peak volumes of messages) and settings that support parallel execution of Business Services. OSB is very well equipped to handle very large numbers of messages.

Checking against the VETRO pattern, OSB can handle all parts very well. One could argue that it can handle less complex cases for Enrichment, Transformation and Routing compared to BPEL. However, because of its capability to handle larger volumes of messages and its better Enterprise Level Management Capabilities, it will be the component of choice when state and orchestration into processes are not big requirements.


Oracle Data Integrator (ODI)
As the name indicates, Oracle Data Integrator (ODI) is aimed at data integration. ODI is capable of handling very large volumes of data; none of the other components can handle volumes like ODI can. From a VETRO perspective, ODI is able to handle the Enrichment, Transformation and Routing parts within the limits of the chosen technology. However, ODI has restricted possibilities for transformation and routing. The other infrastructure components are better equipped for scenarios integrating functionality and data.

In most organizations ODI will be used alongside a Service Bus or another integration platform that is capable of sharing business logic and exposing it to other applications.

Putting the pieces together
Every component we looked into has its strengths and weaknesses. The tools can be used alongside each other so they can complement each other's strengths. A global overview of what the architecture of such a deployment could look like is depicted in the following figure.

ODI will be used to synchronize large volumes of data, especially in cases where real-time web services can't be used, or not in an efficient way. Besides that, it can be used to replicate data to a data warehouse.

Mediator will handle transformations and routing inside enterprise domains, in implementation terms inside a SCA composite. Besides that, it will be used inside SCA composites to handle protocol or data transformations.

BPEL will be used inside SCA composites to orchestrate services into (business) processes and where human interaction is required. Besides that, it will be used to perform more complex data enrichment or transformations. It will handle all integration that requires stateful processing. The use of BPEL will be avoided where large volumes of messages need to be handled.

OSB is the platform to handle messages sent to or received from external organizations, or between enterprise domains (between multiple SCA composites). OSB will be the Service Bus transporting messages between domains. OSB will be used for scenarios requiring endpoint virtualization. Besides this decoupling, this enables features like message throttling and result caching. In this way the consequences of handling large volumes of messages from clients that we don't control can be managed.

BPEL could also be used to implement business processes requiring services provided by multiple SCA composites or business domains. Many companies have business processes crossing the borders of business domains, but no tool to automate and manage them. BPEL can do this.

Wrap
This article looked into the requirements for an integration platform, using the VETRO pattern as a baseline. From this base we took a closer look at four components of Oracle's Fusion Middleware stack: BPEL Process Manager, Mediator, OSB and ODI.

For each of these products we showed the background, strengths and weaknesses, and offered example scenarios where their strengths could be properly used. In the end we showed how they could be used alongside each other to leverage the strengths of the individual components.

Peter Paul van de Beek is Software Architect at the Dutch company Bol.com.


The Picture Desk


Oracle PL/SQL Function Statistics

Introduction
There are few Oracle professionals who are not aware of the importance of keeping database statistics up to date. However, contrary to common knowledge, these statistics are not limited to tables, columns and indexes. A number of years ago it was a big surprise for me to discover the following facts from an article by my fellow Oracle ACE, Adrian Billington:

1. PL/SQL functions have a number of associated statistics, but Oracle default values are somewhat strange:
   a. CPU cost – 3000 [CPU instructions]
   b. I/O cost – 0 [data blocks to be read/written]
   c. Network cost – 0 [data blocks to be transmitted]
   d. Selectivity – 1% [i.e. if you compare the results of your function to a constant, they will match only 1 in every 100 comparisons]
   e. Cardinality (for functions returning collections) – 8168 [objects]
2. The Cost-Based Optimizer uses these default values to build execution plans.
3. Default values can be changed using the ASSOCIATE STATISTICS command in two ways: either with hard-coded values or with dynamic calculation using the Oracle Extensible Optimizer.
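To see which associations are currently in place for your own objects, you can query the data dictionary. A minimal sketch, assuming the standard *_ASSOCIATIONS dictionary views and their DEF_* cost columns:

-- Lists objects with associated statistics, including hard-coded default costs.
SELECT object_name,
       def_cpu_cost,    -- CPU cost (instructions)
       def_io_cost,     -- I/O cost (data blocks)
       def_net_cost,    -- network cost (data blocks)
       def_selectivity  -- selectivity (%)
FROM   user_associations;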

It is very easy to imagine scenarios where insufficiently set defaults could cause performance issues. For the purposes of this article, the scope will be limited to the cost of PL/SQL functions, because usually that is the area where performance problems most often occur.

Why do we care about PL/SQL statistics?
To illustrate the need for properly maintained PL/SQL statistics, examine a simple case. Assume that there are two different function calls in the WHERE clause. The first function is calculation-heavy, while the second one is calculation-light. Keep in mind that any time Oracle processes a group of Boolean statements, short-circuit evaluation is applied to eliminate unnecessary calls: if there is a statement "ConditionA OR ConditionB" and ConditionA is TRUE, the whole calculation of ConditionB can be skipped. To utilize this optimization technique in the described case of two functions, it always makes sense to check the light function first and continue only when it does not return the needed value.
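A minimal PL/SQL illustration of short-circuit evaluation (the traced helper function is hypothetical, just to show that the second operand is never evaluated):

SET SERVEROUTPUT ON
DECLARE
  FUNCTION f_traced RETURN BOOLEAN IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE('f_traced was evaluated'); -- never printed below
    RETURN TRUE;
  END;
BEGIN
  -- The left operand is already TRUE, so PL/SQL skips f_traced entirely.
  IF (1 = 1) OR f_traced THEN
    DBMS_OUTPUT.PUT_LINE('Condition is TRUE');
  END IF;
END;
/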

The real issue is how to tell the CBO which function is light and which function is heavy. If those functions are standalone and you know for sure that they are this different, the solution is very simple: you can just associate hard-coded costs, low for the light function and really high for the heavy one.

CREATE FUNCTION f_light_tx (i_empno NUMBER) RETURN VARCHAR2 IS
BEGIN
  RETURN 'Light:'||i_empno;
END;

ASSOCIATE STATISTICS WITH FUNCTIONS f_light_tx
DEFAULT COST (0,0,0); -- order of parameters: CPU, I/O, Network

CREATE FUNCTION f_heavy_tx (i_deptno NUMBER) RETURN VARCHAR2 IS
BEGIN
  RETURN 'Heavy:'||i_deptno;
END;

ASSOCIATE STATISTICS WITH FUNCTIONS f_heavy_tx
DEFAULT COST (99999,99999,99999);

The impact of the statistics can be seen immediately. The CBO will change the order of predicates on the fly as shown here:

SQL> set autotrace on explain
SQL> select count(*)
  2  from emp
  3  where f_heavy_tx(deptno) = 'A'
  4  or f_light_tx(empno) = 'B';

Keep in mind that there are techniques to get more realistic values than 99999 without using the Extensible Optimizer. Getting I/O and network costs is reasonably straightforward with regular session statistics: "db block gets"/"consistent gets" and "bytes sent via SQL*Net to dblink"/"bytes received via SQL*Net from dblink". As for CPU cost, as long as you know how much CPU time is consumed (on average) by the function, you can always find the number of CPU instructions for that time. The code snippet below shows how to do it for a 0.01 sec interval:

SQL> DECLARE
  2    v_units_nr NUMBER;
  3    v_time_nr  NUMBER := 0.01; -- in seconds
  4  BEGIN
  5    v_units_nr := DBMS_ODCI.ESTIMATE_CPU_UNITS(v_time_nr) * 1000;
  6    DBMS_OUTPUT.PUT_LINE('Instructions:'||round(v_units_nr));
  7  END;
  8  /
Instructions:9476396
SQL>
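The I/O side can be measured in a similar do-it-yourself fashion. A minimal sketch (assuming SELECT access on V$MYSTAT and V$STATNAME): snapshot the session statistics before and after a representative number of function calls, then divide the delta by the number of calls.

-- Snapshot of the current session's logical reads; run it before and
-- after calling the function N times, then (after - before) / N
-- approximates the I/O cost per call.
SELECT n.name, s.value
FROM   v$mystat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name IN ('db block gets', 'consistent gets');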



Solving the Problem of Packaged Functions
The manipulation of statistics demonstrated above seemed very promising until I realized the following restriction: if multiple functions are inside of the same package, there is no way to associate hard-coded statistics with them separately:

• ASSOCIATE STATISTICS WITH FUNCTIONS <PackageName>.<FunctionName> is invalid syntax;
• ASSOCIATE STATISTICS WITH PACKAGES <PackageName> applies hard-coded values to all functions in the package.

The last hope was the Extensible Optimizer with all of its obscure syntactical constructions, very limited documentation, and high learning curve. After a number of unsuccessful attempts, it became obvious that the problem was solvable. It is indeed possible to associate different statistics with packaged functions as long as you understand how the Extensible Optimizer treats them.

To illustrate this finding, assume in the following example that you have a package with three functions:

CREATE OR REPLACE PACKAGE perf_pkg IS
  FUNCTION f_heavy_tx (i_deptno NUMBER) RETURN VARCHAR2;
  FUNCTION f_light_tx (i_empno NUMBER) RETURN VARCHAR2;
  FUNCTION f_medium_tx (i_name VARCHAR2) RETURN VARCHAR2;
END;

CREATE OR REPLACE PACKAGE BODY perf_pkg IS
  FUNCTION f_heavy_tx (i_deptno NUMBER) RETURN VARCHAR2 IS
  BEGIN
    RETURN 'Heavy:'||i_deptno;
  END;

  FUNCTION f_light_tx (i_empno NUMBER) RETURN VARCHAR2 IS
  BEGIN
    RETURN 'Light:'||i_empno;
  END;

  FUNCTION f_medium_tx (i_name VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN initcap(i_name);
  END;
END;

The challenge is how to tell the CBO about the different costs of these functions. As always with the Extensible Optimizer, you need to start with a special object type that matches the required template.

CREATE OR REPLACE TYPE function_cost_oty AS OBJECT (
  dummy_attribute NUMBER,

  STATIC FUNCTION ODCIGetInterfaces (
    p_interfaces OUT SYS.ODCIObjectList
  ) RETURN NUMBER,

  STATIC FUNCTION ODCIStatsFunctionCost (
    p_func_info IN  SYS.ODCIFuncInfo,
    p_cost      OUT SYS.ODCICost,
    p_args      IN  SYS.ODCIArgDescList,
    i_single_nr IN  NUMBER,
    p_env       IN  SYS.ODCIEnv
  ) RETURN NUMBER,

  STATIC FUNCTION ODCIStatsFunctionCost (
    p_func_info IN  SYS.ODCIFuncInfo,
    p_cost      OUT SYS.ODCICost,
    p_args      IN  SYS.ODCIArgDescList,
    i_single_tx IN  VARCHAR2,
    p_env       IN  SYS.ODCIEnv
  ) RETURN NUMBER
);

Since the reason to create this type is to adjust I/O cost, the only method needed is ODCIStatsFunctionCost (in addition to the mandatory ODCIGetInterfaces). The structure of the parameters in this method is a bit strange: it includes a number of standard ones (p_func_info, p_cost, p_args), then the list of user-defined parameters of the function whose cost we are planning to correct, and another standard parameter (p_env).

The following is the trick to make it work. You need to create as many overloads of the ODCIStatsFunctionCost method as there are distinct combinations of function input parameters. In the package PERF_PKG, there are two functions with one NUMBER IN-parameter and one function with a VARCHAR2 IN-parameter. This means that you need two overloads: one for a single numeric input and one for a single textual input. The parameter names are irrelevant, but in the case of multiple parameters, their order would matter.

The body of FUNCTION_COST_OTY is as obscure as the specification, but don’t forget that the format must match the Oracle templates in order to work:

CREATE OR REPLACE TYPE BODY function_cost_oty AS
  STATIC FUNCTION ODCIGetInterfaces (p_interfaces OUT SYS.ODCIObjectList)
    RETURN NUMBER IS
  BEGIN
    p_interfaces := SYS.ODCIObjectList(SYS.ODCIObject('SYS', 'ODCISTATS2'));
    RETURN ODCIConst.success;
  END ODCIGetInterfaces;

  STATIC FUNCTION ODCIStatsFunctionCost (
    p_func_info IN  SYS.ODCIFuncInfo,
    p_cost      OUT SYS.ODCICost,
    p_args      IN  SYS.ODCIArgDescList,
    i_single_nr IN  NUMBER,
    p_env       IN  SYS.ODCIEnv
  ) RETURN NUMBER IS
  BEGIN
    IF p_func_info.MethodName LIKE '%HEAVY%' THEN
      p_cost := SYS.ODCICost(CPUcost=>NULL, IOcost=>1000,
                             NetworkCost=>NULL, IndexCostInfo=>NULL);
    END IF;
    RETURN ODCIConst.success;
  END;

  STATIC FUNCTION ODCIStatsFunctionCost (
    p_func_info IN  SYS.ODCIFuncInfo,
    p_cost      OUT SYS.ODCICost,
    p_args      IN  SYS.ODCIArgDescList,
    i_single_tx IN  VARCHAR2,
    p_env       IN  SYS.ODCIEnv
  ) RETURN NUMBER IS
  BEGIN
    IF p_func_info.MethodName LIKE '%MEDIUM%' THEN
      p_cost := SYS.ODCICost(NULL, 10, NULL, NULL);
    END IF;
    RETURN ODCIConst.success;
  END;
END;


The logic of the code is as follows: it evaluates the name of the function passed in via the P_FUNC_INFO input parameter and sets higher I/O costs if needed. That parameter is of type SYS.ODCIFuncInfo and has the following attributes: ObjectSchema, ObjectName, MethodName and Flags. If the function is standalone, its name will be in ObjectName, but if the function is part of a package, the package name will be in ObjectName and the function name will be in MethodName. In terms of assigning costs, NULL means use the default; anything other than NULL indicates an override.
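If you want to see how those attributes are filled at optimization time, a hypothetical debugging aid is to add a line like the following inside ODCIStatsFunctionCost (assuming SERVEROUTPUT is enabled in the session):

-- Prints the identity of the function whose cost is being requested.
-- For perf_pkg.f_heavy_tx, ObjectName holds PERF_PKG and MethodName
-- holds F_HEAVY_TX; for a standalone function, MethodName stays empty.
DBMS_OUTPUT.PUT_LINE(
  'Schema=' || p_func_info.ObjectSchema ||
  ' Object=' || p_func_info.ObjectName ||
  ' Method=' || p_func_info.MethodName);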

Now that all of the parts of the equation are set, it is time to associate statistics with the package and test its impact:

SQL> associate statistics with packages perf_pkg using function_cost_oty;
SQL> set autotrace on explain
SQL> select count(*)
  2  from emp
  3  where perf_pkg.f_heavy_tx(deptno)='A'
  4  or perf_pkg.f_light_tx(empno)='B'
  5  or perf_pkg.f_medium_tx(job)='C';

As you can see, the CBO changed the sequence of the functions to evaluate them in the order of the associated costs, namely light, medium, and heavy, which was exactly as required.

Summary
It will be a big surprise for a lot of Database Administrators that they need to worry about PL/SQL statistics. These statistics directly impact the way in which Oracle calculates the costs of the different execution plans of the corresponding SQL statements. Providing valid data to the CBO is crucial in ensuring that it makes the most informed decision, because currently there is no other way for the CBO to look inside PL/SQL units. On the other hand, the special object types provided by the Extensible Optimizer can do the job regardless of whether you are using standalone or packaged functions. It just takes a bit of time to learn and experiment. For more information about this topic, see my upcoming book PL/SQL Performance Tuning: Tips and Techniques (McGraw-Hill Education, ETA late spring 2014).

Michael Rosenblum is Software Architect and Senior DBA at Dulcian, Inc., based in New Jersey, USA.


The Picture Desk


Stop Generating your User Interface! Start designing IT!

Code generation is a powerful concept, and Oracle has a long history of applying it in their toolsets and demos. In the Oracle Designer days, developers would model a table and generate code to create the table in the database. But not only that: they would also generate the user interface, based on the table definition. There is a problem with this approach, however. Relational database models are optimized for storing data; we normalize models to minimize redundancy. User interfaces should be optimized to execute a task in a manner that fits the user, the context of use and the goal of the user. Exposing the relational model to the user does not help with that.

Recently, a number of new technologies have emerged to model and execute business processes; technologies like Oracle SOA Suite and Oracle BPM Suite. Process analysts and developers often make the mistake of equating user interaction with process flow. This mistake is even more common now that we can execute the BPMN 2.0 models that we create using Business Process Management Systems (BPMS). Analysts tend to model the user interaction in the process and then ask the developers to generate a screen for every human task that is modeled. This results in user interfaces that are error prone and inflexible. It also results in processes that are hard to maintain. Saving money in the project by generating the user interface will cost the organization money in maintenance mode. A better approach is to design the process flow and the user interface(s) separately from each other. This way you will be more flexible in making both process changes and user interface changes, and the applications will be more usable and fit for the tasks they are supposed to support. In this article I will discuss two cases: one in which the project used the code-generation approach and one in which the design approach was used.

Case: Building permits – Code generation approach
In this project, an application was built to support a new permit process. There are a number of different users, or roles, involved in this process:

• Applicant: a company or person who wants to build a new structure or wants to remodel an existing building;
• Front office employee: the front office receives the application and communicates with the applicant about the progress of the permit process;
• Building inspector, who reviews the permit application. They check whether the applicant is in compliance with the rules that apply to this particular case;
• Finance department, who sends the invoice;
• External advisors that the building inspector can consult for specific information. A common advisor is the local fire department.

The application was realized using Oracle SOA Suite 10g, based on a process design in BPMN 1.1. The business logic consisted of a combination of automated and human tasks. This was realized using BPEL, Java Web Services, the Human Task Service and JavaServer Faces (JSF) for the front end. The process was fine grained and the user interface was generated based on the XML that was passed from the BPEL process into the human task.

Tightly coupled UI

The problem
The developers were able to build the user interface in very little time, so at first this might look like a good way of saving cost. However, after being in production for some time, the organization experienced two kinds of problems with this approach: one from a process perspective and one from a user experience perspective.

Process perspective:
1. The process was too fine grained, because every step in the user interface needed to be a step in the process. This meant that a change in the user interface logic also required a change in the process.
2. The BPEL process carried a lot of data, because all the data that needed to be shown to the user was presented using a screen based on a human task. This means that the business process was cluttered with assign statements, copying values from one activity to the next. The data was not persisted in the database, so if it was needed somewhere else in the application, calls were made to running BPEL instances to fetch the data.

User experience perspective:
1. The user interface was organized completely from the 'process unit' perspective: the application for a building permit. This is fine for the applicant and the front office. However, building inspectors like to organize inspections based on the location of the building site, so that they can visit sites efficiently. The same is true for the people that need to approve the final decision and for the financial administration; they want to handle a set of applications that have reached a certain state periodically, and not one by one.
2. Users have little flexibility and only have access to a limited set of data: just the data that is carried by the process.



After the project was delivered and taken into production, the cost of the application was much higher than expected. This was caused by:

1. High administration cost. Because the process was so fine grained, a lot of effort needed to be spent to keep the system up and running. If users made a mistake and wanted to change data, they needed a technical administrator to change something in the process flow, to restart a process or to resubmit a service call in the BPEL process. A lot of administrators were assigned to managing the BPEL processes using the BPEL administration console.
2. High cost of changing something in the application. Whenever a change was needed, all layers of the application needed to be changed. Most change requests were related to user interface improvements. The impact of these changes was very high because the data in the UI was based on the data sent by the BPEL process. A new field in the UI meant changing both the process and the user interface.

Case: employee self service and manager self service – User centered design approach
An organization was planning to offer employees and managers self-service for HR related processes to save money. Instead of having people from the payroll department enter expense data, change bank account numbers, etc., employees can do this themselves. One of the most important requirements was that it would be easy to use. The user population was heterogeneous: from blue-collar workers to local government officials. A pilot to compare three solutions was defined for the expense process.

The following roles were identified:
• Employees, who want to submit expenses;
• Managers, who need to approve the expenses;
• Payroll administrators, who check the expenses against tax rules and pay them.

The process consisted of a combination of manual and automated steps. The organization used E-Business Suite 12 for HR processes. The data about the employees and managers came from EBS, and the end result (expense lines) was stored in EBS as a pay element. From there, the payroll department processed the expenses using EBS.

The three possible solutions that were compared were:
• Use the self service module from EBS;
• Buy an off-the-shelf product offering integration with EBS;
• Build something custom made for the organization.

There were several criteria important for the decision, but the most important ones were related to usability and the cost of development/implementation. Usability contributed to the business case in two ways: if the self-service application were too complex, the organization would end up with a lot of incorrect data. Secondly, if the self-service application were too complex, users might end up having a lot of questions for the payroll department. For this reason user experience was an important criterion in the pilot.

The custom made solution consisted of the following steps:
1. Define and create the business process based on BPM techniques;
2. Design and create the user interface based on User Experience techniques;
3. Integrate the self-service application with EBS and guide the process using SOA and BPM.

Note that these steps were partly executed in parallel.

The result
The process was designed based on BPMN Method and Style (Bruce Silver). It started with determining the 'unit of execution'. There was some discussion whether this should be an expense (trip), expense lines (travel, meal), or all expenses that occurred in a month (travel, meal, meal, etc.). From the employee's perspective, the unit of execution is the expense. From the manager's perspective, the unit of execution is the expense line. From the payroll department's perspective, the unit of execution is all the expenses that occur in a month. Here the separation between user interface logic and process design really helped. Because the UI would not be based on the process directly, we could analyze the process and determine that the unit of control is an expense line. The manager could approve the travel but decline the meal, and vice versa. But the user interface could be designed in such a way that the employee would not have to reenter all the data from the trip. For the payroll department the data would be presented in batches, so they could pay monthly, at the same time the salary was paid. These different views on the data can be realized easily if the user interface is not generated based on the process definition. At the same time, the individual items can be monitored in the process design without having to take into account the different views the users have on expenses.

Screen shot of the prototype that was built.

The user interface was designed, developed and tested using well-established user experience techniques:
• Personas and scenarios based on the roles and the different types of users in the organization;
• Interaction design and prototypes;
• Usability testing to improve the design.


Because the user interface was built separately from the process, the process was easy to realize. It only contained two steps: approving the expense and sending it to E-Business Suite after it was approved. The user interface could be tested with users before the development of the integration with EBS and the process model was finished. We executed several iterations of usability testing: first with paper prototypes, then with the actual software using mock-ups for the data.

The Expense Process

The different parts (GUI, Process and E-Business Suite) were integrated based on services. The process was connected to Oracle E-Business Suite using PL/SQL web services. The application services, the process and the user interface were connected using REST services.

Conclusion
The user centered design approach proved to have several advantages over the code generation approach:
• The application appeals to users and satisfies their needs, because it is designed and developed to support specific scenarios;
• The process is monitored and executed in compliance with tax rules;
• Different disciplines (process design & development, user interface design and development, service design & development) can execute in parallel because of the loosely coupled architecture, so there is no delay in the project;
• There is extra cost at the beginning of the project, but less cost in maintenance, because the application is easy to change and there is less need for change.

Architecture overview

Lonneke Dikmans is Managing Partner at the Dutch company Vennster.


Your company could be right here.

On a full page in the next issue of OTech Magazine.

Contact us at www.otechmag.com or +31614714343 for more information.


Information

OTech Magazine
OTech Magazine is an independent magazine for Oracle professionals.

OTech Magazine's goal is to offer a clear perspective on Oracle technologies and the way they are put into action. OTech Magazine publishes news stories, credible rumors and how-to's covering a variety of topics. As a trusted technology magazine, OTech Magazine provides opinion and analysis on the news in addition to the facts.

OTech Magazine is a trusted source for news, information and analysis about Oracle and its products. Our readership is made up of professionals who work with Oracle and Oracle related technologies on a daily basis. In addition, we cover topics relevant to niches like software architects, developers, designers and others.

OTech Magazine's writers are considered the top of the Oracle professionals in the world. Only selected, high-quality articles will make the magazine. Our editors are trusted worldwide for their knowledge of the Oracle field.

OTech Magazine will be published four times a year, once every season. In the fast, internet-driven world it's hard to keep track of what's important and what's not. OTech Magazine will help the Oracle professional keep focus.

OTech Magazine will always be available free of charge. Therefore the digital edition of the magazine will be published on the web.

OTech Magazine is an initiative of Douwe Pieter van den Bos.

Please note our terms and our privacy policy at www.otechmag.com.

Independence
OTech Magazine is an independent magazine. We are not affiliated, associated, authorized, endorsed by, or in any way officially connected with the Oracle Corporation or any of its subsidiaries or affiliates. The official Oracle web site is available at www.oracle.com. All Oracle software, logos etc. are registered trademarks of the Oracle Corporation. All other company and product names are trademarks or registered trademarks of their respective companies.

In other words: we are not Oracle, Oracle is Oracle. We are OTech Magazine.

Authors
Why would you like to be published in OTech Magazine?

- Credibility. OTech Magazine only publishes stories of the best-of-the-best of the Oracle technology professionals. Therefore, if you publish with us, you are the best-of-the-best.
- Quality. Only selected articles make it to OTech Magazine. Therefore, your article must be of high quality.
- Reach. Our readers are highly interested in the opinions of the best Oracle professionals in the world, and our readers are all around the world. They will appreciate your views.

OTech Magazine is always looking for the best of the best of the Oracle technology professionals to write articles. Because we only want to offer high-quality information, background stories, best practices or how-to's to our readers, we also need the best of the best.

Do you want to be part of the select few who write for OTech Magazine? Review our ‘writers guidelines’ and submit a proposal today at www.otechmag.com.

Advertisement
In this first issue of OTech Magazine there are no advertisements placed. For now, this was solely a hobby project. In the future, to make sure the digital edition of OTech Magazine will still be available free of charge, we will add advertisements. Are you willing to participate with us? Contact us at www.otechmag.com or +31614914343.

Intellectual Property
OTech Magazine and otechmag.com are trademarks that you may not use without written permission of OTech Magazine.

The contents of otechmag.com and each issue of OTech Magazine, including all text and photography, are the intellectual property of OTech Magazine. You may retrieve and display content from this website on a computer screen, print individual pages on paper (but not photocopy them), and store such pages in electronic form on disk (but not on any server or other storage device connected to a network) for your own personal, non-commercial use. You may not make commercial or other unauthorized use, by publication, distribution, or performance, without the permission of OTech Magazine. To do so without permission is a violation of copyright law.

Programs and Code Samples
OTech Magazine and otechmag.com could contain technical inaccuracies or typographical errors. Also, illustrations contained herein may show prototype equipment; your system configuration may differ slightly. The website and magazine contain small programs and code samples that are furnished as simple examples to provide an illustration. These examples have not been thoroughly tested under all conditions. otechmag.com, therefore, cannot guarantee or imply reliability, serviceability or function of these programs and code samples. All programs and code samples contained herein are provided to you AS IS. IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.


OTech Magazine
www.otechmag.com