Journal of Operational Risk Management - Operational Risks of Not Sharing Data
TRANSCRIPT
7/29/2019
The Journal of Operational Risk (1–14) Volume 6/Number 1, Spring 2011
Research Paper www.thejournalofoperationalrisk.com
The most insidious operational risk:
lack of effective information sharing
Steven Francis
Massachusetts Institute of Technology, Sloan School of Management,
600 Memorial Drive, W98-200 Cambridge, MA 02139-4822, USA;
email: [email protected]
Poorly integrated information systems often lead to risks that are high in frequency and low in severity. However, many catastrophic failures and loss events can also be traced back to a lack of integration. No single software application, consulting engagement or silver bullet exists that can solve the problem: doing so requires vision, discipline and an understanding of why many integration initiatives do not succeed.
1 INTRODUCTION
Sharing information across departments, software applications and other silos may pose the greatest existing systemic operational risk in any industry. The problem hinders decision making across nearly every organizational function. The Senior Supervisors Group, a group of senior managers from a large number of international financial firms (see Senior Supervisors Group (2009)), concluded that one of the four primary causes of the financial crisis was inadequate and often fragmented technological infrastructure that hindered effective risk identification and measurement. Management's lack of commitment to such risk control, as well as a lack of resources to develop the required information technology (IT) infrastructure, were cited as ongoing obstacles to improvement. Poor integration of data across systems and departments is mentioned repeatedly throughout the document as an impediment to risk management.
In no way are these problems specific to banking, or to managing credit risk or market risk. Consider the operational risks that the US intelligence community faces because of the same problem. The terrorist attack of September 11, 2001 and the attempted attack on December 25, 2009 could both have been prevented if data had been shared more effectively (National Commission on Terrorist Attacks (2004)). Operational risks should be considered in terms of their relevance to an organization's ability to perform its mission. There is perhaps no greater example of mission failure.
Salespeople lose customers, and service personnel often provide slow or ineffective
service because key information that is needed to service customers is spread all over
the organization. Sales, service and operations staff spend tremendous amounts of
time entering nearly identical data into multiple systems, and it does not always get entered in the same way, causing process errors and mistakes later on. A study by
Vanson Bourne (2009) showed the following:
(1) 89% of respondents stated that they cannot get a single view of process perfor-
mance because information on business processes is held in multiple operational
systems;
(2) 80% of respondents use middleware to try to bring data together in a way that
is unsatisfactory to those in charge; and
(3) 67% admitted that they hear about problems in service from customers before
they identify the problems themselves.
Why are firms so bad at this? Making a mental connection between integration infra-
structure and customer attrition due to poor service can be a difficult leap. There are
other fairly simple reasons, and thankfully there are also remedies. First we define
the problem with some examples and focus on what some of the solutions look like.
We then look at some of the risks and difficulties associated with implementing these
solutions.
The ability to serve customers faster, to reduce processing errors and to dramatically reduce the cycle times of core business processes is a direct benefit of having integrated data. Event-processing infrastructure can enable real-time visibility into key performance metrics that would otherwise only be available weeks or months after the relevant point in time. This allows for rapid responses in a constantly changing business environment. If customer waiting times go up, we should know right away. If
customer complaints increase, we should know right away. If orders decrease, we
should know right away. If input costs increase, we should know right away. Each of
these simple examples presents a measurable operational risk that, if not addressed
promptly, can lead to substantial loss events. Information technology organizations
already spend around 30% of their budgets on enterprise information integration
(Gannon et al (2009)), yet most of these efforts are not conducted in a proactive or
systematic fashion. Integrations are built and maintained in a reactive way with little
thought for consistency, reusability or any kind of strategy.
1.1 How do we spot integration problems?
We now give some examples of how this problem might occur, and a short description
of straight-through processing. Straight-through processing is interesting because it
is a formalized way of implementing some of the recommendations in this paper
specifically in the financial-services industry. Because the stakes of data accuracy,
integrity and timeliness are so high in financial services, best practices for integration
are formalized and broadly adopted. For these reasons, many financial-services firms have become very adept at process automation and integration.
A large manufacturing firm in the Pacific Northwest had a very difficult time obtain-
ing a complete view of its committed costs. It did not know what had been spent in
a given month or quarter. Only when the books were rolled up at the end of the
period was the company able to see what was actually spent, and to compare this
with budgeted numbers. This obviously made it difficult to control spending. Spend-
ing would frequently exceed what was budgeted and redundant stocks of spare parts
were common. Hundreds of thousands or even millions of dollars were being wasted.
The reason for these problems was that there were many remote offices where spend-
ing took place. Some of the larger remote offices had developed their own systems
for tracking expenses and spending, while some of the small offices just kept this data in spreadsheets. The remote offices would reenter the data into the corporate
financial system, or send batch uploads of the data, on an infrequent and inconsis-
tent basis. The company solved this problem by integrating the larger remote office
systems into the corporate financial system in real time, and by rolling out a web-
based expense-tracking tool to the smaller offices that directly updated the corporate
financial system. The result was greatly improved expense management, cash man-
agement, vendor management and inventory management. All of these benefits arose
from just a little integration.
An oil and gas company that I worked with in Calgary several years ago was finding it difficult to decide where to invest tens of millions of dollars because it lacked visibility into which projects were paying off best (or had the most potential to pay off). Without an integrated view of how different projects were producing, related production costs, potential for future production, and past and future maintenance costs, the organization had little confidence that it was investing its capital
optimally. To make matters worse, its economic situation changed daily due to fluc-
tuations in spot and future oil prices. Capital allocation was being performed based
on monthly and quarterly data, in an environment where the economics were chang-
ing every day. One executive said that he felt like he was investing tens of millions
of dollars with a blindfold on. To address this problem, the organization deployed
technology to consolidate information into a reporting database. They standardized
definitions of critical data across applications and delivered this information to users
through personalized dashboards. These dashboards were updated throughout the day
as data was delivered to the reporting database via events in real time. They also auto-
mated many core business processes to ensure that when data was updated in one
application it stayed in sync with other applications. This solution gave the customer
vastly improved visibility of its business and increased confidence that it was making the right investments at the right time.
In financial services, sharing information across partners and departments is broadly
known as straight-through processing (STP). Due to the high stakes of data quality in the financial-services industry, many firms have become very good at this. Straight-through processing represents a major shift from common T+3 trading to same-day
settlement. This is accomplished by sharing information more effectively between
departments and between partners in a transaction. Straight-through processing helps
to keep the front office and back office synchronized by integrating data and systems.
This kind of integration can drive significant operational costs out of a business. Rather than relying on operational staff to manually enter data from trading systems into back-office systems, which is expensive and leads to errors, STP automates these activities.
By implementing STP, firms increase the probability that a contract or an agreement is
settled on time, thereby reducing settlement risk. The sharing of information between
transaction partners is another important aspect of STP and is a great way for firms to find a competitive advantage and to further reduce costs. In essence, however, STP
is just a way to share information more effectively in order to improve transaction
processes.
1.2 Risk management technology
When people think about software for risk management, integration infrastructure is
not usually the first thing that comes to mind: indeed, it is typically not even the third
or fourth thing. There are a handful of technologies that are often associated with and
applied to risk management problems. Document management packages, business
continuity technology, assessment and survey tools, security technology, and busi-
ness applications all play a role. These technologies are often combined to address
a given risk or regulatory requirement. For example, electric utilities facing Federal
Energy Regulatory Commission requirements may be very concerned with ensuring
the continuing operation of the grid, and may therefore have to prevent any unauthorized access to systems that affect grid operations or, in the event of a catastrophe, may need to ensure that those systems continue to run. Business continuity and identity and access management solutions may therefore be very
important. Organizations strengthening Sarbanes-Oxley compliance may be interested in auditing and document management. Many governance, risk and compliance vendors package software components, industry best practices, and implementations of regulatory requirements into governance, risk and compliance suites addressing these specific problems. We focus now on technologies, solutions and best practices
for sharing information across an enterprise. Although such solutions are more broadly
focused, they are still highly relevant from a risk management perspective, and the
impact of such solutions may be far greater than the impact of more specific or narrowly focused solutions. Lam (2003) identifies three clear benefits that should
come from operational risk management endeavors. Data sharing initiatives satisfy
all of these.
(1) Rigorous operational risk management should both minimize day-to-day losses
and reduce the potential for occurrences of more costly incidents.
(2) Effective operational risk management improves a company's ability to achieve
its business objectives. As such, management can focus its efforts on revenue-
generating activities, as opposed to managing one crisis after another.
(3) Finally, accounting for operational risk strengthens the overall enterprise risk
management system.
1.3 How did it get so bad?
As organizations evolve they typically adopt more and more software applications
and data sources, and divisions occur naturally. The launch of a new project, the
acquisition of a company or the introduction of a product often involves the rollout of
some new software applications. With each new application, data that already exists
often gets duplicated. As a result there are often multiple copies of customer data,
vendor data, product data or asset data. This causes processes to become more lengthy
and complex. What was once a simple transaction can become quite unwieldy and
may require the use of several systems and the knowledge of many arbitrary rules. As
this natural evolution occurs, problems typically manifest in one of two ways.
Operational efficiency: the proliferation of systems and data often makes completing simple tasks more difficult, which leads to poor data quality, process errors and
efficiency problems. What used to be accomplished by accessing only one system
may now require access to and knowledge of three or four systems.
Business visibility: getting a complete view across systems can become very difficult
as the number of systems grows. As data is distributed across multiple applications
and data sources, retaining a single rational view of a customer, a vendor or an
asset can become very challenging. This leads to decisions that may be based on
inaccurate or incomplete information.
Aside from process breakdowns that occur due to inconsistent data, or bad decisions
that are made due to fragmented views of information, data proliferation causes other
serious problems. One sure sign of a data-visibility problem is the proliferation of
spreadsheets and desktop databases. Users often extract data from software appli-
cations and put it into spreadsheets or desktop databases. As soon as they become
reliant on these spreadsheets or small databases to do their job, it becomes difficult
to change the upstream application because this will prevent users from keeping their
spreadsheets up to date. This limits IT flexibility, which in turn limits the flexibility of the whole enterprise. Such desktop databases are also a security risk: it is very easy for employees to e-mail databases or to store them on a personal drive.
Another persistent problem often occurs in the way in which siloed applications
are integrated. Applications are typically integrated manually, through duplicate data
entry, or by creating all manner of different software connections between differ-
ent applications. We have already covered the problems with manual integration and
multiple entry. However, the problems created by excessive software connections, or
interfaces, are also profound. These connections can severely inhibit the flexibility
of an entire organization. These connections are created using whatever technology
the developer desires and they are usually not monitored or created in a consistent
way. These connections also get duplicated frequently, as data from one application may need to integrate with several other applications. The maintenance of these connections consumes significant resources, and the connections themselves prevent
organizations from upgrading or switching software applications due to the effort that
would be required to rebuild these connections if a new application were installed.
Benefiting from new technology is very hard when an organization is tightly wired to
a legacy technology infrastructure. The next section outlines several ways of address-
ing these visibility and operational efficiency problems.
2 COMMON APPROACHES
2.1 Data warehousing and business intelligence
Perhaps the most recognized and widely used way of integrating information is by
consolidating data from multiple applications into a data mart or data warehouse.
A data warehouse is a database that has been optimized for reporting and decision
support functions. It usually includes data that is summarized at different levels in
order to quickly answer common questions, such as questions based on a time period,
geography or a department. Reporting and business-intelligence tools are usually
used in conjunction with data warehouses to enable ad hoc reporting and analysis.
Common reports or frequently sought-after information may be displayed in person-
alized dashboards, with varying levels of summarization and visibility permitted for
different roles or levels within an organization.
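The idea of summarizing data at different grains to answer common questions quickly can be sketched in a few lines of Python (a toy illustration; the detail rows, field names and figures are invented, not taken from any particular warehouse product):

```python
from collections import defaultdict

# Invented detail rows as they might arrive from operational systems:
# (period, region, department, amount)
transactions = [
    ("2011-Q1", "EMEA", "sales",   1200.0),
    ("2011-Q1", "EMEA", "service",  300.0),
    ("2011-Q1", "AMER", "sales",    900.0),
    ("2011-Q2", "AMER", "sales",   1100.0),
]

def summarize(rows, *key_fields):
    """Roll detail rows up to the requested grain, e.g. by period,
    by region, or by (period, region) together."""
    index = {"period": 0, "region": 1, "department": 2}
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[index[f]] for f in key_fields)
        totals[key] += row[3]  # sum the amount column
    return dict(totals)

# Pre-computed summary tables a warehouse might maintain so that common
# questions are answered without rescanning the detail rows:
by_period = summarize(transactions, "period")
by_region = summarize(transactions, "region")
```

A real warehouse would persist such rollups and refresh them on a schedule; the point is only that common grains are computed once, not per query.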
The expense and effort required to implement such systems are significant: they tend to be lengthy and expensive projects. However, the value of a successful data
warehouse and decision support system can be tremendous. They can provide visi-
bility and business insights that were previously impossible to obtain. Although these
systems can help immensely with data visibility problems, they do little to address
the operational efficiency problems related to siloed information. In fact, the new
connections that are created in order to push data into a warehouse mean that the
problem is often made worse.
In reality, as organizations grow, data warehouses often become a necessity. Without
a consistent, consolidated and summarized source of information, reporting becomes
increasingly difficult over time. Also, data-warehouse solutions fit well with the
project-oriented nature of many IT departments.
Although data warehouses can be used for some predictive analysis, they are typi-
cally backward looking and intended for historical reporting.
2.2 Event-based or service-based integration
The concept of building an integration infrastructure is often overlooked as a possible
means of controlling risk. This may be because this is not technology that end users ever see or directly interact with. However, a well-developed integration strategy and
infrastructure can improve operational efficiency, data quality, process quality and
decision making.
The power of such solutions is tremendous. For example, consider a personal
financial advisor who uses four different applications to service his customers. Maybe
he has a marketing and recommendations application, a customer information and
service application, a portfolio management application and another application for
insurance products. Let us consider how a common process might unfold if these
applications are not integrated. When a customer notifies his broker of an address
change or a life-event change, such as a marriage, the broker may need to transfer
the call to the operations department, or personally update this information in three or four systems. Either of these scenarios would be likely to irritate a customer. If
these systems were integrated in a consistent fashion, they would just stay in sync.
The customer would be serviced more quickly and there would be fewer process
breakdowns down the road because data quality would improve. It would also not be
necessary to create a laundry list of hard-wired software connections that inhibit
IT and organizational flexibility.
Michelson (2006) gives the following description of an event:
A service may generate an event. The event may signify a problem or impending
problem, an opportunity, a threshold, or a deviation. Upon generation, the event
is immediately disseminated to all interested parties (human or automated). The
interested parties evaluate the event, and optionally take action. The event-driven action may include the invocation of a service, the triggering of a business process,
and/or further information publication/syndication. In this interaction, the service is
purely one of many event sources in a broader event-driven architecture.
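Michelson's description maps naturally onto a publish/subscribe pattern. The following Python sketch is purely illustrative (the broker, event name and application names are hypothetical, not from any product): a broker disseminates each event to every interested party, which evaluates it and optionally takes action. It also shows how the advisor's applications above could stay in sync.

```python
from collections import defaultdict

class EventBroker:
    """Toy publish/subscribe broker: each event is immediately
    disseminated to all interested parties, which may optionally act."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)  # each interested party evaluates the event

broker = EventBroker()
updates = []

# The advisor scenario: three hypothetical applications stay in sync
# by subscribing to a single "address_changed" event.
for app in ("crm", "portfolio", "insurance"):
    broker.subscribe("address_changed",
                     lambda payload, app=app: updates.append((app, payload["city"])))

# One update at the point of entry propagates everywhere:
broker.publish("address_changed", {"customer": "C42", "city": "Boston"})
```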
Organizations have always been driven by business events, so why is this not
the case for IT departments? Chandy and Schulte (2007) cite three reasons for the
increased importance of event-based integration:
(1) increased business competitiveness;
(2) an increased level of responsiveness and real-time interaction due to broad
adoption of internet technologies;
(3) increased power and capability of hardware and software, accompanied by a dramatic increase in technical capabilities.
2.3 Process automation tools
Process automation tools allow organizations to create new processes, or new micro-applications, that run across existing applications. A list of a few products that have this capability is as follows (note that there are many other products as well and this is not meant as an endorsement of any kind):
(1) IBM WebSphere Business Modeler;
(2) Oracle BPM Suite;
(3) Pegasystems SmartBPM Suite;
(4) SoftwareAG Business Process Management Suite.
Such tools make it relatively easy to talk to and integrate with existing applications
and trading partners. A business process that crosses multiple applications can often
be graphically defined and implemented using the same tool. These tools are not just "pretty picture drawers" that facilitate documentation of processes: many modern tools exist that have the power to actually automate and reengineer processes across a variety of existing systems. Such tools are indispensable for firms that are serious about STP. These tools generate audit trails of process executions and enable reporting on executed processes based on the data that flowed through them. Such tools enable the creation of composite applications, or applications that are composed of functions and services from existing applications. This ensures that neither data nor functionality is unnecessarily duplicated. Through the use of
adapters, some process automation tools enable IT departments to talk to all different
kinds of applications and technologies in a consistent way. This frees users from being
required to know the nitty-gritty details or specific programming language of legacy
software applications. Such tools make most software look almost identical.
These tools are especially useful for STP when automating long-running trans-
actions such as loan originations. Rather than rekeying data for a loan application,
financial analysis, scoring, credit decision or document preparation, business-process
tools can automate much of this process by exposing and reusing functions in the
existing systems, then connecting these existing functions in new ways. Process tools
typically enable complex rule definition as well. For example, if a customer is going to be denied a loan, the "deny" step of the process can automatically check to see
whether the customer qualifies for a credit card, which may be another path to profit,
and a way to help a customer rebuild damaged credit. This same rule could be reused
in another process as well, such as a process for a customer that is opening a new
checking account.
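The reusable denial rule described above might be sketched as follows (a hypothetical illustration; the function name and thresholds are invented, not taken from any real credit policy):

```python
def next_offer_on_denial(credit_score, annual_income):
    """Hypothetical reusable business rule: when a loan is denied,
    check whether the customer qualifies for a credit card instead.
    The thresholds below are invented for illustration only."""
    if credit_score >= 550 and annual_income >= 20000:
        return "offer_secured_credit_card"
    return "no_offer"

# The same rule can be invoked from the loan process and, unchanged,
# from a process for opening a new checking account:
loan_denial_outcome = next_offer_on_denial(600, 35000)
account_opening_outcome = next_offer_on_denial(480, 15000)
```

Because the rule lives in one place, both processes stay consistent when the policy changes.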
Process automation tools may be implemented with or without an event-based and
service-based integration infrastructure. However, the value that process automation
tools provide is often greater if an event and services infrastructure is already in place.
Event-based and service-based integration infrastructure increases the usefulness
of this technology for two reasons. First, it cuts down on the number of hard-wired
connections, which are very damaging to organizational flexibility. Event technology enables business processes to tell the messaging technology about an important event
that needs to be shared across applications, or, vice versa, enables the messaging
technology to tell a process about an important event that needs to be shared across
applications. This contrasts with hard-wired connections between applications, which
involve massive duplication and do not allow for reuse.
Let us now consider an example that includes business-process automation as well
as event-based integration. Consider what a part of a loan application process might
look like if systems are not integrated. First, a loan officer has to manually enter
data into a loan processing system, a credit management system and a document
preparation system. Next, they walk to (or call) the credit department to find a credit
analyst to tell them to hurry up with the credit processing. The credit analyst has some
difficulty locating the credit request because the identification in the credit system
is different from that of the loan processing system. After locating the record, the
credit analyst informs the loan officer that the request has not yet been completed, or
even begun, because the loan officer had entered some of the customer's information incorrectly and the customer's identification could not be validated. This is just the
beginning of the process.
As an alternative, imagine that when the loan officer entered information into the loan processing application, it automatically created a "loan request" event. Other applications, such as the credit management and document preparation applications, might be interested in this event and have a subscription to it. This event might also be subscribed to by a business process called "loan application". This process can tie together the activities of the loan application across all of the different applications that it touches, and it can also assign tasks to the people that are required to advance the loan application process. The "loan request" event has all of the loan application data
attached to it, and the subscribing process and systems are automatically updated
with this information. As soon as the loan request event is received by these other
systems, credit analysis and document preparation will begin. When the credit analyst
completes his credit analysis task and submits his feedback, this would fire an event called "credit feedback", which the loan application process would subscribe to, causing the process to wake up from a state where it was waiting for feedback.
Then the processing of the loan application would resume. All system interaction and
human interaction are orchestrated by the "loan application" process, and each event and
activity in the process is recorded and fully auditable. There are countless examples
of other improvements that are possible in almost any kind of business.
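The event-driven loan example can be sketched as a toy state machine in Python (all event and task names are illustrative; in practice a BPM engine would provide this orchestration):

```python
class LoanApplicationProcess:
    """Toy orchestration of the loan example: the process reacts to a
    "loan request" event, fans tasks out to subscribing systems, then
    sleeps until "credit feedback" arrives. Every event is recorded,
    so the process is fully auditable. All names are illustrative."""
    def __init__(self):
        self.state = "idle"
        self.audit_trail = []

    def on_event(self, event_type, data):
        self.audit_trail.append((event_type, data))
        if event_type == "loan_request":
            # Credit analysis and document preparation begin at once,
            # with no rekeying of the applicant's data.
            self.state = "awaiting_credit_feedback"
            return ["start_credit_analysis", "start_document_prep"]
        if event_type == "credit_feedback" and self.state == "awaiting_credit_feedback":
            self.state = "processing"  # the process wakes up
            return ["resume_loan_processing"]
        return []  # ignore events that are not relevant in this state

process = LoanApplicationProcess()
tasks = process.on_event("loan_request", {"applicant": "A-17"})
followups = process.on_event("credit_feedback", {"approved": True})
```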
3 IMPLEMENTATION CHALLENGES AND COMMON OBSTACLES
Implementing event-based and process-based integration is no easy task. The technology is not inherently difficult to use, but IT organizations become very adept at creating hard-wired integrations whenever a need arises. When project timelines are
being squeezed and someone needs to save a day or two rolling out a new application,
"quick and dirty" integration may be a great way to cut corners. There is also a considerable amount of planning and coordination that must be done up front in order to
develop standard definitions and representations of data. Over time, event-based inte-
gration makes integration much faster and easier. However, there is definitely some
up-front planning and investment required.
The benefits of this technology are very compelling, but many organizations have
started down this path without achieving success. There are some good reasons for
this. Here are a few of them, as well as some tips on how to overcome these obstacles.
This is by no means an exhaustive list, but these are certainly some of the more common factors inhibiting successful integration initiatives.
(1) Information technology is usually project driven, and the value of integration
with a single project is quite low compared with the value of integration across
an enterprise. The benefits of integration typically accrue to an organization,
not to a project.
(2) Developers tend to be very independent, and like to do things their own way.
They may resist such changes. Incentives for developers are often overlooked.
(3) Projects are usually carried out on very tight schedules, and if an integration
infrastructure is new, this can slow things down for a couple of days when
first used. Switching to the old way can therefore please both developers and
project managers.
FIGURE 1 Policy resistance to integration standards. (Causal loop diagram; its nodes, connected by positive-polarity arrows, are: data quality, data visibility, standardized integration initiatives, burden for project/developers, burden for department, benefit to organization, haphazard integration, project schedule pressure, and new project or software application.)
(4) Lack of standard definitions for data can slow things down. It is easy to get
stuck in the weeds discussing what data constitutes a customer or an order.
This is further complicated by differences in semantic representations of data
in different applications. For example, height can be measured in feet, inches,
centimeters, etc. Weight and distances often have different measurements as
well (Gannon et al (2009)). Such issues complicate the construction of standard
definitions and, for those who have not dealt with such issues, the task can seem
quite daunting.
Figure 1 depicts a causal loop diagram showing how standardized integration ini-
tiatives can be thwarted by some of the forces described above. The diagram is easiest
to read by starting on the right-hand side, at "project schedule pressure", and
following the arrows.
4 TIPS FOR OVERCOMING COMMON OBSTACLES
In Figure 1, the arrows leading to "haphazard integration" that have a positive polarity
represent the forces supporting the status quo and preventing improved integration
practices. These dynamics need to be disrupted in order to achieve success. The next
subsection gives some ways of doing this effectively.
4.1 Measurement
By creating a database of loss events, the measurement and analysis of these risks
becomes far easier. Whether it is customer attrition, a work stoppage event or some-
thing more serious, the objective reporting of such events into a single repository will
increase visibility and awareness. The reporting of such events needs to be indepen-
dent, however. A salesperson or serviceperson may be reluctant to disclose a loss, or
may attribute a loss incorrectly in order to manage his/her performance metrics. For
this reason, reporting of loss events should be reviewed in a short process or workflow
before being entered into a database. Even when there is a high level of independence
in the reporting process, attribution is still difficult. However, as long as the reporting
process is fairly independent, this can be addressed later. Such measurement efforts
make it much easier to achieve strategic management support and sponsorship, which
is obviously critical.
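The reporting-and-review process described above can be sketched as follows. This is a hedged illustration, assuming a simple two-stage workflow in which events sit in a pending queue until an independent reviewer commits them; the class and field names (`LossEvent`, `LossEventLog`) are invented for this example:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LossEvent:
    reported_by: str
    category: str            # e.g. "customer attrition", "work stoppage"
    amount: float            # estimated loss
    event_date: date
    reviewed_by: str = ""    # set only after independent review

class LossEventLog:
    """Single repository of loss events with a review step before commit."""

    def __init__(self):
        self._pending = []
        self._committed = []

    def report(self, event: LossEvent):
        # Events enter a pending queue, not the database itself, so that
        # attribution can be checked independently of the reporter.
        self._pending.append(event)

    def review(self, reviewer: str):
        # An independent reviewer approves pending events into the repository.
        for event in self._pending:
            event.reviewed_by = reviewer
            self._committed.append(event)
        self._pending.clear()

    def total_losses(self) -> float:
        # Only reviewed events count toward measured losses.
        return sum(e.amount for e in self._committed)
```

The key design point is that `total_losses` ignores unreviewed reports, which mirrors the paper's argument that a short review workflow should sit between disclosure and the database.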
It is also important to measure benefits as well as losses. There is some art here and some science, but data sharing undoubtedly creates some direct and measurable
benefits. Many business processes can be optimized and shortened, driving costs out
of the business. The total amount of integration code that needs to be created and
maintained is reduced. Troubleshooting becomes less costly because process audit
trails are maintained and can be easily reported. Such improved flexibility can actually
become a competitive advantage that attracts new customers, although attribution of
such effects is difficult.
4.2 Recognition
Effective incentives and controls are also essential. Communicating standards for how integration should be performed within IT, and publishing these standards, is a good
start. Recognition is a powerful motivator too, and by publicly and positively acknowl-
edging compliance with such standards, successful adoption is much more likely.
4.3 Using expandable definitions
To avoid the trap of getting stuck arguing about how to define data, organizations
can use canonical definitions of data (essentially a union of all data elements from
the contributing systems). This is a pragmatic best practice that allows progress to begin immediately.
These canonical definitions can be expanded over time. However, there will also
be challenges with data translation. Because similar data might have different units
of measurement in different systems, a translation or rules engine is useful so that
translation logic does not get duplicated.
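A minimal sketch of this approach, assuming two invented source systems and a made-up "customer" record: the canonical definition is the union of the fields the systems carry, and one translation rule per system is registered centrally so conversion logic is not duplicated across interfaces:

```python
# The canonical "customer" shape: a union of the data elements found in
# the contributing systems. Field and system names here are hypothetical.
CANONICAL_FIELDS = {"id", "name", "height_cm", "country"}

# One translation rule per source system, registered in a single place
# (a simple stand-in for the translation/rules engine the text describes).
RULES = {
    "crm": lambda r: {"id": r["cust_id"], "name": r["full_name"],
                      "height_cm": r["height_in"] * 2.54, "country": r["country"]},
    "erp": lambda r: {"id": r["customer"], "name": r["name"],
                      "height_cm": r["height_cm"], "country": r["ctry"]},
}

def to_canonical(system: str, record: dict) -> dict:
    """Translate a system-specific record into the canonical shape."""
    canonical = RULES[system](record)
    assert set(canonical) == CANONICAL_FIELDS, "rule must emit every canonical field"
    return canonical
```

Adding a third system later means adding one rule, not rewriting every pairwise interface, which is how the canonical definition can be expanded over time.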
4.4 Do not forget the cloud
Cloud computing, and specifically software as a service, poses new challenges relating
to integration as well. As organizations move applications from the premises into
the cloud, where they are controlled by a third party, integration becomes more complex.
Integrating with something that is hosted elsewhere, in an environment that supports
numerous customers, may sometimes not be possible. For example, the vendor might
not allow it for security or quality-control reasons. If it is possible, there will almost cer-
tainly be limitations on what kind of integration is permitted (such as batch integration
only, Web-services-based integration only, security restrictions, etc).
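As a hedged sketch of the batch-only case, suppose a SaaS vendor permits only periodic batch uploads with a cap on records per request. The integration then has to buffer events rather than send them one at a time; the `BatchUploader` class and its limit are entirely hypothetical:

```python
class BatchUploader:
    """Buffers records and ships them in vendor-sized batches.

    Stand-in for an integration with a SaaS application that permits
    batch uploads only; the limit models a vendor-imposed cap per request.
    """

    def __init__(self, batch_limit: int = 100):
        self.batch_limit = batch_limit
        self.buffer = []
        self.sent_batches = []   # record of what was shipped, for audit trails

    def queue(self, record: dict):
        # Individual events accumulate locally until a full batch is ready.
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_limit:
            self.flush()

    def flush(self):
        # In practice this would be an authenticated call to the vendor's
        # batch endpoint; here we just record the batch as sent.
        if self.buffer:
            self.sent_batches.append(list(self.buffer))
            self.buffer.clear()
```

The buffering and the explicit `flush` are exactly the kind of extra machinery (and latency) that on-premises point-to-point integration did not require.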
4.5 Timing
There are good times and bad times to roll out integration infrastructure. It is unrealistic
to imagine that any organization would throw away years' worth of hard-wired
software connections and replace them overnight. This would be a large and risky project, with little immediate or obvious benefit to the business. However, upgrading
or replacing large systems creates a great opportunity. Large systems have many
software connections or interfaces. When these systems are replaced, many software
connections must also be replaced. This is the ideal time to start doing things the
right way.
4.6 Vision and management support
Having a clear vision of a desired future state that is supported by senior management
is also critical to success. Such a vision needs to align with an organization's goals
and mission. Making technology selections, decisions or purchases before clearly
defining a desired future state, or before securing support from senior management,
is not recommended. Without such a vision, it is difficult to answer basic questions
such as "why are we doing this?"
5 CONCLUSION
Data, along with customers and employees, is increasingly identified as one of an
organization's most valuable assets (Pitney Bowes Business Insight (2009)). Organizations
must take care to maintain this asset just as they take care of their other key
assets, such as employees and customers. However, efforts to improve data quality,
visibility and integration take a great deal of patience and commitment. There are
many places where such efforts can be compromised. Although it may be difficult to
track and measure, loss events resulting from poor-quality data are rampant. The
examples provided in this paper offer a good starting point for beginning to
recognize data problems and attribute loss events to them. Progress will be gradual
and iterative but, over time, data-quality improvements can profoundly change
an organization and make it more adaptable and flexible. Gradually, they will improve
the quality of work of every employee who uses software to do his/her job. Process
errors and breakdowns will diminish and customers will be served faster and more
effectively.
REFERENCES
Chandy, K. M., and Schulte, W. R. (2007). What is event driven architecture (EDA) and why
does it matter? Discussion Paper (July).
Chandy, K. M., and Schulte, W. R. (2009). Event Processing: Designing IT Systems for
Agile Companies. McGraw-Hill, New York.
De Fontnouvelle, P., DeJesus-Rueff, V., Jordan, J., and Rosengren, E. (2003). Using loss
data to quantify operational risk. Working Paper, Federal Reserve Bank of Boston.
Gannon, T., Madnick, S., Moulton, A., Sabbouh, M., Siegel, M., and Zhu, H. (2009). Framework
for the analysis of the adaptability, extensibility, and scalability of semantic information
integration and the context mediation approach. In Proceedings of the 42nd Hawaii
International Conference on System Sciences, January 5-8, Big Island, HI. Institute of
Electrical and Electronics Engineers, New York.
Gustafsson, J., Nielsen, J. P., Pritchard, P., and Roberts, D. (2006). Quantifying operational
risk guided by kernel smoothing and continuous credibility: a practitioner's view. The
Journal of Operational Risk 1(1), 43-55.
Lam, J. (2003). Enterprise Risk Management: From Incentives to Controls. John Wiley &
Sons.
Michelson, B. M. (2006). Event-Driven Architecture Overview: Event-Driven SOA Is Just
Part of the EDA Story. Report, Patricia Seybold Group (February).
National Commission on Terrorist Attacks (2004). 9/11 Commission Report. Executive
Summary (July).
Pitney Bowes Business Insight (2009). Managing your data assets. White Paper, IDC Government
Insights.
Senior Supervisors Group (2009). Risk management lessons from the global banking crisis
of 2008. Report, Senior Supervisors Group (October).
Vanson Bourne (2009). Overtaken by events? The quest for operational responsiveness.
Study commissioned by Progress Software (September).