
Deloitte Consulting LLP

November 2011

The 2011 CIO Compass A field guide to practical IT strategy and planning

Letter to the CIO

Dear CIO,

The role of information technology (IT) in business has never been more significant than it is today. IT is a critical enabler of virtually every operating element in contemporary organizations, and timely access to information is unquestionably of paramount importance. As the “art of the possible” in technology continues to advance at a rapid pace, it is clear that IT has become equally important as a shaping influence on corporate business strategy. (To wit, over the past 18 months, “Cloud Computing” has become a topic of great interest to CEOs, CFOs, and CIOs alike.)

Whether you are a newly promoted CIO or a seasoned “pro,” this compendium has something for you. It contains insights, approaches, “tips,” and recommendations taken from Deloitte’s work with clients of all sizes and across all industries. IT professionals, at all levels of experience, should find useful ideas that can be directly applied to their IT planning, development, deployment, and operational activities.

Each article in this compendium can be read as a self-contained piece, but, in concert, they span a broad set of IT topics and issues, ranging from leading-edge approaches for developing pragmatic, business-driven IT strategic plans to perspectives on leveraging new developments in “intelligent asset” technology. Whatever your topic of immediate interest, there is likely valuable information here that can augment and inform your thinking going forward.

Deloitte is here to help you whether you are wrestling with aligning your overall business and technology agendas, or looking for insights on improving delivery of IT services. As acknowledged by leading industry analysts, our experience in assisting clients with topics spanning the full IT “Plan, Build, Manage” life-cycle is virtually unmatched. If you’re looking for a seasoned advisor with real-world experience, we’re ready to help.

Peter Blatman Practice Leader, Global Technology Advisory Services Deloitte Consulting LLP Tel: +1 415 783 6169 [email protected]


Contents

IT strategy

The CIO: Liaison and guide ...............................................................................................................................................4

SAIL-STREAM: A dynamic approach to strategy formulation ..............................................................................................6

Operating models: The hidden jewels of IT strategy .........................................................................................................10

Matching IT demand and supply: A practical approach to aligning IT and business objectives .........................................13

Enterprise architecture

ERP consolidation: Fragmented systems to effective operations .......................................................................................17

Drowning in data: New architectures and approaches to handle the multimodal data explosion ......................................22

Addressing the digital threat: Data security and privacy in distributed environments ........................................................28

EA and Lean IT: Exploring the relationships ......................................................................................................................34

IT efficiency and effectiveness

Establishing and maintaining IT agility .............................................................................................................................43

The bottom line: Blending service delivery models to improve performance ....................................................................48

Charting the course for IT success with ITIL: Targeting 10 actions in 10 months ..............................................................53

What you don't know will hurt you: Intelligent approaches to asset, configuration, and risk management .....................57


From silos to services: Navigating the transformation path to an IT service delivery organization .....................................65

Infrastructure provisioning: Automation .........................................................................................................................72

The rapid evolution of cloud computing

Cloud computing: Opportunity or threat? ........................................................................................................................78

Navigating the cloud computing maze ............................................................................................................................82

Cloud strategy: Private, public, or hybrid? ........................................................................................................................86

ERP in the cloud: SaaS based application solutions .........................................................................................................89

Cloud-based application rationalization ...........................................................................................................................94

IT organization design and governance

Architecting the IT organization of the future: New factors, new models .......................................................................104

Information technology (IT) talent management: Operating the multisourced, multicultural, multigenerational IT organization .................................................................................................................................113

Appendix

Author biographies .......................................................................................................................................................118

Contacts .......................................................................................................................................................................127

IT strategy


The CIO: Liaison and guide

Now, more than ever before, IT is viewed as one of the more effective enablers to increase competitiveness, solve complex problems, and deliver new revenue streams in business. As a result, the CIO’s role has evolved; those who work closely with the business leaders are seen as liaisons and guides who translate technology advancements into business opportunities and capabilities that can drive the organization and help it meet its goals.

Fulfilling the CIO’s new role requires wearing many hats, which mirror the basic corporate functions.

• Strategist: Imagines and plans how IT can help its clients in their efforts to be more effective.

• Technologist: Understands technology advancements and their applicability to the business.

• Innovator: Combines technologies and business opportunities to create practical and compelling offerings.

• Marketer: Communicates and promotes IT’s capabilities and offerings to its clients.

• Designer: Defines and implements service delivery processes that can allow IT to keep its commitments while providing repeatable service experiences.

• Comptroller: Obtains and manages the funding needed to provide new and existing services to IT clients.

• Recruiter: Enlists, motivates, and mobilizes existing and new staff.

• Revolutionary: Changes the way the business drives value.

These are the same roles typically required by any entrepreneur or executive who wants to build market share by introducing a continuous stream of valuable and affordable solutions. Essentially, the CIO must act as the CEO of the IT “business.”

As a business liaison, the new CIO should consider the capabilities he or she can deliver to business units so they can reach their goals. To do so, the CIO must have an understanding of the business as well as the organization’s culture. Modern CIOs — like modern CEOs — should speak the language of their customers. They must go beyond taking orders to proactively step out of the back-office to foster innovation and drive the standards that can accelerate deployment.

The CIO must balance stakeholders’ interests in pursuing “cutting edge” technologies with the realities, rewards, and risks associated with jumping on a technology bandwagon. For example, cloud computing may present a major opportunity for many enterprises, but such a big technical shift must be justified by a specific business purpose. Further, the growing sophistication of edge technology capabilities requires the CIO to manage the expectations and needs of individual business owners by balancing the need for agile business technology with sound corporate IT governance. Even more pressing is the consumerization of IT and the growing demands of enterprise constituencies to have their work technology be at least as good as their technology at home. The CIO should not be the naysayer of new things, but instead should become the educated soothsayer of what is possible.

Authored by: Brett Loubert, Edward Reddick, Kristi Lamar, and Eugene Lukac


Delivering the art of the possible

Many CIOs are taking a new proactive posture by educating corporate leaders so they understand how IT’s capabilities and technology can make the business more effective. Sharing information on industry advancements, market services capabilities, and cyber security with C-level executives is beneficial for CIOs in the short term and for the business in the longer term.

The CIO is being asked to shepherd the art of the possible and broker new technologies and the aggregation, or “mash-up,” of existing capabilities into new services. The CIO’s organization will need to embrace change management skills to effectively adapt to new technologies, develop services that are built to run, and demonstrate operational excellence.

Even more compelling are the opportunities to provide meaningful data and insights to the business through analytics, mobility, social media, and cloud computing. Social media, once considered something that kills productivity and bandwidth, is now a powerful new corporate tool that goes beyond sales and marketing to also enhance customer care, human resources, and even research and development. Couple these with the potential offered by cloud computing, and organizations have the opportunity to test the waters of what is possible without taking a bath.

Many organizations have long thought that the development of new products and services lived in Marketing and R&D. But through technology, IT can create disruption, in a good way, to impart new business models and opportunities to the entire enterprise ecosystem of partners, clients, and alliances. New services = new revenue.

Still, the CIO continues to have the underappreciated role of keeping the lights on by managing enterprise information and technology. Newfound capabilities and the lure of the possible could generate the respect that many CIOs have deserved. However, the responsibility to provide reliable basic technical services remains constant, even when IT is rolling out a new release or a new capability.

The CIO must understand the organization’s ability and readiness for change, as well as manage that change by setting expectations, mitigating risks, and allowing constituencies to do their jobs better, cheaper, and faster. Most CIOs know that change isn’t about boiling the ocean, but about making the case and executing incremental change. In reality, CIOs have been liaisons and guides, but now they have the potential to gain a greater realm of influence and respect.


SAIL-STREAM: A dynamic approach to IT strategy formulation

Authored by: Don Frazier

Despite its negative consequences, the recent economic downturn has forced virtually everyone to question business as usual. Even those once afraid to rock the boat are now in such turbulent waters that everyone gets a “free pass” to challenge the status quo. The next question is where and how to start.

What to do next is not limited by the available options (enterprise architecture, cloud computing, project portfolio management, and so on). The real problems are where to start, in what sequence to execute, and how to justify the effort to stakeholders in business terms.

SAIL-STREAM, a new approach to IT strategy formulation, is designed to provide clarity on these problems. It can enable your IT leadership team to define IT success in terms of changes to business behavior, identify the interdependencies, locate the leverage points, model the core business constraints, pre-test the effectiveness of potential IT projects, and recommend the most effective sequence of initiatives. Once implemented, the method can support a repeatable process of continuous improvement to help maintain the ongoing alignment between the business and IT.

The results have been very positive, as summarized by the following CIO quote from a recent engagement:

“I’ve worked with many of your competitors and this is the first time I’ve seen such a seamless transition from high level business strategy to execution.”

Jason Molfetas, CIO, Recall

A new approach to strategy formulation

SAIL-STREAM takes a different approach to strategy. Current approaches such as SWOT (strengths, weaknesses, opportunities, and threats) analysis and value chain analysis provide a static snapshot of your performance. While informative, they may not account for the dynamic interactions that actually drive performance over time, and they may not lead directly to strategy execution, monitoring, and improvement.


The new method is designed to overcome these weaknesses by answering the following strategy questions [1]:

1. What defines success for our employees, customers, and stakeholders?

2. Why has our performance behaved the way it has in the past?

3. Where will our performance likely continue to trend if nothing changes?

4. How can we positively alter future performance?

5. When should changes be made?

The new method is called SAIL-STREAM. It is based on transparently capturing the working assumptions at each level of management, and it is designed to answer the basic question, “How do things really work around here?” The overall approach is shown below. Each step in the process results in a deliverable that can help set the overall business-driven IT agenda.

Figure 1: SAIL-STREAM Process Overview


1. What defines success for our employees, customers, and stakeholders?

Each member of your management team has a different set of experiences based on their past positions within your organization or at previous firms. It is important to capture these differing perspectives in order to build a broad list of viewpoints, issues, and metrics.

The first phase, Defining Success, builds a picture of how your issues and definitions of success are actually interrelated. Of course, you can’t boil the ocean, so you need a way of finding the metrics that really matter. To achieve this, we perform a detailed analysis of how each metric influences, or depends on, the other metrics [2]. Below is an example from a recent client engagement:

2. “Managing from Clarity: Identifying, Aligning and Leveraging Strategic Resources,” James Ritchie-Dunham and Hal Rabbino [ISBN 0-471-49731-2].

Figure 2: Defining Success Measurement Analysis

Quadrant description

1. High influence/low dependency: variables that, if changed, have a broad influence on the behavior of the entire system, without much dependency on other variables.

2. High influence/high dependency: the Key Variables of Interest (KVIs) at the heart of the system; they need significant planning to change, given the high number of other variables involved.

3. Low influence/high dependency: dashboard variables that mainly show the outcomes of changes to the other variables.

4. Low influence/low dependency: one-off variables that can be changed without much planning but will not have a major impact on overall system performance.

In this example, the success measure that had the greatest number of dependencies and influenced the highest number of other metrics was “Deal Complexity.” This strategic variable becomes the Key Variable of Interest (KVI) and provides a jumping-off point for building a working model of your business. With this point of reference, we can begin answering the next strategic question.
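The influence/dependency analysis and quadrant placement can be sketched in code. The following is an illustrative sketch only: the metric names, the influence links, and the degree threshold are invented for the example, with influence approximated as the number of outgoing links a metric has and dependency as the number of incoming links.

```python
# Illustrative sketch of the influence/dependency quadrant analysis.
# Metric names and links are hypothetical, not the engagement's data.

# Directed links: ("A", "B") means "changing metric A moves metric B".
links = [
    ("Deal Complexity", "Onboarding Time"),
    ("Deal Complexity", "Support Calls"),
    ("Deal Complexity", "Sales Cycle Length"),
    ("Sales Cycle Length", "Deal Complexity"),
    ("Onboarding Time", "Deal Complexity"),
    ("Onboarding Time", "Customer Churn"),
    ("Support Calls", "Customer Churn"),
    ("Customer Churn", "Revenue"),
    ("Sales Cycle Length", "Revenue"),
]

metrics = {m for pair in links for m in pair}
influence = {m: sum(1 for src, _ in links if src == m) for m in metrics}   # out-links
dependency = {m: sum(1 for _, dst in links if dst == m) for m in metrics}  # in-links

def quadrant(metric, high=2):
    """Classify a metric into one of the four quadrants described above."""
    high_inf = influence[metric] >= high
    high_dep = dependency[metric] >= high
    if high_inf and not high_dep:
        return "1: high influence / low dependency (broad lever)"
    if high_inf and high_dep:
        return "2: high influence / high dependency (KVI candidate)"
    if not high_inf and high_dep:
        return "3: low influence / high dependency (dashboard)"
    return "4: low influence / low dependency (one-off)"

for m in sorted(metrics):
    print(f"{m:18s} out={influence[m]} in={dependency[m]}  quadrant {quadrant(m)}")
```

With these made-up links, “Deal Complexity” lands in quadrant 2, mirroring how it emerged as the KVI in the engagement described above.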

2. Why has your performance behaved the way it has in the past?

The second phase of the method, Mapping Success, provides a dynamic analysis of your core business drivers. Together, these business drivers constitute the “core architecture” of your organization. If you step back from the specific details of your enterprise, you can summarize your business as a set of core resource levels that directly influence the ongoing delivery of products and services, and cash flow. It is at this level of aggregation that strategic decisions can be made to drive performance improvements.

To answer the “why” question, a model (a living reference) should be built to determine how your business actually works. Keeping with our example of “Deal Complexity,” we broke this variable down into more measurable components such as on-boarding time, resources involved, and major process chains. We then expanded the working model to include demand signals. This resulted in the following simulation model:

Figure 3: High Level Business Model



Some of the data needed to populate the model was already available; other data (rates) had to be reverse engineered from existing performance numbers, while still other information had to be estimated based on the working assumptions of experienced employees. The message here is that just because hard data is not available does not mean it has no influence on performance. So estimate for now and start tracking the actual information moving forward.

The key here is to keep the model at a high level by including only the major resources that interact over time to drive business performance. The goal is to capture the “gist” of the business drivers at a high level.

Major resources include tangible things like customers, products, services, employees, and cash flow, but also include intangible resources like capabilities, investor support, and employee morale. Dependencies between these core resources can reinforce or constrain overall performance. Management decisions that matter should influence these resource levels to alter business performance. Taken together these core resources and dependencies can drive and limit the “performance envelope” of the enterprise.

With this working reference model, we were able to begin answering the remaining questions.

3. Where will your performance likely continue to trend if nothing changes?

The 60-month simulation below shows the effects of an 8% growth in volume on the needed resources if current productivity stays the same. The sales force would need to increase by 25%, help desk resources by 26%, and IT resources by 68%.

Figure 4: High Level Business Model Point of Leverage



Beyond the number of employees needed, the model also surfaced that for every 100 new customers added, 22 existing customers were being lost each month. These indicators led to a very strong business case for change.
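As a hypothetical illustration of this kind of trend projection: the sketch below models only the customer stock (not the FTE resources), with invented starting values, and assumes the 8% volume growth compounds monthly.

```python
# Toy stock-and-flow projection in the spirit of the 60-month simulation above.
# Starting values are invented; only the customer stock is modeled.

def project_customers(months=60, customers=1000.0, adds_per_month=100.0,
                      monthly_growth=0.08 / 12, lost_per_100_new=22.0):
    """Each month, additions compound with demand growth, and 22 existing
    customers are lost for every 100 new customers added."""
    for _ in range(months):
        lost = adds_per_month * lost_per_100_new / 100.0
        customers += adds_per_month - lost
        adds_per_month *= 1.0 + monthly_growth
    return customers

with_growth = project_customers()
flat = project_customers(monthly_growth=0.0)
print(f"after 60 months, 8% volume growth: {with_growth:,.0f} customers")
print(f"after 60 months, flat demand:      {flat:,.0f} customers")
```

Even this toy version shows the value of simulating the trend: churn tied to new sales quietly consumes more than a fifth of every month’s gross additions.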

4. How can we positively alter future performance?

Having this shared understanding of the strategic architecture of the organization allowed us to directly tie strategic initiatives to improved business performance, as shown in the following diagram.

Figure 5: Linkage to Potential Projects


5. When should changes be made?

With a business model that includes both business and IT initiatives, we were able to quantify the impact of various interventions and sequences. Performing these steps can yield a tailored roadmap designed to help accomplish your desired objectives.

Summary

SAIL-STREAM is a resource system approach that paints a shared picture of your core architecture (a working model of how your core performance drivers actually behave over time). This model can provide a time-based appreciation of your current situation and future prospects. It is designed to enable a transparent decision process about where IT investments in skills, data, systems, and processes can directly influence the speed, cost, quality, and flexibility of your future performance.

For more information, please contact Don Frazier at [email protected]. Additional information about the mystrategy™ tool can be found at http://www.strategydynamics.com/mystrategy/default.asp.
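To make the “when” question concrete, a working model can be re-run under different candidate interventions before any roadmap sequence is committed to. Everything in this sketch is invented for illustration; `run` is a deliberately minimal stand-in for a full simulation model.

```python
# Invented illustration: pre-testing candidate interventions in a toy
# customer model before sequencing them on a roadmap. No client data.

def run(months=60, customers=1000.0, adds=100.0, lost_per_100_new=22.0):
    """Minimal monthly stock-and-flow: fixed additions, churn tied to additions."""
    for _ in range(months):
        customers += adds - adds * lost_per_100_new / 100.0
    return customers

scenarios = {
    "baseline": run(),
    "onboarding fix (churn 22 -> 10 per 100 adds)": run(lost_per_100_new=10.0),
    "demand lift (adds 100 -> 120 per month)": run(adds=120.0),
}

for name, end_customers in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name:45s} {end_customers:9,.0f} customers after 60 months")
```

Re-running combinations and orderings of such interventions, and comparing the simulated outcomes, is what yields the sequenced roadmap described above.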

References

1. “Competitive Strategy Dynamics,” Kim Warren [ISBN 0-471-89949-6].

Additional resources

• “Strategic Management Resources,” Kim Warren [ISBN 978-0-470-06067-4].

• “Managing from Clarity: Identifying, Aligning and Leveraging Strategic Resources,” James Ritchie-Dunham and Hal Rabbino [ISBN 0-471-49731-2].


When many organizations consider increasing the business value of the information technology (IT) function, they focus on aligning IT priorities and spending with the overall business priorities and strategic imperatives. However, the IT operating model can be another critical element in executing IT strategy. The operating model defines the organizational structure, capabilities, governance, and performance metrics necessary to help operationalize strategy and add value to the business.

Many companies do not meet their strategic goals because the current capabilities of the company were not taken into account when their strategy was formulated. The strategy should be informed by an understanding of current capabilities, and designed with a realistic expectation of the organization’s capabilities. Programs can then be designed to increase IT’s capabilities where needed to execute the strategy.

An IT operating model assessment should be included in IT strategy initiatives. The assessment process allows the IT strategy and execution road map to consider IT capabilities realistically and addresses areas that may need to be improved in order to execute the strategy. This assessment consists of three major phases that align with the phases of the IT strategy itself: a current-state assessment, definition of a future-state target, and development of a road map to achieve the future state.

Assessing the current-state IT operating modelWhen assessing the current-state IT operating model, consider examining the following elements that affect the ability of IT to execute strategy:

•Current IT organization.

•IT governance.

•Capability maturity of the IT organization.

•Performance metrics that measure and manage the IT performance.

Performing this assessment in conjunction with IT strategy development is important because the IT strategy will identify the future IT capabilities that will be required to support the business objectives and strategic imperative. This helps inform the target-state IT operating model. At the same time, the IT strategy road map should consider the magnitude of the gap between current and required IT capabilities so that the plan can be realistic and allows the necessary time to mature the IT operating model.

Developing the target IT operating model

Defining the future IT operating model is done at the same time as the definition of the target-state architecture. Changes to the IT organization are driven by a number of factors, including capabilities, customer needs, applications, degree of control, size, and geography. Traditional models tend to focus on the issue of centralized versus decentralized; however, that is just one dimension of IT organization design. A three-dimensional approach that can point an organization toward an effective IT organization design includes:

Operating models: The hidden jewels of IT strategy

Authored by: Anuj Sood and Bill Sheleg

1. Organization dimension: centralized versus decentralized. Focuses on which function owns IT resources. Do the people reside in the IT function or in the business?

2. Resource dimension: in-house versus outsourced. Focuses on IT staffing and sourcing. Are resources owned by the business or by the outsourcing provider?

3. Business management dimension: integrated versus independent. Focuses on how much direct control a business should have over IT.

Some leading practices to consider in the design process include: separating IT demand and supply capabilities; centralizing infrastructure and common applications; deploying relationship managers to manage service levels; establishing new roles and capabilities, such as program management and enterprise architecture; and outsourcing noncore capabilities, such as maintenance and IT support.

As a result, the IT operating model may resemble, though it need not be restricted to, one of the models illustrated in Figure 1.

IT governance helps maintain the alignment of the IT function with the business. Establishing an effective IT governance model involves defining what decisions must be made, who should be involved in those decisions, and how the decisions should be made. Governance decision domains identify which strategic decisions have a significant bearing on business-IT alignment; governance styles define who makes decisions and who has input into the decision-making process; and governance mechanisms describe how and when the decisions are made.

Though many variants exist, the most widely used IT governance styles, in practice and research, can be categorized into five styles:

Figure 1. Four IT organization models

Hybrid IT organization
Description: Some IT resources are owned by the IT organization, while other IT resources are owned by the business units.
Strengths:
• Drives standardization of IT processes
• Enhanced responsiveness and speed in IT service delivery
• Appropriate balance between common delivery of IT services and particular business needs
Weaknesses:
• Need to prioritize business unit IT service requests
• Potential overlap of responsibilities and duplication of resources
• Reduced economies of scale

Managed/outsourced IT organization
Description: Some resources are owned by the IT organization, while others are managed by IT but owned by external service providers.
Strengths:
• Increases economies of scale
• Focus on core capabilities
• Increases flexibility and the ability to manage changes in demand and supply of IT services
Weaknesses:
• Reduced business knowledge
• Increased need to manage multiple external service providers
• Increased cultural challenges with global resources

Centralized IT organization
Description: All IT resources are owned by the IT organization; can be organized by services, applications, or processes.
Strengths:
• Drives standardization of IT processes
• Leverages economies of scale
• Supports the delivery of one IT strategy and enterprise architecture
Weaknesses:
• Lack of dedicated business unit IT resources
• Need to prioritize business unit IT service requests
• Increased challenge in business partnering

Decentralized IT organization
Description: All IT resources are owned by the business units.
Strengths:
• Improved access to dedicated resources
• Improved development of business knowledge
• Enhanced responsiveness and speed in IT service delivery
• Supports particular business unit IT needs
Weaknesses:
• Multiple IT strategies, standards, and processes
• Increased cost for common IT services
• Lack of visibility into total IT spend


1. Corporate: Critical decisions are made by the chief information officer in conjunction with C-level executives.

2. IT management: IT management typically exists as a separate unit, with either corporate-level IT management, business-unit-level IT management, or both.

3. Individual business unit leads: Business unit IT leaders usually control local decision making.

4. Corporate and business leads: A combination of corporate and business leads make IT decisions from a purely business perspective.

5. IT management with corporate or business leads: Most decisions are made by an IT leadership committee, which consists of business unit leaders, C-level executives, and IT executive leadership.

Another important aspect of the future IT operating model is the capability maturity of the IT function. There are a number of capabilities to consider, including strategic planning, program and project management, enterprise architecture, and operations, to mention a few. The future-state capability assessment should consider both the current maturity level and the maturity level required to execute the strategy.

A broad capability maturity model (Figure 2) considers five critical disciplines; the first (strategic functions) defines the role of the IT function within the organization, while the other four comprise mutually exclusive sets of IT processes and procedures.

After assessing the current-state capabilities, the results should be compared with the IT strategy and business priorities to prioritize the capabilities that need to be developed. This prioritization feeds into the process of defining the target IT operating model and the broader IT road map.
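The gap analysis described above can be sketched in code. This is a hypothetical illustration, not a Deloitte tool: the capability names, maturity scale, and assessed levels are all assumptions chosen for the example.

```python
# Hypothetical sketch: prioritizing IT capability gaps against the maturity
# levels the strategy requires. Names and levels are illustrative only.

MATURITY = {"ad_hoc": 1, "defined": 2, "controlled": 3, "leading": 4}

# Assumed assessment results: current vs. required maturity per capability.
current = {"strategic_planning": "ad_hoc", "program_management": "defined",
           "enterprise_architecture": "defined", "operations": "controlled"}
required = {"strategic_planning": "controlled", "program_management": "controlled",
            "enterprise_architecture": "leading", "operations": "controlled"}

def prioritize_gaps(current, required):
    """Return (gap, capability) pairs, largest maturity gap first."""
    gaps = {cap: MATURITY[required[cap]] - MATURITY[current[cap]] for cap in required}
    return sorted(((g, c) for c, g in gaps.items() if g > 0), reverse=True)

for gap, capability in prioritize_gaps(current, required):
    print(f"{capability}: close a {gap}-level maturity gap")
```

In practice the prioritized list would then be weighed against the strategy's timeline, which is exactly why the road map should allow time to mature the operating model.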

Also, appropriate performance measures should be used to assess IT’s performance in executing strategy, so that adjustments can be made, if necessary.

Incorporating the necessary actions into the IT road map

An IT strategy road map should include organizational, governance, capability maturity, and performance metric initiatives that align with IT strategy execution. Those who take the required actions, in the required sequence, at the required time, usually discover that the IT operating model really can be the "hidden jewel" of an effective IT strategy.

Figure 2. IT capability maturity model (each capability is assessed as Ad hoc, Defined, Controlled, or Leading)

• Strategic functions: strategic planning and definition; IT governance and budgeting; business relationship management; business case development; performance management

• Architecture and asset management: enterprise architecture management; data management and governance; security and risk management

• Project and program management: program management; demand management

• Solution delivery/infrastructure: application development; quality assurance and testing

• Application integration and support desk: application and integration operations; infrastructure operations


Matching IT demand and supply: A practical approach to aligning IT and business objectives

Business services provided and supported by information technology (IT) organizations are being subjected to increasing scrutiny. Between the dot-com bust of the early 2000s and the global financial crisis of the late 2000s, executive leadership began requiring that IT service investments be justified by business cases, including total cost of ownership and return on investment projections. While these financial metrics require IT leaders to develop financial and business acumen, IT leaders must also develop their ability to communicate and market the value of IT services, as well as the IT organization itself.

In addition to the threat of replacement by a lower-cost outsourcing or offshore service provider, IT organizations are also threatened by mass-market services. While outright replacement by one of these providers is less likely for medium-to-large businesses, their perceived ease of delivery and low cost raise questions about the cost and quality of internal IT services. For example, compare low-cost or free consumer technology with similar business or enterprise IT services:

1. Email communications. Many consumer email services provide near-limitless storage and accessibility options. Many corporate IT email offerings limit mailbox size, and mobile access, if provided at all, is often restricted.

2. Disaster recovery solutions. Cloud-based consumer backup solutions offer remote access and unlimited storage for varying monthly rates. For individual corporate employees, the disaster recovery options often consist of CD or USB flash drive copies that the individuals may have created — thanks to their own due diligence.

3. Collaboration. New cloud-based collaboration sites that allow sharing of documents or photos are readily available to individuals. In many companies, the search for the latest copy of a document begins and ends in an email chain.

In the three examples above, the service offerings for the consumer market frequently put similar corporate offerings to shame.

Prep the objective

What does the business need, and how can IT support those needs? From the outset, IT services should be defined with input from business stakeholders. Service definition has typically been the domain of IT personnel, usually application and infrastructure architects with highly technical backgrounds. When business and IT leaders do discuss IT services together, they often seem to be speaking different languages.

Organizations with productive business and IT relationships:

1. Establish common terminology and references.

2. Gather the stakeholders and sponsors.

3. Define specific objectives.

IT leaders should become familiar with the strategic goals of the business — growth projections, new opportunities, strengths, and weaknesses. IT leaders should also gather feedback about how business leaders perceive the quality of IT’s service delivery.

On the other hand, business leaders should explain their needs with sufficient detail for IT to understand their requests. Business leaders should also develop a basic understanding of IT's delivery timelines and methodologies, including the plan, analysis, design, build, test, and deploy phases.

Authored by: Mike Habeck and Matt Leathers

Establishing common terminology and references

Frameworks provide decision-making tools and materials designed to encourage IT leadership to adopt a service mindset. Figure 2.1 provides a basic communication framework for business and IT leaders.

Figure 2.1 Service definition framework for a retail bank

• Business services (first layer): Begin the discussion with the business services that align to the products or services offered by the business. Determine the correct level of detail during initial and subsequent conversations. Smaller services are typically aggregated into larger services to gauge overall performance. For example, consumer banking service performance may be made up of smaller performance scores for mobile banking, online banking, and branch banking.

• IT services (second layer): Define the basic IT processes and components required to deliver business services. Avoid being so technical that business leaders cannot understand the value of the service, and so abstract that IT cannot focus on the applications and infrastructure behind it. Individuals responsible for the relationship between business and IT must be effectively "bilingual." Service managers should become relationship builders who are responsible for understanding and communicating the dependencies at the applicable level of detail for each audience.

• IT infrastructure (third layer): Pull together cross-infrastructure teams to document and understand the IT dependencies required to support the first-layer business services and second-layer IT services. Organizations typically use formal enterprise architecture boards to review deployed and planned IT infrastructure and to coordinate disparate functions within IT and engineering organizations. IT relationship managers can provide value by emphasizing the broader view of infrastructure as an integral part of the business and IT service layers. It may be necessary to involve business stakeholders or resources, in addition to the relationship manager, in defining these relationships to maintain focus on the broader objective.
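The three-layer mapping above lends itself to a simple data structure. The sketch below is illustrative only; the service and component names echo the retail bank example in Figure 2.1, and the traversal shows how a relationship manager might trace a business service down to the infrastructure it depends on.

```python
# Illustrative three-layer service map for a retail bank:
# business service -> IT services -> infrastructure components.

service_map = {
    "consumer banking": {                                           # first layer
        "online banking": ["applications", "servers", "network"],   # second -> third layer
        "mobile banking": ["applications", "mobile devices", "network"],
    },
    "loan origination": {
        "loan processing": ["applications", "mainframe", "network"],
    },
}

def infrastructure_for(business_service):
    """All infrastructure components a business service ultimately depends on."""
    deps = set()
    for components in service_map[business_service].values():
        deps.update(components)
    return sorted(deps)

print(infrastructure_for("consumer banking"))
```

A map like this makes impact analysis concrete: a planned change to "network" visibly touches every business service in the structure.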

Lather, rinse, repeat. As business and IT stakeholders define and understand dependencies, they will invariably discuss their expectations of each other, often informally. While this dialogue is ultimately constructive, it can be chaotic and damaging without the proper structure and context.

Lather

Early in the process, IT leaders should communicate three crucial tenets to set the tone:

1. Business and IT stakeholders have a mutual interest in reaching company goals, such as shareholder value, environmental stewardship, and overseas expansion.

2. Commitment to improve is more important than past performance challenges.

3. Expectations and actions will likely change in both the business and IT organizations. Long-running IT service delivery problems can rarely be attributed to one specific department or unit. IT may ask their business counterparts to adopt changes to organization, process, or technology to achieve their mutual goals.

In challenging situations, these tenets may need to be repeated as a mantra.

With well-formed intentions, IT stakeholders may receive new insight into current and planned business services. Business stakeholders may also develop a better understanding of the complexities and challenges associated with delivering IT services.

Rinse

Once a common framework and constructive tone are established, the work of aligning demand and expectations with delivery capabilities begins.

[Figure 2.1 shows the three layers for a retail bank: business services (loan origination, consumer banking, commercial banking); IT services (email, loan processing, online checking, electronic payments); and IT infrastructure (network, applications, servers, mainframe, integrations, support, operations, mobile devices).]


Several technology solutions are available to support expectations and demand management, including project and portfolio management, service catalog, and request fulfillment solutions. While these solutions should be considered to enhance demand and service management capabilities, an easy way for leadership to gain a resource capacity overview is to review time reports and walk the floor early in the morning and late at night.

Typically, services and delivery commitments are codified in service level agreements (SLAs) with external customers or operating level agreements (OLAs) between departments or functions. If these agreements are already in place, they should be updated to define the revised services and delivery expectations. For SLAs and OLAs on the higher end of the process maturity scale, financial costs associated with these services are established. Estimating demand and associating cost with volume further reinforces mutual expectations.

Repeat (not as needed)

Benefits associated with SLAs and OLAs may go unrealized because the agreements are rarely, if ever, revisited after they are finalized. The documents sit on a shelf, waiting for the next effort to align business and IT objectives. Unfortunately, the opportunity to integrate the expectations defined in these agreements is often forgotten.

A regular schedule for compliance review and discussion should be established within these agreements. Compliance and performance reports, in the form of balanced or executive scorecards, should be reviewed in person, rather than by email.
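The scorecard review described above can be made concrete with a minimal sketch. The metric names, targets, and sampled actuals below are hypothetical; a real SLA would define many more measures and measurement windows.

```python
# Hypothetical sketch: turning monthly SLA measurements into a simple
# compliance scorecard for the in-person review. Targets are assumed.

sla_targets = {"incident_resolution_hours": 8, "availability_pct": 99.5}
monthly_actuals = {"incident_resolution_hours": 10.5, "availability_pct": 99.7}

def scorecard(targets, actuals):
    rows = {}
    for metric, target in targets.items():
        actual = actuals[metric]
        # Convention assumed here: "_pct" metrics are higher-is-better,
        # everything else (e.g., resolution time) is lower-is-better.
        met = actual >= target if metric.endswith("_pct") else actual <= target
        rows[metric] = {"target": target, "actual": actual, "met": met}
    return rows

for metric, row in scorecard(sla_targets, monthly_actuals).items():
    status = "met" if row["met"] else "MISSED"
    print(f"{metric}: target {row['target']}, actual {row['actual']} -> {status}")
```

The output is exactly the kind of artifact worth discussing face to face rather than emailing: the missed resolution target invites a conversation, not just a report.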

IT relationship managers should talk face to face and ask these questions:

• How do these performance metrics line up with your experience?

• What metrics would you suggest in addition to those reported?

• What metrics are not valuable?

• What improvements in IT service delivery have you noticed?

In some respects, marketing skills may be required to conduct these conversations and understand business expectations. Interpersonal skills are desirable within an IT organization: if the internal IT organization does not talk with its business counterparts about their needs, third-party service providers and outsourcing vendor representatives may outmaneuver it.

As progress is made, continue to accelerate efforts to communicate the ongoing improvements and service offerings that align to business needs. Instead of viewing these efforts as a temporary improvement initiative, IT leaders should consider these practices part of the ongoing lifeblood of their organization.

As a business ideally grows and thrives, demands for IT services will change. Sometimes the technology transforms the business entirely. Hot topics such as cloud computing and mobility should give IT leaders reason to celebrate.

Resources:

• HBR case study, "Who Has the Decision (D)?"

• "Understanding the Marketing and IT Relationship," Forrester, March 31, 2009.

• "Synchronizing Strategy and the IT Portfolio," Forrester, July 25, 2008.

• "Service Portfolio Management Links IT Capabilities to Business Value," Symons, November 3, 2008.

• "IT Demand Management and the PMO," Cameron, October 10, 2008.

• ITIL V3 Service Strategy, "Service Economics."

• "The AI Revolution Is On," Wired, http://www.wired.com/magazine/2010/12/ff_ai_essay_airevolution/

• The Tipping Point, Malcolm Gladwell.

Enterprise architecture


You should consider ERP consolidation if you find yourself answering “yes” to most of these questions.

• Is your SG&A growing at or above your revenue growth rate?

• Are you filing your 10-K two days before the deadline?

• Are your safety stock levels higher than industry averages?

• Do you hear that division X bought a part for $10 while division Y buys the same part for $2?

• Does it take a week to get customer and product profitability reports from your organization?

• Is your current manufacturing capacity misaligned with your growth objectives?

• Do you feel there are redundancies across divisions?

• Do you have high freight costs? Limited visibility into your supply chain networks?

• Do your mid-month sales numbers look very different from your month-end sales?

• Are you concerned about integrating acquisitions?

The challenges frequently created by fragmented systems

Many companies are living with fragmented systems and technologies that were acquired piecemeal over time, which can create the following problems:

• Disparate master data: Systems implemented for a specific division are often tailored to its needs or geography, making the underlying master data structures inconsistent. This can lead to complicated integration among systems, manual workarounds for synchronization, and frequent duplication. When data does not flow freely among divisions or geographies, transferring the correct information at the right time is difficult.

• Manual middleware: During technology's early years, systems integration was by necessity a manual process. When data is transferred from one system to another manually, the integration is error-prone and time consuming.

• Shadow IT systems: Without effective integration, the data transferred between systems ends up stored in another form, typically spreadsheets, small databases, or other unaccounted-for stores. This can lead to multiple, inconsistent data sources, which complicates compliance with regulatory and security requirements.

• Obsolete technology: Many of the systems running in today's organizations are obsolete and do not support current business needs. They were often put in place 10 to 15 years ago, when the organization was smaller and its needs were different.
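The disparate master data problem can be illustrated with a toy matching sketch: two divisions record the same customer differently, and only after normalization do the records match. The records and the normalization rule below are hypothetical; real master data management uses far more sophisticated matching.

```python
# Illustrative sketch: detecting inconsistent master data across divisions
# by normalizing customer names before matching. Records are hypothetical.

div_a = [{"id": "A-1", "name": "Acme Corp."}]
div_b = [{"id": "B-7", "name": "ACME CORP"}]

def normalize(name):
    """Lowercase and strip punctuation so cosmetic differences don't matter."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()

def probable_duplicates(records_a, records_b):
    """Pairs of record IDs whose normalized names collide across divisions."""
    index = {normalize(r["name"]): r["id"] for r in records_a}
    return [(index[normalize(r["name"])], r["id"])
            for r in records_b if normalize(r["name"]) in index]

print(probable_duplicates(div_a, div_b))
```

Every such collision is a candidate for the manual workarounds and duplication the article describes, which is why consolidation efforts usually begin with a master data inventory.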

Ways to address the challenges

Many companies face the challenge of consolidating and restructuring their old patchwork of systems. Because organizations differ in how they conduct business, it may be difficult to determine which ERP consolidation model will be most effective. Below are several models to consider.

ERP consolidation: Fragmented systems to effective operations

Authored by: Johannes Raedeker and Satish Maktal

Integrate information model

This model uses data translation and warehouse tools to consolidate operational information across different environments. Data from the various ERP systems are extracted, transformed, and loaded into a data warehouse. Tools such as Hyperion or Cognos then allow data consolidation for reporting and analysis. The individual ERPs continue to function as is, with little process and data alignment, so variations and inefficiencies usually remain when this model is followed.
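The extract-transform-load flow behind this model can be sketched in miniature. Everything below is an illustrative assumption: the ERP field names, the target schema, and the fixed exchange rate are invented for the example and are not features of any particular product.

```python
# Minimal sketch of the "integrate information" model: extract records from
# two ERP instances with different schemas, transform them to a common
# schema, and load them into one reporting store.

erp_a = [{"cust": "ACME", "rev_usd": 1200.0}]          # ERP A schema (assumed)
erp_b = [{"customer_name": "ACME", "revenue_eur": 800.0}]  # ERP B schema (assumed)

EUR_TO_USD = 1.1  # fixed rate, purely for illustration

def extract_transform():
    """Map both source schemas onto one warehouse schema."""
    warehouse = []
    for row in erp_a:  # ERP A only needs field renaming
        warehouse.append({"customer": row["cust"], "revenue_usd": row["rev_usd"]})
    for row in erp_b:  # ERP B needs renaming plus currency conversion
        warehouse.append({"customer": row["customer_name"],
                          "revenue_usd": round(row["revenue_eur"] * EUR_TO_USD, 2)})
    return warehouse

def consolidated_revenue(warehouse):
    """Aggregate revenue per customer across all source ERPs."""
    totals = {}
    for row in warehouse:
        totals[row["customer"]] = totals.get(row["customer"], 0) + row["revenue_usd"]
    return totals

print(consolidated_revenue(extract_transform()))
```

Note what the sketch does not do: it never changes the source ERPs, which is exactly why process and data variations persist under this model.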

Integrate operations model

Systems in different parts of the organization are linked through a middleware layer that allows information to flow among entities and business units (BUs). This model is often used following mergers or acquisitions.

Standardize operations

By reducing the number of regional or operational-unit ERPs, organizations may come closer to standardizing their operations. This model requires a larger effort from the business owners to adopt the new business processes and change their operations to arrive at a common model. It often incorporates some centralized and aligned processes or operations, such as shared financial services.

Centralize operations

A single company-wide ERP is usually the preferred state for a homogeneous organization. Modern ERP systems have advanced functionality to address regional and operational variations while still running many processes in the same instance. (A single instance means a single repository of data that allows consistent and quick data retrieval.) Business processes flow more efficiently because business rules are standardized across the organization, and IT operational costs are often much lower due to reduced support staff and complexity. This model also invites more opportunities for shared services and eases the transition to outsourced services.

Choosing an effective consolidation model

ERP consolidation is not right for every company. A number of factors determine the degree of consolidation a company should target, including organizational and governance structure, diversity of products and services, geographic distribution, and overall size. This section describes the dimensions an organization should evaluate when choosing the right model and approach. While it is important to evaluate where the organization currently stands on the scales below, it is equally important to understand where it wants to be.

• Organizational and governance structure: If an enterprise operates as a holding company that only wants to aggregate financials and leave individual BUs autonomous, the required degree of instance consolidation will likely be low. However, if the company wants to achieve synergies by consolidating purchasing and/or improving operational efficiency through shared services functions across BUs, it may require a higher degree of ERP consolidation. The factors listed below should be considered.


• Low BU autonomy vs. high BU autonomy

• Low BU diversity vs. high BU diversity

• Concentrated in a few locations vs. high degree of geographic dispersion

• Global planning, local execution vs. local planning, local execution

• Weak corporate culture vs. strong corporate culture

• Manage by process vs. manage by numbers

• Limited movement of people/resources between BUs/geographies vs. high movement of people/resources between BUs/geographies

• IT, administration, HR, and finance share information/resources vs. IT, administration, HR, and finance do not share information/resources

• Diversity of products/services and customers: Companies with integrated or complementary product offerings across their BUs are likely to share the same customer group or segment. These customers are likely to expect consistency across the company, which requires integrated processes and systems with a greater propensity toward a single instance. Companies with vastly different products often do not share the same customer segment, so they may be able to operate different processes and systems effectively without a high degree of integration. The appropriate degree of integration varies with the level of company-wide consistency that customers expect.

• Local customer base vs. global customer base

• Diverse product mix vs. homogenous products

• Products from different BUs sold to different end customers vs. products from different BUs sold to the same end customer

• Supply chain: A single ERP instance allows full supply chain visibility for sales planning and forecasting, procurement, and order fulfillment. BUs dispersed across the globe can improve coordination by using the same information to stock appropriate inventory levels, perform strategic sourcing of raw materials, and use better network planning to reduce freight costs. However, if an organization procures parts from a limited number of suppliers and has concentrated distribution centers, having most data in a single system to track material is less important.

• Parts/materials sourced locally vs. parts/materials sourced globally

• Parts used locally vs. parts used across BUs

• Distribution centers concentrated in a few locations vs. distribution centers dispersed

• Parts/materials tracked locally vs. parts/materials tracked globally

• No intra-company transfers of material vs. intra-company transfers of material


• Corporate strategy: The decision regarding ERP consolidation should be aligned with the corporate strategy. Consolidating ERP systems is only one factor in transforming business processes, consolidating organizational functions, or integrating after mergers and acquisitions. If an organization's strategy is to grow through acquisitions, a single instance allows easier integration of acquired organizations into the single platform.

• Organic growth vs. inorganic growth

• Low likelihood of divestitures vs. high likelihood of divestitures

• Manage BUs or subsidiaries as a portfolio of companies vs. manage as a single enterprise

• Implementation/execution: ERP consolidation should be supported by a strong business case, followed up with diligent tracking of the benefits realized.

• High ROI vs. low ROI

• Low level of organizational disruption vs. high level of organizational disruption

• Quick implementation and deployment vs. protracted implementation and deployment

• Limited competing priorities vs. many competing priorities

• Ability to execute a global enterprise program vs. limitations in executing a global enterprise program

• IT skills and resources available vs. limited IT skills and resources

• Consistent with enterprise architecture and IT strategy vs. does not fit well with the current enterprise architecture
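One way to make dimensions like these actionable is a simple scoring rubric. The sketch below is a hypothetical illustration only: the dimension names, scores, and thresholds are assumptions invented for the example, not a published consolidation methodology.

```python
# Hypothetical scoring sketch: each dimension is scored toward a single
# instance (1.0) or toward federated instances (0.0), then averaged.
# All names, values, and thresholds are illustrative assumptions.

answers = {  # 1.0 = the end of each scale that favors consolidation
    "bu_autonomy": 0.8,        # low BU autonomy
    "bu_diversity": 0.6,       # fairly similar BUs
    "geography": 0.4,          # somewhat dispersed
    "growth_model": 0.7,       # mostly organic growth
    "execution_ability": 0.9,  # strong program execution capability
}

def consolidation_fit(answers):
    """Average the dimension scores and map them to a candidate model."""
    score = sum(answers.values()) / len(answers)
    if score >= 0.7:
        model = "centralize operations (single instance)"
    elif score >= 0.4:
        model = "standardize operations"
    else:
        model = "integrate information or operations"
    return round(score, 2), model

print(consolidation_fit(answers))
```

A rubric like this is a conversation starter for the business case, not a substitute for it; the point is that the target model should fall out of an explicit assessment rather than a default preference for one extreme.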

Organization and process change

ERP consolidation can have a wide impact across the organization; many business cases reveal that most benefits are realized within core business functions. Organizations often use ERP consolidation initiatives to restructure their organizations and reengineer their core business processes. There are three key approaches to consolidating multiple instances, many of which ultimately converge on a single environment.

• Back-office and operating group consolidation: This approach creates a single back-office instance, which could include finance, human resources, and purchasing. Similar BUs may be grouped into one or multiple operating group instances along with their supporting functions, like order entry, sales, manufacturing, and supply chain planning. This model allows for corporate oversight and integration, but retains some flexibility and autonomy of the core operations as they are aligned over time.

• Geographic consolidation: This approach groups similar business entities and their associated environments by geographic region, with front- and back-office processes in the same environment. For example, a global company with three or four instances could be reduced to two, which may eventually be streamlined into a single instance. This approach aligns with geographic regulatory differences and can ease time zone integration.

• Master data-driven consolidation: The instance with the most pressing need for change, or the instance representing the largest market, becomes the standard global master data structure, serving as a hub for other instances, which are usually grouped by geographies or operating groups.


Selecting the most appropriate model for an organization depends on its existing structure and system landscape. If different operating requirements play a major role and the organization has very different product and service offerings, the first approach may be most appropriate because it maintains flexibility in core areas, such as order management and manufacturing. If there are many regional synergies across different business entities, the second approach offers more flexibility to coordinate operations within a time zone and facilitates maintenance activities such as batch jobs, patches, and upgrades. The third model is geared toward companies with complex product structures that are able to define their master data strategies and structures up front and govern around them.

Steps toward effective consolidation

• Identify an effective model: Identify where you stand today and where you want to take your organization. This can also guide you toward an effective model and road map for your organization.

• Create a strong business case: The business case can help you quantify benefits and identify areas of high potential return, which may help prioritize and direct the effort. A business case will often require key stakeholders to sign up for the benefits and support them, which is the initial step in the change management process. It also lays out the cost and duration of the effort so that leadership understands the resources that may be required and the length of the effort.

• Use your A-team for execution: ERP consolidation often takes five to seven years for a global, diversified organization to transition from a fragmented to a more centralized landscape. To achieve this, you will need top performers on the team who understand your current organization and have a vision of what it will look like tomorrow.

Organizations should identify and quantify the key business drivers associated with the consolidation, gain executive support, determine the most appropriate model for instance consolidation, and identify the right path to implement it.


Drowning in data: New architectures and approaches to handle the multimodal data explosion

There is an ongoing data explosion, driven by the unimaginably vast amount of digital information that continues to grow, expand, and evolve rapidly. Businesses, governments, and individuals are increasingly enjoying the fruits of the World Wide Web and the benefits of new technologies, which, in turn, feed continuous innovation. These new techniques and technologies, such as smart devices and network appliances (smart phones, smart books, smart meters, LTE networks, etc.), in turn provide myriad new data sources and data capture points. The data explosion has become self-perpetuating.

Imagine: more technology penetration facilitating more data collection, more rapidly, from more powerful data sources. Fiber-optic cables are being laid, high-speed networks and smart grids are springing up, and more analog devices are becoming digitized.

The resulting volumes of information can have vast implications, from providing a basis for operational decision making to spotting business trends, managing risk, preventing disease, and even combating crime. More data can mean more analytics, which can mean more effective decision making.

In addition to the proliferation of data, we are also seeing an emphasis on data ‘providers’. Multiple new data providers are appearing in the marketplace while niche ones continue to grow. Still, determining the standardized and consistent ‘retail value’ of data remains elusive.

More data from more sources drives the need for focused organizational capabilities to handle this data explosion. Most organizations do not have an index or master view of all their data assets. Most organizations do not have a simple catalog of the data they purchase, trade with business partners/customers, sell to others, or provide for compliance reasons. The result can be skewed reporting as similar, but not identical, data points are distributed as a result of different collection and analytic processes. The Big Data explosion is the driving force in identifying creative enterprise methods for cataloging and managing data lifecycles.
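A pragmatic first step toward a master view of data assets is a simple catalog. The sketch below illustrates the idea in Python; the field names (owner, lifecycle stage, and so on) are our illustrative assumptions, not a standard metadata model:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One catalog entry for an enterprise data asset (illustrative fields)."""
    name: str
    source: str            # e.g., "purchased", "partner feed", "internal"
    owner: str             # accountable business owner
    lifecycle_stage: str   # e.g., "active", "archived", "disposed"
    tags: list = field(default_factory=list)

class DataCatalog:
    """An in-memory master view of data assets, keyed by asset name."""
    def __init__(self):
        self._assets = {}

    def register(self, asset: DataAsset):
        self._assets[asset.name] = asset

    def by_source(self, source: str):
        # e.g., list every data set the organization purchases
        return [a for a in self._assets.values() if a.source == source]

catalog = DataCatalog()
catalog.register(DataAsset("customer_master", "internal", "Sales Ops", "active"))
catalog.register(DataAsset("market_feed", "purchased", "Finance", "active"))
purchased = catalog.by_source("purchased")
```

Even a catalog this thin answers the questions above: what data exists, where it came from, who owns it, and where it sits in its lifecycle.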

The challenges of “Big Data”

Let us first define Big Data. ‘Big’ is, of course, a moving target. We now carry hundreds of gigabytes in our pockets and laptop cases, and it has become commonplace for end consumers to have terabytes of storage available on their family network. Big Data, then, is relative to the managing or owning organization. Most organizations run into Big Data challenges when they realize that they can collect and tap into more data sources, internally and externally, to enhance their business insights for decision making. The data processing capability maturity of the organization plays a key role in identifying when it has Big Data challenges.

Essentially, an organization has Big Data challenges when it accumulates data at a pace faster than it can digest it, exceeding its ability to analyze that data in near real time for effective decision making. Companies do not have to be large global corporations to have Big Data challenges.

The mad, inconceivable growth of computer performance and data storage — a relentless march from kilo to mega to giga to tera to peta to exa to zetta to yotta — is changing everything that can be represented as data, or built on those representations.

Authored by: Tracy Bannon, David Croft, Kajal Gorai, Sumit Sharma, and Ray Buckner


There has been recent interest in quantifying Big Data and even the coining of the term “Medium Data” to represent smaller organizations or agencies. This type of quantification can and should be avoided. The challenges faced by all organizations during the data explosion are the same.

The technical challenges associated with “Big Data” are often the first to be addressed by companies, typically with small or lagging efforts to integrate with governance processes, and the larger enterprise information management capability.

Several questions should be addressed across all functional areas of a company or with business partners/customers in order to understand the breadth and depth of the obstacles to managing Big Data in a sustainable and repeatable manner.

Big Data readiness assessment

Does your organization address Big Data challenges with an enterprise-wide perspective?

What is the availability of common semantic definitions of data?

Is there complete “birth” to “end-disposition” data lineage information?

Has freshness dating been discussed; e.g., time reference and time relevance of data?

What are considered to be the appropriate and acceptable formats, transformations, and integrations?

Is there the ability to encode data through shared vocabularies, dictionaries, and translators?

Are there appropriate and approved mappings between different data sources?

Do rules and methods for appropriate queries, aggregations, and integrations exist?

•Are there peer groups that can examine queries to be performed on composite datasets?

•Do the parties involved have the rights to integrate and query data?

Is there an ability to exchange information, whether internally or with an external partner?

Does the organization have the ability to enrich data in a consistent manner?

How mature are the organization’s “data-abilities”?

•Ability to preserve or make the identities associated with data anonymous?

•Ability to preserve and forward intellectual property rights?

•Ability to preserve and process usage rights?

•Ability to preserve and process privacy constraints?

Can the organization capture and independently verify results, whether internally or with an external partner?

The challenges identified by the assessment can be readily addressed with today’s technology solutions; however, reaching agreement across all functional areas and with business partners is the larger cultural issue that needs attention.

How can IT architectures handle the Big Data paradigm?

Enabling IT architectures to handle Big Data requires careful planning as well as revisiting mainstream approaches to data management.

Mainstream data management

Traditionally, enterprise data architecture is the cornerstone of effective data management. No tool or business process will likely be able to replace or compensate for the effects of an inadequate data structure. Many organizations leverage industry or vendor-based data models to jump-start their enterprise data architecture programs. The problem with this strategy is that such off-the-shelf products often cause organizations to miss the true key to effective enterprise data integration.


Careful planning and revisiting a mainstream approach will be required to achieve results when implementing enterprise data architecture capable of handling Big Data. An effective mainstream approach to data management that could be used is as follows:

•Promote self-documenting data life cycle

•Embrace enterprise data models

•Manage the enterprise data model

•Introduce model driven architecture

•Map the models to provide an enterprise view

•Actively guard data security

1. Self-documenting data life cycle: Use the enterprise data architecture as the means and vehicle to document enterprise data through its lifecycle thereby giving the organization a condensed view of the state of its data as it increases over time.

2. Embrace enterprise data models: Create an enterprise data model as an integrated view of the data produced and consumed across an entire organization. This way, it can represent a single, logical definition of data, unbiased by any system or application. Data modeling is the process applied to design the enterprise data model, and it is also used to design system databases. Big Data adds challenges to traditional data modeling because of the unstructured or semi-structured approach to collection, processing, and analysis. What is required is a practical and formal approach to gathering data requirements and understanding the data, in the form of a data modeling process.

3. Manage the enterprise data model: The model should be managed in a structured, formal way to achieve consistency and a single version of truth, understanding, and interpretation.

4. Introduce the concept of a model driven architecture: This concept relates to the broader enterprise architecture discipline and in essence states that you can define your business needs in terms of abstractions such as business rules, use cases, process flow plans or class diagrams that are independent of any implementation details. The assumption is that these high-level abstractions will be easier to maintain and change if they are independent of any platform or technology-specific details.

5. Map the models to provide an enterprise view: Enterprise data models are, in essence, a storage mechanism for metadata, as the logical data models provide the business view and the physical data models the technical view of enterprise data. These should be mapped to each other to obtain a single view of enterprise data.

6. Actively guard data security: Finally, security measures should be developed to ensure that data and data systems are protected against a variety of threats, such as sabotage, unauthorized disclosure, fraud, service interruption, misuse, and natural disaster. Sound protection against such threats supports the availability, confidentiality, and integrity of the data.

Traditional issues with data ownership

Another common pitfall that entraps many companies is the politics of data ownership. For example, there are often issues with picking a data owner for a given domain. It’s a given that there may be discussion over who actually “owns” data. Process owners are often very protective of the data their processes consume and can react negatively if they feel that their territory is being encroached upon.

To avoid this pitfall, some companies have instituted the concept of a chief data officer (CDO) or “data czar” for the entire organization, and they have given the position C-suite authority. However, most companies have a very low data governance maturity level and don’t yet have the data governance capabilities required to implement a “CDO” position. For these companies, it’s still probably best to appoint data owners for each subject area. The data owners should have certain qualities. First, they should have established relationships within the business based on mutual respect and trust. Second, they should be willing to be a strong advocate for their position at the executive level.

New approaches and paradigms for data and information sharing

Imagine a situation in which emergency medical, fire, law enforcement, and hazardous materials personnel respond to an emergency only to find that they all speak different languages and cannot communicate with one another. That scenario unfortunately describes the challenge government agencies often face today when they need to share information among incompatible database systems.

Incompatible data exchange and data sharing is not limited to government agencies. It reaches across industry, business, education, scientific, consumer, private, and public domains. Each domain space has unique challenges though much can be gained by discussing the concerted efforts of any one of these groups.


One such case study is governmental in nature and involves the U.S. Department of Justice and another federal government agency/department. To address the long-standing concern of incompatible database systems, the U.S. Department of Justice is collaborating with related federal agencies on a technology to facilitate sharing of vital public safety and criminal justice information. Known as the National Information Exchange Model (NIEM), the new interagency effort will provide a foundation for sharing information using extensible markup language (XML), an open standard that can allow exchange of information regardless of computer system or platform.
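The mechanics of a platform-neutral XML exchange can be sketched in a few lines. The element names below are purely illustrative, our assumption for the sake of the example; they do not follow the actual NIEM schemas:

```python
import xml.etree.ElementTree as ET

# One agency builds an exchange message as plain XML.
# (Hypothetical element names; real NIEM exchanges conform to
# published, versioned schemas.)
incident = ET.Element("IncidentReport")
ET.SubElement(incident, "AgencyName").text = "City Fire Department"
ET.SubElement(incident, "IncidentType").text = "HazardousMaterial"
ET.SubElement(incident, "LocationCity").text = "Springfield"

payload = ET.tostring(incident, encoding="unicode")  # text on the wire

# A receiving system, on any platform, parses the same payload
# without knowing anything about the sender's database or OS.
received = ET.fromstring(payload)
incident_type = received.findtext("IncidentType")
```

The point is the open standard: because the payload is self-describing text, sender and receiver need agree only on the schema, not on computer systems or platforms.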

Efficient and comprehensive information sharing environments (e.g., NIEM) can help organizations:

•Improve decision making capabilities by providing more timely and reliable data.

•Increase organizational agility in response to problems and changing environments.

•Potentially avoid costs associated with redundant processes and data.

The NIEM concept embodies next-generation enterprise data management technologies at the conceptual and implementation levels. Specific aspects include modularity aligned with common and stakeholder-specific needs, stakeholder consensus, and the collaborative development, sustainment and reuse of sets of core data types.

Evolving technologies to address Big Data

New technologies and implementation techniques have evolved alongside existing ones for dealing with Big Data. The traditional BI/analytics approach of recent years has focused on structured, vertically definable sets of data, in quantities that can be estimated and understood.

With the onset of the data explosion, the entire data landscape is evolving along with the associated business models. New, and oftentimes machine-generated, data sources are emerging. Business models are being transformed as a result of an increasingly connected consumer population, with ecosystems developing around the cloud.

As a result, there is a need for a paradigm shift toward business intelligence and analytics that is tailored to ambiguous sources of human and machine data, that can federate metadata across disparate sources of structured and unstructured data to facilitate effective semantic search, and that sets the foundation for predictive analytics.

Computers and mobile devices are getting more powerful, and bandwidth and connectivity continue to improve in step, allowing businesses to leverage technology. As a result, it is possible to run simulations and create complex models that predict future behavior and outcomes in real time and in a contextual manner, through the power of interpreting and crunching data.

From the technical perspective, Big Data should be addressed on three fronts: storing data, processing data, and learning from data, along with the associated technologies, which include data store technologies and analytic engines.

In this new paradigm of a data infused world where predictive analytics and contextual search become mainstream, what will the data repositories and structures look like, and what will their features be?

The four primary types of data store technology in use today can be summarized in the context of their associated data model approaches:

•Relational database/row-oriented data models.

•Extensible record stores/column-oriented data models.

•Document stores: No defined data model.

•Key-value stores: Index-based data models.

1. Relational database/row-oriented data models: These data stores are the most widely used throughout the enterprise today, and they are not going away despite all the chatter about how they aren’t scalable enough for the surge of data happening today. While there is an element of truth in this, all is not lost, because new scalable relational databases are emerging through innovative clustering, sharding, and transaction scoping provisions.

2. Extensible record stores/column-oriented data models: These data models allow for scalability by sharding across rows (typically split by key range) and grouping columns into pre-defined groups across nodes in a cluster. Examples of extensible record stores include Google’s BigTable, HBase (a Java Apache project), HyperTable, and Facebook’s own variant, Cassandra.
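The two scaling ideas just named, rows sharded by key range and columns grouped into pre-defined families, can be sketched in a few lines. Node names, key ranges, and column families below are illustrative assumptions, not any product's actual configuration:

```python
# Rows are assigned to nodes by the range their key falls into;
# columns are looked up by the family they were pre-grouped into.
SHARD_RANGES = [("a", "m", "node-1"), ("n", "z", "node-2")]
COLUMN_FAMILIES = {"profile": ["name", "email"], "activity": ["last_login"]}

def node_for_row(row_key: str) -> str:
    """Range-based sharding: which node stores this row?"""
    first = row_key[0].lower()
    for low, high, node in SHARD_RANGES:
        if low <= first <= high:
            return node
    raise KeyError(row_key)

def family_for_column(column: str) -> str:
    """Column grouping: which pre-defined family holds this column?"""
    for family, columns in COLUMN_FAMILIES.items():
        if column in columns:
            return family
    raise KeyError(column)

node = node_for_row("alice")         # keys a-m live on node-1
family = family_for_column("email")  # email sits in the profile family
```

Real extensible record stores add replication, rebalancing, and persistence on top of this, but the partitioning logic follows the same shape.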

3. Document stores/no defined data model: Document stores support complex, unstructured data requirements; however, the document stores prevalent today do not support typical requirements such as ACID (atomicity, consistency, isolation, and durability), trading them away to make room for scalability and performance.


Some examples include Apache’s CouchDB and another open source variant, MongoDB. Document stores do not have defined schemas, with the exception of a few standard features: attributes (names), collections (groups of multiple documents), and indexes (for organizing across multiple attributes and collections). In almost all cases, documents can be distributed across nodes in a scalable fashion, with differences in how reads and writes are controlled across multiple nodes. At first, this may seem exceptionally limiting to professionals who have cut their teeth, so to speak, using formal SQL to query and analyze data; however, query frameworks have evolved for distributed document-oriented data stores.
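The three standard features named above (attributes, collections, indexes) can be illustrated with a toy in-memory store. This sketches the concept only; it is not any particular product's API:

```python
class DocumentStore:
    """A minimal schemaless document store: collections hold dicts of any
    shape, and an index organizes documents by one attribute."""
    def __init__(self):
        self.collections = {}   # collection name -> list of documents
        self.indexes = {}       # (collection, attribute) -> value -> docs

    def create_index(self, collection, attribute):
        index = {}
        for doc in self.collections.get(collection, []):
            if attribute in doc:
                index.setdefault(doc[attribute], []).append(doc)
        self.indexes[(collection, attribute)] = index

    def insert(self, collection, document):
        self.collections.setdefault(collection, []).append(document)
        # keep any existing indexes current
        for (coll, attr), index in self.indexes.items():
            if coll == collection and attr in document:
                index.setdefault(document[attr], []).append(document)

    def find(self, collection, attribute, value):
        return self.indexes[(collection, attribute)].get(value, [])

store = DocumentStore()
store.create_index("incidents", "city")
store.insert("incidents", {"city": "Springfield", "type": "fire"})
store.insert("incidents", {"city": "Shelbyville"})  # different shape: no "type"
matches = store.find("incidents", "city", "Springfield")
```

Note that the two documents have different attributes; nothing enforces a schema, which is precisely the flexibility (and the governance challenge) of this model.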

4. Key-value stores/index-based data models: The simplest example of a scaling data model is the key-value store, which is, at the simplest level, a single key-value index for all data. These systems support insert, delete, and lookup operations, and they are scalable through distributing keys across nodes. Depending on the complexity of the data, key-value stores may be less favored than document stores. Examples include LinkedIn’s Voldemort and open source variants such as Tokyo Tyrant, Riak, and Scalaris. A key benefit of key-value stores is that you can experiment with the data, and if you find a specific analysis need, you can transform it into other structures to suit your needs.
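The whole model (insert, delete, lookup, with keys distributed across nodes) fits in a short sketch. The three-node layout is an illustrative assumption:

```python
# A bare-bones key-value store: each node holds a slice of the single
# key -> value index, chosen by hashing the key.
NUM_NODES = 3
nodes = [{} for _ in range(NUM_NODES)]  # one dict stands in for each node

def node_index(key: str) -> int:
    return hash(key) % NUM_NODES

def insert(key, value):
    nodes[node_index(key)][key] = value

def lookup(key):
    return nodes[node_index(key)].get(key)

def delete(key):
    nodes[node_index(key)].pop(key, None)

insert("user:42", {"clicks": 7})
value = lookup("user:42")
delete("user:42")
gone = lookup("user:42")  # None after deletion

# The "transform it into other structures" benefit: once an analysis
# need emerges, reshape the raw key-value pairs into a purpose-built view.
insert("user:1", {"clicks": 3})
insert("user:2", {"clicks": 5})
clicks_by_user = {k: v["clicks"] for node in nodes for k, v in node.items()}
```

The final line is the experiment-then-transform pattern the paragraph above describes: the store stays simple, and structure is imposed only when a specific question requires it.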

The catchphrase “mash-up” describes pouring data together to provide new insights and cross-connections that are not readily apparent when reviewing data sets in isolation. The resulting information can have vast implications, from providing a basis for operational decision making to spotting business trends, managing risk, preventing disease, and even combating crime.

Both traditional and new approaches to data stores are still often focused on a relational data foundation. The NoSQL movement, based on indexed storage systems as opposed to relational database systems, exemplifies one of the emerging trends in new-age data structures and repositories. Another increasingly popular alternative to NoSQL stores is the scalable relational database system, an extension of the data structures widely used today.

Distributed data sources and new data storage paradigms, including cloud computing, and advances in parallel processing have given rise to various technical frameworks that facilitate querying the data using abstraction, an important aspect of processing at scale. Abstractions over MapReduce, like Pig and Hive, have made simple things very easy, while frameworks such as the Cascading library have made complex data analysis possible.
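To see what those frameworks abstract over, consider the classic MapReduce word count, reduced to a single-process sketch. Real frameworks distribute the three phases across a cluster; the flow is the same:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Shuffle: group all values by key, as the framework would between
    mappers and reducers."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: combine each key's values into a final count."""
    return {key: sum(values) for key, values in grouped.items()}

docs = ["big data", "big insights from big data"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

Pig and Hive let analysts express this pipeline declaratively, generating the map, shuffle, and reduce stages rather than having developers hand-write them.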

Like any enterprise architecture consideration, there should be a roadmap and proper visioning before enterprises commit to a way forward.

The way forward

What remains to be seen is how rapidly the enterprise will make its way to the new world of Big Data, and how much consistency matters, in terms of both application data and design, across the variegated ways customers currently manage their information.

While there are many ways to solve the problem, the important thing is the approach; like any enterprise roadmap, it should include several iterative rounds of discovery and assessment.

Understand your current technology stack and capabilities

Throughout this article, we have focused on the non-technical aspects of the Big Data challenge. That having been said, your current and future technology stack and capabilities should be discussed. This will help you directly define your “Big Data technology style.”

Components of Big Data technology style include:

•Size: Determine how “big” your “data” is now and what sorts of growth you can handle as-is. Can your needs be met by a “single box,” or do you need the capabilities of parallelism?

•Skills: Consider the nature of your enterprise and the organization of your IT group. Is your IT outsourced, or are you a global enterprise with broad, mature IT capabilities? Can your needs be met by your current staff, or will you need to expand your resource pool?

•Infrastructure: Your current enterprise infrastructure has strengths and weaknesses, and organizations often have multiple parallel technology-based initiatives under way.

•Technology attitude: What is your organization ready for? Is it open to the possibilities of cloud computing?


Define your enterprise approach to Big Data

This is where the rubber hits the road. You’ve gathered and assessed, and now you need to plan your next steps and answer these fundamental questions:

•What type of data do you need to handle? The data explosion has yielded myriad formats, including image, blog, structured, and unstructured data.

•How is the data organized? If you know how you will use the data, SQL may make sense. If you are not quite sure, a key-value approach (such as MapReduce) may suit your enterprise better.

•What are your real-time data needs? Real-time analysis can yield a competitive advantage, but if your analysis needs are high, you may consider streams or wavelets. This approach lets you process the data as it comes in and quickly analyze it, placing it into usable structures. Your decision point will be whether an analytical approximation will do for your organization and what the analysis tolerance is.

•What will your future processing performance needs be? There are still physical limitations to consider in hardware (such as write speed). If performance matters more than anything else, SSD (solid state drive) technology may be on your roadmap; again, this depends on your analysis needs.

•How involved or engaged do your analysts need or want to be with the data? We have seen that professionals using abstract query paradigms tend to be more engaged with their data than SQL users.

Your enterprise approach to Big Data will be influenced by your Big Data technology style, your Big Data readiness, and the cultural challenges you should address. Even so, you should keep an open mind; these key points should be considered before committing to any approach:

1. Depending on the industry’s pressing needs, legacy technologies can be replaced with distributed computational resources (often compute clouds), and relational databases can be replaced, or perhaps offset, with NoSQL data stores that allow for extremely high volumes of transactions without requiring SQL-specific programmatic methods.

2. The data processing resources can live in a corporate data center or out on the Internet with providers. Odds are a hybrid approach to processing and managing this data, through both internal compute clouds and public cloud offerings, is the path to achieving results.

3. Corporations embarking on business analytics and predictive analytics should take a holistic approach to Big Data capabilities. These corporations can begin evaluating vendors to expand analytics to encompass Big Data, information streams, and structured data in data warehouses.

As innovation continues, the data explosion will continue. By taking a multi-modal approach to Big Data, your organization can do more than “brace to survive the storm”; it can embrace the flood and structure itself to capitalize on change.


Organizations have embraced distributed environments over the past 20 years as a way to cut overhead, increase workforce flexibility, and integrate geographically dispersed teams and information technology (IT) resources. Distributed environments can also improve resource availability while supporting system and user growth with as-needed network expansion. Perhaps most importantly, they are integral to the growth of cloud computing, which can further reduce spending on IT infrastructure.

While these environments offer significant advantages, they introduce their own set of operational and security challenges. These include data privacy, system transparency for end users, implementation of common standards to securely share information, and management of heterogeneous systems, IT resources, and costs.

To address these challenges, architects are using varied strategies to manage IT services (e.g., cloud computing) and protect IT services and assets with approaches such as identity, credential, and access management (ICAM), and data loss prevention (DLP).

Challenges of a distributed environmentDistributed computing offers cost savings as well as human and IT resource flexibility. But with these advantages come security and privacy concerns and requirements. Providing security and compliance for a large group of distributed-network participants is especially challenging amid rapidly changing regulations and time and budget constraints. Organizations using distributed environments must consider and prepare for the following challenges:

•Management of heterogeneous systems and environments: Distributed environments allow for a diverse selection of systems, all of which must operate together despite differences in hardware architectures, operating systems, communication protocols, programming languages, software interfaces, security models, and data formats. This diversity requires security solutions that are flexible enough to address varying system and network configurations and multiple trust or clearance levels (internal, external, trusted third party, etc.). These requirements can challenge an organization’s ability to manage diversity without duplicating efforts and resources.

•Management of systems without authority: In a distributed environment, an organization must consider the risks that external or third-party systems pose to the shared environment and the challenge of how to prevent harm to one’s organization. While some organizations may have memoranda of understanding, service level agreements, or substantial due diligence in place, there is still potential for exposure to untrusted systems or applications.

Addressing the digital threat
Data security and privacy in distributed environments

The main features of a distributed environment include:

•Functional separation: Resources tend to be geographically dispersed.

•Inherent distribution: Information can be generated, stored, analyzed, and used by different systems or applications, which may or may not be aware of the existence of the other entities in the system.

•Reliability: Long-term data is replicated and redundant at different locations.

•Scalability: Architecture is able to support increasing resources.

Authored by: Carrie Boyle, Josh Drumwrite, and John Ezzard

•Management of privacy: Within a distributed environment, the collection, granting of notice, and processing of data may occur in three separate systems. A shared solution must tie together these discrete activities so that information collection, use, sharing, retention, and destruction comply with the consent given by a user and the notice provided by the system. Because data is distributed across systems and geographies, organizations must take steps to know where their data resides and whether appropriate privacy controls are in place to prevent unauthorized access. Additionally, policies and procedures must support the protection of personally identifiable information (PII).

•Budget: Distributed environments may include systems used by more than one operating or funding group, which can complicate funding strategies. Distributed environments may utilize diverse selections of systems and networks. This diversity, while a benefit, often requires organizations to consolidate IT spending on solutions that address the entire distributed environment. Depending on the size of an organization, this may also determine the extent to which it can afford technology solutions to support defense in depth.

•Diversity in participants: As networks grow, so does the number of participants. The technical and security knowledge of these participants may vary widely, from novice end users to experienced network administrators or application developers. As networks become more distributed, the diversity of participating people and systems increases, making it harder to provide a seamless experience for end users.

Managing and securing a distributed environment

The growth of distributed computing is a testament to organizations’ ability to address not only its security and privacy risks but also its management, through approaches such as cloud computing.

Figure 1 illustrates an environment of geographically distributed systems connecting with virtual and physical systems via the Internet. It highlights the challenge of maintaining confidentiality, integrity, and availability when IT assets and resources are distributed across geographical boundaries. The following sections summarize two basic security and privacy concepts to address distributed environments: ICAM and DLP.

Figure 1: Distributed environment

Managing IT services: Cloud computing

What is cloud computing? Cloud computing is the delivery of scalable IT services and capabilities via the Internet or a private network. A cloud computing environment is dynamic in its ability to add or subtract capacity, reallocate resources, and adjust to changing functional requirements. Cloud computing allows organizations to decrease the cost of new or aging IT infrastructure by providing three basic services:

•Infrastructure as a service (IaaS) provides network communications, server resources, and flexible and scalable bulk storage. IaaS provides scalable processing capacity and capabilities to support an organization’s current and emerging technology needs via virtualization and other integrative technologies.

•Platform as a service (PaaS) consists of common services that support multiple applications and databases. PaaS supports application development and information management.

•Application as a service (AaS) allows for anytime/anywhere access to applications. AaS frees business managers to focus on business missions and requirements rather than hardware, software, and technical considerations. It also supports Web-based applications, which provide increased flexibility in an environment where users and resources are distributed.

How is cloud computing addressing the challenges? Cloud computing helps organizations shed the costs of owning and maintaining IT infrastructure. It also lets organizations pool resources to deliver more transparent information and functionality to end users. Cloud computing is elastic; users can scale up or down, or opt in or out, as needed.


What are the key considerations? Evaluating cloud models: Organizations need to evaluate the pros and cons of each of the three service offerings against their enterprise technology goals. A scalable cloud approach might start with one segment of infrastructure and then expand to meet other business requirements.

Establishing trust with a cloud provider: Organizations must trust cloud providers before they share their data with third parties. How will this trust be established, particularly as the necessary public and private governance models for cloud providers are still evolving? Accordingly, organizations must also remember to extend their security policies to their cloud providers as they work to establish trust in them.

Authentication of cloud users: Organizations rolling out cloud computing services will need to establish trust between their corporate infrastructure and the cloud, deciding which facets of information are shareable with third-party-managed clouds and what standards should be used. For example, will highly sensitive information, including user names and passwords, be shared?

Management of distributed services: Cloud environments serve as a central service to distributed users. This centralization may allow for easier management, configuration, and patching of security devices and technologies in cloud environments. Also, organizations that leverage services provided by a third party may be able to decrease overhead and management expenses while focusing on mission-related services. Centralized services also make it easier to manage compliance issues and support the deployment of services, patches, and other fixes. Centralized monitoring and related tools provide real-time intelligence, such as early-warning threat analysis, and a better understanding of the security baseline in an organization’s technical environment.

Security and privacy concepts

Identity, credential, and access management

As an organization’s mission expands, new applications often become available to employees, partners, and customers. This increases the time and costs associated with managing digital identities and can strain an organization’s limited resources. Disparate systems and processes for user administration, provisioning, and access rights management compound the problem and can increase IT-related risk.

What is ICAM? ICAM helps protect enterprise resources in a distributed environment by providing the organization with the ability to know exactly who is accessing what, when, where, and why and whether that employee, partner, or customer has the appropriate clearance. ICAM combines these attributes in a single approach that captures digital identities and credentials and manages system access through standardized control.

How is ICAM addressing the challenges? ICAM provides the data security, privacy, and authentication needed to promote collaboration in a shared environment. It also helps reduce security costs by repurposing and integrating resources in an enterprise approach. ICAM also manages participant diversity to make sure the right people get access to the right information at the right time.

What are the key considerations?

Digital identity management: Today, digital identities are created on an application-by-application basis across distributed environments. As organizations deal with increasingly complex sets of identities, they must figure out how to provide access across the enterprise without creating redundant, distributed sources. Using digital identity management as part of ICAM, organizations can focus on the following:

•How identity data will be used.

•How to protect personally identifiable information (PII).

•How to enforce identity data policies.

•How to manage the overall lifecycle of identity data, including access control rights.
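As a sketch of this lifecycle view, the following Python fragment models a centralized identity record whose access rights are granted, revoked, and cleared at deactivation. All names are illustrative; this is not a real ICAM product API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a single enterprise identity record whose access
# rights are managed centrally rather than per application.
@dataclass
class DigitalIdentity:
    user_id: str
    pii: dict                               # protected attributes, e.g. name
    access_rights: set = field(default_factory=set)
    active: bool = True

    def grant(self, resource: str):
        if not self.active:
            raise PermissionError("cannot grant rights to a deactivated identity")
        self.access_rights.add(resource)

    def revoke(self, resource: str):
        self.access_rights.discard(resource)

    def deactivate(self):
        # End of the identity lifecycle: all access rights are removed.
        self.active = False
        self.access_rights.clear()

identity = DigitalIdentity("jdoe", {"name": "J. Doe"})
identity.grant("hr-portal")
identity.deactivate()
assert identity.access_rights == set()
```

Centralizing the record this way is what lets the lifecycle policy (the deactivation step in particular) be enforced once rather than in every application.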

The 2011 CIO Compass 31

Authentication, authorization, and access control: When implementing any solution, organizations will want to manage the confidentiality and integrity of systems with provisions to grant and restrict access to sensitive data. The most common mechanism combines authentication, authorization, and access control (AAA). Authentication determines the identity of a person trying to connect to a system. Authorization checks that the user is allowed to receive access to a resource. Access control manages who actually connects to a resource. Implementing AAA in a distributed environment presents challenges. Specifically, organizations must consider how to provide appropriate mechanisms for two remote resources to establish the trust necessary to conduct sensitive transactions. Another challenge is building identification, authentication, and logical and physical resource capabilities into a single access card. With centralized identity management and overall access right enforcement, ICAM can help organizations limit access to critical resources to only authorized users.
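The three AAA steps can be sketched as follows; the user names, passwords, and resource names are all illustrative, and a real system would of course store hashed credentials rather than plaintext.

```python
# Minimal AAA sketch (illustrative data, not a real product API).
USERS = {"alice": "s3cret"}                 # authentication data
GRANTS = {"alice": {"payroll-db"}}          # authorization data
CONNECTED = set()                           # access-control state

def authenticate(user, password):
    # Step 1: establish who is trying to connect.
    return USERS.get(user) == password

def authorize(user, resource):
    # Step 2: check the user is allowed to receive access to the resource.
    return resource in GRANTS.get(user, set())

def access(user, password, resource):
    # Step 3: manage who actually connects.
    if authenticate(user, password) and authorize(user, resource):
        CONNECTED.add((user, resource))
        return True
    return False

assert access("alice", "s3cret", "payroll-db") is True
assert access("alice", "wrong", "payroll-db") is False
assert access("alice", "s3cret", "hr-db") is False
```

The value of centralizing these checks, as ICAM proposes, is that the three steps are enforced in one place instead of being reimplemented per application.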

Cryptography: Organizations using distributed environments must protect the data that leaves their networks. Cryptography helps organizations maintain private communications by addressing the following:

•Confidentiality: Information cannot be understood by outsiders.

•Integrity: Information cannot be altered in storage or transit between sender and intended receiver without being detected.

•Nonrepudiation: Creators or senders of information cannot later deny their intentions.

•Authentication: Senders and receivers can confirm each other’s identities and the origins and destinations of the information.

When considering cryptography, organizations must consider the technical implementation (encryption strength), the people (what training is necessary to handle cryptographic keys), and the processes for managing cryptographic keys.
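Of the four properties above, integrity and authentication between parties that share a key can be illustrated with an HMAC from the Python standard library. Confidentiality would additionally require encryption and nonrepudiation a digital signature, neither of which is shown; the hard-coded key stands in for the key-management process the paragraph mentions.

```python
import hmac
import hashlib

# Sketch of integrity and authentication using a shared key (HMAC-SHA256).
KEY = b"shared-secret-key"   # key management is the hard part in practice

def sign(message: bytes) -> str:
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # Detects any alteration in storage or transit between sender and receiver.
    return hmac.compare_digest(sign(message), tag)

msg = b"wire transfer: $100 to account 42"
tag = sign(msg)
assert verify(msg, tag)
assert not verify(b"wire transfer: $999 to account 42", tag)
```

Note the use of `compare_digest` rather than `==`, which avoids timing side channels when comparing authentication tags.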

Federation and trust framework: Within distributed environments, users, systems, and partners may be geographically separated but still need to share data and work together. Federation lets subscribers of multiple enterprises use the same identification data to access the networks of all enterprises in the group. Organizations considering federation will need to account for interoperable access to systems and data to support information sharing by leveraging common standards. They must do so while controlling access and providing information-sharing protections. Partners require compatible ICAM policies and approaches to achieve interoperability in a distributed environment. Building trust relationships is an ongoing process. Using interoperable credentials helps establish a trust framework. Trust services include the mechanisms to support trust establishment, negotiation, agreement, and fulfillment through the use of appropriate policies to check the interactions of partners. Federation and a trust framework provide an enhanced user experience, reduced costs, and improved security. Additionally, they help enforce access controls across distributed platforms and operating systems.
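At its simplest, a trust framework reduces to accepting identity assertions only from issuers covered by the federation agreement. The following sketch is hypothetical (the issuer names are invented); a real deployment would also verify a cryptographic signature against each issuer's published keys.

```python
# Illustrative trust-framework sketch: an enterprise accepts identity
# assertions only from issuers in its federation agreement.
TRUSTED_ISSUERS = {"partner-a.example", "partner-b.example"}

def accept_assertion(assertion: dict) -> bool:
    # A real deployment would verify the assertion's signature against the
    # issuer's published key; here we only check federation membership.
    return assertion.get("issuer") in TRUSTED_ISSUERS

assert accept_assertion({"issuer": "partner-a.example", "subject": "jdoe"})
assert not accept_assertion({"issuer": "unknown.example", "subject": "jdoe"})
```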

Data loss prevention

The modern distributed environment faces unprecedented security challenges from large mobile-storage devices, wireless connectivity, and trusts between private and public networks. As the sharing and distribution of data becomes easier, the risk of unintentional distribution outside the organization increases.

What is data loss prevention? Data loss is the movement of a data asset from an intended state to an unintended, inappropriate, or unauthorized state, which represents a risk or a potentially negative impact to the organization. A loss may or may not cause immediate harm, but it generally assumes that security has been breached through attack, error, or lack of awareness. This might occur through simple day-to-day activities, such as sending e-mail, instant messaging, or sharing data on a USB stick.

Historically, securing information has meant securing the network, server, and application infrastructure around it — everything but the information itself. DLP refers to systems that identify, monitor, and protect data in motion, at rest, and in use through content analysis. DLP solutions achieve this coverage using network or endpoint sensors to analyze data as it moves from one state to another. Centralized and distributed network and endpoint monitoring sensors each offer benefits, as follows:

Network monitoring analyzes data traffic over the network to identify sensitive content transmitted via methods such as e-mail, instant messaging, HTTP, and FTP. It allows for passive or active monitoring of traffic and integrates with e-mail to quarantine, encrypt, and filter outbound e-mail.

Deloitte has provided the U.S. Department of Health & Human Services (HHS) with enterprise-level, end-to-end IT and program management support to implement an ICAM solution. HHS benefited by centralizing the management of identity data; streamlining its approach to AAA; and fulfilling Federal compliance requirements, such as Homeland Security Presidential Directive 12 and Federal Information Processing Standard 201.


Endpoint monitoring looks at user interactions with data to identify attempts to transfer sensitive content to an inappropriate or unauthorized state, such as a USB drive. At the operating system level, it monitors user activity and blocks actions according to policy. At the file system level, it monitors and enforces according to where data is stored.

These sensors work to identify state changes representing policy violations. Once a policy violation is discovered, DLP tools can take a variety of actions based on their policy configuration, including alert, report, warn, quarantine, notify, protect, encrypt, access control, and delete actions.

How is DLP addressing the challenges? Organizations have adopted DLP technologies and supporting processes to help protect data at rest, in motion, and in use. Ultimately, each of these solutions helps protect data from unintended or malicious actions, such as sending Social Security numbers in unencrypted e-mail or downloading PII from a sensitive data store.
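Content analysis of this kind can be approximated with a pattern match. The hypothetical sketch below flags outbound text containing a U.S. Social Security number and maps the match to one of the policy actions described earlier; real DLP products support richer detection (dictionaries, fingerprints) and many more actions.

```python
import re

# Hypothetical DLP content-analysis sketch: scan outbound text for U.S.
# Social Security numbers (NNN-NN-NNNN) and choose a policy action.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def inspect(message: str) -> str:
    # Returns the action a sensor might take: "quarantine" on a match,
    # "allow" otherwise. Other actions (alert, encrypt, delete, ...) are
    # possible in real products.
    return "quarantine" if SSN_PATTERN.search(message) else "allow"

assert inspect("My SSN is 123-45-6789") == "quarantine"
assert inspect("Invoice total is 123.45") == "allow"
```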

What are the key considerations? Data protection practices — Implementation of DLP alone will not resolve an organization’s data-protection problems. DLP can only implement an organization’s existing policies, procedures, and practices. As such, if an organization has not defined its practices, the implementation of DLP can cause significant disruption by blocking legitimate activities. Accordingly, as organizations seek to implement DLP, they should first review and consider their existing policies, procedures, and practices. Without strong mandates for data protection, and the procedures and practices to support them, DLP will be ineffective and potentially hinder an organization’s legitimate operations.

What should be monitored? Data loss can occur in many different ways, including through removable media and e-mail. Accordingly, there are several techniques to mitigate data loss, including monitoring data at rest, in motion, and in use. Data-at-rest monitoring looks for inappropriate data stored on servers and within databases, which allows organizations to remove or protect information in an effort to prevent data loss. Monitoring data in motion means reviewing it as it enters or leaves a system or network, which allows the organization to identify or prevent inappropriate data from leaving its confines. Monitoring data in use means evaluating how users access data and what actions they perform against the data, which allows organizations to identify and address inappropriate or malicious actions. As organizations evaluate DLP products and their implementation architecture, they will need to evaluate which of these techniques to use in their DLP solution. This evaluation should take into account how the different monitoring mechanisms can help an organization achieve its overall data protection goals.

Sensor placement — Organizations must decide which DLP sensor placement model to use — network based or endpoint based. Organizations evaluating sensor placement models will need to consider where their data flows, as sensors are only able to monitor data that they can see. Additionally, organizations will need to identify any special requirements or considerations, such as whether traffic on a network is e-mail only and whether it is encrypted. In making a decision between endpoint and network sensors, organizations should consider that endpoint-based agents are typically installed on systems where data flows do not follow a consistent path, which makes the installation of numerous network-based sensors cost prohibitive.

People and processes — Organizations must identify and train people who will use and support DLP tools, as well as develop and roll out processes to support these tools. These processes include management and monitoring of the tool, event review, and incident response, which allow for effective delivery of data protection.

Deloitte supported the Federal Aviation Administration’s (FAA) selection of a DLP tool and the implementation of the tool across its distributed environment. Implementing a DLP tool helped the FAA reduce the loss of data and identify unauthorized actions to access and modify data by using improved monitoring capabilities.


Conclusion

As organizations aspire to do more with less, they will face continued security and privacy challenges: protecting the confidentiality, availability, and integrity of data; enforcing privacy policies; implementing the appropriate safeguards to protect systems; and establishing appropriate trust with partners. The use of distributed environments will help them deliver flexible and cost-effective solutions. Whether the solution is cloud computing, DLP, or ICAM, organizations will need a structured, architectural approach and an agile management methodology in place. With strong planning and project management, organizations can effectively implement these solutions in a distributed environment and reduce the impact of security and privacy challenges on their environment.


Enterprise architecture (EA) and Lean IT are two approaches that many organizations use today to promote greater transparency, efficiency, and alignment between the business and information technology (IT) to achieve strategic objectives. Each has guiding principles, a delivery framework, and analytical tools that can be independently leveraged to identify inhibitors that limit enterprise effectiveness and to implement solutions for removing those impediments. This article explores the relationships between these two approaches by describing key areas where EA can help organizations attain Lean IT objectives.

Lean IT and EA overview

Lean IT

The objectives of Lean IT often focus on improving internal customer value by reducing waste in the IT delivery and governance cycles, and are typically applied after the business strategy is formulated. Lean IT seeks to improve core business process flow using tools or automation to exclude activities that don’t add strategic value. Removing waste from processes liberates capacity, increases velocity, and lowers costs. These variables can be managed to increase customer value because they can be traced to value drivers.

Lean IT is a subset of Lean, which focuses on value management. True Lean thinking uses external customer value as the barometer of effectiveness, whereas Lean IT may be internally focused. Lean IT nonetheless inherits the Lean principles of excluding waste and empowering people. The risk of veering from the true Lean path is losing sight of the whole for the sake of local analysis; if not managed, the two paths can become incongruent.

Lean principles center on a holistic perspective that distinguishes processes that add customer value from those that don’t. Lean seeks to reduce cycle time and process variation, improve quality by testing early and often based on customer-level tests, develop continuous flow, and pull work through the system by avoiding batch work and reducing work in process (WIP). It involves people at many levels in pursuing perfection, and empowers staff to make the process and product changes necessary to meet customer expectations.

To improve productivity by excluding waste, you should first recognize waste in the IT delivery cycle and create an environment that challenges everything and embraces change. Typical wastes include: partially-complete work products, additional process cycle-times, extra functional features, resource intensive task-switching, resources waiting to be utilized, artifacts waiting to be processed, and defects. The Lean approach creates a work environment focused on continuous improvement where customer value is delivered faster through the value stream. Understanding organizational structures and the opportunities for improvement — especially IT opportunities — is a critical strength for EA.

Enterprise architecture

The objectives of EA are to help organizations achieve strategic business and IT alignment, envision technology solutions to business problems, and facilitate consistent capability performance. It does this by promoting business and IT governance models, developing standards, and creating business and technology architectural blueprints. It assists in describing and tuning the operating model and can provide decision support for business and technology stewards to achieve the business strategy.

EA and Lean ITExploring the relationships

Authored by: Jerome Campbell, Jeff Anderson, Derrick Robinson, and Scott Rosenberger


The EA approach creates a strategy-focused, top-down alignment of the business and IT through layered views of enterprise assets with respective domain partitions and evaluation of resource utilization and specific performance characteristics, coupled with transparency of operations.

There are three well-known EA standards: The Zachman Framework for Enterprise Architectures, The Open Group Architecture Framework (TOGAF), and the Federal Enterprise Architecture (FEA).

The Zachman Framework is a taxonomy for organizing architectural artifacts that takes into account the target of the artifacts and specific issues that require resolution. John Zachman described his work as:

The Enterprise Architecture Framework as it applies to Enterprises is simply a logical structure for classifying and organizing the descriptive representations of an Enterprise that are significant to the management of the Enterprise as well as to the development of the Enterprise’s systems.1

TOGAF divides an enterprise architecture into four categories: business architecture, application architecture, data architecture, and technical architecture. The most visible part of TOGAF is the architecture development method (ADM). The ADM process is complementary to the Zachman Framework because one tells you how to create artifacts (TOGAF) and the other tells you how to categorize them (Zachman).

FEA has a broad taxonomy and an architectural process that consists of five reference models: business reference model, components reference model, technical reference model, data reference model, and performance reference model. FEA views an enterprise through several lenses: core mission area segments (political boundaries), business services segment (required foundation to most political boundaries, e.g., finance), and enterprise services (spans political boundaries, e.g., security).

The goal of all three standards is to manage the linkage between business and IT capabilities so the enterprise can steward its assets and create value. The key questions that EA answers are:

•What are the business goals?

•How are the business processes organized to deliver value?

•How are those processes related to each other?

•Which business processes are candidates for technology improvement?

•What is the plan for making the improvements?

Lean IT and EA comparisons

Deloitte’s EA framework provides an organizing principle and a holistic approach for evaluating business and IT problems in any industry. This framework, which has roots in EA standards, allows us to compare Lean IT with EA.

Figure 1 illustrates an EA layered framework and indicates where EA aligns to Lean IT. It highlights the taxonomy that contains business and IT asset blueprints, as well as other strategic planning and project delivery artifacts. The framework facilitates a process for creating conceptual, logical, and physical architectural views that traverse the EA layers. These viewpoints address the concerns of critical stakeholders, such as sales and marketing, manufacturing, and finance. Although the delivery or foundational aspects are not shown, they are consistent with EA standards and can be tailored to suit the needs of professional services delivery and management.


Figure 1: Lean EA perspective

[Figure: a layered enterprise architecture framework (strategy, business process, information and services, application, integration, and infrastructure layers). Lean overlaps EA at the business process layer, which is labeled Lean IT. Cross-cutting Lean disciplines span the layers: continuous process improvement; value stream maps/non-value-add analysis; SIPOC diagram; and voice of the customer/voice of the business.]

This diagram emphasizes the relationship of Lean IT to EA by highlighting a discipline in business process management for both. EA provides insight into how resources are consumed as business processes are executed. Lean IT defines the process boundaries via SIPOC diagrams and uses value stream maps to identify waste.

Lean IT provides EA a simple value management framework through a continuous process improvement lens. This lens can illuminate opportunities to exclude waste from processes and process enablers, like IT services. Waste is attributed to cost and lost productivity, and these losses are attributed to Lean opportunities or EA solutions. EA provides a broad set of mechanisms for defining solutions and solution options. Thus, any corporate initiative that EA underwrites that contains strategies for cost reduction, customer approval, employee/partner culture transformation, sales growth, or operations excellence will find support in Lean IT.

Lean opportunities often require IT solutions or business process changes, creating a direct correlation between the two disciplines. EA directly contributes to the “see the whole” principle: it supports various levels of process modeling and links processes to IT assets and business strategy.


Lean and enterprise architecture similarities

Lean/Lean IT | Enterprise architecture
Process waste elimination | Process standardization
Process definition/SIPOC diagram | Blueprinting/process modeling
Pull/flow/process capability | Integration/workflow/strategy/governance
Value stream map | Blueprinting/process modeling
Value-add analysis/process cycle efficiency | Process governance/IT governance/BPM/BAM
Process improvement | Solution architecture/solution options
See the whole | Architecture views and viewpoints

Lean focuses on empowering ordinary people, while EA typically requires deep technology skill sets. EA takes its cues from rolling strategic planning cycles, while Lean’s simplicity comes from a close focus on internal or external customer value-add. No single methodology or framework will solve all enterprise problems, but each of these two approaches can help overcome the other’s weaknesses.

Lean and enterprise architecture strengths

Lean/Lean IT | Enterprise architecture
Tactical bottom-up approach to satisfying external customer demand via pull and flow techniques | Top-down and bottom-up alignment; business strategy alignment to IT capabilities
Process-driven improvement (includes supporting IT enablers) | Enterprise multi-dimensional view of improvement
Structured approach to cost reduction and value creation | Structured approach to IT operations unit costs and applications maintenance costs
Empowers the individual; harnesses the power of ordinary people/employees | Centralized governance of IT assets; utilization of architecture repository to increase information sharing
Promotes an environment of continuous improvement | Increased management approval and risk management
Faster throughput of value to customers | Facilitation of IT asset re-use and IT responsiveness
Improvement by excluding nonvalue-add activities | Improvement by refining existing value-add activities

Lean and enterprise architecture weaknesses

Lean/Lean IT | Enterprise architecture
Sustaining Lean benefits in the face of process variation and lack of root cause analysis; typically requires supporting methods like Six Sigma to maintain cost savings (this reliance is not discussed in this article) | Speed to benefit, due to a lack of tactical focus and the need to build foundational capabilities, e.g., IT asset inventory, architecture repository

Summary of Lean and EA collaboration

Figure 2 shows how EA and Lean IT can converge to identify near-term opportunities to improve productivity and deliver business solutions by providing a high-level view of a knowledge turn between EA and Lean IT. This knowledge turn begins with value creation by excluding waste and feeds the EA cycle for execution and feedback. The continuous improvement capability is separate, yet integrated.


Figure 2: Lean and EA artifact flow

[Figure: the knowledge turn between Lean and EA, showing primary functions and key artifacts. The Lean half runs from process analysis (start) through continuous improvement, with artifacts including the SIPOC diagram; value stream map (VSM); value-add analysis/value ratio (PCE); and triage/record work request/change request. The EA half runs from business and IT capability analysis through governance (end), with artifacts including capability analysis/conceptual architecture; blueprinting/solution architecture with solution options and recommendations (logical); project architecture covering process, application, integration, data, and infrastructure (logical, physical); architecture blueprint updates/EA repository; change management; and architecture review/standards compliance.]

Example — Lean applied to a large national health plan

Project description

Apply Lean IT to a large multiyear, multiproject program remediating more than 150 systems with 300 resources and business-to-business implications. The specific objectives were to: 1) identify issues early through metrics analysis, and 2) take corrective action to improve process flow and remove impediments.

Specific Lean terms

•Customer value add: Activities external customers would be willing to pay for if they knew you were doing them; any activity in a process that is essential to deliver the service or product to the customer; activities that add form or features to the service.

– Goal: Continually improve and standardize.

•Nonvalue add, mandatory (required waste): Activities that are required by the business to execute value-add work but add no additional value from a customer viewpoint; internal customer would complain if you stop doing this activity; activity is required for legal, governance, reporting, or financial risk.

– Goal: Minimize or exclude; check with the customer.


•Nonvalue add: Activities that add no value from the customer’s viewpoint and are not required for financial, legal, or other business reasons; if you stop doing this work, internal or external customers would not know the difference; rework needed to fix errors, waiting, idle time, delays, or over production.

– Goal: Exclude.
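The three categories above can be expressed as a simple classification rule. The activity labels below are illustrative, not taken from the program described here.

```python
# Sketch of the three Lean value categories as a classification table.
CUSTOMER_VALUE_ADD = {"design feature", "build feature", "deliver service"}
MANDATORY_NONVALUE = {"regulatory reporting", "financial audit"}

def classify(activity: str) -> str:
    if activity in CUSTOMER_VALUE_ADD:
        return "customer value add"          # goal: improve and standardize
    if activity in MANDATORY_NONVALUE:
        return "nonvalue add, mandatory"     # goal: minimize or exclude
    return "nonvalue add"                    # goal: exclude

assert classify("build feature") == "customer value add"
assert classify("regulatory reporting") == "nonvalue add, mandatory"
assert classify("rework to fix defects") == "nonvalue add"
```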

Lean opportunity statement

•Reduce project time slippage by improving overall system development life cycle (SDLC) flow via Lean analysis.

•Reduce overall program cost by reducing time spent in the design phase, leveraging a model-driven approach, and producing more complete and effective artifacts via Joint Application Design (JAD) sessions.

•Improve quality of deliverables by reducing defects prior to testing phase by reviewing requirements artifacts during JAD sessions.

•Improve collaboration across teams by creating end-to-end integration views of flows using Rational tools.

•Utilize EA discipline to organize the project and manage/deliver improvement opportunities.

SDLC throughput analysis — Average demand rate

One of the primary goals of Lean is to create constant flow and to level process cycles that exceed the average demand rate (productivity rate) for a given process. Throughput analysis of the SDLC indicates areas that require attention.

Nonvalue-add analysis — Based on project plan variance

Phase | Plan variance | Nonvalue add (%)
Plan | Re-plan: 8/3/09 to 5/21/10 (10 months) | 100%
Requirement | Original plan: 10/1/09 to 3/31/10 = 6 months; re-plan: 9/1/09 to 6/25/10 = 10 months; nonvalue add = average of three-plus months across systems | 50%
Design | Original plan: 12/13/09 to 7/30/10 = approx. 8 months; re-plan: 1/25/10 to 9/28/10 = approx. 10 months; nonvalue add = average of two months across systems | 25%
Construct | Original plan: 4/19/10 to 10/29/10 = approx. 6 months; re-plan: 2/5/10 to 2/8/11 = approx. 12 months; nonvalue add = average of four months across systems | 67%
Test | Original plan: 6/17/09 to 2/15/11 = approx. 9 months; re-plan: 3/5/10 to 6/15/11 = approx. 15 months; nonvalue add = average of four months across systems | 44%
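The percentages in the table follow from dividing the average schedule slip by the originally planned duration. A quick check using the table's figures (months):

```python
# Nonvalue-add percentage = schedule slip / originally planned duration,
# rounded to a whole percent. Figures taken from the plan-variance table.
plan_variance = {                 # phase: (original months, slip months)
    "Requirement": (6, 3),
    "Design": (8, 2),
    "Construct": (6, 4),
    "Test": (9, 4),
}

nonvalue_pct = {phase: round(100 * slip / original)
                for phase, (original, slip) in plan_variance.items()}

assert nonvalue_pct == {"Requirement": 50, "Design": 25,
                        "Construct": 67, "Test": 44}
```

(The Plan phase, with no original baseline shown, is carried at 100 percent in the table.)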


SDLC throughput analysis — Cycle time analysis

By segmenting the nonvalue-add functions, the specific areas that need attention are called into focus.

[Figure: SDLC throughput analysis — value add segmentation. Bar chart comparing actual productivity against target productivity for each SDLC phase (Plan, Require, Design, Construct, Test, Implementation), with the nonvalue-add portion of each bar segmented out; vertical axis runs from 0 to 80.]

SDLC throughput analysis — Average demand rate revisited

According to Lean principles, any value-add step that takes longer than the demand rate (takt rate) is considered a time trap and must be improved. A “time trap” is any process step that injects delay into a process. The goal is to level the times across steps so that no step is longer or slower than any other. This is not the same as a capacity constraint or bottleneck, which means the process can’t operate at required levels.

SDLC phase | Time (months) | Budget (hours) | Budget (%) | Nonvalue add, mandatory (hours) | Monitor (% of SDLC) | Grand total (hours) | Cycle time* (hours/element)
Plan | 5 | 24,600 | 6% | 3,463 | 14% | 28,063 | 12
Requirements | 6 | 76,053 | 18% | 10,706 | 14% | 86,759 | 36
Design | 8 | 116,044 | 27% | 16,336 | 14% | 132,380 | 55
Construct | 6 | 78,368 | 19% | 11,032 | 14% | 89,400 | 37
Test | 9 | 94,691 | 22% | 13,330 | 14% | 108,021 | 45
Implement | 5 | 33,539 | 8% | 4,721 | 14% | 38,260 | 16
Total | | 423,295 | 100% | 59,589 | | 482,884 | 201

SDLC phase | Nonvalue add (months) | Nonvalue add (%) | Confidence factor (%) | Grand total (hours) | Cycle time* (hours/element)
Plan | 5 | 100% | 50% | 42,095 | 18
Requirements | 3 | 50% | 50% | 108,449 | 45
Design | 2 | 25% | 50% | 148,927 | 62
Construct | 4 | 67% | 50% | 119,349 | 50
Test | 4 | 44% | 50% | 131,786 | 55
Implement | 0 | 0% | 0% | 38,260 | 16
Total | | | | 588,867 | 245

* Note: Preliminary cycle time estimate was calculated by dividing budget hours per phase into total data elements (2,400) to be processed.
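Using the footnote's method, the cycle-time column of the first table can be reproduced by dividing each phase's grand-total hours by the 2,400 data elements; phases whose cycle time exceeds the cross-phase average stand out as candidate time traps.

```python
# Applying the footnote: cycle time per phase = grand-total hours / 2,400
# data elements, rounded to whole hours. Figures from the first table.
ELEMENTS = 2400
grand_total_hours = {
    "Plan": 28063, "Requirements": 86759, "Design": 132380,
    "Construct": 89400, "Test": 108021, "Implement": 38260,
}

cycle_time = {p: round(h / ELEMENTS) for p, h in grand_total_hours.items()}
assert cycle_time == {"Plan": 12, "Requirements": 36, "Design": 55,
                      "Construct": 37, "Test": 45, "Implement": 16}

# Phases running slower than the cross-phase average are candidate time traps.
average = sum(cycle_time.values()) / len(cycle_time)   # 201 / 6 = 33.5
time_traps = {p for p, t in cycle_time.items() if t > average}
```

By this check, Requirements, Design, Construct, and Test exceed the average, with Design the furthest out, which is consistent with the program's decision to tackle the design phase first.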


We first focused on the design phase, which had the most significant inefficiency and the highest level of nonvalue-add activity. To improve design phase productivity, we increased design activity parallelism, utilized EA modeling tools and standards, reduced lengthy delays to design activity start dates, introduced design phase entry and exit criteria, and minimized the number of meetings by creating/maintaining shared environments using an EA repository.

The test phase also provided opportunities for immediate capability improvement, as well as long-term benefits. Similar improvement steps were taken, with more focus on starting testing activities as early as possible and with more business subject matter advisor (SMA) engagement. By injecting incremental quality measures into the design and construction phases, we demonstrated that fewer defects would be propagated through the SDLC. EA structure combined with Lean focus provided a flexible, rapid approach to improving quality across the SDLC and resulted in implementation cost savings.

Lean action taken

•Utilized Deloitte’s EA framework to develop an end-to-end view of the project.

•Mapped Lean performance improvement levers to each SDLC phase.

•Conducted JADs for processes and systems to augment existing requirements and accelerate design analysis.

•Created and deployed architecture models in a single repository.

•Applied architecture principles to standardize documentation, identify significant risks, and surface potential re-use opportunities.

•Defined an approach to develop component-based bottom-up estimates.

•Enhanced earned value analysis with Lean productivity measures.

•Developed bottom-up (component-based analysis) business case to demonstrate the benefits of pulling the Lean performance improvement levers.

•Deployed Rational tools, processes, and methods to automate delivery.

– Developed test design execution process to adapt risk-based testing and defect management process.

– Configured Rational Quality Manager and Rational ClearQuest to support these processes.

Near-term Lean benefits

•More transparency of SDLC performance at a lower granularity of detail.

•Immediate re-use of system-level functional flows and integration views for related projects.

•Earlier risk mitigation action for troubled or complex projects within the program.

•More collaboration across projects within the program.

•Earlier start to test management and test execution activities.

•Identified more than $6 million (15 percent) waste in an approximately $40 million program based on preliminary findings.

References

1. John A. Zachman, The Framework for Enterprise Architecture: Background, Description and Utility, Zachman Institute for Framework Advancement (ZIFA), Document ID 810-231-0531.

IT efficiency and effectiveness


Establishing and maintaining IT agility

Chief information officers (CIOs) and information technology (IT) managers should anticipate and respond to rapidly evolving market trends. An IT organization that responds effectively as business dynamics change is seen as agile, and as a partner the business wants to engage. This article examines the business drivers that require an agile IT capability, the steps toward delivering an agile organization, and ways to measure the impact of IT agility.

Why agility?

Two forces — market drivers and technological change — typically drive the need for IT agility. Today's market demands require new and enhanced business capabilities. Merger and acquisition activity and the movement toward globalization require IT to adapt effectively to large-scale changes that place new demands on the business. Additionally, new Internet business models have altered the competitive landscape and pushed innovation higher up the business agenda. The readiness to adopt technology advances is also critical. Computing advances, biotechnology developments, artificial intelligence (AI), advances in human-computer interfaces, and even social networks and collaboration tools have created new business capabilities. These factors place IT at the foundation of business strategy.

The common view sees IT as separate from the business, rather than as an integrated business resource — in other words, a reactive service organization. Business units often develop their own IT solutions without involving the CIO and IT.

The CIO remains responsible for the security and protection of the enterprise's data, despite not having direct control of all IT assets or services; unfortunately, IT is partly responsible for this state of affairs. The traditional, process-driven IT development cycle can span years; meanwhile, businesses need IT to deliver solutions on tighter, market-driven timetables measured in months, weeks, or even days. IT agility can address this conflict by redefining the IT organization to push solutions out faster and to anticipate technology-based opportunities sooner. The agile IT organization positions itself to adjust to the rapid pace of change in the business environment.

Defining IT agility

IT agility can improve the IT organization's responsiveness to change, but a clear definition can be elusive. IT agility is the capacity to manage change in a timely, systematic, consistent, and repeatable manner. Typically, businesses change more rapidly than their IT organizations; the service gap between business needs and IT capabilities widens as business dynamics outpace IT innovation. An agile approach can make CIOs and IT teams more business-responsive, rather than business-reactive.

Establishing an effective IT organization

If you are a CIO driving change, being agile requires the organization to think in terms of nimble services, not traditional IT and systems. Organizations have different starting points, but the path should begin with a leadership commitment to agility for the sake of the business.

Authored by: Paul Krein, Eric Ritter, Kaushik Mukherjee, Nick Elkins, and Gary Corbett


Do not underestimate the effort required to make the change. You should consider being an "agile evangelist," spreading the word and defining and demonstrating how agility can change the future and help improve results across the organization. Develop supporters to help carry the message and then support the move to an agile environment.

There are three critical areas to transition: (1) people, (2) process and policy, and (3) technology.

People

People are critical to agility. Agility stems from an organization's ability to recognize, communicate, and respond rapidly to changing business needs and challenges. The first step is to create a blended IT organization that will demonstrate greater value, decrease costs, and speed decisions. By blending and modularizing the IT organization, teams can efficiently respond to immediate, high-priority business needs.

Process and policy

Consider enhancing operations control, structures, and processes; these efforts are the cornerstone of the agile organization. Build awareness of, and a focus on, change management. This competency can simplify efforts by proactively addressing potential roadblocks, reducing or terminating failed deployments, and delivering increased business appreciation and acceptance of completed IT deployments. Agility cannot exist without solid process understanding. Enhanced IT processes, such as the Information Technology Infrastructure Library (ITIL) and IT service management (ITSM), can provide greater control, increase efficiency, and drive quality. They also help IT respond to evolving customer needs, manage changes, guide responses to technical problems and incidents, and make continuous operational improvements.

As CIO, it is important to create a business architecture that maps business capabilities to technology architectures. Redesign process steps to have a single point of entry and exit, and create services that align with business capabilities to provide more flexible and reusable services.

An agile organization uses key agility indicators (KAIs) as its standard benchmark, grouping them into people, process, and technology. They can supplement or replace key performance indicators (KPIs), depending on context. Improper benchmarks hamper the ability to measure agility; failing to grasp and implement KAIs can create blind spots and compromise forecasting. KAIs can improve communication with business leaders and support more strategic, versus tactical, decision making. The result: more effective support for business leaders in achieving strategic goals efficiently.
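As a concrete illustration of how KAIs might be tracked, the sketch below groups a few hypothetical indicators into the three areas named above; the indicator names, targets, and values are invented for the example:

```python
from dataclasses import dataclass

# Hypothetical key agility indicators (KAIs), grouped into the three
# areas the article names: people, process, and technology.
@dataclass
class KAI:
    name: str
    area: str          # "people" | "process" | "technology"
    target: float      # desired value
    current: float     # latest measured value

    def on_target(self, lower_is_better=True):
        # For time-based KAIs (e.g., weeks to deliver), lower is better.
        if lower_is_better:
            return self.current <= self.target
        return self.current >= self.target

kais = [
    KAI("weeks_to_deliver_new_capability", "people", target=12, current=18),
    KAI("weeks_to_deliver_bi_report", "process", target=2, current=1.5),
    KAI("weeks_to_deploy_new_technology", "technology", target=8, current=8),
]

# Group and report per area, mirroring the people/process/technology split.
by_area = {}
for k in kais:
    by_area.setdefault(k.area, []).append(k.on_target())

summary = {area: all(flags) for area, flags in by_area.items()}
# e.g. {"people": False, "process": True, "technology": True}
```

A real implementation would feed `current` from service management tooling rather than hard-coded values; the point is only that KAIs become comparable once each carries an area and a target.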

Technology

Technology is a critical agility enabler, and an accelerated technology delivery capability can be achieved by adopting a flexible, modular enterprise architecture and associated IT infrastructure.

As CIO, you should consider investing in technologies that enhance and simplify the path to progress, focusing on a services and data-protection management approach. As part of the agile technology rollout, implementing a unified model environment for process owners and development teams helps business and IT work together to test agility ideas before delivery. This can build confidence and camaraderie between IT developers and business collaborators, improving chances for good outcomes.

The technology-enabled path

Combining people, processes, and technology to deliver agility requires the three parts to work in concert. Some insights from IT executives include:

•Use “anything, anywhere, anytime” architectures for support from design to production to order fulfillment. In this environment, technology systems serve as tools to deliver high-quality information and services to enterprise units and customers worldwide. Open architecture and enterprise databases provide real-time data in coordinated and transparent ways across the enterprise. Shareable information resources should be viewable and editable across the enterprise, as well as externally, to allow collaboration with partners, customers, or government agencies.

•Use modeling and AI to deliver real-time design support and effective customer solutions. Data mining and intelligent agents can identify and effectively respond to business needs. Modeling and AI can also test throughput, inventory, regulatory changes, market demands, raw material availability, and a broad range of other real world business issues prior to implementation. Enterprise-wide data access allows coordination and planning for new projects or business changes.

•Set a basic IT goal — establish simplified, end user-oriented IT solutions that offer self-sufficiency and reduce support needs. End-user self-sufficiency simplifies configuration and the use of IT solutions.


Empowered end users can easily create ad hoc teams with collaborators, both internal and external to the company structure. They can then collect and analyze information from across the enterprise and deliver rapid improvements to business functions.

•Deploy a test-bed agility project to effectively create awareness around the value of agility once training and critical technologies are in place. Nothing solidifies support of the process more effectively than completing the first agile implementation.

Winning agility stories

IT agility can also provide more effective support for the organization's vendor strategy. CIOs should demonstrate that they can improve forecasting for IT capacity. This can help procurement negotiate more effectively and potentially save money. And, as the organization grows and pursues new geographic markets, scalable IT infrastructure effectively supports new service offerings. A major discount airline and a major retailer, described below, offer two examples of winning agile implementations.

A major discount airline has traditionally focused on customer loyalty and overall business efficiency through the performance of its people. It is well known for its staff's flexibility in pitching in and rapidly turning aircraft between flights. Furthermore, this airline created career incentives to empower employees, giving them responsibility and authority for enhancing the customer experience, reducing costs, and growing market share. Management also recognized IT agility as critical to continued cost reduction and revenue growth. By prioritizing employee buy-in over speed of implementation, the airline integrated IT agility at the relevant levels of the company. Recent agility investments have led to a greater than 70 percent gain in revenue via the online customer portal and reduced customer relations staffing an estimated 32 percent through increased efficiency.

The discount airline’s lessons include:

•Effective integration of additional suppliers into the supply chain with IT support can improve efficiency and support for the organization’s vendor strategy.

•Efficient use of people means more effective planning and staffing, which helps avoid disruption and chaos when implementing large projects.

•Efficiency in systems usage and loading allows hardware and software allocation planned in sync with business cycles.

•Stronger procurement through more effective IT capacity forecasting strengthens negotiating and enables economies of scale.

A major retailer competes in a highly dynamic suite of businesses — from online retailing with an ever-evolving supply chain to public cloud services where competition changes almost daily. It adapts rapidly to new online market developments. The company stays ahead of the competition and keeps up with its phenomenal growth in geographies and new markets by maintaining IT agility and embracing critical new technologies early; for example, it was an early adopter of service-oriented architecture and Web 2.0. It has differentiated itself by allowing the business to rapidly adopt new business and delivery models and leverage its core capabilities to lead new businesses. The versatility and impact of its infrastructure have led analysts to deem it the "world's leading online retail platform." Its use of agile IT has enhanced revenue and provided:

•Shorter time-to-market and rapid focus on the next product.

•Shorter time-to-sale, freeing capital tied up in inventory.

•New products or sales channels to open up new markets.

•IT resources to extend production bandwidth and increase capacity.

•Scalable IT infrastructure to support service offerings in new geographies.

Measuring your agile organization

The companies above and other enterprises are clearly developing or enhancing their overall business agility, with their IT agility capabilities now front and center. As noted, many of the benefits are easily identifiable and support their businesses' successes. As part of developing IT agility, the organization should develop and communicate its capabilities, aligned with the enterprise goals.

Measuring IT agility effectiveness

The move to a more agile organization doesn't happen overnight. Instead, it is a series of steps that transforms the organization. Measuring IT agility effectiveness can show an organization how and where it is improving. Also, measuring return and identifying areas that still need improvement are critical to real transformation. CIOs should understand how new processes and agile IT methods correlate with performance and then use this knowledge to direct future efforts.


Critical agility characteristics and metrics

There are many ways to measure IT agility performance and change within an organization. CIOs should identify a few critical agility characteristics and metrics to follow as they implement IT agility practices. Characteristics are behaviors or activities that may not be easily measured, but whose effects can be felt. Metrics are discrete, measurable indicators of performance that can be reported. We have discussed three specific areas to focus on during the transition toward IT agility; they frame the key agility indicators and characteristics a CIO should watch as the organization shifts toward IT agility. The table below includes both characteristics and metrics that represent some of the most common IT agility measurements. An organization should not be limited to these metrics and should expand upon them as it develops a more effective understanding of its IT processes.

People

Characteristics:

•Speed — Rapid response to global requirements of new opportunities, capacity, sourcing arrangements, market demands, and regulatory changes.

•Adoption — The acceptance rate of the organization to new methods and agile IT practices.

Metrics:

•Time in months or weeks to deliver a new capability or functionality.

•Established short cycle times for regular (three-to-six-month) delivery cycles.

•Amount of budget and/or focus spent on operations and maintenance vs. innovation.

Process and policy

Characteristics:

•Intelligence — Integration of simulation and operational processing to determine the most advantageous means of fulfilling customer needs.

•Intelligence — Real-time decision support and execution, available internally and to suppliers.

Metrics:

•Modeling capability of the organization.

•Services focus and services catalogue.

•Time required to deliver business intelligence and reporting.

•Ability to leverage current IT investments for re-use in other programs or business ventures.

Technology

Characteristics:

•Access — Systems as a tool to deliver high-quality information and services to any facility or customer, anywhere, anytime.

•Access — Availability across all data users and locations.

Metrics:

•Enterprise architecture and business alignment.

•Agile processes implemented in tools and infrastructure.

•Application of IT disciplines and the service delivery life cycle reflected in technology and standards.

•Time to deploy new technologies overall.

Use a baseline of performance metrics to measure improvement. This sets the stage and helps show how business and IT are improving through IT agility adoption. Initial metrics may center on the KAIs and common key metrics listed above. However, future metrics should be tailored to the organization and reflect its priorities. As an organization shifts to more agile IT processes, it should see performance improvements in some areas leading directly to improvements in others, as shown in the diagram below.
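One way to act on a baseline is to compare it against later measurements of the same metrics. The metric names and figures below are hypothetical illustrations, not data from the article:

```python
# Hypothetical example: compare a metric baseline taken before the agility
# program with the same metrics measured afterward. All metrics here are
# lower-is-better (weeks of cycle time, weeks to deploy, percent of budget
# spent on operations rather than innovation).
baseline = {"cycle_time_weeks": 16, "deploy_time_weeks": 10, "ops_budget_pct": 70}
current  = {"cycle_time_weeks": 10, "deploy_time_weeks": 6,  "ops_budget_pct": 62}

# Percentage improvement per metric relative to the baseline.
improvement = {
    name: round(100 * (baseline[name] - current[name]) / baseline[name], 1)
    for name in baseline
}
# cycle_time_weeks: 37.5, deploy_time_weeks: 40.0, ops_budget_pct: 11.4
```

Reporting improvement as a percentage of the baseline keeps unlike metrics comparable on one dashboard, which is the practical value of taking the baseline first.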


Summary

IT agility is now a critical tool in the chief executive officer's war chest for maintaining competitiveness and accelerating market position in a dynamic global economy. The CIO is expected to develop and deliver an agile IT capability and, in many cases, must refresh or rebuild this capability across the IT organization.

Transitioning to an agile organization is not easy. First, the IT organization should determine what to own and what to build versus buy. There is no room for “not invented here” or “not supported here” mentalities. Second, existing standards may be inflexible, so standards and governance should move front and center with well-defined processes that enable change. Third, innovation projects may not be well structured and will require work. Remember, flexibility and test-bed efforts are necessary to help meet a changing competitive landscape.

Overall, CIOs should focus on tracking change and improving the implementation of new processes or methodologies rather than on a single model for advancement. Measurements allow CIOs to focus on the areas that need the most improvement and that deliver the most significant potential value to the enterprise. While performance measurements indicate how, and whether, the organization is improving, they are not the end result or sole purpose of IT agility. CIOs should use performance metrics to help direct the IT agility transformation. The CIO is the leader of the transformation process, and those who clearly understand and can articulate the value of IT agility will get a hearing, because IT agility is now recognized as a powerful capability in the cost-reduction arena. If the transformation is effective, the CIO and IT can earn, or advance to, a front-row seat at the enterprise strategy table.

[Diagram: a causal chain linking six Lean practices: reduce WIP limits, reduce cycle times, reduce task switching, increase feedback, increase quality, and increase team maturity.]

•Lowering the level of work in progress can have a significant, positive impact on the completion time of individual work items (cycle time).

•Less blocking can result in more rapid turnaround for individual work items.

•Shorter cycle times mean the same item is worked on longer in sequence, with less switching between tasks.

•With less task switching, context about work items stays fresh, which increases the feedback provided.

•Increased feedback improves the quality of the final result.

•Increased quality leads to an increase in overall team maturity and performance.
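The link between WIP and cycle time rests on a standard queueing result, Little's Law: average cycle time equals average WIP divided by average throughput. A minimal sketch with illustrative numbers (not from the article):

```python
# Little's Law: average cycle time = average WIP / average throughput.
def avg_cycle_time(wip_items, throughput_per_week):
    """Average weeks an item spends in progress, by Little's Law."""
    return wip_items / throughput_per_week

# A team finishing 5 items per week with 20 items in progress:
before = avg_cycle_time(20, 5)   # 4.0 weeks per item
# Halving WIP at the same throughput halves cycle time:
after = avg_cycle_time(10, 5)    # 2.0 weeks per item
```

This is why WIP limits sit at the head of the causal chain: they are the lever a team controls directly, and cycle time follows.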


The bottom line

Blending service delivery models to improve performance

Organizations today are under incredible pressure to improve internal operating efficiency, while simultaneously driving innovation. High-quality, timely service delivery is becoming increasingly critical for competitive differentiation. But delivering quality, cost-effective services is challenging for many reasons, including:

•Multiple, independent business units.

•Complex business architectures (even within single business units).

•Service context and visibility problems.

•Virtualization.

•Incomplete or broken change management practices.

•Rising information technology (IT) costs.

When service delivery changes are made in such environments, they often fail due to limited funding, unsupportive cultures, poor business processes, and complex architectures, to name just a few causes. A blended IT service delivery model can allow organizations to address these problems.

In this article, we take a multidimensional look at the IT archetypes and service fulfillment channels and analyze combinations that are critical to improving service delivery. While each IT archetype could be the basis for service delivery in any enterprise, an adaptive, synthesized approach that incorporates features of more than a single archetype could result in more effective, streamlined service delivery.

With an appropriate service management platform, operations management processes, technology automation, and industry-leading practices, a blended service delivery model brings people, processes, information, and technology together to deliver service excellence, manage risk, and realize the full value of effective service management.

Components of IT service delivery models

Overview

Conventional IT service delivery models range from utility-type models that provide basic IT services at low cost to business-driver models that "drive" the business by leveraging technology innovations. These models can be used by IT organizations to help meet the needs of the business; however, their effectiveness will vary based on the business need and the type of IT service being provided. Below are the typical IT service delivery models currently in the market:

Utility: The primary objective of the utility model is to provide low-cost services with a high degree of predictability. While this model is efficient and cost-effective, it lacks innovation and offers limited collaboration with the business. This model provides standard, canned services that are not customized for each user’s requirements. Utility services are typically focused on a specific unit, site, or region and are governed by well-defined service-level agreements. This service delivery model is often used for infrastructure services, such as desktop and end-user support, as well as application and network hosting.

Authored by: Russ Smariga, Ezrick Wiggins, Anuj Rajeev Nadkarni, and Mike Habeck


Supplier: The primary objective of the supplier model is to provide operationally focused, standardized IT services requested by the business. The hallmark of this service delivery model is reliable, on-demand service delivery at low cost. The services provided by this model are typically process-intensive and are delivered across a particular region or country. Although this organization provides reliable, on-demand IT services, it does not act as a strategic partner that would allow the business to unlock the full potential of the IT services at its disposal. This service delivery model is often used for services, such as knowledge management, general ledger accounting, and benefits administration.

[Diagram: Utility archetype. A limited portfolio of existing, highly predictable, well-understood services; a business unit with a need matching what the utility already supplies sends a service request, and services are supplied and consumed.]

[Diagram: Supplier archetype. A larger catalog of standard, well-understood services; a business unit with needs largely matching what the supplier already supplies sends service requests, and services are supplied and consumed.]

[Diagram: Enabler archetype. A flexible set of services backed by a responsive development organization; a business unit with needs not matching what the supplier already supplies collaborates with IT, producing new or modified service offerings, service requests, and services supplied and consumed.]

Enabler: The enabler model proposes close collaboration between the business and IT organizations to allow the business to derive additional value from the IT services offered. This service delivery model is knowledge-intensive and contributes to the development of leading practices and centers of excellence across the organization. This service delivery model is often used for services, such as application integration, quality assurance and testing, and data management.


Driver: The hallmark of the driver model is innovation. This model requires close collaboration between the IT and business organizations to promote alignment between the technology innovations and the organization’s business goals and objectives. This service delivery model is typically aligned with a function or business unit, has a management focus, and is decision- and action-intensive. It is often used for services, such as IT strategy and planning, portfolio and program management, and vendor management.

Framework for a blended IT service delivery model

What is a blended delivery model?

In its simplest form, a blended IT service delivery model is one that has characteristics of more than one of the four traditional models. By this definition, many, if not most, medium to large enterprises today operate with a blended model.

However, many of those organizations did not arrive at a blended model by following a conscious, strategic plan. Here is a typical example: an IT organization was operating in an effective Utility model when a new senior director of IT operations was hired. The new IT leader had a flair for engaging and understanding the business units, so she began to move the enterprise IT organization toward an Enabler service delivery model.

The fact that this service model shift was not part of a strategic plan does not make it wrong; on the contrary, it was good for IT and the business. However, consciously making such a transition as part of a strategic plan that is put in place by an enterprise governance team is likely to be far more beneficial, more sustainable, and reach more parts of the organization than an individual ad hoc effort.

Here is a more useful definition of a blended service delivery model:

A blended service delivery model is one with characteristics drawn from more than one of the IT archetypes to efficiently support the evolving needs of the business. Ideally, this blended model is created and managed by an IT governance body led by the chief information officer, and is showcased to both the business and IT.

[Diagram: Driver archetype. A deep understanding of business needs generates new service ideas and new or modified service offerings; a business unit with needs (some of them "undiscovered") not matching what the supplier already supplies collaborates with IT through service requests, and services are supplied and consumed.]


Elements of a blended delivery model

By definition, a blended service delivery model will have one or more characteristics from more than one of the IT service delivery archetypes: Utility, Supplier, Enabler, and/or Driver.

In addition, a blended service delivery organization is characterized by having a well-run, value-added governance model that addresses people, process, and technology/tools.

Across these three foundational items, the blended service delivery model will apply:

•A strategy,

•A vision,

•Guiding principles,

•Standards and architectures, and

•Risk assessment and evaluation.

An effective governance model is perhaps the most important element in building a blended service delivery model that achieves the organization's goals.

Implications, advantages, considerations, and risks of a blended IT delivery option

Increased effectiveness

Service delivery and support in many organizations has been built up over the years from custom-developed and emerging technologies that are held and managed in silos. The same can be said for information. The outcome is typically multiple sources of data and information with little or no integration or federation to support business service needs.

It is also common to find silos in other areas, including IT, operations, and business processes, as well as unautomated tasks and workflows for internal processes. As a result, operational efficiency and effectiveness can be difficult to achieve. To further complicate the issue, different roles within a company require different information, so the silo effect widens.

For a business to enhance overall performance and achieve its objectives, business and technology silos should be bridged. With the right blended service delivery model, organizations can achieve the integration and workflow automation required for effective delivery of IT and business services.

Reduced risk

A blended model also reduces the risk of diminished business performance, which could prevent the organization from meeting its business objectives. Therefore, implementing a sound governance model, identifying specific performance indicators, and, most importantly, implementing the proper combination of IT archetypes, service delivery models, and service fulfillment channels is critical.

Success story

One global high-technology manufacturer (GTM) adopted an effective blended service delivery model. This GTM has been in business for more than 15 years, has more than 18,000 employees, and conducts operations in more than 80 countries. Its business units include engineering, hardware, software, service, sales, and back-office functions. For most of its history, GTM's IT organization ran on a Supplier service delivery model, except for Utility service delivery of the computing resources required to support the engineering organization.

During a strategic planning cycle about six years ago, IT leadership decided to adopt a true blended service delivery model to achieve these goals:

•Reduce the annual IT spend by 16%,

•Increase the number of services delivered, and

•Maintain quality of service for existing services.

The GTM IT team developed a plan to evolve into an organization that used these models:

•Utility (for the engineering compute services).

•Supplier (for a variety of undifferentiated back-office services).

•Enabler (for business-specific services across business units).

•Driver (for specific services with aggressive time-to-market constraints).
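GTM's plan is, in effect, a mapping from services to archetypes. The sketch below shows one way to represent such a mapping; the service names are hypothetical, not GTM's actual portfolio:

```python
# Map each service (hypothetical names) to the archetype that delivers it,
# following the four-archetype split described for GTM.
archetype_for = {
    "engineering_compute":   "Utility",
    "payroll":               "Supplier",
    "benefits_admin":        "Supplier",
    "order_management":      "Enabler",
    "new_market_launch_app": "Driver",
}

def services_by_archetype(mapping):
    """Invert the service-to-archetype map into archetype-to-services."""
    out = {}
    for service, archetype in mapping.items():
        out.setdefault(archetype, []).append(service)
    return out

portfolio = services_by_archetype(archetype_for)
# portfolio["Supplier"] -> ["payroll", "benefits_admin"]
```

An explicit mapping like this is what turns a "blended model" from an accident of history into a governed design: each service has a declared delivery archetype that a governance body can review and change.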

Over the following years, GTM implemented an IT service delivery organization that reduced the operating budget every year to reach 55% of its original spend. Also, they delivered more services with higher morale and satisfaction among the IT team. The GTM IT leadership team attributes this boost in morale, at least in part, to the fact that the entire service delivery team has a definite role to play in both creating new services and maintaining the quality of existing services.
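As a rough check on that 55 percent figure: assuming about six annual budget cycles (matching the timeline in the story), the implied constant year-over-year reduction works out to roughly 9.5 percent:

```python
# If spend ends at 55% of the original after 6 annual reductions, the
# implied constant annual reduction r satisfies (1 - r)**6 = 0.55.
years = 6
final_fraction = 0.55
annual_reduction = 1 - final_fraction ** (1 / years)
# annual_reduction is roughly 0.095, i.e. about 9.5% per year
```

The six-year horizon is an assumption for illustration; the point is that a large cumulative reduction decomposes into a believable annual target.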


Accomplishments like these are not unique to GTM. Many organizations have effectively implemented blended service delivery models with positive outcomes.

Recommendations/final thoughts

Summary

There are several factors that determine an organization's choice of service delivery model, such as the type of business processes, the number of locations and operating units, the pace of business growth, and cost structures. A blended service delivery model provides a flexible and cost-effective option for delivering functional and business-enabling processes by leveraging an effective combination of shared services, outsourcing, and offshoring.

An effective, integrated service delivery approach should consist of the following activities:

1. Document and define the types of IT services that will be delivered by each component of the blended service delivery model;

2. Identify and define the types of organization structures required to support the blended service delivery model;

3. Define the high-level services, capabilities, and governance for those support structures;

4. Build a prioritized road map to transition from the current service delivery model to the future-state blended service delivery model;

5. Estimate costs for build-out and transition; and

6. Identify high-level performance indicators that will be utilized to measure the performance of the blended service delivery model.

Achieving organization-wide adoption of a new service delivery model takes time and may result in an interim dip in operational performance. However, companies can achieve the desired improvement in operational efficiency by adopting a strategic view of how the blended service delivery model can support the organization’s overarching goals and following a disciplined approach for implementing the selected model.

Authored by: Randy Steinberg and Tiffany Chen


The Information Technology Infrastructure Library (ITIL) and information technology service management (ITSM) are acknowledged as leading practices in information technology (IT), yet many organizations struggle to implement them effectively and/or realize value from their ITIL/ITSM initiatives. Some companies have succeeded, some have failed, and some are on their second or third attempt. Others may have undergone ITIL training and certification efforts but are unsure where to start. This article outlines 10 ITIL techniques that can be implemented within 10 months. Almost any IT organization can implement these techniques to boost IT value without a significant investment, in an effort to quickly kick-start a program or put life back into an existing one.

Moving ITIL toward business value

A common mistake organizations make is forgetting that ITIL is the road, not the destination. Like buying new software, implementing one or more processes or process improvements may improve IT capabilities, but it may not directly address IT's true business problems. Too often, it is simply assumed that business issues will go away once ITIL processes are in place. If ITIL efforts are not closely tied to business outcomes and results, implementation programs can stall or die because the business cannot see the program's value.

Worse, many organizations attempt a full-process implementation, believing multi-year programs must become internalized before benefits can be achieved. Business priorities and focus tend to shift constantly after nine to 12 months; as a result, ITIL efforts quickly fall off the radar. The implementation approach should include short-term wins through tightly focused efforts directly tied to measurable business value.

This article identifies 10 ITIL techniques that can be implemented within 10 months, the potential business value that can be obtained, and guidance on how to measure results. The 10 efforts described have been effectively enacted at numerous organizations. For organizations that are just getting started in their ITIL journey, or even those already underway, these can provide a starting point to gain program traction or keep the momentum going.

Think agile

Implementation of ITIL practices is a program, not a project. Companies experiencing results with ITIL tend to think in terms of a repeatable approach:

1. Take a measurement of current-state processes and IT performance indicators.

2. Identify quick-win efforts that provide measurable business value in a defined time period.

Charting the course for IT success with ITIL
Targeting 10 actions in 10 months

Targeting business value with ITIL:

• Reduce unplanned labor

• Reduce incidents

• Allow IT to focus on high-value projects instead of firefighting

• Increase business satisfaction with IT services

• Reduce nondiscretionary spend

3. Implement the pieces and parts of ITIL processes that can deliver that value.

4. Measure the results of the efforts undertaken.

5. Repeat the cycle, identifying further quick wins and efforts for the next time period.

Repeat this approach over and over until IT services noticeably improve, ITIL practices get embedded into the IT organization, and the business becomes more confident in IT and its capabilities.

10 ITIL activities to target with significant business value

Target #1: Make IT easier to work with

ITIL area: Request fulfillment

Activity: Pick the top-10 common user requests, and reduce the time it takes to fulfill them. Focus on creating a standard fulfillment process, establishing fulfillment roles and responsibilities, and setting user expectations with published delivery targets. Also consider using techniques such as Lean Six Sigma practices and automation to streamline fulfillment activities.

Business value: Increased user satisfaction with IT, and business needs addressed more efficiently.

How results can be measured: Customer satisfaction levels, percent of requests fulfilled within delivery targets.
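As a sketch of the delivery-target measure, the following assumes hypothetical request types, target hours, and fulfillment records (none appear in the text):

```python
# Hypothetical delivery targets (hours) for common request types; the
# request names and targets are illustrative assumptions.
TARGET_HOURS = {"password_reset": 4, "software_install": 24, "new_laptop": 72}

def percent_within_target(fulfilled):
    """Share of fulfilled requests (type, hours taken) that met their published target."""
    met = sum(1 for req_type, hours in fulfilled if hours <= TARGET_HOURS[req_type])
    return round(100.0 * met / len(fulfilled), 1)

fulfilled = [("password_reset", 2), ("new_laptop", 96), ("software_install", 20)]
print(percent_within_target(fulfilled))  # 2 of 3 within target -> 66.7
```

Reported weekly or monthly, this single percentage gives users a concrete view of whether published delivery targets are being met.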

Target #2: Assemble a root-cause toolkit

ITIL area: Problem management

Activity: Assemble and document a compendium of techniques for identifying root causes of incidents. Then initiate a training program on these techniques for IT staff, with the goal of improving everyone’s skill in identifying and resolving the underlying problems that cause incidents. Sources for techniques can include ITIL books, the Internet, or outside support.

Business value: Increased problem-solving skills and capabilities among IT staff and fewer incidents.

How results can be measured: Percent reduction in incidents, faster incident resolution times.

Target #3: Measure the impact of production changes

ITIL area: Change management

Activity: Measure the actual impact of changes introduced into the live production environment, and make these results highly public and visible across the entire IT organization. Examples of items to measure might include percent of changes resulting in incidents, percent of changes that failed, and percent of changes that were rescheduled or missed their delivery windows. Take these kinds of measurements and report on them by IT department. Publishing these results has been shown to drive positive IT behavior more effectively than more-sophisticated processes.

Business value: Improved change quality, fewer change-related incidents, and greater accountability for changes across the IT organization.

How results can be measured: Percent reduction in incidents, percent of changes completed on time, percent of changes completed without error.
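The per-department change measurements above can be sketched as a simple scorecard; the record fields and sample data are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical change records: (department, caused_incident, failed, missed_window).
def change_scorecard(changes):
    """Per-department percentages for the change-impact measures described above."""
    counts = defaultdict(lambda: {"total": 0, "incident": 0, "failed": 0, "missed": 0})
    for dept, incident, failed, missed in changes:
        c = counts[dept]
        c["total"] += 1
        c["incident"] += incident  # booleans count as 0/1
        c["failed"] += failed
        c["missed"] += missed
    return {dept: {k: round(100.0 * c[k] / c["total"], 1)
                   for k in ("incident", "failed", "missed")}
            for dept, c in counts.items()}

changes = [("network", True, False, False), ("network", False, False, True),
           ("apps", False, False, False), ("apps", False, True, False)]
print(change_scorecard(changes)["network"])  # {'incident': 50.0, 'failed': 0.0, 'missed': 50.0}
```

Publishing these percentages by department, as the activity suggests, is what creates the visibility that drives behavior.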

Target #4: Measure and reduce unplanned IT labor

ITIL area: Continual service improvement

Activity: Start by taking a baseline of how much time IT spends fixing incidents, reworking failed changes, or responding to user complaints. Measure these through a time-reporting system or, more simply, survey IT staff to estimate the percentage of hours spent weekly on these kinds of tasks. Identify the root causes for these activities and target up to three small activities that directly work to reduce this kind of labor.

Business value: Lowers nondiscretionary IT labor costs, reduces the amount of nonvalue IT labor, and allows IT to focus on the projects and improvements that the business views as priorities.

How results can be measured: Reduction in percent of available IT labor spent on nonvalue tasks, and reduction in nondiscretionary IT labor costs.
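The survey-based baseline can be turned into numbers with a few lines; the weekly hours and loaded labor rate below are illustrative assumptions:

```python
# Hypothetical survey results: percent of each staff member's week spent on
# incidents, rework of failed changes, and user complaints.
def unplanned_labor_baseline(survey_pcts, weekly_hours=40, loaded_rate=75):
    """Average unplanned share of IT time and its weekly labor cost for the surveyed staff."""
    avg_pct = sum(survey_pcts) / len(survey_pcts)
    weekly_cost = len(survey_pcts) * weekly_hours * (avg_pct / 100) * loaded_rate
    return avg_pct, weekly_cost

avg_pct, weekly_cost = unplanned_labor_baseline([30, 20, 50, 40])
print(avg_pct, weekly_cost)  # 35.0 4200.0
```

Repeating the same survey after each improvement cycle shows whether the targeted activities actually reduced unplanned labor.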

Target #5: Implement a basic configuration management system

ITIL area: Configuration management

Activity: Identify authorized sources of configuration information, lock them down under change management, and assign owners to each information artifact. Implement a schema or filing system to quickly find this information and utilize the ITIL configuration management process to maintain it. Examples of sources to capture might be relevant configuration files or databases, network diagrams sitting in someone’s drawer, solution architecture documents, or vendor manuals.

Business value: Reduces wasted labor and time spent looking for configuration information, provides configuration information that can be trusted, and reduces incidents related to wrong configuration information.

How results can be measured: Reduction in incidents caused by configuration errors, reduction in request fulfillment times, and reduction in time it takes to finish application and infrastructure projects.

Target #6: Identify operating risks and implement mitigation strategies to reduce those risks

ITIL area: Availability management

Activity: Assemble a core team to inventory known operating risks for IT services. Identify each risk, likelihood of occurrence, and potential impact (high, medium, or low). Then identify a mitigation strategy for the top risks identified and assemble an availability plan that outlines the activities required to implement the strategy. Target two or three operating risks that can be quickly addressed at least twice a year. Examples of risks to search for might include single points of failure within the IT infrastructure, lack of IT skills in relevant areas, data center environmental factors, or capacity issues with hardware and software.

Business value: Increases the overall availability of IT services, and proactively reduces likelihood of future incidents.

How results can be measured: Overall reduction in incidents, and percent of services not covered with a proactive availability plan.
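One way to rank the risk inventory is a likelihood-times-impact score; the 1–3 numeric scale and the sample risks below are illustrative assumptions mapped from the high/medium/low ratings described above:

```python
# Hypothetical risk register: (description, likelihood, impact), each rated
# 1 (low) to 3 (high).
def top_risks(register, n=3):
    """Rank risks by likelihood x impact; mitigate the top n first."""
    return sorted(register, key=lambda r: r[1] * r[2], reverse=True)[:n]

register = [("core switch is a single point of failure", 2, 3),
            ("no backup DBA skills on staff", 3, 2),
            ("data center cooling near capacity", 1, 3),
            ("storage arrays nearing exhaustion", 3, 3)]
print(top_risks(register, n=1))  # [('storage arrays nearing exhaustion', 3, 3)]
```

Targeting the top two or three scored risks each cycle matches the cadence suggested in the activity.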

Target #7: Start trending incidents

ITIL area: Problem management

Activity: Track incidents by category such as service, hardware/software platform, or application. Implement a program to review incident logs on a scheduled basis to identify trends and repeat incidents. Use the ITIL problem management process to identify root causes and raise changes and project activities to resolve each underlying cause found. Communicate these, along with resolution activities, on a regular basis, to IT executives, management, and the service desk.

Business value: Reduces incidents and service outages, reduces unplanned labor, reduces reactionary firefighting, and increases end-user satisfaction with IT services.

How results can be measured: Percent reduction in incidents, percent reduction in incident resolution times, and percent increase in end-user satisfaction levels.
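Trending incidents by category can start as simple counting over the incident log; the categories and configuration items below are hypothetical:

```python
from collections import Counter

# Hypothetical incident log entries: (category, configuration_item).
def trend_incidents(incidents, top_n=3):
    """Count incidents per category to surface repeat offenders for problem management."""
    return Counter(category for category, _ in incidents).most_common(top_n)

incidents = [("email", "mx01"), ("email", "mx01"), ("network", "sw-core"),
             ("email", "mx02"), ("erp", "sap-prd")]
print(trend_incidents(incidents))  # 'email' tops the list with 3 incidents
```

The top categories become candidates for root-cause analysis under the ITIL problem management process.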

Target #8: Identify IT service delivery costs

ITIL area: Financial management

Activity: Assemble an inventory of IT services and the current IT budget. Logically recast the costs listed in the budget against each IT service. Then identify demand factors that consume each service such as employees, orders, sales, customers, or goods sold. Determine the current volumes of each demand factor, and divide the total cost for each service by the demand volume to get a unit cost for each service (e.g., IT cost for e-mail is x dollars per employee; IT cost per sale is x dollars).

Business value: Creates clear visibility into how IT costs are being consumed, providing information the business needs to prioritize IT investments and understand the cost impact of business decisions on IT.

How results can be measured: Percent of IT services with known unit and total costs, business satisfaction with IT services, and percent of wasted costs identified for services that are not needed or provide little value.
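The unit-cost arithmetic described above can be sketched as follows; the services, costs, demand factors, and volumes are invented for illustration:

```python
# Hypothetical recast budget: total annual cost per IT service and the current
# volume of the demand factor that consumes it.
services = {
    "e-mail":      {"total_cost": 600_000, "demand_factor": "employees", "volume": 5_000},
    "order entry": {"total_cost": 900_000, "demand_factor": "orders",    "volume": 300_000},
}

def unit_costs(catalog):
    """Divide each service's total cost by its demand volume to get a unit cost."""
    return {name: s["total_cost"] / s["volume"] for name, s in catalog.items()}

print(unit_costs(services))  # {'e-mail': 120.0, 'order entry': 3.0}
```

With unit costs in hand, the business can see directly how a decision (e.g., adding 500 employees) changes IT spend.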

Target #9: Implement a production readiness certification process

ITIL area: Release and deployment management

Activity: Institute a production readiness certification process that formally approves releases for production use before they go live. Using the ITIL process framework as a requirements guide, test releases against use cases during the design stage and operationally test them once staged for production. Institute a readiness checklist and a series of quality assurance and testing gates that must be passed before releases are allowed into production. Integrate readiness activities with the application development life cycle.

Business value: Confirms releases will meet designed business and operational requirements and operate incident-free the first time they go into production.

How results can be measured: Number of incidents related to new releases going into production, and percent of releases transitioning without error.

Target #10: Identify services and service delivery models

ITIL area: Service portfolio management

Activity: Formally document and inventory IT services being delivered to the business. For each service identified, document a delivery model that identifies the types of hardware, software, network, storage, people, and other necessary elements that are associated with delivery of each service. Also document delivery targets, projects, strategies, and other activities that are related to each service, both in flight and planned. Integrate this information into an IT service portfolio.

Business value: Formally identifies the IT services delivered and serves as a basis for aligning IT priorities with the business. Also provides IT with a foundation for end-to-end monitoring and for configuration management, supporting quick assessment of the impact of incidents and changes as well as the development of IT strategies and plans.

How results can be measured: Percent of IT services included in the portfolio, and percent of IT strategies supported by the portfolio.
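A minimal sketch of a service delivery model record and the portfolio-coverage measure; the schema and field names are illustrative assumptions, since the text does not prescribe one:

```python
from dataclasses import dataclass, field

# Hypothetical portfolio record capturing the delivery-model elements named above.
@dataclass
class ServiceDeliveryModel:
    name: str
    hardware: list = field(default_factory=list)
    software: list = field(default_factory=list)
    delivery_targets: dict = field(default_factory=dict)
    planned_projects: list = field(default_factory=list)

def portfolio_coverage(portfolio, known_services):
    """Percent of known IT services documented in the portfolio."""
    documented = {s.name for s in portfolio}
    return 100.0 * len(documented & set(known_services)) / len(known_services)

portfolio = [ServiceDeliveryModel("e-mail", hardware=["mx01", "mx02"],
                                  software=["mail server"],
                                  delivery_targets={"availability": "99.9%"})]
print(portfolio_coverage(portfolio, ["e-mail", "order entry"]))  # 50.0
```

Even this minimal structure supports the coverage metric above and gives later efforts (monitoring, impact assessment) a place to attach.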

Further ways to enhance your efforts

Do an ITIL/ITSM maturity assessment before embarking on an ITIL/ITSM-based improvement initiative

Many organizations fail to realize value from their ITIL/ITSM initiatives because they lack a documented baseline against which to measure progress during the improvement initiative. Benchmarking an organization’s maturity rating can typically be done within two months, and benchmarking ITIL/ITSM-based processes and functions allows comparison of IT’s performance against competitors, sister organizations, and the broader industry.

Seriously consider metrics and reporting — centralization and dashboards

Many organizations fail to realize value from the data and metrics already being produced in their organizations. Useful performance metrics are often found in existing reporting tools, and even a relatively immature monitoring program can generate useful data for reporting on service measurement. Creating a reporting function within an organization can aid in gathering disparate reports and centralizing reporting. Consider using a centralized data repository leveraging online tools.

Drive change using a top-down approach

Make sure that your improvement initiatives have active senior/executive-level sponsors who will communicate and mandate participation. Many ITIL/ITSM-based improvement initiatives fail because they are implemented and socialized using a grassroots approach.

Set realistic goals

One of the common implementation missteps is trying to do more than the organization is capable of achieving. Determine the approach for achieving your goals in each implementation cycle, and choose an approach that is either narrow and deep or wide and shallow.

Keep the bigger picture in mind

Too many tactical efforts may cloud the overall IT service management vision. Spread out low-hanging-fruit efforts to coincide with resource availability from the longer, more strategic projects. This shows progressive accomplishments instead of long stretches without significant results. Confirm that quick-win projects and tasks are aligned to the strategic vision.

Keep communicating

Sometimes ITSM/ITIL initiatives lose steam and focus because results are not realized early on. Set expectations by developing a “road show” that is presented to executives and senior management to demonstrate what is being done, when, and the intended benefits. Develop measurable results — they are critical to keeping a process-based initiative on track.

Don’t reinvent the wheel

Capitalize on improvement initiatives conducted by other organizations in your company, leveraging what each organization has to offer. Also consider using outside resources and sources that can bring more ideas and real-world experiences to the table.

Conclusion

Charting the course for IT success with ITIL should be done as a series of improvement efforts that take place over time. Combined with an overall ongoing improvement program, ITIL can provide noticeable business value and momentum for almost any IT organization undertaking its IT service management journey.

Measurements are key to success; take a baseline on day one and measure the results of each effort to demonstrate progress.

What you don't know will hurt you
Intelligent approaches to asset, configuration, and risk management

Authored by: Bill Herwig, Doug Day, Chris Thomas, and Sandeep Lele

[Figure: asset management sits at the intersection of IT, real estate, procurement, HR, and finance, spanning strategy, data, process, technology, and people.]

IT organizations have traditionally deployed critical business applications that support business needs through automated processes, workflows, and operations. While IT has facilitated many business needs over the past decade, it has not focused its energy on becoming an effective partner to the business. This comes as no surprise: in the past, there have been limited tools and processes to manage IT assets effectively. This is alarming, since Deloitte Consulting LLP has found on its asset management projects that IT asset costs can account for 50% of the total enterprise asset base and up to 80% of the total capital expenditure. In order to provide more effective service quality at the lowest cost, IT organizations have started leveraging technical innovations and service management processes to support a more effective and efficient IT asset management (ITAM) framework.

Many organizations have faced the following asset management challenges:

•Lack of enterprise IT asset management processes and standards.

•Lack of an adequate central IT asset repository.

•Fragmented and manual asset-tracking mechanisms.

•Organizational silos that limit or prohibit technology process integration and data integrity.

•Lack of automated processes to collect or identify changes to inventory information.

•Lack of an executive mandate to comply and enforce asset management processes.

•High expectations of an all-in-one asset management tool.

•Poor reporting definition.

•Lack of a mechanism for maintaining asset data integrity.

A carefully designed and effective ITAM solution can provide the following benefits:

•Increased speed and accuracy in identifying and implementing cost reduction opportunities.

•Increased software and hardware security protection.

•Implementation of centralized and repeatable asset management processes related to change, incident, problem, configuration, availability, and capacity management.

•Support for regulatory and vendor license compliance.

•Basis for executives to make informed tactical and strategic IT decisions.

To achieve these ITAM benefits, businesses should implement asset management processes and ITAM tools. The following steps should be taken at a minimum:

•Identify asset types to be tracked. Track facility locations, hardware, software and hardware licenses, and maintenance agreements to understand the full cost of ownership.

•Define and implement IT asset-related processes. Implement ITAM processes to leverage asset-related information and tools that can increase the service effectiveness and help reduce the total cost of ownership (TCO).

•Configure and leverage asset management tools and automation. If the proper ITAM tools are selected and configured in accordance with standards, specific and timely reports will likely be available to provide information to control and manage the organization’s assets.

Asset management framework

An ITAM framework provides a set of business practices that join financial, contractual, and inventory functions to support asset life cycle management and strategic decision making. It provides an overview of the asset life cycle, including interfaces to IT and non-IT systems (shown in Figure 2.1). The life cycle begins with the needs analysis and ends when the asset reaches retirement.
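The life cycle just described can be sketched as a small state machine. The five stages below are a deliberate simplification of the fuller life cycle, and the transition map is an illustrative assumption:

```python
from enum import Enum, auto

# Simplified stages from needs analysis through retirement; real frameworks
# track more stages (requisition, ordering, receiving, assignment, etc.).
class AssetStage(Enum):
    NEEDS_ANALYSIS = auto()
    ACQUISITION = auto()
    DEPLOYMENT = auto()
    MAINTENANCE = auto()
    RETIREMENT = auto()

ALLOWED = {
    AssetStage.NEEDS_ANALYSIS: {AssetStage.ACQUISITION},
    AssetStage.ACQUISITION: {AssetStage.DEPLOYMENT},
    AssetStage.DEPLOYMENT: {AssetStage.MAINTENANCE, AssetStage.RETIREMENT},
    AssetStage.MAINTENANCE: {AssetStage.RETIREMENT},
    AssetStage.RETIREMENT: set(),
}

def advance(current, target):
    """Enforce that an asset moves through the life cycle in order."""
    if target not in ALLOWED[current]:
        raise ValueError(f"invalid transition: {current.name} -> {target.name}")
    return target
```

Modeling the life cycle explicitly, even this coarsely, is what lets a repository flag assets that skip stages (e.g., deployed hardware with no acquisition record).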

The ITAM framework establishes the primary point of accountability for the IT asset life cycle management. Development and maintenance of policies, standards, processes, systems, and measurements can allow the organization to manage the IT asset portfolio with respect to risk, cost, control, governance, compliance, and business performance objectives. The framework is also designed to provide an integrated solution that works with specific organizations involved in the asset procurement, deployment, management, and expense reporting.

The objectives of a well-designed ITAM framework are to:

•Establish ITAM discipline with roles/responsibilities and an effective supporting governance structure.

•Manage IT assets throughout their entire lifecycle.

•Define, communicate, and support asset management policies and processes.

•Provide a centralized, scalable asset management repository to track core data for enterprise IT assets.

•Implement discovery capabilities to allow for physical and operational reconciliation.

•Define “touch points” with select functions, such as procurement, finance, and ITIL/CMDB.

•Enhance financial management capabilities to manage charge-backs and allocations.

•Manage asset groups, such as desktops, laptops, servers, storage, network, application, and hardware/software licenses.

Figure 2.1 IT asset management components and life cycle

[Figure 2.1 depicts a 12-stage ITAM life cycle — needs analysis, requisition, finance, acquisition, ordering, receiving, configuration, assignment, deployment/usage, payment process, maintenance, and retirement — supported by the service desk and by a utilization repository that links technical and business information (asset management, software delivery, software metering, performance, and network management data; inventory, financial, contractual, and software licensing records; proof of ownership and evidence of usage). The life cycle interfaces with procurement, fixed asset, human resources, and accounts payable systems, and with ITIL processes such as security, DR, incident, problem, SLM, change, and configuration management.]

Illustrative ITAM framework approach

In order to extract the financial, tactical, and strategic benefits that an enterprise-wide ITAM capability can provide, a systematic approach using ITIL is necessary. With such an approach, organizations can improve ITAM functions and achieve greater enterprise-wide asset visibility and management.

The illustrative ITAM framework approach shown in Figure 2.2 highlights a phased approach with measurable results through three phases: foundation, efficiency, and excellence. However, this approach should be tailored based on additional analysis, perspectives, and prioritization of activities.

Figure 2.2 Illustrative ITAM framework approach

ITAM incorporates the physical, financial, and contractual attributes of an IT asset and associates this information with the IT asset inventory and configuration management. This can allow ITAM to achieve the following:

•Reconciliation of asset-tracking data with contractual and financial data.

•Establishment of cost-of-ownership through tracking ownership details.

•Reduction of liability exposure and improvement in efficiency for software/hardware and service provider contracts by linking contracts to asset-tracking data.

•Reduction of IT costs by aligning usage to contractual terms.

Key steps in implementing an ITAM framework should include:

•Discover and analyze current ITAM inventory data accuracy and the associated processes that support an accurate asset inventory.

•Discover and analyze current ITAM processes throughout the enterprise.

•Launch feasible improvement efforts for current ITAM processes, focusing on increasing efficiency, accuracy, and reporting.

•Begin development of new ITAM capabilities, including processes, policies, procedures, tools, and standards that align with current industry trends.

As each enterprise-wide ITAM capability is embraced, additional benefits can be realized, including reducing costs, enhancing management and security capabilities, improving regulatory compliance measures, and providing greater levels of information to support tactical and strategic management decisions.

Asset tools and software

Asset tools are useful for large enterprises in tracking and maintaining their asset inventories, as large enterprises often grapple with identifying their full asset inventory. In order to make informed decisions on how to manage its assets, an enterprise should first understand its landscape. Asset tools can largely be categorized as the following:

Asset discovery tools — This type of tool usually utilizes automated scan methods to recognize hardware and software configurations and provide a physical inventory baseline. Depending on the tool’s capabilities, it can provide basic data (such as a UNIX server, Oracle 10g database, etc.) or conduct a deep dive to identify specific configuration details.

[Figure 2.2 depicts a top-down, phased implementation approach: a vision shaped by guiding principles and current realities drives program execution through conceptual framework, requirements and gap analysis, policy and process design, technology solution, and data migration and transformation across three phases, supported by program leadership and by change leadership and knowledge management. The scope spans geography, asset classes, ITAM life cycle elements, functions and processes, and IT operations.]

Asset tracking tool — Once the asset has been discovered or identified, this tool is designed to provide ongoing tracking of each asset throughout its life cycle. This information allows reports to be generated that help determine TCO and contractual compliance, and provides operational data for strategic decisions. Additionally, it can record changes made to an asset for comparison against the change management tool.

Remote management and software deployment — Distributing software from a central application allows applications to be more securely controlled, supporting installation of correct versions and accurate license information for hardware assets.

Data integration — Collecting and analyzing hardware and software data is the essential building block for asset management. However, this information must be fed into management applications in order to be leveraged.

Asset data types

The ability to track the appropriate asset data is vital to implementing ITAM successfully; by identifying the correct set of data, IT organizations are able to make informed financial decisions. Using the aforementioned asset tools, IT organizations typically track these types of data:

Hardware characteristics — Hardware profiles are used for standardization efforts, migration planning, and diagnostics. Identifying the different types of hardware characteristics allows the company to easily determine each asset’s life cycle status.

Software configuration — Software profiles track the types of software, license management, standardization, patch/upgrade management, and conflict resolution. Tracking software configurations helps ensure adherence to company standards.

Contract — Tracking contracts identifies the support profile for contract renegotiations, service level agreement adjustments, and vendor preferences. Companies that fail to adhere to contracts usually incur additional costs, either through fines or lost opportunities.

Cost — Determining costs includes gathering data needed for cost negotiation, including financial profiles, vendor preferences, and financial alternatives.

Inventory management is one of the most critical functions of ITAM. This subprocess should be used to spark the development of the additional five ITAM subprocesses. By doing so, the organization can begin the effort to implement a holistic, centralized ITAM framework.

Asset-related processes

ITAM efforts often fail due to decentralized processes and lack of process compliance. An ITAM implementation must focus on standard processes surrounding the asset life cycle. Figure 2.3 shows the ITAM framework from the perspective of specific subprocesses.

Figure 2.3 ITAM framework and specific subprocesses

ITAM framework compliance allows IT organizations to employ standardized processes focused on managing the asset life cycle — from procurement to disposal. Mature ITAM capabilities typically incorporate the following subprocesses:

•Inventory management — acquiring and retiring assets according to a schedule designed to improve ROI.

•Contract management — effectively managing contractual agreements of high-value assets to provide information to maintain integrity, business relationships, and warranties.

•Configuration management — supporting security and functionality through timely updates and configuration checks.

•License management — maintaining data on license tracking to meet legal requirements and avoid unnecessary licensing costs.

•Compliance management — providing accountability and management to support compliance with major regulatory mandates, such as Federal Information Security Management Act, Sarbanes-Oxley Act, and Health Insurance Portability and Accountability Act.

•Asset standardization — standardizing hardware and software to provide easier and potentially more cost-effective asset management (e.g., installation and maintenance).

Improving IT finances

Shrinking IT budgets and staff reductions increase the pressure to analyze purchases and sustain IT spend. Understanding IT finances is the first step to more effectively positioning procurement and asset management programs to deliver high-quality IT value to the organization.

Managing the financial aspect of IT assets should include several specific activities:

•Modeling costs prior to asset acquisition.

•Tracking, monitoring, and managing the asset throughout its operating life.

•Continuously monitoring assets for effectiveness and cost efficiency.

•Disposing of assets before they become liabilities.

For large organizations, this typically requires a full-time IT asset manager and a supporting team to exercise control over specific facets of the asset life cycle. The asset manager works with involved parties to help develop a detailed plan to coordinate asset life cycle activities and the overall management of IT investments.

Since IT organizations often deal with limited information, financial decisions related to assets may suffer. For example, stakeholders in the asset approval process can only make informed decisions if the information provided is correct and timely. By applying the ITAM framework, organizations gain the information to more effectively manage IT assets as a single portfolio across the enterprise. This not only provides higher visibility into the asset mix, but also increases leverage in vendor negotiations.

License compliance

Many organizations have a team of finance, legal, and IT resources to prepare for audits and maintain hardware and software licenses. As a result, many organizations are taking a deeper look into just how money is spent on these licenses.

This section discusses how to better leverage existing enterprise license agreements (ELA), reharvest software licenses, improve software maintenance, and deal with risk management (audits).

Enterprise license agreements

In today’s cost-conscious environment, finance and legal often review ELAs; however, their responsibility does not typically encompass pricing. A licensing deal team typically consists of finance, procurement (SCM), engineering/architecture, product owners, and legal resources to maintain contract terms and align existing and future ELAs with the business’s strategy.

Taking these steps is easier said than done. There needs to be relevant executive oversight to drive participation from select stakeholders. This requires commitment from the business to carefully forecast demand and develop particular metrics and financial templates for analysis.

Software reharvesting

Organizations can leverage buying power and reduce spend by establishing a process that continually reviews the pool of available purchased licenses. When a new license request is generated, the pool of available licenses is reviewed; new licenses are purchased only if specifically required. Once a license is no longer needed, it is returned to the pool.

Software reharvesting improves utilization of purchased software, improving control and repurposing of software assets. However, developing an effective process is complex, requiring an understanding of licensing terms and tax codes and determining how to address charge-backs.
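The reharvesting process described above can be sketched as a simple pool. The product name and counts are illustrative, and a real implementation must also honor license terms, tax codes, and charge-backs as noted:

```python
# Minimal reharvesting-pool sketch, assuming the license terms permit
# transferring seats between users.
class LicensePool:
    def __init__(self, product, purchased):
        self.product = product
        self.available = purchased   # unassigned seats already owned
        self.new_purchases = 0       # seats bought because the pool ran dry

    def request(self):
        """Fulfill from the pool first; buy only if specifically required."""
        if self.available > 0:
            self.available -= 1
        else:
            self.new_purchases += 1

    def reharvest(self):
        """Return a seat that is no longer needed to the pool."""
        self.available += 1

pool = LicensePool("office suite", purchased=2)
for _ in range(3):
    pool.request()      # the third request triggers a new purchase
pool.reharvest()
print(pool.available, pool.new_purchases)  # 1 1
```

Tracking `new_purchases` against requests fulfilled from the pool quantifies the spend avoided by reharvesting.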

Software maintenance analysis

Software maintenance analysis is often overlooked because software maintenance is an annual cost and corporations typically assume that the maintenance is required. The first step is to determine and review the annual software maintenance spend. Next, define tiers and criteria, and assign current vendors to the applicable tier. Then, identify opportunities to migrate services to a more effective vendor, following these steps:

•Renegotiate with vendors that provide services globally to leverage global buying power.

•Review and optimize service levels in the software contracts.

•Analyze costs to build an application in-house versus purchasing it from an outside vendor.

•Look for opportunities to use a standard product to replace a nonstandard product from the same or a different vendor.

Risk management

Many organizations do not know if they are in compliance with their contracts. Unfortunately, license compliance reviews by software companies are on the rise, and industry analysts continue to predict an increase in vendor audits. Software vendors use license compliance audits to improve their margins, up-sell additional licenses, and generate revenue from penalties and fines.

Combating risk management

Overcome the fear of license compliance audits by implementing controls that limit legal and financial exposure and by developing reporting tools and processes to determine license usage. Organizations should also work with the deal team to streamline responses to vendor claims. The client stories below illustrate the benefits of risk management.
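The usage-reporting control can be sketched minimally: compare discovered installations against purchased entitlements. The product names and counts are illustrative assumptions.

```python
# Minimal sketch of a license-compliance gap report: positive gap means
# unlicensed installs (audit exposure); negative gap means shelfware
# (a reharvesting opportunity). Inventory data is illustrative.

entitled = {"OfficeSuite": 500, "CADPro": 40, "DBEngine": 12}
installed = {"OfficeSuite": 470, "CADPro": 55, "DBEngine": 12}

def compliance_report(entitled, installed):
    report = {}
    for product in sorted(set(entitled) | set(installed)):
        gap = installed.get(product, 0) - entitled.get(product, 0)
        report[product] = gap
    return report

for product, gap in compliance_report(entitled, installed).items():
    status = "EXPOSURE" if gap > 0 else "ok"
    print(f"{product}: gap={gap} {status}")
```

Run regularly, such a report lets the organization true-up shortfalls before a vendor audit finds them.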

Scenario 1: A large financial services company lacked integrated processes and systems for managing software assets. It wanted to minimize compliance risk arising from internal and external audits and legal issues. Functional areas (Procurement, IT, and Legal) worked in silos across business units and geographies, and the company lacked control of assets throughout their life cycle.

Outcome: These initiatives were launched:

•Facilitated the cleanup of backlogged software procurement and purchasing data.

•Developed specific software management reports.

•Established a data consolidation process.

As a result, the company:

•Reduced costs by increasing the useful life cycle of IT assets.

•Improved software licensing and pricing strategies.

•Increased control of software assets during their life cycle and improved asset utilization.

Scenario 2: A health care company had incurred over $2 million in fines as a result of lost assets that contained sensitive information. It needed to improve its asset management processes and tools to provide accurate knowledge of the quantity and location of its software and hardware assets.

Outcome: These activities were conducted:

•Launched an enterprise-wide ITAM program with senior management and select stakeholders.

•Developed use cases to address common business practices.

•Implemented ITAM tools with vendors.

As a result, the company:

•Significantly reduced the cost of software license management and IT asset procurement.

•Phased out legacy tools that were not compatible with current ITAM tools.

Scenario 3: A global financial institution wanted to reduce third-party spending in its budgets and needed a business case to justify continuing its IT asset management efforts.

Outcome: The following tasks were undertaken:

•Interviewed and recorded business requirements for the IT asset management function.

•Translated requirements into detailed system and process requirements.

•Defined the ITAM framework, processes, and organizational roles.

•Embedded an ITAM benefits tracking function.

As a result, these outcomes were documented:

•Delivered sourcing benefits from ITAM estimated at £16.9 million annually.

•Integrated the ITAM solution by combining asset management systems, IT operations, and corporate operations.

•Established report generation and decision support to facilitate business strategic planning and budgeting.

Asset management risks

Many organizations do not know their IT assets in sufficient detail to support strategic decision making, nor do they have adequate control of their IT asset inventory. Many lack the accountability required to maintain licensing compliance, cannot assess whether their assets are performing reasonably, and do not manage asset life cycles well enough to measure performance. Lack of standardization of IT assets is also a common observation, as is lack of proper documentation, and many organizations cannot even detect the loss of an IT asset.

This lack of asset information poses several risks; asset management provides the information needed to mitigate them.

The 2011 CIO Compass 63

Risk — “Flying blind” for strategic decision making

Hardware and software purchasing decisions can be time consuming and costly if they are not made in accordance with standards. Organizations usually invest capital to support business initiatives, such as capturing new market or product opportunities.

IT should focus on providing information that helps business units make more effective strategic decisions, acting as a technology enabler that adds value. At the same time, existing investments in technology should be leveraged to keep costs down. IT asset management is designed to provide a clear picture of the current technology investment so organizations can make future technology decisions that fit within the existing environment.

Risk — IT asset inventory “out of control”

If the IT asset inventory is out of control, there is a likelihood of “asset sprawl,” which may lead to underutilized assets, increased data center and energy costs, and lost productivity. In order to manage planning, production, and delivery, demand for IT assets must be balanced against capacity and scheduling constraints. Otherwise there is the risk of under- or over-provisioning, delivering late, or creating problems with asset performance or customer approval.

Without asset management, organizations tend to focus on the supply side of the equation and pay insufficient attention to capturing and prioritizing demand based on business objectives. Asset management can be leveraged to confirm that IT capacity meets current and future business requirements in a cost-effective manner.

Risk — Poor asset performance

Poor performance of assets can harm relationships with the business units that use the services. Asset management provides the foundation to monitor asset performance and throughput. The data captured by monitoring can be used for performance analysis to fine-tune the efficiency of existing assets and for capacity planning to adapt to forecasts of workload growth or shrinkage.

Risk — Outdated assets unable to effectively support business needs

IT asset retirement identifies legacy equipment with a higher potential for failure. Asset retirement can provide information to contain IT operating costs, lower operational risks, and possibly offset new equipment costs by reselling end-of-life assets. As an example, a large U.S. government agency used asset recovery services to gain more than $150,000 in equipment exchange credits and saved over a million dollars in staffing costs by reducing removal times from six months to five days.
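A retirement scan of this kind can be sketched simply. The useful-life threshold, asset ages, and the flat 10% resale estimate are all illustrative assumptions, not real depreciation rules.

```python
# Minimal sketch of an asset-retirement scan: flag assets past their
# useful life and estimate a resale credit to offset new purchases.

def retirement_candidates(assets, useful_life_years=5):
    candidates = []
    for asset in assets:
        if asset["age_years"] >= useful_life_years:
            # crude resale estimate: 10% of original purchase cost
            resale = round(asset["cost"] * 0.10, 2)
            candidates.append((asset["id"], resale))
    return candidates

fleet = [
    {"id": "srv-001", "age_years": 6, "cost": 8000},
    {"id": "srv-002", "age_years": 2, "cost": 12000},
    {"id": "srv-003", "age_years": 7, "cost": 5000},
]
print(retirement_candidates(fleet))  # [('srv-001', 800.0), ('srv-003', 500.0)]
```

The output feeds the retirement backlog; a real program would also track data-sanitization status before resale.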

Risk — Proliferation of duplicate and redundant assets

For global organizations, geographical silos can drive the proliferation of duplicate and redundant assets. Many products at various version levels prevent organization-wide consistency, which can require deeper technical skill sets and prevent portability across regional boundaries. It can also increase overall complexity and support issues, which drives up costs.

The first step is to get a handle on the existing asset inventory so that duplication and redundancy issues can be addressed. Asset management provides the foundation for moving from a siloed, decentralized regional infrastructure model to a global, centralized, federated utility model. Technology has a useful life cycle, and failure to actively manage that life cycle can lead to cost inefficiencies, instability, and complications in deployment, reliability, and support.

Risk — Tribal knowledge related to assets

Another risk of outdated and legacy assets is the undocumented knowledge accumulated by members of the organization who support them. This “tribal knowledge” depends on the memory of individuals, and unless it is captured, stored, and maintained, new team members cannot quickly access the information necessary to perform their jobs. Undocumented knowledge also prevents effective configuration management. When configuration data is available, change analysis provides a more timely and reliable view of actual infrastructure components.

The 2011 CIO Compass 64

Asset management mitigates this risk by managing the useful life cycle of assets, creating a more stable environment and increasing ease of deployment, reliability, and support. Asset management also endeavors to simplify keeping track of hardware and software assets and their configurations and status, improving the organization’s ability to manage configurations.

Conclusions

A careful and well-managed IT asset program not only helps an organization save on its hardware and software infrastructure, but also supports IT’s ability to meet the organization’s goals for years to come.

Specific asset management benefits that may be realized by adopting an asset management program include:

•Alignment of applications and infrastructure to business needs and objectives.

•Increased efficiency and cost-effectiveness of IT activities.

•Reduced system maintenance, administration, and inventory management expenses.

•Compliance with government regulations and industry guidelines.

•Understanding of system and process interdependencies.

•Standardization and documentation of specific IT-related procedures.

[Figure: A value chain diagram relating business processes (R&D, production, marketing, sales, customer relationship) and support processes (HR, finance, business support, IT) within the enterprise value chain, including the supply chain. ITAM contributes efficiency and ITIL/ITSM contributes effectiveness against the critical success factors (CSF) of cost, time, quality, and flexibility.]


When it comes to providing information technology (IT) support to the business, a major evolution appears to be on the horizon. The traditional IT operating model of delivering IT to the business in the form of bundled capabilities and assets is wearing thin in an age of cloud computing, on-demand services, virtualization, outsourcing, and rapidly changing business delivery strategies. What IT traditionally engineered, built, owned, and operated can now be bought from many sources more easily without inheriting the specific risks of ownership, support, building, and managing an operating infrastructure.

IT is starting to evolve from a focus primarily on engineering (applications, servers, networks, desktops, etc.) to a focus on managing a networked value chain of suppliers targeted toward delivering specific services to the business. The IT role is becoming that of an integrator that bundles valued services from the many pieces and parts provided by external and internal suppliers. As an integrator, IT creates and manages the service supply chain, filling in the gaps between providers to help ensure that service value is delivered.

The results of failing to recognize and adapt to this shift can already be seen in delivery organizations that are still organized heavily around technologies and platforms. Common signs of failure include:

•Executive leadership takes on service integration roles, frustrated that IT business and support units are not working together to solve IT problems.

•Availability issues arise where the business finds service outages and problems before IT does.

•IT business and support units point fingers at each other, with little or no accountability for the overall service that is supposed to be delivered to the business.

•Confusion exists within the IT organization over delivery handoffs and responsibilities.

•Customer satisfaction with IT services is low and there is little confidence that IT can get things done.

The shift toward an IT integrator role is already starting to happen. Many companies are reexamining their IT organization and delivery strategies to change their sourcing models and focus around services. New concerns are arising as companies are becoming aware of the immense challenges before them to implement this shift.

This article addresses those concerns and provides suggested strategies to undertake this transformation. Is there an orderly set of transformation steps that can be taken? How can IT organize to execute this transformation? Which specific elements of IT service management practices need to be in place? How can IT determine the services they deliver and how these should be governed?

From silos to services
Navigating the transformation path to an IT service delivery organization

The IT universe is rapidly changing — what IT traditionally engineered, built, owned and operated can now be bought.

Authored by: Randy Steinberg, Randy Wisott, Steven Broaden, Jan Hertzsch, and Mitch Kenfield


What does it mean to “operate by service”?

IT cannot manage itself by technology silos in a world where services depend on a well-coordinated chain of delivery technologies individually managed by those silos. The service is the sum of what is delivered from the technology silos that support it; if one silo fails, the service fails. Therefore, accountability for the overall service should be built into the organization. Without this, IT executive leadership faces the burden of providing the coordination and integration points to make services work, and IT leaders may feel frustrated that IT cannot seem to communicate and pull things together.

It is virtually impossible to operate as a service delivery provider without transparency about which services are being delivered, with service targets agreed with the business. These are not easy to identify. Developing service definitions typically requires new skill sets that IT does not usually possess, such as recognizing and defining the end customers who use the services, conducting needs analysis to identify desired business outcomes, and bundling services in ways that help the business understand the value provided.

Once services have been identified, IT must understand how the various servers, storage, databases, people, applications, and other elements combine and interact to deliver them. This provides transparency into how services should be put together and delivered. A way to develop, store, and access these service models is needed.
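A service model of this kind can be sketched as a simple mapping from services to the components that deliver them, which also makes outage impact traceable. The service and component names are illustrative assumptions.

```python
# Minimal sketch of a service model: each business service maps to the
# delivery-chain components it depends on. If any component fails, the
# services that depend on it are impacted.

service_model = {
    "Online Ordering": ["web-farm", "order-db", "payment-gateway"],
    "Reporting":       ["report-app", "warehouse-db"],
    "Email":           ["mail-cluster"],
}

def impacted_services(failed_component, model):
    """Return the services whose delivery chain includes the component."""
    return sorted(s for s, parts in model.items() if failed_component in parts)

print(impacted_services("order-db", service_model))  # ['Online Ordering']
```

In practice this mapping lives in a configuration management database rather than a dictionary, but the dependency logic is the same.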

Emerging solutions in cloud computing, virtualization, and the use of external service providers increase the difficulty of identifying what the services are and who is delivering the individual parts of those services.

Key elements of an information technology service management (ITSM) transformation effort

The transformation cannot be managed exclusively by an IT organization that is heavily organized and focused around technology silos. Key roles are needed within the organization to operate as a service provider. These are:

Service owner

This role is accountable for one or more services end-to-end, including everything from supporting applications to servers, networks, people, and delivery processes involved with the service. This role aims to ensure that the delivered service meets the needs of the business, reviews service metrics, and takes proactive action to initiate service improvements when needed.

Process owner

This role is accountable for one or more processes end-to-end, confirming that processes operate efficiently and provide value. It also oversees activities to help embed processes across the enterprise, reviews process metrics, and takes proactive action to initiate process improvements when needed.

Business liaison

This role is accountable for business unit and customer satisfaction with IT services, including assisting business units with selection of appropriate IT services to meet their needs, providing customer feedback to IT for service improvements and changes, reviewing service quality and status with customers on a scheduled basis, and overseeing resolution of customer IT service issues when needed. Ultimately, this role acts as the voice of the customer in many IT decisions.

Technology owner

This role is accountable for proficiency, management, and ownership of specific technology platforms, tools, and software. It is the go-to role for providing in-depth knowledge of specific technologies, making sure that technologies are available when needed, and maintaining technologies in accordance with vendor specifications.

Who is accountable for the overall service?

Unless operating by service, IT is merely delivering bundles of technology capabilities that in themselves may have little inherent value for the business.

Service manager

This role is accountable for many IT service management processes, supporting technologies, improvement projects, and service governance. It is a hands-on role that ensures ITSM activities are coordinated and services are delivered effectively at acceptable cost. This role is also responsible for communicating ITSM activities across the enterprise.

In addition to establishing the key roles, other practices need to be solidified from the outset. These include:

•Identification of critical success factors and key performance indicator measurements, along with the infrastructure to report on them.

•Establishment of a program management office to coordinate, plan, and manage transformation projects and activities.

•Identification and establishment of specific strategic partnerships that can be leveraged to both effect the transformation and execute it once put into operation.

•Establishment of a communication and organizational change program to prepare people for the transformation and effectively communicate the changes taking place.

•Execution of a current-state assessment to understand IT capabilities that can be leveraged and those that may be needed but are missing.
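The first practice in the list above, KPI measurement against critical success factors, can be sketched minimally. The KPI names, targets, and "higher is better" flags are illustrative assumptions.

```python
# Minimal sketch of KPI reporting against critical-success-factor
# targets. Some KPIs are better when higher (availability), others
# when lower (resolution time); the flag captures the direction.

kpis = {
    # name: (actual, target, higher_is_better)
    "service availability %":       (99.2, 99.5, True),
    "mean incident resolution hrs": (6.0, 8.0, False),
    "customer satisfaction score":  (4.1, 4.0, True),
}

def kpi_status(actual, target, higher_is_better):
    met = actual >= target if higher_is_better else actual <= target
    return "met" if met else "MISSED"

for name, (actual, target, hib) in kpis.items():
    print(f"{name}: actual={actual} target={target} -> "
          f"{kpi_status(actual, target, hib)}")
```

A reporting infrastructure would feed actuals into such checks automatically and publish the results to program leadership.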

Organizing to deliver services effectively

An inherent conflict exists between focusing on technology and focusing on services. A technology focus is needed to effectively engineer and support the technology assets that underpin services; failure to maintain skills and capabilities in this area may result in service outages and delays, putting the organization at great risk. On the other hand, a customer- and service-based focus is needed to protect those investments in technology and support the delivery of technology that meets business needs.

How do you organize to support these competing goals? Operate by technology, service, or both? Those organizations undergoing an ITSM transformation appear to be using one of three models:

Network model

Reduce disruption to the existing IT organizational model by formalizing interactions between technology units so that logical value delivery chains are created across IT.

Pros:

•Requires fewer managers.

•Can adapt relatively quickly.

•Encourages collaboration between IT units and providers.

Cons:

•Requires focus on integrating activities.

•Determining service ownership and accountability will be a challenge.

•Integrating some external third-party business models may be difficult.

•Dealing with conflicting IT unit priorities will be challenging.

Analysis: The most efficient approach, but it may not work well for large and complex IT organizations. Unless governed well, technology considerations within each silo may override larger service and customer considerations.

Coordination model

Keep the existing silos, but add an additional silo focused on ITSM that provides IT customer focus, service ownership, and coordination.

Pros:

•Provides accountability for process and service ownership.

•Provides specific service coordination responsibility across the IT organization.

Cons:

•Requires an additional investment in people and resources.

•May be seen as a threat by leadership of existing IT silos.

•Balancing priorities between the ITSM silo and other IT silos will still be a challenge.

Analysis: Requires additional investment, but this approach has been effective for many organizations that implemented ITSM transformations. Works better for midsize to large organizations.

Collaboration model

Utilize a matrix structure with vertical technology ownership and horizontal customer and service ownership.

Pros:

•Balances technology considerations with service considerations.

•Provides strong collaboration with business units.

•Easily adapts to changing business needs and requirements.

Cons:

•Creates potential conflicts between technology and customer groups.

•Requires high levels of teamwork and negotiation.

•Requires a solid investment in IT governance to balance conflicting IT priorities and needs.

Analysis: The preferred approach for very large organizations. Use of this approach may create political issues, requiring more time to reach agreement on key decisions and strategies.

Undertaking the transformation

ITSM transformation is an ongoing program, not a one-time project. The program begins by putting the overall ITSM foundation in place in terms of organization, vision, and governance, followed by targeted activities that achieve short-term wins and by longer-term strategies over time.

So that benefits can be realized as soon as possible, the program, once organized, should operate under the principle of continual service improvement. Plan to execute targeted activities in six-to-nine-month waves, with specific and measurable value delivered at the end of each wave. Select targeted activities that are linked to specific business problems and critical issues. At the same time, implement in parallel the strategic tasks that keep the program moving toward the longer-term ITSM strategy. Without this parallel effort, the ITSM program could degenerate into many tactical efforts that never come together or reach the broader strategic goals.


Each improvement wave in the program should operate with a repeatable, consistent set of life cycle tasks that guide the effort from vision to operation. The following framework is suggested for a typical transformation wave executing ITSM implementation tasks:

Phases: Vision • Plan • Design • Build • Validate • Go live and support

Parallel activity tracks:

•Value: Gaining understanding • Assessing client performance • Developing business case • Value realization.

•Project management: Initiate • Plan • Execute • Close.

•People, change, and learning: Define change goals • Embedding change • Communicating success.

•Process: Process development • Process implementation • Continual process improvement.

•Package: Acquire and implement • Deliver and support • Continually improve.

•Infrastructure and integration.

The top layer of the framework represents the work phases of the transformation effort. Within each phase (vision, plan, design, build, validate, and go-live), six parallel activity tracks allow a holistic approach to development. Each phase is described as follows:

Step 1: Vision

Key activities: Set standards and guiding principles for the program threads. Establish the alignment of the overall program with objectives. Put service governance processes into place, prioritize activities, and assign overall program leadership roles and responsibilities.

Milestones:

•Program organization and management office.

•Assigned leadership roles.

•Service governance structure.

•Program business case.

•Program strategy, goals, success criteria, and measurements.

Step 2: Plan

Key activities: Identify the program initiatives that will be executed in the current wave. Develop a detailed plan of action for each initiative.

Milestones:

•Program initiatives.

•Transition road map.

•Assigned teams.

•Initiative project plans.

Step 3: Design

Key activities: Expand the scope of each initiative into detailed requirements. Utilize a holistic approach for each initiative that considers people, processes, and technologies. Compare business requirements against existing IT technology and process capabilities.

Milestones:

•Process and technology requirements.

•Initiative design packages.

Step 4: Build

Key activities: Configure and test the business and technical designs that were developed in the design phase.

Milestones:

•Installed technologies.

•Installed processes.

•Solution training and communications.

•Solution deployment plans.

Step 5: Validate

Key activities: Execute tests, measurements, and checklists to confirm that each ITSM solution initiative being implemented is deployment ready.

Milestones:

•Test results.

•Pilot results.

•Solution sign-offs for deployment.

Step 6: Go-live and support

Key activities: Execute the system and business cutover. Perform actual business operations with the new solutions. Measure value attained and prepare to execute the next round of the service improvement cycle.

Milestones:

•Operational results and data.

•Operated solution delivery.

•Benchmark of operating results achieved versus those expected.

The parallel program activity tracks are described as follows:

•Value — Help the organization identify and realize both the tangible and intangible benefits.

•Project management — Plan, execute, and manage the transformation within the accepted scope, budget, and timeline.

•People, change, and learning — Communicate the need for change and the expected results, and prepare the organization to realize change.

•Process — Design and implement processes that will support the ITSM solutions.

•Package — Help the organization make the transition from its current environment by identifying, designing, and implementing the needed technologies that will be used to support the ITSM solution.

•Infrastructure and integration — Integrate the technologies, people, and process into the existing organization to support ITSM solutions.


Causes of failure in ITSM transformations

ITSM transitions can be challenging, regardless of how they are constructed. The following pitfalls and risks are common:

Momentum loss

•Risk: Lack of senior leadership commitment. Mitigation: Ensure senior leadership is committed financially and visibly participates in key meetings and events.

•Risk: Poor communication, role definitions, and inability to address organizational anxiety. Mitigation: Develop an organizational change and communications strategy and execute it throughout the transformation.

•Risk: A “dictator approach” to transformation. Mitigation: Avoid this approach.

Goal state and operating model design

•Risk: A “big bang” approach to the transformation to service-based delivery. Mitigation: Avoid this approach; use repeatable waves of service improvement and transition efforts not exceeding six to nine months.

•Risk: Lack of clarity around service definition, which is confused with product, process, request, etc. Mitigation: Ensure services are well defined, communicated, and agreed to by the stakeholders who will supply and use them.

•Risk: Poor involvement of business partners and customers. Mitigation: Engage stakeholder representatives from the outset and include them in the transformation effort, at least on a part-time basis.

Prioritization and transition

•Risk: A “once and done” implementation approach. Mitigation: Use agile implementation approaches during the transition and recognize from the outset that this is a continual service improvement effort, not a one-time project.

•Risk: The new organization lacks “teeth.” Mitigation: Establish a program governance structure from the outset and staff it with senior executive leaders.

•Risk: Other day-to-day priorities take precedence over the program. Mitigation: Publish and communicate progress through measurements that visibly demonstrate program success and the negative impact of not addressing improvement initiatives.

•Risk: People are overwhelmed with too many changes. Mitigation: Do not try to implement too many things at once; prioritize improvement initiatives so that the pace of change is manageable.

Conclusion

Recognize that this is not a one-time project, but a program of continual service improvement. Start the transformation effort by setting the initial vision, measurable goals, and key strategies. Chart the course for IT service improvement with ITSM as a series of efforts that will take place over time. Combined with an overall ongoing improvement program, this approach can create noticeable business value and maintain momentum for the IT service management journey.


CIOs of global IT organizations are often responsible for 20,000 to 50,000 geographically dispersed servers, with total costs typically running to hundreds of millions of dollars. On top of their day-to-day responsibilities, they are challenged to adapt their operations quickly while avoiding soaring costs. Every day, it seems, CIOs are expected to do more with less.

One way to increase operating agility and manage costs is to improve the organization’s capability to provision applications and rapidly scale them up or down. For many organizations, it may take seven to eight weeks to provision applications and the associated hardware, which can paralyze business growth.

Reducing this time requires organizations to take a holistic approach, examining the provisioning capability end-to-end and using multiple strategies to improve the process. One effective strategy is to automate IT activities where possible. This article discusses how CIOs can use automation to improve effectiveness, free IT resources from mundane tasks, and decrease human error.

Infrastructure provisioning Automation

[Figure 1 summarizes the end-to-end provisioning life cycle: ongoing business demand planning; planning and design (demand planning, requirements, design and funding); requesting and procuring equipment and acquiring software and licenses; build and test, with manual installation of hardware and operating systems; installing and verifying applications; deployment into development and then production, with support until user acceptance; and maintenance, operation, and retirement, integrated with asset management tools.]

Authored by: Vishal Malakar, Anuj Mallick, and Siddharth Sonrexa

Figure 1: End-to-end provisioning overview


What is automated provisioning?

Data center provisioning consists of servicing IT group requests for infrastructure resources, such as new servers or additional storage. The main steps involved include:

1. Installing the physical device (also called a bare-metal installation).

2. Configuring the device.

3. Deploying necessary applications.

4. Granting user access.

There are three main types of provisioning when allocating infrastructure resources: 1) server and application provisioning, 2) storage and network provisioning (also called resource provisioning), and 3) user provisioning.

Prior to virtualization, bringing an application online typically involved a one-time configuration of servers, storage, and network resources, which then remained static for long periods of time. Usually these requests were managed with manual processes or limited automation technologies within silos (e.g., server team or storage team); handoffs or cross-boundary efforts between silos still typically relied on manual, time-consuming processes.

Virtualization has changed this dynamic. New-generation provisioning tools automate activities across most of the infrastructure and reduce repetitive or serial processes.

Server and application provisioning

Server provisioning prepares a server with the necessary software, data, and systems access so that it is ready for use by various IT groups. This process typically involves:

1. Selecting a server from a server pool or conducting a bare-metal installation of a physical server if none are available.

2. Deploying the applicable operating system and device drivers.

3. Configuring the server.

4. Granting access to network and storage resources.

5. Loading middleware and applications to the server.

Virtual servers are provisioned in a similar fashion with additional options for initializing the server. Once a physical server is selected and installed with a hypervisor, virtual machines (VMs) can be provisioned. This can be done in several ways:

•Installing an operating system (OS) and the required applications (same as with physical servers).

•Provisioning a virtual server from a physical server (a physical to virtual (P2V) migration).

•Creating the VMs from a template.

•Creating hardware- and virtualization-independent images, or creating “empty” VMs and building them layer by layer through unattended installations.

Typically all of these tasks are performed using a defined set of configuration standards and server images that have already been tested and accepted by the organization. This provides standardized infrastructure and can reduce risk of application failures due to incorrect configurations.
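Template-based VM creation under a set of accepted configuration standards can be sketched roughly as follows; the template catalog, field names, and functions are assumptions for illustration, not a hypervisor API.

```python
# Sketch: clone new VMs only from templates that have been tested and
# accepted by the organization. The catalog and its fields are illustrative.

APPROVED_TEMPLATES = {
    "web-standard": {"os": "rhel-6", "cpus": 2, "ram_gb": 8},
}

def clone_from_template(vm_name, template_id):
    if template_id not in APPROVED_TEMPLATES:
        raise ValueError("template is not on the approved, tested list")
    vm = dict(APPROVED_TEMPLATES[template_id])  # copy the golden config
    vm["name"] = vm_name
    return vm

vm = clone_from_template("web-07", "web-standard")
print(vm["name"], vm["os"])
```

Refusing any template outside the approved list is what gives the standardization benefit described above: every new virtual server starts from a configuration that has already been tested.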

Network and storage provisioning
Network provisioning is the process of getting a user or customer online. This process can include a number of different steps depending on the connection technology used, such as DSL, cable, or fiber. The possible steps are:

1. Modem configuration.

2. Authentication with network.

3. Wireless LAN setup.

4. Configuration of provider-specific browser settings.

5. E-mail provisioning to create mailboxes and aliases, or e-mail configuration in client systems.

6. Installation of additional support software.

7. Installation of add-on packages purchased by the customer.

Similarly, storage provisioning is the process of assigning storage, usually in the form of server disk drive space, to improve the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator and can be a tedious process. The administrator should test that the data storage and recovery routes will be available to the users when needed, put alternate routes in place to keep the SAN functional in the event of partial failure, and make sure that the SAN can accommodate expected future expansion.

Virtualization provides flexibility for business owners by abstracting the applications from the physical infrastructure and allowing owners to quickly deploy and scale up applications. This requires infrastructure teams to deploy automation technologies to meet the increased demand and keep pace with application changes. Incoming requests can be serviced quickly by fewer administrators or, in many cases, without any human intervention if the appropriate policies and rules are in place.

The 2011 CIO Compass 74

User provisioning
User provisioning refers to the creation, maintenance, and deactivation of user objects and user attributes in one or more systems, directories, or applications. These objects may represent employees, contractors, vendors, partners, customers, or other users of an IT service. Each IT system (e.g., email, business applications, shared folders) can potentially have its own set of user objects that must be provisioned. A key challenge is confirming that access across the various systems remains consistent throughout a user’s life cycle — from hire to retire — so they only have access to the necessary resources for their position or area.
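A hire-to-retire life cycle of this kind can be sketched in a few lines; the system names, roles, and dict-based “directories” below are invented for illustration, not any identity product's API.

```python
# Toy model of user provisioning across several per-system directories.
# Each system keeps its own user objects; role rules drive consistency.

SYSTEMS = {"email": {}, "erp": {}, "shared_folders": {}}

ROLE_ACCESS = {  # which systems each role should be able to reach
    "analyst": ["email", "erp"],
    "admin": ["email", "erp", "shared_folders"],
}

def provision_user(user_id, role):
    for system in ROLE_ACCESS[role]:
        SYSTEMS[system][user_id] = {"role": role, "active": True}

def deprovision_user(user_id):
    # "Retire": deactivate the user object everywhere it exists.
    for directory in SYSTEMS.values():
        if user_id in directory:
            directory[user_id]["active"] = False

def access_is_consistent(user_id, role):
    # The user should exist in exactly the systems the role calls for.
    return all(
        (user_id in SYSTEMS[s]) == (s in ROLE_ACCESS[role]) for s in SYSTEMS
    )

provision_user("jdoe", "analyst")
print(access_is_consistent("jdoe", "analyst"))
```

The consistency check is the crux: provisioning tools continuously reconcile what a user *does* have against what the role says the user *should* have.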

Automation
In the past, many administrators manually built, allocated, or assigned resources upon request. Over time, some administrators began using scripts and programs to perform some provisioning steps and day-to-day activities. These initial automation efforts accelerated the provisioning process; however, they were confined within the IT silos that managed each system. As a result, manual coordination was needed to confirm a user or application had been correctly provisioned across systems.

In recent years, provisioning software programs have become available that span multiple infrastructure resources, allowing IT to meet the growing provisioning needs of next generation data centers. These programs formalize the relationship between various IT teams and allow resources to be provisioned or decommissioned more efficiently based on input from business requests and processes. They can provide increased flexibility and speed by reducing the need for the discovery-based, serial, and repetitive processes that are often required to deploy an application or provision a new user.

Existing tools and capabilities
As provisioning is integrated with other areas of IT, such as change management, asset management, or auditing, demand increases for integrating the various automation tools. Many tools today offer overlapping functionality with different trade-offs. There is currently no single tool that addresses all the pieces typically required to provision a new application or server from scratch.

Most vendors offer a suite of tools that can automate different infrastructure areas, such as servers or applications. Often vendors are strong in one area while weak in another, and as a result, customers must accept difficult trade-offs or buy multiple products with redundant capabilities.1, 2, 3

Figure 2: Strong performing and leading vendor offerings and capabilities (Source: Forrester Research, Inc.)1, 3

The matrix scores each vendor offering against thirteen tool capabilities: bare metal server provisioning, virtual server provisioning, application provisioning, network provisioning, storage provisioning, cross platform support, automated discovery mechanisms, policy based automation, compliance management, patch management, pre-defined and customized reporting, capacity management, and configuration management. Total capabilities covered by each offering:

•IBM Tivoli Provisioning Manager: 9 of 13
•HP Server Automation: 10 of 13
•HP Network Automation software: 8 of 13
•HP Storage Essentials software: 8 of 13
•CA Data Center Automation Manager: 9 of 13
•BMC BladeLogic Server Automation Suite: 7 of 13
•BMC BladeLogic Network Automation Suite: 6 of 13
•Symantec Veritas Provisioning Manager: 7 of 13
•Novell PlateSpin Orchestrate: 4 of 13


Impact of automation and potential pitfalls
Impact of automation on IT processes involved with provisioning
IT organizations tend to evaluate the value of IT improvements solely from a technical standpoint, but they should also measure the value gained from process improvement. Data center automation solutions can dramatically reduce labor costs. Automation provides additional value in these ways:

•The same tasks can be performed in much less time with far less involvement from IT staff resulting in lower costs.

•Data center bottlenecks are addressed and productivity is increased by improving time to production across heterogeneous platforms.

•Quality of audit trails is improved for security and regulatory compliance for the production systems.

•Planned and unplanned downtime is reduced.

•Disaster recovery, such as failover capability and configuration rollback, is supported.

•Change management is supported.

Along with these, soft benefits are realized, such as reduced downtime, reduced deployment times, and increased responsiveness from the IT department.

Specific pitfalls organizations should consider before automating their provisioning processes
1. Configuring OS options — New servers should meet corporate technical and business standards before being brought online. Confirming that new machines meet security requirements might involve manual assessment of configurations — a process that is neither fun nor reliable. Other important settings include computer names, network addresses, and the overall software configuration. The goal should be to drive consistency while minimizing the amount of effort required — two aspects that are not usually compatible.

2. Support for new platforms — Provisioning methods should constantly evolve to support new hardware, OS versions, and service packs. New technologies, such as ultradense blade server configurations and VMs, often require new images to be created and maintained. Also, there is a learning curve and some “gotchas” associated with supporting new machines.

3. Redeployment of servers — Changing business requirements often require that servers be reconfigured, reallocated, and repurposed. It can be even more challenging to try to adapt the configuration to changing requirements. Neither option (reconfiguration or reinstallation) is ideal.

Table 1: Capabilities definition

•Bare metal server provisioning: Facilitates automated provisioning and configuration for bare-metal servers
•Virtual server provisioning: Facilitates automated provisioning and configuration for virtual servers and hypervisors
•Application provisioning: Facilitates automated application deployment
•Network provisioning: Facilitates automated provisioning and configuration of network resources
•Storage provisioning: Facilitates automated provisioning and configuration of storage devices
•Cross platform support: Supports the major OS and hardware vendors
•Automated discovery mechanisms: Allows cross-tier discovery of network, storage, server, and/or application dependencies
•Policy based automation: Provisions resources based on the status of the environment and predefined policies
•Compliance management: Allows creation of golden snapshots and configuration policies that are then used to analyze and report on resources
•Patch management: Facilitates patch policy creation and flexible patch deployments; used to identify server vulnerabilities and reduce the time needed to patch multiple servers
•Predefined and customized reporting: Records a sequence of automation tasks to be stored and used again later
•Capacity management: Tracks and reports on usage trends of resources
•Configuration management: Stores configuration information on infrastructure resources


4. Keeping servers up-to-date — The installation and management of security updates and OS fixes can require a tremendous amount of time even in smaller environments. These processes are often managed on an ad hoc basis, leading to windows of vulnerability.

5. Technology refreshes — Even the fastest and most modern servers can begin to show their age within a few years. Organizations often have standards for technology refreshes that require them to replace a certain portion of the server pool on a scheduled basis. Migrating the old configuration to new hardware can be difficult and time consuming when done manually.

6. Support for remote sites — Deploying new servers is rarely limited to a single site or data center, so the provisioning tool should provide methods for performing and managing remote deployments.

Conclusion
Automating provisioning can provide short-term benefits by reducing the overhead associated with common tasks. It may also facilitate a fast, flexible infrastructure that can dynamically ramp up or scale down resources to meet business needs. Policy-based automation may also help the organization reduce risk and meet compliance demands.

An automated provisioning tool is not the whole answer. It is unrealistic to expect a tool to drive the new process. Process improvement strategies that strengthen demand planning and forecasting, as well as vendor sourcing and management, also play a crucial role in achieving long-term benefits. Automation plays a key role, but it will only be effective as part of a well-established end-to-end provisioning capability.

References:
1. Forrester, "The Future of Data Center Automation," February 2006.
2. Forrester, "Which Provisioning Vendor," December 2002.
3. Forrester, "Selecting A Data Center Automation Vendor," January 12, 2005.

The rapid evolution of cloud computing


Cloud computing harnesses various technologies to facilitate a business model of usage-based charging and flexible contracts that can allow subscribers to opt in and out of information services easily. In addition to this need-based provisioning, these technologies provide users ubiquitous access from many platforms, making cloud computing an attractive complement to owning information technology (IT) assets and to traditional outsourcing. For chief information officers (CIOs) and IT organizations, cloud computing can represent both an opportunity and a threat, depending on how they respond to it.

The opportunity
Cloud computing can help transform IT into the provider of choice for the business. IT organizations that understand and value cloud computing will likely improve their responsiveness and increase their value to the organization. By leveraging a combination of internal and external services, cloud computing can improve IT efficiency and responsiveness to the needs of business functions.

The threat
Cloud computing can significantly lower the barrier to entry for obtaining IT services. If IT organizations fail to embrace cloud computing, business functions can decide to directly employ software as a service (SaaS) or other cloud services. This could position cloud computing as a viable competitor to the IT organization. When business units bypass IT, they tend to challenge its authority and threaten its effectiveness. Additionally, the direct relationship between the business and the cloud service provider can remove the IT organization from the customer/supplier relationship. The resulting “rogue” applications can:

•Circumvent the enterprise architecture.

•Hamper integration.

•Increase the complexity of managing the IT environment.

•Potentially increase costs over the long term.

•Jeopardize ownership of some amount of the organization’s data.

The question CIOs and IT organizations must ask themselves is this: “How do we most appropriately employ cloud computing so that the IT organization — not external cloud providers — becomes the provider of choice for the business?”

IT organizations should consider redefining their role. They should focus on understanding and serving the needs of the business by providing the most effective mix of IT-based services. This may include in-house assets, traditional outsourcing arrangements or cloud-based services. IT can evolve from just managing assets and manufacturing applications to facilitating business results. This is the strategic role of CIOs.

CIOs and IT organizations that understand and leverage the value of cloud computing may have the opportunity to outperform those that do not. It can give them the ability to respond timely and effectively to business needs, changing environments, and capacity and demand fluctuations. As networked value chains become increasingly prevalent, cloud computing will support the interoperability required to support these business models.

Cloud computing: Opportunity or threat?

Authored by: Bill Sheleg


Additionally, cloud computing can make IT operations more efficient through private cloud architectures designed to improve asset utilization. This can move IT closer to a better understanding of the cost of its services.

On the other side of the coin, cloud computing can lower barriers to entry, making it easier for business functions to bypass the IT organization completely. If IT is seen as hesitant or resistant, business users will likely engage SaaS or other cloud computing models directly. They will not see risks, just attractive service offerings, a pay-per use model, and contractual arrangements that can allow them easy entry and exit. These are enticing conditions. So once business units are lured in by the prospect of rapid deployment, activity-based costs and perceived controls, trials can become permanent arrangements. They will then expand and present new competition to the IT organization.

IT organizations should consider cloud computing. But they also should set realistic expectations for what it can deliver — and know when it is an appropriate solution. The first step is to develop a strategy for using cloud computing as an alternative sourcing option within the overall IT strategy. Then factor in the anticipated changes from cloud computing as part of the portfolio. The CIO’s role is to drive an organizational design and talent strategy that provides the most possible value to the business.

The future role of IT
Business managers want IT performance levels that help them achieve their objectives. So, unless a strong executive mandate to the contrary exists, the business will look at options for achieving those performance levels. The internal IT organization is certainly one option, but so are external service providers. Using third-party service providers usually means contract development, long-term commitment, and a potential lack of flexibility. Outsourcing arrangements often require committing to a specific duration and cost, accompanied by significant due diligence and negotiation. Even with these conditions, third-party providers are a viable option when the internal IT organization cannot meet the needs of the business.

Cloud computing, however, changes the formula. It can lower the perceived risk associated with the third-party provider option by offering rapid deployment, usage-based billing, easy entry and exit. In short, the business can try a service that is sized right, deployed efficiently and then expand it based on value delivered.

One reason cloud computing providers are effective is that they do not have to confirm that solutions will integrate into the larger enterprise architecture. In general, they do not have to worry about interoperability with other applications or collaborators. And they do not have to worry about degrading enterprise IT performance. So, how can the IT organization compete?

IT’s advantage
Historically, IT organizations delivered value to the business through the application of technology — managing IT assets, and developing or acquiring applications. However, infrastructure and application development are increasingly commodity services. Future IT organizations may have difficulty competing with external providers on price. But they can compete by understanding the business better.

Today, IT teams tend to spend a lot of time and energy managing assets and building applications. They would be better served focusing on high-value activities such as building relationships with business managers. To do this, the CIO should redefine the role of the IT organization and transform the operating and talent model to respond faster to business needs with more innovation and less cost. Making the transformation will drive the internal IT competitive advantage. If CIOs and IT organizations do adapt to putting business first, their advantage over third-party providers can grow.

Keep in mind that business leaders typically do not want to spend their time assessing technology alternatives, performing technical due diligence, and managing relationships with third-party vendors. They want IT to do that job, but only if IT does it right. They would like IT to select, put in place, and manage an effective mix of internal, third-party, and cloud components to help achieve business goals. Essentially, they would like IT to be part of the strategic business plan. Business managers should come to trust that IT understands the needs of the business and will leverage technology for a competitive advantage.

The IT operating model must align to a changing role
Building credibility with the business requires rethinking how to employ time, resources, and knowledge. The organization’s operating model should match its business strategy. Similarly, the operating model of a development-focused IT organization looks far different from the operating model of an organization that wants to serve as a strategic advisor to the business.


Cloud computing can provide organizations an alternative for accessing applications and technical architectures needed to support their business strategies. This can have a profound impact on how enterprise architects, application developers, infrastructure, and production should work together when some of those activities are outsourced.

Cloud computing solutions require agile IT operating models. IT is just now moving from an era of elaborate enterprise architecture frameworks to an era of agile enterprise architecture. In this new era, enterprise architects, application development staff, and infrastructure staff must work closely together to more rapidly define, realize, and deploy services incrementally. Architectures must evolve from elaborate master plans to more nimble structures that adjust easily to changing business realities. Service-oriented architecture has provided the architectural framework and standards to support more agile, incremental, loosely coupled architectures. Now cloud computing offers the platforms and infrastructure to realize services quickly at a cost fit to the service.

Organizations using cloud computing most effectively will learn that results are not simply a matter of understanding the components of cloud. They will become adept at changing roles within the IT organization and at changing the overall IT operating model so the organization can benefit from cloud computing strengths.

IT organizations deciding to make this transition understand that it’s not easy and it’s not fast. They should be prepared to live in both worlds — slowly transforming pure asset management and application building to strong customer and supplier relationships with deep understanding of the business. The transition begins with an inventory of the organization’s services, defining high-value applications versus commodity services. The IT organization that focuses on high-value activities will likely be a more effective business advisor. Lower-value, commodity services should be considered for outsourcing to a third-party provider.

Once services are organized, an operating model can demonstrate how work flows through the organization by function and across functions. An effective macro-organization design will emphasize services that the organization considers high value and strategic. These are typically areas where IT has the closest relationship with the business. Competency centers can manage the outsourced relationships. In future IT organizations, individuals with deep technical knowledge will likely manage these centers. They will call on additional resources as needed to support special projects. The final organization design will have a sound rules structure as well as defined processes to guide operations. Once the organization design is set, IT should consider aligning its skills and capabilities with its new role.

Relationship management is becoming increasingly important. Knowledge of available alternatives in the marketplace becomes more important than knowledge of programming languages. Knowledge of business processes and how users perform their jobs becomes more important than knowledge of technical asset management processes.

In short, relationship skills and business knowledge become more important than commodity technology skills. The ability to assess and manage third-party alternatives becomes more important than the ability to build those solutions. Cloud providers can become strategic collaborators rather than competitive threats. The CIO should determine whether to develop current resources or acquire the needed skills. Typically some hybrid of the two is necessary.

Incorporating cloud computing into an IT strategy
Cloud computing refers to SaaS, platform as a service (PaaS), and infrastructure as a service (IaaS). SaaS is a model that licenses applications to customers for use as a service on demand. PaaS is the delivery of a computing platform and solution stack as a service. It can facilitate the development and deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers. IaaS is the delivery of computer infrastructure, typically a platform virtualization environment, as a service. Rather than purchasing servers, software, data center space, or network equipment directly, clients buy those resources as a fully outsourced service.

How much value you derive from cloud computing depends on your readiness to take advantage of these new capabilities. Value may include:

•More efficient operations

•More effective sales and marketing

•Reduced time to market

•Improved product or service quality

•Enriched customer experience

Business value comes when technology is applied. Business realizes benefit from the rapid application of technology to improve business performance and from the advantages of usage-sensitive billing. SaaS, PaaS, and IaaS can offer value through rapid deployment, elasticity, and reduction in manufacturing costs (PaaS).

Improving IT asset utilization through virtualization and multitenancy can provide value to the business via the reduced costs resulting from improved efficiency. The IT organizations that deliver high value will likely be those that learn to deliver IT functionality without taking on the costs needed to own and manage the IT manufacturing (PaaS) and operating (IaaS) facilities.

Cloud computing can allow organizations to operate computing resources sized to need. Breakthrough performance will likely come from organizations that learn how to operate IT without owning and managing the manufacturing and operating facility, instead fitting operations to need. In effect, cloud computing is the realization of the service-oriented architectural design principles that allow organizations to architect incrementally with loosely coupled components.

The elasticity cloud computing offers allows for scaling resources up or down as needed. The ability to rapidly provision both platform and production infrastructure can reduce time to value. Pay-as-you-go subscriptions reduce capital spending. The reduced time and effort required for provisioning could free up staff and funding for more value-added activities.
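As a concrete, deliberately simplified illustration, an elastic scaling decision can be reduced to a policy that keeps utilization inside a target band; the thresholds below are assumptions, not any provider's defaults.

```python
# Toy policy-based elasticity rule: scale out when hot, scale in when idle,
# so capacity (and therefore pay-as-you-go spend) tracks demand.

def scale_decision(current_instances, avg_utilization,
                   low=0.30, high=0.70, min_instances=1):
    if avg_utilization > high:
        return current_instances + 1   # scale up to protect service levels
    if avg_utilization < low and current_instances > min_instances:
        return current_instances - 1   # scale down to cut usage-based cost
    return current_instances           # utilization is inside the band

print(scale_decision(4, 0.85))  # busy: add an instance
print(scale_decision(4, 0.10))  # idle: shed an instance
```

Real autoscaling policies add cool-down periods and step sizes, but the principle is the same: capacity expands and contracts with need rather than being sized for peak.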

If you are considering a transformation to cloud computing, begin by understanding which services can derive value from the different facets of cloud computing. Also, recognize that developing a stand-alone cloud computing strategy may not be necessary. Instead, we suggest refreshing your IT strategy by:

•Assessing how cloud computing can offer an implementation alternative for improving IT efficiency and responsiveness.

•Evaluating how cloud computing can reduce time to benefit by providing needed services without IT having to own, provision, and manage those services. IT will still have management oversight and service-level management responsibility. But the level of effort for management oversight will be less than the effort required for asset life cycle management.

•Considering the potentially positive impact cloud computing can have on the role of IT. IT can become the provider of choice for the business by offering needed services in the most effective manner.

•Defining the changes required in the IT operating model to achieve future results.

Incorporating cloud computing as an alternative within an overall IT strategy can be the most appropriate way to realize measurable benefits and avoid treating cloud computing as something separate and distinct.


The cloud computing revolution is in full swing. This technology combines the Internet, virtualization, and large-scale data centers to provide a powerful new way for information technology (IT) organizations to support on-demand services that offer elastic capacity, resource pooling, and variable consumption.

In the cloud revolution’s wake, a new breed of companies has emerged that offer only cloud-based products, such as Salesforce.com and RightScale. In addition, Internet companies, such as Google, have aggressively shifted their product portfolios to offer enterprise cloud solutions.

This article describes how IT decision makers can navigate the maze of cloud products and solutions to address two of their primary concerns: controlling IT costs and protecting the company’s data.

Controlling costs and gaining IT cost transparency
During the recent economic downturn, companies looked for ways to improve operational efficiency, reduce headcounts, and boost the bottom line. According to a survey conducted by CIO Magazine, controlling IT costs was the No. 2 priority for CIOs in 2010 [1]. Even as the economy continues to improve, IT organizations are likely to continue to closely monitor costs and find ways to improve IT cost efficiency. In the coming years, IT organizations are likely to use cloud technology to help control costs.

Shifting the IT cost model
Cloud computing affects IT’s cost model by shifting fixed and capital expenditures to variable and operational costs. This approach has the potential to reduce average cost curves and accelerate break-even points. Other financial benefits of cloud services include elasticity and scalability (paying for actual usage), lower support and maintenance costs, and generally lower costs per user. But cloud technology can offer more than just cost reduction — it can give IT executives a clearer understanding of their organization’s true costs.
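The capital-to-operational shift can be made concrete with a back-of-the-envelope break-even calculation; every figure below is an illustrative assumption, not benchmark data.

```python
# When does owning hardware (up-front capital plus monthly maintenance)
# break even against a usage-based cloud subscription? Monthly spend:
#   own:   capex + m * maint          cloud: m * cloud_fee
# The two are equal when m = capex / (cloud_fee - maint).

def breakeven_months(capex, maint_per_month, cloud_per_month):
    return capex / (cloud_per_month - maint_per_month)

months = breakeven_months(capex=120_000, maint_per_month=1_500,
                          cloud_per_month=6_500)
print(months)  # 24.0 months under these assumed figures
```

Before that break-even point the subscription is cheaper in cash terms, which is why short-lived or uncertain workloads favor the cloud model even when long-run ownership would cost less.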

How can cloud computing improve IT cost management?
Cloud computing can help IT executives manage and communicate the cost, quality, and value of IT services. Because cloud service costs are usage or subscription based, vast amounts of granular utilization data and metrics can be accessed through sophisticated monitoring and tracking tools. When collated and analyzed, hardware and software utilization data provides a wealth of information, including:

•Service costing. Connects IT services and underlying cost drivers to identify areas to reduce costs and evaluate the return on IT service investments.

•Budgeting and forecasting. Facilitates methodical and fact-based IT budgeting, planning, and forecasting, and links budgets to IT service spending.

•Service quality and utilization. Maps key metrics to IT services value and usage, and reports service levels, service usage, and utilization metrics.

•IT benchmarking. Compares baseline IT costs with industry cost and performance metrics.

Moreover, since cloud service costs are operational expenses, cost allocation can be defined and become more transparent to business units. This allows for more careful assessment of return on investment at a business and technology level.
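This kind of transparent, usage-based allocation amounts to multiplying metered consumption by unit rates and rolling the result up by business unit. A minimal sketch follows; the usage records and unit rates are invented for illustration.

```python
# Usage-based chargeback: multiply each metered resource by a unit rate
# and roll costs up by business unit. Rates and records are illustrative.

UNIT_RATES = {"cpu_hours": 0.12, "gb_stored": 0.05}  # assumed unit rates

usage = [
    {"bu": "sales",   "cpu_hours": 400, "gb_stored": 1200},
    {"bu": "finance", "cpu_hours": 150, "gb_stored": 3000},
]

def allocate_costs(records, rates):
    bills = {}
    for rec in records:
        cost = sum(rec[metric] * rate for metric, rate in rates.items())
        bills[rec["bu"]] = round(bills.get(rec["bu"], 0.0) + cost, 2)
    return bills

print(allocate_costs(usage, UNIT_RATES))
```

Once consumption is captured at this granularity, the same data feeds service costing, budgeting, and benchmarking, which is what gives business units visibility into their actual IT spend.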

Navigating the cloud computing maze

Authored by: Philip Galloway and Jonathan Weil

Figure 1 [10]


Cloud management examples
Many of the larger cloud providers offer add-on web service tools that monitor and track key data metrics from the primary cloud platform.

Cloudkick, recently acquired by Rackspace, provides several cloud monitoring tools designed to support modern application programming interfaces and scalability. Cloudkick is also designed to function across an entire infrastructure environment supporting the option to potentially leverage many cloud providers. [12]

VMware vCenter Chargeback is a tool that enables cost measurement, analysis, and reporting of virtual machines, including private cloud services. vCenter Chargeback gives line-of-business owners and IT teams the ability to see the actual cost of the virtual infrastructure required to support business services. vCenter Chargeback allows the mapping of IT costs to business units, cost centers, or external customers to help them track resource costs and use them more effectively. vCenter Chargeback also allows organizations to support policy-driven accountability for self-service environments so that business owners can “pay as they go” for IT resources [2].

Technology business management (TBM) examples
Some products take cloud management a step further by providing insight not only into server, storage, and network utilization, but also into facility and labor costs. This presents IT decision makers with a holistic view of their IT expenses at a granular level.

Digital Fuel, a SaaS TBM solution vendor, offers an IT service costing solution that allows IT to more completely understand the fully loaded cost of client services, infrastructure services, applications, and the IT services portfolio. This can be used to evaluate the financial impact of business decisions targeted at lowering costs or adding new capabilities. By using activity-based costing, this solution may help customers obtain precise total costs. It also supports the calculation of unit rates and provides integrated reporting to allow users to easily track and trend costs over time.

Similar to Digital Fuel, ComSci offers its own TBM solution consisting of a scalable production process and web-based reporting tool that allows IT and business unit managers to understand and control consumption. ComSci integrates data from operational, financial, and organizational systems, turning disparate data into actionable information, such as IT budgets and forecasts, product/service cost pools, unit costing, and consumption metrics [3].

Gaining insight into IT costs through the cloud
By employing cloud mechanisms, such as Rackspace cloud servers and its Cloudkick monitoring tool, companies can more readily track consumption metrics, providing a better understanding of service costs, which leads to proper cost allocation and overall IT transparency. Likewise, utilizing TBM solutions can provide even greater insight into service costing, allocation, and service quality, which supports better budgeting and forecasting, as well as IT benchmarking. In general, these types of cloud technologies can help IT executives identify areas of cost savings and balance necessary trade-offs between service consumption, quality, and cost.
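The unit-costing logic these tools apply can be illustrated with a small sketch: divide a service's fully loaded cost pool by total consumption to get a unit rate, then allocate cost to consumers in proportion to use. All names and figures below are hypothetical.

```python
# Illustrative unit-costing sketch in the spirit of TBM tools: a service's
# fully loaded cost pool divided by total consumption yields a unit rate,
# which drives proportional cost allocation. All figures are hypothetical.

def unit_rate(cost_pool: float, total_units: float) -> float:
    """Fully loaded cost per unit of consumption."""
    return cost_pool / total_units

def allocate(cost_pool: float, consumption: dict) -> dict:
    """Allocate the cost pool to consumers in proportion to their usage."""
    rate = unit_rate(cost_pool, sum(consumption.values()))
    return {who: round(rate * units, 2) for who, units in consumption.items()}

# Email service: servers + storage + facilities + labor = $12,000/month,
# consumed as mailbox-months by three departments.
mailboxes = {"sales": 400, "ops": 250, "hr": 150}
allocation = allocate(12_000, mailboxes)
```

Here the unit rate works out to $15 per mailbox-month, so sales carries $6,000 of the pool, operations $3,750, and HR $2,250; the same pattern extends to any metered service.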

Managing customer data privacy in the cloud
Many IT leaders are concerned about maintaining control of private customer data within the cloud. Gaining the benefits that cloud technology provides while controlling the confidentiality of sensitive information will require careful planning and monitoring.

Protecting customer data privacy is a priority
In the 2010 State of the CIO survey by CIO Magazine, 25 percent of the 500+ IT leaders surveyed said protecting customer data privacy was one of their top management priorities [1]. Similarly, a survey conducted by Lockheed Martin and its cyber security alliance partners, announced in April 2010, found that 70 percent of IT leaders in federal, defense/military, and intelligence agencies were most concerned about data security, privacy, and integrity within the cloud [4]. In particular, IT organizations in highly regulated industries, such as health care, financial services, and insurance, should pay close attention to controlling their use of personally identifiable information (PII), which can be used to uniquely identify, contact, or locate a single person [5].

[Figure: Technology business management and cloud management solutions layered over services running on cloud infrastructure (public and private) and on-premise infrastructure.]

The 2011 CIO Compass 84

Why is hosting PII within cloud service providers a risk?
The single most important question driving concern about housing PII within a cloud-based service is: “Who ultimately owns the data?” By hosting data within the cloud, IT leaders should be mindful of the risk of their data being exposed to search by law enforcement agencies, such as the FBI, or even civil plaintiffs. For example, in April 2009, FBI agents seized a large portion of a data center in Dallas, TX in an effort to gather evidence against various companies accused of defrauding AT&T and Verizon. The equipment they seized was used for voice-over-IP services for the accused companies. However, it also housed data leveraged by several other companies. One of the companies affected, Liquid Motors, stated in a court filing, “Although the search warrant was not issued for the purpose of seizing property belonging to Liquid Motors, the FBI seized all of the servers and backup tapes belonging to Liquid Motors, Inc.” [6].

What actions can be taken to protect your organization’s PII within a cloud?
When choosing to leverage cloud services containing PII, IT leaders must take steps to prepare their organizations to mitigate the risk of data loss. In evaluating and selecting a cloud vendor, five key items must be considered: encryption and key management, data monitoring, data governance, data forensics, and virtual private clouds (VPC).

Encryption and key management
The vendor should provide metrics reports on data encryption. These reports measure the levels of protection of sensitive data while at rest, in use, and in transit. Make sure encryption key servers are not co-located with hosted data; key servers should be housed in a separate location.
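The principle of keeping keys away from hosted data can be shown with a toy sketch. The XOR cipher below stands in for real encryption (a production system would use a vetted cryptographic library); the point is that the data store and the key server are separate, so seizing the hosted data alone yields nothing readable.

```python
# Toy sketch of separating encryption keys from hosted data. The XOR cipher
# is illustrative only; use vetted cryptography in practice. The structure
# shows why co-locating key servers with hosted data defeats the purpose.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

key_server = {}  # housed in a separate location from the hosted data
data_store = {}  # what the cloud provider (or a subpoena) can reach

def store(record_id: str, plaintext: bytes) -> None:
    key = secrets.token_bytes(len(plaintext))  # per-record random key
    key_server[record_id] = key
    data_store[record_id] = xor(plaintext, key)

def retrieve(record_id: str) -> bytes:
    return xor(data_store[record_id], key_server[record_id])

store("cust-42", b"SSN=123-45-6789")
assert data_store["cust-42"] != b"SSN=123-45-6789"  # at rest: unreadable
assert retrieve("cust-42") == b"SSN=123-45-6789"    # with the key: recoverable
```

If only `data_store` were seized, the record would be random-looking bytes; both stores are needed to recover the PII, which is the property the separate-location guidance aims for.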

Data monitoring
Auditing practices, as expected in any internal data center, should also be maintained within cloud services. Also consider using data monitoring services, such as the aforementioned Cloudkick, which provide visibility into resource utilization and overall demand patterns. If these services are not provided, trusted third-party vendors such as Hyperic may also be used.
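The core of such a monitoring check can be sketched as a threshold comparison over sampled metrics. The metric names and limits below are hypothetical, not drawn from any particular monitoring product.

```python
# Sketch of the kind of utilization check a monitoring service performs:
# compare sampled metrics against thresholds and flag anything over limit.
# Metric names and threshold values are hypothetical.

THRESHOLDS = {"cpu_pct": 85.0, "disk_pct": 90.0, "mem_pct": 80.0}

def check(samples: dict) -> list:
    """Return (metric, value) pairs that breach their threshold."""
    return [(m, v) for m, v in samples.items()
            if m in THRESHOLDS and v > THRESHOLDS[m]]

# One polling cycle: CPU and memory are over limit, disk is healthy.
alerts = check({"cpu_pct": 92.5, "disk_pct": 71.0, "mem_pct": 83.1})
```

A real service wraps this loop with scheduled polling, trend storage, and notification channels, but the alerting decision itself is this simple comparison.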

Data governance and risk metrics
Cloud-specific security metrics are evolving rapidly. Because of this, it is critical for IT organizations to continuously revise how they measure their specific data security needs within the cloud.

One example is MetricsCenter, a cloud-based tool demonstrated by security experts at SecureCloud 2010, a global cloud security conference hosted by the Cloud Security Alliance, ENISA, and ISACA [7]. Specifically designed to house business-contextual security metrics, MetricsCenter’s YouAreHere benchmarking analysis provides cloud service transparency and accountability by demonstrating how individual performance compares with a community of peers across business unit, company, and product line [8].

Another example is nCircle Benchmark, a service that provides compliance metrics and security benchmarks against internal goals and industry standards. Providing scorecards in areas such as vulnerability management, identity and access management, and configuration auditing, the service is designed to present results in a context that makes sense to the business [11].

Data forensics
Service provider contracts should provide some level of data forensic capabilities. Many state and local governments now have “breach notification” laws mandating that companies notify customers when data is lost [9].

Virtual private cloud (VPC)
A relatively new cloud computing offering aimed at the enterprise market, a VPC is a private cloud existing within a shared or public cloud [13]. Sometimes referred to as a “hybrid cloud,” VPC platforms such as the Google App Engine Secure Data Connector may be used to extend existing security policies running within internal infrastructures to cloud-based resources.

It’s impossible to completely remove the risks associated with housing sensitive information such as PII within the cloud. A proactive, constantly evolving strategy is critical to maintaining an organization’s trust and confidence in using cloud-based services.

Planning for the future with the cloud
With the right strategic planning, CIOs can navigate the cloud computing maze to better serve the needs of the business by improving their visibility into IT costs while addressing concerns about customer data privacy.


References:
1. "2010 State of the CIO." CIO Magazine (2010).

2. "VMware VCenter Chargeback: Measure & Analyze Virtual Machine Costs." VMware Virtualization Software for Desktops, Servers & Virtual Machines for Public and Private Cloud Solutions. Web. 20 Oct. 2010. <http://www.vmware.com/products/vcenter-chargeback/>.

3. IT Cost Transparency and Technology Chargeback Solutions — ComSci. Web. 21 Oct. 2010. <http://comsci.com/solutions.php>.

4. "Awareness, Trust and Security to Shape Government Cloud Adoption." Lockheed Martin White Paper (2010).

5. "Personally Identifiable Information." Wikipedia, the Free Encyclopedia. Web. 21 Oct. 2010. <http://en.wikipedia.org/wiki/Personally_identifiable_information>.

6. "When the FBI Raids a Data Center: A Rare Danger." Network World. Web. 21 Oct. 2010. <http://www.networkworld.com/news/2009/042209-when-the-fbi-raids-a.html>.

7. Cloud Security Alliance (CSA) — Security Best Practices for Cloud Computing. Web. 23 Oct. 2010. <http://www.Cloudsecurityalliance.org/sc2010.html>.

8. Quantitative Analysis for Better Decisions. Web. 24 Oct. 2010. <https://www.metricscenter.net//>.

9. “State Security Breach Notification Laws” National Conference of State Legislatures. Web. 12 Oct. 2010. <http://www.ncsl.org/default.aspx?tabid=13489>.

10. “Q&A: By 2011, CIOs Must Answer The Question, ‘Why Not Run In The Cloud?’” Forrester Research. 14 Aug. 2009. <https://www.nda.com.au/gettingstartedinthecloud.pdf>.

11. “nCircle Benchmark.” Web. 6 July 2011. <http://www.ncircle.com/index.php?s=products_benchmark>.

12. “About Cloudkick.” Web. 23 Aug. 2011. <https://www.cloudkick.com/about>.

13. “Virtual private cloud.” Wikipedia, the Free Encyclopedia. Web. 22 May 2011. <http://en.wikipedia.org/wiki/Virtual_private_cloud>.


Cloud strategy: Private, public, or hybrid?
Cloud computing is here to stay, and now the task is to define how it fits into the information technology (IT) landscape. The first order of business is to determine which model — public, private, or hybrid — will work most effectively for your organization.

Making that determination means diving deep into both your business strategy and your goals for operational efficiency. In many cases, it requires organizations to understand themselves and their core capabilities. Consider starting with these questions:

• Will you be a consumer of cloud services — or a provider?

• What types of services will run in the cloud?

• Do you need to comply with specific regulatory or legal requirements?

• Does it make better financial sense to maintain your cloud infrastructure in-house, outsource it, or run a mix of those two solutions?

Along the way, you should evaluate your enterprise application architecture against the maturity of your IT infrastructure. This will help determine how easily and effectively you can automate cloud-related processes within your organization. Hosting a private cloud is a major undertaking that spans technology, business processes, organizational structure, and in-house IT competency.

This article provides an approach to understanding which type of cloud computing strategy should be considered. It also highlights the issues you should manage in choosing a cloud computing model and effectively integrating it into your organization. Choices among public, private, and hybrid cloud models boil down to issues of access, ownership, and control.

Figure 1: Cloud service source spectrum

Understanding services is crucial to formulating a sound cloud strategy
An important early step in developing a cloud strategy is the creation of a services portfolio — a document that defines the services needed to support the functional and operational strategies of the enterprise. Depending on the complexity of your organization, this could be something as simple as a list of services or a more in-depth profile created using a portfolio management toolkit.

Once your portfolio is in place, you will want to assess each service to determine whether it is a cloud candidate. Then you can decide how to source each one — private or public — depending on the scope and criticality of each service.
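One way to sketch that assessment is a simple screening rule applied per service. The criteria and cutoffs below are illustrative assumptions, not a prescribed methodology; a real portfolio review would weigh many more factors.

```python
# Sketch of a cloud-candidacy screen over a services portfolio: test each
# service against a few criteria and suggest a sourcing model. Criteria
# and decision rules are hypothetical, for illustration only.

def source_recommendation(service: dict) -> str:
    if service["regulated_data"] or service["differentiating"]:
        return "private"     # control and compliance come first
    if service["standardized"] and not service["latency_sensitive"]:
        return "public"      # commodity, reusable workloads
    return "on-premise"      # everything else stays put for now

portfolio = [
    {"name": "payroll", "regulated_data": True,  "differentiating": False,
     "standardized": True,  "latency_sensitive": False},
    {"name": "email",   "regulated_data": False, "differentiating": False,
     "standardized": True,  "latency_sensitive": False},
]

plan = {s["name"]: source_recommendation(s) for s in portfolio}
```

Under these assumed rules, payroll (regulated data) lands in a private cloud while commodity email goes public, mirroring the spectrum the article describes.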

The cloud service source spectrum (Figure 1) spans four models:

• Private cloud (typical adopters: large enterprises, healthcare providers, government): high data ownership, restricted service access, high degree of control.

• Community cloud (typical adopters: supply chain networks, research groups, government): similar to a private cloud, the difference being that a “community” is the customer instead of a single enterprise.

• Hybrid cloud (typical adopters: small-medium enterprises): hosted virtual private cloud, custom-designed cloud, dedicated hardware.

• Public cloud (typical adopters: individual consumers, small and medium businesses, edges of enterprises): publicly available, shared infrastructure, little to no control of data, multi-tenant environment.

Authored by: Sumit Sharma


When would you choose a public cloud over a private cloud?
Services that are critical to differentiating the business are not readily available on public clouds. In addition, required compliance, security, and governance capabilities may be lacking today. Over time, however, this will change as demand increases and public cloud providers evolve their service standards and agreements.

One development to monitor is the emergence of cloud brokering services, which could potentially draw from different sources to package services tailored to a particular organization’s requirements. This approach requires standardization of cloud interoperability interfaces, metadata, and other cloud-related architecture standards and protocols.

Because the evolution of public clouds is a work in progress, differentiated services that are critical to the business are more likely to be candidates for private clouds instead. Services on the “commodity” end of the spectrum, however, are strong candidates for public sourcing because they are heavily standardized and reusable.

Network latency is also an issue to consider, factoring in the geographic footprint of your organization and requirements for network performance, especially with applications that have to send and receive a great amount of data. In cases where Internet latency is not acceptable, public clouds are not a viable option.

Some argue that a private cloud will not be able to match the elasticity of a public cloud because it operates on a smaller set of systems. This should not pose a problem if capacity planning and other requisite process workflow automation are in place. That said, there is a potential drawback to consider. A private cloud infrastructure stack must be distributed across a heterogeneous environment with many different types of business units and systems, each with their own service level agreements. As the complexity of the environment increases, so do costs and the need for caution. Managing the private cloud environment as more internal customers migrate onto it requires smart planning and sustained management commitment.

Does your enterprise have what it takes to pursue a private cloud?
From a technical perspective, hosting a private cloud requires full-fledged cloud service life cycle management. This stretches from automating full-stack layered provisioning across heterogeneous platforms to managing service retirement and resource reclamation. Consider investing in the following:

• Virtualization software.

• Process automation.

• Service provisioning and monitoring.

• Infrastructure software and hardware upgrades in line with next generation data center features.

In addition, private clouds need to provide a self-service portal supported by a service catalog. You can either develop this portal yourself or purchase a front-end application to allow users to access the cloud. As a practical matter, users should be able to access the cloud from inside the enterprise network, from outside the network via a secure VPN session, or through the kind of secure Web browser often found in the public cloud.

An effective cloud computing environment has virtual servers interacting with virtual storage and virtual network components to provide users access to services and data. As such, elastic load balancing is critical. A pool of users accesses the cloud at different times for different services. If the pool of users changes, elastic load balancing allows the enterprise to automatically add or remove computing resources without impacting the user’s experience.
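The elastic scaling decision behind that load balancing can be sketched as sizing the server pool to current demand with some headroom, within fixed bounds. The capacity figures below are hypothetical.

```python
# Sketch of an elastic scaling decision: size the virtual server pool to
# the active user count, with headroom for bursts, bounded by pool limits.
# All capacity figures are hypothetical.
import math

USERS_PER_SERVER = 200   # assumed capacity of one virtual server
HEADROOM = 1.25          # 25% spare capacity so bursts don't degrade service
MIN_SERVERS, MAX_SERVERS = 2, 50

def desired_servers(active_users: int) -> int:
    """Servers needed for the current user pool, clamped to pool bounds."""
    needed = math.ceil(active_users * HEADROOM / USERS_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

# As the user pool changes, servers are added or removed automatically.
assert desired_servers(100) == 2       # never below the minimum floor
assert desired_servers(1000) == 7      # ceil(1250 / 200)
assert desired_servers(100_000) == 50  # capped at the pool maximum
```

A production autoscaler would add cooldown periods and gradual step sizes so the pool does not thrash, but the core sizing rule is this calculation run on each monitoring cycle.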

Mastery of these elements could allow you to expand service offerings within the private cloud to external users. In many ways, it is like offering a public cloud service.

Does your enterprise have what it takes to pursue building a private cloud? The question is more about change management, culture, and organization than it is about technology. Private cloud calls for standardizing commonly repeated operating procedures that range from provisioning to monitoring to measuring to decommissioning. It also calls for automating deployment and management processes for true agility and elasticity.

Finally, you should consider a services interface to the business to support a chargeback model for ongoing capacity planning and management. From a customer service point of view, providing access to users requires sophisticated workflow automation to increase the speed and efficiency of cloud deployments.


Big and public
For very large enterprises, public cloud computing is a viable option for commodity services. The sheer scale of transactions for services with “bursty” characteristics can benefit greatly from an elastic, public cloud approach. That said, there is a point where the cost justification for such services may indicate the need for taking the cloud private.

Going the hybrid route
Some argue that private cloud computing is the natural evolution of IT modernization. But remember, it can be difficult to change course once you have made the investment commitment. Depending on the scale, financial justification is realized over a prolonged period. During this time, there may be innovations in cloud architectures that leapfrog existing private cloud architecture.

In some cases, therefore, the most effective solution may be a hybrid model. You can still gain the benefits of a dedicated cloud environment, but leave most of the heavy lifting and investment to a vendor’s multi-tenant virtual private cloud environment. This can not only reduce costs, but also dramatically decrease time to market for cloud services. An Infrastructure as a Service cloud provider can rapidly set up a hosted virtual private cloud for your enterprise that is fully and properly bound by your enterprise security parameters.

A hybrid solution may also make sense when an application or some functionality (testing, for example) is hosted on a public cloud. It may need to be customer facing, but still needs to interface with application program interfaces (APIs) hosted via a private cloud. In this case, the application can reside in the internal cloud infrastructure as well as on the public cloud. This requires seamless cloud-to-cloud interoperability. That is a big challenge given today’s high level of interconnection and the rapidly accelerating pace of innovation.

Regardless of whether you operate your own cloud or source a private one, there are a host of operational implications to address. Even though you may have the ability and resources to build and operate a cloud, the decision of whether to do so is a strategic one. You should choose the solution that most effectively supports your business model.

Summary
Choosing the most effective cloud sourcing strategy is complex because the business and technology context is evolving rapidly. Make the commitment to think it through completely.


Software as a service (SaaS) within enterprise resource planning (ERP) has been gaining worldwide adoption and momentum, with a compound annual growth rate (CAGR) of 10 percent anticipated through 2014 [1].

The rapid growth of SaaS ERP is driven by four primary forces [2]:

1. Intensifying business challenges as a result of the recent financial crisis, rising operating costs, escalating competition, and changing workplace requirements.

2. Growing frustration among end users and executives with the costs and complexities associated with traditional on-premise applications.

3. Broad acceptance of consumer-oriented, on-demand services that are setting the standard for ease-of-use and cost-effective software.

4. Rapid evolution of SaaS-enabled technologies that are becoming economical to develop and implement.

Given the rapid growth of the SaaS model and its growing market penetration in the front office, an increasing number of companies are evaluating the benefits and functionality of the SaaS operating model for the back office as well. Traditionally, SaaS ERP providers have targeted small- and medium-size businesses (SMBs), but current trends indicate that larger businesses are also evaluating SaaS ERP as a viable option. Companies are particularly attracted to the SaaS model’s perceived lower capital investment and faster deployment time, as well as faster refresh cycles of new functionality from SaaS vendors, which are usually three times a year versus traditional ERP vendors’ annual or biannual releases.

ERP SaaS
SaaS applications use an approach in which an external vendor hosts the enterprise application and provides it as an on-demand service to the client. The benefits of such a design include on-demand scalability (both up and down), anytime and anywhere access, and increasingly pay-per-use functionality. Since the entire service is hosted by the service provider, the client is not responsible for owning, maintaining, or upgrading the software and hardware.

SaaS is expanding its presence in everyday life with products such as Google Docs in the cloud. Cloud computing, which includes SaaS, is changing the way businesses run technological processes and services. Because of this relatively easy-to-use application service model, many SMBs are adopting SaaS for cost savings and increased efficiency. The SaaS adoption trend has been especially successful in the customer relationship management (CRM) market, where vendors are experiencing consistent growth and expansion of their client portfolios. Although SaaS is growing in the areas of CRM and everyday applications, it has had limited adoption in the ERP market to date. Accounting for only five percent of the total ERP software market in 2008 because of its focus on SMBs, SaaS has the potential to expand more in this area. It is now overcoming its initial niche focus and is taking on the traditional strengths of established ERP offerings. According to Gartner, SaaS could be adopted across four major components of the ERP suite, although some difficulties do exist:

ERP in the cloud: SaaS-based application solutions

Authored by: Johannes Raedeker, Satish Maktal, John Hsu, and Dwij Garg


• Enterprise asset management: Conservative outlook on adoption due to its complexity and the high degree of industry and process knowledge required.

• Financial management systems: A minor part of the overall market because core applications have matured and strategic applications require complex integration and configuration.

• Human capital management: SaaS has been available and continues to grow. Applications do not require high levels of customization and can be used “out of the box.”

• Manufacturing: Limited applicability because manufacturing is a minority component of the market and industry and manufacturing processes have complex needs.

In order to better understand if SaaS ERPs are a good fit for your company, this article compares the SaaS delivery model to the traditional on-premise model and provides you with practical insights about how to incorporate SaaS into your future application portfolio.

Characteristics of SaaS compared to on-premise applications
SaaS ERP requires that decision makers and implementers 1) understand what SaaS really means to the company, 2) estimate the organization’s readiness to align with the SaaS-driven processes, and 3) prepare the right approach to managing the rollout and maintenance phases of projects.

Layer          | Cloud-based services               | Traditional on-premise services
Usage          | Software as a Service (SaaS)       | Applications
Design         | Development as a Service (DaaS)    | Development tools
Arrangement    | Platform as a Service (PaaS)       | Software layers
Infrastructure | Infrastructure as a Service (IaaS) | Servers and hardware

The table above outlines the underlying characteristics that make cloud-based services different from traditional applications delivery approaches. The SaaS ERP approach provides a new dynamic and value to organizations. Here is a summary of SaaS ERP characteristics:

• SaaS ERP can be implemented relatively quickly. Companies can be up and running very rapidly — often in a matter of weeks or a few months, in contrast to traditional on-premise applications, which generally take several months or years to implement. In addition, patches and upgrades are delivered in much more rapid cycles, allowing companies to capitalize faster on new features and functions.

• SaaS ERP drives lower information technology (IT) staff needs. Based on the SaaS architecture, an enterprise can add applications without having to justify the expense of adding IT staff to operate and manage them. This is a key characteristic that distinguishes SaaS from on-premise software.

• SaaS ERP provides a consistent way to enforce regulatory compliance. Unlike on-premise software, SaaS enhances process controls, auditability, and consistency, which are generally welcomed by security and compliance professionals.

• SaaS ERP is cost effective. It can be scaled effectively as a business expands or contracts and can be especially effective for smaller companies, where the large capital investment associated with an ERP has made it unfeasible to undertake an ERP project. While SaaS models do not necessarily cost less in the long run, they can adapt better to variation in business cycles. In addition, customers have the latest version of the software without having to endure implementation or upgrade pains.

However, SaaS ERP architecture is relatively new, and not every existing feature from on-premise software has been replicated in the SaaS architecture. If a company is looking for more sophisticated software or for particular functionality specific to its industry, the company may have to either wait until SaaS functionality improves or choose on-premise software. That said, SaaS ERP architecture is maturing and growing in scale, and the functionality gap is likely to diminish over the next three to five years.


Potential benefits in adopting SaaS ERP

Expected benefits:

• Ability to meet business needs quickly.

• Ability to scale capability in response to demand fluctuations.

• Decreased technology acquisition and support costs.

• Potential accounting and tax benefits.

Potential side effects:

• Loosely governed application portfolios.

• Incompatible application architectures.

• Sunk infrastructure cost:

– Organizational lock-in.

– More complex vendor management.

– More complex security and control requirements.

[Figure: benefit themes include increased speed to market, rapid deployment, increased flexibility, streamlined operations, and low technology costs.]

What are SaaS ERP’s challenges?
Some likely challenges of SaaS ERP are:

• Adopting SaaS ERP into a current on-premise IT environment increases integration costs for system maturity-level enhancements, decreases customization options for industries and geographies with local requirements, and creates security issues when sharing sensitive data with a third-party provider. In many cases, the SaaS model is deployed in a multitenant environment, which means several companies, including potential competitors, share the same hardware and underlying application. This requires stringent security and access controls.

• Cost savings may be limited if requirements are complex and require significant customization. The SaaS ERP model is a better fit for companies that desire straightforward functionality with little to no customization.

• Testing may be neglected because upgrades occur on a regular vendor-driven schedule and users are not in control of the change stream or timing. This may cause issues, especially for cyclic businesses that require all eyes on operations during busy seasons.

• The company cannot prevent “buggy” functionality from being introduced into the SaaS ERP system because the vendor controls the software.

• Companies that require a high degree of integration of the ERP with many other systems will require a complex integration architecture. The SaaS vendor typically supports its application up to predefined interface points; the customer is responsible for end-to-end integration. This can lead to finger pointing regarding root causes and resolution of technical issues.


Key considerations and evaluation criteria for adopting SaaS ERP
In many respects, evaluating SaaS applications is similar to evaluating on-premise applications. The key criterion is delivering the functionality that the business requires; however, thought should also be given to the key considerations listed below:

• Applications portfolio. Any key application selection should be evaluated within the context of your current and future applications portfolio. A solution may be a great functional fit for specific business requirements, but if it does not easily interoperate with existing solutions and does not provide a seamless integration or end-user experience, it may become more expensive in the long run. For example, if business users must log in to two or more different applications or screens to perform a process, it will hinder their productivity and increase the training required for new employees.

• Integration strategy. SaaS ERPs are being created from the ground up with new technologies and paradigms. This means that traditional integration methods may not work seamlessly with your existing applications. This may especially be true if your applications portfolio consists mainly of SaaS solutions or a mix of SaaS and on-premise solutions. The promised SaaS savings in internal IT personnel may simply shift effort from maintaining the ERP itself to maintaining the integrations between the SaaS ERP and other applications.

• Data warehousing and business intelligence. In the SaaS model, the data resides on the vendor’s servers and is accessible through the Internet. This means that if your data warehouse is in-house, you could be required to extract terabytes of data from the vendor’s servers and bring it into your data warehouse over the Internet, which may create a bottleneck. Some SaaS vendors use proprietary data formats, which may also require you to write additional interfaces for transforming the data.

• Availability of skilled people. SaaS ERP solutions are in a nascent stage today, and the availability of skilled people to implement and maintain your applications environment may be limited.

• Data controls and ownership. The cloud model has been criticized by privacy advocates because the companies that host cloud services control the communication and data stored between the user and the host company, which gives them the ability to monitor at will, lawfully or unlawfully. External management of security-based services can also be a contentious issue.

• Viability of SaaS providers. The number of SaaS solutions and vendors is growing. Judging by past trends in on-premise solutions, a coming consolidation of SaaS vendors and solutions would not be surprising. Careful consideration should be given to the long-term viability of the vendors and support contracts.
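The data-warehousing bottleneck noted above is easy to quantify with back-of-the-envelope arithmetic. The link speed and data volume below are hypothetical, and real transfers would run slower than this ideal figure due to protocol overhead.

```python
# Back-of-the-envelope check of the warehouse-extract bottleneck: how long
# does pulling a given data volume from a SaaS vendor over the Internet
# take? Link speed and volume are hypothetical; overhead is ignored.

def transfer_hours(terabytes: float, link_mbps: float) -> float:
    bits = terabytes * 1e12 * 8          # decimal terabytes to bits
    seconds = bits / (link_mbps * 1e6)   # at the nominal link rate
    return round(seconds / 3600, 1)

# Extracting 2 TB over a dedicated 100 Mbps link, best case:
assert transfer_hours(2, 100) == 44.4    # nearly two days of transfer
```

At that rate a nightly full extract is clearly infeasible, which is why incremental extracts or vendor-side staging usually enter the integration design.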

Risk and control issues:

• Data controls and ownership: Who will own the data? How might it be used?

• Data location and access: Where will your data reside? Who can access your data?

• Legal/regulatory compliance: Can cloud vendors comply with your regulatory/compliance requirements?

• Backup, retention, disposal: Do the cloud vendor’s retention/destruction timelines and practices meet your requirements?

• Availability and reliability: How are reliability, access, and availability assured by cloud vendors?

• Disaster recovery: What are the operational continuity measures and recovery timelines?


Key observations and recommendations
A large number of small and medium companies may adopt SaaS ERP because of its promise of lower initial capital investment risk, faster deployment, and a reduced need for professional services. They should consider these points:

• SaaS ERP fit depends on the company profile. Our observation is that SaaS ERP tends to cater to companies that are rapidly growing and wish to be early adopters in testing and evaluating the latest technologies. In many of our evaluations, we see that SaaS ERP is not as mature as many of the existing on-premise ERP solutions, but it presents a strong business case for companies that are looking for a more general on-demand ERP suite while reaping potential benefits.

•Do not implement SaaS ERP on cost alone: Generally, do not base a SaaS ERP decision solely on perceived cost savings. The long-term total cost of ownership (TCO) for a SaaS ERP may be as significant as, if not greater than, the TCO of implementing a traditional on-premise solution.

•What about the large companies? We are cautiously optimistic for larger companies considering SaaS ERP. Larger companies would like to see SaaS ERP vendors address specific hindrances, such as integration with complex data and a wide variety of legacy systems. For multinational companies, general availability of SaaS ERP solutions that satisfy particular local and country-specific requirements, such as tax, payroll, and audit, is still limited. Since SaaS vendors support a single code base, uncommon features may not be supported, because vendors cannot scale upgrades across their customer base without dramatically driving up research and development costs.

•Analyze, analyze, analyze: Companies should have a specific objective for implementing SaaS ERP. They should evaluate their SaaS ERP adoption against their own strategic IT vision and then assess overall applicability through a variety of analytical assessments, including return on investment, benefits and drivers, risk, and TCO.

Companies should understand that implementing an effective SaaS ERP solution still requires sizeable budget, resources, and training. They should also factor in other considerations, such as operating model (centralized or decentralized with many small remote offices), anticipated growth rate (number of current and future users), sensitivity around security measures, and integration needs. Since the typical ERP shelf life is about 10 years, companies must look beyond the perceived initial cost savings and conduct a long-term cost/benefit analysis to decide if SaaS ERP adoption would fit their particular strategic vision and needs.
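The long-term cost/benefit analysis described above can be sketched in a few lines; the dollar figures and cost categories below are illustrative assumptions, not vendor data.

```python
def tco(upfront: float, annual: float, years: int = 10) -> float:
    """Total cost of ownership: upfront investment plus recurring annual costs."""
    return upfront + annual * years

# Hypothetical figures for a mid-size company (illustrative, not vendor quotes).
on_premise = tco(upfront=2_000_000, annual=400_000)  # licenses, hardware, implementation; then maintenance and staff
saas = tco(upfront=300_000, annual=600_000)          # configuration and migration; then subscription fees

print(f"On-premise 10-year TCO: ${on_premise:,.0f}")  # $6,000,000
print(f"SaaS 10-year TCO:       ${saas:,.0f}")        # $6,300,000
```

In this hypothetical case, the subscription fees overtake the on-premise TCO over the ten-year shelf life, illustrating why the decision should not rest on the lower upfront cost alone.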

References:
1. Gartner, "Forecast Analysis: Software as a Service, Worldwide, 2009–2014, Update," 11 November 2010, Table 2.
2. ThinkStrategies, "Enabling SaaS: Beyond Development and Delivery to Marketing and Support."


Cloud-based application rationalization
Authors: Jeff Krugman and Shavin Thaddeus Shahnawaz

Marketplace dynamics and the recent state of the economy have reinforced the need for companies to significantly reduce costs and improve operational flexibility in the face of increasing uncertainty. One of the first places companies look for cost reduction is their information technology (IT) organization, because the current IT environment of many companies consists of numerous disparate applications whose costs have grown over time. Additionally, the complexity and inconsistency engendered by this application sprawl inhibit an organization's ability to be operationally lean and nimble.

Application rationalization is a specific IT cost reduction initiative by which overlapping applications are consolidated or reduced and obsolete applications are retired. The primary quantitative benefits of application rationalization are reduced spending on application and infrastructure maintenance and support, as well as on training, due to a consolidated application portfolio. In addition, application rationalization, when done in accordance with professional standards, can also provide increased operational flexibility and a reduced vendor footprint. In the traditional approach to application rationalization, the target portfolio consists of applications and underlying infrastructure that are primarily owned and maintained in house and on premise. While the intended benefits of application rationalization can still be achieved through the traditional approach, incorporating cloud computing can greatly increase both the quantitative and qualitative benefits.

Cloud computing is a "pay-as-you-go" paradigm by which infrastructure, platforms, and/or applications can be procured as metered services from external vendors. Because applications and infrastructure are provided as services in the cloud computing model, IT costs shift from capital expenditures to operational expenditures, reducing upfront investments and payback periods while increasing the return on assets. In addition, some application and infrastructure maintenance and support activities can be borne by the cloud vendors. Finally, the high scalability (ramping infrastructure and/or applications up or down) in a reduced time frame enabled by cloud computing increases operational flexibility. Therefore, moving the target-state application portfolio and supporting infrastructure to the cloud, as opposed to the traditional on-premise, in-house approach, can greatly enhance the benefits of application rationalization.
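The CapEx-to-OpEx shift described above shortens payback; a minimal sketch of the payback arithmetic, using hypothetical cash flows:

```python
def payback_period_years(upfront: float, annual_net_benefit: float) -> float:
    """Years until cumulative net benefit recovers the upfront investment."""
    return upfront / annual_net_benefit

# Hypothetical: identical annual benefit, but the cloud option shifts most of
# the cost from an upfront capital outlay to pay-as-you-go operating expense.
print(payback_period_years(1_500_000, 500_000))  # on premise: 3.0 years
print(payback_period_years(250_000, 500_000))    # cloud: 0.5 years
```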

Overview of cloud computing services

Before delving further into the details of how cloud computing can enhance the benefits of application rationalization, a basic overview of cloud services and their deployment models is in order. There are three types of cloud services for consideration (figure 1):

•Software as a service (SaaS), which provides on-demand use of software over the Internet or private networks. These applications are typically targeted toward private users and/or business users, with well-known examples including social networking applications such as Facebook, and CRM tools.

•Platform as a service (PaaS), where tools and environments are provided to build and operate cloud applications and services. It targets users in the software development organization and enables the paradigm of "programming on the Internet" by facilitating the design, building, and delivery of applications and services from the Web.

•Infrastructure as a service (IaaS), where computing and storage resources are provided as Web-based services from the cloud. Virtual CPUs, disk space, and database services are offered in a scalable and elastic fashion. Examples include RightScale and GoGrid.

Figure 1: Types of cloud services

•Software as a service (SaaS): Provider licenses an application to customers for use as a service on demand. Sample applications: noncore applications, such as HR and CRM; office productivity/utility apps; e-mail, document management, collaboration, and workflow.

•Platform as a service (PaaS): Designing, building, and delivering applications and services from the Web; a computing platform and solution stack as a service. Sample applications: application design, development, and test environments; data storage/access services; Web services integration; database system integration.

•Infrastructure as a service (IaaS): Computer infrastructure (virtualized processing, storage, and network environment) as a service that is scalable and elastic. Sample applications: HPC/grid, analytics; Web apps and services; scenario analysis (e.g., Monte Carlo simulations); storage services.

(The figure shows these services layered over a stack of hosted applications, infrastructure software, operating systems, virtualization, servers, connectivity, and the data center.)

The cloud services can be deployed in one of the following three ways (figure 2):

•Public clouds where services from vendors can be accessed across the Internet using systems in one or more data centers, shared among multiple customers, with varying degrees of data privacy control.

•Private clouds where cloud computing architectures are built, managed, and used internally by an enterprise using a shared service model with variable usage of a common pool of computing resources.

•Hybrid clouds, which, as the name suggests, are a mix of public and private, where the organization manages some resources in house and has others provided externally.


Figure 2: Cloud deployment models

Public cloud:
•Third-party owned and managed; multitenant; metered by use (OpEx cost model).
•Shared infrastructure (virtual separation); no ownership/control of infrastructure.
•Highly scalable and elastic.
•Access via Internet or private link.
•Data security and privacy are concerns.
•More scalable and cost effective than private cloud.

Private cloud:
•Firm or vendor owned/managed; single tenant.
•OpEx and CapEx model similar to self-owned infrastructure; off-premises cost model may vary depending on vendor.
•Dedicated infrastructure with control over the scalability deployed.
•Access via LAN or private link.
•Consistent level of control over data and access security, privacy, and governance.
•Data security and privacy are not a concern, unless it is an off-premises private cloud.

Hybrid cloud:
•A model in which an application workload uses both private and public cloud environments.
•Workloads that should run in the private cloud: those dependent on proprietary platforms or appliances not offered by cloud providers, or with tightly coupled interfaces or dependencies on shared infrastructure services.
•Workloads that could be moved to the public cloud: those that can benefit from public cloud scalability and elasticity, or that run on commodity infrastructure and require quicker time to market.

Based on the business objectives to be realized from the application rationalization initiative and the organizational constraints within which the application portfolio needs to operate, a specific cloud computing model, or combination of models, will need to be chosen.

How cloud computing enhances the benefits of traditional application rationalization

In the conventional application rationalization approach, business processes are standardized across the enterprise while an enterprise application architecture is established. The resulting target-state application portfolio consists of a reduced set of applications and supporting infrastructure strongly tied to enabling the achievement of business objectives. While IT costs are reduced in comparison to the originally disparate application portfolio, and a greater level of operational flexibility is enabled, there is potential for much more. For instance, a significant breadth and depth of human resources, software, and hardware will still be required to support and maintain the portfolio of applications on premise. Rolling out upgrades, break/fixes, enhancements, and other routine maintenance activities across the 'consolidated' application portfolio adds to the cost burden due to the complexity and quantity of effort required. Furthermore, the time and capital investment required to commission, ramp up, or ramp down traditional IT environments on premise can still impede organizational flexibility and scalability to some extent.

The following are the primary benefits of cloud services which help enhance the value from conventional application rationalization efforts:

•Lower upfront investment and correspondingly quicker payback, since IT assets can be commissioned in the cloud without large investments in software and hardware. In essence, capital expenses are shifted to operating expenses in pay-as-you-go, vendor-provided cloud solutions where billing is tied to metered use of resources. This benefit is greatest with public clouds; however, even with private clouds, capital investments can be greatly reduced through the sharing of services and infrastructure.

•Increased speed/agility of deployment since clouds can provide almost immediate access to a pool of hardware and software resources that can be allocated and provisioned almost on demand.

•Improved flexibility and scalability, since cloud computing resources can be scaled up or down almost instantaneously (depending on the contract), avoiding expensive wait times and capacity constraints.

•Increased focus on core competencies since organizations can significantly reduce the dependency on traditional IT by offloading some of the responsibility and management of IT capabilities to cloud service providers.

Rationalizing the application portfolio to a cloud-based environment can significantly increase the benefits of an application rationalization initiative by increasing agility, reducing capital spending, and improving the sharing and utilization of IT resources.


Our approach to rationalizing the application portfolio to the cloud

Similar to the traditional approach to application rationalization, rationalizing the application portfolio to the cloud (figure 3) can be organized into three main phases: Strategy and Setup, Assessment and Recommendations, and Execution.

The initial strategy and setup phase focuses on understanding the business vision and high-level IT goals and objectives for the application portfolio. The assessment and recommendations phase is dedicated to collecting information about the applications across the portfolio, assessing the applications against business objectives, determining the suitability of moving application functionality to the cloud, and developing a road map with detailed migration plans. The execution phase deals with retiring applications; implementing applications (on premise, off premise, or hybrid); migrating application information to future-state applications; performing training and the talent management activities necessary for organization-wide preparedness; and monitoring/governing the application portfolio.

Figure 3: Traditional approach to application rationalization, modified to include cloud computing

Step 1: Strategy and setup
•Understand business and IT environment: understand organization expectations; secure executive support and business unit participation; understand business requirements and IT issues.
•Define application management approach: confirm application portfolio strategy and goals; define initial application scoring model.

Step 2: Assessment and recommendations
•Collect application information and build baseline: understand the application portfolio; identify in-use applications, collect application data, and build application profiles.
•Analyze portfolio: map applications against business processes and business objectives; identify gaps between current state and future objectives.
•Prioritize, assess applications, plan pilot, and identify target platforms: evaluate suitability of applications for the cloud; rationalize applications and create disposition of surviving applications; develop a proof-of-concept approach for piloting migration.
•Conduct pilot and develop roadmap: execute and validate the pilot for migration; develop a sequenced roadmap for migration of surviving applications to core platforms.
•Develop detailed plans: develop detailed migration plans (including change management and process and people impacts); identify investments required and migration costs.

Step 3: Execution
•Execute on plans: execute migration and deploy applications to core platforms (on premise, SaaS, or PaaS); retire applications; implement change management processes.
•Govern application portfolio: guide decisions for application procurement, deployment, management, and retirement on core platforms.
•Monitor applications: track benefits, costs, resources, strategic alignment, and more using a dashboard with health indicators for the portfolio through regular monitoring.


A key enabler of the assessment phase, in both the traditional and modified approaches, is the "4R" model (figure 4), which evaluates the business value and technical condition of the applications in the current portfolio and provides insight into the potential application rationalization options. Depending on technical condition, there are two options for applications providing subpar business value: those in poor technical condition should be candidates for retirement/replacement, while applications in good technical condition should be candidates for reassessment of business value. Conversely, two options exist for applications providing high business value: companies should enhance those in poor technical condition, while expanding the footprint of those that are currently optimized.

Evaluating the technical condition of an application begins with assessing how well it can effectively and efficiently accommodate ongoing maintenance and necessary enhancements in a careful and integrated manner. Those applications built on a strong foundation may necessitate further investment while those built on a weak foundation will likely be problematic and require excessive maintenance over time, making retirement or replacement a viable option. Those applications that are critical to the business, have hundreds/thousands of users, and provide high business value are good investment candidates. Applications that are viewed as nonessential and/or used by only a handful of users are poor portfolio performers.

Figure 4: "4R" model to evaluate applications in the portfolio

The model plots business value (low to high) against technical condition (poor to excellent), yielding four dispositions:
•Low business value, poor technical condition: Retire/replace.
•Low business value, excellent technical condition: Reassess.
•High business value, poor technical condition: Redevelop.
•High business value, excellent technical condition: Renew.
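The quadrant logic of the "4R" model can be expressed directly; a minimal sketch, in which the coarse 'low'/'high' and 'poor'/'excellent' labels are simplifying assumptions (a real assessment would score each dimension):

```python
def four_r(business_value: str, technical_condition: str) -> str:
    """Map an application's business value ('low'/'high') and technical
    condition ('poor'/'excellent') to its 4R disposition."""
    dispositions = {
        ("low", "poor"): "retire/replace",
        ("low", "excellent"): "reassess",
        ("high", "poor"): "redevelop",
        ("high", "excellent"): "renew",
    }
    return dispositions[(business_value, technical_condition)]

print(four_r("high", "poor"))      # redevelop: enhance valuable apps on weak foundations
print(four_r("low", "excellent"))  # reassess: revisit business value before investing further
```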


Evaluating the suitability of moving application functionality to the cloud

Once the "4R" model has been used to identify applications for replacement, redevelopment, and/or renewal, a similar framework, shown in figure 5, can be used to make an initial, high-level assessment of which applications may be viable cloud candidates.

Figure 5: Framework to evaluate and prioritize candidates for the cloud

Candidates are plotted by whether they meet business requirements for the cloud and whether they meet cloud technical requirements, yielding four quadrants.

Can do (meets business and cloud technical requirements):
•Business: low application criticality; low number of internal users with low latency needs; low to moderate service-level requirements; no confidential data, or data is easily masked.
•Technical: minimal interdependencies with other applications/data; currently virtualized or a strong virtualization candidate, using a cloud vendor-supported OS; uses commodity hardware (e.g., x86 servers); low bandwidth and low/moderate infrastructure requirements; standalone environment and software stack; does not depend on specialized appliances.

Can do (later) (meets business requirements but not yet the cloud technical requirements):
•Business: low or moderate application criticality; internal users with low latency needs; moderate service-level requirements; confidential data can be masked.
•Technical: some interdependencies with other applications/data; good virtualization candidate, using a cloud vendor-supported OS; uses commodity hardware (e.g., x86 servers); moderate bandwidth and infrastructure requirements; shares environments or software stacks; does not depend on specialized appliances.

Should not do (meets cloud technical requirements but not the business requirements):
•Business: mission-critical application; large number of external users with low latency expectations; high service-level requirements; contains confidential data not easily masked.
•Technical: complex interdependencies with other applications/data; currently virtualized or a strong virtualization candidate, using a cloud vendor-supported OS; uses commodity hardware (e.g., x86 servers); low bandwidth and low/moderate infrastructure requirements; standalone environment and software stack; does not depend on specialized appliances.

Cannot do (meets neither business nor cloud technical requirements):
•Business: mission-critical application; large number of external users with high latency requirements; high service-level requirements; contains confidential data not easily masked.
•Technical: complex interdependencies with other applications/data; not suited for virtualization, or uses an OS unsupported by cloud vendors; uses custom hardware (e.g., vendor hardware or a highly customized grid); high bandwidth and infrastructure requirements; shared environments and software stack; depends on specialized appliances.


Another approach to evaluating the feasibility of replacing application functionality with services provided by cloud computing is a decision-tree model, as outlined in figure 6.

Figure 6: Tree model to identify opportunities to rationalize applications to the cloud

The tree walks each application through the following questions:
•Does the application support active business functions and provide value? If not, retire the application.
•Does the application conform to current and future architectural standards? Is it redundant with other existing systems? Is there a high cost to maintain and/or upgrade the application? If none of these argue for change, retain and renew (maintain/enhance) the application on premise.
•Does the application support revenue-generating activities (billing, A/R, etc.)? If so, based on the organization's tolerance for risk, can the application be in the cloud? If not, retain and renew (maintain/enhance) the application on premise.
•Is the application (or set of redundant applications) a "good fit" for the cloud based on the criteria defined above? If not, redevelop (reengineer) the existing application.
•Can the application be developed/supported on the cloud, or is there another cloud provider that meets the needs? If so, conduct cloud vendor selection; the application is a cloud candidate, so assess impact (e.g., architecture, users) and begin cloud planning activities.
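Assuming the questions are evaluated in roughly the order shown in the figure, the tree can be sketched in code; the function name and dictionary keys below are illustrative, not from any particular tool.

```python
def disposition(app: dict) -> str:
    """Sketch of a figure 6-style decision walk for one application profile.
    Keys and ordering are illustrative assumptions."""
    if not app["provides_value"]:
        return "retire"
    # Applications that fit the architecture, are not redundant, and are
    # cheap to maintain can simply be retained on premise.
    if app["meets_standards"] and not app["redundant"] and not app["high_cost"]:
        return "retain and renew on premise"
    # Risk tolerance gates revenue-critical applications out of the cloud.
    if app["revenue_generating"] and not app["cloud_risk_acceptable"]:
        return "retain and renew on premise"
    if not app["good_cloud_fit"]:
        return "redevelop existing application"
    return "cloud candidate: select vendor and begin cloud planning"

print(disposition({
    "provides_value": True, "meets_standards": False, "redundant": True,
    "high_cost": True, "revenue_generating": False,
    "cloud_risk_acceptable": True, "good_cloud_fit": True,
}))  # cloud candidate: select vendor and begin cloud planning
```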

Specific business concerns regarding the suitability of applications for the cloud typically center on the business criticality of applications, the number of application users, service-level requirements, and the confidentiality/privacy of data. On the technical side, the considerations influencing cloud suitability include the complexity of interdependencies among applications, cloud vendor support for the operating system, specific hardware (including any special types of devices) and bandwidth requirements, and the level of independence of the environment. Either framework can enable an IT organization to make an informed decision on cloud viability by using business value/functionality and technical condition as the viability parameters.

Based on the outcomes of these assessment frameworks, a road map can be developed for prioritizing and sequencing the activities necessary to achieve the future-state processes and the corresponding application and license portfolio.

With a clear understanding of the suitability of moving application functionality to the cloud, planning and execution can begin for implementing the new, existing, and modified applications on the cloud computing models deemed applicable.

Risks and challenges associated with utilizing the cloud for application rationalization

While cloud computing appears to be an appealing option in the future application mix, the following are some of the associated high-level risks and challenges that an organization should consider before utilizing the cloud for application rationalization:

•Handling of existing data and porting of data to new applications in the cloud while minimizing business disruption. Impacts on reporting and interoperability with ancillary systems also need to be factored into the feasibility analysis of rationalizing applications in the portfolio to the cloud.

•Data security, sensitivity, and compliance with regulations/laws and client contracts vary by industry, client, and organizational priorities; hence, the service-level agreements (SLAs) and contracts with cloud service providers need to be taken into consideration. It makes sense to consider the cloud as an option in the application rationalization initiative only if the SLAs and contracts substantially cover the security, sensitivity, and compliance requirements.


•Risks of vendor lock-in should be analyzed, as the lack of a maturity road map and/or the questionable viability of a vendor can adversely affect the future capabilities of the organization. Further, the ability to exit and port data out of vendor-operated cloud services, should it become necessary, needs to be ascertained in advance to avoid becoming fully dependent on a vendor. This is important because a vendor lock-in constraint can seriously offset the incremental benefits that cloud computing adds to traditional application rationalization.

•Availability and performance of services in the cloud need to be governed by SLAs. Network latency also needs to be factored into performance considerations. If the availability and performance of cloud services do not meet the needs of the business, cloud services can affect how business is conducted and negate the benefits that cloud computing brings to application rationalization.

•Changes to staffing needs and IT priorities are critical factors, as different types of talent and training may be required to handle the new applications and services in the cloud. Further, the potential shift in IT's focus away from operational maintenance may meet some resistance due to a perceived reduction in power. While cloud computing may be a great option for boosting the benefits of traditional application rationalization, it is critical to understand the organization's operational ability to support the new paradigm.
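When negotiating the availability SLAs discussed above, it helps to translate percentages into a concrete annual downtime budget; a quick calculation (the percentages shown are common SLA tiers, not any particular vendor's terms):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def allowed_downtime_hours(availability_pct: float) -> float:
    """Maximum downtime per year permitted by an availability SLA."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(round(allowed_downtime_hours(99.9), 2))   # 8.76 hours/year
print(round(allowed_downtime_hours(99.99), 2))  # 0.88 hours/year
```

A "three nines" SLA therefore permits nearly nine hours of outage per year, a useful sanity check against the business's actual tolerance for disruption.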

Conclusion

Rationalizing the application portfolio to a cloud-based environment can significantly enhance the benefits of an application rationalization initiative by increasing agility, reducing capital spending, and improving the sharing and utilization of IT resources. While rationalizing the application portfolio and moving functionality to the cloud is a challenging task with significant impacts, it can be managed effectively by adhering to a well-governed, structured approach sufficiently customized to the organization's needs and constraints. Adequately evaluating applications in the portfolio for suitability for migration to the cloud, with regard to business, technology, operations, and regulatory/compliance requirements, will help with the prioritization of migration activities and truly increase the value of application rationalization through the added benefits of cloud computing.

IT organization design and governance


Introduction

As the U.S. economy struggles to emerge from one of its deepest recessions, many organizations are focused on conserving cash, reducing costs, and maintaining liquidity1. Some are seeking to expand their footprints in emerging economies, while others view the current domestic market as a prime opportunity to gain competitive advantages in products, services, and talent.

The future realities presented by these macroeconomic and technological trends will directly affect the design of the information technology (IT) organization in ways even the savviest chief information officers (CIOs) may not have considered.

Does the standard IT operating model fit the new reality?

The current macroeconomic and technical realities indicate that standard operating models may not be sufficient in the future. Traditional operating models lack a flexible approach that can respond to significant market swings, new technologies, and regulatory pressures.

Regulation: Consider the impact of increased regulations on the evolution of IT risk management. A critical function of IT is to harness the power of technology to mitigate risk and drive compliance. As regulatory pressures increase, IT will need to design a model that allows close cooperation between IT risk management and internal compliance groups2. An interesting and viable alternative is to placing the IT risk function within the ethics and compliance function of an organization, or migrating these two groups into a new third structure responsible for all things related

to the people, process, and technology of risk and compliance.

Economy: A sluggish recovery in the U.S. economy and perceived fears of a double-dip recession are likely to continue to have a profound impact on IT organizations as companies aggressively cut costs and preserve liquidity. A result may be increased centralization of IT organizations to drive economies of scale. In addition, increased outsourcing and offshoring to low-cost providers is likely as companies look for flexibility and scalability. An IT operating model in such an environment will be characterized by strong vendor management and a lean organization. While these changes may not be new to many companies, the increased demand for resources focused on contract and vendor management will force IT to think creatively about which organization structure can deliver the most value at the least cost.

Globalization of work: What started as a niche segment has now developed into a full-fledged industry: knowledge process outsourcing. The sluggish economy in both the United States and Europe has forced businesses to invent new measures of cost-savings without compromising quality. This quest has led to outsourcing of business processes across functions. Although labor arbitrage is the primary driver of process outsourcing, companies want to create world-class business processes. This has led organizations, from investment banks to pharmaceutical companies, to outsource core business processes. Investment analysis and research functions increasingly are outsourced to knowledge process outsourcing vendors. The health care industry has witnessed outsourcing of the

Architecting the IT organization of the futureNew factors, new models

Authored by: Tiffany McDowell and Dipesh Bhattacharya

The 2011 CIO Compass 105

clinical trial process and offshoring of test data analysis. Insurance companies were among the earliest adopters, offshoring claims analysis and data entry. Most recently, legal process outsourcing has become a fast-growing market segment.

Research reveals that the future will be characterized by a continued emphasis on globalization of specific business processes. Data analysis, legal document review, animation and media graphics, and medical record analysis are just a few of the examples of processes that increasingly will be delivered from outside the organization and, in some cases, outside its geography.

This development demands an entirely different response from IT organizations. IT organizations should be increasingly aware of the potential risks associated with such globalization. In addition, a collection of disparate vendors delivering various components of business processes will demand integration of technology platforms that can facilitate data security and consistency.

Adoption of new technology: While macroeconomic trends play a significant role in determining the future of IT, recent technology trends could transform the composition of the IT organization even more. One of these trends is cloud computing, which could help organizations radically lower the cost of entry and speed time to solution, while introducing new models for elastic scale and pricing. Overall, IT spending may be lowered by decreasing IT capacity inefficiencies using a "pay-as-you-go" model, resulting in fewer data center resources and reduced maintenance costs.3

To prepare for adoption of the cloud computing model, IT should account for expanded infrastructure and slimmer resourcing to host and manage cloud capabilities. As a result of cloud computing, companies will rely on IT to provide business management services to support effective risk and privacy management. IT can also leverage its capabilities as a shared service provider to position itself to house the service monetization, incremental billing, and metering that could result. A future IT organization could have a cloud computing “center of excellence” (CoE) that drives adoption and evolution of cloud services within the organization and, in some cases, advises the business in leveraging cloud computing as a strategic advantage. Under this model, the organization may require little or no infrastructure capabilities.
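The economics behind this shift can be sketched with a toy cost model. All rates and utilization figures below are invented for illustration, not benchmarks: a fixed-capacity data center is paid for at peak load, while metered, pay-as-you-go pricing bills only the capacity actually consumed.

```python
# Toy comparison of fixed-capacity vs. pay-as-you-go IT costs.
# All rates and usage figures are illustrative assumptions.

HOURS_PER_MONTH = 730

def fixed_capacity_cost(peak_servers: int, cost_per_server_month: float) -> float:
    """On-premise model: pay for peak capacity whether or not it is used."""
    return peak_servers * cost_per_server_month

def pay_as_you_go_cost(server_hours_used: float, rate_per_server_hour: float) -> float:
    """Cloud model: pay only for metered consumption."""
    return server_hours_used * rate_per_server_hour

# Assume a workload that peaks at 100 servers but averages 30% utilization.
peak = 100
avg_utilization = 0.30
on_prem = fixed_capacity_cost(peak, cost_per_server_month=600.0)
cloud = pay_as_you_go_cost(peak * avg_utilization * HOURS_PER_MONTH,
                           rate_per_server_hour=1.0)

print(f"Fixed capacity: ${on_prem:,.0f}/month")
print(f"Pay as you go:  ${cloud:,.0f}/month")
```

With these assumed numbers the metered model costs roughly a third of the fixed one; the point is the structure of the comparison (paying for peak versus paying for consumption), not the specific figures.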

Role of social media: Blogs, virtual networks, and other social media are widely utilized as primary sources of business information and, as a result, can be increasingly influential marketing tools.4 IT can position itself as a strategic partner by demonstrating to the business its aptitude for marrying enterprise knowledge management with information sharing through social media outlets. An IT organization that unleashes both of these capabilities will align IT customer service with marketing, product, and talent functions, which can allow IT customer groups to advise these functions on evolving trends and appropriate media for deployment (e.g., online social media for viral marketing).

To fully harness the power of social media and Web 3.0, more organizations should consider creating CoE groups on social media within IT business management to focus on measuring return on investment, adopting industry-leading practices, and implementing learning programs across the organization. From an infrastructure standpoint, IT organizations may be challenged to oversee the design, development, and maintenance of various social media platforms. The introduction of social media will carry with it potential risks for the organization, especially related to personally identifiable information, data security, and potential policy violations regarding user content. Organizations may consider expanding IT risk management to include social media issues, including privacy management, content moderation, and metrics and measurement.

Data strategy: The power of technology is being harnessed to produce business intelligence. Advanced business intelligence uses modern data mining, pattern matching, data visualization, and predictive modeling tools to produce analyses and algorithms to help businesses make more effective decisions. By demonstrating an aptitude for unleashing hidden relationships in data using broad statistical and data-mining techniques, the IT organization has an opportunity to exhibit a strategic business advantage5.

An effective approach is to organize end-to-end data management through a centralized data management group that owns business analytics. IT organizations such as these could also create business intelligence centers of expertise that develop business intelligence methodology, reporting and metrics certification, knowledge sharing, training, and learning. IT organizations could also expand functional services to design data reports and implement predictive analytics tools. Functions may also focus on strong business-aligned customer service groups to bridge the business need and data capabilities.
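As a minimal, illustrative sketch of the predictive side (pure Python, with invented data; it assumes no particular BI product or toolset), a least-squares trend line fitted to a historical series can project the next period's value:

```python
# Minimal predictive-modeling sketch: ordinary least-squares trend fit.
# The monthly figures below are invented for illustration.

def fit_trend(values):
    """Fit y = slope*x + intercept to values indexed 0..n-1 by least squares."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast_next(values):
    """Project the fitted trend one step past the observed series."""
    slope, intercept = fit_trend(values)
    return slope * len(values) + intercept

monthly_incidents = [120, 115, 108, 104, 97, 93]   # hypothetical history
print(f"Projected next month: {forecast_next(monthly_incidents):.1f}")
```

Real business intelligence work layers far richer models on top of this idea, but even a simple fitted trend shows how a centralized data group can turn historical operational data into a forward-looking signal.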


Implications of cloud computing, social media, and data strategy across the IT organization:

Infrastructure
•Cloud computing: fewer resources; reduced maintenance cost; service monetization, incremental billing, and metering; environment provisioning.
•Social media: design, development, and maintenance of social media platforms (including the portal); mobile device integration and environment management.
•Data strategy: managing data security; hosting end-to-end data management applications (data warehouses).

IT business alignment
•Cloud computing: risk management; center of excellence.
•Social media: data privacy and controls; social media applications portfolio management; center of excellence.
•Data strategy: data strategy and end-to-end data management; business intelligence and data analytics CoE; data modeling and predictive analysis.

IT customer service
•Cloud computing: liaise with the business to identify application needs and subscriptions.
•Social media: potentially align with customer-facing groups such as marketing and communications, along with talent groups within the organization.
•Data strategy: collaborate with the business to define data requirements.

Provocative future IT organization design alternatives

The above information suggests the potential for a smaller, more strategically focused IT function, directly or indirectly organized around a collection of centers of expertise that can directly support the business in new and impactful ways. An effective future-state organization will both accommodate and incorporate the emerging external factors of today's world.

The next-generation IT organization design framework will likely have a two-dimensional focus: external environment and emerging technology. These two external factors can have radically different impacts based on the organization’s strength and IT strategy.

Figure 1: External environment, emerging technology, and their impact on the IT organization. [The figure plots the business climate (recessionary to favorable) against the role of emerging technologies (minimal to strategic), positioning organization design archetypes such as central control, offshoring, data management, analytics CoE, and business-embedded IT.]


The growth archetype

For example, assume a growth scenario where the CIO has been charged to use technology to support the organization's Web strategy by introducing emerging technologies that support innovation and collaboration and by testing the company's media strategy by developing an in-house test market. The following diagram illustrates the magnitude of each of the environmental and technological implications of this example.

Considering this set of influences, an embedded full-service IT organization that aligns with each of the business units would be appropriate. The organization would willingly give up some central control and scale economies in favor of flexibility and improved business alignment through the introduction of CoEs that focus on relevant emerging technologies, such as a social media CoE that advises the marketing organization. A central CoE could facilitate knowledge sharing among the various centers of expertise and have a dotted-line relationship with the business units.

Other possible capabilities could include:

•Vendor management with low-touch governance at the corporate level;

•Expanded infrastructure capabilities, including delivering applications via mobile devices and supporting internal portal evolution; and

•Introduction of a centralized innovation engine focused on Web technologies that facilitate growth and provide advice on new technologies that could push products and services to the consumer.

Finally, this model would likely require an embedded, full-service traditional customer service function to help deliver flexible and responsive customer service to the business.

[Figure: Growth archetype continuum. Critical environmental factors: regulatory pressures (unregulated to highly regulated), economic recovery (stagnant to strong growth), and degree of globalization (low to high). Key technology drivers: new technology (minimal to early adopter), social media (minimal to strategic), and data strategy (supporting to predictive). A future-state marker indicates the organization's position on each continuum.]


Compliance archetype

For the compliance example, assume the CIO's organization is facing heavy regulatory scrutiny, requiring a shift to a more compliance-driven culture. The CIO also believes the economy will become stagnant in the near term, so preserving liquidity for the company is a top priority. The CIO is willing to adopt new and emerging technologies that provide flexibility and scalability. The diagram that follows illustrates the environmental and technological implications of this example.

[Figure: Growth archetype organization. The CIO oversees a shared-services enterprise IT group (infrastructure, IT strategy, customer service excellence, risk management, CoE management, program management, and an innovation engine). Each business unit has an embedded BU IT function (business IT leader, project management, BU risk, and BU QA) with its own IT design and delivery, customer service, vendor management, BU CoE, BU IT strategy, cloud CoE, and analytics CoE, alongside BU functions such as HR and marketing.]

[Figure: Compliance archetype continuum. Critical environmental factors: regulatory pressures (unregulated to highly regulated), economic recovery (stagnant to strong growth), and degree of globalization (low to high). Key technology drivers: new technology (minimal to early adopter), social media (minimal to strategic), and data strategy (supporting to predictive). A future-state marker indicates the organization's position on each continuum.]


Under these circumstances, the organization could reorganize IT to create a centralized risk management function that aligns with corporate compliance to manage compliance risks, IT control, and risk management technology. The redesigned IT organization would include a centralized cloud management CoE that can work closely with centralized customer service (project management) to implement enterprise-wide software as a service (SaaS) programs. The CoE could also collaborate with infrastructure and vendor management groups to manage relationships with “cloud brokers” and evaluate choices between public or private clouds.

Additionally, IT customer service groups could be centralized, with "bare bones" project management support for each business unit, allowing consistent enterprise-wide delivery and gaining economies of scale. The centralized IT risk management function can manage service-quality technology application suites (such as risk reporting, Sarbanes-Oxley, incident management, and compliance helpdesk) to drive compliance, potentially resulting in reduced audit costs and faster audit turnaround. The CIO might also introduce a centralized data management function to deliver end-to-end data management and to facilitate the use of advanced data-mining techniques to predict risk, as well as global vendor management policies to support centralized control and consistency in managing global risks.

Cost-reduction pressures will likely support a flexible and scalable shared service center equipped to respond to global expansion, as well as changes in a growth economy. IT should consider engaging offshore vendors to deliver noncritical work and should evaluate implementing cloud-based solutions by establishing relationships with cloud brokers to prepare for quick response to market changes. The business could also reduce costs through a lower-touch model of business alignment, as project management and relationship management support can lead to more effective resource utilization.

[Figure: Compliance archetype organization. The CIO oversees a centralized, shared-services IT services group (infrastructure, IT strategy, customer service excellence, IT design and delivery, vendor management, risk management, program management, a cloud management CoE, and data management). Each business unit retains only lightweight BU IT support (project management and BU risk) alongside functions such as HR and marketing.]


Outsourced archetype

In this example, the chief executive officer considers IT to be a noncritical component of the business strategy. The CIO is tasked with reducing IT costs in the middle of a recession and implementing an IT strategy that calls for a flexible and scalable IT organization. The diagram illustrates the environmental and technological implications of this example.

[Figure detail: the centralized IT risk management function comprises metrics and reporting, security and privacy, IT controls, and SOX compliance.]

[Figure: Outsourced archetype continuum. Critical environmental factors: regulatory pressures (unregulated to highly regulated), economic recovery (stagnant to strong growth), and degree of globalization (low to high). Key technology drivers: new technology (minimal to early adopter), social media (minimal to strategic), and data strategy (supporting to predictive). A future-state marker indicates the organization's position on each continuum.]


[Figure: Outsourced archetype organization. The CIO retains a lean enterprise IT group (IT strategy; customer service excellence and relationship management; portfolio and program management; IT security; vendor management; and IT asset management), while an outsourcer delivers customer service excellence, portfolio and program management, IT security, and vendor management. Business units such as Stores and Products keep only an IT relationship manager within BU IT, alongside functions such as HR and marketing.]

As a result, we can expect critical IT functions to be centralized, with the aim of reducing organizational costs. IT customer service functions could be centralized to gain economies of scale and to achieve cost-reduction targets. Vendor management programs could be consolidated, reporting through enterprise IT to manage vendor relationships, streamline vendors, and work toward more cost-effective pricing.

The CIO may decide to outsource several IT functions to increase flexibility and scalability in the IT organization. The retained organization, in such cases, will be structured to deliver high-value advice and other high-touch services to client groups. For example, a special function dedicated to managing IT's relationship with the business may be included in the retained organization to oversee this critical relationship. Similarly, IT security and program and portfolio management will become a critical focus of the retained organization to help determine whether the proper organizational controls are in place.


References

1. "US And Global IT Market Outlook: Q2 2010," by Andrew Bartels, for Vendor Strategy Professionals.

2. The Dbriefs Technology Executive series presents: "The CIO: Driving Business Value Through Risk Management and Technology."

3. "Depth perception: A dozen technology trends shaping business and IT in 2010."

4. "Tribalization of Business Through Social Media: The Rise of the Hyper-Social Organization."

5. "Destination: Data excellence: Building an enterprise data road map."

Conclusion: What the savvy CIO should consider

For the forward-looking CIO, there is a three-stage process (understand, assess, and apply) for incorporating today's important external factors into the future-state design of an IT organization:

1. Understand: Evaluate business and IT strategies and identify priorities. Critical questions to consider include:

a. How much money will the business invest in IT? How important is it for the business to preserve liquidity?

b. Is the organization focusing on reducing IT costs or on capitalizing on critical technology trends to gain competitive advantage?

c. Is the IT organization equipped to service the business through a double-dip recession or other tough economic condition? How can it be better equipped to do so?

d. Are there significant regulatory changes in the industry? Will these changes pose risks to the organization that IT can help mitigate?

e. Does the IT organization have a growth agenda, or is the near-term focus on preserving cash?

2. Assess: Consider actions that can help assess the direct impact of environmental factors on the IT organization. Critical questions include:

a. Is there a need to centralize specific functions or create disparate units?

b. What environmental and technology factors affect IT business alignment? How will this alignment look in the future? Are additional teams needed, or are there redundant teams?

c. How will a focus on reducing costs through outsourcing, SaaS, cloud computing, etc., affect the IT service delivery model?

d. How closely should the customer service function align with the business?

3. Apply: After completing the assessment of the IT organization, consider working with organizational leadership to socialize the results, solicit buy-in, and begin to apply these changes to the IT operating model.

1. Understand

• Determine overall business strategy and align IT strategy to support business.

• Identify top IT priorities and their alignment with overall business strategy.

• Determine critical drivers of IT strategy and rate them according to priorities.

• Determine company's overall cash position and liquidity goals and specifically IT investment plans.

2. Assess

• Assess current IT operating model and determine future state.

• Identify components of the IT operating model that are impacted by the overall IT strategy.

• Design a future state organization to close the gaps in the IT operating model.

3. Apply

• Complete a detailed design of the IT organization.

• Implement the future state IT organization.

• Transition employees to the new IT organization.



Information technology (IT) talent management: Operating the multisourced, multicultural, multigenerational IT organization

Authored by: Patrick McLinden

Today's leaders must confront the challenges of a complex and changing workforce. The workforce, as a whole, is experiencing demographic shifts in the U.S. and abroad. Talent is becoming increasingly global, and generations have become more diverse. Chief information officers (CIOs) are at the forefront of these issues for several reasons:

1. They are under constant pressure to provide services while efficiently scaling up to meet business demand.

2. Innovative technology continues to change the role and focus of IT.

3. Outsourcing is more prevalent in IT than in other functions.

4. The nature of the IT workforce is multisourced, diverse, and increasingly global.

5. CIOs are uniquely positioned to respond to, benefit from, and often lead an innovative approach to dealing with these talent issues in their organizations.

CIOs should lead the organization's efforts to manage a complex, changing, and increasingly global workforce by aligning IT strategy and IT talent strategies, developing a climate and culture for diversity, rethinking career development, providing managers with the right learning opportunities, and building an effective virtual work environment.

Talent management issues for CIOs

Talent management issues are often at the top of the list for C-level executives, including CIOs. These market trends may be driving workforce challenges that cannot be ignored:

•Economic: An extended global economic downturn, coupled with a fairly grim outlook for the labor market, has led to constant demands for IT cost reductions. CIOs are expected to look for cost savings to help shore up the bottom line.

•Technology: The technology landscape is rapidly evolving. The explosion of social media and the proliferation of Web 2.0 and cloud computing technologies have led businesses to require IT to transform from a role of "supporter" to "enabler." Technology departments are expected to nurture innovation, help increase market share, and attract and retain top talent by capitalizing on revolutionary technology trends that may change how businesses, not just IT, operate.

•Generational diversity: The nature of an increasingly diverse workforce has shifted significantly over the past decade. IT has not been immune to the demographic trends that have and will continue to fundamentally shift the nature of the workforce. Generational diversity affects organizations across industries. As the newest entrants into the workforce, Gen Y employees have received more attention as companies seek to understand this younger demographic. In return, Gen Y'ers (also known as Millennials, roughly 18 to 27 years old) are beginning to define themselves. A recent Deloitte survey of Gen Y professionals ("Gen Y: Powerhouse of the Economy," 2009) revealed that Gen Y has fundamentally different ideas about life and career than Baby Boomers and Gen X'ers. Fundamentally, Gen Y values growth and learning opportunities over job security. The survey reveals that 53.7 percent prefer opportunities for advancement, while only 7.9 percent are looking for better job security. The value proposition for employees is shifting along with the expectations of a younger workforce.

•Globalization: The need to effectively manage a globally deployed talent pool is increasing, with employees, contractors, and vendors interacting across time zones and cultures. They may or may not have the tools to work well together virtually; even when they do have the tools, they often lack the knowledge and skills to work effectively in a virtual environment.

•Multiple vendors: The availability of cloud-based business applications has created increasing demand for CIOs to adopt a cloud strategy, essentially a rapid move toward a "best-of-breed" approach rather than the single-vendor strategy that was popular a mere decade ago. This requires IT departments to manage multiple vendors, each with its own data definitions, technology platforms, and integration challenges. Often, these vendors give IT a global footprint as well. Poor vendor management, single-vendor sourcing models, and cultural barriers resulting from a global operating model can contribute to higher IT vendor-related costs.

CIOs are uniquely positioned to respond to, benefit from, and often lead an innovative approach to managing these talent challenges in their organizations and influence how other organizational leaders deal with these issues. With an effectively deployed and engaged workforce, IT can more effectively deliver technical solutions required of the business while maintaining efficiency and managing costs. Steps CIOs should consider include:

•Aligning IT strategy and IT talent strategies.

•Developing a climate and culture for diversity.

•Rethinking career development.

•Providing managers with the right learning opportunities.

•Building an effective virtual work environment.

Aligning IT strategy and IT talent strategies

Leaders of IT organizations should analyze their current IT strategy along a continuum from an infrastructure-focused organization tasked with "keeping the lights on" to a strategic enabler focused on innovation and best-in-class technology. This strategy can significantly influence the way the organization deploys and manages talent.

For example, a CIO focused on being a strategic enabler is more likely to nurture innovation, encourage employee development, recruit top talent, and identify and deploy retention programs for critical talent pools. On the opposite end, an infrastructure-focused IT leader is looking at managing a globally deployed talent pool to minimize costs. The priority of this "infrastructure" talent leader will be to get "more bang for the buck": invest cautiously in cheaper talent markets and provide developmental opportunities where it makes sense.

CIOs should also understand their critical workforce segments that deliver a disproportionately high value to the organization and are difficult to recruit. By identifying these segments, a CIO can focus talent-related investments where they will have the most “bang for the buck,” rather than spreading dollars evenly across the organization, regardless of return.

The talent management issues IT faces today are increasingly people related, rather than technology related. The CIO should define the talent competencies and capabilities that the organization needs to build. These should be aligned to the organization’s strategy and culture. Too often, IT only focuses on the technical aspects of the job.

Developing a climate and culture for diversity

Organizations that can create a positive climate and culture for diversity are more likely to see results, retain critical talent, and effectively manage a diverse and global workforce. A positive climate for diversity starts with defining the diversity and inclusion strategy of the IT organization.

It is important to understand the roles played by IT stakeholders — leaders, managers, and employees — in fostering a culture of diversity. IT leadership should promote diversity awareness, while management acts as an information conduit, reinforcing the culture of nurturing and supporting diversity. A positive climate for diversity requires an active leadership team that leads by example. It also means baking diversity into talent strategies like recruiting, succession planning, and performance management.

Other organizations with strong diversity and inclusion initiatives may share their tools to help foster an even stronger climate for diversity in the IT organization.

Rethinking career development

CIOs should also take a new look at career development within their organizations. In the past, company loyalty was valued, but now the idea of a "job for life" may not be suitable for the generation of workers now entering the workforce. Especially in IT environments, strong economies make job-jumping and employee poaching common. If this has not affected your organization directly, it will become more prevalent as Baby Boomers continue to retire and gaps are uncovered. Indeed, a Deloitte survey revealed that being challenged and given opportunities to learn and grow while working with senior leadership are some of the most critical retention drivers. Gen Y is not worried about job security, and a significant number of them are already looking for other jobs, particularly as the economy turns around.

To be proactive, CIOs must identify innovative ways to provide learning and growth opportunities and help individuals grow with the company, instead of out of it. Three ways CIOs can effectively respond are by: 1) providing business rotational opportunities, 2) tailoring career paths to a career destination, and 3) enabling self-tailored career paths through highly customized talent strategies and mechanisms.

•Providing business rotational opportunities allows IT professionals to see how their work affects other parts of the business and increases their connection to the work. It also builds IT's credibility and talent pool. Career paths allow IT professionals to grow inside, as well as outside of, the IT organization. This is not only a critical retention driver, but also contributes to organizational knowledge flows through cross-pollination of ideas. Often, career paths that lead to cross-functional centers of expertise can be instrumental in building and capturing knowledge, as well as developing relationships with other businesses. Global mobility and rotational opportunities at business sites around the world can also help provide opportunities that align with individuals' goals by increasing learning and development through exposure to new and innovative ideas in international offices.

•Tailoring career paths and compensation around a career destination is another way to increase impact on IT talent. For example, separating technically oriented IT employees with a subject-matter-specialist trajectory from those with a management or leadership trajectory can help increase motivation and provide clear lines of possibility. Removing ambiguity by having clear responsibilities, goals, and means of advancement can allow people to excel in their chosen trajectory. The required support, training, and performance management also need to be in place for this to be effective.

•Self-tailoring career paths allow individuals to set the pace and direction at which they want to advance and grow within the company. The “corporate lattice” and mass career customization concepts allow employees to dial up or down and move across positions as their life situations, needs, and desires change. IT employees are looking for more than just a job, and CIOs need to be willing to accommodate individual needs to retain IT talent. Research suggests this will be a critical career path of the future, and organizations are only beginning to adopt these practices and differentiate themselves to younger generations.

Providing managers with learning opportunities

Effective leaders and managers are critical to the success of IT operations. Current leadership competencies and styles will need to change: the next generation of leaders should establish bridges to leading and managing in a way that motivates younger generations.

•Identify specific leadership competencies. Specific competencies should be defined for the IT function, and a process put in place to identify individuals who have strong leadership potential. Professional development tied to the right competencies should be carefully considered and aligned with performance management. Providing managers with the right learning and development opportunities tends not only to improve their ability to manage and develop others effectively, but also to increase their motivation and connection to the organization.

•Move potential managers to other offices and/or functions. Moving high potentials accelerates their learning and skills transference. Processes and infrastructure should be established so that employees with leadership skills are given the opportunity to help lead other business units or offices. Providing mobility increases management skills and provides exposure to different types of developmental opportunities.

•Enhance communication. Management development and communication training on diversity, inclusion, and teamwork help build bridges among Baby Boomers, Gen X’ers, and Gen Y’ers. Boomers in management roles need to acknowledge their leadership behaviors and be willing to modify them to motivate younger cohorts and create a positive working environment. Gen X’ers and Gen Y’ers in management roles need to empathize with Boomers who may not be used to having someone younger overseeing their work. The manner in which the different generations communicate, mentor, and escalate issues will also be different.

The 2011 CIO Compass 116

•Foster flexibility and ability to adapt to change. Technology is constantly evolving, and employees need to adapt and thrive under change to create a leading-edge organization. A focus on developing open communication skills, embracing change, motivating others who resist change, and building teams should be an integral part of the organization’s culture, onboarding, and training programs. Leadership should set the standard and demonstrate their backing of these ideals.

•Focus on succession management. Building the management and leadership team is only one piece of the puzzle. CIOs should also develop robust succession plans for leadership and management positions, as well as critical talent lower in the organization.

Building an effective virtual work environment
Building and maintaining an effective virtual workplace — a requirement in today’s global operating model — can be a daunting challenge. CIOs should consider the technological and communication-related intricacies of having employees working at various locations on different systems, and provide the virtual structure to support them. Even for domestic or U.S.-based companies, the IT department is often global. An effective virtual workplace defines the process, technology, and culture associated with working effectively in a dispersed or even global organization.

•Standardized processes. Processes should be simple, efficient, and clear to support consistent work quality across the organization.

•Technology and training. Employees need to have the tools, troubleshooting ability, and sufficient resources. From teleconferencing to social networking, the technology exists to make a virtual workplace more effective, but technology is only part of the solution. CIOs should provide training to develop managers’ abilities to manage a virtual workforce. Assessing employees’ readiness is also important. Do they have the required skills to be able to troubleshoot? Do they have the required mindset to work independently for long periods of time? Can they be productive when working outside the office? CIOs must also consider cultural differences that may affect remote employees’ ability to work together in a virtual environment.

•Knowledge sharing. Knowledge sharing should be encouraged in any organization so that new ideas can be built on old constructs to avoid “reinventing the wheel.” In virtual IT departments, a common repository of information and standard knowledge-sharing procedures are recommended. Frequent meetings and other communications also support indirect knowledge transfer. In multigenerational IT environments, two-way sharing of information can impart fresh, new technical skills to older generations, and experiential business knowledge to younger generations, increasing competencies in both groups.

•Performance management. CIOs need to look at the implications of developing, coaching, and managing performance virtually. One common mistake is setting different expectations for different employees. A big-picture view of the workforce must be taken, and workers should be treated equally. Ideally, virtual, contract, and outsourced employees should be held to the same expectations as on-site employees, and the virtual infrastructure must support this. For example, virtual employees cannot be expected to perform at the level of on-site employees if their technology does not work to the same level. Performance expectations and performance management should be as streamlined and standardized as possible, regardless of geographic differences. Changing to a results- or outcomes-based performance management program may be required to facilitate this standardization. Also, with outsourced employees, service level agreements should be put in place to confirm expectations are clearly communicated and workers are held accountable.

Conclusion
CIOs are uniquely positioned to push the envelope when it comes to dealing with these issues. In many ways, they are required to do so, given the nature of their operating model. Thinking through the right focus areas to drive improvement and innovation will be critical to becoming more effective managers of talent.

Special Thanks to Rachel Wildman, Whitney Cook, and Dipesh Bhattacharya for their contributions, research, and insight.

Appendix


Author biographies

Jeff Anderson, Deloitte & Touche
Jeff is a Senior Lead in Deloitte & Touche’s (Deloitte) System Integration practice. He has more than 15 years of experience architecting and delivering custom-developed and leading enterprise solutions. He specializes in requirements gathering, as well as design and implementation of business applications using object-oriented technologies including J2EE, .NET, and open source frameworks. He is a specialist in the area of service-oriented architecture and other modern development paradigms, including aspect-oriented programming. He has served as the Technology Lead on a number of large-scale legacy renewal programs, and with his architecture background, he can deliver systems modeled around business domains rather than specific technologies or platforms. He is often called upon to play an advisory role in the space of software development methodologies and software process improvement on large IT transformation engagements. Jeff is well-versed in using lean/agile development methodologies such as the Rational Unified Process (RUP), extreme programming (XP), SCRUM, and the Agile Modeling Method (AMM). He has taken elements from these different approaches as necessary to address enterprise program delivery concerns. Jeff also founded and leads Deloitte’s Development Community of Practice across Canada and is the architect of Deloitte LEAN, an agile process framework. For more details on Jeff Anderson's experiences, insights, and points of view relating to software development, governance, and architecture, please visit his blog at http://agileconsulting.blogspot.com.

Tracy L. Bannon, Deloitte Consulting LLP
Tracy is a Specialist Master with Deloitte Consulting LLP’s (Deloitte Consulting) Services Quality practice, joining in 2007. She has over 20 years’ experience in software design, development, and implementation. Tracy has extensive experience with design and development of enterprise systems, which include Microsoft technology-based solutions and major database platforms (DB2, Oracle, and SQL Server). Tracy currently leads and manages the design and implementation of Deloitte Consulting’s PPM-based project management tool set, balancing her time between global tool adoption and enhancement activities. She holds PMI and Microsoft certifications and is cross-trained and project experienced with Java. Tracy also led the Deloitte Consulting .NET Framework team, which architects and supplies reusable development frameworks to accelerate delivery, and currently leads Deloitte Consulting’s Global Microsoft .NET Community of Practice. Tracy has been aligned with public sector implementations over the past 11 years, having participated in a number of technology and business process reengineering engagements. She also has private sector experience.

Dipesh Bhattacharya, Deloitte Consulting LLP
Dipesh is a Senior Consultant in Deloitte Consulting LLP's (Deloitte Consulting) Human Capital practice with over seven years of consulting experience across multiple industries. He has managed and implemented projects in the areas of HR Assessment, HR Process Design, Change Management, Compliance Adoption, HR Shared Service, Vendor Selection, Organization Strategy, and Technology Strategy. His responsibilities have included HR process mapping, process analysis and documentation, business case development, shared service implementations, and technology assessments. Dipesh majored in HR and Organization Strategies during his MBA and has been trained in the areas of Organization Design, Talent Management, and HR Strategy. He has bilingual and multi-cultural experience, having worked across Europe and the United States while consulting for Fortune 500 clients.

Carrie Boyle, Deloitte & Touche LLP
Carrie is a Specialist Leader with Deloitte & Touche LLP’s (Deloitte & Touche) Audit and Enterprise Risk Services’ Federal Security and Privacy Practice. She holds an MBA, MS, and BS from the University of Maryland, College Park. She has over 12 years of experience in management and technology consulting with the Federal and commercial sectors. She has experience working in the areas of project management, risk management, security and privacy, information sharing, and enterprise architecture. Most recently, Carrie has been working within Emerging Markets to develop Cyber Security and Identity Management Architectures. Carrie is involved with several solution development efforts with Deloitte & Touche related to Cyber Architecture, Cyber Awareness and Training, and Social Media. Carrie has supported cross-Federal initiatives such as the National Information Exchange Model (NIEM) and the Federal Segment Architecture Methodology (FSAM). Carrie’s certifications include: PMP, CIPP, CIPP/G, and ITIL. In 2008, Carrie was nominated by Federal Computer Week as a “Rising Star” in Government Consulting.


Jerome Campbell, Deloitte Consulting LLP
Jerome is a Specialist Leader in Deloitte Consulting LLP’s System Integration practice. He has more than 17 years of professional experience helping organizations understand and apply methods to effectively align technology with mission-critical strategies. His experience has covered phases of development for business and technology strategy, business and process architectures, application and data architecture and design, enterprise technology architecture and deployment, systems integration, large program management, and IT transformation. He specializes in strategic enterprise architecture to help companies establish and implement their technology-enabled business strategies, improve operational performance, and better manage IT investments. He is a Certified Enterprise Architect (FEAC Institute certification for Enterprise Architecture) and participates in TOGAF forums for enterprise architecture and service-oriented architecture.

Tiffany Chen, Deloitte Consulting LLP
Tiffany is a Business Technology Analyst in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy and Architecture practice. Having been with Deloitte Consulting for almost a year, Tiffany has consulted for government financial services, e-Commerce, and Public Sector clients regarding IT strategies within the Information Technology Service Management (ITSM) framework and the Systems Development Life Cycle (SDLC). She also has extensive experience in collecting data using the Extract, Transform, and Load (ETL) process, conducting and training people on stack upgrades, and formulating network baselines within system environments. Tiffany holds a Bachelor’s degree in Supply Chain Management and Information Systems from the University of Maryland, is ITIL Foundations certified, and has her Lean Six Sigma Yellow Belt.

Gary Corbett, Deloitte Consulting LLP
Gary is a Specialist Master in Deloitte Consulting LLP‘s Technology Strategy & Architecture practice. In this role, he utilizes 20+ years of senior IT infrastructure experience to lead federal, commercial, and international clients in the development of telecommunications strategy, and specializes in the design and implementation of Call Center, TDM, and VoIP Telephony Solutions. Gary has consulted nationally and internationally with major federal and commercial clients in the areas of financial services, health care, defense, foreign government, telecommunications, and public service regarding the design, selection, deployment, and Program Management of advanced Contact Centers and VoIP deployments. Gary has a broad background in IT Management, general telephony systems, ISP/OSP, LEC/LDC services, strategic planning, assessments, system implementations, and operations.

David P. Croft, Deloitte Consulting LLP
David is a Specialist Leader in Deloitte Consulting LLP‘s Information Management service line. He is a Healthcare Architect with 34 years of experience across the federal health, life sciences, provider, and payer industries. David’s experience includes leading the creation of an end-state architecture at a U.S. regulatory agency, working within clinical development organizations focusing on clinical trial information and clinical data architecture (Phase I−IV, and post-marketing studies and surveillance). He has recently worked with clinical research organizations to understand how secondary use of clinical data, such as provider HL7 transactions, might be utilized to test protocol feasibility, and how the data can or cannot be pooled with other types of clinical data, along with understanding the legal and privacy issues surrounding patient data and appropriate use. In addition, he has led the development of a clinical data warehouse for a major hospital system in the U.S. using HL7, UB92, and other transaction types for the purpose of reporting operational, clinical, and financial outcomes, along with AHRQ reporting. He has also led projects to implement clinical data repositories for statistical pooling and analysis using SAS Drug Development and was an architect involved with the development of Oracle’s Life Sciences Data Hub.

Douglas Day, Deloitte Consulting LLP
Doug is a Specialist Master in Deloitte Consulting LLP’s Technology Strategy & Architecture practice. He is a technical specialist in IT service delivery (ITIL and ITSM), focusing on client business transformation in operational areas including asset lifecycle, planning and strategy, and delivery for publicly traded and private clients across industries, as well as public sector clients. Engagements have included organization, process, and technology assessments and re-engineering, stakeholder buy-in, and technology enablement, as well as technology selection and implementations. Projects have addressed process development (through facilitated workshops), organizational change, cost takeout, development of service level agreements and key performance indicators (SLAs and KPIs), as well as process integration.


Josh Drumwright, Deloitte & Touche LLP
Josh is a Manager with Deloitte & Touche LLP’s Federal Security and Privacy Practice. He holds a B.S. in business from Virginia Tech. He has over seven years of technology risk consulting experience supporting federal agencies and financial service clients in the U.S. and UK. He has worked on numerous IT Security and Privacy related projects, including developing strategies for clients to proactively manage and reduce their unnecessary use of Personally Identifiable Information (PII), helping clients manage and protect PII through its lifecycle, and helping clients understand their data protection risks and take the steps necessary to prevent the unnecessary loss of data. Josh is a Certified Information Systems Security Professional, Certified Information Privacy Professional, and Certified Information Systems Auditor.

Nick Elkins, Deloitte Consulting LLP
Nick is a Manager within Deloitte Consulting LLP’s Technology Strategy & Architecture practice, focusing on Enterprise Architecture (EA) development. With eight years of experience, Nick has provided EA, Capital Planning and Investment Control (CPIC), and IT Strategic Planning support to the federal government. He has been involved in most stages of development of both departmental and component-level Enterprise Architecture programs, with a particular focus on the development of IT transition planning. Nick focuses on developing informative views of an agency’s IT environment in order to better educate decision makers and leadership. Nick also focuses on integrating EA information into the decision-making process, including EA requirements for CPIC and IT governance.

John Ezzard, Deloitte Consulting LLP
John is a Senior Manager with Deloitte Consulting LLP’s (Deloitte Consulting) Federal Healthcare Practice. He holds a B.A. in history from Yale University and an MBA from Georgetown University. He has over 10 years of technology and management consulting experience working with the federal government in industries ranging from telecommunications to transportation, education, and health care. One of Mr. Ezzard’s major focus areas has been privacy and security for electronic health records and electronic health information exchange. He has supported privacy and security workgroups on topics such as identity proofing, privacy and security frameworks, privacy and security requirements for HIPAA noncovered entities, data loss prevention, and critical infrastructure protection. Mr. Ezzard is currently contributing to the development of Deloitte Consulting’s health IT solutions across its Health Care practice. He has also contributed to publications regarding health IT terminology, health IT implications for managed care organizations, and analysis of state health information exchange architectural models.

Don Frazier, Deloitte Consulting LLP
Don is a Senior Engagement Director with Deloitte Consulting LLP’s Technology Strategy & Architecture practice. With more than 30 years of experience, Don has consulted internationally across health care, energy, transportation, and consumer business organizations regarding the strategic application of technology in solving complex business problems. Don has an MBA from Wake Forest University and a Ph.D. in Information Systems from Nova Southeastern University.

Philip Galloway, Deloitte Consulting LLP
Philip is a Consultant in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Information Management practice with a focus in Business Intelligence & Data Warehousing. Prior to joining Deloitte Consulting, Philip spent four years in the financial services industry as a Project Manager within GE Capital’s Information Management Leadership Program, where he led a variety of projects including a cloud-based, geospatial analytics application. He has an advanced degree in Information Systems & Business from Kansas State University. Philip’s firm grounding in the fundamentals of accounting and corporate finance, combined with his technical education, enables him to provide his clients with solutions focused on business need.

Dwij Garg, Deloitte Consulting LLP
Dwij is a Business Technology Analyst in Deloitte Consulting LLP’s Emerging Solutions — Web Solutions service line. Dwij has experience in large-scale software development, including requirements management, process flow application, and application and user interface design. He has a strong technical background and has been involved in many technology planning, implementation, and integration phases, specifically strategy, design, resolution, and testing. Dwij double-majored in Electrical Engineering and Computer Science (EECS) and Economics at the University of California, Berkeley, and has industry experience interning and working with several companies and clients. Dwij has worked on challenging and uncommon projects, which allow him to leverage his technical knowledge and creativity, as well as his interests in renewable energy, cloud computing, and other emerging technologies.


Kajal Goria, Deloitte Consulting LLP
Kajal is a Manager in Deloitte Consulting LLP’s Information Management practice line specializing in enterprise-wide data delivery, business intelligence (BI), and master data management (MDM) solution implementations. He has over 11 years of experience in architecting and leading enterprise-wide master data management, data warehouse/business intelligence, and data management strategy projects. Kajal has consulted to major Financial Services, Energy & Utility, Insurance, Retail/Restaurant, and Manufacturing industry clients regarding Information Management business requirements. Kajal holds a master’s degree in computer science from the Birla Institute of Technology in India.

Mike Habeck, Deloitte Consulting LLP
Mike is a Director in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy & Architecture practice. He has over 20 years of experience delivering solutions that are centered on improving and transforming IT technology and operations, both as a consultant and in related executive roles in technology and telecommunications companies. As Deloitte Consulting’s national leader in this space, Mike has a track record of helping clients achieve their business objectives. He has advised and managed many projects in the areas of IT transformation and improvement, technology planning and strategy, IT service management, and large-scale architecture and implementation projects. Mike has led teams in the creation of ITIL-based capability maturity models, service optimized datacenter strategies, and datacenter reference implementations. Mike also holds a patent in the area of high-availability computing.

Jan Hertzsch, Deloitte Consulting LLP
Jan is a Specialist Master in Deloitte Consulting LLP’s Federal Practice of the Public Sector. Mr. Hertzsch has 15 years of consulting experience. He has coordinated hardware provisioning, supported SAP budgetary and financial systems for the U.S. government, and implemented SAP financial and collection systems for a provincial government in Canada. His government experience includes SAP budgetary accounting, funds management, collections and disbursements for government agencies, cost center accounting for government, and SAP asset accounting for private industry. He has experience in the banking, health care, and automotive industries, all with a financial and/or systems focus. He is a Certified Project Management Professional (CPMP) with project management experience in the United States and Canada. He is also ITIL (Information Technology Infrastructure Library) certified.

Bill Herwig, Deloitte Consulting LLP
Bill is a Specialist Leader in Deloitte Consulting LLP’s Technology Strategy & Architecture practice. He is an information technology specialist with over 20 years of consulting experience in the areas of IT Service Management and data center and infrastructure design, sizing, and deployment. Mr. Herwig has significant business and technology strengths in leading strategic solutions for sophisticated technical environments. Included in his experience is support for people, process, and technology utilizing ITIL V3, ISO 20000, and COBIT 4.1 frameworks and standards. He has extensive experience in assessing architecture designs for major system deployments across multiple technology platforms (Windows, UNIX, Linux, and mainframe).

John Hsu, Deloitte Consulting LLP
John is a Manager in Deloitte Consulting LLP’s Technology Strategy & Architecture practice. He brings more than 10 years of experience with focuses on the Technology, Media, and Consumer and Industrial Products sectors. He has led many projects including large-scale multi-year IT transformation, post-merger integrations and divestitures, IT cost reduction/improvement, IT outsourcing and organization alignment programs, and enterprise application product design/rollout. He holds an MSc. in Information Systems and Management from Carnegie Mellon University and a BSc. in Electrical Engineering and Computer Science from the University of California, Berkeley.

Paul Krein, Deloitte Consulting LLP
Paul is a Specialist Leader in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy & Architecture practice. As a seasoned business advisor, he is regarded for his ability to translate business requirements into technology solutions. He is currently a leader within our federal office of the CTO and focuses on innovation, the business impacts of technology, Cloud computing, and the role of the CIO. Mr. Krein has held previous roles including business leader, sales manager, technology strategist, and solutions architect. He has program and business operations experience managing large programs and consulting operations, along with experience in the Federal Energy, Manufacturing, Financial Services, and Technology industries. He holds an MBA in Finance and Strategy, a B.S.E.E. degree, and is a Six Sigma Green Belt. He recently co-authored the Deloitte Consulting Federal series on The CIO Lifecycle.


Jeff Michael Krugman, Deloitte Consulting LLP
Jeff is a Senior Manager in Deloitte Consulting LLP’s Technology Strategy & Architecture practice with over 15 years of IT consulting experience. He has advised large, complex IT organizations on innovative strategies to both address critical business issues and improve their organizations. He specializes in IT strategy, planning, management, governance, and effectiveness. He has also led and executed business intelligence/data warehousing projects for large, global clients to provide more efficient and effective access to data. He has an MBA in finance.

Kristi Lamar, Deloitte Consulting LLP
Kristi is a Specialist Leader in Deloitte Consulting LLP’s Technology Strategy & Architecture practice. Her focus is enabling organizations to leverage the disruptive potential of the convergence of cloud, mobility, and social business. Ms. Lamar has over 15 years of consulting experience with a passion for using emerging technology solutions to solve clients’ business issues.

Matt Leathers
Matt has significant cross-industry experience in delivering innovative solutions and technology transformations for over 12 years. He has established service offering and technology strategies for a number of clients and supported their subsequent execution as a program lead, trusted advisor, and business liaison. As a practitioner in Deloitte Consulting LLP’s Technology Strategy & Architecture practice, Matt delivered IT transformation initiatives in the telematics (in-vehicle safety and entertainment services) and mobile payments industries (online, iOS-, and Android-based payment solutions). Matt is a reviewer of the ITIL V3 Service Strategy volume and lead author of Service Level Management and ITIL V3 Points of View.

Robert C. Lee, Deloitte Consulting LLP
Robert is a Consultant in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy and Architecture service line. He received his M.S. and B.S. in Electrical Engineering from the Georgia Institute of Technology. Robert has worked on IT Asset Management initiatives at two Deloitte Consulting clients. He has created detailed IT Asset Management processes for the asset lifecycle as well as identified savings opportunities in software assets. Robert also has participated in authoring white papers on IT asset management.

Sandeep Lele
Sandeep holds an advanced degree, is an ITIL-certified professional, and has over 17 years of experience solving diverse challenges, including sustainable cost reduction, moving an IT organization from an operational concern to a strategic tool, and conceptualizing and deploying innovative infrastructure services. His proficiency includes Technology Strategy & Architecture, IT Strategy and Business Alignment, IT Service Management (ITSM) and Service Delivery, IT Operations Efficiency and Effectiveness, and Global/Enterprise IT Program & Project Management.

Brett Loubert, Deloitte Consulting LLP
Brett is a Senior Manager in Deloitte Consulting LLP’s Technology Strategy and Architecture service line. He works with CIOs and senior leadership to develop and refine strategies to solve complex technical problems. His experience spans the entire IT lifecycle, including IT Strategy, Enterprise and System Architecture, and Software Development and Implementation. Mr. Loubert complements his technology and engineering background with a deep understanding of how organizations operate and how IT can support and improve business functions and processes. This experience provides IT leaders with the necessary insight to design and implement the key tenets of their visions. Prior to his consulting career, Mr. Loubert worked as a program manager and engineer on large-scale network and system development efforts for a leading global security and information technology company.

Eugene Lukac, Deloitte Consulting LLP
Eugene is a Specialist Leader in Deloitte Consulting LLP’s Technology Strategy and Architecture service line, working at the interface of business and technology. He helps major organizations align business and IT strategies, and improve the business effectiveness of IT services and processes. He is a recognized speaker and author on the financial management and contribution of IT organizations. His experience has spanned a broad cross-section of industries, with particular emphasis in consumer and industrial products, energy, and financial services. Prior to his 15 years in consulting, Dr. Lukac had 17 years of experience managing information services organizations. He is a frequent guest speaker at select industry gatherings. His articles have appeared in publications such as CIO, Optimize, and the Journal of IT Financial Management.


Satish Maktal, Deloitte Consulting LLP
Satish is a Manager in Deloitte Consulting LLP’s Technology Strategy & Architecture practice. He has 10+ years of experience in Business and Technology consulting with emphasis in technology strategy, assessments, and roadmaps; Enterprise Architecture; Enterprise Applications Strategy (SaaS and on-premise); package selection; IT Operations assessment and roadmaps (ITIL); PMO; Business Case development; and Systems Development and Integration.

Vishal Malakar, Deloitte Consulting LLP
Vishal is a Manager in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy & Architecture practice focusing on IT strategy and effectiveness for banking and securities clients. He has 10 years of client service experience across multiple industries, including Health Care, Telecommunications, Public Sector, and Financial Services, advising clients on technology-business alignment, information strategy, structure/accountability in IT processes, and effective management of key infrastructure assets. Vishal has managed Deloitte Consulting teams responsible for developing and implementing IT processes, customized reporting, and tech assessments for IT solutions, and he has worked with global financial services firms to establish strategic roadmaps for their IT organizations. His ongoing focus is to help clients realize their IT investment value and help them leverage it for enhanced business alignment.

Anuj Mallick, Deloitte Consulting LLP Anuj is a Manager with Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy & Architecture practice serving Capital Markets clients. He has been with Deloitte Consulting for eight years and maintained a focus on the Financial Services industry, but also has experience with Technology, Public Sector, and Health Care clients. Anuj has delivered on IT effectiveness, data center and trade floor architecture, cost reduction, stability, process improvement, and global technology deployment projects. He has worked across equities, derivatives, payments, retail banking, and insurance areas. He also has experience with the delivery of risk projects, including Sarbanes-Oxley and living wills.

Tiffany McDowell, Deloitte Consulting LLP Tiffany is a Senior Manager in Deloitte Consulting LLP’s (Deloitte Consulting) Human Capital practice. Tiffany has experience in many areas of organizational behavior and serves as a national lead for Deloitte Consulting’s Organization Strategies service offering. Tiffany focuses on delivering organization design, talent strategies, and global change management solutions for large-scale transformation projects. She is a specialist in organization and skills assessment, leadership development, and talent planning and execution. Tiffany has conducted job analysis and performance evaluation, done executive assessment and coaching, and has developed, delivered, and measured organizational interventions in a variety of corporate settings. Tiffany has over 14 years of combined business and consulting experience, both locally and abroad, and holds a doctorate in Industrial/Organizational Psychology.

Patrick McLinden Patrick has over 10 years of experience working with clients to assess, design, and implement effective talent management solutions. In addition, he has helped clients manage mergers and acquisitions, develop talent strategies, effectively manage change, and design effective organizations. He focuses primarily on the Health Care industry but has served clients and IT organizations across industries.

Kaushik Mukerjee, Deloitte Consulting LLP Kaushik is a Manager in Deloitte Consulting LLP's Federal Technology practice. With more than 15 years of experience, Kaushik has consulted with major government and industry clients on IT strategy, enterprise architecture, IT portfolio management, strategic planning, and cloud computing. He has primarily worked with Chief Information Officers and Chief Enterprise Architects to manage strategic initiatives, aligning business goals with technology investments.

Anuj Nadkarni, Deloitte Consulting LLP Anuj is a Senior Consultant in Deloitte Consulting LLP’s Technology Strategy & Architecture practice, which focuses on driving business performance through effective use of information and enabling technologies. Anuj has more than six years of experience leading IT strategy and implementation projects across a number of industries including Financial Services, Health Care, Public Sector, and Travel and Transportation. His areas of specialization include IT services management, enterprise architecture, information strategy, and project management. Anuj holds a master’s degree in Applied Operations Research from Cornell University.

Johannes Raedeker, Deloitte Consulting LLP Johannes is a Director in the Technology Strategy & Architecture practice with Deloitte Consulting LLP in the San Francisco office. He has over 15 years of experience in the selection, cost justification, strategic planning, and implementation of large-scale ERP systems. He has focused on aligning IT capabilities to business needs and helped clients plan their forward-looking application architectures combining both on-premise and SaaS applications. His work experience includes Financials, Procurement, Order Management, Projects and Supply Chain application implementations, financial processes and customer service redesign, Shared Services implementation, and organizational design. He has led several large implementation projects particularly in the finance area and developed global solutions with deployments in Europe and Asia.

Edward Reddick, Deloitte Consulting LLP Edward is a Director in Deloitte Consulting LLP’s Technology Strategy & Architecture practice, with experience in consulting assignments covering business and IT strategies, portfolio management, effectiveness and efficiency studies, software development, systems requirements, and compliance analysis. Recently he has led efforts to develop an overarching data strategy that aligns IT portfolio management processes with network operations, information assurance, and network defense processes. Prior to becoming a consultant, Mr. Reddick spent eight years as an officer in the United States Navy and an operations manager for a leading telecommunications equipment maker.

Eric Ritter, Deloitte Consulting LLP Eric is a Senior Manager in Deloitte Consulting LLP’s Technology Strategy & Architecture practice. Mr. Ritter has extensive management consulting experience with an emphasis on helping CIOs and other IT decision makers strategically align information resources to support business priorities. This experience reflects progressively increasing levels of project and program management responsibility in the areas of IT Infrastructure, Enterprise Architecture, Segment Architecture, IT Strategy, IT Portfolio Management, IT Governance, and other CIO support services. He holds MBA and MS degrees and is a Certified Project Management Professional (PMP), Six Sigma Black Belt, and ITIL certified.

Derrick Robinson, Deloitte Consulting LLP Derrick is a Manager in Deloitte Consulting LLP’s System Integration practice and has more than 15 years of professional experience. His specialty is delivering business results within health care companies by defining enterprise architecture (TOGAF standard), Lean Six Sigma, Lean Agile, and service-oriented architecture (SOA) technology solutions to enable revenue generation, increase client satisfaction, and improve operating margins. This entails assessing, designing, and implementing integration solutions for claims management and business-to-trading-partner operations. His emphasis is on eliminating waste in, and improving the throughput of, software delivery lifecycle processes and core administration systems. Resulting improvement projects that become part of the implementation roadmap are vetted for business impact and technical complexity and are managed via program governance control structures. He participates in TOGAF forums for enterprise architecture and service-oriented architecture.

Scott Rosenberger, Deloitte Consulting LLP Scott is a Senior Principal in Deloitte Consulting LLP’s (Deloitte Consulting) Consumer & Industrial Markets practice, where he is the industry leader for the THL and Transportation sectors. His client service specialization is Technology Strategy & Architecture, where he is the service leader for Enterprise Architecture. The THL and Transportation sectors of Deloitte Consulting provide distinctive and innovative solution capabilities and services to the world’s leading companies in each of these marketplaces: hotels, travel service companies, restaurants, airlines, and air freight and transportation companies. Scott works closely with clients by guiding business strategy and the application of information technology. With over 26 years of experience, Scott has consulted globally to Fortune 2000 companies and has been involved in most phases of technology planning, development, and integration. As a certified ITIL and FEA Architect, he led the design and development of Deloitte Consulting’s full architecture methodology, which is both TOGAF and DODAF compliant.

Shavin Thaddeus Shahnawaz, Deloitte Consulting LLP Shavin is a Senior Consultant in Deloitte Consulting LLP’s Technology Strategy & Architecture service area, with over five years of experience in the Life Sciences & Health Care industry. He has worked with health plans, pharmacy benefit managers, medical devices, and electronic medical records across functions of product marketing management, business development, financial analysis, strategic technology research, enterprise architecture, and software application development. His background and professional interests include IT strategy, effectiveness, and alignment. He has an MBA and a bachelor’s degree in computer science and engineering.

Sumit Sharma, Deloitte Consulting LLP Sumit is a Manager in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy service area, with a focus on the high-tech industry. He is a founding member of Deloitte Consulting’s Cloud Strategy Practice; in addition to contributing to thought leadership, intellectual property, and business development activities related to Cloud Computing, Sumit serves as a specialist within Deloitte Consulting’s Cloud CoE.

Bill Sheleg, Deloitte Consulting LLP Bill is a Senior Manager in Deloitte Consulting LLP’s Technology Strategy & Architecture service area. He has more than 25 years of experience working with organizations to increase the value they receive from IT. This involves working with clients to develop IT strategies that align with business imperatives and transforming IT operating models to improve IT performance. The IT operating model design work includes IT organizational design, increasing IT’s capability maturity, and implementing effective IT governance structures. He is ITIL V3 certified and works with IT organizations to assess and improve their IT service management maturity. He is also TOGAF certified, and works with organizations to achieve more integrated IT environments that improve support for the business, reduce costs, and provide more consistent information. He is a frequent speaker at the Open Group’s Architecture Forums on the topic of Strategy Execution. He is PMP certified, experienced in managing complex programs, and received certification from IBM as a Cloud Certified Architect.

Russ Smariga, Deloitte Consulting LLP Russ is a Specialist Leader in Deloitte Consulting LLP’s (Deloitte Consulting) Technology Strategy & Architecture service line, specializing in Data Center and Infrastructure, service/application/infrastructure rationalization and consolidation, and cloud computing. He has over 25 years of experience and has served in various leadership and technical roles at Deloitte Consulting, Sun Microsystems, and Motorola. Russ's areas of specialization include Technology Strategy & Infrastructure and IT Operations as they apply to enabling business strategy. He has led teams in numerous implementation projects; the architecture, design, and deployment of disaster recovery/business continuity efforts; data center consolidation engagements; data center improvement strategies; and very large software development efforts in both the public and private sectors. Russ’s experience spans health care, technology manufacturing, financial services, global online service delivery, and large state government operations. Russ holds a master’s degree in Systems Management from the Air Force Institute of Technology.

Siddharth Sonrexa, Deloitte Consulting LLP Siddharth is a Senior Consultant in Deloitte Consulting LLP’s Technology Strategy & Architecture service line, with six years of experience in IT Strategy and IT Service Management in several different industries, including Health Care, Energy, Telecommunications, Public Sector, and Consumer Business. Siddharth's experience includes ITSM strategy development, IT shared services design, IT operations reengineering, and software integration. He is an ITIL V3 Practitioner (with several advanced certifications, including Service Strategy and Service Design) and has experience assessing and implementing processes across the service lifecycle, including defining services, transitioning services, and implementing processes. Siddharth’s core focus is improving the alignment of business and information technology strategies through better design of IT organizational structure, IT operations, and IT governance.

Anuj Sood, Deloitte Consulting LLP Anuj is a Senior Consultant in Deloitte Consulting LLP’s Technology Strategy & Architecture practice. He has six years of experience in conducting and leading IT assessments, IT infrastructure improvement, IT infrastructure design and implementation, enterprise architecture assessment and design, and IT operations. His experience spans most phases from planning through design and execution. In his consulting role, Anuj has helped clients manage their infrastructure, achieve architecture compliance, and improve IT operations.

Chris Thomas, Deloitte Consulting LLP Chris is a Manager in Deloitte Consulting LLP’s Technology service area, aligned with the Technology Strategy & Architecture service line. Primarily focused on the financial services and insurance space, Chris has also led engagements with public sector, high-tech, and automotive clients. Chris has demonstrated effectiveness blending business requirements and technology strategy for clients in the U.S. and Europe. Chris’s experience includes developing and implementing technology strategies to support large-scale technical and programmatic transitions, IT operational assessments, business/IT process development, and IT Service Management (ITSM) principles. Chris holds a master’s degree in Communications and Technology Strategy from Northwestern University, is ITIL Foundations certified, and actively contributes to technology and data center journals.

Jonathan Weil, Deloitte Consulting LLP Jonathan is a Business Technology Analyst in Deloitte Consulting LLP’s Information Management practice focusing on Business Intelligence and Data Warehousing. Jonathan has consulted for clients in both the federal and commercial sectors across a variety of industries, including Financial Services and Entertainment/Media, helping them understand their data landscape and leverage resources for high-quality performance management. Additionally, Jonathan has experience working in varied roles as a data architect, ETL developer, and business analyst. Jonathan combines his educational background in Business Strategy with his technical experience to develop individual solutions that generate increased business value for his clients.

Ezrick Wiggins, Deloitte Consulting LLP Ezrick is a Manager in Deloitte Consulting LLP’s Technology Strategy & Architecture service line with over nine years of experience spearheading projects and formulating business strategies for C-level executives. He has assisted clients in numerous areas, including M&A IT strategy, program management, cost reduction, systems implementation, technology rationalization, enterprise architecture, and financial management. Ezrick has led many projects to develop custom applications, deploy infrastructure, build cloud-computing platforms, and facilitate data center consolidation efforts. He has also developed strategies for the consumption of SOA to provide services across business units to address redundancy, improve IT security, and differentiate business capabilities. Ezrick holds a master’s degree in Finance from the Jones School of Business at Rice University.

Contacts

For more information, or to obtain reprints, please contact:

Peter Blatman Principal, Deloitte Consulting LLP Phone: +1 417 783 6169 E-mail: [email protected]

Matt Law Principal, Deloitte Consulting LLP Phone: +1 612 397 4353 E-mail: [email protected]

For further information, visit our website at www.deloitte.com/us/ciocompass

This publication contains general information only and is based on the experiences and research of Deloitte practitioners. Deloitte is not, by means of this publication, rendering business, financial, investment, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte, its affiliates, and related entities shall not be responsible for any loss sustained by any person who relies on this publication.

About Deloitte Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee, and its network of member firms, each of which is a legally separate and independent entity. Please see www.deloitte.com/about for a detailed description of the legal structure of Deloitte Touche Tohmatsu Limited and its member firms. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting.

Copyright © 2011 Deloitte Development LLC. All rights reserved. Member of Deloitte Touche Tohmatsu Limited