
2011 Planning Guide: Data Center, Infrastructure, Operations, and Internal Cloud

Gartner Research Note G00210244, Drue Reeves, 4 March 2011, RA15 08132011

In 2011, data centers will begin their second transformation in five years, from virtualized, consolidated, and centralized IT infrastructure to a service-oriented, economically efficient set of IT capabilities. The new data center will enable business units to consume IT as a service, host critical apps and data, and augment capacity using the external cloud. This transformation is the result of pressure external cloud computing exacts on IT organizations to compete with cloud providers and the desire to support a growing and increasingly complex remote and mobile workforce. In this data center Planning Guide, Vice President and Distinguished Analyst Drue Reeves will explore the technology innovations and IT trends that enable IT organizations to build internal private clouds, virtualize the desktop, bridge to external clouds, and increase data center resiliency while reducing capital and operational costs.

SUMMARY OF FINDINGS

Bottom Line: In 2011, data centers will start a transformation from virtualized, consolidated, and centralized IT infrastructure to a service-oriented, economically efficient set of IT capabilities that enable business units to consume IT as a service, house critical applications and data, and augment capacity by using the external cloud. This transformation is the result of pressure external cloud computing exacts on IT organizations to compete with cloud providers and the desire to support a growing and increasingly complex remote and mobile workforce. A new wave of innovation is emerging in key data center areas such as server virtualization, client virtualization, servers, and storage. This innovation enables IT organizations to build internal private clouds, virtualize the desktop, bridge to external clouds, and increase data center resiliency while reducing capital and operational costs.

Context: The growth of cloud computing and the rise of a remote and mobile workforce are having a profound effect on data centers and IT organizations. In 2010, organizations were compelled to consume IT services from external cloud providers to achieve their business, budget, and IT goals. But organizations realize that external cloud computing is not a panacea. They still need internal data centers to house critical applications and data. However, the use of external cloud providers has conditioned organizations to expect IT resources — whether internal or external — that are offered in a pay-as-you-go (PAYG), self-service manner. Therefore, IT organizations are forced to offer IT services by using the same consumption model or otherwise risk extinction. In addition, the business need to retain key talent, minimize operational overhead, and maximize employee productivity is creating a mobile workforce that is armed with a diverse set of devices and work-environment constraints. This new workforce is driving IT organizations to rethink their desktop strategy and focus on the end user by offering desktop services that are housed, managed, and operated within the data center.

Take-Aways:

The macro trends in IT have not changed in the past year. IT organizations continue to externalize, democratize, and consumerize IT. However, these overarching trends are having a profound effect on the trends and technology within the data center.

Data center trends:

• Hybrid IT: IT organizations are becoming the intermediary between internal IT consumers and IT services — both internal and external.

• Internal private clouds: To compete with external providers, IT organizations are planning to build, or are already building, internal clouds that offer IT as a service.

• Hybrid clouds: IT organizations are building bridges to external cloud providers in order to augment internal IT capacity, automatically migrate applications and data, and increase disaster recoverability.

• User-centric computing: To support a mobile and remote workforce that uses a diverse set of devices and applications, IT organizations are taking a user-centric approach to desktop architecture. Many IT organizations are employing server-hosted virtual desktops (SHVDs), persistent personalization, and software as a service (SaaS)-based apps to reduce the operational expenses and security issues associated with desktops.

• Data center economization: Data center trends are underpinned by the desire to optimize every aspect of IT. To that end, IT organizations are employing technologies such as data deduplication, blade servers, and virtualization to further slow data growth, utilize excess CPU capacity, reduce energy costs, and economize precious data center space.

Server virtualization planning:

• Virtualize as many applications as possible: Virtualizing as many applications as possible will help organizations reach their consolidation goals and increase the IT agility and workload mobility necessary to offer infrastructure as a service (IaaS).

• Evaluate cloud orchestration software: Building an internal IaaS cloud requires automation and orchestration software that can place and move virtual machines (VMs) within the infrastructure.

• Build an internal cloud: To compete with external cloud providers and to offer IT services to internal customers, IT organizations must build internal clouds that use virtualization and orchestration software. Chargeback and a self-service portal are also key parts of an internal cloud.

Client virtualization trends and planning:

• Pilot SHVDs: As IT organizations seek to rearchitect their desktop infrastructure and replace Windows XP, they should consider SHVD solutions. Virtualization vendors significantly improved their SHVD solutions in 2010. Citrix XenDesktop and VMware View are ready for enterprise deployments.

• Use SaaS whenever possible: Almost every end-user device supports a browser. Thus, SaaS-based applications can increase user productivity when traditional applications would require a fully functional laptop. External SaaS-based services also reduce the IT organization’s management and maintenance burden.

• Watch for consolidation in persistent personalization: Persistent personalization is a key piece of user customization in SHVD deployments. However, personalization software companies are small and may be acquired by a virtualization vendor whose platform does not match an organization's SHVD solution, thereby reducing support options.

• Test client hypervisors in the lab: Client-hosted desktops (aka client hypervisors) will emerge in 2011, but the lack of manageability and security will prevent IT organizations from deploying them in production environments.

© 2011 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner’s prior written permission. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner’s research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see “Guiding Principles on Independence and Objectivity” on its website, http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp

Compute trends and planning:

• Use reduced instruction set computer (RISC)/Unix platforms only when applications require it: Today's x86 servers pack enough power and redundancy to handle most enterprise workloads. IT organizations should purchase non-x86 server platforms only when the application needs the highest levels of performance, scalability, or redundancy.

• Buy blades for density and innovation; buy rack-based servers for flexibility: Blade servers offer the best CPU performance per rack unit but can lock IT organizations into a single vendor. Rack-based servers offer greater vendor flexibility, and still offer good density and power, but lack unique innovation, such as input/output (I/O) virtualization, that can increase workload mobility.

• Plan for 10 Gigabit Ethernet (10GbE) at the network’s edge: 10Gb Converged Enhanced Ethernet (CEE) will be a standard feature on server motherboards in 2011. IT organizations should plan to upgrade their top-of-rack switches to accommodate the infrastructure change.

Storage trends and planning:

• Simplify storage environments: IT organizations should reduce the number of both vendors and devices by purchasing larger, multipurpose devices. Multifunction devices can increase workload mobility and reduce storage network complexity.

• Investigate and invest in storage efficiency and reduction technologies: Storage continues to grow at phenomenal rates. To control this growth, IT organizations must invest in vendor products that already have or can demonstrate road maps that incorporate storage reduction technologies such as deduplication and thin provisioning.

• Incorporate cloud into your backup, disaster-recovery (DR), and data-distribution plans: Cloud-based storage continues to mature. For non-critical applications and data, IT organizations should investigate cloud-based storage as an alternative method to protect and distribute data.

Availability and recoverability trends and planning:

• Review business-continuity and DR plans annually: Gartner routinely recommends that organizations review their business-continuity and DR plans annually. But in 2011, this recommendation is more important due to changing recovery time objective (RTO) requirements and the increased number of DR options available to IT organizations.

• Use virtualization for warm-site recovery and insource DR when possible: Virtualization's encapsulation of applications and data makes it a perfect option for warm-site recovery. For IT organizations that have excess data center capacity, using x86 server virtualization and storage replication internally is a cheaper and better DR option than outsourcing.

• Incorporate cloud computing thinking into your DR and resiliency plans: Public cloud computing can make a good DR option for non-critical applications with small datasets. Organizations must also consider the recoverability of applications and data already in the cloud.

• Continue to build resiliency and availability inside the data center for critical applications and data: Due to security and liability issues with the public cloud, internal clouds become the default location to house critical apps and data. IT organizations must build resilient internal clouds by using the high-availability features of the hardware and operating system.

Conclusion: The rise of the public cloud and the growth of remote and mobile users are catalyzing a major shift in the data center. Many IT organizations are feeling the pressure to compete with external providers and to support a workforce armed with a range of devices and functionality. In 2011, IT organizations need to prepare for this shift by offering IT as a service on top of a highly virtualized and economized infrastructure. To accomplish this goal, IT organizations must invest in technologies that enable them to build internal clouds, to rearchitect the desktop, to bridge to external clouds, and to economize every aspect of the data center.

INTRODUCTION

From 2006 to 2008, data centers experienced a major transformation. Server virtualization sparked a wave of data center and server consolidation initiatives that transformed data centers from a rigid, siloed, underutilized set of IT devices to a highly economized, agile, common IT infrastructure.

But these initiatives were not enough to stem the tide of the economic downturn in 2009. Many IT organizations were faced with little or no capital budget, no additional head count, and a growing demand for IT services. To stay alive, businesses needed to save or earn every penny by automating processes, enabling additional revenue streams, increasing user productivity, and dynamically scaling up and down resources as the market demanded. These dynamic business expectations, combined with the budget restrictions, became part of a “new normal” — the expectations that budgets would not go back to pre-2009 levels and that IT resource scaling could match the pace of business. Thus in 2010, public cloud computing became an increasingly popular choice for IT services. Cloud computing’s rapidly provisioned, PAYG business model enables organizations to match the pace and trend of business.

Unfortunately, public cloud computing has many issues, including software licensing, data privacy and compliance, vendor viability and lock-in, and liability. Enterprises were unable to entrust their critical (and some non-critical) applications and data to cloud service providers (CSPs). Nevertheless, organizations have grown accustomed to consuming IT as a service. Using cloud computing, businesses can more easily predict costs, bring products to market faster, and scale up or down as the market demands.

Today, data centers stand on the brink of a second transformation. The rise of public cloud computing and a growing, increasingly complex mobile and remote workforce is pressuring IT organizations to offer IT infrastructure as a service that not only can compete in price and agility with external providers, but also can offer the security and surety necessary to house business-critical applications and data.

In 2011, IT organizations must embrace this transformation; they must let the edge drive the core by building secure and efficient internal clouds that can compete with external providers, support a mobile and remote workforce, house applications and data that cannot be hosted in the public cloud, and connect to the external cloud to augment capacity.

The consequences of not embracing the new data center can be profound. If IT organizations fail to offer IT as a service or resist supporting remote users, internal customers will spend their IT budget elsewhere (or lobby to re-organize how to pay for IT services) and eventually bypass the IT organization to meet their needs.

INDUSTRY OVERVIEW

In order to give the reader a full view of the industry trends, this data center Planning Guide categorizes three industry trends:

• Macro trends: These are general IT industry trends that are important to understand because they reflect the business and IT trends that directly impact the data center.

• Data center trends: These are overarching data center trends that are driving the market in terms of IT customer demands and product direction, which create technology-specific trends.

• Technology-specific trends: These trends are specific to a technology area in the data center such as virtualization, storage, or computing. These trends are a part of a separate market (e.g., storage technology trends support a storage market), but also fit within the data center market and support the overall data center trends.

Macro Trends

Gartner IT1 coverage writes extensively about three top-level trends in businesses and enterprise IT: externalization, consumerization, and democratization. These macro trends can be broken down into specific effects, and resultant actions, that IT professionals need to be aware of and choose from.

Externalization

Businesses continue searching for ways to reduce IT spending across the board while focusing on their core capabilities. Core capabilities are those that provide competitive differentiation to the business. Non-core, or context, business capabilities are often non-value-added processes (i.e., commodities). Executive interest in externalized IT — the cloud, SaaS, outsourcing, onshoring, and offshoring — reflects the desire to reduce internal maintenance costs and complexities. The benefits of IT externalization include the potential of reduced cost, renewed focus on the core, and strategic partnership with the business. Risks include the relative immaturity of external solutions for true enterprise-scale support and a myriad of unsolved security and privacy issues. In addition, advanced external scenarios will require significant refactoring of most enterprise IT environments. That refactoring has the long-term benefit of clearly decoupling core and context functions so that they can be more effectively redistributed. Enterprises are determining their level of risk for externalization; businesses expecting strong growth in a weak economy will likely incur higher risk, while businesses recovering from recession will still look at externalization, although less aggressively and primarily as a cost-saving measure or in preparation for a business upturn. For those businesses where there has been no recovery, externalization is an imperative for true commodity functions, with heavy emphasis on expense reduction.

Companies should reduce the amount of effort they spend on context — commodity IT — while refocusing on the core — their competitive differentiators. A growing number of options for externalizing IT, including the cloud, hold the promise of streamlined operations, reduced overhead, and/or reinvigorated purpose. The danger with externalization, especially in a cloud market with emerging standards, is vendor lock-in. With cloud, the internal IT pricing model of cost allocation shifts to a market-based value pricing model (vs. cost-plus pricing), which may or may not reflect true vendor costs.

Consumerization

The average person has become a sophisticated consumer of technology. This has led to an explosion of devices and personal choice and a familiarity with technology that breeds contempt for IT standards. As a result, there is a strong trend for external IT providers to market directly to business stakeholders, thereby effectively circumventing traditional IT. Consumer laptops and smartphones will follow the same path around IT, displacing corporate desktops through the price advantages and short product cycles they carry over from the mass market. A generation from now, enough of the workforce will likely own high-end mass-market computing equipment that enterprises will not have to pay for user computing equipment at all. Coupled with other macro trends, this is igniting growth in work-at-home and mobility options. Consumerization of IT imposes significant burdens on the enterprise; it's hard to secure equipment you don't own, and it's hard to manage and support a very diverse hardware and software base. In addition, identity is increasingly linked to consumer experiences, and enterprises are considering ways to leverage those external identities to modernize identity and access management (IAM). Enterprises are responding with consumer-oriented products for customers, partners, and employees. Emerging virtualization and cloud-based identity services are likely to accelerate this trend.

Democratization

Due in part to the rise of social computing and resultant flattening of traditional hierarchies, complex decisions, including those affecting productivity, are often made collaboratively by groups and individuals rather than as a result of top-down mandates. This trend ensures that decision makers are situationally aware of one another's activities. This allows workers to gain a degree of shared decision rights by providing a means to jointly own and contribute to an objective, obtain group consensus, and establish varied levels of trust. In many ways, this trend is about flattening hierarchies, avoiding dependencies on "push" information, and prioritizing community participation that democratizes the workplace. This is not to say that traditional hierarchies will disappear; they are a necessary communication structure within a large organization.

The challenge to the organization is to facilitate the coexistence of hierarchical and democratic styles of communication and choice making. This is as much a technology change as it is a social change. Where the 20th century reflected the socialization of work by using a Frederick Taylor framework, the 21st century is emerging from diametrically opposed forces that focus less on finite measurement and more on the infinite capability of collaborative organizations.

Data Center Trends

In 2011, five overarching trends will transform the data center:

• Hybrid IT: Perhaps the greatest effect of public cloud computing on IT is in operations. IT organizations realize that not only do they need to compete with public CSPs, but they also have to be the intermediary between their internal customers and all IT services (whether internal or external). IT organizations are becoming the broker of a set of IT services that are partially hosted internally and partially hosted externally — hybrid IT. By being the intermediary of IT services, IT organizations can offer internal customers the price, capacity, and provisioning speed of the external cloud while offering the protection and security of the internal cloud (see Figure 1).

• Internal clouds: As businesses grow accustomed to consuming IT resources as a service (i.e., the consumerization of IT), IT organizations will be compelled to build internal clouds. Unfortunately, building an internal cloud is high art; there are few blueprints for building an internal cloud. Although vendors are building products (e.g., cloud orchestration software such as vCloud Director from VMware) that will help customers build an internal cloud, no turnkey internal cloud solution exists. Thus, IT organizations will struggle to cobble together and integrate the necessary pieces to build an internal cloud. Nevertheless, building internal clouds remains a key data center trend in 2011 because of the need to compete with external cloud computing. (A minimal sketch of the PAYG chargeback model such clouds require follows this list.)

However, Gartner has published an internal cloud maturity model to help IT organizations know when they have achieved an internal cloud. For more information regarding internal cloud maturity models, refer to the guidance document “Stuck Between Stations: From Traditional Data Center to Internal Cloud.”

• Hybrid clouds: Hybrid clouds are a connection or integration between two clouds, usually between an internal private cloud and an external public cloud. Hybrid clouds are constructed using software that enables applications and data to more easily migrate between connected clouds. For example, many applications are dependent on identity management systems to authenticate users, have gigabytes of data, and/or have I/O latency dependencies for storage. These dependencies often prevent applications from migrating to the external cloud. Hybrid cloud solutions solve each of these dependencies in unique ways. For example, hybrid cloud software can enable WAN acceleration and VPN connections between clouds to hasten the migration of large datasets over a secure connection. As IT budgets continue to shrink and capital resources remain scarce, hybrid clouds become a more popular option to augment IT capacity and enable DR rather than building another data center or signing a long-term outsourcing agreement.

• User-centric computing: To compete in a global marketplace and to retain key employees, organizations are forced to hire personnel who live in remote locations and/or use their personal devices for work. Other organizations are attempting to radically reduce the capital expense of large desktop devices for users with small sets of application requirements. These new requirements create new challenges for IT organizations to secure data; back up data; support smaller, less functional devices; and support a broader range of devices. For example, IT organizations are employing SHVDs on blade servers in the data center to support remote and mobile users or call centers with thin clients. In addition, IT organizations are using persistent personalization software to deliver a customized user experience to these new devices. Therefore, many IT organizations are rethinking their desktop and mobility strategy and choosing a user-centric, rather than device-centric, point of view.

Figure 1. Hybrid IT

[Figure: the IT organization acts as a broker between consumers' service requests and IT services. Strategic IT services, apps, and data remain within the enterprise; non-strategic IT services, apps, and data are consumed from cloud computing offerings such as SaaS and PaaS.]

Source: Gartner (February 2011)


• Data center efficiency: Competing with the external cloud requires IT organizations to drive for hyper efficiency within their data center. If critical data and applications are to be housed within an internal private cloud, IT organizations must deliver internal IT services in a cost-effective, efficient manner. This requires IT organizations to squeeze all remaining cost out of their data center by virtualizing as many applications as possible, employing storage efficiency technologies such as data deduplication, and buying servers that allow the organization to maximize space and power and consolidate applications.
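To make the consumption model behind these trends concrete, the following minimal Python sketch meters usage against a hypothetical service catalog and produces a PAYG chargeback per business unit. All service names and rates are invented for illustration; a production chargeback system would draw its metering data from the orchestration layer.

    # Minimal PAYG chargeback sketch for an internal cloud.
    # All service names and hourly rates below are hypothetical.
    CATALOG = {
        "vm.small": 0.08,    # $/hour for 1 vCPU, 2 GB RAM
        "vm.large": 0.32,    # $/hour for 4 vCPU, 8 GB RAM
        "block.gb": 0.0002,  # $ per GB-hour of block storage
    }

    def chargeback(usage_records):
        # Sum metered usage records (unit, service, quantity, hours) into bills.
        bills = {}
        for unit, service, quantity, hours in usage_records:
            cost = CATALOG[service] * quantity * hours
            bills[unit] = bills.get(unit, 0.0) + cost
        return bills

    usage = [
        ("marketing", "vm.small", 4, 720),    # four small VMs for a month
        ("marketing", "block.gb", 500, 720),  # 500 GB of block storage
        ("finance", "vm.large", 2, 360),      # two large VMs for half a month
    ]

    for unit, total in sorted(chargeback(usage).items()):
        print("%s: $%.2f" % (unit, total))

The point of the sketch is the consumption model, not the rates: internal customers see a bill shaped like an external cloud provider's, which is what lets the IT organization compete on the same terms.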

TECHNOLOGY-SPECIFIC TRENDS

Data centers are the heart of IT. They house the technology that runs the business. It is vital for IT organizations to keep apprised of the technological trends in several key areas, including server virtualization, client virtualization, storage, servers/compute, high availability, and disaster recovery.

Server Virtualization Trends

Server virtualization is the underlying technology for transforming data centers into internal clouds.

Market and Technology Trends

Over the past 20 years, no technology has had as profound an effect on the size, shape, and operation of data centers as x86 server virtualization. At first, virtualization was used primarily as a server consolidation tool. Over time, virtualization's ability to enable workload mobility has proven to be its most valuable asset. In 2011, virtualization vendors will take workload mobility to new heights by creating innovative products that enable IT organizations to build internal and hybrid clouds.

Four trends are emerging in the server virtualization market:

• Virtualization and enterprise management vendors will continue to build turnkey cloud orchestration software: To meet the demand for internal cloud computing, virtualization and enterprise management vendors are offering "cloud orchestration software." This software is designed to transform a highly virtualized IT infrastructure into IaaS or platform as a service (PaaS). Arguably, this trend started in 2010 when some vendors acquired cloud orchestration software (e.g., CA acquired 3Tera, Citrix acquired VMLogix, and Quest Software acquired Surgient) and other vendors developed new products (e.g., VMware developed vCloud Director). However, in 2011, the cloud orchestration market will mature as the vendors add new features such as self-service portals, service catalogs, and policy-based hybrid cloud management. (An illustrative placement sketch follows this list.)

• Competition in the hypervisor market will increase, but it's a three-horse race: Today, VMware's vSphere is the clear leader in server virtualization. Although that will not change in 2011, competition in the hypervisor market will increase. Citrix's XenServer is now widely accepted as an enterprise-capable hypervisor. Also, integrated solutions and price pressures are pushing multi-hypervisor architectures into the enterprise. For example, Microsoft has been working to enhance Hyper-V to optimize performance and management of Microsoft applications. Likewise, IT organizations may deploy XenServer as the back end for XenDesktop because of its IntelliCache feature for SHVD performance and optimization. Other virtualization vendors, such as Oracle and Red Hat, will remain niche because customers will employ these hypervisors only when the technology or solution demands it.

• Hypervisors are entrenching in data center infrastructure and commoditizing software: As hypervisor competition increases, hypervisor vendors are doing two things to retain customers. First, vendors are tightly integrating the hypervisor with traditional data center infrastructure. For example, VMware offers vStorage APIs that enable third-party storage arrays to communicate with the hypervisor in order to determine whether the hypervisor or array will provide storage functionality such as data deduplication. Second, virtualization vendors are subsuming traditional infrastructure device and software functionality into the hypervisor. Basic network security and access control, storage virtualization and optimization, and backup functionality are being subsumed by the hypervisor, supplanting the need for the same functionality in storage and networking devices. Both cases — more tightly integrated hypervisors and additional hypervisor functionality — lead to hypervisor lock-in.

• Virtualization vendors are building hybrid cloud software: A hybrid cloud is actually two clouds — typically a private cloud and a public cloud — connected together to enable application and data migration. Hybrid clouds are useful for augmenting capacity when IT organizations are unable to build additional data center space or as a DR option. IT organizations build hybrid clouds by using specialized software to secure data in flight, to accelerate data migration, to enable authentication and authorization to identity management systems, and to convert between virtualization formats. Several virtualization vendors are building hybrid cloud software, including Citrix’s Cloud Bridge and VMware’s vCloud Connector.
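Cloud orchestration ultimately reduces to deciding where each VM runs. The Python sketch below illustrates that placement problem with a naive first-fit algorithm; it is not modeled on any vendor's product, and real orchestration engines add affinity rules, reservations, policies, and live migration.

    # Illustrative first-fit VM placement: the core decision that cloud
    # orchestration software automates. Not modeled on any vendor product.
    class Host(object):
        def __init__(self, name, cpu, mem_gb):
            self.name, self.cpu_free, self.mem_free = name, cpu, mem_gb
            self.vms = []

        def fits(self, vm):
            return vm["cpu"] <= self.cpu_free and vm["mem_gb"] <= self.mem_free

        def place(self, vm):
            self.vms.append(vm["name"])
            self.cpu_free -= vm["cpu"]
            self.mem_free -= vm["mem_gb"]

    def first_fit(hosts, vms):
        # Place each VM on the first host with enough headroom; anything
        # that does not fit is a candidate for bursting to the external cloud.
        unplaced = []
        for vm in vms:
            for host in hosts:
                if host.fits(vm):
                    host.place(vm)
                    break
            else:
                unplaced.append(vm["name"])
        return unplaced

    hosts = [Host("blade01", cpu=16, mem_gb=96), Host("blade02", cpu=16, mem_gb=96)]
    vms = [{"name": "web1", "cpu": 4, "mem_gb": 16},
           {"name": "db1", "cpu": 12, "mem_gb": 64},
           {"name": "batch1", "cpu": 8, "mem_gb": 48}]
    print("overflow to external cloud: %s" % first_fit(hosts, vms))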

Planning Considerations

IT organizations should plan for the following in server virtualization in 2011:

• Evaluate cloud orchestration software: Cloud orchestration software is a necessary piece when building a private or hybrid cloud. As this market matures, now is the time to evaluate cloud orchestration software. Organizations should start small, turning over small portions of their infrastructure that host non-critical applications to cloud orchestration, and grow as they are able to add policies that govern workload mobility. For more information on building a cloud, refer to the guidance document "Stuck Between Stations: From Traditional Data Center to Internal Cloud."

• Strive to virtualize as many applications as possible: The default position for IT organizations should be to virtualize every application. Although IT organizations may not be able to virtualize every application, they must realize that a lack of workload mobility is a key inhibitor to building private and hybrid clouds. Non-virtualized applications are significantly more difficult to move within the infrastructure and to deliver in a self-service, on-demand, charged-back manner. The only applications that should not be virtualized are those that the software vendor will not support in a virtualized environment.

• Make virtualization a requirement for all software RFIs/RFPs: Independent software vendors (ISVs) must help IT organizations virtualize their applications by supporting those applications on x86 server virtualization platforms and offering virtualization-friendly licensing. IT organizations must include virtualization as a must-have requirement in RFIs/RFPs to demonstrate to ISVs that a lack of virtualization support may result in a lost sale.

• Investigate hybrid cloud software: Hybrid cloud software is an emerging and important market. As organizations push more applications to the public cloud, the integration and migration efforts become more difficult. Hybrid cloud software will help to ease — even automate — application and data migration between private and public clouds. Forward-thinking organizations will begin to test hybrid cloud software to reduce the operational overhead of cloud integration.

Client Virtualization Trends

As workforce mobility, outsourcing, and contract employment increase, organizations will be forced to rethink their desktop strategy. If 2007 was the year of server virtualization, 2011 may be the year of client virtualization.

Market and Technology Trends

IT organizations spend vast amounts of time and resources supporting user desktops. Capital expenses for thick, fully functional desktops aside, the operational expenses (e.g., personnel and software) required to procure, secure, and manage desktops are astronomical. Adding to these problems is a remote and mobile workforce armed with a diverse set of devices with varying degrees of security features, network quality, and management functionality. Supporting a remote workforce by using traditional desktop infrastructure can lead to decreased business continuity due to device failures that result in lost productivity, increased security problems due to lost or stolen devices that hold important data, and increased operational overhead stemming from remote device support.

Client virtualization is an all-encompassing term that describes several technologies that are used to virtualize a user’s desktop experience including server-based computing (aka presentation virtualization [e.g., terminal server]), SHVDs, client-hosted virtual desktops (CHVDs), persistent personalization, and even SaaS.

These technologies are often employed in concert to help IT organizations increase desktop security, reduce management overhead, and increase user productivity. For example, using SHVDs, an IT organization can:

• Secure important data using a centrally located, managed storage infrastructure

• Increase business continuity by using multiple blades in a clustered or failover SHVD solution

• Increase management efficiency by using enterprise management tools and centralized image management

Unfortunately, no turnkey client virtualization solution exists. IT organizations are forced to integrate several different solutions to achieve the goals described in this section. However, emerging client virtualization trends in 2011 include:

• SHVD and persistent personalization deployments will increase in 2011: In late 2010, Gartner observed several organizations piloting SHVD solutions for call centers and remote users. These pilots will finish in early 2011 and increase the SHVD market. In addition, while IT organizations transition from Windows XP to Windows 7, many organizations will take advantage of the SHVD features in Windows 7 and the transition time to pilot new SHVD initiatives.

The increase in SHVD initiatives will spark a wave of persistent personalization software deployments. Typically, IT organizations keep a small set of generic "golden images" copied to each server when a new user is instantiated. However, these images are often devoid of user settings and customization; SHVD solutions do not excel at retaining and deploying users' desktop settings. Persistent personalization software, used in concert with SHVD, can deliver a unique desktop to each user (a simple illustration follows this trends list).

• SaaS will also increase in 2011: For a variety of reasons (e.g., network bandwidth requirements, user interface design, and security), some applications cannot be deployed using SHVD or other client virtualization technologies. Nevertheless, IT organizations still need client virtualization users to access these applications. Rather than giving up on client virtualization, some organizations will spend the time to replace these applications with SaaS so that the application can be accessed by any type of device that has a browser.

• Client hypervisors will emerge, but will remain niche: Although client hypervisor deployments will increase in 2011, IT organizations will hesitate to use them in production environments due to a lack of security and management software. Vendors will continue to add to existing solutions, but SHVD will remain the focus for 2011.

For more information regarding virtual desktop applications and delivery, refer to the IT1 Reference Architecture template “Virtual Desktop and Application Delivery.”
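To make the golden-image model described above concrete, the following Python sketch overlays persisted per-user settings onto a shared base profile at logon. It is illustrative only: the setting names are invented, and real persistent personalization products capture far more state than a handful of keys.

    # Illustrative only: overlay per-user settings on a shared "golden image"
    # profile, the basic idea behind persistent personalization software.
    GOLDEN_IMAGE = {
        "wallpaper": "corporate.jpg",
        "default_printer": "hq-floor2",
        "app_settings": {"mail_signature": "", "spell_check": True},
    }

    def compose_desktop(base, user_overrides):
        # Start from the shared golden image, then layer persisted settings.
        profile = dict(base)
        profile["app_settings"] = dict(base["app_settings"])
        for key, value in user_overrides.items():
            if key == "app_settings":
                profile["app_settings"].update(value)
            else:
                profile[key] = value
        return profile

    # Hypothetical settings persisted for one user between SHVD sessions.
    alice = {"wallpaper": "family.jpg",
             "app_settings": {"mail_signature": "Alice, Finance"}}
    print(compose_desktop(GOLDEN_IMAGE, alice))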

Planning Considerations

These are the items IT organizations should consider for client virtualization in 2011:

• Pilot SHVDs for specific desktop needs: SHVD solutions have matured in 2010 to the point where some solutions, such as Citrix XenDesktop and VMware View, are ready for enterprise deployments. IT organizations seeking to support a wide range of devices and applications for mobile, remote, and pattern users would do well to trial these solutions for a subset of their users in 2011.

• Use SaaS-based desktop applications when possible: Almost every device — from fully functional desktops to the smallest handheld — supports a browser. Using SaaS-based applications to support a wide array of end-user devices is an easy win for IT organizations. Not only does SaaS increase the investment that organizations make in desktop virtualization, but it also plays a key role in rearchitecting desktop infrastructure and driving down operational costs. However, IT organizations must be careful to not sacrifice functionality for ease of use. If users have to switch devices to get the functionality they need to complete their job, the cost savings is lost. Also, IT organizations must be mindful of both the maturity of the SaaS solution and the end user’s performance expectations and requirements. If a SaaS application is too slow or cumbersome for users, it could reduce employee productivity and erode the virtual desktop business case.

• Watch for consolidation in the persistent personalization software space: Several small software companies, including RES Software, AppSense, and Liquidware Labs, produce persistent personalization technology for both physical and virtual desktops. Although the virtualization vendors have some personalization software in their repertoire, the solutions are either incomplete (e.g., Citrix profile manager supports virtual desktops only) or are not shipping (e.g., VMware acquired RTO Virtual Profiles). Look for the smaller personalization software companies to be scooped up by the SHVD vendors in 2011. IT organizations that have deployed (or are considering) SHVDs need to be aware of the potential changes in this market. Persistent personalization is infrastructure software for SHVDs, and if a personalization software company is acquired by a vendor whose SHVD platform differs from the one the organization has selected, the organization might face support issues.

• Put client hypervisors in the lab: Client hypervisors are an area to watch, but until vendors produce the robust set of software necessary to manage client-hosted desktops (CHDs) on an enterprise scale, IT organizations should put client hypervisor deployments on hold. Also, testing client hypervisors enables IT organizations to gain a clear understanding of niche areas within the organization where the products will add value.

Storage Trends

Storage, too, is transforming the data center. Storage as a service and unified storage devices are changing the way IT organizations use and manage storage in 2011.

Market and Technology Trends

Unbridled data growth and storage complexity are the two biggest issues storage administrators face today. Storage administrators are simply overwhelmed by the diverse set of application requirements (e.g., multi-protocol support, price, and performance) and the countless systems and disks necessary to support the ocean of data for a large enterprise. The trends in storage reflect a desire to address these issues, as well as offer storage as a service.

There are three trends emerging in the storage market:

• Consumerization of storage within the data center

• Economization and consolidation of storage technology into multipurpose devices

• Externalization of storage to cloud providers

As IT organizations begin to build internal clouds, they quickly realize that IaaS is more than compute as a service. IaaS requires both connectivity between compute and storage services for application data and stand-alone storage services that users can consume independently. For example, applications running within an internal IaaS cloud need connectivity between the VM and block storage services in order to automatically store application data. However, users often require file storage services — independent from the application — to store unstructured data. In addition, both users and applications have a need for advanced storage services such as backup, archive, replication, and data tiering to protect data or meet performance and compliance requirements. Therefore, many IT organizations are building internal private clouds with storage as a service that can serve the diverse needs of applications and users.

Unfortunately, storage administrators are struggling to build internal storage as a service. The growth, complexity, and disparate nature of storage prevent administrators from offering holistic storage services (e.g., backup) across all compute and user environments. The more diverse the compute environment, the more discrete and complex the storage environment becomes. In addition, the sheer growth of data causes administrators to employ more devices than they can reasonably manage. Administrators are overwhelmed by the number of devices with different management interfaces, I/O protocols, replication protocols, disk drive types, and performance characteristics. That is why many storage vendors are offering all-in-one/unified storage devices that consolidate storage features into a single device or family of devices. Many of these devices combine storage functionality such as primary and secondary storage, block and file protocols, auto-tiering, deduplication, thin provisioning, replication, and archive. Combining storage functionality into fewer and larger devices reduces storage environment complexity and administrative management overhead, thereby enabling IT administrators to focus on offering storage as a service. Some examples of unified storage devices are NetApp, Isilon (EMC), and SONAS (IBM).

In addition, storage administrators are selectively externalizing data to cloud storage providers. Storage administrators cannot keep up with storage growth. Not only are their data centers out of power and space, but they also lack the capital budget to purchase the expensive storage devices necessary to keep pace with storage consumption. Thus, these storage administrators are using operational budget to purchase external storage services for non-critical data, backup data, and application data associated with compute instances that have been hosted in a public IaaS cloud. For example, IT administrators are purchasing storage services such as Amazon's Simple Storage Service (S3) and Rackspace Cloud Files to store data associated with compute instances running in Elastic Compute Cloud (EC2) and Rackspace Cloud.
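As a concrete example of this kind of externalization, the short sketch below uses the open-source boto library to push a nightly backup archive to Amazon S3. The bucket and file names are hypothetical, and AWS credentials are assumed to be available in the environment.

    # Sketch: off-site a backup archive to Amazon S3 with the boto library.
    # Bucket and file names are hypothetical; credentials are read from the
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables.
    import boto

    conn = boto.connect_s3()
    bucket = conn.create_bucket("example-corp-backups")  # hypothetical bucket

    # Upload a nightly database dump as an S3 object.
    key = bucket.new_key("db/nightly-2011-03-04.dump.gz")
    key.set_contents_from_filename("/backups/nightly-2011-03-04.dump.gz")

    # Confirm the object exists before expiring the local copy.
    assert bucket.get_key("db/nightly-2011-03-04.dump.gz") is not None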

Also, providers and software vendors are attempting to address the I/O latency, integration, and replication issues between internal and external cloud storage. Appliances from small storage companies such as Nasuni, Cirtas Systems, and TwinStrata are emerging that cache storage locally and replicate to external cloud providers. These appliances effectively bridge internal block storage to external file or object storage. They also provide management capabilities that enable the storage administrator to set how and when data is migrated and to better manage the flow of data to external cloud providers. It is not far-fetched to imagine that the functionality in these appliances could — one day — be incorporated into the all-in-one storage devices from NetApp, EMC, or IBM.
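The core mechanism these appliances share is straightforward to sketch: serve reads and writes from a local cache, and replicate to the cloud tier asynchronously. The Python outline below shows only that idea; real gateways add encryption, block-level caching, snapshots, and cache eviction policies.

    # Conceptual sketch of a cloud storage gateway: local cache for fast I/O,
    # deferred replication to an external provider. Illustrative only.
    import os
    import shutil

    class CloudGateway(object):
        def __init__(self, cache_dir, replicate):
            self.cache_dir = cache_dir
            self.replicate = replicate  # callable(name, path) that uploads a file
            self.pending = []           # writes awaiting replication to the cloud

        def write(self, name, src_path):
            # The local write is the fast path; the cloud copy is deferred.
            cached = os.path.join(self.cache_dir, name)
            shutil.copyfile(src_path, cached)
            self.pending.append((name, cached))

        def read(self, name):
            # Reads are always served from the local cache.
            return os.path.join(self.cache_dir, name)

        def flush(self):
            # Replicate queued writes, e.g., with an S3 upload like the one above.
            while self.pending:
                name, path = self.pending.pop(0)
                self.replicate(name, path)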

Planning Considerations

When it comes to storage, IT organizations should strive for efficiency and efficacy. Thus, Gartner recommends that IT organizations plan for storage growth and deal with storage complexity by:

• Simplifying storage environments by reducing the number of vendors and consolidating storage: Tackling complexity requires storage administrators to decrease the number of devices and vendors in their environment. Although reducing the number of vendors can lead to lock-in, the lack of integration and diverse management of disparate devices is hindering the storage administrator from focusing on larger business goals (e.g., storage reduction). The best administrators will strike a balance between over-reduction of vendors — which can lead to lock-in — and having too many vendors, which can lead to storage perdition. Storage administrators should strive to have one primary vendor and another smaller vendor to keep the primary vendor honest in terms of price and features.

• Investing in multifunction storage devices: Supporting fewer devices requires strict governance to help application owners adjust to a smaller set of storage options. As storage devices incorporate more features, storage administrators will find that they can obtain most of the features in one device that previously required several devices. Thus, storage administrators should shy away from devices that are less inclusive of these features and that require extra overhead to manage. Storage administrators should only purchase single-function storage devices to fulfill a specific storage need (e.g., performance). Only add complexity to your storage environment to fulfill a specific business requirement.

• Investing in products that incorporate storage efficiency and reduction technologies: For example, solid-state disks, data deduplication, auto-tiering, and compression should be a primary consideration for any storage device purchase. Multifunction storage devices are the new storage platform. If IT organizations will entrust most of their storage capabilities to these platforms, they should ask these vendors how their storage devices will do two things: 1) help to build an internal storage cloud and 2) integrate and/or connect to the external cloud as a storage tier. (A teaching sketch of deduplication follows this list.)

• Considering cloud as part of their backup, DR, and data distribution plans: Before placing any data — primary or secondary — in the cloud, IT organizations must evaluate the criticality of the data and their risk tolerance to determine if the data can be placed into the external cloud. However, as the market matures in 2011, cloud storage can be an easy and effective means to protect both end-user and enterprise data. Some customers incorporate the use of cloud computing with storage as a warm-site DR option.
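Because data deduplication appears throughout these recommendations, a teaching sketch may help. The Python code below implements fixed-block, hash-based deduplication: each unique 4 KB chunk is stored once, and a per-file recipe of chunk hashes rebuilds the original data. Real arrays use variable-size chunking and collision-safe chunk stores, so treat this purely as an illustration of the idea.

    # Teaching sketch of fixed-block data deduplication.
    import hashlib

    CHUNK = 4096
    store = {}  # sha256 digest -> chunk bytes (each unique chunk stored once)

    def dedup_write(data):
        # Split data into fixed-size chunks, store unique ones, return recipe.
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        return recipe

    def dedup_read(recipe):
        # Rebuild the original data from the recipe of chunk hashes.
        return b"".join(store[d] for d in recipe)

    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # highly redundant content
    recipe = dedup_write(data)
    assert dedup_read(recipe) == data
    stored = sum(len(c) for c in store.values())
    print("logical KB: %d, stored KB: %d" % (len(data) // 1024, stored // 1024))

Running the sketch stores 8 KB for 16 KB of logical data, which is exactly the kind of capacity reduction the planning recommendation is after.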

Compute Trends

Servers are the foundation of data center operations. The size, shape, and functionality of servers directly affect the size, shape, and cost of the data center and IT services.

Market and Technology Trends

The trends in computing are directly related to and serve the overarching data center requirements to build internal clouds, support a mobile workforce, simplify and unify storage, enable greater workload mobility, and economize data center power and space. These trends are sparking a wave of innovation consisting of more powerful, denser, more agile, and tightly integrated server technology:

• Server competition is increasing: In 2009 and early 2010, two new server vendors entered the market: Cisco Systems (with the introduction of blades) and Oracle (through the Sun Microsystems acquisition). These vendors have continued to build their product portfolio and now stand ready to compete with HP, Dell, and IBM. While Sun and Cisco have a long way to go to capture significant market share, they have forced the market to create more tightly integrated stacks or “silos” of hardware and software. These silos differ in size and shape. Some integrated stacks are attempting to offer “IaaS in a box” that provide servers, networking, and storage hardware that is tightly integrated with virtualization and management software in a single solution. Other products, such as Oracle’s Exadata platform, offer a single integrated business application solution that consists of hardware, system software, and an application stack of a database, middleware, and multiple vertical applications.

In addition, independent hardware vendors (IHVs) are building servers to meet the market demands of public CSPs. IHVs realize that CSPs are consolidating compute capacity in the market. To survive in the post-cloud world, IHVs must offer CSPs a competitively priced compute platform. For example, Dell is offering "skinless" servers — a no-frills server platform with a reduced footprint and low price point.

• Blade server innovation is accelerating: IHVs continue to invest in blade server platforms. Today, blade servers come in all shapes and sizes and easily compete in CPU and memory capacity with most rack-based servers. In addition, IHVs have added I/O virtualization that enables both physical blade servers and VMs to move within one or more blade chassis while maintaining their I/O connectivity. Unfortunately, blade servers have two drawbacks. First, they create vendor lock-in. When customers buy a blade server chassis, they are committed to buying the chassis vendor's I/O modules, blade servers, and management. Second, the I/O module represents a network bottleneck out of the chassis. The blade servers in the chassis can drive more I/O than the I/O modules (IOMs; e.g., internal network switches) can deliver to the data center network (a worked example follows this list).

• x86 is overtaking RISC/Unix for most workloads: The latest Xeon 56xx, 65xx, and 77xx series processors — with integrated memory controllers — are powerful and fast enough to handle the workloads formerly run on RISC/Unix servers. Now that Windows Server and Red Hat Enterprise Linux no longer support Itanium, look for the next generation of x86 servers to replace many of the RISC/Unix platforms in data centers.

• LAN-on-motherboard (LOM) 10Gb CEE arrives: By the end of 2011, CEE LOM 10GbE will be a standard option on many x86 servers. These network interface cards (NICs) will drive more I/O to the edge of the network, causing IT organizations to rethink their data center network topology. In addition, CEE will enable IT organizations to unify network and storage traffic, thereby paving the way for Ethernet-based storage (Internet Small Computer Systems Interface [iSCSI] and Fibre Channel over Ethernet [FCoE]) to eventually overtake FC and increase workload mobility.
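The I/O module bottleneck noted above is easy to quantify. The sketch below computes the oversubscription ratio for a hypothetical 16-blade chassis; all figures are invented for illustration and should be replaced with the actual chassis specifications.

    # Quantifying the blade-chassis I/O bottleneck: blades can offer more
    # traffic than the IOMs can carry northbound. Figures are hypothetical.
    blades = 16                # blades in the chassis
    nic_gbps_per_blade = 10    # one 10GbE LOM port per blade
    iom_uplinks = 8            # IOM ports facing the data center network
    uplink_gbps = 10

    ingress = blades * nic_gbps_per_blade  # 160 Gb/s offered by the blades
    egress = iom_uplinks * uplink_gbps     # 80 Gb/s the IOMs can carry out

    print("chassis oversubscription: %.1f:1" % (ingress / float(egress)))

At 2:1 oversubscription, the chassis only becomes a problem when many blades drive their links hard at once, which is why the ratio matters most for I/O-heavy, highly virtualized workloads.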

Planning Considerations

For 2011, Gartner recommends IT organizations observe the following considerations for their computing needs:

• Purchase RISC/Unix platforms only when necessary: The latest Xeon processors and newest x86 server platforms have enough performance and availability to handle most of today's computing needs. x86 should be the default purchasing position for all new workloads. The only applications that should remain on RISC/Unix platforms are those that require superior transaction processing that multicore, multi-socket x86 servers cannot deliver (see Transaction Processing Performance Council [TPC] benchmarks), that require less than five minutes of downtime per year, or that cannot move due to legacy implementations. It's also worth noting that most IaaS CSPs are x86 based. Thus, any non-x86 application that is a potential target for cloud computing must be converted to an x86 platform before it can be moved to most public CSPs.

• Buy blades for density and innovation; buy rack-based servers for flexibility: Most of the innovation in servers today is in x86 blade technology. Blade servers deliver the most CPU power per rack unit, unique I/O virtualization technology (e.g., virtual connect), and more scalability than previous blade generations. However, to take advantage of the cost savings and I/O technology blades have to offer, IT organizations must lock into a single vendor’s solution, including chassis design, blades, and I/O technology. Given the expense and longevity of chassis, blade servers can be a 10-year commitment. If an organization is unsure whether to invest in a single technology for a decade, rack servers offer more flexibility: they are more easily replaced than blade servers, and the investment does not last as long.

• Plan for 10GbE at the network’s edge: LOM CEE 10GbE NICs will be standard features on the next generation of x86 servers. As 10GbE makes its way to the network edge, IT organizations must update top-of-rack and edge switches to accommodate the move from 1GbE to 10GbE. Also, because the LOM NICs will support CEE, unified networks will become a real possibility, and IT organizations must decide whether to run FCoE or iSCSI storage traffic alongside communication traffic on the same network. As more storage devices natively support FCoE and iSCSI as a target, the pressure to consolidate communication and storage traffic onto one network will increase.

High-Availability and Disaster Recovery Trends
Disaster recovery (DR) and high availability (HA) will be important focuses for organizations planning to build an internal cloud in 2011.

Market and Technology Trends
For years, HA and DR were afterthoughts in the data center. Many executives and CIOs consider investments in business continuity to be the technological equivalent of buying insurance. DR in particular can be a costly proposition because it often requires redundant capital expenditures such as data center space, servers, networking, and storage. Today, organizations are seeking options to improve business continuity without spending large capital resources, but they are finding few viable options outside of their own data center.

These are the trends in DR that will drive data center change in 2011:

• Insourcing DR for large enterprises: For many years, outsourcing DR was a popular option. Organizations leveraged the experience and resiliency of outsourcing providers to protect data and increase business continuity. But as application requirements changed, IT organizations discovered that outsourcers’ recovery capabilities — especially tape solutions — could not meet RTOs, and replicating data to the outsourcing provider was cost prohibitive. As a result, many IT organizations are bringing DR back in-house, employing a combination of co-location providers and existing data center capacity to meet DR objectives. For example, an organization might replicate data to an out-of-region internal data center to recover from a region-wide disaster, but also synchronously replicate data to a local co-location facility to meet recovery point objectives (RPOs) in the event of a single data center outage.

• Virtualization as a warm-site recovery option: Virtualization’s ability to encapsulate an entire application stack makes it a perfect DR vehicle. Many IT organizations replicate VMs and data to the recovery site so that — in the event of a disaster — they can initiate a set of VMs and quickly recover. Many IT organizations have built scripts that automate recovery. Vendor solutions, such as VMware’s Site Recovery Manager, can also help IT organizations test and automate site recovery. However, VM-based disaster recovery software is immature and lacks functionality such as failback and the ability to discover changes made within the VM.

• Integration with public cloud computing as a possible DR alternative: Public cloud computing is becoming an increasingly popular warm-site disaster recovery option for organizations that cannot afford or are unwilling to build a redundant data center or house additional equipment at a co-location facility. Public cloud computing’s on-demand, PAYG business model enables organizations to rapidly spin up applications in the event of a disaster without the cost of buying IT infrastructure. However, off-site DR requires application integration and data replication to achieve reasonable recovery point and time objectives, and many organizations may find that providers’ ingress and egress data transfer fees make recovery of large datasets too expensive (the sketch after this list puts rough numbers on both trade-offs). In addition, many vendors are building hybrid cloud solutions that integrate public and private clouds. These solutions automatically replicate data using encrypted tunnels, thereby reducing RPOs while protecting data in flight.
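
The following back-of-the-envelope sketch quantifies the two trade-offs raised in the bullets above: whether a WAN link can sustain asynchronous replication at all, and what a full restore from a cloud provider might cost. The change rate, link speed, dataset size, and per-GB price are all hypothetical inputs; real providers price ingress and egress differently, so substitute actual figures before drawing conclusions.

# Back-of-the-envelope DR math. All inputs are hypothetical examples.

def hours_to_ship(daily_change_gb: float, wan_mbps: float) -> float:
    """Hours needed to transmit one day's changed data over the WAN link.
    If this exceeds 24, the link cannot keep up and the RPO grows unbounded."""
    megabits = daily_change_gb * 8_000   # decimal GB -> megabits
    return megabits / wan_mbps / 3600

def transfer_cost_usd(dataset_gb: float, per_gb_usd: float) -> float:
    """Fee to move a full dataset into or out of a provider at a per-GB rate."""
    return dataset_gb * per_gb_usd

# Example: 500 GB changes daily over a 100 Mbps link; a 20 TB restore at a
# hypothetical $0.15/GB transfer rate.
print(f"Time to ship one day's changes: {hours_to_ship(500, 100):.1f} hours")
print(f"Full restore transfer fee: ${transfer_cost_usd(20_000, 0.15):,.0f}")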

There are few new trends in HA. Most physical and long-distance HA solutions still lack key functionality needed for enterprise deployment. For example, stretch clusters are not ready for long-distance failovers because of the network performance requirements of the cluster heartbeat. Likewise, physical HA solutions such as those from Stratus and Marathon remain niche due to implementation expense, though they are slowly growing as a viable HA alternative. Thus, until these solutions mature, organizations will build HA into their IT solutions in two ways:

• They will rely on availability built within their server and storage hardware (e.g., redundant power supplies, fans, and redundant array of independent disks [RAID]), operating system-based clustering, or virtualization HA software.

• They will build availability into the application by rebuilding or rewriting the application to be more fault tolerant or resilient. For example, an IT organization might choose to enable greater availability by rebuilding an application to be multi-threaded (and use the symmetric multi-processing capabilities of the platform), multi-tenant (using the fault tolerance of the application platform), or stateless (allowing global load balancing; a minimal example follows this list). However, this is a cost-versus-benefit choice. Rebuilding an application requires time and resources. IT organizations must decide if the criticality of the application is worth the effort to rebuild it for higher availability.
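
As a minimal illustration of the stateless rebuild mentioned above, the sketch below externalizes session state to a shared store so that any instance behind a global load balancer can serve any request. The dict-backed store is a stand-in for a real replicated cache or database; everything here is a hypothetical example, not any product’s API.

# Stateless request handling: session state lives in a shared store, not in
# server memory, so any instance can serve any request.

class SharedStateStore:
    """Stand-in for an external, replicated state store (cache or database)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

def handle_request(store: SharedStateStore, session_id: str, item: str) -> list:
    """Any instance can run this handler; no local state survives the call."""
    cart = store.get(session_id) or []
    cart.append(item)
    store.put(session_id, cart)
    return cart

store = SharedStateStore()
handle_request(store, "session-42", "widget")          # served by instance A
print(handle_request(store, "session-42", "gadget"))   # instance B sees the cart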

Planning Considerations
As organizations architect and build their internal and hybrid clouds, they have the opportunity to reassess their resiliency and recovery options for 2011:

• Review business continuity plans and DR plans annually: Gartner frequently recommends that IT organizations routinely review and practice their business continuity and DR plans as part of a sound IT practice. However, in 2011, reassessing the organization’s business continuity and DR plan is more important than ever because of a significant increase in DR options and technology. Virtualization, WAN acceleration, storage replication, insourcing, co-location, and cloud computing are all affecting how organizations architect DR and business continuity.

• Use virtualization for warm-site recovery and insource DR when possible: For IT organizations that have excess data center capacity or the means to use co-location facilities, insourcing DR may provide faster recovery than an outsourcing provider’s tape solution and cost less than replicating data to an outsourcing provider’s data center. Also, for organizations that are insourcing DR, virtualization provides the best means to execute a warm-site recovery. Virtualization’s encapsulation makes it easier for organizations to replicate applications and data to a remote site and initiate a quick recovery. Virtualization site recovery manager software may further increase recoverability, if the organization can accept the software’s inability to fail back and the added configuration effort.

• Incorporate cloud computing thinking into your DR and resiliency plans: Cloud computing’s ability to quickly provision compute instances, its capacity to store large amounts of data, and its PAYG model make it an attractive option for warm-site recovery. However, depending on the dataset size and the CSP’s data transfer fees, replicating recovery data to the provider may be cost prohibitive. In addition, some applications may have security requirements that prevent hosting in the cloud. IT organizations should determine the business criticality of the application and data before using a CSP for DR.

Also, IT organizations should consider the resiliency and DR of the IT assets already in the public cloud. Many providers offer add-on services that can increase resiliency and recoverability, but many IT organizations, because cloud computing is new to them, have not yet mastered the art of protecting applications and data in the cloud. Also, using a single cloud provider aggregates risk, so many organizations need to consider options such as using multiple providers to disaggregate risk or creating an exit plan to move IT assets in case the provider has a service failure (e.g., pull the application back in-house). Rather than build a separate DR and resiliency plan, Gartner recommends that IT organizations incorporate cloud thinking into their existing plans. The guidance document “Managing Availability and Performance Risks in the Cloud: Expect the Unexpected” has more detailed information on availability and performance planning in the cloud.


Finally, IT organizations should investigate hybrid cloud software as a method to automate DR. Using the external cloud for DR is a good idea, but it requires IT organizations to migrate applications that have large datasets, specific security requirements, or dependencies on application services that cannot be hosted by the provider. Recognizing this dilemma, many vendors have begun to build hybrid cloud solutions that accelerate and replicate data asynchronously, transmit data over encrypted tunnels, and redirect application requests to services in various locations. For example, Citrix’s Cloud Bridge provides an encrypted and accelerated tunnel to transmit data between internal and external clouds. These solutions are not yet ready for prime time, but they bear watching in 2011.

• Continue to build resiliency and availability inside the data center for critical applications and data: Until the external cloud matures, the internal cloud is where many enterprise IT organizations will keep their critical applications and data. Therefore, IT organizations need to build and maintain an IT infrastructure that can meet their applications’ resiliency requirements. This can be difficult for two reasons. First, organizations build internal clouds on general-purpose infrastructure; internal clouds serve the IT needs of the many. Thus, IT organizations may need to segment their internal cloud according to availability and resiliency service levels. This also requires IT organizations to manage workloads based on their recovery point and time objectives, placing each workload on the infrastructure with the correct quality of service (a minimal placement sketch follows this list). As always, application owners and IT operations will need to work together to ensure the application’s resiliency, but they may also need to encode resiliency requirements in cloud orchestration software so that the internal cloud can automatically place applications on the correct infrastructure. Second, many of the physical and virtual HA software options from vendors such as Marathon Technologies and Stratus Technologies are niche or lack maturity, so the only option IT organizations have is to use the HA features of the guest operating system or hardware. While these options are acceptable in many scenarios, some applications may simply require a different level of availability. In that case, organizations are forced to employ separate, expensive, more resilient IT infrastructure.

• Choose HA solutions or build HA into the application rather than using fault-tolerant software: Until the fault-tolerant (FT) software market matures, IT organizations should choose HA services from SaaS vendors or build higher availability by using hardware technology or rebuilding applications for better resiliency.
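
The following sketch shows the kind of RPO/RTO-driven placement logic that internal cloud orchestration might apply, as referenced in the list above. The tier names and thresholds are hypothetical assumptions; shipping orchestration products express this as policy configuration rather than code.

# Hypothetical RPO/RTO-driven workload placement across infrastructure tiers.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    rto_minutes: int   # tolerable downtime
    rpo_minutes: int   # tolerable data loss

# Infrastructure pools, ordered from most to least resilient (and expensive).
POOLS = [
    # (pool name, max RTO it can honor, max RPO it can honor)
    ("fault-tolerant", 5, 0),                 # FT hardware, synchronous replication
    ("ha-cluster", 60, 15),                   # clustered hosts, async replication
    ("general-purpose", 24 * 60, 24 * 60),    # nightly backup only
]

def place(workload: Workload) -> str:
    """Return the cheapest pool whose service level still meets the objectives."""
    for pool, max_rto, max_rpo in reversed(POOLS):
        if workload.rto_minutes >= max_rto and workload.rpo_minutes >= max_rpo:
            return pool
    return POOLS[0][0]   # nothing cheaper qualifies; use the top tier

print(place(Workload("payroll", rto_minutes=10, rpo_minutes=0)))             # fault-tolerant
print(place(Workload("intranet wiki", rto_minutes=1440, rpo_minutes=1440)))  # general-purpose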

SETTING PRIORITIES
In 2011, IT organizations will have three potentially conflicting directives:

• Compete with the rapid provisioning and PAYG cost model of public cloud providers

• Build a resilient and redundant IT infrastructure necessary to house critical applications and sensitive data that cannot be hosted in the external cloud

• Support an increasingly remote set of users with a disparate set of requirements while reducing support costs

To achieve these goals, IT organizations must set their priorities carefully and strategize how to meet these requirements now while architecting the dynamic, service-oriented data center of tomorrow.

Radically optimize and economize the IT infrastructure: In order to compete with external providers, IT organizations must optimize their IT infrastructure and operations; they must justify every expenditure because all IT overhead increases the cost charged back to the customer. IT must take steps to reduce costs wherever possible (including reducing unnecessary or underutilized infrastructure), increase workload mobility and management automation, enable self-service, and reduce unnecessary IT processes. To accomplish these goals, IT organizations must do several things:

• Virtualize as many applications as possible: Virtualization forms the backbone of workload mobility, service automation, and internal clouds.

• Consolidate storage: This is accomplished by using multifunction devices and data efficiency technology such as data deduplication (a minimal illustration of deduplication follows this list).

• Standardize on x86 server platforms to reduce infrastructure costs and increase application mobility: Higher-performing server platforms should be used only when the application demands higher performance and resiliency than x86 platforms can deliver.
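
To show why deduplication saves capacity, here is a minimal sketch of hash-based, fixed-size chunk deduplication, as referenced in the list above: identical chunks are stored once and referenced by content hash. Production arrays use variable-length chunking and far more robust metadata, so treat this strictly as an illustration of the principle.

# Minimal fixed-size chunk deduplication: unique chunks stored once by hash.

import hashlib

CHUNK_SIZE = 4096

def dedupe(data: bytes, store: dict) -> list:
    """Split data into fixed chunks; store each unique chunk exactly once."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # write only if unseen
        recipe.append(digest)
    return recipe   # the ordered hash list reconstructs the original

store = {}
vm_image_a = b"boot" * 4096 + b"app-a" * 2048
vm_image_b = b"boot" * 4096 + b"app-b" * 2048   # shares the same boot blocks
dedupe(vm_image_a, store)
dedupe(vm_image_b, store)
raw = len(vm_image_a) + len(vm_image_b)
stored = sum(len(c) for c in store.values())
print(f"Raw: {raw} bytes; stored after dedup: {stored} bytes")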

Build an internal private cloud: Once the IT infrastructure is optimized and virtualized, the organization can begin to build an internal cloud. Building an internal cloud will accomplish two goals. First, an internal cloud will satisfy the desire of internal customers to consume IT resources as a service. In doing so, the internal customer can more rapidly provision IT resources and better predict IT costs as they relate to creating a product, providing a service, or meeting whatever the customer’s business objective may be. Second, it positions the IT organization as the proxy to the external cloud. If the internal cloud is the internal customer’s IT service interface, the IT organization can redirect or augment internal capacity to the external cloud. This enables the IT organization to reduce unauthorized use of external cloud services and manage which providers are approved for cloud consumption.
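
A minimal sketch of this broker pattern follows: provisioning requests land on the internal cloud first and burst to an approved external provider only when internal capacity runs out. The provider names, capacity figures, and selection policy are hypothetical assumptions for illustration.

# Hypothetical "IT as broker" placement: internal first, approved CSP on burst.

APPROVED_PROVIDERS = ["provider-a", "provider-b"]   # vetted external CSPs

class InternalCloud:
    def __init__(self, capacity_vms: int):
        self.free = capacity_vms
    def provision(self, vms: int) -> bool:
        if vms <= self.free:
            self.free -= vms
            return True
        return False

def broker(request_vms: int, internal: InternalCloud) -> str:
    """Place internally when possible; otherwise burst to an approved CSP."""
    if internal.provision(request_vms):
        return "internal cloud"
    return APPROVED_PROVIDERS[0]   # policy selects among approved providers

cloud = InternalCloud(capacity_vms=100)
print(broker(80, cloud))   # internal cloud
print(broker(50, cloud))   # bursts externally: only 20 VMs remain internally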

Transforming a data center into a highly optimized internal cloud is a long journey. Although vendors are inventing solutions that can enable IT organizations to build internal clouds, selecting the best vendor solution and integrating the pieces is the IT organization’s responsibility. Nevertheless, IT organizations need to build an internal cloud with a sense of urgency because business managers and internal customers will continue to pressure IT to compete with the rapid provisioning and PAYG consumption model of public providers.


Take a user-centric approach to client virtualization: When supporting a large set of users with diverse application and connectivity needs, no one client solution fits all. As they weigh the cost of supporting users, IT organizations must resist the temptation to force users into a small set of solutions. Savvy IT organizations will assess the needs of each user and select the best client virtualization technology. For example, call center users may be able to use thin clients and server-hosted desktops, but developers may require fully equipped laptops.

Reassess business continuity plans and incorporate cloud computing: Internal clouds are the default location for critical applications and data. IT organizations can differentiate their IT services from those offered by public cloud providers by offering internal customers a less risky, more resilient IT service option. Thus, IT organizations need to invest in data center HA and recoverability internally. However, as cloud computing matures, IT organizations can use the cloud as a potential DR option. The difference: by acting as the IT service broker, the IT organization can decide which applications and data belong in the internal cloud and which belong in the external cloud, thereby preserving resiliency and recoverability for critical apps and data.

Investigate hybrid clouds to augment internal capacity: Hybrid clouds will emerge in 2011. As IT budgets are squeezed, IT organizations may be forced to use the external cloud. Hybrid clouds can ease the migration and integration between private and public clouds and bridge critical application services that cannot be moved to the public cloud. In 2011, IT organizations should investigate hybrid cloud software, but they should not place these solutions into production until they mature.

NOTES

1 As Steve Ballmer said at the Gartner Symposium 2010 in Orlando, people “want the same things at work that they have at home.” Whether Microsoft has a lock on that hybrid experience in a world dominated by user-friendly Apple devices is another question.

Acronym Key and Glossary Terms

10GbE 10 Gigabit Ethernet

CEE Converged Enhanced Ethernet

CHD cardholder data

CHVD client-hosted virtual desktop

CSP cloud services provider

DR disaster recovery

FCoE Fibre Channel over Ethernet

HA high availability

IaaS infrastructure as a service

IAM identity and access management

IHV independent hardware vendor

iSCSI Internet Small Computer Systems Interface

ISV independent software vendor

LOM LAN on motherboard

NIC network interface card

PaaS platform as a service

PAYG pay as you go

RAID redundant array of independent disks

RISC reduced instruction set computer

RPO recovery point objective

RTO recovery time objective

S3 Simple Storage Service

SaaS software as a service

SHVD server-hosted virtual desktop

SONAS Scale Out Network Attached Storage

SPC Storage Performance Council

VM virtual machine

I/O input/output