TWELVE COMPANIES CHANGING THE DATA CENTRE

by

IT consultant Chris M. Evans from the Architecting IT website

A Special Report

April 2015

Every year sees the emergence of new technologies that disrupt the status quo of the enterprise data centre. Disruption implies changes to existing practices, so a new technology needs to deliver significant cost or operational benefits in order to be adopted. As 2015 unfolds, we are seeing a number of new technologies with significant disruption potential.

These include:

All-Flash & Hybrid Storage Arrays – shared storage based either solely on NAND flash technology or using flash to deliver performance boosts at the price of spinning media

Hyper-convergence – the melding of storage and compute into a single form factor

Containers – thin and efficient application deployment

Cloud Storage – solutions for moving data efficiently into public clouds

Many of these solutions come from start-ups looking to challenge the established players in the market. For some, the aim is to reach IPO or be acquired; others are hoping to be the next NetApp or EMC and have no desire to be subsumed into a larger mother ship. In this special report for IDG Connect, Chris M. Evans discusses some of the companies delivering disruptive technology and analyses the likely future state of the enterprise data centre.

Introduction

The last 18 months have seen huge interest in the use of flash in the data centre. The reasons are pretty clear: the continued evolution of Moore’s Law and its relationship to processor performance means hard drives don’t have the “IOPS density” required for modern application workloads.

To understand this we need to take a step back and define what IOPS density means. Hard disk drives (or HDDs) have limited I/O capability, making them good for sequential workloads and poor at completely random ones. This arises from the mechanical nature of spinning disks, which access data by moving a read/write head over a platter in a similar fashion to a vinyl record or compact disc. For random data, response times are typically 4-8ms per I/O.

The growth in capacity of hard drives has not been met with a similar growth in performance. Capacity growth has been exponential whereas performance growth has been at best linear. As recording techniques start to reach the limits of physics, new techniques such as Shingled Magnetic Recording (SMR) mean there is a risk of a performance decline in hard drives of the future.

In contrast, flash drives (built from NAND flash storage) have a consistent and fast I/O response time for either sequential or random I/O that is an order of magnitude better than HDDs, while delivering tens of thousands of IOPS (I/Os per second). Storage array vendors have been quick to take advantage of this new medium and build products that are either enhanced by flash (hybrid systems) or designed specifically for it (all-flash arrays).
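
To put “IOPS density” into numbers, the short calculation below compares a representative nearline hard drive with a representative enterprise SSD. The device figures are illustrative assumptions rather than vendor specifications quoted in this report.

```python
# Illustrative comparison of "IOPS density" (IOPS per GB of capacity).
# The device figures are rough, assumed ballpark values for a 7,200rpm
# nearline hard drive and an enterprise SSD circa 2015, not quoted specs.

def iops_density(iops, capacity_gb):
    """Return the IOPS available per GB of usable capacity."""
    return iops / capacity_gb

hdd_iops, hdd_capacity_gb = 150, 4000      # ~4TB nearline HDD
ssd_iops, ssd_capacity_gb = 50000, 800     # ~800GB enterprise SSD

print(f"HDD: {iops_density(hdd_iops, hdd_capacity_gb):.3f} IOPS/GB")
print(f"SSD: {iops_density(ssd_iops, ssd_capacity_gb):.1f} IOPS/GB")
# As drive capacities grow while per-drive IOPS stay flat, the HDD figure
# keeps falling - the "IOPS density" problem described above.
```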

Storage

In 2012 EMC Corporation acquired XtremIO, an Israeli technology start-up, for a reported $430m. The acquisition was slightly unusual as at that time XtremIO had no generally available products or customers. EMC clearly saw the potential in the XtremIO all-flash platform and made an early pre-emptive strike before the company became more expensive to acquire.

EMC has a habit of buying companies at an early stage, the most obvious examples being CLARiiON (now VNX), Isilon and Data Domain. From there, the company looks to develop its acquisitions into new market propositions, a strategy that has done well for EMC over the years.

XtremIO provides EMC access to the all-flash market. This is not a new area for the company to move into; the existing VMAX and VNX product ranges can take flash drives and there is an all-flash VNX system. However, architecture matters in all-flash designs, and legacy platforms were never engineered to accommodate high-performance drives; in fact, quite the opposite – much of the logic in platforms like VMAX is dedicated to managing the relative sluggishness of hard drives compared to system memory. EMC therefore acquired XtremIO to meet the high-performance demands that disk-based platforms can’t satisfy: consistent low latency with high throughput.

EMC now has a platform with which to address these high-performance requirements and a highly motivated workforce incentivised to get the product out to customers. At first glance it may seem that EMC is simply taking market share away from its existing VMAX platform. That may be true, but this would have been business EMC was losing anyway to competition from other all-flash vendors. This is where we see disruption in the data centre coming in: all-flash solutions will take over the tier-1 storage performance requirements in the data centre, steadily relegating disk-based systems to tier-2 and below.

EMC is quite bullish about the success of XtremIO sales. Fourth-quarter 2014 earnings show a run rate of $300m for the quarter, doubling the previous quarterly figures, with more than 130 customers buying systems costing $1m or more. The company has been quick to rush out new software releases (three within 18 months and a fourth expected soon), mainly to address the feature gap seen in many early-generation flash products. This strategy hasn’t always worked well; customers moving from release 2.4 to 3.0 of the XtremIO XIOS software were met with a disruptive and destructive upgrade path.

Pure Storage is one of a group of start-up companies looking to challenge the likes of EMC in their dominance of the high-end storage market. The company has an ethos based on the use of flash for all storage requirements in the data centre rather than just for niche high-end applications. To that end, Pure Storage isn’t the fastest product on the market, but does offer consistent I/O performance around the millisecond response time mark.

Pure was founded in 2009, bringing its first products to market in 2011. The technology is based on a traditional dual controller design but has been architected to cater for the characteristics and benefits of flash. This has meant re-writing the implementation of data protection with a proprietary RAID-3D design, dispensing with caching at the controller (reducing the impact of controller failure) and developing a range of data optimisation features around compression and de-duplication. The company has been quite forward in demonstrating the benefits from these space reduction technologies and displays a ticker on the homepage of its website showing figures obtained from actual customer systems.
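
As a rough illustration of how inline de-duplication and compression deliver these space savings, the sketch below hashes fixed-size blocks and stores each unique block only once, compressed. It is a generic example of the technique, not Pure Storage’s implementation.

```python
# Generic illustration of inline block de-duplication plus compression:
# each unique block is stored once (keyed by a content hash) and
# compressed. This shows the basic idea only, not Pure Storage's code.
import hashlib
import zlib

BLOCK_SIZE = 4096            # assumed fixed block size for the example
store = {}                   # content hash -> compressed block

def write_block(data):
    """Store a block, de-duplicating by content hash; return its key."""
    key = hashlib.sha256(data).hexdigest()
    if key not in store:                     # only new content is stored
        store[key] = zlib.compress(data)
    return key

# Write 1,024 copies of the same 4KB block: logical 4MB, one block stored.
keys = [write_block(b"A" * BLOCK_SIZE) for _ in range(1024)]
print(f"{len(keys)} blocks written, {len(store)} unique block(s) stored")
```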

Pure has been clear that the market segment it is looking to attack is the tier-1 performance applications that currently sit on high-end disk systems from the likes of EMC and Hitachi. This is a somewhat differentiated approach from the initial players in the all-flash market, who developed solutions based on raw performance. Instead, Pure is focused on good, consistent (but not necessarily lightning-fast) performance at a price comparable to existing all-disk systems. Moving to all-flash also significantly reduces hardware footprint and saves on power and cooling costs. Pure’s market strategy is certainly a good position for today, but could quickly be disrupted as the flash market matures and flash prices drop, making the technology more affordable. It’s likely that Pure will bring new products to market some time in 2015.

SolidFire is another start-up company using flash to deliver disruptive storage products. The SolidFire SF platform is a scale-out, ‘shared nothing’ architecture that scales from three to hundreds of nodes in a single cluster. Each node is based on a 1U commodity server with 10 2.5-inch drives of varying capacity (from 240GB to 960GB), all of which can be mixed and matched within a single cluster configuration.

The shared nothing architecture of the SolidFire design allows nodes to be added and removed from the configuration non-disruptively, either for system expansion (or contraction), to replace failed devices or as part of a rolling platform upgrade. The only real negative of the scale-out design is the limited ability to support Fibre Channel deployments, which are a staple of large enterprise environments. SolidFire works around this problem by delivering a dedicated FC node that doesn’t house any storage capacity.

The SolidFire architecture has two other features that make it disruptive in the data centre of the future. Quality of Service (QoS) enables individual storage volumes to be assigned performance constraints by setting IOPS and throughput limits. This helps to eliminate the so-called “noisy neighbour” problem where one host on a shared system consumes system resources to the detriment of another. QoS also provides “multi-tenant” capabilities, which are becoming more important in service-defined IT delivery for public and private clouds.

The second feature is API-driven management. SolidFire systems are managed entirely through a REST-based API (including the GUI software provided with the system). This means the platform can be easily integrated into scale-out architectures such as OpenStack, with little additional coding.
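
The sketch below shows the style of automation an API-managed array makes possible, for example applying QoS limits to a volume from a script. The endpoint, payload fields and credentials are hypothetical placeholders, not the actual SolidFire API; the vendor’s API reference should be consulted for real calls.

```python
# Hypothetical sketch of driving an API-managed array from automation code.
# The endpoint, payload fields and credentials are invented for
# illustration; they are not the actual SolidFire API calls.
import requests

ARRAY = "https://array.example.com/api/v1"   # hypothetical management endpoint
AUTH = ("admin", "password")                 # placeholder credentials

def set_volume_qos(volume_id, min_iops, max_iops):
    """Apply per-volume QoS limits so one tenant cannot starve another."""
    payload = {"minIOPS": min_iops, "maxIOPS": max_iops}
    resp = requests.put(f"{ARRAY}/volumes/{volume_id}/qos", json=payload, auth=AUTH)
    resp.raise_for_status()

# e.g. guarantee 500 IOPS to a tenant's volume while capping it at 5,000
set_volume_qos(volume_id=42, min_iops=500, max_iops=5000)
```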

SolidFire started by targeting service providers, who were a natural fit for the product with their requirements around multi-tenant automated storage deployments. As we see the data centre evolve, private data centres will be internal service providers to their own lines of business: exactly what SolidFire is delivering.

Despite claims from some all-flash vendors that they offer flash products for the price of disk, all-flash systems can still be seen as something of a luxury and hard to justify for all application requirements. In many cases performance needs can be met by delivering more throughput rather than low latency. This has provided an opportunity for the development of hybrid storage solutions. Hybrid arrays use flash to improve the performance of hard drives in more effective ways than as a simple storage tier. One such example is Nimble Storage and its CS series of appliances.

CS series arrays use hard drives as the primary data repository. Flash is used as a cache layer to manage and improve read I/O. Surprisingly, flash isn’t used to accelerate write I/O; instead, CS arrays use NVRAM to coalesce write requests, which are subsequently committed to disk in a sequential write operation. Commonly accessed data is cached in flash, as is a certain subset of write I/O, as determined by Nimble’s Adaptive Flash algorithms. The result is accelerated I/O from a modest deployment of flash – around 10 per cent of overall system capacity.
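
Conceptually, the data path looks something like the sketch below: reads are served from a flash-style cache where possible, while writes are acknowledged from a buffer and later flushed to disk in one sequential pass. This is a simplified illustration of the general pattern, not Nimble’s actual software.

```python
# Conceptual sketch of a hybrid array's data path: a read cache in front of
# slower disk, plus a write buffer that coalesces random writes into one
# sequential flush. This is a simplified illustration, not Nimble's software.

class HybridStore:
    def __init__(self):
        self.disk = {}          # block id -> data (the capacity tier)
        self.read_cache = {}    # recently read blocks (the flash role)
        self.write_buffer = {}  # pending writes (the NVRAM role)

    def read(self, block_id):
        if block_id in self.write_buffer:      # newest data wins
            return self.write_buffer[block_id]
        if block_id in self.read_cache:        # cache hit: fast path
            return self.read_cache[block_id]
        data = self.disk.get(block_id)         # cache miss: slow path
        self.read_cache[block_id] = data       # populate the cache
        return data

    def write(self, block_id, data):
        self.write_buffer[block_id] = data     # acknowledged from the buffer

    def flush(self):
        """Commit buffered writes to disk in a single sequential pass."""
        for block_id in sorted(self.write_buffer):
            self.disk[block_id] = self.write_buffer[block_id]
        self.write_buffer.clear()

store = HybridStore()
store.write(7, b"hello")
store.flush()
print(store.read(7))    # first read misses the cache; repeats would hit it
```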

Nimble has been a public company since IPO in December 2013. Sales continue to grow, although the company also continues to make a loss. The most recently released figures show revenue rising 77 per cent, with products delivered to over 4,300 customers.

Hyper-Convergence

Two years ago, no-one had heard of the term ‘hyper-converged’; now it’s one of the fastest growing technologies in the data centre. Converged infrastructure (a term coined by HP and perhaps personified by VCE) has been around for some time, representing the packaging of servers, storage and networking into a single “compute stack”. Hyper-convergence takes things a step further. These systems collapse compute and storage into a single physical appliance, removing the need to deploy and manage separate servers and storage appliances. The watchword of hyper-convergence is simplification: systems can now be deployed and operated by IT generalists who know virtualisation but don’t need to be experts in storage platforms. Simplification should result in cost savings, and as the number of IT systems deployed continues to grow, this is an important consideration. There are four key players in this part of the industry: Nutanix, SimpliVity, Scale Computing and VMware.

Nutanix was founded in 2009 with a mantra of eliminating the SAN long before the term hyper-converged was even coined. Their platform brings together storage in the form of HDDs (hard disk drives) and flash to create a distributed file system across all nodes that de-duplicates and compresses data for optimal efficiency using MapReduce software. The Nutanix Distributed File System operates in a similar way to the Google File System, placing content into storage pools and containers and using a modified version of the open-source Apache Cassandra database for metadata storage.
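
As a generic illustration of how a shared-nothing cluster can decide where data lives and record that decision in a metadata store, consider the sketch below. The placement scheme and node names are invented for illustration; this is not Nutanix’s NDFS logic.

```python
# Generic sketch of how a shared-nothing cluster might place data across
# nodes and track it in a metadata store. Node names and the placement
# scheme are invented for illustration; this is not Nutanix's NDFS logic.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2
metadata = {}    # stands in for the cluster's distributed metadata store

def place(block_id):
    """Choose REPLICAS nodes for a block by hashing its identifier."""
    start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(NODES)
    owners = [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]
    metadata[block_id] = owners        # record where the block lives
    return owners

print(place("vm-disk-01/block-000123"))    # two nodes hold this block
```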

A common problem for hyper-converged solutions is the lock-in of unused resources on each node. At some point a node runs out of either storage or compute power, but is unlikely to run out of both at the same time. Nutanix has countered this issue by providing a range of hardware solutions that scale from single chassis four-node configurations to an all-flash appliance. This flexibility allows customers to deploy new nodes to specifically match requirements, whether that is for storage capacity, storage performance or compute.

As one of the initial players in this space, Nutanix has the benefit of being a first mover, which can in some instances be a poisoned chalice. However, the company has been quick to expand its ecosystem rather than relying on a single hypervisor solution, providing support for Hyper-V and KVM. Management features have been expanded to support its new mantra of “web-scale computing”, including centralised management through the PRISM platform and the ability to drive systems using REST-based APIs.

In August 2014, Nutanix completed an “E” round of investment at $140m, with total investment in the company standing at $312m. This war chest has been used to scale out globally and beef up the marketing message prior to a potential IPO this year. The message is clearly getting through – February 2015 figures reveal an annualised run rate of $300m and around 1,200 customers.

SimpliVity is another start-up focused on the hyper-converged market and was also founded in 2009 by ex-Diligent Technologies founder Doron Kempel. Diligent (a backup deduplication platform) was successfully sold to IBM in 2008.

Hyper-converged solutions focus on delivering efficiencies through software, but SimpliVity has taken a slightly different approach and developed a dedicated PCIe adaptor card for its OmniCube platform to offload the more intensive data management functions such as compression and data de-duplication. The ability to de-duplicate data globally (a lesson clearly learned from the Diligent days) allows multiple OmniCube servers to be combined into a cluster that can be geographically dispersed for resiliency. De-duplicated virtual machines have much greater flexibility in terms of mobility, as VMs built from a common master image only require the updated data to be shipped across the wire from one location to another.
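
The benefit of shipping only updated data can be made concrete: if both sites index blocks by content hash, replication only needs to transfer the blocks the remote site doesn’t already hold. The sketch below illustrates the idea in general terms; it is not SimpliVity’s replication protocol.

```python
# Generic sketch of de-duplication-aware replication: send only blocks whose
# content hashes the remote site does not already have. An illustration of
# the idea only, not SimpliVity's actual replication protocol.
import hashlib

def block_hash(data):
    return hashlib.sha256(data).hexdigest()

def replicate(local_blocks, remote_store):
    """remote_store maps hash -> block; only missing blocks cross the wire."""
    sent = 0
    for data in local_blocks:
        h = block_hash(data)
        if h not in remote_store:      # remote already holds common blocks
            remote_store[h] = data     # only changed data is transferred
            sent += 1
    return sent

golden_image = [b"base-os-block"] * 1000          # shared master image
vm = golden_image + [b"app-config", b"app-data"]  # a VM adds a little data

remote = {}
replicate(golden_image, remote)                   # seed the remote site
print(replicate(vm, remote), "blocks actually sent for the VM")  # -> 2
```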

SimpliVity now claims to have shipped over 1,500 OmniCube licences (although it’s unclear whether this translates into actual systems) and reported a five-fold increase in sales in 2014 compared to 2013. It recently raised an additional $175m, taking total funding to $276m.

Both Nutanix and SimpliVity are targeting their products at the mid- to large-enterprise segment of the market. However, significant opportunities exist in developing hyper-converged solutions for the SME market upwards.

Scale Computing is another hyper-converged start-up bringing products to market that eliminate the need to run separate server and storage disciplines within the IT department. Rather than use VMware vSphere or Microsoft Hyper-V as the virtualisation framework, Scale has chosen KVM, an open-source hypervisor. The company has done this specifically to reduce the cost of its solutions, which are targeted at more entry-level requirements.

In order to support KVM in a hyper-converged context, Scale has had to develop two key pieces of technology for its HC3 (Hyper-Converged Compute Cluster) platform. SCRIBE (Scale Computing Reliable Independent Block Engine) provides a distributed, enterprise-class storage layer for virtual machines, implementing data management features including thin provisioning, snapshots and clones. Data performance and protection are provided through wide striping and automated data redundancy.

Hardware management is provided through a custom-built “state engine” that monitors and records thousands of data points across the platform hardware. The state engine responds to new hardware and to device failures, allowing responses to be automated or surfaced in the web-based management GUI.

Scale Computing has positioned itself as a low-cost alternative to other hyper-converged solutions by eliminating the cost of the hypervisor – something the company calls a “vTax”. To date, the company has over 1,000 customers and has deployed over 4,500 systems although it hasn’t quoted any revenue figures. In terms of disruption, many of the customer sales successes for Scale come from companies that have yet to virtualise, which means it is taking new business away from incumbents like VMware and Microsoft. If The Innovator’s Dilemma is any guideline to the development of the hyper-converged business, then Scale could be in a very strong position indeed.

Not to be outdone by the surge of hyper-converged solutions, VMware has developed its own platform to meet this market. VMware EVO:RAIL is a hardware and software package that uses VMware vSphere software (including Virtual SAN) and a simplified set of deployment wizards with hardware coming from OEM providers including Dell, HP and EMC.

VMware Virtual SAN provides the distributed storage layer within the platform and was updated with the release of vSphere 6 in February 2015. Virtual SAN now supports all-flash configurations (albeit at a price premium) rather than only using flash as a caching tier. Improvements have been made in operational features; however, the software still lacks some optimisation features such as deduplication and compression.

Packaging the EVO:RAIL software with tested hardware configurations removes the risk for end users looking to build their own hyper-converged solutions. We’ve previously seen some problems surface with components such as RAID storage adaptors even though they were supposed to have been validated for VMware’s Hardware Compatibility List (HCL). The additional support of the hardware platform through the EVO:RAIL program will give some customers more confidence in using relatively new features such as Virtual SAN.

Containers

Over the last 15 years, server virtualisation has itself been a disruptive technology. We have seen the migration of applications from discrete stand-alone physical servers to infrastructure based on VMware’s vSphere platform, Microsoft Hyper-V and open source solutions from Xen and KVM. The value proposition was simple: the majority of server hardware was underutilised and introducing a hypervisor and moving to virtual machines enabled physical resources to be optimised and used much more efficiently. For many IT departments (especially those running the Windows operating system), the savings were significant, to the extent that today server virtualisation is the de facto standard for deploying new workloads. In addition, the move to virtualisation has allowed the development and adoption of private and public cloud technology.

As data centres evolve (and more importantly scale up) we are starting to see the inefficiencies of deploying a virtual machine for each application. Each VM requires (and consumes) processing power and server memory, but, more importantly, takes significant management overhead for deployment, patching, backup and general maintenance.

To reach the next level of efficiency, IT organisations are looking to containers as a way of running super-lightweight applications. The idea of containers isn’t new; it could be argued that the containerisation concept stretches as far back as the invention of CICS for the IBM mainframe in 1968. It has certainly been in existence since the “chroot” system call was introduced into Unix in 1979. More recently, Sun Microsystems introduced Solaris Containers in 2005 and LXC (Linux Containers) followed in 2008, building on isolation features added to the Linux kernel.

Containers allow a single operating system to host multiple “user spaces”, which look effectively like individual isolated virtual machines within the same operating system, making the overhead of starting a new user-space process extremely low.
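
The isolation primitive underneath this is simple to demonstrate. The sketch below confines a child process to its own root directory using the chroot call mentioned above; real container engines layer namespaces, resource controls and image management on top. It assumes a Linux host, root privileges and a prepared minimal root filesystem.

```python
# Minimal sketch of the isolation primitive behind containers: confining a
# process to its own root directory with chroot. Real container engines add
# namespaces, cgroups and image management on top. Assumes a Linux host,
# root privileges and a prepared minimal root filesystem.
import os

def run_in_jail(new_root, command):
    """Fork a child, lock it into new_root, then exec the command."""
    pid = os.fork()
    if pid == 0:                      # child process becomes the 'container'
        os.chroot(new_root)           # '/' now refers to new_root
        os.chdir("/")
        os.execvp(command[0], command)
    os.waitpid(pid, 0)                # parent waits for the jailed process

# e.g. run_in_jail("/srv/minimal-rootfs", ["/bin/sh"])
```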

The current poster child of the container world is Docker, a start-up less than two years old that provides a framework and APIs for creating, operating and managing containers, using libraries of pre-defined application and operating system images stored in a repository known as a Hub. Customers can use the public hub offered by Docker or run their own private repositories based on customised O/S and application images.
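
In practice, pulling an image from a hub and launching a container takes a couple of commands. The sketch below drives the standard Docker command line from a script; the image and command shown are purely illustrative and assume the Docker engine is installed.

```python
# Sketch of driving Docker from a script via the standard CLI. Assumes the
# Docker engine and client are installed; the image and command are
# illustrative only.
import subprocess

def run_container(image, command):
    """Pull an image from the Hub and run a throwaway container."""
    subprocess.run(["docker", "pull", image], check=True)
    result = subprocess.run(["docker", "run", "--rm", image] + command,
                            check=True, capture_output=True, text=True)
    return result.stdout

print(run_container("ubuntu:14.04", ["echo", "hello from a container"]))
```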

What’s probably most remarkable about the Docker phenomenon is the scale at which other well-established companies have scrambled to form partnerships and start development programs around containers. The company now has offerings through the three main cloud vendors (Amazon Web Services, Microsoft Azure and Google Cloud Platform), plus integration with open source development and deployment platforms such as OpenStack, Chef, Puppet, Jenkins and Vagrant. There are even plans to bring containers to Windows.

So why has Docker been so successful when containers have been tried many times before? In some respects Docker’s arrival has occurred at something of a perfect storm for IT. Virtualisation has seen widespread adoption and is reaching levels where the overhead of the virtual machine has a significant impact on resources. Public clouds have gained popularity and end users are comfortable with IaaS (Infrastructure as a Service). Docker and containers fit into the PaaS (Platform as a Service) classification, an area that hasn’t really taken off, probably due to a lack of understanding of how PaaS solutions work.

The container revolution offers an opportunity to deliver applications even more efficiently than was achieved through the wave of hardware consolidation that virtualisation delivered. Does this mean containers replace virtualisation? Not quite. Containers are suited to replacing a large portion of the work traditionally handled by virtual machines, but server virtualisation won’t be entirely replaced; its footprint may, however, shrink significantly.

Docker isn’t the only game in town when it comes to containers. CoreOS is another company looking to make gains from the deployment of lightweight operating systems and containers. The company originally supported Docker but recently announced the development of Rocket, a direct competitor to the Docker platform. Whether Docker or Rocket ultimately wins remains to be seen, but success is likely to be based on the level of support for each solution across Linux distributions. In that respect, Docker is currently well ahead, having forged relationships with IBM, Amazon Web Services, Google, Dell and Microsoft to name but a few.

Cloud

Cloud computing is now well established and for many organisations public cloud is their default (and in some cases only) technology platform. However, it’s premature to talk about the demise of the data centre, as many companies will always want to retain on-premises equipment for a variety of reasons, including compliance, legal restrictions and security. This means we have a middle ground, known as the hybrid cloud, where applications are deployed both on and off-premises, with the concept of “bursting” peak application workload to the public cloud on demand.

Moving either applications or data to the public cloud to cater for peak demands is no trivial task. For systems still built around the legacy monolithic concept of a single virtual machine, moving the VM into the cloud presents challenges around time to migrate (dependent on the network speed available), security and networking changes (to cater for a new networking domain), plus the requirement to still provide backup and recovery in the event of data loss or corruption. When the O/S image for a VM can be 40GB or more, migrating whole VMs isn’t a particularly practical approach.
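
A back-of-envelope calculation shows why. Assuming a 40GB image and typical WAN link speeds (the figures below are assumptions for illustration), the transfer alone can take several hours on slower links, before security, networking and backup are even considered.

```python
# Back-of-envelope check on moving a whole VM image to the cloud: time to
# push an assumed 40GB image over typical WAN link speeds (illustrative
# figures, ignoring protocol overhead and contention).
def transfer_hours(size_gb, link_mbps):
    """Hours to move size_gb of data over a link of link_mbps."""
    size_megabits = size_gb * 8 * 1024        # GB -> megabits
    return size_megabits / link_mbps / 3600

for mbps in (20, 100, 1000):
    print(f"40GB over {mbps} Mbit/s: ~{transfer_hours(40, mbps):.2f} hours")
# Roughly 4.6 hours at 20 Mbit/s and still close to an hour at 100 Mbit/s,
# per VM, before security, networking and backup are even addressed.
```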

The alternative is to move only the data that makes up the application and access it from a server already deployed in the cloud. As discussed, this presents problems of both time and concurrency, especially for large data sets. Moving the data for an entire application would prove costly and time-consuming, and would risk exposure to data loss.

One company looking to resolve this problem is Avere Systems. Mike Kazar and Ron Bianchini, both veterans of the storage industry, founded the company in 2008. Kazar previously founded Spinnaker Networks, which was sold to NetApp and forms the basis of that company’s C-mode platform.

Avere’s initial products were focused on accelerating the performance of file protocols by placing a NAS caching appliance in remote offices, connected back to a central system in the customer’s main data centre. Typically, only a small percentage of file data is active at any one time, so the local cache provided low-latency performance while periodically transferring updates back to the core appliance, or “filer” as such systems are commonly known.

Successive generations of Avere’s systems introduced global name spaces (AvereOS release 2.0), data mobility (the ability to move and/or replicate data) in AvereOS 3.0 and support for public cloud platforms such as Amazon S3 in AvereOS 4.0. Up until this point, all features had been delivered with dedicated hardware in the customer’s data centre.

AvereOS 4.5 is where things start to get interesting. With this version, Avere released a software-only implementation of its appliance, which runs as an EC2 instance in Amazon Web Services. The virtual instance, or VSA (Virtual Storage Appliance), caches data from the customer’s data centre into AWS for access by virtual machines running in EC2. The benefits of this solution are immediately obvious: data doesn’t have to be moved wholesale into the public cloud; the virtual appliance caches only the active data needed by the cloud instances and co-ordinates the write-back of updates to the customer’s on-premises filer. The result is an elegant solution for ensuring data integrity with the most efficient use of data transfer – only the data read or written by the application crosses the wire.

The Data Growth Crisis

As data volumes grow, another issue facing IT departments is managing large volumes of information, including archiving and backup. Many organisations can’t or won’t delete inactive data, either because of regulatory or compliance requirements or, in many cases, simply because they have no idea whether the content is valuable or not.

Clearly, continuing to deploy expensive storage products isn’t an option. Neither is placing the data on cheaper commodity hardware, as there’s still an operational and environmental cost to keeping that data available to end users. The use of archive media such as tape may be more cost-effective; however, many a data centre manager will be painfully aware of the issues that arise in dealing with multiple tape media formats, each of which could be storing data in a different application format.

The public cloud provides a solution for the storage of large volumes of data, but the question is how to get data there and manage it in the most effective manner. The cloud-scale storage providers (including AWS, Azure and Google) all use REST-based APIs to store and retrieve data as objects. However, this isn’t the typical consumption model in most enterprise IT departments, where block and file protocols still dominate. Utilising cloud storage therefore means finding a way to move data into and out of the cloud while maintaining existing protocols, security and data centre access – a so-called cloud “on-ramp”.
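
The object model the providers expose looks like the sketch below: whole objects are written and read over an HTTP API, addressed by bucket and key. The bucket and key names are placeholders and credentials are assumed to be configured already; an on-ramp appliance hides exactly this model behind conventional file or block protocols.

```python
# Sketch of the object-style access model the cloud providers expose: whole
# objects are written and read over an HTTP API, addressed by bucket and key.
# Uses the AWS boto3 SDK; the bucket and key names are placeholders and
# credentials are assumed to be configured in the environment.
import boto3

s3 = boto3.client("s3")

# Store some archive data as an object, then read it back.
s3.put_object(Bucket="example-archive-bucket",
              Key="2015/04/archive-set-001.dat",
              Body=b"example archive payload")
obj = s3.get_object(Bucket="example-archive-bucket",
                    Key="2015/04/archive-set-001.dat")
data = obj["Body"].read()

# A cloud on-ramp appliance hides this model behind ordinary NFS/SMB shares,
# translating file operations into object calls like the ones above.
```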

One company moving forward with a solution to this problem is Nasuni. The company markets and sells a NAS appliance or filer that can be purchased either as a physical device or as a VSA (Virtual Storage Appliance), both of which are deployed in the customer’s data centre. The Nasuni filer provides what looks like local filer storage space but is in fact backed by Amazon S3 and Microsoft Azure Storage. Usually, only a small proportion of file data is active at any one time, permitting the Nasuni appliance to effectively act as a cache to the data saved on cloud storage.

The Nasuni solution differs from Avere’s in that Nasuni manages the data stored in the cloud under its own account. Charges are based on each terabyte of data stored per month, with the cost inclusive of the cloud service provider charges. Customers retain control over their own data and can create encryption keys to ensure that Nasuni has no access to the actual content, although keys can be escrowed with Nasuni for safekeeping if a customer doesn’t have its own key management procedures.

The Nasuni service (which includes filers and centralised management) offers much more than a simple storage gateway. The platform provides back-end replication of data between cloud providers to ensure an outage with a single provider doesn’t cause data inaccessibility. If this scenario should occur, Nasuni manages the process of ensuring data remains consistent between the different cloud providers.

Nasuni also offers the ability to replicate data between filers, providing logical access to data in remote locations. Full scalability is provided through a feature that allows data access to be synchronised securely through the public Internet. This allows file locking to be honoured anywhere in the world on a single instance of a file.

Global file locking is a powerful tool for large organisations with many locations and branches around the world. Placing data within the cloud also allows customers to benefit from running services against that data within the cloud itself. Nasuni offers a cloud-based version of its VSA that can run either as an Amazon EC2 instance or within Microsoft Azure. The ability to access data within the cloud without calling back to on-premises equipment means functions such as data mining, virus scanning and archiving can all be run in the cloud, taking advantage of off-peak or spot pricing of cloud compute capacity.
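
The principle behind global file locking can be sketched simply: every site asks a shared lock service for a file before writing, so only one writer holds a given file at a time, wherever in the world it is opened. The sketch below is a generic illustration of that idea, not Nasuni’s implementation.

```python
# Generic illustration of global file locking: every site asks one shared
# lock service before writing, so only one writer can hold a given file at a
# time, wherever it is opened. This is not Nasuni's implementation.
import threading

class GlobalLockService:
    """Stands in for a cloud-hosted lock coordinator shared by all sites."""
    def __init__(self):
        self._guard = threading.Lock()
        self._owners = {}               # file path -> owning site

    def acquire(self, path, site):
        with self._guard:
            if path in self._owners:    # another site holds the lock
                return False
            self._owners[path] = site
            return True

    def release(self, path, site):
        with self._guard:
            if self._owners.get(path) == site:
                del self._owners[path]

locks = GlobalLockService()
print(locks.acquire("/projects/plan.docx", "london"))    # True
print(locks.acquire("/projects/plan.docx", "new-york"))  # False until released
locks.release("/projects/plan.docx", "london")
```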

Conclusion

The future of the data centre looks increasingly likely to be one of the hybrid cloud, with major disruptions influencing the location of data.

Storage continues to be a major focus as traditional hardware platforms start to be displaced by hybrid and all-flash systems. The storage discipline is fragmenting, with dedicated hardware being replaced by hyper-converged solutions that enable IT to be delivered by technology generalists.

The endless growth in storage demand is being met by solutions that take data out of the data centre and place it into the public cloud, easing the constraints on space, power and cooling but, more importantly, providing mobility to data that, once outside the data centre walls, may never return.

The tight grip maintained by virtualisation, which has resulted in “one VM, one application”, could well be coming to a close as containers apply a further level of optimisation on top of the hardware consolidation savings delivered through server virtualisation.

Biography

Report produced by Chris M. Evans.

Chris M. Evans runs the Architecting IT website and is an IT consultant with over 26 years of experience. Chris has provided consultancy and advice to a wide range of customers and industry segments, including finance, utilities and IT organisations. He runs his own storage consultancy, Langton Blue Limited, focused on resolving IT-related business issues, and also writes articles and reports.

About IDG Connect

IDG Connect is the demand generation division of International Data Group (IDG), the world’s largest technology media company. Established in 2005, it utilises access to 38 million business decision makers’ details to unite technology marketers with relevant targets from any country in the world. Committed to engaging a disparate global IT audience with truly localised messaging, IDG Connect also publishes market-specific thought leadership papers on behalf of its clients, and produces research for B2B marketers worldwide. www.idgconnect.com