STORAGE
Managing the information that drives the enterprise
Vol. 11 No. 11 | January 2013

THE TRUTH ABOUT SOLID-STATE STORAGE PERFORMANCE
Vendors tout millions of IOPS and other jaw-dropping solid-state performance specs, but should you believe them?

Also in this issue:
Castagna: Storage developments we need in 2013
Toigo: 'New' DR looks like 'old' DR
McClure: Cloud share and sync maturing
Auto-tiering now a checklist feature
Users rate their NAS—EMC, Hitachi prevail
Taneja: Crank up quality of service
Snapshot: Converged storage systems just catching on




Editorial | Rich Castagna

Five Storage Developments We Need in 2013
Storage could be a lot easier, make a lot more sense and not be the most expensive thing in the data center ... but it probably won't happen this year.

Hope springs eternal, especially at the outset of a new year. Even for a jaded, somewhat cynical, ex-New Yorker like me, the image of 12 fresh months is rejuvenating. Having survived 2012, hard-pressed storage managers can now breathe a sigh of relief and focus on the bounty that 2013 will bring.

Well … maybe. Last year was probably the busiest I've ever seen in the storage market, with the release of lots of new products, super-fast flash having an impact practically everywhere, and storage companies pursuing mergers and acquisitions like bargain-crazed shoppers at Walmart. But despite all the new developments on the tech front and the stretching of the data center into the cloud, a lot of other fundamental storage stuff saw little change.

In the spirit of the new year, here’s my list of five developments that we all want to see in the data storage market in 2013 but, alas, probably won’t. But it won’t be because these are tough tech nuts to crack; it’ll be because vendors continue to drag their heels for a variety of reasons.

1. Real cloud storage standards. SNIA has its Cloud Data Management Interface (CDMI) standard, which on the surface looks like a good step toward standardizing cloud storage metadata to make it easier to move data among cloud services. But few vendors have signed on, and the effort is looking more like window dressing from a clutch of vendors protecting the turf they've already staked out. Other than SNIA and maybe the open-source OpenStack project, there's not much else going on with cloud storage standards.



But this certainly isn't a surprise for most storage pros; storage companies protect their bottom lines by aggressively maintaining proprietary formats and protocols that make switching to competing products difficult and often too costly to consider. So why should cloud storage be any different?

2. Let's cut big data down to size. I mean that both literally and figuratively. There's a pretty good chance we wouldn't be subject to all this big data propaganda if, instead of Hadooping us into thinking we need big data this and big data that, vendors delivered products that helped us sift through our data to find the meaningful stuff without requiring Herculean efforts and a whole new set of products. At one time, data classification looked like an emerging market with companies such as Kazeon Systems, NextPage, Scentric and StoredIQ, among others. A few have since disappeared, others were scarfed up by bigger companies and some are still plugging away. Data classification could make short work of data analysis, but that would put the brakes on the big data juggernaut and make all those big data products look … er, less than necessary.

3. Widely available primary storage dedupe/compression. "Widely available" is the key part of this New Year's wish. The technology has been around for years (even decades), so that's not the problem. Vendors say it isn't so easy to add data reduction to arrays; it takes processing power and the results will never be as dramatic as with backup dedupe. We know all that, but we also know that NetApp's been doing it for years. Dell and IBM bought Ocarina and Storwize, respectively, and they've hardly been heard from since. Dare we hope that they and other storage vendors will finally step it up in 2013 and deliver something storage managers sorely need?

4. Serious and honest cost/benefit analysis of virtualization. When we talk to vendors about storage for virtual servers or virtual desktops, we seem to always hear how the stuff you have on your data center floor now just isn't up to the task of supporting virtualized environments. So virtualization means new storage systems—preferably liberally laced with NAND flash—and replacing all those bargain-basement servers and desktops with $50,000 gazillion-core colossal servers. But what happens when the network strains under that load?


There are plenty of good reasons to virtualize—easier management, disaster recovery and so on—but it's far from the blanket solution that vendors claim, even if they stick "cloud" onto their product names.

5. Practical alternatives to RAID. RAID is approximately 25 years old and beginning to show its age as disk capacities climb to levels that make it impractical for data protection. You could just mirror everything, of course; it's about the easiest RAID implementation around and it's effective, even if it's the least economical way to RAID drives. A handful of companies are trying to shake us from our RAID mindset and get us to think about alternatives like erasure codes and other variations that disperse data across a number of nodes and can withstand failures with greater resilience than RAID. The technology is there; we're just waiting for some of the major storage vendors to give it a boost.

One final thing I'd like to see in 2013: the New York Jets winning the Super Bowl. I'm not sure which is less likely to happen—any of the above five coming to fruition in 2013 or the Jets hoisting the Lombardi Trophy one more time. As John Lennon sang, "You may say I'm a dreamer …"

Rich Castagna is editorial director of TechTarget's Storage Media Group.



Storage Revolution | Jon Toigo

'New' Disaster Recovery Looks a Lot Like the 'Old' DR
The Mayans blew it—we're still here—but storms like Sandy show that there's no replacement for sound disaster recovery planning.

If you're reading these words, the good news is that the doomsayers advancing the Mayan prophecy have been proven wrong. To wit: December 21, 2012, has come and gone, and the planet didn't change its magnetic poles or eject its crust, thereby expelling you, me and all our mismanaged data into the icy void of space.

I would argue that this is, for the most part, a good thing. Frankly, I’ve never experienced an apocalypse, but from what I’ve seen in movies and read in books, it seems like the “end of days” would be most unpleasant. So, that’s the good news.

The not-so-good news is that we may have more to worry about than pre-Columbian prophecies. You might remember that late-season hurricane/tropical storm/superstorm called Sandy that visited its own dystopian reality upon the residents of the N.J./N.Y./Conn. tri-state area in late October. Counting that natural disaster, we've now seen several consecutive years of "once-in-a-century" weather events, providing what some climatologists regard as empirical evidence of a mounting planetary problem.

This problem has less to do with climate change than it does with the consequences of a collision between severe weather and poorly conceived ideas of civil engineering, architecture, electrification, transportation and distribution, and urban planning. As the old saying goes, "Into each life some rain must fall." The problem is that so much of our stuff is built on sand—or it's at least at or below sea level. Given the warnings that appear in the texts of Matthew (7:24-27) in the Bible, we've apparently been building our stuff on sand for quite a while.



I started a countdown clock after Sandy hit the beach in New Jersey to see how long it would take for some server hypervisor vendor or cloud service provider to spin a yarn about how their technology provided a life raft or something for some business just as the sea reached the lobby. Within 24 hours, the expected story appeared in my inbox, sent by an enthusiastic PR flack. Changing all the names of the principals, the story went like this:

Cloud Provider X saved Engulf and Devour Company from certain demise by providing a location where all of the company's data could be transported and kept safe from the inclement weather. Engulf and Devour acted promptly two days before landfall to establish a network interconnect with Cloud Provider X and to copy all its data to cloud storage provisioned by the vendor. After approximately 48 hours, data was successfully transferred and could be accessed by apps and end users from its new storage location in the cloud. This, the PR representative offered, proved the value of cloud storage as a way to replace the traditional disaster recovery (DR) planning process with state-of-the-art high availability (HA).

On its face, this sounded like an impressive case study. Digging into the details, however, my initial interest receded more quickly than Sandy's tidal surge. It turned out that Engulf and Devour's entire complement of data comprised 1.8 TB. They were adding a couple of hundred gigabytes to this store every day or two. Even with this smallish amount of data to protect, copying it over to the cloud service provider required about two days. An LTO tape backup would have taken, at most, about two hours. Copying the data to a second hard disk, say a 2 TB removable SATA drive, may well have taken even less time. Depending on how far away the physical facility of the cloud service provider was located, couriering over a tape or a disk drive would likely have taken less than 24 hours. So, the story of the miracle of "across-the-wire HA for data" began to seem a tad less miraculous to me.
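That back-of-the-envelope math is easy to reproduce. Here's a minimal sketch; the throughput figures are illustrative assumptions (a modest WAN link, LTO-5 tape at its rated compressed speed, a single SATA drive), not numbers taken from the anecdote:

```python
# Rough transfer-time arithmetic for the 1.8 TB example above.
# Throughput figures are illustrative assumptions, not measured values.

DATA_MB = 1.8 * 1_000_000  # 1.8 TB expressed in MB (decimal units)

transports_mb_per_s = {
    "WAN copy to cloud provider (~10 MB/s effective)": 10,
    "LTO-5 tape backup (~280 MB/s with 2:1 compression)": 280,
    "Removable 2 TB SATA drive (~100 MB/s sustained)": 100,
}

for name, rate in transports_mb_per_s.items():
    hours = DATA_MB / rate / 3600
    print(f"{name}: about {hours:.1f} hours")
```

At those assumed rates, the WAN copy lands at roughly two days, while either local option finishes in a few hours, which is the gap the column is pointing at.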

The moral of this story is simple. Clouds, virtualization and so on are often represented as silver-bullet solutions for processes like data protection.



I hear many vendors argue that these are part of a new generation of HA technologies that put older concepts like disaster recovery and business continuity (BC) out to pasture. This is marketecture, since HA has always been a tool within the toolkit used by DR/BC planners. The recovery/continuity planning process seeks to apply the right tools (HA, tape backup, etc.) to the right targets (business processes, and the applications and data that serve them) based on their business value and sensitivity to disruption. Used judiciously, and with a common-sense perspective about alternatives, HA can provide value; applied indiscriminately, HA just makes everything cost more without contributing any greater protection or availability to the user.

I hope your 2013 will be disaster free. But I also hope that you'll be able to institute a common-sense business continuity planning practice in your workplace if you lack one today. Whatever you do, don't buy into the rhetoric of the tech peddlers. Disaster recovery isn't obsolete; given the increased dependency on automation to make fewer staff more productive, such planning has never been more urgent than it is today.

Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.



Cover Story | Storage Performance

The Truth About Solid-State Storage Performance
Here's what you need to know to see if "million IOPS" performance numbers ring true and what they might mean to you.

By Leah Schoeb

It seems as if each week brings a new performance claim of 1 million IOPS from a storage vendor touting its solid-state storage system. Due to a lack of disclosure with some of these published solid-state drive (SSD) performance benchmarks, it can be tough for a user to understand what these million IOPS performance numbers mean and how they relate to current storage performance issues.

Data storage vendors go to great lengths to demonstrate their products' high performance and to prove to storage managers that their products can handle large volumes of data center activity.



Solid-state vendors started publishing SSD performance benchmarks to demonstrate how a 1U or 2U solid-state storage device could outperform a large enterprise-class storage system tricked out with thousands of drives. The vendors also wanted to demonstrate that they could not only achieve 1 million IOPS in such a small, efficient footprint, but that they could do it at a fraction of the cost of a high-end storage array.

But the playing field is far from level; for example, high-end storage systems boast data protection and data management facilities that a majority of solid-state storage offerings can't match. Many storage vendors, including those selling high-end enterprise storage, are racing to close that gap to give users the best of all worlds: performance, price, data protection and management. Recently, they've reached 1 million IOPS with enterprise solutions that include features like thin provisioning, snapshots, replication, infrastructure management and monitoring, and API support for server virtualization interfaces like VMware's vStorage APIs for Array Integration (VAAI).

Vendors that offer controller-based storage have been redesigning their storage controllers to handle the increased performance capacity offered by today's SSDs. The addition of dynamic tiering has allowed highly active data to be automatically serviced by the solid-state storage layer. When configured and tuned properly, this can greatly increase the performance of a workload. Less-frequently accessed data is still stored on rotating media to minimize cost.

Benchmarks versus load generators

For comparison purposes, it's important to distinguish between benchmarks and load generators. Many published results have been mislabeled as benchmark results, which can be confusing when you want to compare results. Load generators are often mistaken for benchmarks because a load generator may be used to create a benchmark.

A benchmark is a fixed workload with reporting rules and a fixed measurement methodology, so its characteristics can't be changed. Standard industry benchmarks impose further restrictions, often with an independent reviewer who ensures the compliance of the results. This ensures users get an "apples-to-apples" comparison between similar products. There are currently two standards bodies that offer industry-standard benchmarks for storage (see the chart "Current industry-standard storage benchmarks"): the Storage Performance Council (SPC) and the Standard Performance Evaluation Corporation (SPEC).


A load generator is a tool used to simulate a desired load for performance characterization or to help reveal performance issues in a system or product. These generators have "knobs" to adjust the desired workload characteristics, and they're used not only by performance professionals but by testing organizations to validate a product's established specifications.

Current industry-standard storage benchmarks

SPC-1, SPC-1/E
Description: A single OLTP workload with characteristics similar to an exchange or messaging environment
Primary metrics: SPC-1 IOPS, $/SPC-1 IOPS, application storage unit (ASU) capacity*, data protection level, total price

SPC-1C, SPC-1C/E
Description: A single OLTP workload with characteristics similar to an exchange or messaging environment, for storage component products
Primary metrics: SPC-1C IOPS, ASU capacity, data protection level, total price

SPC-2, SPC-2/E
Description: Three distinct workloads: large file processing, large database queries and video on demand
Primary metrics: SPC-2 MBPS, $/SPC-2 MBPS, ASU capacity, data protection level, total price

SPC-2C, SPC-2C/E
Description: Three distinct workloads: large file processing, large database queries and video on demand, for storage component products
Primary metrics: SPC-2 MBPS, ASU capacity, data protection level, total price

SPEC SFS
Description: The request-handling capabilities of file servers using the NFSv3 and CIFS protocols
Primary metrics: Throughput, response time

*ASU is the total capacity read or written while conducting the benchmark test.


Results often can't be compared with those from other vendors because there's no guarantee that the test conditions were equal while measuring the system under test.

It's important to be aware of these differences since each of the 1 million IOPS results was likely measured under different conditions by the various vendors. Some results reflect artificially high caching effects that are very hard to reproduce in a normal operational environment, and they often lack the full disclosure needed to reproduce the test in one's own data center. The chart titled "Popular I/O-generator tools" shows two of the most popular load generators currently used in performance testing labs. There are many other good generators in use that were created by major storage vendors, but these two are noted here not only for their popularity but because they're available for free at http://sourceforge.net.

Most million IOPS results are based on testing with 512-byte blocks, but most enterprise online transaction processing (OLTP) applications use data transfer sizes of 4 KB or 8 KB. Vendor measurements have shown that for well-behaved solid-state storage, 4 KB transfers perform approximately 20% lower (yielding approximately 800,000 IOPS) than 512-byte blocks, and some solid-state products that aren't as well behaved may show as much as a 40% drop in performance compared to measurements using 512-byte blocks. In addition, most million IOPS results are more speeds-and-feeds information than measured application or workload performance numbers.
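To see what that block-size caveat does to a headline number, here's the arithmetic as a quick sketch; the 20% figure is the drop cited above and the rest is straightforward unit conversion:

```python
# Convert a "1 million IOPS" claim measured with 512-byte transfers into
# approximate 4 KB OLTP terms, using the ~20% drop cited in the article.

iops_512b = 1_000_000
drop_at_4k = 0.20                       # well-behaved solid-state storage
iops_4k = iops_512b * (1 - drop_at_4k)  # roughly 800,000 IOPS

mb_per_s_512b = iops_512b * 512 / 1e6
mb_per_s_4k = iops_4k * 4096 / 1e6

print(f"512 B blocks: {iops_512b:,.0f} IOPS ~ {mb_per_s_512b:,.0f} MB/s")
print(f"4 KB blocks:  {iops_4k:,.0f} IOPS ~ {mb_per_s_4k:,.0f} MB/s")
```

The IOPS number shrinks while the bandwidth grows, which is exactly why a raw 512-byte IOPS figure says little about how an OLTP workload will behave.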

Measuring solid-state storage

Solid-state storage has unique behavior characteristics and, unlike hard disk drives (HDDs), has no moving parts to consider. HDD metrics like rotational latency and seek times don't apply. Because those latencies are eliminated, response times for solid-state storage are usually measured in microseconds instead of in milliseconds as with HDDs. It's important for end users (the consumers of these results) to understand how these measurements are performed to ensure that the reported results represent a verifiable and sustainable level of performance.

Not all solid-state drives perform equally. Single-level cell (SLC) SSDs have faster access times than multi-level cell (MLC) SSDs. DRAM-based solid-state storage is currently considered the fastest, with average response times of 10 microseconds versus the average 100 microseconds of other SSDs.


Enterprise flash devices (EFDs) are designed to handle the demands of Tier-1 applications, with performance and response times similar to less-expensive SSDs. EFDs add enterprise-level data protection and management features that sometimes involve a small tradeoff in performance, depending on the vendor. Each manufacturer has created its own wear-leveling algorithms, and some algorithms can create large drops in performance over time for write-intensive workloads. Another factor to consider is the storage protocol used to access the SSDs. Fibre Channel is still the highest-performing protocol, but SAS isn't far behind. iSCSI and SATA perform well with SSDs, but most products built around those technologies won't produce 1 million IOPS results unless they have other caching features to assist performance.

The location of solid-state storage in the I/O path can also be a factor in producing a million IOPS result. Microsecond response times are easier to achieve if the solid-state storage is located closer to the host. Many vendors have taken advantage of this fact with PCI Express (PCIe) flash cards and SSDs that plug into a host like internal HDDs. To ensure maximum performance with host-side solid-state storage, intelligent software from companies like Fusion-io, LSI, Proximal Data, SanDisk and VeloBit can help optimize the performance of host-side SSDs and PCIe flash cards.

Even hypervisor vendors have gotten into the million IOPS reporting act by demonstrating how a single virtual machine can drive 1 million IOPS just like a physical server. VMware used a popular I/O load generator to rack up million IOPS results using a Violin Memory 6000 all-flash storage array.

Popular I/O-generator tools

Vdbench: Comprehensive workload generator that can also replay captured workload traces
Iometer: Generates uniform I/O loads for speeds-and-feeds information


Under different measurement conditions, Microsoft even published a 1 million IOPS result for Windows Server 2012. Because the two results were measured under very different conditions, they're a good example of results that can't be competitively compared.

Solid-state storage performance measurement

Solid-state storage has different performance measurement requirements than HDDs, so it's key to ensure that published results followed solid-state measurement procedures properly. There are four main steps that have to be performed to demonstrate sustained solid-state performance (a rough sketch of the workflow follows the list):

1. Create a common starting point. Solid-state storage needs to be in a known, repeatable state. The most common starting point is a new SSD that has never been used, or one that has been given a low-level format to wipe its contents and restore it to its original state.

2. Conditioning. Solid-state storage has to be put into a "used" state. During initial measurements, solid-state storage will show artificially high performance that's only temporary and not sustainable. These numbers shouldn't be reported as a demonstration of the solid-state storage's true sustained performance. For example, if random 4 KB writes are run against the storage for approximately 90 minutes, it should put the storage into a "used" state. Depending on the manufacturer, the transfer size or amount of time needed for conditioning may change.

3. Steady state. Performance levels will settle down to a sustainable rate; that's the performance level that should be reported.

4. Reporting. The level of reporting is important. If a standard benchmark requiring full disclosure wasn't used, there's a minimum amount of information required. The type of I/O is important to know. Most results are reported as 100% random reads because random writes diminish performance; with solid-state storage, most random write workloads don't perform much better than HDD systems do. Some results also disclose the number of outstanding I/Os, which is a "nice to have" piece of information when coupled with a reported average response time.


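Here's a minimal sketch of that four-step workflow, assuming a hypothetical run_random_io() helper that stands in for whatever load generator you use (Vdbench, Iometer or a vendor tool); the five-sample, 2% steady-state test is an illustrative choice, not a prescribed value:

```python
import statistics

def run_random_io(device, block_size_kb, write_pct, seconds):
    """Hypothetical stand-in for a load-generator run (Vdbench, Iometer, etc.);
    returns the measured IOPS for the interval."""
    raise NotImplementedError("wire this up to your actual load generator")

def measure_sustained_iops(device):
    # 1. Common starting point: a fresh SSD, or one restored with a
    #    low-level format / secure erase (placeholder for vendor tooling).

    # 2. Conditioning: roughly 90 minutes of random 4 KB writes to put the
    #    drive into a "used" state; fresh-drive numbers aren't sustainable.
    run_random_io(device, block_size_kb=4, write_pct=100, seconds=90 * 60)

    # 3. Steady state: sample one-minute intervals until the last five stop
    #    drifting (under ~2% variation), then treat that level as sustained.
    samples = []
    while True:
        samples.append(run_random_io(device, block_size_kb=4, write_pct=100, seconds=60))
        recent = samples[-5:]
        if len(recent) == 5 and statistics.pstdev(recent) / statistics.mean(recent) < 0.02:
            break

    # 4. Reporting: alongside the steady-state IOPS, disclose the I/O mix,
    #    transfer size, outstanding I/Os and average response time.
    return statistics.mean(recent)
```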

Even after following these important steps to measure solid-state storage performance, it's still hard to compare results without some kind of comparison criteria or fair-use rules. More information about these four steps can be found through the Storage Networking Industry Association's (SNIA) Solid State Storage Initiative (SSSI).

Standards organizations back their SSD benchmarks

Industry-standard benchmarks and other well-accepted benchmarks are the best means to assess competitive comparisons with full disclosure of a vendor's product offering. These benchmarks are usually based on application workloads and have strict rules around measurement, independent certification and/or audit, and reporting. These rules are in place for the end user's benefit, asserting that an independent third party has verified the information and that the reporting rules were followed. In addition, this ensures that full disclosure is reported with the same information and in the same format for easy consumption and comparison with the results of tests conducted on other products.

Standards organizations like SPC, SPEC and SNIA SSSI are good examples of industry organizations putting standards in place to ensure the proper measurement of solid-state storage performance. The SPC workloads are based on Tier-1 applications and can't be compared to 100% random read results, for example.

Solid-state technology is still maturing, including finding the best ways to sustain long-term high performance from solid-state-based products. By understanding how this high-performance technology is measured, you'll have a better sense of where it might boost the performance of mission-critical applications, and virtualized and cloud infrastructures in the data center.

Leah Schoeb is a senior partner at Boulder, Colo.-based Evaluator Group.


Storage Tiering: State of the Art
Storage tiering has quickly become a storage best practice, accelerated by the use of solid-state storage. We survey how the major vendors leverage solid-state to implement effective storage tiering.

By Phil Goodwin

Technology developments follow a predictable evolution. New functionality starts as an "exclusive" competitive feature offered by one or just a few companies, followed by vigorous industry competition among highly differentiated offerings, and finally inclusion in the "baseline" feature set of most products. Storage tiering, and more specifically automated storage tiering, has become a baseline element. Even so, significant differentiation gives storage managers a mouth-watering choice when it comes to evaluating competing solutions. This differentiation is particularly important to organizations that seek best-of-breed products, where tiered data storage is a significant requirement.



All tiering offerings have certain things in common. First, and at a minimum, the array hosts multiple physical media types, usually including solid-state drives, high-performance disk (either Fibre Channel or SAS) and high-capacity disk, with plenty of permutations of those basic components. Second, systems include software that embodies rules and methods for moving data from one physical tier or media type to another. Even though these features are common at a base functional level, there's enormous variation in the way they're implemented.

Solid-state storage drives tiering

A key technical driver for tiering adoption has been solid-state storage or solid-state drives (SSDs). Early tiering efforts around Tier 1 (Fibre Channel), Tier 2 (SAS) and Tier 3 (SATA) failed because organizations couldn't accurately provision for hot versus cold data. Thus, many tiered arrays remained 80% Tier 1 to ensure adequate performance. The marginal cost savings on the remaining 20% didn't justify the added complexity and effort. SSD has been a game-changer in that it delivers huge IOPS performance gains in a very small footprint (albeit an expensive one). At this point, nearly all storage vendors agree that best-practice architectures include a small percentage of solid-state storage accompanied by high-capacity hard disk drives (HDDs), resulting in far fewer spindles. The aggregate throughput is often higher with a lower acquisition cost.

For the purposes of this discussion, we’ll draw a distinction between SSD and flash cache, though the technology is essentially the same. SSD can be thought of as a distinct Tier 0, available for application provisioning as with any other storage media. Flash cache is general purpose in nature, enhancing the entire array. Most vendors support both types, and a majority also support a “hybrid pool” in which LUNs may consist of both SSD and various types of HDDs.

How vendors leverage flash for tiering

EMC Corp. recommends a "flash first" approach when introducing solid-state storage into an environment. On the company's VNX line of arrays, the product is called Fully Automated Storage Tiering (FAST) Cache.


It's different from DRAM cache and actually functions between DRAM and the HDDs. The company has found that as little as 5% of the total storage capacity in the form of FAST Cache can yield between 300% and 600% overall performance improvement. Moreover, it has found that a 5% slice of flash permits a spindle-count reduction of two-thirds when substituting SATA for Fibre Channel. The result is better performance, lower acquisition cost and lower operational costs—what EMC calls the "triple play of storage."
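A quick sketch of that sizing math; the 100 TB array and 200-spindle baseline are made-up figures for illustration, and only the 5% slice and two-thirds reduction come from the claim above:

```python
# Illustrative arithmetic for the "flash first" sizing claim above.
# Only the 5% flash slice and the two-thirds spindle reduction are from
# the article; the capacity and spindle counts are hypothetical.

usable_capacity_tb = 100
fast_cache_tb = usable_capacity_tb * 0.05           # ~5% of capacity as flash cache

fc_spindles_before = 200                             # hypothetical FC-only layout
sata_spindles_after = round(fc_spindles_before / 3)  # two-thirds fewer spindles

print(f"Flash cache to provision: {fast_cache_tb:.0f} TB")
print(f"Spindles: {fc_spindles_before} FC -> {sata_spindles_after} SATA (plus flash)")
```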

NetApp Inc. prescribes three locations for flash. The first is at the host level, using its Flash Accel product. The second is NetApp's Flash Cache, which is located in the storage controller. NetApp's third approach consists of flash pools, or hybrid aggregates; this last is a Tier-0 implementation that can be directed to specific applications. NetApp's approach differs from EMC's in that it recommends a bottom-up approach: start with storage cache and work your way up as additional performance is required. Nevertheless, it doesn't recommend replacing flash pools with flash cache. When it comes to tuning the flash, NetApp looks for a 90% cache hit rate as optimal. If the hit rate is relatively low, say 50%, it can indicate an insufficient amount of cache. When multiple layers of cache are implemented, the highest layer (the one closest to the server) will serve the necessary I/Os first.

What is state-of-the-art storage tiering?

● Tiered storage strategies must encompass at least three drive types, including solid-state storage.
● Flash memory is an integral part of the offering.
● Sophisticated algorithms identify "hot" data and move it automatically to the appropriate tier.
● Storage arrays can simultaneously be optimized for cost and performance.
● Optimization decisions are largely automated to minimize administrative intervention.


The fundamental philosophy here is to store data on the lowest, cheapest devices and allow the system to elevate it to the appropriate storage tier to meet performance requirements.

Hewlett-Packard (HP) Co., with its Ibrix series of scale-out NAS systems, takes a more traditional approach to tiering. In these arrays, SSD functions as cache, and storage managers can implement physical tiers consisting of Fibre Channel, SAS and SATA HDDs. HP's enterprise 3PAR arrays use "sub-LUN" tiering in the company's Adaptive Optimization tiering offering. Sub-LUN tiering essentially creates hybrid LUNs that exploit the performance of SSDs; these hybrid LUNs can include up to three physical tiers.

Both Hitachi Data Systems and EMC take physical tiering one step further in their Virtual Storage Platform (VSP) and VMAX systems, respectively. Both arrays are capable of including third-party arrays as tiers in their system architectures. (NetApp can also virtualize third-party systems behind its V-Series controllers.) EMC refers to this as "tier 4" in its Federated Tiered Storage offering, which expands EMC's FAST capabilities to encompass both SSD and numerous HDD options. The EMC VMAX group recommends what it describes as the "80/20 I/O skew rule" to size cache. This rule assumes that at any given time, only 20% of volumes are "hot," and within that 20% of hot volumes, only 20% of the data is hot. That equates to 4% of total data, which is the recommended starting point when sizing array SSDs. Interestingly, all the various sizing guidelines vendors use arrive at close to the 5%-of-capacity mark. If so many agree, there must be some validity to it.
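The 80/20 skew rule reduces to simple arithmetic; here's a sketch with an illustrative array size (the 100 TB figure is not from the article):

```python
# EMC VMAX "80/20 I/O skew rule" for a first cut at SSD tier sizing:
# 20% of volumes are hot at any time, and 20% of the data within those
# volumes is hot, so roughly 4% of total capacity is the starting point.

total_capacity_tb = 100          # illustrative array size
hot_volume_fraction = 0.20
hot_data_fraction = 0.20

ssd_start_tb = total_capacity_tb * hot_volume_fraction * hot_data_fraction
print(f"Suggested SSD starting point: {ssd_start_tb:.1f} TB "
      f"({ssd_start_tb / total_capacity_tb:.0%} of capacity)")
```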

Hitachi, while offering flash, SSD and third-party tiers, suggests a more top-down approach to using flash to improve performance. Hitachi's Dynamic Tiering strategy is based on the assumption that new data is usually the hot data. Therefore, it moves new data into flash initially and only moves it to a lower tier when hotter data displaces it. It's worth noting that both Hitachi and EMC extend their tiered data storage offerings to mainframe environments as well.



Under the covers of storage tiering

Although vendor hardware architectures differ as described, the underlying drives and boards are often quite similar. Hitachi, however, has specialized ASICs and processors that it refers to as hybrid control units. The ASICs are used as data movers, while the quad-core Intel processors track metadata. The philosophy is to move as much of the workload as possible into the hardware layer for maximum performance.

The automated data-moving software that has made tiering a practical solution is the most significant differentiation, and it's where the "art" comes into "state of the art." For example, Hitachi combines its hardware architecture with an object-based file system to track metadata, which it finds to be the most efficient process. Data movement is based on policies and usage characteristics. Data is moved in 42 MB pages, which fit neatly into cache sizes. Hitachi uses a "set and forget" philosophy to minimize manual effort, but data can be manually promoted to higher tiers in cases where usage can be predicted. An example of this would be month-end processing of certain specific data sets.

When and why data gets moved

Storage managers might assume that hot data is unpredictable and likely to occur at any time, and that data movement should therefore be frequent. In practice, most data movement schemes operate over a matter of hours and can take up to a day, meaning that movement between tiers is more trend-based than an immediate reaction to conditions. Because of this, HP suggests cache as the best technology for reacting to immediate, unpredictable I/O bursts. If unpredictability is high, IT managers might want to beef up on cache rather than hybrid pools.

When to move data is an important aspect of tuning these systems appropriately. EMC's VNX default data movement cycle is once a day, though users can set policies for more frequent moves. HP's Ibrix systems move data on a daily basis as well, but can move data as often as hourly. Data is moved based on scans of data segments for metadata that has become hot. Although scans of segments can be run in parallel, the company advises that too many scan jobs can unproductively consume back-end IOPS. HP's 3PAR arrays are capable of "non-disruptive" data movement (actually self-throttling that's transparent to the host and application), and data "heat" sampling can occur as often as every 30 minutes.


Even so, HP still recommends limiting data movement to only the frequency necessary.

At the other end of the spectrum, both the EMC VMAX and NetApp systems are designed for frequent data movement. VMAX moves 768 KB data extents, and NetApp moves 4 KB blocks. Because the number of I/Os needed to move such small amounts of data is also very low, the disruption in the grand scheme of things is minimal. In addition, EMC permits data to be "pinned" to cache, moved manually or scheduled for specific windows, e.g., between midnight and 2:00 a.m.

Data types that tier best

When matching storage tiering with use cases, nearly all vendors point to virtual desktop infrastructure (VDI) and virtual server environments. In virtual environments with shared storage, NetApp recommends doubling the amount of cache that one would otherwise allocate. The EMC VNX group describes the best use cases as "skewed data sets" where a subset of data is hot at any given time. In addition to VDI, this might include online transaction processing (OLTP) applications. Web-based file serving is another good target, as certain pages may be hit repeatedly compared to others.

Tiered data storage strategies that include SSD or other forms of flash for optimum performance at the lowest aggregate price will only become more robust. Though it's now a baseline function in most storage arrays, tiering remains one of the key technology considerations for storage managers. And because solid-state technology is fundamentally the same as server memory, it follows Moore's Law price/performance curves; cost per IOPS should fall significantly in the coming years.

Phil Goodwin is a storage consultant and freelance writer.


Quality Awards: NAS Systems

EMC, Hitachi Earn Top Marks in NAS Satisfaction Survey
NAS systems are the storage workhorses of most data centers. EMC's and Hitachi's NAS entries lead a strong field in our satisfaction survey.

By Rich Castagna

The Storage magazine/SearchStorage.com Quality Awards for network-attached storage (NAS) have always been a wide-open affair. The enterprise group has had six different winners in seven years and, after NetApp Inc. dominated early with three straight wins, there have been four different midrange NAS winners. In the latest installment of the user satisfaction survey, EMC Corp. regained the top spot among the enterprise group after being nudged out of the top NAS storage systems spot by Oracle Corp. last year, and Hitachi Data Systems Corp. prevailed among the midrange products, its first win since topping the enterprise field in the very first NAS Quality Awards survey.


Overall rankings

ENTERPRISE: EMC had the highest scores in two of the rating categories and tied for first in another on its way to a winning overall score of 6.84. NetApp captured two categories and finished in the top three in two others to roll up an overall 6.75. That was just enough to nudge out IBM (6.70), which registered a remarkably consistent performance with category scores ranging from 6.60 to 6.78, including one tie for first place. The average of all scores was a bit lower than in the past two surveys, but higher than in all earlier editions. In a strong field, Dell Inc. acquitted itself well with a 6.57, and first-time finalist DataDirect Networks Inc. debuted with a solid 6.27.

Overall ratings, enterprise NAS systems (8.00 scale): EMC 6.84; NetApp 6.75; IBM 6.70; Dell 6.57; DataDirect 6.27; HP 6.21; Oracle 5.43

MIDRANGE: Hitachi held off the midrange pack with an overall rating of 6.92, the third-highest score ever on a NAS Quality Awards survey. IBM, EMC and Dell all proved worthy competitors, but Hitachi overcame the challengers by garnering top marks in four of the five rating categories. Still, it was a close contest, with ever-present IBM a strong second at 6.81, EMC edging Hitachi by 0.01 point for initial product quality en route to a 6.75 rating and Dell posting a sturdy 6.66. If IBM was the model of consistency in the enterprise group, Hitachi was a paragon of consistency among its midrange peers: Its highest category rating was 6.97 and its lowest was 6.87, a slim span of only 0.10 point.

Overall ratings, midrange NAS systems (8.00 scale): Hitachi 6.92; IBM 6.81; EMC 6.75; Dell 6.66; NetApp 6.32; HP 6.13; Oracle 5.88


Sales-force competence

ENTERPRISE: Apparently, EMC's and IBM's sales forces do a good job of paving the way for their NAS products, as they finished the category in a dead heat. On the six rating statements, EMC's strengths were having a sales force knowledgeable about customers' industries, understanding their businesses and being easy to negotiate with. IBM stood out for having sales reps who keep customers' interests foremost, sales reps who are flexible and knowledgeable sales support teams. There were few slouches in this group, with an overall sales-force competence average that trailed only two previous surveys. Dell's third-place finish was highlighted by a 6.74 for understanding its customers' businesses.

Sales-force competence ratings, enterprise NAS systems (8.00 scale): EMC 6.70; IBM 6.70; Dell 6.59; NetApp 6.49; DataDirect 6.22; HP 6.21; Oracle 5.53

MIDRANGE: Hitachi's triumph in the midrange group was achieved by winning five of the six statements—with scores of 7.00 for "My sales rep understands my business" and "The vendor's sales support team is knowledgeable"—to help it toward a 6.90 category average. Second-place IBM posted statement scores ranging from 6.58 to a 6.78 for "My sales rep is knowledgeable about my industry." EMC's top score (6.79) was for being easy to negotiate with, while Dell earned its best mark (6.79) for having a knowledgeable support team. As did its enterprise brethren, the midrange group netted an average for the category (6.42) that was among the highest we've seen, just a shade off last year's 6.44.

Sales-force competence ratings, midrange NAS systems (8.00 scale): Hitachi 6.90; IBM 6.66; EMC 6.59; Dell 6.50; NetApp 6.28; HP 6.26; Oracle 5.76


Initial product quality

ENTERPRISE: NetApp notched one of its two category wins for initial product quality, riding 7.00-plus scores for products that are easy to get up and running and that need little vendor intervention. Its overall score of 6.90 was just enough to beat back a challenge from EMC at 6.85. EMC had the only other over-7.00 statement score, a 7.07 for "This product was installed without any defects." IBM was hard on EMC's heels with a category average of 6.78, including a 6.94 for the statement "This product delivers good value for the money." The enterprise products did well on the "good value" statement, with an average score of 6.45, the highest among all category statements.

Initial product quality ratings, enterprise NAS systems (8.00 scale): NetApp 6.90; EMC 6.85; IBM 6.78; Dell 6.45; DataDirect 6.14; HP 6.09; Oracle 5.45

MIDRANGE: EMC bested Hitachi in initial product quality by a minuscule 0.01—6.88 to 6.87. But Dell and IBM dogged the leaders with a 6.79 and 6.76, respectively. With such high ratings, it's not surprising there were five 7.00-plus statement scores, including three for Hitachi. For the statement "This product was installed without any defects," EMC (7.05), Hitachi (7.00) and Dell (7.00) all topped or equaled the 7.00 mark. EMC and Hitachi also split superiority on the individual statements, with three each. Vendors have apparently burnished the out-of-the-box experience, with high marks for installing without defects, not requiring excessive professional services, ease of use and delivering good value.

Initial product quality ratings, midrange NAS systems (8.00 scale): EMC 6.88; Hitachi 6.87; Dell 6.79; IBM 6.76; NetApp 6.45; HP 6.33; Oracle 5.90



Product features

ENTERPRISE: The product features rating category has traditionally been one of the strongest for network-attached storage systems, as vendors are apparently building in capabilities that meet—or exceed—users’ needs. NetApp snagged its second category win, but by the slimmest of margins as it outdistanced EMC by a mere 0.01 point. Both product lines fared well on all seven statements in the category, with NetApp having the best scores for features generally meeting needs (6.97), mirroring features (6.96), snapshotting capabilities (6.88) and scaling (6.88). EMC prevailed on the three remaining statements: remote replication (6.96), interoperability with other vendors’ products (6.90) and management features (6.77). IBM’s category rating score of 6.69 was good for third, highlighted by a 6.78 for “Overall, this product’s features meet my needs.”

MIDRANGE: Based on category ratings, the NAS storage systems features race among the midrange group is hotly contested. Hitachi’s 6.95 led the field on the strength of a 7.17 for mirroring features and a 7.06 for snapshots. IBM and EMC finished second and third, respectively, but were separated by only 0.01 point; both had ratings of 7.00 or better for capacity scaling. IBM just missed another 7.00 score with a 6.96 for remote replication, while EMC fell shy of 7.00 for mirroring (6.95). Dell (6.72) came in a very competitive fourth, with a 6.88 for overall feature set and a pair of 6.84 ratings for snapshotting and mirroring.


Product features ratings: Enterprise NAS systems
[Bar chart: vendor scores on a 1.00-8.00 scale]

Product features ratings: Midrange NAS systems
[Bar chart: vendor scores on a 1.00-8.00 scale]


Product reliability

ENTERPRISE: EMC’s second category win came for product reliability, for which it achieved a mark of 6.94—good for the highest average for all enterprise products in all ratings categories. EMC outran NetApp (6.84) and IBM (6.71) by leading on four of the five category statements and topping 7.00 on two of them (meeting service-level requirements and products that experience very little downtime). NetApp fared best for providing upgrade advice (6.94) and did quite well on the downtime and service-level statements. IBM’s third-place score was bolstered by a 6.94 for meeting expected service levels. The group’s average for delivering service per requirements was its highest for the category.

MIDRANGE: Hitachi (6.97) faced a strong challenge from IBM (6.93) in the product reliability category for midrange NAS, as both had multiple 7.00-plus ratings. Hitachi’s 7.00s were for meeting service-level requirements (7.06), having non-disruptive patching (7.06) and providing comprehensive upgrade guidance (7.00); IBM rated more than 7.00 for products that experience little downtime (7.08) and the service-level statement (7.04). EMC (6.78) and Dell (6.69) also had solid performances in the reliability category; in fact, the entire field did well, with all product lines averaging 6.00 or higher for the category. Meeting expectations and delivering consistently were hallmarks for this group, with the highest group averages coming on those two statements.


Product reliability ratings: Enterprise NAS systems
[Bar chart: vendor scores on a 1.00-8.00 scale]

Product reliability ratings: Midrange NAS systems
[Bar chart: vendor scores on a 1.00-8.00 scale]


Technical support

ENTERPRISE: EMC (6.87) climbed to the top of this category by slipping past three vendors—Dell (6.72), NetApp (6.70) and IBM (6.60)—that have proven their tech support mettle in previous surveys. EMC earned the top spot with a dominant performance that featured the best scores on six of the eight statements, including a 7.11 for having knowledgeable support personnel and a 7.00 for delivering support as promised. Dell had a 6.97 for “Vendor takes ownership of the problem.” NetApp’s highest score was 6.81 for having knowledgeable personnel, while IBM’s 6.71 was the group’s best for providing adequate training.

MIDRANGE: Hitachi earned its fourth category victory for technical support, coming out on top on five rating statements to average a 6.92 for the category over IBM’s 6.81. Hitachi’s advantage was due to a 7.06 for resolving problems in a timely manner and a 7.00 for rarely requiring escalation of support issues. IBM won the other three statements, with a 6.93 rating for documentation and support materials, a 6.89 for supplying support as contractually specified and a 6.81 for providing adequate training. EMC (6.62) and Dell (6.60) also enjoyed strong showings in the tech support category; EMC earned its best mark for the ownership statement (6.89), while Dell’s best effort was for documentation (6.82).


Technical support ratings: Enterprise NAS systems
[Bar chart: vendor scores on a 1.00-8.00 scale]

Technical support ratings: Midrange NAS systems
[Bar chart: vendor scores on a 1.00-8.00 scale]


Would you buy this product again?

We also ask respondents if they would buy the same product again after using it for some time. The across-the-board average of “yes” votes for enterprise NAS systems products was among the highest we’ve ever seen. In the midrange group, HP has a loyal following, despite not finishing among the top three in any rating category.


Products in the survey: These products were included in the Quality Awards for NAS survey. The number of responses for each finalist is shown in parentheses.

Enterprise NAS: DataDirect Networks Inc. NAS Scaler/GRIDScaler/EXAScaler/xSTREAMScaler (26)  •  Dell Inc. PowerVault NS-480/Compellent Storage Center zNAS/EqualLogic FS7500 (34)  •  EMC Corp. Celerra NS-480/NS-960 or VG8 or VNX 7500 NAS or Isilon IQ X-Series (47)  •  Hewlett-Packard (HP) Co. StorageWorks EFS Clustered Gateway or StorageWorks X5000/X9000 Storage Systems (27)  •  Hitachi Data Systems Corp. Essential NAS Platform 1000 Series or NAS Platform 3000 Series (BlueArc Titan 3000 Series)*  •  IBM N6000 or N7000 or Scale Out Network Attached Storage (SONAS) (19)  •  NetApp Inc. FAS3000/3100 or FAS6000 (all with NAS interface) (34)  •  Oracle Corp. Sun Storage 74xx Unified Storage System (with NAS) or Pillar Data Systems Axiom NAS  •  Panasas Inc. ActiveStor 9 Series/11 Series/12 Series/14 Series*

Midrange NAS: Dell PowerVault NS120 or NX200/NX300/NX3000/NX3100/NX4 (41)  •  EMC Celerra NX4 or NS-120 or VG2 or VNXe 5000 Series NAS or Isilon IQ S-Series (39)  •  Hewlett-Packard (HP) StorageWorks X300/X500 Data Vault or X1000/X3000 Network Storage Systems (33)  •  Hitachi AMS2000/1000/500/200 or WMS100 with NAS Option or BlueArc Titan Mercury 50/100  •  IBM N3000 or N5000 (18)  •  NetApp FAS2000 (with NAS interface) (28)  •  Oracle Sun Storage 71xx/72xx/73xx Unified Storage System (with NAS) (25)  •  Overland Storage Inc. SnapServer DX1/DX2/210/410/620/650/N2000/SnapScale X2*  •  Panasas ActiveStor 7 Series/8 Series*  •  Silicon Graphics International Corp. InfiniteStorage File Serving series*  •  Synology Inc. RS3412 Series*

*Too few responses to qualify as a finalist

About the survey: The Storage magazine/SearchStorage.com Quality Awards are designed to identify and recognize products that have proven their quality and reliability in actual use. The results are derived from a survey of qualified Storage/SearchStorage.com readers who assessed products in five main categories: sales-force competence, product features, initial product quality, product reliability and technical support. Products are rated on a 1.00-8.00 scale, where 8.00 is the most favorable score. This is the seventh edition of the Quality Awards for NAS systems; there were 244 valid responses to the survey with 441 sets of ratings for vendors’ products/product lines.

Would you buy again ratings: Enterprise NAS systems
[Bar chart: percentage of respondents who would buy the product again]

Would you buy again ratings: Midrange NAS systems
[Bar chart: percentage of respondents who would buy the product again]
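To illustrate how category averages like the ones cited above can be rolled up from individual statement scores, here is a minimal sketch in Python. The statement scores and the equal-weight averaging used below are assumptions for illustration only, not Storage magazine's actual survey data or weighting formula.

```python
# Minimal sketch: rolling up Quality Awards-style statement scores (1.00-8.00 scale)
# into category ratings and an overall average. All numbers here are hypothetical.

def average(scores):
    """Equal-weight average, rounded to two decimals like the published ratings."""
    return round(sum(scores) / len(scores), 2)

# Hypothetical statement scores for one vendor, grouped by rating category
statement_scores = {
    "sales-force competence": [7.00, 7.00, 6.85, 6.80, 6.90, 6.85],
    "initial product quality": [7.05, 6.90, 6.80, 6.75, 6.90],
    "product features": [6.97, 6.96, 6.88, 6.88, 6.80, 6.75, 6.70],
    "product reliability": [7.06, 7.06, 7.00, 6.85, 6.90],
    "technical support": [7.06, 7.00, 6.90, 6.85, 6.80, 6.75, 6.88, 6.92],
}

category_ratings = {name: average(scores) for name, scores in statement_scores.items()}
overall_rating = average(list(category_ratings.values()))

for name, rating in category_ratings.items():
    print(f"{name}: {rating:.2f}")
print(f"overall: {overall_rating:.2f}")
```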


Hot spots  |  Terri McClure

Online file sharing comes of age just in time

ESG reviewed a slew of online file-sharing and collaboration applications to see if the market is finally maturing to a point where SMBs and enterprises alike can find a worthwhile IT investment.

The Enterprise Strategy Group (ESG) recently conducted a test drive of 13 corporate online file-sharing (OFS) and collaboration offerings. While this was an unsponsored assessment, participating vendors were asked to provide temporary licenses for testing purposes. Our goal was to uncover the strengths, areas of improvement and distinctive capabilities of each vendor’s offerings, and obtain a real-world view of OFS solutions from the perspective of end users and IT administrators alike. Each product was tested using the same factors across four administration categories and six end-user categories, and each vendor was asked to identify three differentiated features we could test.

We then rated our user experiences across these categories on a scale that ranged from “Features were lacking or performed below expectations” to “Fast, easy, intuitive and exceeded expectations.” A variety of products were tested, some offered as services and others as on-premises software. Each vendor decided what specific deployment model and license level we would test.

What we learned about OFS offerings

Our investigative work made one thing abundantly clear: Corporate online file-sharing and collaboration products are maturing from an administrative standpoint. There was a point when these tools weren’t keeping pace with the demands of businesses, but the market now seems poised to hit its stride.


A majority of the early OFS offerings were cloud-based services originally designed for consumer use, primarily photo and music sharing between users and then between computers and portable devices.

As consumer devices like smartphones and tablets have made their way into the business, so have these products. However, consumer offerings weren’t designed to provide IT with administrative control or visibility into the file-sharing environment. That led to the development of products for business use. Some of these business-focused offerings are evolved consumer services—Dropbox for Teams is probably the most well-known—while others were designed from the ground up for business use, such as Syncplicity (now owned by EMC) and ShareFile (now owned by Citrix). And some, like GroupLogic activEcho and Accellion Secure Mobile File Sharing, are enterprise software offerings that can be installed on the premises so all data remains behind the firewall (Accellion also offers a cloud-based service).

In our testing, we found functionality ranging from basic file permission and account provisioning to incredibly detailed and deep functionality with “every setting an IT administrator could ever ask for.” A comprehensive report on the results is well beyond the scope of what can be covered in this column, but here are the highlights.

At the basic end are popular consumer offerings, such as Dropbox for Teams, SugarSync for Business and SOS Collaborate (now called InfraScale FileLocker), that have added administrative control and management functionality. These are targeted at departmental or small-to-midsize business use. The administrative control is fairly lightweight, which isn’t surprising given the SMB focus. The golden rule in this market segment is fast and easy, and these products accomplish that from both an administrative and end-user standpoint.

The next level up has services (public cloud-based and hybrid) targeted at larger businesses (those with a desire for more administrative control). Most of the offerings tested fell into this category, and there was an interesting array of functionality.



At the basic level, Box did a nice job of balancing ease of use and administrative depth and control, and we liked the integrated comment threads and array of application integration. Some offerings, like Huddle from Ninian Solutions, are more about managing projects than files. Huddle is built around social collaboration and has integrated calendaring, workrooms and discussion threads.

Egnyte is more of an extension of the existing storage environment. It allows IT to use the existing NAS environment and mirror or extend that to the cloud; at the same time, it extends file sharing to mobile devices and desktops with easy-to-use clients. Soonr has solid administrative control, a mobile client with integrated editing and annotation, and pixel-perfect displays for PowerPoint charts. Citrix ShareFile is well suited to project management (and has some interesting security features that limit data retention on mobile devices), while Nomadesk is one of the few offerings to encrypt data on the laptop client. YouSendIt excels as an FTP replacement and has solid, basic file-sharing functionality, while EMC Syncplicity has a good array of administrative control and excellent sync performance.

We then looked at software with the sort of highly configurable enterprise functionality found in GroupLogic and Accellion. GroupLogic is the only on-premises solution we tested. We tested Accellion’s service offering, but much of its installed base runs on premises. The obvious advantage of an on-premises offering is that data kept inside the firewall is subject to the IT department’s security and data protection practices. We found both of these products offered a tremendous depth of control and security settings, but it takes IT resources to manage them. They’re also targeted at larger enterprises that want and need these levels of control. While they have default settings and can be run with minimal IT intervention—the value in these offerings is the flexibility, control and security they offer—you need resources to take advantage of that. These products weren’t quite as intuitive or easy on the end user as those from the consumer space, but it’s a small price to pay if an organization needs these levels of control.



Bottom line

The corporate online file-sharing and collaboration space is crowded and will likely become even more populated over the next year. IT needs to mobile-enable the workforce or employees will use their own file-sharing solutions. This is a dangerous practice because IT loses control of data, and has no visibility into what data is stored where or shared with whom.

In our test drive we found that the market has come a long way in the past 18 months as business-focused offerings have begun to emerge and mature. Some solutions excel at sharing, others at collaboration or project management. Administrative functionality ranges from basic to deep and wide. No matter the size of your storage shop or IT staff, there should be an option that appeals to your organization.

Terri McClure is a senior storage analyst at Enterprise Strategy Group, Milford, Mass.


Take quality of service to a new level

Tuning storage to meet the specific needs of applications is still an arcane art, but new tools that offer more automation are emerging.

Read/write  |  Arun Taneja

Practically every storage array vendor claims its box has quality of service (QoS) built in. To a degree, all these vendors are correct. The trouble, however, is how each one defines quality of service.

If you define QoS as the features built into your array, then, based on that loose definition, you have QoS capability. I checked Wikipedia for a common definition. The term QoS entered our vocabulary through telephony and networking technologies. The one sentence that caught my eye in the Wikipedia entry was, “Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.” This is probably the best description that can be applied to storage.

The basic issue we’ve grappled with for decades is how to deliver the right storage performance to an application. Earmarking capacity with certain per-formance characteristics has now become commonplace. LUNs can be created using a variety of RAID types, and volumes can be carved out and allocated to applications. But if two or more applications are served from the same LUN, each application asks to be serviced and gets whatever is available. So, some-times the application is starved for I/O and at other times it has more than enough, which makes application performance unpredictable. That problem is compounded by server virtualization because 10 applications may be running on the same server accessing the same datastore. Application performance becomes even more unpredictable.


There are a number of ways to deal with this. For starters, every storage array comes with tools that provide performance information. If you don’t like what you see, you make changes. Take the application off that server completely and fire up a new one. See where the hot spots are for each application. Perhaps create a new LUN for that application. But in most cases the process is manual. Auto-tiering software has helped to automate the process. Early auto-tiering products would move an application that needed faster storage response to a higher tier of storage. Auto-tiering was further refined to work at a sub-volume level. That meant only the data that was hot was automatically moved to a higher tier, while “cooler” data was moved to a lower tier.
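As a rough sketch of the sub-volume approach (track I/O per extent over a monitoring window, then promote the hottest extents to the fast tier and demote the rest), consider the following Python. The extent granularity, heat counts and tier sizes are assumptions for the example, not any vendor's actual implementation.

```python
# Sketch of sub-volume auto-tiering: place the hottest extents on the fast tier.
# Extent names, capacities and I/O counts below are hypothetical.

def plan_tiering(extent_heat, fast_tier_extents):
    """Return (promote_to_fast, demote_to_slow) given I/O counts per extent."""
    # Rank extents by observed I/O, hottest first
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    hot = set(ranked[:fast_tier_extents])   # extents that earn a spot on flash
    cold = set(ranked[fast_tier_extents:])  # everything else stays on disk
    return hot, cold

# Hypothetical per-extent I/O counts collected over a monitoring window
heat = {"ext-0": 120, "ext-1": 5, "ext-2": 900, "ext-3": 47, "ext-4": 310}
promote, demote = plan_tiering(heat, fast_tier_extents=2)
print("fast tier:", sorted(promote))   # ext-2 and ext-4, the two hottest extents
print("slow tier:", sorted(demote))
```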

Unfortunately, most QoS implementation efforts stop right there. That still represents a giant step forward, but I think there’s another key step, without which QoS remains incompletely realized. That final step is to guard against the following condition: There are three applications vying for I/O. From a business perspective, application three is the least important app but it’s behaving in a rogue fashion and hot spots are everywhere. So it gets serviced using the QoS method described above, but that process starves applications one and two simply because application three asked for the services first.

In an ideal environment this should never happen. Each application should be prioritized at the outset with a minimum set of services (IOPS, throughput, latency and so on) assigned by the administrator. The storage array should have the intelligence built in to automatically deal with the constantly changing performance landscape. And, regardless of anything else, it must provide the minimum set of assigned resources to each app and allocate any excess available in the order of the established priorities. All of this should be managed by the array with no human intervention needed.
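A minimal sketch of that allocation logic, in Python, might look like the following. The application names, minimum and maximum IOPS figures, and priorities are hypothetical, and no particular array implements the policy exactly this way: every app gets its assigned floor first, and any surplus is handed out in priority order.

```python
# Sketch of priority-aware QoS allocation: honor each app's minimum IOPS first,
# then hand leftover capacity to higher-priority apps before lower-priority ones.
# App names, minimums, maximums and priorities are hypothetical.

def allocate_iops(apps, total_iops):
    """apps: list of dicts with 'name', 'min_iops', 'max_iops', 'priority' (lower = more important)."""
    grants = {a["name"]: a["min_iops"] for a in apps}   # guarantee every floor first
    surplus = total_iops - sum(grants.values())
    if surplus < 0:
        raise ValueError("Array cannot honor the configured minimums")
    for app in sorted(apps, key=lambda a: a["priority"]):
        want = app["max_iops"] - grants[app["name"]]
        extra = min(want, surplus)                       # top up in priority order
        grants[app["name"]] += extra
        surplus -= extra
    return grants

apps = [
    {"name": "app1", "min_iops": 20000, "max_iops": 60000, "priority": 1},
    {"name": "app2", "min_iops": 15000, "max_iops": 40000, "priority": 2},
    {"name": "app3", "min_iops": 5000,  "max_iops": 80000, "priority": 3},  # the rogue app
]
print(allocate_iops(apps, total_iops=70000))
# prints {'app1': 50000, 'app2': 15000, 'app3': 5000}: the rogue app keeps only its floor
```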

Right now we’re close to this level of QoS sophistication, but only a few arrays have this type of intelligence and control built in. In a virtualized or cloud environment, where there could be tens, maybe hundreds of applications



In a virtualized or cloud environment, where there could be tens, maybe hundreds, of applications running as virtual machines, the only realistic way to allocate storage performance is via automation.

No discussion of a QoS implementation would be complete without mentioning that many other parameters can be brought under the umbrella of QoS. For instance, QoS might also include the level of data protection, the “breadth” of access (synchronously multisite, asynchronously globally). QoS might need to extend to cover the cache “contention” policies so cache usage can be prioritized. It should be noted that VMware’s virtual volume (vVol) concept, introduced at VMworld 2012, has QoS implications. We’ll take a look at vVols and their potential impact on QoS in a future column. Flash may also change how QoS is managed in hybrid and all-solid-state arrays (another topic for a future QoS discussion).
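To make that broader policy idea concrete, here is a hypothetical per-application QoS policy expressed as a Python dictionary. The field names and values are illustrative assumptions only; no vendor's policy syntax or defaults are implied.

```python
# Hypothetical per-application QoS policy that goes beyond raw performance.
# Every field name and value below is illustrative, not any array's actual syntax.
oltp_policy = {
    "priority": 1,  # business priority used when the array runs short of resources
    "performance": {
        "min_iops": 25000,          # guaranteed floor
        "max_latency_ms": 2,        # latency target
        "min_throughput_mbps": 400,
    },
    "data_protection": {
        "raid_level": "RAID 6",
        "snapshots_per_day": 24,
        "replication": "synchronous, multisite",  # the "breadth" of access
    },
    "cache": {
        "priority": "high",         # cache-contention policy
        "max_cache_share_pct": 40,
    },
}

print(oltp_policy["performance"]["min_iops"])
```

In practice, the array would enforce a set of such policies continuously once they are defined, with no further administrator intervention.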

For now, when you’re evaluating a storage array you should ask how perfor-mance provisioning is done, and also ask the vendor how you can get “micro” control of the allocation of scarce resources, and not just capacity. n

Arun Taneja is founder and president at Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies.


Snapshot

Small but satisfied group of converged storage system users

Converged storage systems are all-in-one kits that include a storage array, servers, networking and usually a pre-installed hypervisor. All the major data storage vendors have either partnered with other vendors or pooled their own resources to offer converged systems. It’s a relatively new idea, so it’s no surprise that just 17% of our survey respondents say they’re using one of these storage stacks (59% of them have had their gear for a year or more). The main reason for buying in was the convenience of the converged architecture (32%); preconfiguration was another selling point (24%), along with cost savings (18%). Converged systems users are pretty happy: 39% rate their experience as “very favorable” and 46% say it’s “somewhat favorable”; nobody was disappointed and only 14% are still on the fence. Use cases for converged storage fall mainly into two camps: general-use primary storage (38%) and dedicated to one mission-critical app (30%). —Rich Castagna

How long has your company been using its converged storage system?
[Pie chart: “More than a year” leads at 59%; the remaining responses are split among “Less than 6 months,” “6 months to a year” and “Brand new/not in production yet”]

18%: the percentage of respondents planning to buy a converged storage system in the next 12 months.

What has been the greatest benefit/shortcoming of your company’s converged storage system?
[Bar charts: benefits cited include easier, faster deployment; easier management than separate parts; consolidated tech support; lower cost; and other benefits. Shortcomings cited include price that was the same as or more than buying parts separately; management that wasn’t as easy as expected; tech support from multiple vendors; components that aren’t best of breed; and other shortcomings]


Storage magazine

Editorial Director: Rich Castagna; Senior Managing Editor: Kim Hefner; Executive Editor: Ellen O’Brien; Contributing Editors: James Damoulakis, Steve Duplessie, Jacob Gsoedl

SearchStorage.com

Executive Editor: Ellen O’Brien; Senior News Director: Dave Raffo; Senior News Writer: Sonia R. Lelii; Senior Writer: Carol Sliwa; Senior Managing Editor: Kim Hefner; Assistant Site Editor: Ian Crowley

SearchCloudStorage.com

Executive Editor: Ellen O’Brien; Senior Managing Editor: Kim Hefner; Assistant Site Editor: Ian Crowley

SearchDataBackup.com, SearchDisasterRecovery.com, SearchSMBStorage.com, SearchSolidStateStorage.com

Senior Site Editor: Andrew Burton; Managing Editor: Ed Hannan; Associate Site Editor: John Hilliard; Features Writer: Todd Erickson

SearchVirtualStorage.com, SearchStorageChannel.com

Senior Site Editor: Sue Troy; Assistant Site Editor: Sarah Wilson

Storage Decisions, TechTarget Conferences

Director of Editorial Events: Lindsay Jeanloz; Editorial Events Manager: Jacquelyn Hinds

Subscriptions: www.SearchStorage.com; Storage magazine, 275 Grove Street, Newton, MA; [email protected]

TechTarget Inc., 275 Grove Street, Newton, MA 02466, www.techtarget.com

©2013 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.

Photograph on cover and page 11: Michael J. Hipple/Getty Images/age fotostock

TechTarget Storage Media Group