VIRTUALIZED CREATIVITY: APPLIED CONCEPTS ON VMWARE AND VNXE STORAGE

Ramiro A. Canovas
EMC Corporation
Table of Contents
Virtual Evolution
Introducing virtualization to Creative Industries
Reaching maximum benefits with Three “Vs”
VNX & vSphere
VMware Fusion
Through the Odds: VMware Disaster Recovery & VNXe Replication
Utilizing storage to reach maximum benefits: VNXe Best Practices
Achieving a solid integration: VNXe & VMware Best Practices
Leaving a virtual conclusion
Bibliography
Disclaimer: The views, processes, or methodologies published in this article are those of the author. They do not necessarily reflect EMC Corporation’s views, processes, or methodologies.
Virtual Evolution

If Darwin were still alive, he would probably recognize that his theory of evolution applies not only to Earth's species but also to Information Technology (IT). He once stated that “complex creatures evolve from more simplistic ancestors naturally over time”. Ten years ago, data centers had large hardware devices and simplistic configurations. With time and external factors such as data center floor space, power consumption, and a decrease in price per megabyte (MB), what used to be a “simplistic ancestor” started becoming a more advanced and complex creature. What used to be large hardware devices started becoming multi-use virtual hosts; what used to be simple configurations started to become more complex.
But can virtualization be considered a complex creature? On the one hand, through virtualization, data centers have been able to scale production, increase physical space and data storage capacity, and start aiming at consolidation at both the host and the storage level. On the other hand, configurations are no longer simplistic; they present complexity and vendor diversity in the environment. More does not necessarily mean better, and newer does not necessarily mean appropriate. The key resides in applying the concepts to business strategies
so virtualization can support and enable such strategies. More importantly, it is imperative to differentiate among the concepts of virtualization: hardware virtualization, where servers are virtualized to host different operating systems; desktop virtualization, which allows users to utilize clients that do not necessarily reside on someone's desk; and data storage virtualization, where different devices can be presented as a single flexible device.

Figure 1: Virtualization timeline. Large fixed hardware with specific usage (1960s-70s); basic and limited OS virtualization with the Intel 80286 and Merge/386 (late 1980s); VMware introduces the Virtual Platform for x86 (late 1990s); virtualization diversifies to several OSes and expands to data storage (2000-2008); vSphere 5.0, VMware Fusion, Workstation 8.0, and VNXe (2012). Along the way, data centers moved from bare metal, to less than 20% virtualized, to 40% virtualized.
Now, what if you could utilize all three virtualization concepts to create benefits for, say, an advertising organization? What if you could create computer animations with half the money and twice the speed of usual renderings? What if you could run a whole creative department with just a few gigabytes on your hard drive? What if you could have terabytes of streaming media within reach without the need for breakable external drives? What if you could run an entire office without interruption or fear of unexpected disaster?
This Knowledge Sharing article describes a case study of an advertising agency undertaking a full virtual restructuring with vSphere, VMware Fusion, and VNXe®, demonstrating the endless possibilities that virtualization has brought not only to the IT industry but also to industries where flexibility and ideas are essential.
Introducing virtualization to Creative Industries

While virtualization is spreading across all industries, there are still those that are reluctant to use virtualization to gain a competitive advantage.
The advertising industry is known for its flexibility and its need for creativity: flexibility to be able to work at all times of the day, and creativity to have every resource within easy reach when coming up with an imaginative idea. Most advertising agencies utilize powerful software to create and edit graphics, logos, animations, videos, websites, and so on, which means that the industry is very heavy on media files.
Entering an advertising agency office for the first time, one notices two big realities: a large number of users run Mac OS, and numerous external hard drives are attached to each system. Most advertising professionals do believe in backup, but only after having lost important media jobs. They also resist structure, and such limitations can sometimes present boundaries for ideas or “out of the box” thinking.
A final important aspect to consider is that the advertising industry is a somewhat saturated arena: there are a lot of participants, but only a few have the budget to constantly buy new technologies.
Figure 2: Virtualization per sector
Reaching maximum benefits with Three “Vs”

Taking all the above characteristics into consideration, how can hardware, desktop, and data storage virtualization help a small to medium size advertising business gain a competitive advantage? How can virtualization consolidate the IT infrastructure without imposing limitations on its professionals, allowing them flexibility at all times?
VNX & vSphere

To better exemplify this case study, a problem will first be identified, followed by its resolution through virtualization.
The advertising agency was growing at a very fast rate, and so was its data

The agency started with 10 employees and, in a timeframe of 12 months, grew to 45 employees, even having to relocate to a new office. At the time, there was no data collaboration: each employee kept their own files and shared them through email or an external drive. When the agency had just 10 employees, data was estimated at approximately 1 TB. After the 12-month expansion, data grew to 50 TB.
Figure 3: Advertising Agency Growth (employees and data in TB, Q1 through Q4)
The solution to these growth challenges in a traditional environment would be to install a physical server with a Microsoft Small Business Server OS offering a shared drive. The key when sizing
environments is to look beyond the present and prepare the environment for the future. The elements below were decisive in following a different approach:
• Trends showed that the agency's employee base was growing fast. With 350% employee growth in one year, it is reasonable to anticipate that the agency will grow beyond 75 employees in the next two years (see the projection sketch after this list). Why does this matter? In a traditional implementation, Small Business Server OS supports up to 75 users.
• Taking into consideration that data grew by 49 TB in less than a year, it is very likely that it will double in the next year (especially in such a media-file-intensive organization). In a traditional environment, the physical server itself will not be able to provide the storage required; it will need a DAS, NAS, or SAN solution to provide enough space.
• A traditional approach might tend to solve a single need, but with growth, new instances and challenges arise. A physical Business Server might satisfy data growth challenges, but what about business continuity? What about the need to quickly provide services and applications? What about the OS diversity in the environment?
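The capacity reasoning above can be made concrete with a quick back-of-the-envelope projection in Python. The sketch is illustrative only: the year-one figures come from the agency's growth, while the deceleration to roughly 30% employee growth per year and the yearly doubling of data are hypothetical planning inputs, not data from the case study.

```python
# Back-of-the-envelope growth projection (illustrative assumptions only)
employees, data_tb = 45.0, 50.0        # observed at the end of year one
EMP_GROWTH, DATA_GROWTH = 1.30, 2.0    # assumed: ~30%/yr employees, data doubles

for year in (2, 3):
    employees *= EMP_GROWTH
    data_tb *= DATA_GROWTH
    print(f"Year {year}: ~{employees:.0f} employees, ~{data_tb:.0f} TB")
    if employees > 75:
        print("  -> beyond the 75-user limit of Small Business Server")
```

Even with growth assumed to slow sharply from the observed 350%, the agency crosses the 75-user threshold by year three, which is what ruled out the traditional Small Business Server route.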
Considering the above elements, the best solution for short- and long-term plans was to create
a virtualized environment with EMC VNXe® and VMware vSphere.
• VNXe 3300 with 100 TB of user space
• 2 quad-core servers with 64 GB of RAM and 4 GigE Ethernet ports
• vSphere 5.0
Figure 4: Initial Basic Configuration
vSphere 5.0 offered the same advantages as the traditional approach, plus many more. Once ESXi 5.0 was installed and configured on the server, a 1 TB LUN was presented from the VNXe.
This 1 TB LUN would serve as the datastore for all guest OSes. The first and most important guest OS was a Windows Server 2008 R2 instance that served basically the same purposes as Small Business Server 2011 would in a traditional approach. The OS drive was carved out of the datastore with 250 GB of space (OS drives tend to fill up on business servers).
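As an illustration of this provisioning step, the sketch below creates a VMFS datastore on a newly presented LUN using the vSphere Python bindings (pyvmomi). It is a minimal sketch, not the procedure used in the case study: the host name, credentials, and datastore name are hypothetical, and in practice the same step is typically done from the vSphere Client.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect straight to the ESXi host (hypothetical host name and credentials)
si = SmartConnect(host="esxi01.agency.local", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.agency.local", False)
ds_sys = host.configManager.datastoreSystem

# Pick the newly presented VNXe LUN and ask the host how it can be formatted
disk = ds_sys.QueryAvailableDisksForVmfs()[0]
options = ds_sys.QueryVmfsDatastoreCreateOptions(disk.devicePath)

# Use the first suggested layout and name the new VMFS datastore
spec = options[0].spec
spec.vmfs.volumeName = "Datastore01"
datastore = ds_sys.CreateVmfsDatastore(spec)
print("Created datastore:", datastore.name)
```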
The Creative department required a common drive with at least 30 TB of space

Once Windows 2008 R2 was installed as a guest OS, a second 30 TB RAID 5 LUN was presented to the ESXi server and subsequently to the W2K8 guest OS. This drive was named Collaboration, and it served as the common drive for the Creative department. To make this work, the guest OS also served as an Active Directory server with a set of usernames and privileges, so each user could access the drive with their own identity.
Figure 5: LUN presentation (the 1 TB datastore drive and the 30 TB Collaboration LUN presented through the Business Server guest OS to PC and Mac users)
The great thing about sharing a Windows drive is that it can be seen and used regardless of the OS in use at the agency. In this case, the advertising agency was composed of 80% Mac systems and the rest Windows systems. The Mac systems were able to utilize the sharing capabilities through Server Message Block (SMB).
Email, Invoice/Calendar, and Web servers were outsourced to different providers

With growth came the need to find ways to efficiently cut costs while keeping control of expansion. With three different providers for Email, Invoice/Calendar, and Web services, management and control became difficult. Taking advantage of the already deployed virtual infrastructure, all of these services were brought in-house:
Email Server: The email platform brought in-house was Exchange 2010. This topic alone could fill a separate white paper, but the basic configuration implemented after several
rounds of testing (for fewer than 100 users) was: two VMs with W2K8 x64 and 15 GB of RAM each. The virtual machines were placed in an Exchange RAID 5 datastore created on the VNXe, and the log files in a 4+4 RAID 1/0 group. Moreover, a Performance Pool with Exchange 2010 characteristics was created for this instance, one of the many useful features of the VNXe.
Be careful not to select NL-SAS drives when creating an Exchange pool on the VNXe; they are not recommended for Exchange pools.
Figure 6: Creating Exchange
Invoice/Calendar Server: This service was also outsourced and accessed through the “cloud”, but the price was getting too high as the agency grew. The solution was to host the software in a virtual machine with the following simple configuration: a single VM with 4 GB of RAM. The virtual machine was placed in the VM datastore, and an RDM was created out of a 4+1 RAID 5 1 TB LUN.
Web Server: The web server was also brought in-house, and the setup was simple as well: a 4 GB RAM Linux guest OS with a static IP address (on a separate vSwitch) to serve the several agency websites. The drive was provided by a 500 GB RAID 1/0 LUN.
Figure 7: Application Servers (Email, Calendar, and Web Server guest OSes with their 1 TB, 500 GB, 350 GB, and 30 GB drives)

Figure 8: Application Servers in vCenter
The Animation department was utilizing several large servers to perform rendering

Rendering is the “process of generating an image from a model (or models in what collectively could be called a scene file), by means of computer programs”. Animations, video games, movies, special effects: all are generated thanks to rendering. Before virtualization, the animation department was utilizing three large bare-metal servers that were directly connected to each other in the same room. The rendering job was getting done, but larger rendering projects required the purchase of a new server, which then needed to be configured for connectivity.

With the already existing vSphere ESXi configuration, the rendering job became more dynamic, faster, and much more efficient. A true rendering farm was created:
• Created five W2K8 guest OSes to serve as rendering nodes
While there are several advantages to virtualizing a rendering node, the biggest are ease of deployment and resource segregation. The five new W2K8 guest OSes were easily deployed utilizing the cloning feature in vCenter Server. Moreover, each virtual machine was assigned 4 GB of memory and allocated under a Resource Pool named “Rendering Farm Pool” with High and expandable shares. In this way, the memory allocation is already dedicated to each server, and resource contention is avoided when performing the rendering. All the rendering nodes share the same attributes so they can serve as parallel rendering power for the farm (a scripted sketch of this cloning and pool setup follows Figure 9):
Figure 9: Guest OS as rendering nodes
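As an illustration of how such a deployment could be scripted, the following pyvmomi sketch clones five nodes from a template and places them in a resource pool with High, expandable shares. It is a hedged sketch, not the exact procedure from the case study: the vCenter name, credentials, inventory paths, and template name are all hypothetical.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.agency.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
idx = si.content.searchIndex

# Hypothetical inventory paths for the template VM and the cluster
template = idx.FindByInventoryPath("Agency/vm/W2K8-RenderTemplate")
cluster = idx.FindByInventoryPath("Agency/host/Cluster01")

def high_expandable():
    # High shares, no reservation or limit, expandable reservation
    return vim.ResourceAllocationInfo(
        reservation=0, limit=-1, expandableReservation=True,
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high, shares=0))

pool_spec = vim.ResourceConfigSpec(cpuAllocation=high_expandable(),
                                   memoryAllocation=high_expandable())
pool = cluster.resourcePool.CreateResourcePool("Rendering Farm Pool", pool_spec)

# Clone the five rendering nodes straight into the new pool
for i in range(1, 6):
    spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(pool=pool),
                            powerOn=True, template=False)
    template.CloneVM_Task(folder=template.parent, name=f"render-node-{i}",
                          spec=spec)
```

Each node's 4 GB memory assignment would come from the template itself; the pool's expandable shares mirror the "High and expandable" setting described above.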
• Created a separate standard Gigabit Ethernet switch
If not segregated, bandwidth performance will be degraded when rendering through a network. This is why it was important to create a separate virtual switch (vSwitch), so the rendering farm could perform well on a dedicated gigabit switch. To accomplish this, two Ethernet ports had to be dedicated to the rendering vSwitch and associated with all five rendering nodes. By doing so, internal network traffic is segregated from the rendering farm network. Moreover, NIC teaming was performed between the two physical NICs so that the rendering network could have load balancing and, hence, greater throughput (a scripted sketch follows Figure 10):
Figure 10: Network for Rendering
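The same vSwitch and teaming setup could be scripted as below with pyvmomi. This is a minimal sketch under stated assumptions: the host name, vSwitch name, port group name, and NIC devices are hypothetical. Bonding the two physical NICs into the vSwitch provides the teaming described above.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.agency.local", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.agency.local", False)
net_sys = host.configManager.networkSystem

# New vSwitch bonded to the two physical NICs dedicated to rendering traffic
spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"]))
net_sys.AddVirtualSwitch(vswitchName="vSwitch-Rendering", spec=spec)

# Port group that the five rendering nodes connect their vNICs to
pg = vim.host.PortGroup.Specification(
    name="RenderingFarm", vlanId=0, vswitchName="vSwitch-Rendering",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg)
```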
• Carved guest OSes from the datastore LUN
The rendering nodes are CPU- and memory-intensive and do not require large disk space. For this reason, there was no point in presenting RDM or VMFS LUNs to each rendering node. All that was needed was the OS and extra space for logs (approximately 50 GB apiece).
Professionals need to access systems and applications at all times, from everywhere

Although this relates more to the Virtual Private Network (VPN) software used to allow it, it is important to note that the VPN was set up on the same W2K8 guest (the Business Server). In this way, the Business Server that hosts the Collaboration drive and Active Directory also served as the VPN server. This opened the door to accessing the network remotely from any device with the correct user authentication.
The video department was utilizing 20+ external hard drives as an archive library

The archive library was a repository that was seldom accessed (or at least far less than the Collaboration drive). To allow easier management and deployment, it was decided to use the CIFS share function of the VNXe array.
Figure 11: Creating a CIFS share
Before creating the shared folder, it was important to create a shared folder server on the VNXe and enable the CIFS protocol (instead of NFS). As mentioned earlier, CIFS shares can easily be presented to Mac systems through the Server Message Block (SMB) protocol. It was also important to specify a domain administrator username in the shared folder server so that it could be added to the Active Directory server (the W2K8 guest OS).
Figure 12: Configuring VNXe CIFS Server
Once the shared folder “Archive Library” was created, all contents from the 20+ external drives were copied to the new shared folder and easily provisioned to each Mac user that required access to the library (a mount sketch follows Figure 13).
Figure 13: CIFS shares
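For illustration, a small helper like the one below could mount the new share on each Mac. It is a hypothetical sketch: the CIFS server name, share name, and mount point are assumptions, and mount_smbfs is the built-in SMB mount tool on Mac OS X.

```python
import os
import subprocess

SERVER = "vnxe-cifs.agency.local"   # hypothetical shared folder server
SHARE = "ArchiveLibrary"
MOUNT_POINT = os.path.expanduser("~/ArchiveLibrary")

os.makedirs(MOUNT_POINT, exist_ok=True)

# Mount the CIFS share over SMB as the logged-in AD user
subprocess.run(
    ["mount_smbfs", f"//{os.getlogin()}@{SERVER}/{SHARE}", MOUNT_POINT],
    check=True)
print(f"Mounted //{SERVER}/{SHARE} at {MOUNT_POINT}")
```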
VMware Fusion

Mac users needed to run and test Windows-based applications

Two departments, Web Development and Administrative, were highly affected by the restriction of having only Macs.
Although the web development department worked mainly on Macs to perform the initial development, they would always experience a setback at the testing stage. This stage required the websites, Flash applications, and embedded widgets to be tested on different web browsers and different platforms (after all, users also access websites through Windows machines). By installing VMware Fusion, the development department was able to start testing on every possible browser and platform.
The Administrative department often complained that Microsoft products such as Word and Excel did not run as smoothly as they would on Windows. Moreover, every time they were requested to attend an online meeting through Microsoft NetMeeting, they were not able to join through their Mac browsers. By installing VMware Fusion in the Administrative department, users were able to successfully utilize Microsoft Office software on the Windows platform and free themselves from Mac application boundaries.
Figure 14: VMware Fusion for the Dev and Admin departments
Through the Odds: VMware Disaster Recovery & VNXe Replication

All user backups were performed to individual external hard drives

A fully virtualized environment does not amount to much if it does not have a disaster recovery plan. How can the agency survive hardware problems, have access to immediate backups, and so forth? The perfect combination was found with VMware Disaster Recovery (VDR) and VNXe replication.
At the ESXi host level, it was crucial to create a cluster to allow hardware redundancy. An identical ESXi server was brought into the environment and introduced into a cluster. As a rule, all datastore and network configurations were replicated to allow the use of vMotion and Storage vMotion when and if needed. Moreover, the distribution of resources could be evened out to give resources to the most CPU- and memory-intensive virtual machines.
VDR, a feature included in the VMware Essentials Plus package, offered an affordable backup solution to the advertising agency. Another very useful item that helped in the decision was the deduplication feature, which saves space. In order to create a fully redundant environment, the datastore used for the VDR deployment was a 1 TB CIFS share, which offered enough space to back up the specific virtual machines present in the environment.
Figure 15: VDR appliance
As a second layer of safety, snapshots were automatically configured on the VNXe during creation of the shared folders and datastores.
Figure 16: Automated Snapshot
As a midterm plan, the VNXe will be used to remotely replicate the most significant RDMs, such as the Exchange and log LUNs. A remote VNX/e system was eventually purchased and placed in a remote office, where the local VNX/e would periodically replicate all the data to the remote site.
Figure 17: Configuring VNX/e replication
Utilizing storage to reach maximum benefits: VNXe Best Practices

In today's IT environment there is no single solution for every customer; hence, it is sometimes erroneous to classify a configuration as correct or incorrect. What is accurate is that best practices allow a more efficient configuration, which is why they are always the recommended route.
When describing VNXe best practices, it is important to use a holistic overview of the
environment, starting from the SAN on to the Array (VNXe).
VSANs

Unlike a Fibre Channel SAN, an iSCSI SAN does not need zoning, but it is best practice to create separate VSANs when possible. The key is still the same: redundancy.

It is always recommended to have two separate iSCSI NICs in each host. Each host iSCSI connection should go to a separate Ethernet switch (Cisco Nexus switches in Figure 18) and preferably utilize single-initiator/single-storage VSANs. From the VNXe perspective, there should be four separate iSCSI servers: two for SPA and two for SPB. One of the two iSCSI servers for a specific SP should go to a separate switch to provide redundancy at all levels: host, switch, and storage.
VSAN 1 = (Host A HBA1, SPA 0)
VSAN 1 = (Host B HBA1, SPB 0)
VSAN 2 = (Host A HBA0, SPB 1)
VSAN 2 = (Host B HBA0, SPA 1)
VSAN 3 = (Host B HBA1, SPA 0)
VSAN 3 = (Host A HBA1, SPB 0)
VSAN 4 = (Host B HBA0, SPB 1)
VSAN 4 = (Host A HBA0, SPA 1)
Figure 18: Best Practice Zoning
In the event the host only has one NIC, make sure to connect it to a single SP port. Connecting several SP ports to a single iSCSI initiator can lead to overload and, ultimately, discarded frames!
Subnet dilemma

Want ultimate redundancy? Make sure that each switch is configured on a separate subnet. Confusion arises when this subnet concept is applied to the different components in the SAN. The chart below helps clarify when to separate subnets and when not to:
• Ethernet switches: YES. Separate subnets provide redundancy.
• Management ports and iSCSI ports on the same SP: YES. This avoids iSCSI traffic being rerouted to management ports, and vice versa.
• Separate SP iSCSI ports (e.g., SPA0, SPB1): NO. Keeping them on the same subnet allows Fail-Safe Networking to work correctly.
• Host iSCSI HBAs: YES. Each should match the SP iSCSI port subnet for a given port on SPA and SPB.
SP assignment

The introduction of Asymmetric Logical Unit Access (ALUA) has created some confusion. Although ALUA is a very useful CLARiiON®/VNX internal feature that allows access to a specific LUN through both SPs, it does not mean that the array is active/active. The VNXe is still an active/passive array, meaning that a LUN or share is still served by one SP. In our applied example, the share “Archive Library” was owned by one SP (SP A), although it still had paths to its peer SP (SP B).
Figure 19: SP ownership
What does this mean for best practices? Planning is crucial before implementing a VNXe configuration, to make sure an SP is not overloaded with shares and/or LUNs. Such an overload can cause I/O and CPU bottlenecks that will affect performance.
CPU utilization has to be interpreted on VNXe systems. To view it, navigate to System > Processor Performance in Unisphere. Initially, when there is I/O, CPU utilization will sit between 3% and 6%; the primary SP will show higher utilization, since the internal Unisphere tasks run on it. The more iSCSI server activity or shared folders on an SP, along with higher sequential data transfers, the higher the utilization.
Failover/Failback

Best practices are vital during a failure scenario. In the event of a network failure in the environment, such as a failed link to a port on SPA, the VNXe utilizes the Fail-Safe Networking (FSN) feature. This feature enables rerouting to the alternate port on the peer SP (in the event of single iSCSI port binding). The key to applying this concept is to understand that the rerouting occurs internally in the array.
Figure 20: Failure Scenario
It is very important not to confuse the above failure (a link failure) with an SP failure. When an SP fails, the LUN and/or share is trespassed automatically to its peer SP, and no Fail-Safe Networking is invoked.

Figure 20 takes into consideration a fabric connected to a single SP iSCSI port. In a fully redundant iSCSI fabric with at least two iSCSI servers (ports) on the same SP, the I/O will be rerouted to the alternative port on the same SP.
Aggregate

In the Fibre Channel world, load balancing can be achieved through failover software such as VMware Native Multipathing or PowerPath®. Although these also provide load balancing in the iSCSI world, there is an extra embedded feature in the VNXe that allows Ethernet load balancing along the SAN.
Link Aggregation Control Protocol (LACP) enables load balancing to a large number of clients. The algorithm hashes source and destination MAC addresses (the same approach ESXi uses). So, basically, several Ethernet ports can be aggregated into a single logical port, providing not only load balancing but also redundancy in the event of failure.
A best-practice environment with LACP at both the host and storage level provides an extremely redundant environment; hence, it is recommended to implement it if the Ethernet switch supports the feature.
Figure 21: LACP
Achieving a solid integration: VNXe & VMware Best Practices

Although great integration exists between the VNXe and vSphere 5.0, it is crucial to size and plan the integration to avoid any issues or limitations with future growth.
Ethernet bottlenecks

Network planning should always be emphasized in an iSCSI environment. There are several Ethernet traffic types in such environments:
• Virtual machine traffic
• VMkernel traffic
o vMotion traffic
o iSCSI traffic
o NFS traffic
• Management traffic
With so many Ethernet traffic types to handle, planning your vSphere networking for a VNXe iSCSI environment involves several aspects, such as vSwitch considerations and CPU utilization.
Don’t overcommit vSwitches In a fast growing environment, it is usually to overcommit the virtual switches but the key resides
in balancing the different vSwitches (which possess different NICs)
NIC teaming is a best practice when trying to achieve load balancing and passive failover at the host side (in conjunction with native multipathing), which adds another layer of redundancy to the environment.

Although it requires knowledge to combine vSwitch VSANs with physical switch VSANs, doing so provides another level of redundancy and avoids mixing different types of Ethernet traffic.
Plan your CPU cycles

Just as on the VNXe, CPU utilization is highly important, and it can affect Ethernet traffic. In an iSCSI environment, higher throughput requires more CPU resources, so planning for appropriate CPU cycles is critical.
Just as the VNX has a way to monitor CPU utilization, ESXi provides esxtop, which assists in accurately measuring CPU. If the CPU load value in esxtop is equal to or greater than 1, the system might be overloaded. This was the case when utilizing the rendering nodes in the depicted agency case; hence the segregation of the nodes to a different vSwitch. Moreover, the rule of thumb states that CPU utilization between 70-80% is a warning, and anything beyond this is critical and will require CPU reallocation.
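As an illustration of watching those thresholds over time, esxtop can be run in batch mode (esxtop -b -n <samples> > stats.csv) and its CSV output post-processed. The sketch below is a hedged example: the exact counter header varies by host name and build, so the column name used here is an assumption.

```python
import csv

WARN, CRIT = 70.0, 80.0
# Assumed header; real esxtop headers look like
# "\\<host>\Physical Cpu(_Total)\% Core Util Time" and differ per host/build
CPU_COL = "\\\\esxi01\\Physical Cpu(_Total)\\% Core Util Time"

with open("stats.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        util = float(row[CPU_COL].strip('"'))
        if util >= CRIT:
            print(f"sample {i}: {util:.1f}% CRITICAL, consider CPU reallocation")
        elif util >= WARN:
            print(f"sample {i}: {util:.1f}% warning")
```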
Simply because features are available does not mean they apply to every environment. It is key to understand, for example, that Hyperthreading can be misleading, as its benefits depend on the VM workload. The best way to optimize Hyperthreading performance is to establish resource pools with CPU maximums and minimums.

Hyperthreading is not the same as dual core! With this said, do not use it in conjunction with CPU affinity, as doing so will cause poor performance: it basically lets several VMs compete for the same resources (a single core).
Bad Memory? Don’t forget about overhead Often, memory planning for a vSphere environment is miscalculated, causing disparity among
the virtual machines. One of the biggest pitfalls is not considering the memory overhead by
agents such as hostd and vpxa that are needed for management and monitoring. Since memory
overhead is also present in the virtual devices being used, it is recommended to disable these in
Virtual Machines that will not be using them.
VNXe storage can also play a role in memory overhead: the VAAI APIs add transactions at the ESXi host level, increasing memory overhead. An estimate of memory overhead for virtual machines is given in the vSphere Resource Management Guide:
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-resource-management-guide.pdf
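To make the planning concrete, a rough per-host tally can be computed as configured guest memory plus a per-VM overhead estimate. The sketch below is illustrative only: the overhead figures are placeholders, not the real numbers, which vary with vCPU count and configured memory and should be taken from the guide linked above.

```python
# Placeholder overhead table: vCPUs -> estimated overhead in MB (hypothetical)
ASSUMED_OVERHEAD_MB = {1: 100, 2: 150, 4: 250}

def vm_memory_demand_mb(configured_mb: int, vcpus: int) -> int:
    """Configured guest memory plus an assumed per-VM overhead."""
    return configured_mb + ASSUMED_OVERHEAD_MB.get(vcpus, 300)

# Example mix: five 4 GB rendering nodes and two 15 GB Exchange VMs
vms = [(4096, 2)] * 5 + [(15360, 4)] * 2
total_mb = sum(vm_memory_demand_mb(mem, cpu) for mem, cpu in vms)
print(f"Approximate host memory demand: {total_mb / 1024:.1f} GB")
```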
The eternal dilemma: RDM or VMFS?

The eternal question among VMware and storage administrators is which disk type offers the most advantages: Raw Device Mapping (RDM) or the Virtual Machine File System (VMFS)? There is no right answer to the question; it all depends on the use and purpose of that specific disk. When making performance comparisons between RDM and VMFS, the difference is not that significant. Therefore, the decision should be based on the features needed for the specific drive.
When to utilize VMFS:
• For ease of management and quick provisioning: although the SAN is still in use, administration and provisioning are quick and straightforward.
• When needing to fully utilize VM features: VMFS provides a full set of VMware features that are not available for RDMs, such as Storage vMotion and VMware snapshots (the latter can be done with a virtual RDM).

When to utilize RDM:
• When size is too large (above 1 TB): realistically, although VMFS supports more than 1 TB, in the event of a migration, moving a 1+ TB .vmdk file will be time consuming.
• When utilizing SAN replication tools: the VNXe can take a snapshot of the RDM much quicker than the server would at the host level. More importantly, it frees CPU cycles.
• When implementing a Microsoft Cluster (a hedged RDM-attachment sketch follows this list).
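For illustration, attaching a LUN to a VM as a physical-mode RDM could be scripted as below with pyvmomi. This is a minimal sketch under stated assumptions: the inventory path, device path, and SCSI slot are hypothetical, and the same step is normally done from the vSphere Client.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.agency.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
vm = si.content.searchIndex.FindByInventoryPath("Agency/vm/Invoice-Calendar")

# Physical-mode RDM backing pointing at the raw VNXe LUN (path is hypothetical)
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName="/vmfs/devices/disks/naa.6006016012345678",
    compatibilityMode="physicalMode",
    diskMode="independent_persistent",
    fileName="")  # the mapping file is created alongside the VM

disk = vim.vm.device.VirtualDisk(
    backing=backing, controllerKey=1000, unitNumber=1, key=-1)

change = vim.vm.device.VirtualDeviceSpec(
    device=disk, operation="add", fileOperation="create")
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```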
Be careful when creating VMFS extents. First, they require further mapping throughout the ESX hosts. Second, if a LUN that is part of the extent fails, the entire VMFS volume becomes unavailable. Third, and probably most important, the metadata resides on a single LUN which, with extents in play, can generate SCSI reservation conflicts.
With vSphere 5.0, VMFS version 5 was introduced, providing slightly better performance and useful improvements. With this version, the partition alignment defaults to 1 MB to avoid inconsistencies. If for some reason there is a need to change this alignment, the workaround is to create a VMFS-3 volume with the desired MB alignment and then upgrade it to version 5.

Multipathing

With VNXe multipathing, choosing the right pathing policy can be tricky, as each one can have a different effect on an active/passive environment such as the VNX/VNXe series. In the agency example, pathing was managed by VMware's native failover/multipathing software. To understand how to apply each policy, it is crucial to grasp the fact that the VNX/e is an active/passive array and I/O will always be routed through one SP until a failure or manual intervention occurs:
2012 EMC Proven Professional Knowledge Sharing 25
Most Recently Used (MRU): This selection is initially performed at boot time, and it selects the first available path to a specific LUN. While MRU is the default policy for an active/passive array, it might not provide the best performance and availability for your active/passive environment.

Fixed: The name says it all. It selects a fixed path, defined either by manually selecting a “preferred” path or, if none is set, by selecting the first working path. The advantage of this policy in a VNX/e environment is that, if configured properly, after a failure event it will return the paths to their rightful default SP once the failure has been fixed.

Round Robin: It provides rotational I/O across all active paths. In a VNX/e environment, it will sequentially direct the I/O through all paths on the active SP (a sketch for setting the policy follows below).
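For illustration, the policy for a given VNXe LUN can be set from the ESXi shell with esxcli (vSphere 5.x syntax). The wrapper below is a hedged sketch; the device identifier is hypothetical.

```python
import subprocess

DEVICE = "naa.6006016012345678"   # the VNXe LUN as seen by the host
POLICY = "VMW_PSP_RR"             # or VMW_PSP_MRU / VMW_PSP_FIXED

# Assign the native multipathing path selection policy for the device
subprocess.run(
    ["esxcli", "storage", "nmp", "device", "set",
     "--device", DEVICE, "--psp", POLICY],
    check=True)

# Show the device's multipathing configuration to verify the change
out = subprocess.run(
    ["esxcli", "storage", "nmp", "device", "list", "--device", DEVICE],
    capture_output=True, text=True, check=True)
print(out.stdout)
```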
What about a failure scenario? The chart below helps explain the different scenarios and how each policy reacts. The chart assumes a fully redundant environment.

Single path failure (SP port failure):
• MRU: Failover: I/O is redirected to a surviving path on the same active SP. Failback: although the path is automatically recovered, there is no automatic failback.
• Fixed: Failover: I/O is redirected to a surviving path on the same active SP. Failback: I/O is automatically redirected to the preferred path.
• Round Robin: Failover: there is no I/O redirection, but performance is degraded during the path failure. Failback: I/O resumes round-robin across all paths to the active SP.

SP failure:
• MRU: Failover: the LUN is trespassed at the storage level and the host utilizes a path to the surviving SP. Failback: although paths to the failed SP are automatically recovered, there is no automatic failback.
• Fixed: Failover: the LUN is trespassed at the storage level and the host utilizes a path to the surviving SP. Failback: I/O is automatically redirected to the preferred path on the recovered SP.
• Round Robin: Failover: the LUN is trespassed at the storage level and the host utilizes paths to the surviving SP. Failback: although paths to the failed SP are automatically recovered, there is no automatic failback.

HBA failure:
• MRU: Failover: I/O is redirected to a path on the surviving HBA. Failback: although paths on the recovered HBA are automatically restored, there is no automatic failback.
• Fixed: Failover: I/O is redirected to a path on the surviving HBA. Failback: I/O is automatically redirected to the preferred path on the recovered HBA.
• Round Robin: Failover: I/O is redirected to a path on the surviving HBA. Failback: paths belonging to the recovered HBA become active and I/O resumes across the paths to the active SP.
The most important tip to take from the failure scenario chart concerns the Fixed policy failback scenario. It is very important to plan accordingly before selecting a preferred path in a VNX/e environment. Preferred paths should always match the default SP owner on the VNXe. If they do not, after a failure scenario the I/O will be redirected to a path on a non-default SP owner, unbalancing load on the array.
Leaving a virtual conclusion

An advertising agency went through a full virtualization makeover to provide required resources to every department, making data readily available at all times from anywhere, and replicated against any disaster. From virtualizing business, email, and web servers to enabling efficient rendering; from transforming a fixed environment into a virtualized provisioning infrastructure: vSphere, VMware Fusion, and VNXe provided an integrated solution that will allow imagination to continue to grow.
Imagination is a tool that allows humans to go beyond boundaries and create abstraction. Imagination requires flexibility, empowerment, and resources, all of which can be enabled by Information Technology. Virtualization has evolved in our industry to the point where most IT environments can be virtualized, even in industries where flexibility is a requirement.
Plan accordingly, size proactively, follow best practices, and let virtualization lead you to a world full of opportunities.
Bibliography

EMC VNXe Series Storage Systems: A Detailed Review
VMware: Performance Best Practices for vSphere 5.0
VMware: vSphere vCenter 5.0 Resource Management Guide
VMware: Business and Financial Benefits of Virtualization white paper
http://encefalus.com/cognitive/dealing-informational-overflow/
EMC Primus emc269877 & emc156408
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.