In-the-Lab: Full ESX/vMotion Test Lab in a Box
In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 1

There are many features in vSphere worth exploring, but doing
so requires committing time, effort, testing, training and
hardware resources. In this feature, we'll investigate a way
– using your existing VMware facilities – to reduce the
time, effort and hardware needed to test and train-up on
vSphere's ESXi, ESX and vCenter components. We'll start with
a single hardware server running VMware ESXi free as the
"lab mule" and install everything we need on top of that
system.
Part 1, Getting Started

To get started, here are the major hardware and software
items you will need to follow along:
Recommended Lab Hardware Components
One 2P, 6-core AMD “Istanbul” Opteron system
Two 500-1,500GB Hard Drives
24GB DDR2/800 Memory
Four 1Gbps Ethernet Ports (4×1, 2×2 or 1×4)
One 4GB SanDisk "Cruzer" USB Flash Drive
Either of the following:
One CD-ROM with VMware-VMvisor-Installer-
4.0.0-164009.x86_64.iso burned to it
An IP/KVM management card to export ISO images
to the lab system from the network
Recommended Lab Software Components
One ISO image of NexentaStor 2.x (for the Virtual
Storage Appliance, VSA, component)
One ISO image of ESX 4.0
One ISO image of ESXi 4.0
One ISO image of vCenter Server 4
One ISO image of Windows Server 2003 STD (for vCenter
installation and testing)
For the hardware items to work, you’ll need to check your
system components against the VMware HCL and community
supported hardware lists. For best results, always disable
(in BIOS) or physically remove all unsupported or unused
hardware – this includes communication ports, USB, software
RAID, etc. Doing so will reduce potential hardware conflicts
from unsupported devices.
The Lab Setup

We're first going to install VMware ESXi 4.0 on the "test
mule" and configure the local storage for maximum use. Next,
we'll create three (3) machines to create our "virtual
testing lab" – deploying ESX, ESXi and NexentaStor running
directly on top of our ESXi "test mule." All subsequent
test VMs will be running in either of the virtualized ESX
platforms from shared storage provided by the NexentaStor
VSA.
ESX, ESXi and VSA running atop ESXi
Next up, quick-and-easy install of ESXi to USB Flash…
Installing ESXi to Flash
This is actually a very simple part of the lab installation.
ESXi 4.0 installs to flash directly from the basic installer
provided on the ESXi disk. In our lab, we use the IP/KVM’s
“virtual CD” capability to mount the ESXi ISO from network
storage and install it over the network. If using an
attached CD-ROM drive, just put the disk in, boot and follow
the instructions on-screen. We’ve produced a blog showing
how to “Install ESXi 4.0 to Flash” if you need more details
– screen shots are provided.
Once ESXi reboots for the first time, you will need to
configure the network cards in an appropriate manner for
your lab’s networking needs. This represents your first
decision point: will the “virtual lab” be isolated from the
rest of your network? If the answer is yes, one NIC will be
plenty for management since all other “virtual lab” traffic
will be contained within the ESXi host. If the answer is no,
let’s say you want to have two or more “lab mules” working
together, then consider the following common needs:
One dedicated VMotion/Management NIC
One dedicated Storage NIC (iSCSI initiator)
One dedicated NIC for Virtual Machine networks
We recommend the following interface configurations (a command-line sketch follows below):
Using one redundancy group
Add all NICs to the same group in the
configuration console
Use NIC Teaming Failover Order to dedicate one
NIC to management/VMotion and one NIC to iSCSI
traffic within the default vSwitch
Load balancing will be based on port ID
Using two redundancy groups (2 NIC per group)
Add only two NICs to the management group in the
configuration console
Use NIC Teaming Failover Order to dedicate one
NIC to management/VMotion traffic within the
default vSwitch (vSwitch0)
From the VI Client, create a new vSwitch,
vSwitch1, with the remaining two NICs
Use either port ID (default) or hash load
balancing depending on your SAN needs
Our switch ports and redundancy groups - 2-NICs using port
ID load balancing, 2-NICs using IP hash load balancing.
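For readers more comfortable at the command line, the same two-group layout can be sketched from the ESX service console or ESXi Tech Support Mode (the vSphere CLI vicfg-* commands are equivalent); the vmnic numbering here is only an example and will differ with your hardware:

esxcfg-nics -l                      # list physical NICs and their link state
esxcfg-vswitch -a vSwitch1          # create the second vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1   # link the first SAN uplink
esxcfg-vswitch -L vmnic3 vSwitch1   # link the second SAN uplink
esxcfg-vswitch -l                   # verify vSwitch and uplink layout

The failover order and the port-ID versus IP-hash load balancing policies are then set per vSwitch (or per port group) from the VI Client as described above.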
Test the network configuration by failing each port and make
sure that all interfaces provide equal function. If you are
new to VMware networking concepts, stick to the single
redundancy group until your understanding matures – it will
save time and hair… If you are a veteran looking to hone
your ESX4 or vSphere skills, then you'll want to tailor the
network to fit your intended lab use.
Next, we cover some ESXi topics for first-timers…
First-Time Users
First-time users of VMware will now have a basic
installation of ESXi and may be wondering where to go next.
If the management network test has not been verified, now is
a good time to do it from the console. This test will ping
the DNS servers and gateway configured for the management
port, as well as perform a “reverse lookup” of the IP
address (in-addr.arpa requesting name resolution based on
the IP address.) If you have not added the IP address of the
ESXi host into your local DNS server, this item will fail.
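If the reverse lookup portion fails, a quick check from any workstation will confirm whether both records exist before you go hunting on the host; the host name and address below are placeholders for your own:

nslookup esxi01.lab.local    # forward (A) record lookup
nslookup 192.168.1.20        # reverse (PTR) record lookup

Both queries should return the same host/address pair; if either fails, add the missing record to your DNS server and re-run the console test.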
Testing the ESXi Management Port Connectivity
Once the initial management network is set up and tests
good, we simply launch a web browser from the workstation
we'll be managing from and enter the ESXi host's address as
shown on the console screen:
Management URL From Console Screen
The ESXi host’s embedded web server will provide a link to
“Download vSphere Client” to your local workstation for
installation. We call this the “VI Client” in the generic
sense. The same URL provides links to VMware vCenter,
vSphere Documentation, the vSphere Remote CLI installer and
virtual appliance and Web Services SDK. For now, we only
need the VI Client installed.
vSphere VI Client Login
Once installed, log in to the VI Client using the "root" user
and password established when you configured ESXi’s
management interface. The “root” password should not be
something easily guessed as a hacker owning your ESX console
could present serious security consequences. Once logged-in,
we’ll turn our attention to the advanced network
configuration.
Initial Port Groups for Hardware ESXi Server
If you used two redundancy groups like we do, you should
have at least four port groups defined: one virtual machine
port group for each vSwitch and one VMkernel port group for
each vSwitch. We wanted to enable two NICs for iSCSI/SAN
network testing on an 802.3ad trunk group, and we wanted to
be able to pass 802.1q VLAN tagged traffic to the virtual
ESX servers on the other port group. We created the
following (an equivalent command-line sketch appears after the listing):
vNetworking - notice the "standby" adapter in vSwitch0 due
to the active-standby selection. (Note we are not using
vmnic0 and vmnic1.)
Virtual Switch vSwitch0
vSwitch of 56 ports, route by port ID, beacon
probing, active adapter vmnic4, standby adapter
vmnic2
Physical switch ports configured as 802.1q
trunks, all VLANs allowed, VLAN1 untagged
Virtual Machine Port Group 1: "802.1q
Only" – VLAN ID "4095"
Virtual Machine Port Group 2: "VLAN1 Mgt
NAT" – VLAN ID "none"
VMkernel Port Group: "Management Network"
– VLAN ID "none"
Virtual Switch vSwitch1
vSwitch of 56 ports, route by IP hash, link
state only, active adapters vmnic0 and vmnic1
Physical switch ports configured as static
802.3ad trunk group, all VLANs allowed, VLAN2000
untagged
Virtual Machine Port Group 1: "VLAN2000
vSAN" – VLAN ID "none"
VMkernel Port Group 1: "VMkernel
iSCSI200" – VLAN ID "none"
VMkernel Port Group 2: "VMkernel
iSCSI201" – VLAN ID "none"
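As promised, here is an illustrative command-line equivalent of the port group layout above; we built ours in the VI Client, so treat this as a sketch rather than a transcript (the VMkernel IP address is a placeholder):

esxcfg-vswitch -A "802.1q Only" vSwitch0
esxcfg-vswitch -v 4095 -p "802.1q Only" vSwitch0    # VLAN 4095 passes all tags through to the guest
esxcfg-vswitch -A "VLAN1 Mgt NAT" vSwitch0
esxcfg-vswitch -A "VLAN2000 vSAN" vSwitch1
esxcfg-vswitch -A "VMkernel iSCSI200" vSwitch1
esxcfg-vmknic -a -i 192.168.200.10 -n 255.255.255.128 "VMkernel iSCSI200"

Load balancing, beacon probing and the active/standby adapter order are still set from the VI Client's vSwitch properties.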
This combination of vSwitches and port groups allows for the
following base scenarios:
1. Virtual ESX servers can connect to any VLAN through
interfaces connected to the "802.1q Only" port group;
2. Virtual ESX servers can be managed via interfaces
connected to the "VLAN1 Mgt NAT" port group;
3. Virtual ESX servers can access storage resources via
interfaces connected to the "VLAN2000 vSAN" port group;
4. The hardware ESXi server can access storage resources on
either of our lab SAN networks – 192.168.200.0/25 or
192.168.200.128/25 – to provide resources beyond the
direct attached storage available (mainly for ISOs,
canned templates and backup images).
Next, we take advantage of that direct attached storage…
Using Direct Attached Storage
We want to use the directly attached disks (DAS) as a
virtual storage backing for our VSA (virtual SAN appliance.)
To do so, we’ll configure the local storage. In some
installations, VMware ESXi will have found one of the two
DAS drives and configured it as “datastore” in the
Datastores list. The other drive will be “hidden” awaiting
partitioning and formatting. We can access this from the VI
Client by clicking the “Configuration” tab and selecting the
“Storage” link from “Hardware.”
ESXi may use a portion of the first disk for housekeeping
and temporary storage. Do not delete these partitions, but
the remainder of the disk can be used for virtual machines.
Note: We use a naming convention for our local storage to
prevent conflicts when ESX hosts are clustered. This
convention follows our naming pattern for the hosts
themselves (i.e. vm01, vm02, etc.) such that local storage
becomes vLocalStor[NN][A-Z] where the first drive of host
"vm02" would be vLocalStor02A, the next drive vLocalStor02B, and so on.
If you have a “datastore” drive already configured, rename
it according to your own naming convention and then format
the other drive. Note that VMware ESXi will be using a small
portion of the drive containing the “datastore” volume for
its own use. Do not delete these partitions if they exist,
but the remainder of the disk can be used for virtual
machine storage.
If you do not see the second disk as an available volume,
click the “Add Storage…” link and select “Disk/LUN” to tell
the VI Client that you want a local disk (or FC LUN). The
remaining drive should be selectable from the list on the
next page – SATA storage should be identified as "Local ATA
Disk…" and the capacity should indicate the approximate
volume of storage available on the disk. Select it and click the
“Next >” button.
The “Current Disk Layout” screen should show “the hard disk
is blank” provided no partitions exist on the drive. If the
disk has been recycled from another installation or machine,
you will want to "destroy" the existing partitions in favor of
a single VMFS partition and click "Next." For the "datastore
name” enter a name consistent with your naming convention.
As this is our second drive, we’ll name ours vLocalStor02B
and click “Next.”
Selecting the default block size for ESX's attached storage
volumes.
The default block size on the next screen will determine the
maximum supported single file size for this volume. The
default setting is 1MB blocks, resulting in a maximum single
file size of 256GB. This will be fine for our purposes as we
will use multiple files for our VSA instead of one large
monolithic file on each volume. If you have a different
strategy, choose the block size that supports your VSA file
requirements.
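For reference, the VMFS-3 block size choices map to maximum single file sizes roughly as follows:

1MB block size – 256GB maximum file size
2MB block size – 512GB maximum file size
4MB block size – 1TB maximum file size
8MB block size – 2TB maximum file size

Since we will be using several smaller files for the VSA rather than one monolithic file, the 1MB default is plenty.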
The base ESXi server is now complete. We've additionally
enabled the iSCSI initiator and added a remote NFS volume
containing ISO images to our configuration to speed up our
deployment. While this is easy to do in a Linux environment,
we expect most readers will be more comfortable in a Windows
setting and we’ve modified the approach for those users.
Right-click on the storage volume you want to browse and
select "Browse Datastore..." to open a filesystem browser.
The last step before we end Part 1 in our Lab series is
uploading the ISO images to the ESXi server’s local storage.
This can easily be accomplished from the VI Client by
browsing the local file system, adding a folder named “iso”
and uploading the appropriate ISO images to that directory.
Once uploaded, these images will be used to install ESX,
ESXi, NexentaStor, Windows Server 2003 and vCenter Server.
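If you would rather script the transfer than use the datastore browser, the vSphere Remote CLI's vifs utility can push files to a datastore; the host, datastore and file names below are examples only:

vifs --server esxi01.lab.local --username root --put vCenter-Server-4.iso "[vLocalStor02A] iso/vCenter-Server-4.iso"

Either way, the images end up in an "iso" folder on local storage, ready to be mounted by the virtual machines that follow.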
To come, Parts 2 & 3, the benefits of ZFS and installing the
NexentaStor developer’s edition as a Virtual Storage
Appliance for our “Test Lab in a Box” system…
In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 2

In Part 1 of this series we introduced the basic
Lab-in-a-Box platform and outlined how it would be used to
provide the three major components of a vMotion lab: (1)
shared storage, (2) high speed network and (3) multiple ESX
hosts. If you have followed along in your lab, you should
now have an operating VMware ESXi 4 system with at least two
drives and a properly configured network stack.
In Part 2 of this series we’re going to deploy a Virtual
Storage Appliance (VSA) based on an open storage platform
which uses Sun's Zettabyte File System (ZFS) as its
underpinnings. We've been working with Nexenta's NexentaStor
SAN operating system for some time now and will use it –
with its web-based volume management – instead of deploying
OpenSolaris and creating storage manually.
Part 2, Choosing a Virtual Storage Architecture

To get started on the VSA, we want to identify some key
features and concepts that caused us to choose NexentaStor
over a myriad of other options. These are:
NexentaStor is based on open storage concepts and
licensing;
NexentaStor comes in a “free” developer’s version with
2TB of managed storage;
NexentaStor developer’s version includes snapshots,
replication, CIFS, NFS and performance monitoring
facilities;
NexentaStor is available in a fully supported,
commercially licensed variant with very affordable
$/TB licensing costs;
NexentaStor has proven extremely reliable and
forgiving in the lab and in the field;
Nexenta is a VMware Technology Alliance Partner with
VMware-specific plug-ins (commercial product) that
facilitate the production use of NexentaStor with
little administrative input;
Sun’s ZFS (and hence NexentaStor) was designed for
commodity hardware and makes good use of additional
RAM for cache as well as SSD’s for read and write
caching;
Sun’s ZFS is designed to maximize end-to-end data
integrity – a key point when ALL system components live in the storage domain (i.e. virtualized);
Sun’s ZFS employs several “simple but advanced”
architectural concepts that maximize performance
capabilities on commodity hardware: increasing IOPs
and reducing latency;
While the performance features of NexentaStor/ZFS are well
outside the capabilities of an inexpensive “all-in-one-box”
lab, the concepts behind them are important enough to touch
on briefly. Once understood, the concepts behind ZFS make it
a compelling architecture to use with virtualized workloads.
Eric Sproul has a short slide deck on ZFS that’s worth
reviewing.
ZFS and Cache – DRAM, Disks and SSDs

Legacy SAN architectures are typically split into two
elements: cache and disks. While not always monolithic, the
cache in legacy storage is typically a single-purpose pool
set aside to hold frequently accessed blocks of storage –
allowing this information to be read/written from/to RAM
instead of disk. Such caches are generally very expensive to
expand (when possible) and may only accommodate one specific
cache function (i.e. read or write, not both). Storage
vendors employ many strategies to “predict” what information
should stay in cache and how to manage it to effectively
improve overall storage throughput.
The new cache model used by ZFS allows main memory and fast SSDs
to be used as read cache and write cache, reducing the need
for large DRAM cache facilities.
Like any modern system today, available DRAM in a ZFS system
– that the SAN appliance's operating system is not directly
using – can be apportioned to cache. The ZFS adaptive
replacement cache, or ARC, allows main memory to be used
to serve frequently read blocks of data (at
microsecond latency). Normally, an ARC read miss would
result in a read from disk (at millisecond latency), but an
additional cache layer – the second-level ARC, or L2ARC –
can be employed using very fast SSDs to increase effective
cache size (and drastically reduce ARC miss penalties)
without resorting to significantly larger main memory
configurations.
The L2ARC in ZFS sits in-between the ARC and disks, using
fast storage to extend main memory caching. L2ARC uses an
evict-ahead policy to aggregate ARC entries and predictively
push them out to flash to eliminate latency associated with
ARC cache eviction.
In fact, the L2ARC is only limited by the DRAM (main memory)
required for bookkeeping, at a ratio of about 50:1 for ZFS
with an 8KB record size. This means that only about 10GB of
additional DRAM would be required to add 512GB of L2ARC
(four 128GB read-optimized SSDs in a RAID0 configuration).
Together with the ARC, the L2ARC allows for a storage pool
consisting of fewer numbers of disks to perform like a much
larger array of disks where read operations are concerned.
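A quick back-of-the-envelope check of that ratio, using the 8KB record size quoted above (the per-record overhead is an approximation):

512GB of L2ARC / 8KB per record ≈ 64 million records to track
64 million records × ~160 bytes of ARC bookkeeping each ≈ 10GB of DRAM

In other words, roughly the 50:1 L2ARC-to-DRAM ratio: a modest DRAM premium buys a very large read cache.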
L2ARC's evict-ahead policy aggregates ARC entries and
predictively pushes them to L2ARC devices to eliminate ARC
eviction latency. The L2ARC also acts as an ARC cache for
processes that may force premature ARC eviction (runaway
application) or otherwise adversely affect performance.
Next, the ZFS Intent-Log and write caching…
The ZFS Intent-Log: Write Caching
For synchronous write operations, ZFS employs a special
device called the ZFS intent-log, or ZIL. It is the job of
the ZIL to allow synchronous writes to be quickly written
and acknowledged to the client before they are actually
committed to the storage pool. Only small transactions are
written to the ZIL, while larger writes are written directly
to the main storage pool.
The ZFS intent-log (ZIL) allows synchronous writes to be
quickly written and acknowledged to the client before data
is written to the storage pool.
The ZIL can be dealt with in one of four ways: (1) disabled,
(2) embedded in the main storage pool, (3) directed to a
dedicated storage pool, or (4) directed to dedicated, write-
optimized SSDs. Since the ZIL is only used for smaller
synchronous write operations, the size of the ZIL (per
storage pool) ranges from 64MB in size to 1/2 the size of
physical memory. Additionally, log device size is limited by
the amount of data – driven by target throughput – that could
potentially benefit from the ZIL (i.e. written to the ZIL within
two 5-second periods). For instance, a single 2Gbps FC
connection’s worth of synchronous writes might require a
maximum of 2.5GB ZIL.
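The arithmetic behind that sizing rule, as a rough sketch:

2Gbps FC ≈ 250MB/s of synchronous writes (ignoring protocol overhead)
250MB/s × 10 seconds (two 5-second periods) = 2.5GB of ZIL

Faster or additional links scale the figure up accordingly; slower links need proportionally less dedicated log device.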
ZFS Employs Commodity Economies of Scale

Besides enabling the economies of scale delivered by
commodity computing components, potential power savings
delivered by the use of SSDs in place of massive disk arrays
and the I/O and latency benefits of ARC, L2ARC and ZIL
caches, ZFS does not require high-end RAID controllers to
perform well. In fact, ZFS provides the maximum benefit when
directly managing all disks in the storage pool, allowing
for direct access to SATA, SAS and FC devices without the
use of RAID abstractions.
That is not to say that ZFS cannot make use of RAID for the
purpose of fault tolerance. On the contrary, ZFS provides
four levels of RAID depending on use case: striped, no
redundancy (RAID0); mirrored disk (RAID1); striped mirror
sets (RAID1+0); or striped with parity (RAIDz). Disks can be
added to pools at any time at the same RAID level and any
additional storage created is immediately available for use.
A feature of ZFS causes pool data to be redistributed across
new volumes as writes are performed, slowly redistributing
data as it is modified.
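On a raw OpenSolaris/ZFS host the four layouts correspond to zpool commands like the following; NexentaStor performs the same operations behind its web GUI, and the pool and device names here are hypothetical:

zpool create tank c1t0d0 c1t1d0                               # striped, no redundancy (RAID0)
zpool create tank mirror c1t0d0 c1t1d0                        # mirrored disks (RAID1)
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0   # striped mirror sets (RAID1+0)
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0           # striped with parity (RAIDz)
zpool add tank mirror c1t4d0 c1t5d0                           # grow an existing pool with another group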
Next, why we chose NexentaStor…
About Nexenta

Nexenta is a VMware Technology Alliance Partner based in
Mountain View, California. Nexenta was founded in 2005 and
is the leading provider of hardware-independent OpenStorage
solutions. Nexenta's mantra over the last 12 months has been
to "end vendor lock-in" associated with legacy storage
platforms. NexentaStor – and their open source operating
system NexentaCore, based on ZFS and Debian – represent the
company's sole product focus.
NexentaStor is a software based NAS and SAN appliance.
NexentaStor is a fully featured NAS/SAN solution that
has evolved from its roots as a leading disk to disk and
second tier storage solution increasingly into primary
tier use cases. The addition of NexentaStor 2.0,
including phone support, has accelerated this transition
as has the feedback and input of well over 10,000
NexentaStor users and the ongoing progress of the
underlying OpenSolaris and Nexenta.org communities, each
of which have hundreds of thousands of members.
NexentaStor is able to take virtually any data source
(including legacy storage) and share it completely
flexibly. NexentaStor is built upon the ZFS file system
which means there are no practical limits to the number
of snapshots or to file size when using NexentaStor.
Also, Nexenta has added synchronous replication to ZFS
based asynchronous replication. Thin provisioning and
compression improve capacity utilization. Also, no need
to ‘short stroke’ your drives to achieve performance as
explained below.
Today’s processors can easily handle end to end
checksums on every transaction. The processors that
existed when legacy file systems were designed could
not. Checksumming every transaction end to end means any
source of data corruption can be detected. Plus, if you
are using NexentaStor software RAID it can automatically
correct data corruption.
The underlying ZFS file system was built to exploit
cache to improve read and write performance. By adding
SSDs you can achieve a dramatic improvement in
performance without increasing the number of expensive
spinning disks, thereby saving money, footprint, and
power and cooling. Other solutions require you to decide
which data should be on the flash or SSDs. This can be
quite challenging and will never be as efficient in a
dynamic environment as the real time algorithms built
into the ZFS file system.
Specifically, with NexentaStor you NEVER run out of
snapshots whereas with legacy solutions you run out
fairly quickly, requiring work arounds that take time
and increase the risk of service disruption. In summary,
over 3x the capacity, equivalent support thanks to
Nexenta’s partners, superior hardware, and superior
software for over 75% less than legacy solutions.
- Nexenta’s Product Overview
SOLORI on NexentaStor

We started following NexentaStor's development in late 2008
and have been using it in the lab since version 1.0.6 and in
limited production since 1.1.4. Since then, we’ve seen great
improvements to the NexentaStor roster over the basic ZFS
features, including:
Simple Failover (HA), 10GE, ATA over Ethernet, Delorean, VM
Datacenter, improvements to CIFS and iSCSI support, GUI
improvements, COMSTAR support, VLAN and 802.3ad support in
GUI, Zvol auto-sync, improved analytics from GUI, HA Cluster
(master/master), developer/free edition capacity increase
from 1TB to 2TB, and the addition of a professional support
and services network for NexentaStor customers.
Now, with the advent of the 2.1 release, NexentaStor is
showing real signs of maturity. Its growth as a product has
been driven by improvements to ZFS and Nexenta’s commercial
vision of open storage on commodity hardware sold and
serviced by a knowledgeable and vibrant partner network.
Beyond availability, perhaps the best improvements for
NexentaStor have been in the support and licensing arena.
The updated license in 2.1 allows for capacity to be
measured as total usable capacity (after formatting and
redundancy groups) not including the ZIL, L2ARC and spare
drives. Another good sign of the product’s uptake and
improved value is its increasing base price and available
add-on modules. Still, at $1,400 retail for 8TB of managed
storage, it’s a relative bargain.
One of our most popular blogs outside of virtualization has
been the setup and use of FreeNAS and OpenFiler as low-cost
storage platforms. Given our experience with both of these
alternatives, we find NexentaStor Developer’s Edition to be
superior in terms of configurability and stability as an
iSCSI or NFS host, and – with its simple-to-configure
replication and snapshot services – it provides a better
platform for low-cost continuity, replication and data
integrity initiatives. The fact that Nexenta is a VMware
Technology Partner makes the choice of Nexenta over the
other “open storage” platforms a no-brainer.
Coming in our next installment, Part 3, we will create a
NexentaStor VSA, learn how to provision iSCSI and NFS
storage and get ready for our virtual ESX/ESXi
installations…
In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 3

In Part 2 of this series we introduced the storage
architecture that we would use for the foundation of our
“shared storage” necessary to allow vMotion to do its magic.
As we have chosen NexentaStor for our VSA storage platform,
we have the choice of either NFS or iSCSI as the storage
backing.
In Part 3 of this series we will install NexentaStor, make
some file systems and discuss the advantages and
disadvantages of NFS and iSCSI as the storage backing. By
the end of this segment, we will have everything in place
for the ESX and ESXi virtual machines we’ll build in the
next segment.
Part 3, Building the VSA

Our lab system has 24GB of DRAM, which we will apportion as
follows: 2GB of overhead to the host, 4GB to NexentaStor,
8GB to ESXi, and 8GB to ESX. This leaves 2GB that can be
used to support a vCenter installation at the host level.
Our lab mule was configured with 2x250GB SATA II drives
which have roughly 230GB each of VMFS partitioned storage.
Subtracting 10% for overhead, the sum of our virtual disks
will be limited to 415GB. Because of our relative size
restrictions, we will try to maximize available storage
while limiting our liability in case of disk failure.
Therefore, we’ll plan to put the ESXi server on drive “A”
and the ESX server on drive “B” with the virtual disks of
the VSA split across both “A” and “B” disks.
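The arithmetic behind that limit:

2 drives × ~230GB of VMFS each ≈ 460GB raw
460GB − 10% overhead allowance ≈ 414GB, call it 415GB for virtual disks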
Our VSA Virtual Hardware

For lab use, a VSA with 4GB RAM and 1 vCPU will suffice.
Additional vCPUs will only serve to limit CPU scheduling
for our virtual ESX/ESXi servers, so we'll leave it at the
minimum. Since we're splitting storage roughly equally
across the disks, we note that an additional 4GB was
taken up on disk "A" during the installation of ESXi;
therefore we'll place the VSA's definition and "boot" disk
on disk "B" – otherwise, we'll interleave disk slices
equally across both disks.
Datastore – vLocalStor02B, 8GB vdisk size, thin provisioned, SCSI 0:0
Guest Operating System – Solaris, Sun Solaris 10 (64-bit)
Resource Allocation
CPU Shares – Normal, no reservation
Memory Shares – Normal, 4096MB reservation
No floppy disk
CD-ROM disk – mapped to ISO image of NexentaStor 2.1 EVAL, connect at power on enabled
Network Adapters – Three total
One to “VLAN1 Mgt NAT” and
Two to “VLAN2000 vSAN”
Additional Hard Disks – 6 total
vLocalStor02A, 80GB vdisk, thick, SCSI 1:0,
independent, persistent
vLocalStor02B, 80GB vdisk, thick, SCSI 2:0,
independent, persistent
vLocalStor02A, 65GB vdisk, thick, SCSI 1:1,
independent, persistent
vLocalStor02B, 65GB vdisk, thick, SCSI 2:1,
independent, persistent
vLocalStor02A, 65GB vdisk, thick, SCSI 1:2,
independent, persistent
vLocalStor02B, 65GB vdisk, thick, SCSI 2:2,
independent, persistent
NOTE: It is important to realize here that the virtual
disks above could have been provided by vmdk’s on the same
disk, vmdk’s spread out across multiple disks or provided by
RDM’s mapped to raw SCSI drives. If your lab chassis has
multiple hot-swap bays or even just generous internal
storage, you might want to try providing NexentaStor with
RDM’s or 1-vmdk-per-disk vmdk’s for performance testing or
“near” production use. CPU, memory and storage are the basic
elements of virtualization and there is no reason that
storage must be the bottleneck. For instance, this
environment is GREAT for testing SSD applications on a
resource limited budget.
Installing NexentaStor to the Virtual Hardware
With the ISO image mapped to the CD-ROM drive and the CD
“connected on power on” we need to modify the “Boot Options”
of the VM to “Force BIOS Setup” prior to the first time we
boot it. This will enable us to disable all unnecessary
hardware including:
Legacy Diskette A
I/O Devices
Serial Port A
Serial Port B
Parallel Port
Floppy Disk Controller
Primary Local Bus IDE adapter
We need to demote the “Removable Devices” in the “Boot”
screen below the CD-ROM Drive, and “Exit Saving Changes.”
This will leave the unformatted disk as the primary boot
source, followed by the CD-ROM. The VM will quickly reboot
and fall through to the CD-ROM, presenting a "GNU Grub" boot
selection screen. Choosing the top option, Install, the
installation will begin. After a few seconds, the “Software
License” will appear: you must read the license and select
“I Agree” to continue.
The installer checks the system for available disks and
presents the “Fresh Installation” screen. All disks will be
identified as "VMware Virtual Disk" – select the one labeled "c3t0d0 8192 MB" and continue.
The installer will ask you to confirm that you want to
repartition the selected disks. Confirm by selecting "Yes"
to continue. After about four to five minutes, the
NexentaStor installer should ask you to reboot; select
“Yes” to continue the installation process.
After about 60-90 seconds, the installer will continue,
presenting the “Product registration” page and a “Machine
Signature" with instructions on how to register for a
product registration key. In short, copy the signature to
the “Machine Signature” field on the web page at
http://www.nexenta.com/register-eval and complete the
remaining required fields. Within seconds, the automated key
generator will e-mail you your key and the process can
continue. This is the only on-line requirement.
Note about machine signatures: If you start over and
create a new virtual machine, the machine signature will
change to fit the new virtual hardware. However, if you use
the same base virtual machine – even after destroying and
replacing the virtual disks – the signature will stay the
same, allowing you to re-use the registration key.
Next, we will configure the appliance’s network settings…
Configuring Initial Network Settings

If the first network adapter in your VM is connected to your
management network, this interface will be identified as
"e1000g0" in the NexentaStor interface configuration. The
default address will be 192.168.1.X/24, and the installer
will offer you the opportunity to change it: select “y” in
response to “Reconfigure (y/n)” and enter the appropriate
management network information.
Select "e1000g0" as the "Primary Interface" and "static" as
the configuration option; then enter your VSA's IP address
as it will appear on your management network, followed by
the subnet mask, primary DNS server, secondary DNS server,
tertiary DNS server and network gateway. The gateway will be
used to download patches, connect to NTP servers and connect
to remote replication devices. If you want to explore the
CIFS features of NexentaStor, make sure all DNS servers
configured are AD DNS servers. When asked to “Reconfigure”
select “n” unless you have made a mistake.
The final initial configuration question allows you to
select a management protocol: either HTTP or HTTPS. Since we
are using this in a lab context, select HTTP as it will be
“snappier” than HTTPS. The installer will present you with a
management URL which you will use for the remainder of the
configuration steps.
Web-based Configuration Wizard
Note that the URL's port is configured as TCP port 2000 – if
you leave this off of your URL the VSA will not respond. The
first page of the configuration wizard sets the following
options:
Host Name
Domain Name
Time Zone
NTP Server
Keyboard Layout
For host name, enter the short name of the host (i.e. as in
host.domain.tld, enter “host”). For the domain name, enter
your DNS domain or AD domain. On the AD domain, make sure
the host name plus domain name is defined in AD to avoid
problems later on. The time zone should be local to the time
zone of your lab system. If your VSA will have Internet
access, accept the default NTP server from NTP.ORG –otherwise, enter your local NTP source. Also select the
appropriate keyboard layout for your country; then click on
“Next Step >>” to continue.
The next wizard page configures the “root” and “admin”
passwords. The “root” user will perform low-level tasks and
should be used from either the secure shell or via the
console. The “admin” user will perform web-GUI related
functions. These passwords should be secure and unique. You
will have the opportunity to create additional local users
of various security levels after the appliance is
configured. Enter each password twice and click on “Next
Step >>” to continue.
Notification System
The NexentaStor appliance will notify the SAN administrator
when routine checks are performed and problems are detected.
Periodic performance reports will also be sent to this user
if the notification system is properly configured. This
requires the following information:
SMTP Server
SMTP User (optional)
SMTP Password (required if user given)
SMTP Send Timeout (default 30 seconds, extend if using
Internet mail over a slow connection)
SMTP Authentication – Plain text (default), SSL or TLS (check with your e-mail administrator)
E-Mail Addresses – comma-separated list of recipients
From E-Mail Address – how the e-mail sender will be identified to the recipient
Once the information is entered correctly, select “Next >>”
to continue. A confirmation page is presented allowing you
to check the information for accuracy. You can return to
previous pages by selecting “<< Previous Step” or click
“Save Configuration” to save and continue. A notification in
green should pop-up between the Nexenta banner and the rest
of the page indicating that all changes were made
successfully. If your pop-up is red, some information did
not take – try to save again or go back and correct the error.
Additional Network Interfaces
Here is the next real decision point: how to get storage in
and out of the VSA. While it may not matter in a lab
environment (and I’d argue it does) you should have some
concern for how mixing traffic of differing types may impact
specific performance goals in your environment. To make it
simple, our lab will use the following interface
assignments:
e1000g0 – Primary interface, management and CIFS traffic
e1000g1 – Data-only interface, primary iSCSI traffic
e1000g2 – Data-only interface, NFS traffic and secondary iSCSI
Shared Storage has been separated across multiple interfaces
and subnets to make traffic management simple. It is
available to the physical and virtual ESX hosts, virtual
machines and physical machines (if the vSwitches have
physical NICs associated with them.)
In our lab, although we are using only two port groups
(layer-2 domains), each interface will be placed on a
different network (layer-3 domains) – this removes any
ambiguity about which interface traffic is sourced. For
hardware environments, NexentaStor supports 802.3ad
aggregates which – together with proper switch
configurations – can increase capacity and redundancy using
multiple 1Gbps interfaces.
With the primary interface configured in the console, we’ll
click the “Add Interface” link to prompt a dynamic HTML
expansion of the option page and configure e1000g1 as a
single static interface with a pre-defined IP address for
the SAN-only network. We’re using 192.168.200.0/25 for this
interface’s subnet (iSCSI) and 192.168.200.128/25 for the
secondary interface (NFS).
Network Interface Configuration Wizard
For each interface, add the appropriate IP information and
click "Add Interface" – if you make a mistake, click on the
"Delete Interface" icon (red "X" in action column) and
re-enter the information. When the interfaces are configured
correctly, click on “Next Step >>” to continue.
Next, we will complete the initial disk and iSCSI initiator
configuration…
Disk and iSCSI Configuration

The next wizard screen provides setup for the iSCSI
initiator that NexentaStor would use to access remote media
using the iSCSI protocol. The following parameters can be
modified:
Initiator Name – RFC 3720 initiator name of the VSA
Initiator Alias – RFC 3720 initiator "informational" name for the VSA – purely to aid identification by humans
Authorization Method – None (default) or Challenge Handshake Authentication Protocol (CHAP) – enables a secret or password to aid in the authentication of the host beyond matching its Initiator Name
Number of Sessions – 1 (default) to 4. Defines the number of sessions the initiator can utilize – per connected target – for I/O multi-pathing. See your storage vendor documentation before changing this value.
Header Digest Method – None (default) or CRC32.
This determines if CRC checks will be run against each
iSCSI header or not. Because it requires the
calculation of CRC at both ends, this option can
increase latency and reduce performance.
Data Digest Method – None (default) or CRC32. This determines if CRC checks will be run against the data portion of each packet or not. Because it requires the calculation of CRC at both ends, this option can increase latency and reduce performance.
RADIUS Server Access – Disabled (default) or Enabled. Determines whether or not a third-party RADIUS server will be used to handle CHAP authentication.
With the exception of the "Initiator Alias" – which we set
to "NexentaStor-VSA01" – we will accept all defaults and
click "Save" for the iSCSI parameters. We noted that
NexentaStor does not accept spaces, although RFC 3720 does
not forbid their use. Any attempt to use spaces in the alias
resulted in the “red letter pop-up” failure discussed
earlier. Once accepted, we click “Next Step >>” to continue.
Initial Data Volume Creation
This is the next decision point, and we highly recommend
reviewing the ZFS concepts discussed in the last post to
understand the benefits and pitfalls of the choices
presented here. What is most important to understand about
these options when building volumes with ZFS is how
redundancy groups affect performance (IOPS, latency and
bandwidth).
As a general rule, consider each redundancy group –
regardless of the number of disks – as capable of handling
only the IOPS of its LEAST capable member. This
concept is especially important when contrasting mirror,
RAID-Z (single parity, N+1 disks) and RAID-Z2 (double
parity, N+2 disks). For instance, with a disk budget of 30
disks, the maximum performance would come from a pool of
15 mirror sets, having the IOPS potential of 15 times an
individual drive but one half the storage potential.
However, using 6 groups of 5 drives in a RAID-Z configuration,
the IOPS potential is only 6 times an individual drive (less
than half the mirror's potential) but capacity is increased
by 60% over the mirror.
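Summarizing that 30-disk example:

15 × 2-way mirrors: ~15× single-disk IOPS, 15 disks of usable capacity (50%)
6 × 5-disk RAID-Z:  ~6× single-disk IOPS, 24 disks of usable capacity (80%)
24 / 15 = 1.6, so the RAID-Z layout yields 60% more capacity at well under half the IOPS potential.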
Once grouped together as a single pool, these redundancy
groups are used in a fashion similar to striping to ensure
that they are working in parallel to boost IOPS and
bandwidth. Latency will be a factor of caching efficiency
defined by the ZIL, ARC and L2ARC and – for large reads and
writes – drive performance. Additionally, disks or groups can
be added as one of four possible members: pool (main
storage), logs (ZIL), cache (L2ARC) or spare.
Should I Configure a RAID-Z, RAID-Z2, or a Mirrored Storage Pool?
A general consideration is whether your goal is to
maximize disk space or maximize performance.
A RAID-Z configuration maximizes disk space and
generally performs well when data is written and
read in large chunks (128K or more).
A RAID-Z2 configuration offers excellent data
availability, and performs similarly to RAID-Z.
RAID-Z2 has significantly better mean time to data
loss (MTTDL) than either RAID-Z or 2-way mirrors.
A mirrored configuration consumes more disk space
but generally performs better with small random
reads.
If your I/Os are large, sequential, or write-
mostly, then ZFS’s I/O scheduler aggregates them
in such a way that you’ll get very efficient use
of the disks regardless of the data replication
model.
For better performance, a mirrored configuration is
strongly favored over a RAID-Z configuration
particularly for large, uncacheable, random read loads.
- Solaris Internals ZFS Best Practices Guide
For our use we will combine the two large virtual disks
together as a mirror for one volume and, using the remaining
disks, create another volume as a group of two smaller
mirrors. For the first volume, we set the "Group redundancy
type" to "Mirror of disks" and – holding the Control key
down – click-select our two 80GB disks (this is the reason
each of the two disks is on a separate virtual SCSI
controller). Next, we click on the "Add to pool >>" button,
set the "Volume Name" to "volume0" and the "Volume
Description" to "Initial Storage Pool" then click "Create
Volume."
With "volume0" created, we create two additional mirror
sets – each member attached to a separate SCSI controller –
and create the second pool, which we call "volume1." Once created,
the GUI shows "No disks available" and we're on to the next
step, but first we want to note the "Import Volume" link
which is helpful in data recovery and an important “open
storage” aspect of NexentaStor.
Using "Import Volume," any previously formatted ZFS
volume/structure can be easily imported into this system
without data loss or conversion. In the lab, we have
recovered an RDM-based ZFS storage volume from a VSA to a
hardware SAN just by adding the drives and importing the
volume. The power of this should be explored in your lab by
"exporting" a volume of disks from one VSA to another – and
do this with a volume containing several Zvols if you
really want to be impressed.
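Under the hood this is ZFS's native export/import mechanism; on a raw ZFS host the equivalent commands would be (pool name illustrative):

zpool export volume1    # cleanly release the pool from the current host
zpool import            # scan attached disks for importable pools
zpool import volume1    # attach the pool, with data and Zvols intact

NexentaStor drives the same mechanism from its GUI, so no command line is required to move a volume between appliances.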
Creating Folders
ZFS and NexentaStor use the folder paradigm to separate
storage "entities." Folders are defined in a specific volume
(storage pool) and have the following configurable parameters
(a raw-ZFS equivalent is sketched after the list):
Folder Name – the file system path name of the folder without the leading "/" or volume name. If the
name contains multiple "/" characters, a multi-folder
hierarchy will be created to accommodate the name.
Description – the "human-readable" description identifying the folder's use or other "meaningful"
information.
Record Size – Default is 128K. This sets the recommended block size for all files in this folder.
Each folder can have a different default block size
regardless of the parent or child folder’s setting.
This allows storage to easily match the application
without creating additional pools (i.e. SQL, MySQL,
etc.)
Compression – Default is "off." Determines whether the contents of the folder are to be compressed
or not. Available compression options are off, lzjb,
gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6,
gzip-7, gzip-8, and gzip-9. Higher gzip numbers
increase compression ratio at the expense of higher
CPU utilization.
Number of Copies – Default is 1. Available range is 1-3. A data integrity option that determines the
number of copies of data stored to the pool for items
within the folder. Can be used in addition to
mirroring.
Case Sensitivity – Default "sensitive." Available options are sensitive, insensitive and mixed. Guidance
suggests that for folders that will be exported using
CIFS, the “mixed” option should be used.
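As referenced above, a NexentaStor folder is simply a ZFS file system with these properties set at creation time; an illustrative raw-ZFS equivalent (folder name and values hypothetical) would be:

zfs create -o recordsize=64K -o compression=off -o copies=1 -o casesensitivity=mixed volume0/example

Note that casesensitivity can only be set when the file system is created, which is why the wizard asks for it up front.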
At this point in the installation we just want to create a
single default folder in each volume. We will name the
folder “default,” provide a brief description and accept the
other defaults – do this for volume0 and volume1. For
reasons that will become obvious later, we will not create a
Zvol at this time. Instead, click “Next Step >>” to
continue.
Next, we are ready to finalize and snapshot the initial
configuration…
Finalizing the Setup

Initial setup is complete, and we are presented with a
review of the interfaces, disks, volumes and folders
configured; then we are asked to approve three additional
system options:
Create a System Checkpoint – default "checked." This performs a system snapshot or restore point
allowing the system to be reverted back to the initial
configuration if necessary. Along the way, additional
checkpoints can be made to protect later milestones.
Optimize I/O Performance – default "unchecked." Allows a performance increase by disabling ZFS cache
flushing and ZIL which could improve CIFS, NFS or
iSCSI performance at the possible expense of data
integrity.
Create a Periodic Scrubbing Service – default "checked." The scrubbing service checks for corrupt
data and corrects it using the same resilvering code
used in ZFS mirroring.
We recommend that the default choices be accepted as-is – we
will look at optimizing the system a bit later by dealing
with the ZFS cache flushing issue separate from the ZIL.
Finally, click “Start NMV” to complete the installation.
After a brief update, the Nexenta "Status Launchpad" is
displayed…
From the “Settings->Preferences” page, we can disable the
ZFS cache flush. This will improve performance without
turning-off the ZIL. Set “Sys_zfs_nocacheflush” to the “Yes”
option and click “Save” to continue.
Next, let's create an iSCSI target for shared VMFS storage
and NFS exports for ISO storage…
Creating Zvols for iSCSI Targets

NexentaStor has two facilities for iSCSI targets: the
default, userspace based target and the Common Multiprotocol
SCSI Target (COMSTAR) option. Besides technical differences,
the biggest difference in the COMSTAR method versus the
default is that COMSTAR delivers:
LUN masking and mapping functions
Multipathing across different transport protocols
Multiple parallel transfers per SCSI command
Compatibility with generic HBAs (i.e. Fiber Channel)
Single Target, Multiple-LUN versus One Target per LUN
To enable COMSTAR, we need to activate the NexentaStor
Console from the web GUI. In the upper right-hand corner of
the web GUI page you will find two icons: Console and View
Log. Clicking on "Console" will open up an "NMV Login"
window that will first ask for your “Admin” user name and
password. These are the “admin” credentials configured
during installation. Enter “admin” for the user and whatever
password you chose, then click "Login" to continue.
Login to the NexentaStor Console using the administrative
user name and password established during installation.
Now we will delve briefly into command-line territory. Issue
the following command at the prompt:
setup iscsi target comstar show
The NexentaStor appliance should respond by saying “COMSTAR
is currently disabled” meaning the system is ready to have
COMSTAR enabled. Issue the following command at the prompt
to enable COMSTAR:
setup iscsi target comstar enable
After a few seconds, the system should report “done” and
COMSTAR will be enabled and ready for use. Enter “exit” at
the command line, press enter and then close the NMV window.
Enabling the COMSTAR target system in NexentaStor.
With COMSTAR successfully enabled, we can move on to
creating our iSCSI storage resources for use by VMware. In
our lab configuration we have two storage pools from which
any number of iSCSI LUNs and NFS folders can be exported. To
create our first iSCSI target, let’s first create a
container for the target – and its snapshots – to reside in.
From the NexentaStor "Data Management" menu, select "Shares"
and, from the Folders area, click on the “Create” link.
The “Create New Folder” panel allows us to select volume0 as
the source volume, and we are going to create a folder named
"target0" within a folder named "targets" directly off of
the volume root by entering "targets/target0" in the
"Folder Name" box. Because our iSCSI target will be used
with VMware, we want to set the default record size of the
folder to 64K blocks, leave compression off and accept the
default case sensitivity. While Zvols can be created
directly off of the volume root, SOLORI’s best practice is
to confine each LUN to a separate folder unless using the
COMSTAR plug-in (which is NOT available for the Developer’s
Edition of NexentaStor.)
Since 80GB does not allow us a lot of breathing room in
VMware, and since vCenter 4 allows us to “thin provision”
virtual disks anyway, we want to “over-subscribe” our
volume0 by telling the system to create a 300GB iSCSI
target. If we begin to run out of space, NexentaStor will
allow us to add new disks or redundancy groups without
taking the target or storage pool off-line (online
capacity expansion).
To accomplish this task, we jump back to the “Settings”
panel, click on the “Create” link within the “iSCSI Target”
sub-panel, select volume0 as the source volume, enter
"targets/target0/lun0" as the "Zvol Name" with a "Size" of
300GB and – this is important – set "Initial Reservation" to
"No" (thin provisioning), match the record size to 64KB,
leave compression off and enable the target by setting
“iSCSI Shared” to “Yes.” Now, click “Create zvol” and the
iSCSI LUN is created and documented on the page that
follows. Clicking on the zvol’s link reports the details of
the volume properties:
Zvol Properties immediately after creation.
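For reference, the underlying ZFS operation for a thin-provisioned ("sparse") 300GB Zvol with a 64KB block size looks roughly like this on a raw ZFS system – the path matches our folder layout, but this is illustrative of what the GUI does rather than something you need to type:

zfs create -s -b 64K -V 300g volume0/targets/target0/lun0

The -s flag creates the volume without a reservation, which is exactly the over-subscription behavior we asked for by setting "Initial Reservation" to "No."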
Now let's create some NFS exports – one for ISO images and
one for virtual machines…
Creating NFS File Systems

Why create NFS storage for VMware when NexentaStor can
provide as many Zvols as necessary to support our needs? In
a word: flexibility. NFS makes an excellent choice for
storage backing for ISO images and some virtual machine
applications – especially where inexpensive backup tools are
used. For instance, any Linux, Solaris, BSD, FreeBSD or OSX
box can access NFS without breaking a sweat. This means
management of ISO storage, copying backup virtual machines,
or any NFS-to-NFS moving of data can happen outside of
VMware's purview.
That said, moving or changing “live” virtual machine data
from an NFS file system could be a recipe for disaster, but
limiting NFS export exposure to a fixed IP group or
in-addr.arpa group can limit that danger (like LUN masking
in Fiber Channel or iSCSI.) For now, let’s use NFS for the
relatively harmless application of getting ISO images to our
ESX servers and worry about the fancy stuff in another blog…
Like anything else in ZFS, we want to first create a purpose
provisioned folder specifically for our NFS storage. At
first, we will let it be read/write to any host and we will
lock it down later – after we have loaded our ISO images to
it. To create the NFS folder, we go back to "Data
Management” and click the “Create” link from the “Folders”
sub-panel. Since we have well over-subscribed volume0, we
want to put the NFS folders on volume1. Selecting volume1,
setting the name to “default/nfs/iso” and changing the
record size to 16K, we’ll change the case sensitivity to
“mixed” to allow for CIFS access for Windows clients.
Clicking “Create” commits the changes to disk and returns to
the summary page.
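As before, a rough shell equivalent of the folder creation (a
sketch only; note that case sensitivity can only be set at
creation time, which is why it is part of the create command):

    # 16K record size and mixed case sensitivity, matching the GUI choices above.
    zfs create -p -o recordsize=16K -o casesensitivity=mixed \
        volume1/default/nfs/iso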
Now that the NFS folder is created, let’s enable the NFS
service for the first time. Simply check the box in the NFS
column of the “volume1/default/nfs/iso” folder. A pop-up
will ask you to confirm that NFS will be enabled for that
folder: click “OK.” On the “Data Management: Shares” panel,
click on the NFS Server “Configure” link within the “Network
Storage Services” sub-panel. For VMware, change the client
version to "3" and – if it is unchecked – check the
"Service State" box to enable the service; then click "Save"
to continue.
NFS settings to client version 3 for VMware.
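Under the hood, the GUI steps above amount to sharing the folder
over NFS and capping the server at version 3. A rough shell
sketch, assuming an OpenSolaris-based appliance; these commands
are an assumption about what the appliance does for you, not
steps you need to run:

    # Share the ISO folder over NFS and limit the server to NFSv3 for VMware.
    zfs set sharenfs=on volume1/default/nfs/iso
    sharectl set -p server_versmax=3 nfs
    svcadm enable -r svc:/network/nfs/server:default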
Using SMB/CIFS to Access the NFS Folder
With NFS running, let’s help out the Windows administrators
by adding CIFS access to the NFS folder. This way, we can
update the ISO image directory from a Windows workstation if
Linux, Unix, BSD, OSX or any other native NFS system is not
available. The quickest way to accomplish this is through
the “anonymous” CIFS service: just check the selection box
in the CIFS column that corresponds to the ISO folder and
this service will become active.
CIFS can be enabled to allow access to NFS folders from
Windows clients.
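The check box maps, roughly, to turning on the in-kernel
SMB/CIFS share for the same folder. A minimal sketch, assuming
the OpenSolaris CIFS service underneath (the appliance manages
the anonymous "smb" user for you):

    # Share the folder over SMB/CIFS and make sure the SMB server is running.
    zfs set sharesmb=on volume1/default/nfs/iso
    svcadm enable -r svc:/network/smb/server:default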
To access the ISO folder from a Windows machine, enter the
UNC name of the VSA (or “\\IP_ADDRESS” if DNS is not up to
date) into the run-box of your Windows workstation and click
“OK.” A login requester will ask for a user name and
password; the user name is “smb” and the (default) password
is "nexenta" – the default password should be changed
immediately by clicking on the "Configure" link in the
“Network Storage Services” sub-panel of the “Shares” control
page.
At this point we introduce a small problem for NexentaStor:
user access rights for CIFS (user “smb”) are different than
those for NFS (user “nfs”). Therefore, we need to tell the
NFS share that the ESXi host(s) has “root” access to the
file system so that files written as “smb” will be
accessible by the ESXi host. This is accomplished by
entering the FQDN host name of the ESXi server(s) into the
“Root” option field of the NFS Share configuration for the
ISO folder:
Enable the VMware host to see the SMB/CIFS uploaded files by
registering the host as a "root" enabled host.
It is critical to have the ESXi host's name correctly
entered into DNS for this “root” override to work. This
means the host name and the in-addr.arpa name must exist for
the host. If not, simply entering the IP address into the
“root” override will not work. While this may be the first
time in this series that DNS becomes a show-stopper, it will
not be the last: VMware’s DRS requires proper name
resolution to function. It is worth the investment
in time to get DNS straight before moving forward in this
lab.
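A quick sanity check before moving on: both the forward and the
reverse lookup should resolve for each ESXi host (the name and
address below are placeholders):

    nslookup esx1.lab.local    # should return the host's IP address
    nslookup 192.168.1.21      # should return the host's FQDN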
Why NFS and CIFS?
While SMB and CIFS are convenient file sharing mechanisms in
Windows environments, VMware cannot speak them. Instead,
VMware needs either block protocols like iSCSI or Fiber
Channel or a network file system designed for a multi-access
environment like NFS. Since NexentaStor speaks both CIFS and
NFS, this happy union makes an excellent file system bridge
between the Windows world and the world of VMware.
We must reiterate the earlier caution against exposing the
NFS folders containing virtual machines: while this “bridge”
between the two worlds can be used to backup and restore
virtual machines, it could also easily introduce corruption
into an otherwise “cathedral” environment. For now, let’s
stick to using this capability for shipping ISO images to
VMware and leave the heavy lifting for another blog.
Copying ISO Images to the CIFS/NFS Folder
With the CIFS service active, we can begin copying over the
ISO images needed for ESX to, in turn, export to its virtual
machines for their use. This makes installing operating
systems and applications painless and quick. Since copying
the files is a trivial exercise, we will not spend much time
on the process (a short example follows the list below).
However, at this point it
will be good to have the following ISO images loaded onto
the CIFS/NFS share:
VMware ESX Server 4.0 ISO (820MB DVD)
VMware ESXi Server 4.0 ISO (350MB CD-ROM)
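For completeness, here is one way to do the copy from any
NFS-capable Linux or Unix workstation while the export is still
read/write to all hosts (the VSA address is a placeholder):

    mkdir -p /mnt/iso
    mount -t nfs 192.168.1.10:/volumes/volume1/default/nfs/iso /mnt/iso
    cp VMware-VMvisor-Installer-4.0.0-*.iso /mnt/iso/   # plus the ESX DVD ISO
    umount /mnt/iso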
Mounting the NFS Share to the Lab Host
Going back to the VI Client, find the “Storage” link in the
“Hardware” section of the “Configuration” tab. Click on the
“Add Storage…” link on the upper right-hand side of the page
and select “Network File System” from the pop-up; click
“Next >” to continue. In the “Server” entry box, enter
either the host name (if DNS is configured) or the IP
address of the NexentaStor VSA network interface you wish to
use for NFS traffic.
In the “Folder” box, enter the full NFS folder name of the
export from NexentaStor – it will always start with
"/volumes" followed by the full folder name – in our
example, "/volumes/volume1/default/nfs/iso" – and, since
these are ISO images, we will check the "Mount NFS read
only" box to prevent accidental modification from the VMware
side. Finally, enter a file system mount name for the
storage – we use "ISO-VSA01" in our example.
Adding NFS storage to VMware for ISO images (read-only).
Click the “Next >” button to see the “Add Storage” summary,
then click “Finish” to mount the NFS storage. Once mounted,
the ISO will appear as storage in the “Datastores” table in
VMware’s VI Client view. It’s a good idea to confirm that
the host can see your CIFS uploaded images by right-clicking
on the VMware volume and selecting “Browse Datastore…” from
the pop-up menu. If the ISO images do not appear, go back
and confirm that the “root” override host name exists in the
DNS server(s) used by the host ESXi server.
VMware Datastore Browser - viewing the ISO images uploaded
by CIFS to the NFS mount.
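For the command-line inclined, the same mount and check can be
sketched from the ESXi Tech Support Mode shell (or with
vicfg-nas from the remote vSphere CLI); the VSA address below is
a placeholder, and the read-only behavior remains the one chosen
in the wizard:

    # Add the NFS datastore and list the NAS mounts to confirm it.
    esxcfg-nas -a -o 192.168.1.10 -s /volumes/volume1/default/nfs/iso ISO-VSA01
    esxcfg-nas -l
    # The datastore appears under /vmfs/volumes by its label; the ISO images
    # uploaded over CIFS should be listed here.
    ls -l /vmfs/volumes/ISO-VSA01/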
With a convenient way to manage ISO images in our lab and
our VSA in place with NFS and iSCSI assets, we're finally ready
to install our virtual ESX and ESXi hosts. To date, we have
installed ESXi on our lab host, used the host’s DAS as
backing for the NexentaStor VSA, created an iSCSI target for
shared storage between our (future) ESX and ESXi hosts, and
used CIFS to manage shared NFS storage.
NFS or iSCSI: Which is Right for VMware
Now might be a good time to touch on the question of which
shared file system to use for your vMotion lab: VMFS over
iSCSI or NFS. Since we are providing our own storage backing
for our VMware lab, the academic question of which "costs"
more is moot: both are available at the same cost.
While NFS is a file system designed for a general purpose
computing environment, VMFS was purpose-built for VMware.
For the purposes of this lab environment, the differences
between NFS and VMFS/iSCSI are negligible. However, in a
real world environment, each has advantages and
disadvantages. The main advantage for iSCSI is the
ability to easily manage multiple paths to storage. While it
is possible to utilize NFS over multiple physical links –
using 802.3ad, for example – it is not possible to address
the same storage volume by multiple IP addresses. Although
this is very easy to configure for iSCSI, the practical use
of this capability does have its obstacles: iSCSI time-outs
drive the fail-over window – sometimes taking as much as two
minutes to converge to the backup – and this can create
challenges in a production environment.
That said, NFS is more widely available and can be found
natively in most Linux, BSD and Unix hosts – even Apple's
OSX. Windows users are not totally out in the cold, as a
Microsoft-supported form of NFS is available assuming you
jump through hoops well. In an SMB infrastructure, the
ability to export virtual machine files outside of VMFS
presents some unique and cost effective disaster recovery
options.
While we prefer to stick to VMFS using iSCSI (or Fiber
Channel) for the majority of SOLORI’s clients, there are use
cases where the NFS option is smart and it is becoming a
popular alternative to iSCSI. Client infrastructures already
invested in NFS often have no compelling reason to create
entirely new infrastructure management processes just to
accommodate VMFS with iSCSI. Using NexentaStor as an
example, snapshots of NFS folders are really no different
than snapshots of Zvols with one noteworthy exception: NFS
snapshots are immediately available to the host as a
sub-folder within the snapshotted folder; Zvol snapshots
require separate iSCSI mounts of their own.
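To make that difference concrete, here is a hedged sketch using
the dataset names from this lab; the clone-and-share step for
the zvol is our assumption about how such a snapshot would be
surfaced, not a documented NexentaStor procedure:

    # An NFS folder snapshot is browsable in place under .zfs/snapshot:
    zfs snapshot volume1/default/nfs/iso@before-change
    ls /volumes/volume1/default/nfs/iso/.zfs/snapshot/before-change/
    # A zvol snapshot has no file system view; it must be cloned and presented
    # as its own iSCSI LUN before a host can mount it:
    zfs snapshot volume0/targets/target0/lun0@before-change
    zfs clone volume0/targets/target0/lun0@before-change \
        volume0/targets/target0/lun0-restore
    zfs set shareiscsi=on volume0/targets/target0/lun0-restore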
The great thing about a lab exercise like this one is that it
allows us to explore the relative merits of competing
technologies and develop use-case driven practices to
maximize the potential benefits. Therefore, we are leaving
the choice of file systems to the reader and will present a
use case for each in the remaining portions of this series
according to their strengths.
Coming up in Part 4 of this series, we will install ESX
and ESXi, mount our iSCSI target and ISO images, and get
ready to install our vCenter virtual machine.