Storage & Hyper-V: The Choices You Can Make and the Things You Need to Know
Jeff Woolsey, Principal Group Program Manager, Windows Server, Hyper-V
Session WSV312
TRANSCRIPT
Session Objectives And Takeaways
Understand the storage options with Hyper-V, as well as use cases for DAS and SAN
Learn what's new in Windows Server 2008 R2 for storage and Hyper-V
Understand the different high-availability options for Hyper-V with SANs
Learn about performance improvements in VHD, passthrough, and iSCSI direct scenarios
Storage Performance/Sizing
Scale storage performance to the total workload requirements of each VM
Spindles are still key
Don't migrate 20 physical servers with 40 spindles each to a Hyper-V host with 10 spindles
Don't use leftover servers as a production SAN
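As a back-of-the-envelope illustration of the spindle warning above (the IOPS-per-spindle figure is a hypothetical planning number, not from the session), consolidating servers multiplies their aggregate disk demand while the host's spindle count caps what it can deliver:

```python
# Rough consolidation sizing: aggregate IOPS demand vs. host capability.
# 120 IOPS per spindle is a hypothetical planning figure for illustration only.
IOPS_PER_SPINDLE = 120

def aggregate_iops(servers: int, spindles_each: int) -> int:
    """Total random-IO capability across a set of identical servers."""
    return servers * spindles_each * IOPS_PER_SPINDLE

demand = aggregate_iops(20, 40)  # the slide's 20 servers with 40 spindles each
host = aggregate_iops(1, 10)     # a 10-spindle Hyper-V host
print(demand, host, demand // host)  # the host falls short by a factor of ~80
```

Whatever per-spindle figure you plug in, the ratio is what matters: the consolidated host needs comparable aggregate spindle throughput, not just enough capacity.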
Windows Storage Stack
Bus: Storport scans up to 8 buses
Targets: up to 255 per bus
LUNs: up to 255 per target
Support for volumes up to 256TB
Volumes larger than 2TB have been supported since Windows Server 2003 SP1
Common question: what is the supported maximum transfer size? It depends on the adapter/miniport (e.g., QLogic, Emulex)
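Multiplying those Storport limits gives the theoretical addressing ceiling per adapter; this is a naive upper bound for illustration, since real adapters and miniports expose far fewer devices:

```python
# Theoretical addressing ceiling per Storport adapter from the limits above:
# 8 buses x 255 targets per bus x 255 LUNs per target.
BUSES, TARGETS_PER_BUS, LUNS_PER_TARGET = 8, 255, 255
max_luns = BUSES * TARGETS_PER_BUS * LUNS_PER_TARGET
print(max_luns)  # 520200 addressable LUNs (naive upper bound)
```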
Hyper-V Storage Parameters
Maximum VHD size: 2040GB
Physical disk size is not limited by Hyper-V
Up to 4 IDE devices
Up to 4 SCSI controllers with 64 devices each
Optical devices on IDE only
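Those virtual controller limits put a ceiling on attachable disks per VM; a quick calculation (counting every IDE slot as a disk, although one typically holds the boot disk and optical devices also compete for IDE slots):

```python
# Ceiling on attachable disks per Hyper-V VM from the limits above:
# 4 SCSI controllers x 64 devices each, plus 4 IDE devices.
SCSI_CONTROLLERS, DEVICES_PER_CONTROLLER, IDE_DEVICES = 4, 64, 4
max_disks = SCSI_CONTROLLERS * DEVICES_PER_CONTROLLER + IDE_DEVICES
print(max_disks)  # 260 devices per VM at most
```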
Storage Connectivity
From the parent partition:
Direct attached (SAS/SATA)
Fibre Channel
iSCSI
Network attached storage is not supported, except for ISOs
Hot add and remove: virtual disks on the SCSI controller only
ISOs on network Shares
Machine account access to the share
Constrained delegation
SCSI Support in VMs
Supported in: Windows XP Professional x64, Windows Server 2003, Windows Server 2008 & 2008 R2, Windows Vista & Windows 7, SuSE Linux
Not supported in: Windows XP Professional x86 and all other operating systems
Requires integration services installed
Antivirus and Hyper-V
Exclude VHDs & AVHDs (or their directories), the VM configuration directory, and VMMS.exe and VMWP.exe
May not be required on core with no other roles
Run Antivirus in virtual machines
Encryption and Compression
BitLocker on the parent partition is supported
Encrypting File System (EFS): not supported on the parent partition; supported in virtual machines
NTFS compression (parent partition): allowed in Windows Server 2008, blocked in Windows Server 2008 R2
Hyper-V Storage & Pass Through…
Step by Step Instructions
Hyper-V Storage...
Performance-wise, from fastest to slowest:
Fixed disk VHDs / passthrough disks: the same in terms of performance with R2
Dynamically expanding VHDs: grow as needed
Passthrough disks
Pro: the VM writes directly to a disk/LUN without encapsulation in a VHD
Cons: you can't use VM snapshots, and you dedicate a whole disk to one VM
More Hyper-V Storage
Hyper-V provides flexible storage options
DAS: SCSI, SATA, eSATA, USB, FireWire
SAN: iSCSI, Fibre Channel, SAS
High availability / Live Migration requires block-based, shared storage
Guest clustering: via iSCSI only
VM Setting No Pass Through
Computer Management: Disk
Taking a disk offline
Disk is offline…
Pass Through Configured
Disk Types & Performance
Disk type comparison (Read)
[Chart: 64K sequential read and 4K random read throughput (MBps, log scale) for native physical disk, fixed VHD, dynamic VHD, and passthrough, each under Windows Server 2008 (Win2K8) and Windows Server 2008 R2 (Win7).]
Hyper-V R2 Fixed Disks
Fixed virtual hard disks (write):
Windows Server 2008 (R1): ~96% of native
Windows Server 2008 R2: equal to native
Fixed virtual hard disks vs. passthrough:
Windows Server 2008 (R1): ~96% of passthrough
Windows Server 2008 R2: equal to passthrough
Hyper-V R2 Dynamic Disks
Massive performance boost
64K sequential write:
Windows Server 2008 R2: 94% of native, equal to Hyper-V R1 fixed disks
4K random write:
Windows Server 2008 R2: 85% of native
Disk layout - FAQ
Assuming integration services are installed, do I use:
IDE or SCSI?
One IDE channel or two?
One VHD per SCSI controller?
Multiple VHDs on a single SCSI controller?
R2: you can hot add VHDs to virtual SCSI
Disk layout - results
[Chart: throughput (MBps, log scale) for 2 physical disks in the parent; 2 fixed VHDs on 2 SCSI controllers; 2 fixed VHDs on 1 SCSI controller; 2 fixed VHDs on 2 IDE controllers; 2 fixed VHDs on 1 IDE controller.]
Differencing VHDs: performance vs. chain length
[Chart: 64K sequential read and 4K random read throughput (MBps, log scale) for differencing-VHD chain lengths of 1 to 64, under R2 and v1.]
Passthrough disks: when to use
Performance is not the only consideration
Use them if you need support for storage management software, or for backup & recovery applications which require direct access to the disk (VSS/VDS providers)
Allows the VM to communicate via in-band SCSI unfiltered (application compatibility)
Storage Device Ecosystem
Storage device support maps to the same support as exists on physical servers
Advanced scenarios such as Live Migration require shared storage
Hyper-V supports both Fibre Channel & iSCSI SANs connected from the parent
Fibre Channel SANs still represent the largest install base for SANs and see high usage with virtualization
Live Migration is supported with storage arrays which have obtained the Designed for Windows logo and which pass cluster validation
Storage hardware that is qualified with Windows Server is qualified for Hyper-V (applies to devices run from the Hyper-V parent)
Storage devices qualified for Server 2008 R2 are qualified with Server 2008 R2 Hyper-V
No additional storage device qualification is needed for Hyper-V
Storage Hardware & Hyper-V
SAN Boot and Hyper-V
Booting the Hyper-V host from SAN is supported: Fibre Channel or iSCSI from the parent
Booting a child VM from SAN is supported using an iSCSI boot with PXE solution (e.g., emBoot, Double-Take); it must use the legacy NIC
Native VHD boot: booting a physical system from a local VHD is a new feature in Server 2008 R2
Booting a VHD located on a SAN (iSCSI or FC) is not currently supported (being considered for the future)
iSCSI Direct
Microsoft iSCSI Software initiator runs transparently from within the VM
VM operates with full control of LUN
LUN not visible to parent
iSCSI initiator communicates to storage array over TCP stack
Best for application transparency
LUNs can be hot added & hot removed without requiring reboot of VM (2008 and 2008 R2)
VSS hardware providers run transparently within the VM
Backup/Recovery runs in the context of VM
Enables guest clustering scenario
High Speed Storage & Hyper-V
Larger virtualization workloads require higher throughput
True for all scenarios: VHD, passthrough, iSCSI direct
8Gb Fibre Channel & 10Gb iSCSI will become more common
As throughput grows, the requirement to support higher IO to the disks also grows
High Speed Storage & Hyper-V
Customers concerned about performance should not use a single 1 Gig Ethernet NIC port to connect to iSCSI storage
Multiple NIC ports & aggregate throughput using MPIO or MCS is recommended
The Microsoft iSCSI Software Initiator performs very well at 10 Gig wire speed
10Gb Ethernet adoption is ramping up, driven by the increasing use of virtualization
Jumbo Frames
Offers significant performance gains for TCP connections, including iSCSI
Maximum frame size: 9K
Reduces TCP/IP overhead by up to 84%
Must be enabled at all end points (switches, NICs, target devices)
The virtual switch and the virtual NIC are each defined as end points
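The "up to 84%" figure follows from simple frame arithmetic: a 9000-byte jumbo frame carries six times the payload of a standard 1500-byte frame, so far fewer headers are sent per byte of data. A rough sketch of the calculation (header sizes are assumptions: 20-byte IPv4 plus 20-byte TCP with no options; real savings depend on options and offloads):

```python
# Per-byte TCP/IP header overhead at standard vs. jumbo MTU.
# 40 bytes = assumed 20-byte IPv4 header + 20-byte TCP header, no options.
HEADER_BYTES = 40

def overhead(mtu: int) -> float:
    """Header bytes transmitted per byte of payload at a given MTU."""
    return HEADER_BYTES / (mtu - HEADER_BYTES)

reduction = 1 - overhead(9000) / overhead(1500)
print(f"{reduction:.0%}")  # ~84%, matching the slide's figure
```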
Jumbo Frames in Hyper-V R2
Added support in the virtual switch
Added support in the virtual NIC
Integration components required
How to validate that jumbo frames are configured end to end:
ping -n 1 -l 8000 -f <hostname>
-l: payload length in bytes
-f: don't fragment the packet into multiple Ethernet frames
-n: number of echo requests
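You can also test right up to the limit: the maximum unfragmented ping payload is the MTU minus the IP and ICMP headers. A quick calculation (assuming IPv4 with no IP options):

```python
# Largest unfragmented ICMP echo payload for a given MTU (IPv4, no IP options).
IP_HEADER, ICMP_HEADER = 20, 8

def max_ping_payload(mtu: int) -> int:
    """Biggest -l value that passes with -f (don't fragment) set."""
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(9000))  # 8972 for a 9000-byte jumbo MTU
print(max_ping_payload(1500))  # 1472 for a standard MTU
```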
Windows Server 2008 Hyper-V network I/O path
Data packets get sorted and routed to their respective VMs by the VM switch
[Diagram: Ethernet → NIC → miniport driver → virtual machine switch (routing, VLAN filtering, data copy) in the management OS, connected over VMBus to the TCP/IP stacks of VM NIC1 (VM1, port 1) and VM NIC2 (VM2, port 2).]
Windows Server 2008 R2 VMQ
Data packets get sorted into multiple queues in the Ethernet controller based on MAC address and/or VLAN tags
Sorted and queued data packets are then routed to the VMs by the VM switch
Enables the data packets to DMA directly into the VMs, removing the data copy between the memory of the management OS and the VM's memory
[Diagram: as above, but the Ethernet controller contains a switch/routing unit with a default queue plus per-VM queues Q1 and Q2 feeding the virtual machine switch.]
[Chart: throughput in Mbps (0 to 10,000) vs. number of VMs (1 to 8), with and without VMQ.]
Intel tests with Microsoft VMQ:
Quad-core Intel server, Windows Server 2008 R2 Beta, ntttcp benchmark, standard frame size (1500 bytes)
Intel 82598 10 Gigabit Ethernet Controller
Near line-rate throughput with VMDq for 4 VMs
Throughput increase from 5.4Gbps to 9.3Gbps
Source: Microsoft lab, March 2009
More than 25% throughput gain with VMDq/VMQ as VMs scale
*Other names and brands may be claimed as the property of others.
Hyper-V Performance Improvements
For the virtual network interface and iSCSI in Windows 7 / Windows Server 2008 R2
Hyper-V parent (R1/R2): Jumbo Frames, LSO v1, MPIO & MCS, TCP Chimney, LSO v2, RSS
Hyper-V 2008 R2 child: MPIO & MCS, LSO v1, TCP Chimney, LSO v2, Jumbo Frames
Jumbo Frames performance benefits for iSCSI direct connections
Manageability, scalability, performance:
Scalability: Storport support for more than 64 cores; scale up storage workloads; improved scalability for iSCSI & Fibre Channel SANs
Performance: improved solid state disk performance (70% reduction in latency); iSCSI digest offload; increased iSCSI performance; new MPIO load balancing algorithm
Enterprise Storage Features
Automation: MPIO datacenter automation; automated setting of the default MPIO load balance policy; iSCSI Quick Connect; improved SAN configuration and usability; Storage Management support for SAS
Diagnosability: Storport error log extensions; multipath health & statistics reporting; configuration reporting for MPIO; configuration reporting for iSCSI
Reliability: additional redundancy for boot from SAN (up to 32 paths)
iSCSI Quick Connect: new in Windows 7 / Windows Server 2008 R2
High Availability with Hyper-V using MPIO & a Fibre Channel SAN
[Diagram: clients connect over the LAN to clustered Windows Server hosts, which connect through redundant switches and the Fibre Channel fabric to SAN LUNs holding the VHDs.]
In Hyper-V, Fibre Channel LUNs are supported as:
Passthrough disk: connect from the parent, map to the VM; the VM formats it with NTFS
VHD: connect from the Hyper-V host, format with NTFS from the host, create VHDs for each guest
MCS & MPIO with Hyper-V
Provides high availability to storage arrays
Especially important in virtualized environments to reduce single points of failure
Load balancing & failover using redundant HBAs, NICs, switches, and fabric infrastructure
Aggregates bandwidth for maximum performance
MPIO is supported with Fibre Channel, iSCSI, and shared SAS
Two options for multipathing with iSCSI: Multiple Connections per Session (MCS) and Microsoft MPIO (Multipathing Input/Output)
Protects against loss of a data path during firmware upgrades on the storage controller
Configuring MPIO with Hyper-V
MPIO: connect from the parent
Applies to: creating VHDs for each VM; passthrough disks
Additional sessions to the target can also be added through MPIO directly from the guest
Additional connections can be added through MCS with iSCSI using iSCSI direct
iSCSI Perf Best Practices with Hyper-V
Standard networking & iSCSI best practices apply
Use jumbo frames
Use dedicated NIC ports for iSCSI traffic (server to SAN); multiple ports to scale
Client/server (LAN) traffic: multiple ports to scale
Cluster heartbeat (if using a cluster)
Hyper-V management
Hyper-V Enterprise Storage Testing Performance Configuration
Windows Server 2008 R2 Hyper-V
Microsoft MPIO4 Sessions
64K request size
100% read
Microsoft iSCSI Software Initiator
Intel 10Gb/E NIC, RSS enabled (applicable to the parent only)
Jumbo Frames (9000 byte MTU)
LSO V2 (offloads packets up to 256K)
LRO
Hyper-V Server 2008 R2
NetApp FAS 3070
Configuring Hyper-V for Networking & iSCSI
Hyper-V Networking
Two 1Gb/E physical network adapters at a minimum:
One for management
One (or more) for VM networking
Dedicated NIC(s) for iSCSI
Connect the parent to a back-end management network; only expose guests to internet traffic
Hyper-V Network Configurations
Example 1: the physical server has 4 network adapters
NIC 1: assigned to the parent partition for management
NICs 2/3/4: assigned to virtual switches for virtual machine networking
Storage is non-iSCSI, such as direct attach, SAS, or Fibre Channel
Hyper-V Setup & Networking 1
Hyper-V Setup & Networking 2
Hyper-V Setup & Networking 3
Windows Server 2008
Each VM on its own Switch…
[Diagram: the parent partition runs the VM service, WMI provider, and VM worker processes in user mode; in kernel mode, VSPs back VSwitches 1, 2, and 3 bound to NICs 2, 3, and 4, with NIC 1 reserved for management. Three child partitions (two Windows, one Linux) run applications over kernel VSCs. All partitions communicate over VMBus and sit on the Windows hypervisor (ring -1) on Designed for Windows server hardware.]
Hyper-V Network Configurations
Example 2: the server has 4 physical network adapters
NIC 1: assigned to the parent partition for management
NIC 2: assigned to the parent partition for iSCSI
NICs 3/4: assigned to virtual switches for virtual machine networking
Hyper-V Setup, Networking & iSCSI
Windows Server 2008
Now with iSCSI…
[Diagram: as above, but NIC 2 is now assigned to the parent partition for iSCSI; VSwitches 2 and 3 on NICs 3 and 4 carry the virtual machine networking.]
Networking: Parent Partition
Networking: Virtual Switches
New in R2: Core Deployment
There’s no GUI in a Core Deployment, how do I configure which NICs are bound to switches or kept separate for the parent partition?
No Problem…
Hyper-V R2 Manager includes an option to set bindings per virtual switch
Hyper-V Enterprise SAN Customer Deployments
Avanade
Platform:
Windows® Server 2008 Hyper-V™
Microsoft MPIO
Microsoft iSCSI Software Initiator
Failover Cluster
Applications Virtualized
Team Foundation Server
System Center Operations Manager 2007
Windows® Server 2008 Terminal Services
Impetus for Change
Flexibility for Disaster Recovery
Time savings – needed ability to add servers quickly, rather than over weeks
Space is expensive – needed scalable solution without using as much space
Going green – computing power per watt
Much more efficient use of physical resources
Benefits
51% space savings with de-duplication
250GB capacity saved without code update
Auto-provisioning
Highly available virtual machines
Great performance with Hyper-V and NetApp
NetApp® Fabric-Attached Storage (FAS) System
4-Node Hyper-V Cluster
Production VMs
1 Gbit/s LAN
iSCSI
SAN
“Hyper-V allows us to provision new servers quickly and more efficiently utilize hardware resources. Using Hyper-V with our existing NetApp infrastructure provided a cost-effective and flexible solution without sacrificing performance.”
— Andy Schneider, infrastructure architect, Avanade
Lionbridge Technologies: iSCSI/Fibre Channel
Applications Used
Microsoft SQL Server/ Microsoft Exchange Server
Microsoft File Shares
Windows Server 2008 Components
Microsoft iSCSI Software Initiator
Failover Clustering
Hyper-V
Microsoft MPIO
FalconStor MPIO DSM
Pain Points
Single Protocol / Single SAN Vendor Lock-In
Lack of Mirroring, Snapshot, Replication across any SAN regardless of protocol
Solution
Windows Server 2008 iSCSI hosts running Hyper-V, with failover clustering and Microsoft MPIO
SAN Gateway with Snapshot, Mirroring and Sub-block Replication
Benefits
Ability to deploy Multi-Site Clustering
Multiple SAN Vendors
Global IT: Windows Server 2008 Hyper-V iSCSI SAN
SQL Server Windows Server
2008 Failover Cluster
MS Exchange 2007on Windows Server
File Shares
300+Hyper-V Virtual
Machines
iSCSI
Fibre Channel SAN
SAN Gateway
iSCSI
“Hyper-V has allowed us to consolidate 300+ servers to virtual machines. This configuration, when combined with Microsoft's iSCSI, Fibre Channel and multipathing support, provides great flexibility in storage options. We chose FalconStor's SAN Gateway, which enables advanced storage features to be used with any SAN storage and our iSCSI-based virtual machines.”
— Frank Smith, Sr. Systems Engineer
Indiana University, Auxiliary Information Technology: Fibre Channel SAN
Applications:
Internet Information Services (IIS) 6.0/7.0
SQL Server 2005/2008
File and Print Services
Team Foundation Server
Pain points:
Cost of managing DAS storage
Time to provision new servers
Insufficient restore times with bare-metal recovery
Server utilization and legacy hardware
Solution:
90% virtualized datacenter with Hyper-V
Microsoft MPIO
Microsoft Failover Clustering
Consolidated on Compellent Storage Center
Benefits:
Fully virtualized servers and storage
Ease and speed of deployment
Energy savings with server and storage
Shared Storage and reduced footprint
SQL Server on Windows Server
2008
Windows 2008 File Servers
Hyper-V Hosts
Fibre Channel switch with 4Gb dual-path HBAs
Jackson Energy Authority: iSCSI SAN
Applications used:
Exchange, SharePoint, Dynamics
Windows Server 2003 / 2008 / Hyper-V
Terminal Services
Windows Server 2008 components:
Microsoft iSCSI Software Initiator
Microsoft MPIO
Pain points:
High growth and change
No disaster protection
Poor storage utilization
Complex storage management
Solution:
Windows Server 2008 iSCSI hosts
30TB iSCSI SAN with MPIO load balancing
Lefthand MPIO DSM
Two storage pools: SAS and SATA
Multi-site SAN between two sites
Benefits:
High availability across sites
Reduced storage management costs
Increased flexibility in dealing with change and growth
Multi-site iSCSI SAN
[Diagram: a highly available Terminal Server infrastructure spanning Site A and Site B, with a SharePoint server farm, Exchange mail servers, and Dynamics, connected over switched Gb Ethernet to a multi-site iSCSI SAN.]
“When combining Hyper-V, and native Server 2008 technologies such as Microsoft MPIO and the Microsoft iSCSI software initiator, our administration was greatly simplified.”
— Michael Johnston, VP of Information Technology
www.virtualizationperformance.com
Virtualization Performance: iSCSI SAN
Pain points:
Capital expenditures
Rising Datacenter Costs
Power, Cooling, and floor space
Backup and Disaster Recovery
Disk Utilization
Solution:
iSCSI SAN consolidation
Microsoft iSCSI Software Initiator
Microsoft MPIO
Windows Server 2008 Hyper-V
Benefits:
Reduced capital expenditures
Controlled Datacenter costs
Increased Storage capacity to 15TB
Can failover to DR site quickly
[Diagram: Exchange mail server VM, file server VM, and sales SQL database VM on Windows Server 2008 + Hyper-V, connected over switched Gb Ethernet (iSCSI SAN) to iStor iSCSI disk arrays.]
“An iSCSI SAN allowed us to control costs and deliver better services to our clients.”
— Stephen Ames, Virtualization Performance
Microsoft Hyper-V Server V2
New features:
Live Migration
High availability
New processor support: Second Level Address Translation, Core Parking
Networking enhancements: TCP/IP offload support, VMQ & jumbo frame support
Hot add/remove of virtual storage
Enhancements to SCONFIG
Enhanced scalability
Manage remotely
Hyper-V Server V1 vs. V2
Feature | Microsoft Hyper-V Server 2008 | Microsoft Hyper-V Server V2
Processor support | Up to 4 processors | Up to 8 processors
Physical memory support | Up to 32GB | Up to 1TB
Virtual machine memory support | Up to 32GB total (e.g., 31 x 1GB VMs or 5 x 6GB VMs) | 64GB of memory per VM
Live Migration | No | Yes
High availability | No | Yes
Management options | Free Hyper-V Manager MMC; SCVMM | Free Hyper-V Manager MMC; SCVMM
Live Migration $$ Comparison
Configuration | Hyper-V Server R2 | VMware vSphere
3-node cluster, 2-socket servers | Free | $13,470
3-node cluster, 4-socket servers | Free | $26,940
5-node cluster, 2-socket servers | Free | $22,450
5-node cluster, 4-socket servers | Free | $44,900
For $500, add VMM 2008 R2 (Workgroup Edition) to manage Microsoft Hyper-V Server R2: physical-to-virtual conversion (P2V), Quick Storage Migration, library management, heterogeneous management, PowerShell automation, self-service portal, and more
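The quoted vSphere figures scale linearly with socket count; a quick consistency check (using only the list prices on the slide) shows every row works out to the same per-socket price:

```python
# Sanity-check the vSphere pricing table: cost should scale with total sockets.
# Prices are the figures quoted on the slide (2009 list prices).
configs = {
    (3, 2): 13470,  # (nodes, sockets per node): quoted cluster price
    (3, 4): 26940,
    (5, 2): 22450,
    (5, 4): 44900,
}
per_socket = {price / (nodes * sockets) for (nodes, sockets), price in configs.items()}
print(per_socket)  # one value: every configuration implies $2245 per socket
```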
Best Practices & Tips and Tricks
Deployment Considerations
Minimize risk to the parent partition: use Server Core; don't run arbitrary apps; no web surfing
Run your apps and services in guests
Moving VMs from Virtual Server to Hyper-V? FIRST: uninstall the VM Additions
Two physical network adapters at a minimum: one for management (use a VLAN too), one (or more) for VM networking, plus dedicated iSCSI NICs
Connect management to a back-end network; only expose guests to internet traffic
Don't forget the integration components (ICs)! Emulated devices vs. VSCs
Cluster Hyper-V Servers
Live Migration/HA Best Practices
Best practices:
Cluster nodes: hardware with the Windows logo + the Failover Cluster Configuration Program (FCCP)
Storage: Cluster Shared Volumes; storage with the Windows logo + FCCP; Multipath I/O (MPIO) is your friend
Networking: standardize the names of your virtual switches; use multiple interfaces; put CSV on a separate network
Use ISOs, not physical CD/DVDs: you can't Live Migrate a VM that has a physical DVD attached!
More…
Mitigate bottlenecks: processors, memory, storage (don't run everything off a single spindle), networking
VHD compaction/expansion: run it on a non-production system
Use .isos: great performance; can be mounted and unmounted remotely; having them in the SCVMM library is fast & convenient
Creating Virtual Machines
Use the SCVMM library
Steps:
1. Create the virtual machine
2. Install the guest operating system
3. Install the integration components
4. Install anti-virus
5. Install management agents
6. SYSPREP
7. Add it to the VMM library
Windows Server 2003: create VMs 2-way (two virtual processors) to ensure an MP HAL
Conclusions
Significant performance gains between Server 2008 and Server 2008 R2 for enterprise storage workloads
Performance improvements in Hyper-V, MPIO, iSCSI, Core storage stack & Networking stack
For general workloads with multiple VMs, the performance delta between SCSI passthrough & VHD is minimal
iSCSI performance, especially in iSCSI direct scenarios, is vastly improved
Additional Resources
Microsoft MPIO: http://www.microsoft.com/mpio
MPIO DDK: the MPIO DSM sample, interfaces, and libraries will be included in the Windows 7 DDK/SDK
Microsoft iSCSI: http://www.microsoft.com/ (contact: [email protected])
iSCSI WMI interfaces: http://msdn.microsoft.com/en-us/library/ms807120.aspx
Storport website: http://www.microsoft.com/Storport
Storport documentation: Windows Driver Kit; MSDN: http://msdn.microsoft.com/en-us/library/bb870491.aspx
Microsoft Virtualization: http://www.microsoft.com/virtualization/default.mspx
Microsoft Virtualization: http://www.microsoft.com/virtualization/default.mspx
Additional Resources
Hyper-V Planning & Deployment Guide: http://technet.microsoft.com/en-us/library/cc794762.aspx
Microsoft Virtualization website:
www.microsoft.com/virtualization
http://www.microsoft.com/virtualization/partners.mspx
http://blogs.technet.com/virtualization
http://blogs.technet.com/jhoward/default.aspx
http://blogs.msdn.com/taylorb/
Partner References
Intel: http://www.intel.com
Emulex: http://www.emulex.com
Alacritech: http://www.alacritech.com
NetApp: http://www.netapp.com
3PAR: http://3par.com
iStor: http://istor.com
LeftHand Networks: http://www.lefthandnetworks.com
Double-Take: http://www.doubletake.com
Compellent: http://www.compellent.com
Dell/EqualLogic: http://www.dell.com
FalconStor: http://www.falconstor.com
question & answer
www.microsoft.com/teched
Sessions On-Demand & Community
http://microsoft.com/technet
Resources for IT Professionals
http://microsoft.com/msdn
Resources for Developers
www.microsoft.com/learning
Microsoft Certification & Training Resources
Resources
Complete an evaluation on CommNet and enter to win!
© 2009 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.