P2P Session
VIEW POC Findings
Topic: Resource and Hardware Allocation
June 2009
James Soler
Matt Mancini
Agenda
Topic Overview
Test Criteria
Results
Recommendations
Topic Overview
During the second and third weeks of June 2009, we benchmarked and tested the functionality of VMware View 3.1 against our current POC hardware
Our goals were to:
Document how our hardware performed under load
Document how we could control the load
Establish a baseline for hardware and resource loads
Recommend View server hardware based on our findings
Test Criteria
The Blades
POC blade: BL460c / 32 GB RAM / 6 NICs / 2 x quad-core Xeon X5355 (Clovertown) 2.66 GHz CPUs (~21.5 GHz total) / 2 Fibre Channel HBAs / ESX 3.5 U4
The Rack Servers
Demo servers: DL380 G6 / 108 GB RAM / 10 NICs / 2 x quad-core Xeon X5560 (Gainestown, Nehalem) 2.8 GHz CPUs (~22.4 GHz total) / 6 Fibre Channel HBAs / ESX 3.5 U4
Test VMs
Max: 1 vCPU at 2.66 GHz / 1 GB RAM
Normal: 1 vCPU limited to 1.5 GHz / 1 GB RAM (limited using a vCenter resource pool; a programmatic sketch follows below)
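As an aside, the per-VM cap described above can also be applied programmatically. The sketch below is illustrative only: it uses the modern pyVmomi SDK against a hypothetical vCenter (host name, credentials, and the view-desktop- naming prefix are assumptions), whereas the POC itself applied the limit through a vCenter resource pool on ESX 3.5-era tooling.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Hypothetical connection details; adjust for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def cap_vm(vm, cpu_mhz=1500, mem_mb=1024):
    """Apply the POC 'Normal' profile: cap the VM at ~1.5 GHz CPU and 1 GB RAM."""
    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(limit=cpu_mhz)
    spec.memoryAllocation = vim.ResourceAllocationInfo(limit=mem_mb)
    return vm.ReconfigVM_Task(spec=spec)

# Cap every desktop matching an assumed naming convention.
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
tasks = [cap_vm(vm) for vm in vms if vm.name.startswith("view-desktop-")]
Disconnect(si)
```

Setting the same limits at the resource pool level, as was done in the POC, achieves a similar cap for every desktop placed in the pool without touching each VM individually.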
View Environment
Thin Client
Desktop
Notebook
View Connection Manager
vCenter Server
ESX Host Cluster
View Virtual Desktop Pool
Test Criteria
How we tested the blades and VMs
We deployed between 1 and 41 VMs on each server
VMs were deployed with Windows XP using VMware's XP deployment guide
We used one VM as the benchmark and the others as the load
The load VMs ran PassMark's BurnInTest, which let us drive the CPU / RAM / NIC / disk at different saturation levels
Saturation levels were tested in different combinations of 5, 10, 25, 50, and 100%
While BurnInTest was running, we ran PassMark's PerformanceTest on the benchmark VM to establish a score; each score was verified two and often three times (the test matrix is sketched below)
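To make that matrix concrete, here is a small, purely illustrative sketch of the combinations we cycled through; the load itself came from PassMark BurnInTest and the scores from PassMark PerformanceTest, and the VM counts shown are an assumed subset of the 1 to 41 range:

```python
from itertools import product

hosts = ["BL460c", "DL380 G6"]
vm_counts = [10, 20, 30]            # assumed subset of the 1-41 VMs deployed
saturation = [5, 10, 25, 50, 100]   # BurnInTest load levels (%)
verification_runs = 3               # each score was checked two to three times

for host, count, load in product(hosts, vm_counts, saturation):
    # count - 1 load VMs run BurnInTest at `load`%; one VM runs PerformanceTest.
    print(f"{host}: {count - 1} load VMs @ {load}%, 1 benchmark VM, "
          f"{verification_runs} verification runs")
```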
Baseline Performance Test
Environment
We did test Vista; however, it seemed to make no difference in performance. In fact, it seemed to waste resources, as it wouldn't settle down when idle
During testing, unlimited VMs would saturate the BL460c blades and the benchmark VM would score very low
If VMs were restricted by CPU GHz, they tended to work better together and scored higher on the BL460c blades, even at full load
Limiting resources on the DL380 G6 did not have the same effect as on the BL460c blades; in most cases it actually hurt overall performance, even with 20+ VMs
Resource Pool
PassMark ratings of the benchmark VM, with and without the 1.5 GHz resource limit:

| Host | VMs | Load | Limited @ 1.5 GHz | Unlimited |
| --- | --- | --- | --- | --- |
| BL460c | 10 | 100% | 331 | 308 |
| BL460c | 20 | 100% | 169 | 128 |
| DL380 G6 | 10 | 100% | 447 | 535 |
| DL380 G6 | 30 | 25% | 358 | 370 |
Multipathing
We also found early on that the virtual desktops were scoring very low on the Disk Mark compared to physical systems
Once we enabled multipathing within vCenter, our disk I/O doubled, improving the overall score of the View desktops (a read-only path check is sketched below)
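A quick way to confirm multipathing is actually in effect is to ask vCenter how many paths each LUN reports. This is a read-only sketch using the modern pyVmomi SDK with placeholder connection details; the original POC did the equivalent through the vCenter client against ESX 3.5.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ctx)   # placeholder credentials
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view
for host in hosts:
    mp = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
    if not mp:
        continue  # host has no multipath-capable storage
    for lun in mp.lun:
        active = [p for p in lun.path if p.pathState == "active"]
        print(f"{host.name} {lun.id}: {len(lun.path)} paths, "
              f"{len(active)} active, policy={lun.policy.policy}")
Disconnect(si)
```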
Multipath Disk I/O Comparison

| Configuration (BL460c, 10 VMs @ 100%) | Disk Mark | PassMark Rating |
| --- | --- | --- |
| Without multipathing | 31.3 | 193 |
| With multipathing | 69.5 | 308 |
Logical Test Layout
Host server: 10 virtual desktops running @ 100% load
1 virtual desktop running the benchmark tool
View Virtual Desktop Pool
10 VMs @ 100%
Logical Test Layout
Host server: 20 virtual desktops running @ 25% load
1 virtual desktop running the benchmark tool
View Virtual Desktop Pool
20 VMs @ 25%
Logical Test Layout
Host server: 30 virtual desktops running @ 25% load
1 virtual desktop running the benchmark tool
View Virtual Desktop Pool
30 VMs @ 25%
Recommendations for BL460c
We should set VM resource limits to:
1 vCPU at 1.5 GHz
1 GB of RAM
Resource pools should be limited to 1.5 GHz less than the ESX server's maximum CPU capacity
Disk I/O is a performance issue. By enabling multipathing and keeping the HBA-to-LUN ratio as low as possible, our performance issues should be minimized.
If we are to deploy 600 VMs across 2 TB with 4 to 8 LUNs, then we should consider a 1:1 or 1:2 HBA-to-LUN ratio (see the sizing arithmetic after this list)
4 VMs per core, due to performance issues at maximum capacity
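The arithmetic behind that storage sizing, using only the figures above (600 VMs, 2 TB, 4 to 8 LUNs); reading the 1:1 / 1:2 ratio as "LUNs per HBA" is our assumption:

```python
# Quick sizing arithmetic: 600 VMs on 2 TB split across 4 or 8 LUNs.
total_vms, total_gb = 600, 2 * 1024
for luns in (4, 8):
    print(f"{luns} LUNs -> {total_vms / luns:.0f} VMs per LUN, "
          f"{total_gb / luns:.0f} GB per LUN")
# 4 LUNs -> 150 VMs and 512 GB per LUN; 8 LUNs -> 75 VMs and 256 GB per LUN.
# At a 1:1 or 1:2 HBA-to-LUN ratio, each HBA then carries one or two LUNs.
```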
Recommendations for DL380 Physical Servers
Resource limits are not necessary with the newer processors
Minimum of 4 to 6 HBAs per server
6 to 8 NICs are adequate
96 GB of memory or more per server
5 VMs per core, conservative compared with VMware's 8 VMs per core (a per-host capacity check follows below)
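A back-of-envelope check of what these figures mean per host; the core count comes from the DL380 G6 spec on the Test Criteria slide, everything else from the list above:

```python
# Capacity per DL380 G6 under these recommendations.
cores = 2 * 4              # two quad-core X5560s per server
vms_per_core = 5           # conservative figure recommended above
mem_gb = 96                # recommended minimum memory per server
vms_per_host = cores * vms_per_core
print(f"{vms_per_host} VMs per host, ~{mem_gb / vms_per_host:.1f} GB of RAM per VM")
# -> 40 VMs per host, ~2.4 GB per VM: comfortable headroom over the 1 GB test VMs.
```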
Conclusion
Depending on your hardware and performance requirements, View 3.1 will easily support 4 to 8 VMs per core.
vCenter configuration is very important to the performance of View.
Build your environment based on your needs.