Virtualisation overview

TRANSCRIPT

1. 3.2 PowerVM Virtualization plain and simple (Copyright IBM Corporation 2009, IBM System p)

2. Goals with Virtualization
- Lower costs and improve resource utilization
  - Data center floor space reduction, or increased processing capacity in the same space
  - Environmental challenges (cooling and energy)
  - Consolidation of servers
  - Lower overall solution costs: less hardware, fewer software licenses
- Increase business flexibility: meet ever-changing business needs with faster provisioning
- Improve application availability: flexibility in moving applications between servers

3. The virtualization elevator pitch
The basic elements of PowerVM:
- Micro-partitioning: allows 1 CPU to look like 10
- Dynamic LPARs: moving resources between live partitions
- Virtual I/O Server: partitions can share physical adapters
- Live partition mobility: using POWER6
- Live application mobility: using AIX 6.1

4. First there were servers
- One physical server for one operating system
- Additional physical servers added as the business grows
(Diagram: physical view vs. users' view of individual servers.)

5. Then there were logical partitions
- One physical server was divided into logical partitions
- Each partition is assigned a whole number of physical CPUs (or cores)
- One physical server now looks like multiple individual servers to the user
(Diagram: an 8-CPU server divided into partitions of 1, 3, 2, and 2 CPUs.)

6. Then came dynamic logical partitions
- Whole CPUs can be moved from one partition to another
- CPUs can be added to and removed from partitions without shutting the partition down
- Memory can also be dynamically added to and removed from partitions
(Diagram: the same 8-CPU server, with CPUs moving between partitions.)

7. Dynamic LPAR
- Standard on all POWER5 and POWER6 systems
- Move resources between live partitions
(Diagram: an HMC managing four partitions - Production, Legacy Apps, Test/Dev, and File/Print - running AIX 5L and Linux on top of the hypervisor.)

8. Now there is micro-partitioning
- A logical partition can now have a fraction of a full CPU
- Each physical CPU (core) can be spread across up to 10 logical partitions
- A physical CPU can be placed in a pool of CPUs that are shared by multiple logical partitions
- One physical server can now look like many more servers to the user
- CPU resources can also be moved dynamically between logical partitions
(Diagram: an 8-CPU server divided into partitions of 0.2, 2.3, 1.2, 1, 0.3, 1.5, and 0.9 CPUs.)

9. Micro-partitioning terminology
- Logical partitions (LPARs) can be defined with dedicated or shared processors
- Processors not dedicated to an LPAR are part of the pool of shared processors
- Processing capacity for a shared LPAR is specified in terms of processing units, with as little as 1/10 of a processor

10. Micro-partitioning: more details
Let's look deeper into micro-partitioning.

11. Micro-partitioning terminology (details)
- A physical CPU is a single core, also called a processor
- Micro-partitioning introduces the virtual CPU concept: a virtual CPU can be a fraction of a physical CPU, but cannot be more than a full physical CPU
- IBM's simultaneous multi-threading (SMT) technology enables two threads to run on the same processor at the same time; with SMT enabled, the operating system sees twice the number of processors
- Each logical CPU appears to the operating system as a full CPU
(Diagram: a physical CPU split into virtual CPUs by micro-partitioning; SMT splits each virtual CPU into two logical CPUs.)
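On a running AIX partition, the entitlement, virtual processor count, capping mode, and SMT state described above can be inspected from the command line. The following is a minimal sketch, assuming a typical AIX 5.3/6.1 shared-processor partition; the exact fields reported vary by AIX level.

    # Show the partition configuration as seen by AIX: entitled capacity,
    # online virtual CPUs, capped/uncapped mode, SMT, and so on.
    lparstat -i

    # Show whether simultaneous multi-threading is enabled and how many
    # logical CPUs each virtual processor provides.
    smtctl

    # Disable or re-enable SMT on the running system (takes effect immediately).
    smtctl -m off -w now
    smtctl -m on -w now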
12. Micro-partitioning terminology (details)
- The LPAR definition sets the options for processing capacity: Minimum, Desired, and Maximum
- The processing capacity of an LPAR can be changed dynamically: by the administrator at the HMC, or automatically by the hypervisor
- The LPAR definition also sets the behavior under load:
  - Capped: LPAR processing capacity is limited to the desired setting
  - Uncapped: the LPAR is allowed to use more than it was given

13. Basic terminology around logical partitions
(Diagram: installed physical processors divided into deconfigured, inactive (CUoD), dedicated, and shared processors; the shared processor pool backs shared-processor partitions (SMT on or off) with entitled capacity, virtual processors, and logical (SMT) processors; dedicated-processor partitions own whole processors.)

14. Capped and uncapped partitions
- Capped partition: not allowed to exceed its entitlement
- Uncapped partition: allowed to exceed its entitlement
- Capacity weight: used for prioritizing uncapped partitions
  - Value 0-255
  - A value of 0 is referred to as a soft cap
Note: the CPU utilization metric has less relevance in an uncapped partition.

15. What about system I/O adapters?
- Back in the old days, each partition had to have its own dedicated adapters: one Ethernet adapter for a network connection, and one SCSI or HBA card to connect to local or external disk storage
- The number of partitions was limited by the number of available adapters
(Diagram: four logical partitions, each with its own Ethernet and SCSI adapters.)

16. Then came the Virtual I/O Server (VIOS)
- The Virtual I/O Server allows partitions to share physical adapters
- One Ethernet adapter can now provide a network connection for multiple partitions
- Disks on one SCSI or HBA card can now be shared with multiple partitions
- The number of partitions is no longer limited by the number of available adapters
(Diagram: a Virtual I/O Server partition owning the Ethernet and SCSI adapters and sharing them with the client partitions over the Ethernet network.)

17. Virtual I/O Server and SCSI disks
(Image-only slide.)

18. Integrated Virtual Ethernet
- Integrated Virtual Ethernet vs. the Virtual I/O Shared Ethernet Adapter
- With Integrated Virtual Ethernet, VIOS setup is not required for sharing Ethernet adapters
(Diagram: on one side, a VIOS with a Shared Ethernet Adapter bridging a PCI Ethernet adapter to the virtual Ethernet switch in the Power Hypervisor; on the other, LPARs connecting directly to an Integrated Virtual Ethernet adapter.)
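As a concrete illustration of the Shared Ethernet Adapter and virtual SCSI sharing just described, the sketch below shows the kind of commands an administrator might run from the VIOS restricted shell (padmin). Device names such as ent0, ent2, vhost0, hdisk2, and the client1_rootvg device name are assumptions for this example; the real names depend on the configuration.

    # Bridge the physical Ethernet adapter (ent0) to the virtual Ethernet
    # adapter (ent2) so that client partitions reach the external network
    # through one physical port.
    mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

    # Map a physical disk (hdisk2) to the virtual SCSI server adapter (vhost0)
    # that is paired with a client partition.
    mkvdev -vdev hdisk2 -vadapter vhost0 -dev client1_rootvg

    # List the resulting virtual SCSI and virtual network mappings.
    lsmap -all
    lsmap -all -net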
19. Let's see it in action
- Now let's see this technology in action
- This demo illustrates the topics just discussed

20. (Image-only slide.)

21. Shared processor pools
- It is possible to have multiple shared processor pools
- Let's dive in deeper

22. Multiple Shared Processor Pools
- Useful for multiple business units in a single company: resource allocation
- Only license the relevant software based on the VSP Max Cap, the total capacity used by a group of partitions
- Still allow other partitions to consume capacity not used by the partitions in the VSP
(Diagram: one physical shared pool divided into virtual shared pools; VSP1 with Max Cap=4 holds a Linux partition running software A, B, C and an AIX 5L partition running software X, Y, Z; VSP2 with Max Cap=2 holds an AIX 5L partition running DB2.)

23. AIX 6.1 introduces Workload Partitions
- Workload partitions (WPARs) are yet another way to create virtual systems
- WPARs are partitions within a partition
- Each WPAR is isolated from the others
- AIX 6.1 can run on POWER5 or POWER6 hardware

24. AIX 6 Workload Partitions (details)
- A WPAR appears to be a stand-alone AIX system
- Created entirely within a single AIX system image, entirely in software (no hardware assist or configuration)
- Provides an isolated process environment: processes within a WPAR can only see other processes in the same partition
- Provides an isolated file system space: a separate branch of the global file system space is created and all of the WPAR's processes are chrooted to this branch, so processes within a WPAR see files only in this branch
- Provides an isolated network environment: separate network addresses, hostnames, and domain names; other nodes on the network see the WPAR as a stand-alone system
- Provides WPAR resource controls: the amount of system memory, CPU resources, and paging space allocated to each WPAR can be set
- Shared system resources: OS, I/O devices, shared libraries
(Diagram: Workload Partitions A through E running inside a single AIX 6 image.)

25. Inside a WPAR
(Image-only slide.)

26. Live Application Mobility
- The ability to move a Workload Partition from one server to another
- Provides outage avoidance and multi-system workload balancing
- Policy-based automation can provide more efficient resource usage
(Diagram: the Workload Partitions Manager applying a policy to move WPARs - Web, Application Server, eMail, Dev, Billing, QA, Data Mining - between AIX #1 and AIX #2, with both systems sharing NFS storage.)

27. Live application mobility in action
- Let's see this technology in action with another demo
- We need to exit the presentation in order to run the demo

28. POWER6 hardware introduced partition mobility
- With POWER6 hardware, partitions can now be moved from one system to another without stopping the application
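Live Partition Mobility of this kind is driven from the HMC. The sketch below is only a rough illustration: the managed system and partition names are placeholders, and the options available depend on the HMC and firmware level, so consult the HMC documentation before using it.

    # Validate that the partition can be moved between the two POWER6
    # managed systems (placeholder names).
    migrlpar -o v -m source_system -t target_system -p my_lpar

    # Perform the active migration; the partition and its application keep
    # running while memory state is transferred to the target system.
    migrlpar -o m -m source_system -t target_system -p my_lpar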