Container-based OS Virtualization
A Scalable, High-performance Alternative to Hypervisors
Stephen Soltesz, Herbert Pötzl, Marc Fiuczynski, Andy Bavier & Larry Peterson
2
PlanetLab Usage
Typical node (2.4 GHz, 1 GB, 100-200 GB disk)
• ~250-300 configured VM file systems on disk
• 40-90 resident VMs with ≥ 1 process
• 5-20 active VMs using the CPU
[Chart: Number of Resident VMs (0-100) and Number of Active VMs (0-30) on PlanetLab nodes]
3
What is the Trade-Off?
4
Usage Scenarios
Efficiency -> Performance: IT data centers; Grid and HPC clusters
Efficiency -> Low overhead: Linux-based phones; OLPC laptops; enhanced WiFi routers
Efficiency -> Scalability: web hosting; Amazon EC2; PlanetLab and VINI (network research)
5
Presentation Outline
Why Container-based OS Virtualization?
High-level Design: Hypervisor vs. Container-based OS
Guest VM Environment: Xen vs. VServer
Evaluation
6
Hypervisor Design
[Diagram: hypervisor design with a separate Driver Domain]
7
Container Design
[Diagram: container design with guests VM1, VM2, ..., VMn]
8
Feature Comparison
Feature                   Hypervisor   Container
Multiple Kernels          X
Load Arbitrary Modules    X
Local Administration      All
Live Migration            X            OpenVZ
Live System Update        X            Zap
9
Presentation Outline
Why Container-based OS Virtualization?
High-level Design: Hypervisor vs. Container-based OS
Guest VM Environment: Xen vs. VServer
Evaluation
10
Xen 3.0 Guest VM
I/O Path
• Process to Guest OS
• Guest OS to IDD (isolated driver domain)
Resource Control (Driver Domain)
• Map virtual devices
• CFQ for disk, HTB for network
Security Isolation (Hypervisor)
• Access mediated at the physical level: PCI addresses, virtual memory
Resource Control (Hypervisor)
• Allocate resources and schedule VMs
• Schedules all VMs: guest VMs and the IDD alike, so work inside a guest sees two levels of scheduling (see the sketch after this slide)
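To make the two scheduling levels concrete, here is a minimal, hypothetical simulation, not Xen's actual credit scheduler: the VM names, runqueues, and round-robin policies below are all made up. Level one is the hypervisor choosing which VM (guest or IDD) gets the CPU; level two is that guest's own kernel choosing a process. A container system, by contrast, has only the single host-kernel scheduler.

```python
from collections import deque
from itertools import islice

# Hypothetical runnable processes inside each guest VM (plus the driver domain).
vms = {
    "dom0_IDD": deque(["netback", "blkback"]),
    "guestA":   deque(["httpd", "mysqld"]),
    "guestB":   deque(["worker"]),
}

def hypervisor_schedule(vms):
    """Level 1: the hypervisor round-robins over VMs (guests and the IDD alike)."""
    order = deque(vms.keys())
    while True:
        vm = order[0]
        order.rotate(-1)
        yield vm

def guest_schedule(runqueue):
    """Level 2: the chosen guest's own kernel picks one of its processes."""
    proc = runqueue[0]
    runqueue.rotate(-1)
    return proc

for vm in islice(hypervisor_schedule(vms), 6):
    print(f"hypervisor picks {vm:9s} -> guest kernel runs {guest_schedule(vms[vm])}")
```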
11
VServer 2.0 Guest VM
Security Isolation
• Access to logical objects: context ID filter, user IDs, SHM & IPC addresses, file system barriers
Resource Control
• Map container to HTB for network, CFQ for disk
• Logical limits: processes, open file descriptors, memory locks
Optimizations
• File-level copy-on-write
I/O Path
• Process to COS (the container-based OS)
Scheduler
• Single level: a per-context token bucket filter preserves the O(1) scheduler (see the sketch after this slide)
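A minimal sketch of the token bucket idea, with made-up context names, fill rates, and time-slice costs (the real VServer scheduler keeps per-context buckets inside the kernel and leaves the O(1) process scheduler untouched): each context earns tokens at its fill rate up to a bucket size and may run a time slice only if it can pay for it, which bounds its long-run CPU share.

```python
class TokenBucket:
    """Toy per-context CPU token bucket (illustrative, not VServer's real data structure)."""

    def __init__(self, fill_rate, bucket_size):
        self.fill_rate = fill_rate      # tokens earned per scheduler tick
        self.bucket_size = bucket_size  # cap on accumulated tokens
        self.tokens = 0.0

    def tick(self):
        """Earn tokens for one scheduler tick, up to the bucket size."""
        self.tokens = min(self.bucket_size, self.tokens + self.fill_rate)

    def try_run(self, cost=1.0):
        """A context may run only if it can pay `cost` tokens for the time slice."""
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Hypothetical contexts: ctx_web is granted a larger CPU share than ctx_batch.
contexts = {"ctx_web": TokenBucket(fill_rate=0.8, bucket_size=5),
            "ctx_batch": TokenBucket(fill_rate=0.3, bucket_size=5)}

for tick in range(10):
    for bucket in contexts.values():
        bucket.tick()
    runnable = [name for name, bucket in contexts.items() if bucket.try_run()]
    print(f"tick {tick}: runnable = {runnable}")
```

Over many ticks, ctx_web runs in roughly 80% of slices and ctx_batch in roughly 30%, i.e. each context's share is bounded by its fill rate without replacing the underlying scheduler.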
12
VServer Implementation
• 8,700 lines of code across 350+ files
• Leverages existing kernel implementations, applied to logical resources (see the sketch after this slide)
• Not architecture specific: MIPS, ARM, SPARC, etc.
• Low overhead
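As a hedged illustration of what limits on logical resources look like, the sketch below uses Python's standard resource module to cap processes, open file descriptors, and locked memory for a single process. The cap values are invented, and VServer enforces analogous limits per container context rather than per process.

```python
import resource

# Hypothetical per-process caps; VServer applies analogous logical limits per
# container context. Values are illustrative only.
CAPS = {
    "processes":         (resource.RLIMIT_NPROC,   256),
    "open files":        (resource.RLIMIT_NOFILE,  512),
    "locked memory (B)": (resource.RLIMIT_MEMLOCK, 4 * 2**20),
}

for name, (limit, cap) in CAPS.items():
    soft, hard = resource.getrlimit(limit)
    # Only tighten the soft limit; raising the hard limit would need privilege.
    new_soft = cap if hard == resource.RLIM_INFINITY else min(cap, hard)
    resource.setrlimit(limit, (new_soft, hard))
    print(f"{name}: soft limit now {resource.getrlimit(limit)[0]}")
```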
13
Guest Comparison
                          Xen 3.0                VServer 2.0
Level of Virtualization   Physical               Logical
Resource Control          HTB, CFQ, etc.         HTB, CFQ, etc.
Scheduler                 2 levels: Hyp + Guest  1 level
I/O Path                  3 transfers            2 transfers
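Both columns lean on the same standard Linux mechanisms for resource control (HTB for network, CFQ for disk). As an illustration only, and not the paper's actual configuration, the sketch below sets up one HTB class per guest through tc(8); the interface name, rates, and class IDs are hypothetical, and the commands need root. Disk shares would be handled analogously with CFQ I/O priorities (for example via ionice).

```python
import subprocess

DEV = "eth0"  # hypothetical outbound interface; rates and class IDs are made up

def tc(*args):
    """Run one tc(8) command (requires root and the iproute2 tools)."""
    cmd = ["tc", *args]
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Root HTB qdisc, then one rate-limited class per guest.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "10")
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:10",
   "htb", "rate", "50mbit", "ceil", "100mbit")   # guest A's share
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:20",
   "htb", "rate", "20mbit", "ceil", "100mbit")   # guest B's share
```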
14
Configuration
System Software
  Kernel         Linux          VServer 2.0    Xen 3.0.4
  Version        2.6.16.33
  Distribution   Fedora Core 5
  File system    Independent LVM partitions
  Scheduler      O(1)           O(1) + TBF     Credit

Hardware
  Machine        HP DL360 G4p
  CPU            2 x 1-core Xeon with 2 MB L2
  Network        2-port GbE
  Memory         4 GB
15
Network I/O: TCP Receive
[Chart: Iperf TCP receive. Normalized throughput and CPU % for Linux, VServer, and Xen3 (one CPU / two CPUs) under UP and SMP kernels. CPU utilization labels: 71.9%, 70.3%, 100.0%, 134.8%, 77.8%, 77.6%, 173.1%]
16
Disk I/O: Write
[Chart: dd and DBench disk-write performance relative to Linux-UP (0 to 1.2) for Linux, VServer, and Xen under UP and SMP kernels]
17
CPU & Memory Performance
[Chart: kernel compile and OSDB IR performance relative to Linux-UP (0 to 2) for Linux, VServer, and Xen under UP and SMP kernels]
18
Performance at Scale - UP
[Chart: OSDB IR + cross-section test, UP kernel. Average aggregate throughput (tuples/sec, 0 to 250) for 1, 2, 4, and 8 concurrent guests on VServer and Xen3]
19
Performance at Scale - SMP
[Chart: OSDB IR + cross-section test, SMP kernel. Average aggregate throughput (tuples/sec, 0 to 500) for 1, 2, 4, and 8 concurrent guests on VServer and Xen3]
20
Conclusion
Virtualization for manageability
• Variety of current implementations; no one-size-fits-all solution
• Hypervisors offer compelling features
• Containers are built on well-understood technology
Isolation vs. efficiency trade-off
• When the trade-off is possible, VServer is an alternative: native I/O efficiency, a low-overhead implementation, and better scalability
21
Questions
Thank you
22
23
Speculation on Future Trends
• Future improvements to both platforms
• COS-Linux + Linux-as-Hypervisor (KVM)
24
Conclusion
Performance, lower overhead, scalability