TRANSCRIPT
1
XenSocket: VM-to-VM IPC
John Linwood Griffin, Jagged Technology
virtual machine inter-process communication
Suzanne McIntosh, Pankaj Rohatgi, Xiaolan Zhang
IBM Research
Presented at ACM Middleware: November 28, 2007
2
What we did: Reduce work on the critical path

Before XenSocket (VM 1 → Domain-0 → VM 2, on Xen):
• Put packet into a page
• Ask Xen to remap page
• Route packet
• Ask Xen to remap page

With XenSocket (VM 1 → VM 2, on Xen):
• Allocate pool of pages (once)
• Ask Xen to share pages (once)
• Write into pool; read from pool
3
The standard outline
• What we did
• (Why) we did what we did
• (How) we did what we did
• What we did (again)
4
IBM building a stream processing system with high-throughput requirements
• Enormous volume of data (e.g., video) enters the system
• Independent nodes process and forward data objects
• Design for isolated, audited, and profiled execution environments
5
x86 virtualization technology provides isolation in our security architecture
[Diagram: Node 1 runs VM 1–VM 4 on Xen; data objects flow among these VMs and to other physical nodes (Node 2, Node 3, Node 4)]
6
Using the Xen virtual network resulted in low throughput at maximum CPU usage
• Baseline (one Linux VM, Process 1 → Process 2): UNIX socket, 14 Gbit/s
• Inter-VM (VM 1 → Domain-0 → VM 2 on Xen): TCP socket, 0.14 Gbit/s at 100% CPU (VM 1), 20% CPU (Domain-0), 100% CPU (VM 2)
7
Our belief: root causes are Xen hypercalls and the network stack

Before XenSocket (VM 1 → Domain-0 → VM 2, on Xen):
• Put packet into a page; only 1.5 KB of each 4 KB page is used
• Ask Xen to swap pages; packet is routed; ask Xen to swap pages again
• A Xen hypercall may be invoked after only 1 packet is queued
• Victim pages must be zeroed
8
The standard outline
• What we did
• (Why) we did what we did
• (How) we did what we did
• What we did (again)
9
XenSocket hypothesis: Cooperative memory buffer improves throughput

With XenSocket (VM 1 → VM 2, on Xen):
• Allocate a 128 KB pool of pages; ask Xen to share the pages
• No per-packet processing; writes are visible immediately
• Pages are reused in a circular buffer
• Hypercalls are still required for signaling (but fewer)
10
Caveat emptor
• We used Xen 3.0; the latest is Xen 3.1
• Xen networking is reportedly improved
• Shared-memory concepts remain valid
• Released under the GPL as XVMSocket: http://sourceforge.net/projects/xvmsocket/ (the community is porting it to Xen 3.1)
11
Sockets interface; new socket family used to set up shared memory
Server:
  socket(); bind(sockaddr_inet); listen(); accept();
  socket(); bind(sockaddr_xen);  ← system returns grant # for the client
Client:
  socket(); connect(sockaddr_inet);
  socket(); connect(sockaddr_xen);
Address contents:
• sockaddr_inet: remote address, remote port #, local port #
• sockaddr_xen (server): remote VM #
• sockaddr_xen (client): remote VM #, remote grant #
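A minimal sketch of the address carried in the Xen step, assuming field and constant names of my own choosing (AF_XEN, sockaddr_xen, remote_domid, grant_ref are illustrative; the real XenSocket module defines its own):

```c
/* Illustrative sketch only: models the two-step setup above, where the
 * inet connection is an ordinary TCP control channel and the Xen-family
 * address carries the VM number and grant reference.                    */
#include <assert.h>
#include <stdint.h>

#define AF_XEN 32  /* illustrative address-family number, not the real one */

struct sockaddr_xen {
    uint16_t sxen_family;   /* AF_XEN */
    uint16_t remote_domid;  /* remote VM # */
    uint32_t grant_ref;     /* grant #: returned by the system at the
                               server's bind(), then conveyed to the
                               client (e.g. over the inet connection)
                               and supplied at the client's connect() */
};

/* Helper to fill in a Xen-family address for connect(). */
static struct sockaddr_xen xen_addr(uint16_t domid, uint32_t gref)
{
    struct sockaddr_xen a = { AF_XEN, domid, gref };
    return a;
}
```

The design point this illustrates: the grant # is the only rendezvous token the client needs beyond the peer VM #, so any existing channel (here, the inet connection) can deliver it.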
12
After setup, steady-state operation needs little (if any) synchronization
[Diagram: VM 1 and VM 2 share a circular buffer holding “XenSocket”]
VM 1: write(“XenSocket”); VM 2: read(3) returns “Xen”
If the receiver is blocked, send a signal via Xen
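The steady state above can be sketched as plain memory copies through a shared ring, with a static array standing in for the grant-shared 128 KB pool (xs_write/xs_read are illustrative names, not the actual XenSocket entry points):

```c
/* Illustrative steady-state sketch: after setup, write() and read()
 * are just copies through the shared circular buffer; Xen would be
 * invoked only to wake a blocked receiver.                            */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define POOL_BYTES (32 * 4096)   /* 32 shared 4 KB pages = 128 KB pool */

struct xs_ring {
    uint8_t pool[POOL_BYTES];
    size_t head;   /* total bytes written (writer-owned counter) */
    size_t tail;   /* total bytes read (reader-owned counter) */
};

/* Writer side: copy in as much as fits; advancing head makes the
 * data visible to the peer immediately.                              */
static size_t xs_write(struct xs_ring *r, const void *buf, size_t len)
{
    size_t space = POOL_BYTES - (r->head - r->tail);
    size_t n = len < space ? len : space;
    for (size_t i = 0; i < n; i++)
        r->pool[(r->head + i) % POOL_BYTES] = ((const uint8_t *)buf)[i];
    r->head += n;
    /* Real code signals via Xen only if the reader went to sleep,
     * so the common case needs no hypercall.                         */
    return n;
}

/* Reader side: copy out what is available; pages behind the tail
 * are reused by later writes (circular buffer).                      */
static size_t xs_read(struct xs_ring *r, void *buf, size_t len)
{
    size_t avail = r->head - r->tail;
    size_t n = len < avail ? len : avail;
    for (size_t i = 0; i < n; i++)
        ((uint8_t *)buf)[i] = r->pool[(r->tail + i) % POOL_BYTES];
    r->tail += n;
    return n;
}
```

With this sketch, the slide's example plays out directly: after xs_write of “XenSocket”, an xs_read of 3 bytes yields “Xen”.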
13
Design goal (future work): Support for efficient local multicast
[Diagram: VM 1 writes “XenSocket” into the shared circular buffer; VM 2 and VM 3 read it independently]
VM 1: write(“XenSocket”); VM 2: read(3) returns “Xen”; VM 3: read(5) returns “XenSo”
Future writes wrap around; block on first unread page
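One way to model the wrap-around rule is to give each reader its own offset into the shared ring and let the writer advance only up to the slowest reader. This is a sketch of the future-work idea only, with a deliberately tiny ring so wrapping is visible; none of these names come from XenSocket:

```c
/* Illustrative multicast ring: one writer, NREADERS readers, each with
 * a private read offset. The writer "blocks on the first unread page"
 * by refusing bytes past the slowest reader's position.               */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RING_BYTES 16            /* tiny ring so wrap-around is visible */
#define NREADERS   2

struct mc_ring {
    uint8_t buf[RING_BYTES];
    size_t head;                 /* total bytes written */
    size_t tail[NREADERS];       /* total bytes read, per reader */
};

/* Position of the slowest reader: the writer may not overwrite it. */
static size_t slowest(const struct mc_ring *r)
{
    size_t min = r->tail[0];
    for (int i = 1; i < NREADERS; i++)
        if (r->tail[i] < min) min = r->tail[i];
    return min;
}

/* Writer: accepts only what fits before the slowest reader. */
static size_t mc_write(struct mc_ring *r, const void *p, size_t len)
{
    size_t space = RING_BYTES - (r->head - slowest(r));
    size_t n = len < space ? len : space;
    for (size_t i = 0; i < n; i++)
        r->buf[(r->head + i) % RING_BYTES] = ((const uint8_t *)p)[i];
    r->head += n;
    return n;
}

/* Reader id: consumes up to len of what it has not yet seen. */
static size_t mc_read(struct mc_ring *r, int id, void *p, size_t len)
{
    size_t avail = r->head - r->tail[id];
    size_t n = len < avail ? len : avail;
    for (size_t i = 0; i < n; i++)
        ((uint8_t *)p)[i] = r->buf[(r->tail[id] + i) % RING_BYTES];
    r->tail[id] += n;
    return n;
}
```

This reproduces the slide's scenario: after write(“XenSocket”), reader 0's read(3) yields “Xen”, reader 1's read(5) yields “XenSo”, and a later large write is truncated at the first byte reader 0 has not yet consumed.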
14
The standard outline
• What we did
• (Why) we did what we did
• (How) we did what we did
• What we did (again)
15
Figure 5: Pretty good performance
[Plot: bandwidth (Gbit/s, ticks at 7 and 14) vs. message size (KB, log scale, 0.5–16)]
• UNIX socket: 14 Gbit/s
• XenSocket: 9 Gbit/s
• INET socket: 0.14 Gbit/s
16
Figure 6: Interesting cache effects
[Plot: bandwidth (Gbit/s, ticks at 7 and 14) vs. message size (MB, log scale, 0.01–100); curves for UNIX socket, XenSocket, and INET socket]
17
Throughput limited by CPU usage; advantageous to offload Domain-0
• XenSocket (VM 1 → VM 2 on Xen, bypassing Domain-0): 9 Gbit/s at 100% CPU (VM 1), 1% CPU (Domain-0), 100% CPU (VM 2)
• TCP socket (VM 1 → Domain-0 → VM 2 on Xen): 0.14 Gbit/s at 100% CPU (VM 1), 20% CPU (Domain-0), 100% CPU (VM 2)
18
Trade-off: adjusting communications integrity and relaxing pure VM isolation
Possible solution: use a proxy for pointer updates along the reverse path
[Diagram: VM 1 sends data to VM 2; pointer updates return through a proxy, VM 3]
But now this path is bidirectional(?)
Any masters students looking for a project?
19
Potential memory leak: Xen didn’t (doesn’t?) support page revocation
• Setup: VM 1 shares pages with VM 2
• Scenario #1: VM 2 releases the pages
• Scenario #2: VM 1 cannot safely reuse the pages
20
Xen shared memory: Hot topic!
• XenSocket (Middleware’07) | make a better virtual network
• MVAPICH-ivc: Huang and colleagues (Ohio State, USA; SC’07) | what we did, but with a custom HPC API
• XWay: Kim and colleagues (ETRI, Korea; ’07) | what we did, but hidden behind TCP sockets
• Menon and colleagues (HP, USA; VEE’05, USENIX’06) | make the virtual network better
21
Conclusion: XenSocket is awesome
Shared memory enables high-throughput VM-to-VM communication in Xen
(a broadly applicable result?)
John Linwood Griffin, John.Griffin @ JaggedTechnology.com
Also here at Middleware: Sue McIntosh