
Xen and the Art of Virtualization
Ian Pratt
University of Cambridge Computer Laboratory
Founder of XenSource, Inc.

Outline
• Virtualization Overview
• Xen Today: Xen 2.0 Overview
• Architecture
• Performance
• Live VM Relocation
• Xen 3.0 features (Q3 2005)
• Research Roadmap

Virtualization Overview
• Single OS image: Virtuozzo, VServers, Zones
  – Group user processes into resource containers
  – Hard to get strong isolation
• Full virtualization: VMware, VirtualPC, QEMU
  – Run multiple unmodified guest OSes
  – Hard to efficiently virtualize x86
• Para-virtualization: UML, Xen
  – Run multiple guest OSes ported to a special arch
  – Arch Xen/x86 is very close to normal x86

Virtualization in the Enterprise
• Consolidate under-utilized servers to reduce CapEx and OpEx
• Avoid downtime with VM Relocation
• Dynamically re-balance workload to guarantee application SLAs
• Enforce security policy

Xen Today: 2.0 Features
• Secure isolation between VMs
• Resource control and QoS
• Only the guest kernel needs to be ported
  – All user-level apps and libraries run unmodified
  – Linux 2.4/2.6, NetBSD, FreeBSD, Plan 9
• Execution performance is close to native
• Supports the same hardware as Linux x86
• Live Relocation of VMs between Xen nodes

Para-Virtualization in Xen
• Arch xen_x86: like x86, but Xen hypercalls required for privileged operations (sketch below)
  – Avoids binary rewriting
  – Minimize number of privilege transitions into Xen
  – Modifications relatively simple and self-contained
• Modify kernel to understand virtualised env.
  – Wall-clock time vs. virtual processor time
    · Xen provides both types of alarm timer
  – Expose real resource availability
    · Enables OS to optimise behaviour
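To make the hypercall idea concrete, here is a minimal C sketch of how a paravirtualized guest kernel might replace a privileged page-table-base switch with an explicit call into the hypervisor. The hypercall name, number and calling convention are hypothetical stand-ins, not Xen's actual ABI.

```c
/* Illustrative sketch only: the hypercall below is a hypothetical
 * stand-in, not Xen's real interface. */
#include <stdint.h>
#include <stdio.h>

/* In a real paravirtualized kernel this would trap into the hypervisor;
 * here it is just a stub that reports what would be requested. */
static long hypercall_set_pt_base(uint64_t new_pt_machine_addr)
{
    printf("hypercall: switch page-table base to %#llx\n",
           (unsigned long long)new_pt_machine_addr);
    return 0; /* hypervisor validates the new base before installing it */
}

/* Native kernel:          mov %0, %%cr3
 * Paravirtualized kernel: ask the hypervisor, so it can validate. */
static void guest_switch_address_space(uint64_t pt_machine_addr)
{
    if (hypercall_set_pt_base(pt_machine_addr) != 0) {
        /* Guest must handle refusal, e.g. if the page is not a
         * page table it owns. */
        fprintf(stderr, "hypervisor rejected page-table switch\n");
    }
}

int main(void)
{
    guest_switch_address_space(0x12345000ULL);
    return 0;
}
```

The point of the pattern is that every privileged operation becomes an explicit request the hypervisor can see and check, without rewriting guest binaries.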

x86 CPU virtualization
• Xen runs in ring 0 (most privileged)
• Ring 1/2 for guest OS, 3 for user-space
  – GPF if guest attempts to use a privileged instr
• Xen lives in top 64 MB of linear addr space
  – Segmentation used to protect Xen, as switching page tables is too slow on standard x86
• Hypercalls jump to Xen in ring 0
• Guest OS may install 'fast trap' handler (sketch below)
  – Direct user-space to guest OS system calls
• MMU virtualisation: shadow vs. direct-mode
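As an illustration of the 'fast trap' idea, the sketch below models a guest registering its system-call entry point with the hypervisor once, so that later int 0x80 traps from user space can be delivered directly to the guest kernel in ring 1 without bouncing through Xen. The registration call and descriptor layout are hypothetical, not Xen's trap-table interface.

```c
/* Hypothetical sketch of registering a guest trap handler so that
 * user-space system calls jump directly to the guest OS (ring 1)
 * rather than being routed through Xen first. */
#include <stdint.h>
#include <stdio.h>

struct trap_descriptor {
    uint8_t  vector;        /* e.g. 0x80 for Linux system calls   */
    uint8_t  dpl;           /* lowest ring allowed to raise it (3) */
    uint16_t guest_cs;      /* guest kernel code segment selector  */
    uint64_t handler_addr;  /* guest kernel entry point            */
};

/* Stand-in for a "register fast trap" hypercall; in reality the
 * hypervisor would install the handler only after checking it cannot
 * be used to enter ring 0. */
static long hypercall_register_fast_trap(const struct trap_descriptor *td)
{
    printf("registering vector %#x -> handler %#llx (dpl %u)\n",
           (unsigned)td->vector,
           (unsigned long long)td->handler_addr,
           (unsigned)td->dpl);
    return 0;
}

int main(void)
{
    struct trap_descriptor syscall_trap = {
        .vector = 0x80, .dpl = 3,
        .guest_cs = 0x10, .handler_addr = 0xffffffff80001000ULL,
    };
    return (int)hypercall_register_fast_trap(&syscall_trap);
}
```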

MMU Virtualization: Shadow-Mode
Diagram: guest reads and writes go to the guest OS's own Virtual → Pseudo-physical tables; the VMM translates these updates into the Virtual → Machine shadow tables used by the MMU hardware, and propagates accessed & dirty bits back to the guest's tables.

MMU Virtualization: Direct-Mode
Diagram: the guest OS reads its Virtual → Machine page tables directly; guest writes are trapped and validated by the Xen VMM before reaching the tables used by the MMU hardware.

Para-Virtualizing the MMU
• Guest OSes allocate and manage own PTs
  – Hypercall to change PT base
• Xen must validate PT updates before use
  – Allows incremental updates, avoids revalidation
• Validation rules applied to each PTE (sketch below):
  1. Guest may only map pages it owns*
  2. Pagetable pages may only be mapped RO
• Xen traps PTE updates and emulates, or 'unhooks' PTE page for bulk updates
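A hedged C sketch of those two validation rules: the per-frame ownership table, the PTE layout and the helper names below are simplified stand-ins rather than Xen's real data structures.

```c
/* Simplified model of the PTE validation rules (sketch only):
 *  1. a guest may only map machine frames it owns;
 *  2. frames that are themselves page tables may only be mapped RO. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT  0x001ULL
#define PTE_WRITABLE 0x002ULL
#define PTE_FRAME(p) ((p) >> 12)

enum frame_type { FRAME_NORMAL, FRAME_PAGETABLE };

struct frame_info {            /* hypothetical per-frame bookkeeping */
    int             owner;     /* domain id owning this machine frame */
    enum frame_type type;
};

static bool validate_pte(const struct frame_info *frames, size_t nframes,
                         int domain, uint64_t pte)
{
    if (!(pte & PTE_PRESENT))
        return true;                      /* non-present PTEs are harmless */

    uint64_t mfn = PTE_FRAME(pte);
    if (mfn >= nframes || frames[mfn].owner != domain)
        return false;                     /* rule 1: must own the frame */

    if (frames[mfn].type == FRAME_PAGETABLE && (pte & PTE_WRITABLE))
        return false;                     /* rule 2: page tables are RO */

    return true;
}

int main(void)
{
    struct frame_info frames[4] = {
        {1, FRAME_NORMAL}, {1, FRAME_PAGETABLE},
        {2, FRAME_NORMAL}, {1, FRAME_NORMAL},
    };
    printf("%d\n", validate_pte(frames, 4, 1, (0ULL << 12) | 3)); /* ok         */
    printf("%d\n", validate_pte(frames, 4, 1, (1ULL << 12) | 3)); /* PT not RO  */
    printf("%d\n", validate_pte(frames, 4, 1, (2ULL << 12) | 3)); /* not owner  */
    return 0;
}
```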

MMU Micro-Benchmarks
Chart: lmbench Page fault (µs) and Process fork (µs) results on Linux (L), Xen (X), VMware Workstation (V), and UML (U).

Writeable Page Tables: 1 – Write Fault
Diagram: the first guest write to a page-table page traps into Xen with a page fault; guest reads of the Virtual → Machine tables continue as normal.

Writeable Page Tables: 2 – Unhook
Diagram: Xen unhooks the page-table page from the active Virtual → Machine tables, so the guest can now write to it freely without trapping.

Writeable Page Tables: 3 – First Use
Diagram: the first guest access that needs the unhooked page faults into Xen.

Writeable Page Tables: 4 – Re-hook
Diagram: Xen validates the guest's updates and re-hooks the page into the Virtual → Machine tables.
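The four diagrams above amount to a small state machine per page-table page. The sketch below (illustrative structures only, not Xen code) captures the unhook-on-first-write, bulk-update, validate-and-rehook-on-reuse cycle.

```c
/* Sketch of the "writeable page tables" trick: on the first guest write
 * to a page-table page, the hypervisor unhooks it from the active page
 * table and lets the guest write freely; when the page is needed again,
 * every entry is re-validated and the page is hooked back in read-only. */
#include <stdbool.h>
#include <stdio.h>

enum pt_state { HOOKED_RO, UNHOOKED_RW };

struct pt_page {
    enum pt_state state;
    int dirty_entries;   /* pending guest updates awaiting validation */
};

static void on_guest_write_fault(struct pt_page *p)
{
    if (p->state == HOOKED_RO) {
        p->state = UNHOOKED_RW;          /* step 2: unhook, make writeable */
        printf("unhooked page-table page for bulk updates\n");
    }
    p->dirty_entries++;                  /* step 3: guest writes proceed   */
}

static bool validate_entry(int i) { (void)i; return true; }  /* stub */

static void on_reuse(struct pt_page *p)
{
    if (p->state != UNHOOKED_RW)
        return;
    for (int i = 0; i < p->dirty_entries; i++)   /* step 4: validate all   */
        if (!validate_entry(i))
            printf("rejecting bad entry %d\n", i);
    p->dirty_entries = 0;
    p->state = HOOKED_RO;                        /* re-hook as read-only   */
    printf("re-hooked page-table page\n");
}

int main(void)
{
    struct pt_page p = { HOOKED_RO, 0 };
    on_guest_write_fault(&p);
    on_guest_write_fault(&p);
    on_reuse(&p);
    return 0;
}
```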

I/O Architecture
• Xen IO-Spaces delegate guest OSes protected access to specified h/w devices
  – Virtual PCI configuration space
  – Virtual interrupts
• Devices are virtualised and exported to other VMs via Device Channels
  – Safe asynchronous shared memory transport (sketch below)
  – 'Backend' drivers export to 'frontend' drivers
  – Net: use normal bridging, routing, iptables
  – Block: export any blk dev, e.g. sda4, loop0, vg3
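The sketch below models a device channel as a single-producer/single-consumer request ring in shared memory, in the spirit of the backend/frontend split. Real Xen channels add grant tables for memory sharing and event channels for notification; the structure layout and names here are illustrative only.

```c
/* Illustrative shared-memory request ring between a frontend and a
 * backend driver.  Only the ring logic is shown. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8                    /* power of two */

struct request { uint64_t sector; uint32_t nbytes; };

struct ring {
    volatile uint32_t prod;            /* written by frontend */
    volatile uint32_t cons;            /* written by backend  */
    struct request req[RING_SIZE];
};

static bool frontend_submit(struct ring *r, struct request rq)
{
    if (r->prod - r->cons == RING_SIZE)
        return false;                  /* ring full */
    r->req[r->prod % RING_SIZE] = rq;
    r->prod++;                         /* then notify backend (event)  */
    return true;
}

static bool backend_consume(struct ring *r, struct request *out)
{
    if (r->cons == r->prod)
        return false;                  /* ring empty */
    *out = r->req[r->cons % RING_SIZE];
    r->cons++;
    return true;
}

int main(void)
{
    struct ring r = { .prod = 0, .cons = 0 };
    frontend_submit(&r, (struct request){ .sector = 42, .nbytes = 4096 });
    struct request rq;
    while (backend_consume(&r, &rq))
        printf("backend handling sector %llu (%u bytes)\n",
               (unsigned long long)rq.sector, rq.nbytes);
    return 0;
}
```

Because the transport is asynchronous, the frontend can batch requests and the backend can service them at its own pace, with an event-channel notification replacing a trap per operation.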

Xen 2.0 Architecture
Diagram: VM0 hosts the Device Manager & Control s/w, native device drivers and back-end drivers; VM1–VM3 run unmodified user software on guest OSes (XenLinux, XenBSD) with front-end device drivers. The Xen Virtual Machine Monitor provides the Control IF, Safe HW IF, Event Channels, Virtual CPU and Virtual MMU on top of the hardware (SMP, MMU, physical memory, Ethernet, SCSI/IDE).

System Performance
Chart: SPEC INT2000 (score), Linux build time (s), OSDB-OLTP (tup/s) and SPEC WEB99 (score) benchmark suite running on Linux (L), Xen (X), VMware Workstation (V), and UML (U).

TCP results
Chart: TCP bandwidth for Tx and Rx at MTU 1500 and MTU 500 (Mbps) on Linux (L), Xen (X), VMware Workstation (V), and UML (U).

Scalability
Chart: aggregate scores for 2, 4, 8 and 16 simultaneous SPEC WEB99 instances on Linux (L) and Xen (X).

Xen 3.0 Architecture
Diagram: as in 2.0, VM0 hosts the Device Manager & Control s/w plus the native and back-end drivers, while guest VMs run unmodified user software over front-end drivers. New in 3.0: AGP, ACPI and PCI support, 32/64-bit and SMP guests, and unmodified guest OSes (e.g. WinXP) hosted with VT-x, all above the Xen Virtual Machine Monitor and the hardware (SMP, MMU, physical memory, Ethernet, SCSI/IDE).

3.0 Headline Features
• AGP/DRM graphics support
• Improved ACPI platform support
• Support for SMP guests
• x86_64 support
• Intel VT-x support for unmodified guests
• Enhanced control and management tools
• IA64 and Power support, PAE

x86_64
• Intel EM64T and AMD Opteron
• Requires a different approach from 32-bit x86:
  – Can't use segmentation to protect Xen from guest OS kernels, as there are no segment limits
  – Switch page tables between kernel and user
    · Not too painful thanks to the Opteron TLB flush filter
  – Large VA space offers other optimisations
• Current design supports up to 8 TB mem

SMP Guest OSes
• Takes great care to get good performance while remaining secure
• Paravirtualized approach yields many important benefits
  – Avoids many virtual IPIs
  – Enables 'bad preemption' avoidance
  – Auto hot plug/unplug of CPUs
• SMP scheduling is a tricky problem
  – Strict gang scheduling leads to wasted cycles

VT-x / Pacifica
• Will enable guest OSes to be run without paravirtualization modifications
  – E.g. Windows XP/2003
• CPU provides traps for certain privileged instrs
• Shadow page tables used to provide MMU virtualization
• Xen provides simple platform emulation
  – BIOS, Ethernet (e100), IDE and SCSI emulation
• Install paravirtualized drivers after booting for high-performance IO

VM Relocation: Motivation
• VM relocation enables:
  – High-availability
    · Machine maintenance
  – Load balancing
    · Statistical multiplexing gain
Diagram: a VM relocating between two Xen hosts.

Assumptions
• Networked storage
  – NAS: NFS, CIFS
  – SAN: Fibre Channel
  – iSCSI, network block dev
  – DRBD network RAID
• Good connectivity
  – Common L2 network
  – L3 re-routing
Diagram: two Xen hosts sharing networked storage.

Challenges
• VMs have lots of state in memory
• Some VMs have soft real-time requirements
  – E.g. web servers, databases, game servers
  – May be members of a cluster quorum
  ⇒ Minimize down-time
• Performing relocation requires resources
  ⇒ Bound and control resources used

Relocation Strategy
• Stage 0: pre-migration. VM active on host A; destination host selected (block devices mirrored).
• Stage 1: reservation. Initialize container on target host.
• Stage 2: iterative pre-copy. Copy dirty pages in successive rounds (see sketch below).
• Stage 3: stop-and-copy. Suspend VM on host A; redirect network traffic; synch remaining state.
• Stage 4: commitment. Activate on host B; VM state on host A released.
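The iterative pre-copy and stop-and-copy stages map onto a simple control loop. The C sketch below is a schematic only: the page-transfer and dirty-tracking routines are stubs, and the threshold is an arbitrary illustrative value.

```c
/* Schematic of pre-copy live migration (stages 2-3 above): copy dirty
 * pages in rounds while the VM keeps running, then pause it briefly to
 * copy whatever is still dirty.  All helpers are stubs. */
#include <stdio.h>

#define MAX_ROUNDS 30

static int  pages_dirtied_since_last_round(void) { static int d = 1000; return d /= 4; }
static void transfer_pages(int n)                { printf("transferring %d pages\n", n); }
static void pause_vm(void)                       { printf("VM suspended on source\n"); }
static void resume_on_target(void)               { printf("VM activated on target\n"); }

static void precopy_migrate(int total_pages, int dirty_threshold)
{
    transfer_pages(total_pages);                       /* round 1: everything   */

    for (int round = 2; round <= MAX_ROUNDS; round++) {
        int dirty = pages_dirtied_since_last_round();
        if (dirty <= dirty_threshold)                  /* diminishing returns   */
            break;
        transfer_pages(dirty);                         /* rounds 2..N: deltas   */
    }

    pause_vm();                                        /* stop-and-copy         */
    transfer_pages(pages_dirtied_since_last_round());  /* final remaining set   */
    resume_on_target();                                /* commit, redirect net  */
}

int main(void)
{
    precopy_migrate(100000, 50);
    return 0;
}
```

The down-time is just the final stop-and-copy step, so the loop stops iterating once another round would no longer shrink the remaining dirty set.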

Pre-Copy Migration: Round 1

Pre-Copy Migration: Round 2

Pre-Copy Migration: Final

Page Dirtying Rate
Chart: number of dirty pages vs. time into iteration.
• Dirtying rate determines VM down-time
  – Shorter iters → less dirtying → shorter iters
  – Stop and copy final pages
• Application 'phase changes' create spikes

Writable Working Set

Rate Limited Relocation
• Dynamically adjust resources committed to performing page transfer
  – Dirty logging costs VM ~2-3%
  – CPU and network usage closely linked
• E.g. first copy iteration at 100 Mb/s, then increase based on observed dirtying rate (sketch below)
  – Minimize impact of relocation on server while minimizing down-time
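One way this policy could look is sketched below: the bandwidth cap for each round follows the previously observed dirtying rate plus some headroom, bounded by a floor and a ceiling. All constants and names are illustrative, not the values Xen uses.

```c
/* Sketch of rate-limited pre-copy: derive each round's bandwidth cap
 * from the dirtying rate observed in the previous round, so a quiet VM
 * is migrated gently while a busy one still converges. */
#include <stdio.h>

#define MIN_RATE_MBPS   100   /* first round, as on the slide          */
#define MAX_RATE_MBPS  1000   /* never exceed this share of the link   */
#define HEADROOM_MBPS    50   /* stay ahead of the dirtying rate       */

static int next_round_rate(int observed_dirty_rate_mbps)
{
    int rate = observed_dirty_rate_mbps + HEADROOM_MBPS;
    if (rate < MIN_RATE_MBPS)  rate = MIN_RATE_MBPS;
    if (rate > MAX_RATE_MBPS)  rate = MAX_RATE_MBPS;
    return rate;
}

int main(void)
{
    int dirty_rates[] = { 0, 120, 300, 700, 40 };    /* Mb/s, per round */
    for (int i = 0; i < 5; i++)
        printf("round %d: send at %d Mb/s\n",
               i + 1, next_round_rate(dirty_rates[i]));
    return 0;
}
```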

Web Server Relocation

Iterative Progress: SPECWeb 52 s

Iterative Progress: Quake 3

Quake 3 Server relocation

Extensions
• Cluster load balancing
  – Pre-migration analysis phase
  – Optimization over coarse timescales
• Evacuating nodes for maintenance
  – Move easy-to-migrate VMs first
• Storage-system support for VM clusters
  – Decentralized, data replication, copy-on-write
• Wide-area relocation
  – IPSec tunnels and CoW network mirroring

Research Roadmap
• Software fault tolerance
  – Exploit deterministic replay
• System debugging
  – Lightweight checkpointing and replay
• VM forking
  – Lightweight service replication, isolation
• Secure virtualization
  – Multi-level secure Xen

Xen Supporters
Logo wall: operating system, systems management, hardware systems, and platforms & I/O vendors (logos are registered trademarks of their owners).

Conclusions
• Xen is a complete and robust GPL VMM
• Outstanding performance and scalability
• Excellent resource control and protection
• Live relocation makes seamless migration possible for many real-time workloads
• http://xensource.com

Thanks!
• The Xen project is hiring, in Cambridge, Palo Alto and New York
• ian@xensource.com

Backup slides

Research Roadmap
• Whole distributed system emulation
  – I/O interposition and emulation
  – Distributed watchpoints, replay
• VM forking
  – Service replication, isolation
• Secure virtualization
  – Multi-level secure Xen
• XenBIOS
  – Closer integration with the platform / BMC
• Device Virtualization

Isolated Driver VMs
• Run device drivers in separate domains
• Detect failure, e.g.
  – Illegal access
  – Timeout
• Kill domain, restart (sketch below)
• E.g. 275 ms outage from failed Ethernet driver
Chart: throughput vs. time (s) around the driver failure and restart.
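A hedged sketch of the supervise-and-restart loop: the domain-control and health-check calls below are hypothetical stand-ins for whatever the control plane exposes, not Xen tool APIs.

```c
/* Sketch of supervising an isolated driver domain: if the driver VM
 * stops responding (timeout) or faults (illegal access), destroy it
 * and start a fresh instance. */
#include <stdio.h>

enum driver_status { DRIVER_OK, DRIVER_TIMEOUT, DRIVER_FAULT };

static enum driver_status poll_driver_domain(int tick)
{
    return tick == 2 ? DRIVER_TIMEOUT : DRIVER_OK;   /* stub */
}
static void destroy_domain(void) { printf("killing driver domain\n"); }
static void create_domain(void)  { printf("restarting driver domain\n"); }

int main(void)
{
    for (int tick = 0; tick < 5; tick++) {
        if (poll_driver_domain(tick) != DRIVER_OK) {
            /* The slide reports ~275 ms of outage when a failed
             * Ethernet driver is handled this way. */
            destroy_domain();
            create_domain();
        }
    }
    return 0;
}
```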

Segmentation Support
• Segmentation req'd by thread libraries
  – Xen supports virtualised GDT and LDT
  – Segments must not overlap the Xen 64 MB area
  – NPTL TLS library uses 4 GB segs with negative offset!
    · Emulation plus binary rewriting required
• x86_64 has no support for segment limits
  – Forced to use paging, but only have 2 protection levels
  – Xen in ring 0; OS and user in ring 3 w/ PT switch
    · Opteron's TLB flush filter CAM makes this fast

Device Channel Interface

Live migration for clusters
• Pre-copy approach: VM continues to run
• 'Lift' domain on to shadow page tables
  – Bitmap of dirtied pages; scan; transmit dirtied (sketch below)
  – Atomic 'zero bitmap & make PTEs read-only'
• Iterate until no forward progress, then stop VM and transfer remainder
• Rewrite page tables for new MFNs; restart
• Migrate MAC or send unsolicited ARP reply
• Downtime typically tens of milliseconds
  – (though very application dependent)
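A schematic of the scan-and-transmit step described above: the bitmap layout and helpers are illustrative, and the snapshot-and-clear stands in for the atomic 'zero bitmap & make PTEs read-only' operation done under shadow paging.

```c
/* Sketch of one pre-copy round over the dirty bitmap: snapshot and
 * clear the live bitmap (where PTEs would also be re-write-protected),
 * then transmit the pages that were marked dirty. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NPAGES 64

static uint8_t dirty_bitmap[NPAGES / 8];

static void mark_dirty(int pfn) { dirty_bitmap[pfn / 8] |= 1u << (pfn % 8); }
static void send_page(int pfn)  { printf("send page %d\n", pfn); }

static int precopy_round(void)
{
    uint8_t snapshot[sizeof dirty_bitmap];
    memcpy(snapshot, dirty_bitmap, sizeof dirty_bitmap);
    memset(dirty_bitmap, 0, sizeof dirty_bitmap);   /* 'zero bitmap' step */

    int sent = 0;
    for (int pfn = 0; pfn < NPAGES; pfn++)
        if (snapshot[pfn / 8] & (1u << (pfn % 8))) {
            send_page(pfn);
            sent++;
        }
    return sent;        /* no forward progress => time to stop-and-copy */
}

int main(void)
{
    mark_dirty(3); mark_dirty(17); mark_dirty(42);
    printf("round sent %d pages\n", precopy_round());
    printf("next round sent %d pages\n", precopy_round());
    return 0;
}
```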

Scalability
• Scalability principally limited by application resource requirements
  – Several tens of VMs on server-class machines
• Balloon driver used to control domain memory usage by returning pages to Xen (sketch below)
  – Normal OS paging mechanisms can deflate quiescent domains to <4 MB
  – Xen per-guest memory usage <32 KB
• Additional multiplexing overhead negligible
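A minimal sketch of the balloon idea: the guest allocates pages from its own allocator (so the OS cannot use them) and hands the backing frames to the hypervisor, shrinking the domain's footprint without hypervisor paging. The hypercall name is hypothetical.

```c
/* Sketch of a balloon driver inflating: pin pages inside the guest and
 * return the machine frames behind them to the hypervisor (stubbed). */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

static long hypercall_release_frame(void *page)
{
    printf("returned frame backing %p to the hypervisor\n", page);
    return 0;
}

static int balloon_inflate(int npages, void **held)
{
    int inflated = 0;
    for (int i = 0; i < npages; i++) {
        void *page = malloc(PAGE_SIZE);       /* stand-in for a guest page */
        if (!page)
            break;                            /* guest is under pressure   */
        hypercall_release_frame(page);        /* frame no longer backs it  */
        held[inflated++] = page;              /* keep it pinned in-guest   */
    }
    return inflated;
}

int main(void)
{
    void *held[8];
    int n = balloon_inflate(8, held);
    printf("ballooned out %d pages\n", n);
    for (int i = 0; i < n; i++)
        free(held[i]);                        /* deflate (sketch only)     */
    return 0;
}
```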

Scalability
Chart: aggregate scores for 2, 4, 8 and 16 simultaneous SPEC WEB99 instances on Linux (L) and Xen (X).

Resource Differentiation
Chart: aggregate throughput relative to one instance for 2, 4, 8 and 8 (diff) simultaneous OSDB-IR and OSDB-OLTP instances on Xen.