Designing high-availability for Xen Virtual Machines with
HP Serviceguard for Linux
Executive Summary
Introduction
Scope
Support requirements
Xen Host (Dom0) Configuration
  Xen Dom0 Installation and Configuration on x86_64 platforms
  Xen Dom0 Installation and Configuration on HP Integrity Server platforms
Serviceguard for Linux Configuration
  Storage Configuration
  Network Configuration
  Secure Linux Settings
  Domain Memory Ballooning
Serviceguard for Linux on Xen host
  RPM package dependency for installing Serviceguard for Linux 11.18
  Packaging the Xen virtual machine with Serviceguard for Linux
    Creation of Xen VM and dependencies
    Consolidate all Xen Guest VM files in one Volume Group
    Modify the Serviceguard for Linux Package Configuration File
Basic Principles of Xen Control and Management
  Xen VM Startup
  Xen VM Shutdown
  Xen VM Monitoring
    Virtualization-mode independent approach
    Virtualization-mode dependent approach
    Pros and Cons of Independent vs. Dependent Approach
    Monitoring Xen virtual machine network interfaces
Summary
  Requirements:
For more information:
Appendix Section I
  Xen Legacy Configuration File (xenhost.cnf)
Appendix Section II
  Xen Legacy Control Script (xenhost.sh)
Appendix Section III
  Xen Legacy Monitor Script (cmxenvmd)
Appendix IV
  Table of Acronyms and Abbreviations
Executive Summary
This white paper describes how to integrate Xen Dom0 hosts into a Serviceguard for Linux (SG/LX)
cluster, and how to configure Xen Virtual Machines (VMs) as SG/LX packages. It also makes
recommendations for eliminating single points of failure and provides pointers to other useful
documents.
Introduction
Virtual machines on Xen are increasingly being deployed for server consolidation and flexibility.
Virtual machine technology allows one physical server to emulate multiple servers, each capable of
running its own operating system (OS) concurrently with other virtual servers. The virtualization layer,
also known as the hypervisor, abstracts the physical resources so that each instance of an OS
appears to have its own network and storage adapters, processors, disks, and memory, when in fact
they are virtual instances.
A significant single point of failure for Xen virtual machines is the physical hardware which runs the
Xen dom0 hypervisor. Dom0, or domain zero to expand the abbreviation, is the first domain started
by the Xen hypervisor on boot. It has special privileges, such as being able to cause new domains to
start, and being able to access the hardware directly. A failure of the Xen hypervisor or the server it is
running on can bring down all of the Xen virtual machines running on that server. One can protect the
Xen virtual machines from these failures by using SG/LX clustering to make the Xen hypervisor highly
available.
This document describes how to configure an SG/LX cluster consisting of multiple Xen dom0 hosts.
The Xen virtual machines (VMs) can then be configured as SG/LX packages. In the event of a failure,
or to maintain application availability while performing online upgrades and maintenance, a Xen
virtual machine protected within a Serviceguard for Linux package can be restarted on the same
node, or on another node in the cluster. The other nodes would also be Xen dom0 hypervisors
running Serviceguard for Linux.
This solution protects the Xen virtual machine from the following failures:
Failure of a Xen virtual machine
Failure of networking
Failure of storage
Failure of the physical machine (running as Xen dom0 host)
Scope
This document describes how to provide high availability for Xen virtual machines using Serviceguard
for Linux running on multiple Xen dom0 hosts. See the Support Requirements section for a list of
supported versions of Xen Server and Linux distributions. As new versions of Xen server or Linux
distributions are certified, this whitepaper and the Serviceguard for Linux Certification Matrix will be
updated accordingly. The most recent version of the Serviceguard for Linux Certification Matrix can
be found at www.hp.com/go/sglx/info, in the Solution planning section.
Note: Serviceguard for Linux is currently certified to run on Xen dom0 hosts, not on Xen virtual
machine guests. Applications running on the dom0 host, including the virtual machines themselves, can
be configured to run as SG/LX packages. This configuration provides high availability for host-based
applications and virtual machines.
However, high availability for the virtual machine based applications is not guaranteed unless a
monitoring technique is also used on the applications themselves. Serviceguard for Linux does not
currently provide monitoring and high availability for applications running on the Xen virtual machine.
You can develop alternative approaches to monitoring the applications running on the Virtual
Machine, for example using a network-based or a file-based protocol between the guest and the host.
A reasonable expertise in the installation and configuration of Xen server on x86_64 platforms and
familiarity with its capabilities and limitations is assumed.
It is also assumed that the reader is familiar with Serviceguard for Linux. Any special issues arising
from the Xen environment are discussed in the following sections.
Note: Except as noted in this whitepaper, all the Serviceguard for Linux configuration options
documented in the Managing HP Serviceguard for Linux manual are supported for Xen hosts, and all
documented requirements apply. You can find the latest version of the manual at http://docs.hp.com
under High Availability -> Serviceguard for Linux.
Support requirements
The requirements for configuring highly-available Xen virtual machines with Serviceguard for Linux
are:
Serviceguard for Linux A.11.18.02 or later
HP ProLiant or non-HP x86_64-based server certified with Serviceguard for Linux
A Linux distribution certified with Serviceguard for Linux that supports Xen virtualization
technologies on the selected platform – specifically, one of:
o Red Hat Enterprise Linux 5 Update 1 and Update 2
Note: Xen Release 3.0 is supported by Red Hat and SG/LX on RHEL5 x86_64
platforms. However, an SG/LX configuration providing HA for VMs on the
RHEL5 Xen-based HP Integrity server platform is not supported.
o SUSE Linux Enterprise Server 10 Service Pack 1 and Service Pack 2
Note: Xen Release 3.0 is supported by Novell only on x86_64 platforms
Citrix XenSource is not currently supported.
Note: For the most recent list of servers and storage, Linux distributions and hypervisors certified for
use with Serviceguard for Linux, see the Serviceguard for Linux Certification Matrix, available at
www.hp.com/go/sglx/info .
Xen Host (Dom0) Configuration
For detailed documentation on Xen installation see either the Red Hat Enterprise Linux Virtualization
Guide [6] or Xen Installation section of the SUSE Linux 10 Reference Guide [5].
Xen Dom0 Installation and Configuration on x86_64 platforms
The installation procedure for Xen involves setting up the Domain-0 (dom0) host and
installing Xen guests.
Before you install Xen, SLES10 SP1 or SP2, or RHEL5.1 or 5.2, must be installed on the machine.
Refer to the RHEL5 Deployment Guide [8] or the SLES10 Deployment Guide [10] for OS installation.
You can use ‘yast2’ (on SLES10) or ‘rpm’ (on RHEL5) to install the additional
rpms required for Xen.
These rpms are:
‘python’
‘python-virtinst’
‘libvirt’
‘bridge-utils’
‘libvirt-python’
‘dnsmasq’
‘xen-libs’
‘xen’
‘kernel-xen’
Xen is added to the GRUB configuration. The installation process places an entry in
‘/boot/grub/menu.lst’. This entry should be similar to the following:
title Xen2
kernel (hd0,0)/boot/xen.gz dom0_mem=458752
module (hd0,0)/boot/vmlinuz-xen <parameters>
module (hd0,0)/boot/initrd-xen
Reboot the machine to boot from Xen kernel. This sets up the Xen Host (or Domain 0) for
SLES10/RHEL5- Xen.
You can use the console client ‘virt-install’ or the Linux GUI client ‘virt-manager’ to
install guest VMs on the dom0.
Refer to the Xen User Manual [4] for installation, configuration and administration details for
Xen.
Xen Dom0 Installation and Configuration on HP Integrity Server
platforms
Only RHEL5 supports Xen on HP Integrity Server (Integrity) platforms. Be aware that during
Serviceguard testing, there was at least one instance of a kernel panic related to Xen. This
problem is not seen on x86_64 platforms. A bugzilla has been filed for the issue at
https://bugzilla.redhat.com/show_bug.cgi?id=457537. In such an instance Serviceguard
will fail over any protected guests.
Novell SUSE10 SP1 and SP2 do not support the HP Integrity Server platform. Until Novell
supports HP Integrity Servers, this combination cannot be supported with SG/LX.
Before installing Xen on Integrity platforms read the whitepaper “Xen-Based Virtualization
Technology Preview for Red Hat Enterprise Linux 5.0 on HP Integrity servers” [7] and follow
the recommendations given in the current paper.
By default, the hypervisor on Integrity systems reserves only 512 MB of memory for the dom0
instance, and dom0 is allocated only one virtual CPU. These defaults must be increased to
prevent delays under heavy load; 1 GB of memory and 2 virtual CPUs for dom0 are
recommended.
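On Integrity, the Xen hypervisor is typically booted through ELILO rather than GRUB. The fragment below is only a hedged sketch of where such defaults might be raised; the image names, root device, and the supported hypervisor options (such as ‘dom0_max_vcpus’) are placeholders that should be verified against your installed release.

```
# /etc/elilo.conf (fragment; image names and root device are placeholders)
image=vmlinuz-xen
        vmm=xen.gz
        label=linux-xen
        initrd=initrd-xen.img
        read-only
        append="dom0_mem=1024M dom0_max_vcpus=2 -- root=<root device>"
```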
Serviceguard for Linux Configuration
Storage Configuration
When you install the OS (Xen dom0), the fibre channel drivers ‘qla1280.ko’ and
‘qla2xxx/qla2xxx.ko’ are not loaded by default. A modprobe on the driver module name
results in an ‘Invalid module format’ error.
The procedure for loading the SCSI QLogic drivers is as follows:
1. Edit ‘/etc/modprobe.conf’: remove any ‘qla*’ and ‘scsi_adapter*’ alias lines,
then add
alias scsi_adapter qla1280
alias scsi_adapter1 qla2xxx
2. Rebuild the ‘initrd’ file using the ‘mkinitrd’ command.
On RHEL5.1 and 5.2, use the ‘mkinitrd’ command with the listed arguments to create an ‘initrd’:
mkinitrd -v /boot/initrd-$(uname -r).img $(uname -r)
On SUSE10 SP1 and SP2, use the ‘mkinitrd’ command with no arguments to create an
‘initrd’ in the ‘/boot’ directory:
mkinitrd
Network Configuration
On RHEL5.1, 5.2 Xen dom0, a pseudo-network device with the name of ‘virbr0’ (default IP
address: ‘192.168.122.1’) will appear in the ‘ifconfig’ list. This device is configured and
controlled by ‘libvirt’. The device does not serve any purpose other than to provide a private
VM subnet, which is unnecessary if the guest virtual machines are configured to use bridged
networking.
If the device is left configured (in the default state), SG/LX automatic network probing may report a
possible network partition on the ‘192.168.122.0’ subnet, when it detects it as a potential
heartbeat network. Since ‘192.168.122.0’ is meant to be a private subnet within Xen dom0 and
not meant to be visible beyond the local dom0, we suggest deconfiguring the device.
To unconfigure the device, remove the entries in the file ‘default.xml’ located at:
/etc/libvirt/qemu/networks/autostart/default.xml
Leaving the device configured does not affect the operation of Serviceguard for Linux in any way
other than what is mentioned above. When heartbeat networks are configured manually, SG/LX
does not probe automatically for them, and this issue does not arise.
The virbr0 device is only seen on RHEL5.1 and RHEL5.2 platform. SLES10 SP1 and SLES10 SP2 do
not bring up the device by default.
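As an alternative to editing ‘default.xml’ by hand, the default network can be deconfigured through ‘libvirt’ itself. This is a hedged sketch: it assumes the standard ‘virsh’ client is available and that the network carries its usual name, ‘default’.

```shell
# Stop the default libvirt network and keep it from starting on boot.
disable_default_net() {
    virsh net-destroy default               # take virbr0 down now
    virsh net-autostart default --disable   # do not recreate it on reboot
}
```

Run once on each dom0 node; ‘virsh net-list --all’ can confirm the result.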
Secure Linux Settings
On RHEL5, for successful creation of virtual machines, and also for creation of volume groups, it is
required that appropriate role-based ‘selinux’ rules be placed on the files and directories related
to Xen and Serviceguard for Linux. On SLES10, ‘apparmor’ provides equivalent role-based
security for files and directories. For additional details on configuring ‘selinux’ or ‘apparmor’,
refer to RHEL5 [8] Deployment Guide or SLES10 Apparmor Administration Guide[9].
Domain Memory Ballooning
Memory allocation for dom0 and domU can be modified using the ‘mem-max’ and ‘mem-set’
subcommands of the ‘xm’ command.
While a domU is running, its memory allocation can be modified dynamically. Use the
‘xm mem-set’ command to change the current allocation for a dom0/domU:
xm mem-set <dom-id> <mem>
The maximum memory allocation limit for a dom0/domU can be set using the command:
xm mem-max <dom-id> <mem>
For a domU, expanding only works up to the amount initially allocated to the domain. However, it is
possible to set these parameters before a guest VM is powered up, through the domU
configuration file:
maxmem = 500
memory = 500
Memory ballooning is useful when Serviceguard for Linux is installed on machines with similar
architecture but mismatched hardware configurations. A Xen virtual machine package running with a
given amount of memory cannot fail over to a machine that does not have at least that much memory
available. Memory ballooning makes it possible to set the amount of memory used by the guest VM
at package startup on the other machine; without it, the package may fail to start on the
recipient node.
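A minimal sketch of ballooning at package start, assuming the target size (in MB) is known in advance. The function name and its arguments are illustrative, not part of SG/LX.

```shell
# Set a domain's memory ceiling and current allocation (in MB) before use.
balloon_to() {
    local dom="$1" mem_mb="$2"
    xm mem-max "$dom" "$mem_mb" || return 1   # adjust the ceiling first
    xm mem-set "$dom" "$mem_mb" || return 1   # balloon to the target size
}
```

For example, ‘balloon_to Domain-0 1024’ could be called from the package control script before the guest is started on the recipient node.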
Serviceguard for Linux on Xen host
This section describes installation and configuration tasks specific to Serviceguard for Linux running
on a Xen host. The configuration aims to provide high availability to virtual machines running on the
Xen host. The Serviceguard for Linux in-host model defines virtual machines as Serviceguard for Linux
packages.
RPM package dependency for installing Serviceguard for Linux 11.18
To install SG/LX successfully on a Xen Dom0, the ‘kernel-xen-devel’ rpm must be installed first.
This package is required to build the ‘deadman’ driver, which is compiled as part of SG/LX rpm
installation on Linux.
RHEL5.1, RHEL5.2, SLES10 SP1, and SLES10 SP2 provide the ‘kernel-xen-devel’ rpm with the
distribution.
Note: Citrix XenSource is currently not supported.
Packaging the Xen virtual machine with Serviceguard for Linux
Creation of Xen VM and dependencies
A Xen VM guest can be created using ‘virt-install’ (Redhat) or ‘vm-install’ (Novell). The
VM install script prompts for the following information at the time of installation:
VM types: fully-virtualized (FV) or para-virtualized (PV). For detailed information on
the VM types, refer to Xen Administration Guide [4]
Storage: Virtual Disk or Physical Shared Disk as the primary disk for the guest OS
Network: Virtual Network Interface (An Internet protocol address is configured for the
Xen VM at the time of guest OS installation)
Installation Repository for guest OS: nfs, ftp, http, cdrom, iso
The ‘virt-install’/‘vm-install’ script creates a Xen VM guest configuration file to store
configuration information for the guest VM, picks up the boot image from the install repository,
and boots the guest VM into install mode. The installation of the OS then proceeds as usual on
the guest VM.
After installation of the guest OS, the guest VM can be started using the command
xm create <absolute path to guest configuration file>
and shut down using the command
xm shutdown <guest domain name>
See the section “Basic Principles of Xen Control and Management” to learn more about the management commands.
To provide high availability to the Xen virtual machine guests running on the Xen host, the Xen VM
must be registered as an SG/LX package. In order to do this, the following requirements must be met:
The Xen VM guest configuration file must reside on a volume group on the shared storage. All
default paths in the Xen VM guest configuration file must be adjusted to reflect the new
location of the Xen VM files (virtual disks, config files etc.)
If a Xen VM uses a virtual disk, it must reside on a volume group on the shared storage. If a
Xen VM uses a physical disk, the physical disk must be located on the shared storage. This
ensures that the Xen VM storage is available from other nodes when a Xen VM package fails
over
In the case of the SG/LX legacy package used in the examples in this paper, the package
control script must be created, modified and applied. (See the latest version of Managing
Serviceguard for Linux for information about legacy versus modular packages.)
Consolidate all Xen Guest VM files in one Volume Group
When creating a Serviceguard for Linux package for the Xen VM, you need to identify all of the
resources used by the Xen VM and include them in the package.
The two main resources used by a Xen VM are as follows:
‘Xen virtual disk file’ located in a directory specified by the user at time of VM
creation
‘Xen VM configuration file’ located in a sub-directory of ‘/etc/xen/’.
Follow these steps to consolidate the VM resources:
1. Create a logical volume large enough to hold the virtual disk file and the guest configuration
file. If the Xen VM uses a physical shared disk, then the logical volume would hold the
configuration file.
2. Create a file system on the logical volume, for instance an ‘ext2’ or ‘ext3’ file system,
using ‘mke2fs’ or simply ‘mkfs’
3. Mount the logical volume.
4. Create identical mount points on other nodes of the Serviceguard for Linux cluster. These
mount points would be used when the package is failed over to the other node.
5. Create a Xen VM guest using ‘virt-install’ (Redhat) or ‘vm-install’ (Novell).
Specify the install options so that the VM virtual disk and the VM configuration file reside on
the shared logical volume. In the case of a pre-created guest VM, copy the VM configuration
files to the logical volume and edit the VM guest configuration file so that the path entries
reflect the new location of the VM guest files.
6. Unmount the logical volume.
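The storage steps above can be sketched as a single helper. The volume group, logical volume, size, and mount point names are illustrative placeholders; adapt them to your layout, and remember to create the same mount point on every cluster node (step 4).

```shell
# Steps 1-3: carve out a logical volume, put a file system on it, mount it.
prepare_vm_volume() {
    local vg="$1" lv="$2" size="$3" mnt="$4"
    lvcreate -L "$size" -n "$lv" "$vg" || return 1   # step 1: logical volume
    mkfs -t ext3 "/dev/$vg/$lv"        || return 1   # step 2: file system
    mkdir -p "$mnt" || return 1                      # mount point (all nodes)
    mount "/dev/$vg/$lv" "$mnt"                      # step 3: mount it
}
```

After installing the guest into the mounted volume (step 5), a plain ‘umount’ of the mount point completes step 6.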
Modify the Serviceguard for Linux Package Configuration File
An application can be packaged using the legacy method or the modular method. This white paper
describes the legacy method. A modular package scheme will be released in the future.
Legacy Method
Create a package configuration file using the following command:
cmmakepkg -v -p pkg.conf
Create a package control script using the command:
cmmakepkg -v -s pkg.cntl
Modify the package control script to add the start and stop functions.
Appendix I lists the Xen configuration file (xenhost.cnf), which sets the environment for the Xen Control
Script and the Xen Monitor Script.
Refer to Appendix Section II for more details on the Xen control script (xenhost.sh). The Xen control
script can be called from within the SG/LX package control script; define the following functions:
function customer_defined_run_cmds
{
# ADD customer defined run commands.
: # do nothing instruction, because a function must contain some command.
${XENPATH}/xenhost.sh start
}
# This function is a place holder for customer defined functions.
# You should define all actions you want to happen here, after the service is
# halted.
#
function customer_defined_halt_cmds
{
# ADD customer defined halt commands.
: # do nothing instruction, because a function must contain some command.
${XENPATH}/xenhost.sh stop
}
Where ${XENPATH} is an absolute path to the Xen Legacy Scripts.
Refer to Appendix III for details on the monitor script (cmxenvmd).
The ‘monitor’ script must be registered as a ‘SERVICE_CMD’ in the SG package control script.
Modify the package configuration file with the following entries:
SERVICE_NAME cmxenvmd
SERVICE_FAIL_FAST_ENABLED no
SERVICE_HALT_TIMEOUT 300
The package control script must contain the following entries:
SERVICE_NAME[0]="cmxenvmd"
SERVICE_CMD[0]="$SGSBIN/cmxenvmd"
SERVICE_RESTART[0]="0"
Note: Once a VM is launched as a package under Serviceguard for Linux, you should never start or
stop the VM manually using the ‘xm’ command; a manual start or stop would lead to virtual
machine package failure. If a VM has already been started manually, it must be stopped before
the VM is started through the package control script.
Distribute the package control script and apply the package configuration using the command:
cmapplyconf -P pkg.conf
Basic Principles of Xen Control and Management
The Xen control and management script uses the ‘xm’ command to start, stop and monitor Xen virtual
machine guests.
Xen VM Startup
A Xen VM can be started by using the command:
xm create <xen vm guest config filename>
When a Xen VM guest is started, it first enters a running state, followed by a booted-up state. It is
necessary to wait for the guest to reach the booted-up state before using the Xen virtual machine.
This matters chiefly when the Xen guest is started by the SG/LX package control script, which
expects the VM to reach the booted-up state before performing further user-defined operations on
the virtual machine.
At startup, specific run-time information is captured to enable monitoring of the VM. Various
monitoring approaches are discussed in the monitoring section.
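The startup wait described above can be sketched as a polling loop. ‘vm_is_up’ stands in for whatever probe is chosen (for example, the ‘xm list’ state check described under monitoring), and the timeout value is arbitrary.

```shell
# Poll a probe function until the guest reports up, or give up after a timeout.
wait_for_vm_up() {
    local name="$1" timeout="${2:-60}" waited=0
    until vm_is_up "$name"; do
        sleep 1
        waited=$((waited + 1))
        [ "$waited" -ge "$timeout" ] && return 1    # never came up
    done
    return 0
}
```

The package control script can call this after ‘xm create’ and before any user-defined operations on the guest.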
Xen VM Shutdown
A Xen VM can be gracefully shut down using the command:
xm shutdown <guest domain name>
After invocation of the command, it is necessary to wait until all the VM-specific daemons stop to
achieve a clean shutdown. A clean shutdown ensures that the file systems and volume groups
associated with the machine are successfully unmounted and deactivated respectively.
The Xen virtual machine probe function can be used to check for a complete shutdown of the VM.
Once the probe function declares the VM is no longer running, post shutdown steps can be
performed. If a Xen VM is packaged as a SG/LX package, the post-shutdown steps performed are as
follows:
1. Unmount the file systems
2. Deactivate the volume groups
3. Re-assign the IP address (optional): The virtual machine's IP address should not be registered with
the package control script for monitoring. However, it is possible to monitor the subnet in
which the virtual machine IP resides, by placing an entry in the package configuration file:
# Example:
MONITORED_SUBNET 15.154.63.0 # (netmask=255.255.255.0)
These steps are performed by the SG/LX package control script. The Xen control script is callable
from the SG/LX Package Control Script.
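The shutdown sequence above, sketched as one hedged helper. ‘vm_is_running’ is a placeholder for the probe function (see the monitoring section); the mount point and volume group come from the package's configuration.

```shell
# Gracefully stop a guest, wait for a clean shutdown, then release storage.
stop_vm_package() {
    local name="$1" mnt="$2" vg="$3" waited=0
    xm shutdown "$name"
    while vm_is_running "$name"; do     # probe until shutdown completes
        sleep 1
        waited=$((waited + 1))
        [ "$waited" -ge 300 ] && return 1   # mirrors SERVICE_HALT_TIMEOUT
    done
    umount "$mnt"       || return 1     # post-shutdown step 1
    vgchange -a n "$vg"                 # post-shutdown step 2
}
```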
Xen VM Monitoring
Xen supports two modes of virtualization - Para-Virtualized (PV) and Fully-Virtualized (FV). The
monitoring of a Xen VM is typically specific to the mode of virtualization used by the VM, but it is also
possible to implement a technique which allows monitoring that is independent of the virtualization
mode used.
Virtualization-mode independent approach
Xen Manager ('xm') commands can be used to monitor the Xen virtual machine (both fully- and para-
virtualized VMs) through the 'xm list' command. For example:
xm list | grep ${Virtual Machine Name} | awk '{print $5}' | grep -e "r-" -e "b-"
A return value of '0' from the command indicates that the machine is in the 'booting up' state
(represented by the ‘r’ flag in the grep pattern) or the 'booted up' state (represented by the ‘b’
flag). The command can be called periodically to check the status of the virtual machine.
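The one-liner above can be wrapped into a small probe function. The column position assumes the standard ‘xm list’ layout (name, ID, memory, VCPUs, state, time); verify it against your release.

```shell
# Succeed if the named guest's state flags show 'r' (running) or 'b' (blocked).
vm_state_ok() {
    local name="$1"
    xm list | grep "$name" | awk '{print $5}' | grep -q -e "r-" -e "b-"
}
```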
Virtualization-mode dependent approach
Fully-Virtualized mode
For every Fully-Virtualized VM, a 'qemu-dm' process is spawned. It is possible to identify which
instance of 'qemu-dm' belongs to a particular VM with the following command:
ps -ef | grep "qemu-dm" | grep ${Virtual Machine Name}
Capture the PID of ‘qemu-dm’ and write it to a file. Monitoring can then be done by probing
this PID at regular intervals.
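A hedged sketch of that PID-based monitoring. The PID-file path is an assumption (the ‘SG_XEN_VM_PID_STATUS’ parameter in Appendix I plays this role), and ‘capture_qemu_pid’ assumes a single matching ‘qemu-dm’ process per guest.

```shell
# Record the guest's qemu-dm PID once at startup ...
capture_qemu_pid() {
    local name="$1" pidfile="$2"
    # "[q]emu-dm" keeps the grep process itself out of the match.
    ps -ef | grep "[q]emu-dm" | grep "$name" | awk '{print $2}' > "$pidfile"
}
# ... then probe it at regular intervals; kill -0 only tests existence.
pid_alive() {
    local pid
    pid=$(cat "$1" 2>/dev/null)
    [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null
}
```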
Para-Virtualized mode
No specific approach has been identified for Para-Virtualized guests; hence, the
virtualization-mode independent approach described above should be used.
Pros and Cons of Independent vs. Dependent Approach
The virtualization-mode dependent approach provides more accurate monitoring of Fully-
Virtualized VMs than the independent approach. The independent approach relies on the ‘xm’
commands to gather monitoring information, and this has a limitation: the virtual machine name
is removed from the 'xm list' output before the ‘qemu-dm’ process goes down. This small time
gap may lead to an incorrect interpretation of the virtual machine status at that moment. Since
the ‘qemu-dm’ process belongs to Fully-Virtualized VMs, the time gap only affects Fully-
Virtualized VM monitoring.
For these reasons the virtualization-mode independent approach works best for Para-Virtualized
guests as there are no latencies involved, and the virtualization-mode dependent approach is
recommended for Fully-Virtualized VMs.
Monitoring Xen virtual machine network interfaces
Xen provides network availability to virtual machines through the virtual interfaces (vif) configured on
a Xen dom0 host.
A 'vif' interface is created for each virtual machine at startup (by the ‘xm create’
command). The naming convention for the 'vif' interface is dynamic:
‘vif<vmid>.<interface index>’
Replace the ‘vmid’ and ‘interface index’ values to get the correct name for the VM interface.
The ‘vmid’ of a virtual machine changes every time the virtual machine is halted and started up.
Run the command listed below to extract the ‘vmid’ using the name of the virtual machine.
For example:
1. To get the virtual machine ID of the virtual machine “s102vm1”, use the command
xm list | grep "s102vm1" | awk '{print $2}'
2. Now use the ‘vmid’ to derive the name of the virtual interface that is mapped to the virtual
machine on the dom0 host:
vif<vmid>.<interface index>
For example, with a ‘vmid’ of 2 and an interface index of 0, the derived interface is ‘vif2.0’.
3. Probe the interface using the command
ifconfig vif2.0 | grep "UP"
vif2.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:805 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3744 errors:0 dropped:18 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:26624 (26.0 Kb)  TX bytes:247301 (241.5 Kb)
The virtual machine network can be monitored from the host by simply probing the 'vif' interface
associated with the VM.
A failure of the physical interface or of the virtual machine brings the virtual interface down,
signaling that the network on the virtual machine has failed.
Serviceguard for Linux can then fail the package and restart it on a different node where the network
is available. Repeat the steps listed above to identify the new virtual interface on the other node and
monitor it accordingly.
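The three steps above can be combined into a hedged pair of helpers. The parsing assumes the ‘xm list’ column layout shown earlier, and the interface index defaults to 0.

```shell
# Derive the vif name for a guest from its current vmid.
vif_for_vm() {
    local name="$1" idx="${2:-0}" vmid
    vmid=$(xm list | grep "$name" | awk '{print $2}')
    [ -n "$vmid" ] || return 1          # guest not listed
    echo "vif${vmid}.${idx}"
}
# Probe the derived interface; fails when the interface is down or gone.
vif_is_up() {
    ifconfig "$1" 2>/dev/null | grep -q "UP"
}
```

Note that ‘vif_for_vm’ must be re-evaluated after every restart of the guest, since the vmid changes each time.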
Summary
Protecting Xen virtual machines with clustering provided by Serviceguard for Linux delivers lower total
cost of ownership by enabling Xen virtual machines to be used for business critical applications. This
consolidation leads to an overall reduced data center footprint.
Requirements:
Install the ‘kernel-xen-devel’ rpm on all nodes configured as Xen dom0 nodes. The rpm
is not installed by default on RHEL5; SLES10 installs the package by default when
configured through the YaST2 tool. The package is required for successful installation of the
SG/LX rpm.
The Xen control and management script is necessary to automate the start, stop and
monitor operations for a VM configured as a Serviceguard for Linux package.
Refer to Appendices I, II and III for developing the Xen control and management script.
Serviceguard for Linux requires the specific install-time configurations mentioned earlier.
These requirements are mandatory to ensure proper operation of SG/LX on a Xen dom0
platform.
Modification of the GRUB bootloader sequence is required to ensure that the machine
boots into Xen by default.
For more information:
1. Serviceguard for Linux
http://www.hp.com/go/sglx
2. Serviceguard for Linux Certification Matrix
www.hp.com/go/sglx/info under Solution Planning
3. Managing HP Serviceguard for Linux, Eighth Edition, March 2008
http://docs.hp.com/en/B9903-90060/B9903-90060.pdf
4. Xen User Manual
http://bits.xensource.com/Xen/docs/user.pdf
5. Xen Installation section of the SUSE Linux 10 Reference Guide
http://www.novell.com/documentation/suse10/index.html?page=/documentation/suse10/adminguide/data/sec_xen_inst.html
6. Red Hat Enterprise Linux Virtualization Guide
http://www.redhat.com/docs/manuals/enterprise/ -> Virtualization Guide
7. Xen-Based Virtualization Technology Preview for Red Hat Enterprise Linux 5.0 on HP Integrity servers
http://docs.hp.com/en/4AA1-4405ENW/4AA1-4405ENW.pdf
8. Red Hat Enterprise Linux Deployment Guide
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/index.html
9. Novell SLES 10 AppArmor Administration Guide
http://www.novell.com/documentation/apparmor/apparmor201_sp2_admin/index.html?page=/documentation/apparmor/apparmor201_sp2_admin/data/apparmor201_sp2_admin.html
10. Novell SLES 10 Linux Deployment Guide
http://www.novell.com/documentation/sles10/sles_admin/index.html?page=/documentation/sles10/sles_admin/data/sles_admin.html
Appendix Section I
Xen Legacy Configuration File (xenhost.cnf)
The file “xenhost.cnf” defines a set of configuration parameters for the Xen Legacy Control Script and
Xen Legacy Monitor Script.
The file “xenhost.cnf”, together with the control script “xenhost.sh” and the monitor script
“cmxenvmd”, is used to package Xen VMs as SG/LX packages in Legacy Mode.
####################################################################
# (C) Copyright 2008 Hewlett-Packard Development Company, L.P.
# @(#) Serviceguard for Linux Attribute Definition File
# @(#) Product Name : HP Serviceguard for Linux
# @(#) Product Version : A.11.18.02
# @(#) Patch Name :
#
# *** Note: This file MUST NOT be edited. *****
#
# Any changes made to it will be overwritten when you upgrade to the
# next release of HP Serviceguard for Linux.
#
# Changing this file may lead to unrecoverable package configuration
# problems.
#
###################################################################
#"XEN_BIN" is the path of the Xen Management Binary.
SG_XEN_BIN=/usr/sbin/xm
#"XEN_VM_PATH" is the path to the Xen Configuration File.
SG_XEN_VM_PATH=
#"XEN_VM_NAME" is the name of the Xen Configuration File.
SG_XEN_VM_NAME=
#"XEN_VM_PID_STATUS" is the absolute path to the Xen VM PID file. This file can be probed and may be useful for monitoring the VM.
SG_XEN_VM_PID_STATUS=
#"XEN_RETRY_INTERVAL" is the time, in seconds, that the Xen Control Script waits between checks that the Xen VM has halted completely.
SG_XEN_RETRY_INTERVAL=10
# "XEN_PROBE_TIMEOUT" is the amount of time that must elapse before the VM probe function attempts a forced shutdown on the VM.
SG_XEN_PROBE_TIMEOUT=20
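For illustration, a filled-in xenhost.cnf for a hypothetical guest might look like the following; the VM name and PID file path are invented for this sketch. Note that the control script concatenates SG_XEN_VM_PATH and SG_XEN_VM_NAME directly, so the path should end with a trailing slash.

```
SG_XEN_BIN=/usr/sbin/xm
SG_XEN_VM_PATH=/etc/xen/vm/
SG_XEN_VM_NAME=sles10-vm1
SG_XEN_VM_PID_STATUS=/var/run/xend/sles10-vm1.pid
SG_XEN_RETRY_INTERVAL=10
SG_XEN_PROBE_TIMEOUT=20
```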
Appendix Section II
Xen Legacy Control Script (xenhost.sh)
This script is designed for Xen VM Legacy Packages. Its functions may be called from a Legacy
Package Control Script through “customer_defined_run_cmds” and “customer_defined_halt_cmds”.
The script depends on the “xenhost.cnf” file for Xen environment setup.
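As a sketch of that wiring, a legacy package control script could invoke xenhost.sh from the customer-defined hooks; the package directory below is hypothetical.

```
# Illustrative excerpt from a legacy package control script
function customer_defined_run_cmds
{
    # Start the Xen VM through the control script (hypothetical package path)
    /usr/local/cmcluster/conf/pkg_xen/xenhost.sh start
}

function customer_defined_halt_cmds
{
    # Halt the Xen VM through the control script
    /usr/local/cmcluster/conf/pkg_xen/xenhost.sh stop
}
```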
Template for “xenhost.sh” script
####################################################################
# (C) Copyright 2008 Hewlett-Packard Development Company, L.P.
# @(#) Serviceguard for Linux Xen Legacy Package Control Script.
# @(#) Product Name : HP Serviceguard for Linux
# @(#) Product Version : A.11.18.02
# @(#) Patch Name :
#
# *** Note: This file MUST NOT be edited. *****
#
# Any changes made to it will be overwritten when you upgrade to the
# next release of HP Serviceguard for Linux.
#
# Changing this file may lead to unrecoverable package startup
# and/or shutdown problems.
#
###################################################################
#######################################
#
# Source SG and Xen Environment Setup
#######################################
set -a
. ${SGCONFFILE:=/etc/cmcluster.conf}
. ${SGXCONFFILE:=./xenhost.cnf}
set +a
echo ${SG_XEN_BIN}
##########################################################################
#
# sgx_validate_vm() is called while validating the package
# configuration (through cmcheckconf/cmapplyconf)
#
##########################################################################
function sgx_validate_vm
{
echo "sgx_validate_vm"
typeset -i retval=0
# validate if the package is run on xen host
if [ -d /proc/xen ]; then
grep -q control_d /proc/xen/capabilities
case $? in
0)
echo "Xen Dom0 Host found !"
retval=0 ;;
1)
echo "ERROR:: Xen Package Module cannot be configured on a Xen VM Guest"
retval=1 ;;
*)
echo "ERROR:: Xen Environment Detection Failed"
retval=255 ;;
esac
fi
if (( retval != 0 ))
then
echo "ERROR:" $retval " Failed to validate Xen VM Package"
to_exit=1
fi
}
##########################################################################
#
# sgx_start_vm() is called while starting a Xen VM package (cmrunpkg)
#
# Starts the VM with "xm create" using the configured Xen VM
# configuration file.
#
##########################################################################
function sgx_start_vm
{
typeset -i retval=0
echo "sgx_start_vm"
if [ -f ${SG_XEN_VM_PATH}${SG_XEN_VM_NAME} ]; then
[ -x ${SG_XEN_BIN} ] || { echo "xm not found"; exit 255; }
${SG_XEN_BIN} create ${SG_XEN_VM_PATH}${SG_XEN_VM_NAME}
case $? in
0)
echo "Xen VM ${SG_XEN_VM_NAME} successfully started !!"
retval=0 ;;
1)
# An error occurred while running xm
echo "ERROR: Xen VM startup failed"
retval=1 ;;
*)
# xm command failed with an undocumented error or an error occurred while starting xm
echo "Unrecoverable error occurred"
retval=255 ;;
esac
else
echo "ERROR: 255 Xen VM configuration file not found !!!"
exit 255
fi
if (( retval != 0 ))
then
echo "ERROR:" $retval " Function sgx_start_vm"
echo "ERROR:" $retval " Failed to start xen vm"
to_exit=1
fi
}
##########################################################################
#
# sgx_stop_vm() is called while stopping a Xen VM package (cmhaltpkg)
# or during a rollback operation to recover from package startup.
#
# Halts the VM using the xm command.
#
##########################################################################
function sgx_stop_vm
{
typeset -i retval=0
echo "sgx_stop_vm"
[ -x ${SG_XEN_BIN} ] || { echo "xm not found"; exit 255; }
echo "Halting xen vm ${SG_XEN_VM_NAME}"
${SG_XEN_BIN} shutdown ${SG_XEN_VM_NAME}
case $? in
0)
sgx_probe_vm_status
echo "Xen VM ${SG_XEN_VM_NAME} successfully halted!!"
retval=0 ;;
1)
# An error occurred while stopping the VM
echo "ERROR: Xen VM shutdown failed"
retval=1 ;;
*)
# xm command failed with an undocumented error
echo "Unrecoverable error occurred"
retval=255 ;;
esac
if (( retval != 0 ))
then
echo "ERROR:" $retval " Function sgx_stop_vm"
echo "ERROR:" $retval " Failed to halt xen vm ${SG_XEN_VM_NAME}"
to_exit=1
fi
}
##########################################################################
#
# sgx_probe_vm_status() is called to probe the status of a running VM;
# it returns only when the VM has completely halted.
#
# sgx_probe_vm_status is called by sgx_stop_vm to ensure the package is
# completely halted.
#
# The probe is done using the 'xm list' command.
#
##########################################################################
function sgx_probe_vm_status
{
typeset -i retval=0
typeset -i count=${SG_XEN_PROBE_TIMEOUT}
echo "sgx_probe_vm_status"
if [ -x ${SG_XEN_BIN} ]; then
while [ 1 ]; do
${SG_XEN_BIN} list | grep ${SG_XEN_VM_NAME} | awk '{print $5}' | grep -e "r" -e "b"
case $? in
0)
# Just wait for the VM to go down completely
count=count-1
if [ $count -le 0 ]; then
# force power down the VM; this is guaranteed to succeed
echo "WARNING:: Attempting force shutdown of the VM"
${SG_XEN_BIN} destroy ${SG_XEN_VM_NAME}
fi
sleep ${SG_XEN_RETRY_INTERVAL}
continue ;;
1)
# return only when the VM has completely halted
return 1 ;;
*)
echo "ERROR: 255 Unrecoverable error occurred"
return 255 ;;
esac
done
fi
}
################
# main routine
################
#
# Module script must be specified with three required entry points:
# start, stop, and validate.
#
# The variable to_exit indicates the success or failure of the entry
# point.
#
typeset -i to_exit=0
echo "sg_xen_vm"
case ${1} in
start)
sgx_start_vm
;;
stop)
sgx_stop_vm
;;
validate)
sgx_validate_vm
;;
*)
echo "ERROR: Illegal entry point specification $1."
to_exit=1
;;
esac
#
# Exit to indicate success/failure to the master_control_script.sh
#
exit $to_exit
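Both the probe function above and the monitor script in Appendix III key off the fifth column of `xm list` output, where the state flags contain "r" for running and "b" for blocked. A minimal sketch of that pipeline against simulated output (the domain name and figures below are made up):

```shell
# Simulated `xm list` output; the guest name "sles10-vm1" is hypothetical
xm_output='Name                      ID Mem(MiB) VCPUs State  Time(s)
Domain-0                   0      512     2 r-----  1234.5
sles10-vm1                 3     1024     1 -b----    42.0'

# The same pipeline the scripts use: isolate the State column, look for r/b
state=$(printf '%s\n' "$xm_output" | grep sles10-vm1 | awk '{print $5}')
if printf '%s\n' "$state" | grep -q -e "r" -e "b"; then
    echo "VM is running or blocked"   # printed for the blocked sample above
else
    echo "VM has halted"
fi
```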
Appendix Section III
Xen Legacy Monitor Script (cmxenvmd)
This script is designed to monitor Xen VMs that are started using the Xen control script. It
depends on the Xen configuration file to source Xen-specific environment variables, and it must
be registered as a SERVICE_CMD in the package control script.
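For illustration, registration in a legacy package control script might look like this; the service name, path, and restart setting are examples, not prescribed values.

```
# Illustrative excerpt from a legacy package control script
SERVICE_NAME[0]="xen_vm_monitor"
SERVICE_CMD[0]="/usr/local/cmcluster/conf/pkg_xen/cmxenvmd"
SERVICE_RESTART[0]=""        # no automatic restart in this sketch
```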
####################################################################
# (C) Copyright 2008 Hewlett-Packard Development Company, L.P.
# @(#) Serviceguard for Linux - Xen VM monitor Script
# @(#) Product Name : HP Serviceguard for Linux
# @(#) Product Version : A.11.18.02
# @(#) Patch Name :
#
# *** Note: This file MUST NOT be edited. *****
# Any changes made to it will be overwritten when you upgrade to the
# next release of HP Serviceguard for Linux.
#
# Changing this file may lead to unrecoverable package configuration
# problems.
###################################################################
set -a
. ${SGCONFFILE:=/etc/cmcluster.conf}
. ${SGXENCONFFILE:=./xenhost.cnf}
set +a
# wait for the VM to come up for the first time before monitoring
run=0
if [ -x ${SG_XEN_BIN} ]; then
while [ 1 ]; do
${SG_XEN_BIN} list | grep ${SG_XEN_VM_NAME} | awk '{print $5}' | grep -e "r" -e "b"
case $? in
0)
run=1
sleep ${SG_XEN_RETRY_INTERVAL}
continue ;;
1)
# exit only when the VM has completely halted
if [ $run -eq 1 ]; then
echo "Xen VM halted"
exit 1
else
# VM not up yet; wait before probing again
sleep ${SG_XEN_RETRY_INTERVAL}
continue
fi
;;
*)
echo "Unrecoverable error occurred"
exit 255 ;;
esac
done
fi
Appendix IV
Table of Acronyms and Abbreviations
Product Name Abbreviation
Serviceguard SG or SG/UX
Serviceguard for Linux SG/LX
Quorum Server QS
Business Continuity and Availability BC&A
High Availability HA
Integrity Virtual Machines Integrity VM
SUSE Linux Enterprise SUSE
SUSE Linux Enterprise Server SLES
Red Hat Enterprise Linux RH or RHEL
Xen Domain0 Dom0
Xen DomainU DomU
Virtual Machine VM
© 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Itanium is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries.
August 2008