How to Create a Live Upgrade Boot Environment



Solaris Live Upgrade is an excellent way to manage Solaris operating system upgrades and patches. Live Upgrade allows the system admin to upgrade or patch a running system with the only downtime being the server reboot once the upgrade or patch is complete. Live Upgrade accomplishes this "trick" by employing boot environments.

A boot environment is essentially a copy of the operating system and other auxiliary files on disk. Live Upgrade requires a server to have at least two boot environments, a primary and an alternate. In such a setup, the primary boot environment (PBE) continues to run while the alternate boot environment (ABE) is being upgraded or patched.

In this example I partitioned the boot disk so that there are dedicated slices for two boot environments. After mirroring the disks with Solaris Volume Manager (SVM), the metadevice d0 became the default boot partition; this will be the current boot environment. I assigned the metadevice d3 to hold the second boot environment.

To start the process, we create the boot environments by running the lucreate command. The -c option assigns a name, s10_be0, to the current boot environment in d0 and turns it into the primary boot environment. The -n option names the new boot environment s10_be1; this will become the alternate boot environment. The -m option specifies the filesystem information for the new boot environment, namely the / (root) filesystem in metadevice d3, created as a UFS filesystem.
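The -m argument is a colon-separated triplet of the form mountpoint:device:fstype. As a quick illustration of how the triplet decomposes (the variable names below are my own, not anything lucreate defines):

```shell
# Illustrative only: split the -m triplet the way lucreate interprets it.
# The variable names here are hypothetical, not part of lucreate.
fs_spec="/:d3:ufs"
IFS=':' read -r mountpoint device fstype <<EOF
$fs_spec
EOF
echo "mountpoint=$mountpoint device=$device fstype=$fstype"
# → mountpoint=/ device=d3 fstype=ufs
```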

It's worth noting at this stage that apart from /usr, I have also kept /var and /opt, the other two directories that get updated during upgrades, in the root filesystem. With 146 GB disks (even 300 GB disks) becoming more common nowadays and the use of the logadm utility, the convenience of having /usr, /var and /opt in one 40 GB root filesystem far outweighs the necessity of having a separate /var filesystem.

# lucreate -c s10_be0 -n s10_be1 -m /:d3:ufs
Determining types of file systems supported
Validating file system requests
The device name expands to device path
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named .
Creating initial configuration for primary boot environment .
WARNING: The device for the root file system mount point </> is not a physical device.
WARNING: The system boot prom identifies the physical device as the system boot device.
Is the physical device the boot device for the logical device ? (yes or no) yes
INFORMATION: Assuming the boot device obtained from the system boot prom is the physical boot device for logical device .
INFORMATION: No BEs are configured on this system.
The device is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name PBE Boot Device .
Updating boot environment description database on all BEs.
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .


Creating file systems on boot environment .
Creating file system for </> in zone on .
Mounting file systems for boot environment .
Calculating required sizes of file systems for boot environment .
Populating file systems on boot environment .
Analyzing zones.
Mounting ABE .
Generating file list.
Copying data from PBE to ABE .
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE .
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE .
Making boot environment bootable.
Setting root slice to Solaris Volume Manager metadevice .
Population of boot environment successful.
Creation of boot environment successful.
#

Let's have a look at the boot environments that we created. We find that both boot environments are complete, i.e. they can be used to boot with. We also find that s10_be0 is the boot environment currently in use (active now) and it is the boot environment that will be used the next time the server is rebooted (active on reboot). We will change this in the next step.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10_be0                    yes      yes    yes       no     -
s10_be1                    yes      no     no        yes    -
#
# luactivate s10_be1

**********************************************************************

The target boot environment has been activated. It will be used when you reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You MUST USE either the init or the shutdown command when you reboot. If you do not use either init or shutdown, the system will not boot using the target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

setenv boot-device rootdisk

3. Boot to the original boot environment by typing:

boot

**********************************************************************

Modifying boot archive service


Activation of boot environment successful.
#
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10_be0                    yes      yes    no        no     -
s10_be1                    yes      no     yes       no     -
#

Note that the alternate BE 's10_be1' is now marked "active on reboot". Let's reboot the server to validate the change.
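If you want to check this from a script rather than by eye, you can pick out the BE whose "Active On Reboot" column is set. A minimal sketch, run against a captured sample of the lustatus output so it can be demonstrated off-box (on the live system you would pipe lustatus itself through the same awk):

```shell
# Sketch: print the BE name (field 1) where the "Active On Reboot"
# column (field 4) is "yes". The sample lines are copied from the
# lustatus output above; this is an illustration, not lustatus itself.
lustatus_sample='s10_be0                    yes      yes    no        no     -
s10_be1                    yes      no     yes       no     -'
printf '%s\n' "$lustatus_sample" | awk '$4 == "yes" { print $1 }'
# → s10_be1
```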

# shutdown -g0 -y -i6

Shutdown started. Wednesday, 2 November 2011 2:03:53 PM EST

Changing to init state 6 - please wait
Broadcast Message from root (console) on solaris_serv Wed Nov 2 14:03:53...
THE SYSTEM solaris_serv IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged
:
: {snip}
:
svc.startd: The system is down.
syncing file systems... done
rebooting...
Resetting...
:
: {snip}
:
SPARC Enterprise M4000 Server, using Domain console
Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.
Copyright (c) 2010, Oracle and/or its affiliates and Fujitsu Limited. All rights reserved.
OpenBoot 4.24.15, 32768 MB memory installed, Serial #95389486.
Ethernet address 0:21:28:af:87:2e, Host ID: 85af872e.

Rebooting with command: boot
Boot device: rootmirror:d File and args:
SunOS Release 5.10 Version Generic_144488-06 64-bit
Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
Hostname: solaris_serv
Configuring devices.
Loading smf(5) service descriptions: 2/2
Reading ZFS config: done.

solaris_serv console login:

Once the server is up, check that 's10_be1' is now the active BE. Also, check that the root filesystem now lives in slice d3.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10_be0                    yes      no     no        yes    -
s10_be1                    yes      yes    yes       no     -
#
# df -h /
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d3          39G   9.8G    29G    26%    /
#
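That last check can also go into a post-reboot sanity script. A hedged sketch, with the df line embedded as sample data (on the live system you would substitute the real `df /` output; the expected device name is this example's d3 metadevice):

```shell
# Sketch: confirm / is mounted from the expected SVM metadevice.
# The df line below is sample data copied from the output above.
df_line='/dev/md/dsk/d3          39G   9.8G    29G    26%    /'
root_dev=$(printf '%s\n' "$df_line" | awk '{ print $1 }')
if [ "$root_dev" = "/dev/md/dsk/d3" ]; then
    echo "root is on d3"
else
    echo "unexpected root device: $root_dev"
fi
# → root is on d3
```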