
Live Upgrade from Solaris 9 to Solaris 10

Live Upgrade reduces system downtime during an OS upgrade.

The idea is to duplicate the currently running boot environment (BE) and upgrade the duplicate BE while the current one keeps running.
A reboot then activates the new BE. If something goes wrong, the previous good BE can easily be re-activated with the next reboot.
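For example (just a sketch with hypothetical BE names; use whatever names you gave your BEs), falling back is simply re-activating the old BE and rebooting with init:

# luactivate my-old-BE
# init 6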

Live Upgrade divides file systems into two groups: critical (ones the OS cannot live without, like / and /var) and shareable (like /export or /home).

This means that critical file systems get separate mount points in the /etc/vfstab of the active and inactive BE.
Shareable file systems keep the same mount point in both vfstab files, so updating them in the active BE also updates them in the inactive BE.
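As an illustration (hypothetical devices, not from this exercise), the two vfstab files might differ only in the critical entries, while the shareable entry stays identical:

Active BE /etc/vfstab:
/dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /        ufs  1  no   -
/dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4  /export  ufs  2  yes  -

Inactive BE /etc/vfstab:
/dev/dsk/c1t1d0s0  /dev/rdsk/c1t1d0s0  /        ufs  1  no   -
/dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4  /export  ufs  2  yes  -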

Duplicating the currently running BE means copying the critical file systems to other available slices.
The file systems can first be reconfigured (split or merged) on the new BE.
Once you are happy with the new layout, the critical files are copied into their directories.
To be on the safe side, have around 5-6 GB of free space for the new BE.

Note: place the new BE on a plain disk slice. An SVM metadevice cannot be used for the new BE, although the running BE can be on SVM.
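If you are not sure whether SVM is in play, a quick check (a sketch; metastat is part of SVM) is to list the configured metadevices. On a system with no metadevices it prints nothing, or complains that no state databases exist:

# metastat -p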

For this exercise, I installed Solaris 9 on a SunFire V120 (using JumpStart with 'cluster SUNWCall' in the profile file, which installs the entire distribution).

So after installation, the required application (Live Upgrade) is already on my system. There are two packages:

application SUNWlur	Live Upgrade (root)
application SUNWluu     Live Upgrade (usr)
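
You can verify they are installed with pkginfo, for example:

# pkginfo SUNWlur SUNWluu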

But there is a trick here: these packages are from Solaris 9, and we want to upgrade the new BE to Solaris 10.

So we need to uninstall these packages and install the ones for the Solaris release we are upgrading to (in this case Solaris 10):

1. Remove the Solaris 9 packages (# pkgrm SUNWlur SUNWluu)
2. Get the packages from the Solaris 10 media (there are 3 of them) and copy them to /tmp
3. Install them (# tmp> pkgadd -d . SUNWlucfg SUNWlur SUNWluu)
4. See Sun doc 206844 for the required patches and install them.

Example: 
# tmp> patchadd 137477-01
Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...
Patch number 137477-01 has been successfully installed.
See /var/sadm/patch/137477-01/log for details
Patch packages installed:
  SUNWbzip
  SUNWsfman

# showrev -p |grep 137477-01
Patch: 137477-01 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWbzip, SUNWsfman
The Live Upgrade menus can be viewed with the command /usr/sbin/lu. They look like this:
|>Activate - Activate a Boot Environment                |
| Cancel   - Cancel a Copy Job                          |
| Compare  - Compare the contents of Boot Environments  |
| Copy     - Start/Schedule a Copy                      |
| Create   - Create a Boot Environment                  |
| Current  - Name of Current Boot Environment           |
| Delete   - Delete a Boot Environment                  |
| List     - List the filesystems of a Boot Environment |
| Rename   - Change the name of a Boot Environment      |
| Status   - List the status of all Boot Environments   |
| Upgrade  - Upgrade an Alternate Boot Environment      |
| Flash    - Flash an Alternate Boot Environment        |
| Help     - Help Information on Live Upgrade           |
| Exit     - Exit the Live Upgrade Menu System          |
+-------------------------------------------------------+
But let's use the CLI here. By the way, the system has two disks.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1f,0/pci@1/scsi@8/sd@0,0
       1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1f,0/pci@1/scsi@8/sd@1,0
Two slices, 0 and 3 (20 GB each), were created on disk 1; they will be dedicated to the upgraded / and /var. At first I partitioned disk 1 the same as disk 0, but a 4 GB slice wasn't enough for the upgrade, so I made them 20 GB to be on the safe side for this testing.
partition> p (Disk 1)
Current partition table (unnamed):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 -  4122       20.00GB    (4122/0/0)   41945472
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3        var    wm    4123 -  8244       20.00GB    (4122/0/0)   41945472
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
And the current entries in /etc/vfstab are:
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
/dev/dsk/c1t0d0s3       /dev/rdsk/c1t0d0s3      /var    ufs     1       no      -
/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /.0     ufs     2       yes     -
/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /backup ufs     2       yes     -
swap    -       /tmp    tmpfs   -       yes     -
The lucreate command creates the new BE. The syntax is as follows (there can be more than one -m option, each specifying a critical FS for the new BE):
# lucreate -c current_BE -m mountpoint:device:FStype -n new_BE
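As a variation (a sketch, with a hypothetical swap slice), you could also give the new BE its own swap by adding one more -m entry with '-' as the mount point:

# lucreate -c current_BE -m /:/dev/dsk/c1t1d0s0:ufs -m -:/dev/dsk/c1t1d0s1:swap -m /var:/dev/dsk/c1t1d0s3:ufs -n new_BE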
Here I will not specify swap under -m, so both BEs will share the same swap slice. For me, the command below completed in about one hour.
# lucreate -c oldBE-disk0 -m /:/dev/dsk/c1t1d0s0:ufs -m /var:/dev/dsk/c1t1d0s3:ufs -n newBE-disk1

Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <oldBE-disk0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <newBE-disk1>.
Source boot environment is <oldBE-disk0>.
Creating boot environment <newBE-disk1>.
Creating file systems on boot environment <newBE-disk1>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c1t1d0s0>.
Creating <ufs> file system for </var> in zone <global> on </dev/dsk/c1t1d0s3>.
Mounting file systems for boot environment <newBE-disk1>.
Calculating required sizes of file systems for boot environment <newBE-disk1>.
Populating file systems on boot environment <newBE-disk1>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </var>.
Copying.
Creating shared file system mount points.
WARNING: The file </tmp/lucopy.errors.3569> contains a list of <2>
potential problems (issues) that were encountered while populating boot
environment <newBE-disk1>.
INFORMATION: You must review the issues listed in
</tmp/lucopy.errors.3569> and determine if any must be resolved. In
general, you can ignore warnings about files that were skipped because
they did not exist or could not be opened. You cannot ignore errors such
as directories or files that could not be created, or file systems running
out of disk space. You must manually resolve any such problems before you
activate boot environment <newBE-disk1>.
Creating compare databases for boot environment <newBE-disk1>.
Creating compare database for file system </var>.
Creating compare database for file system </>.
Updating compare databases on boot environment <newBE-disk1>.
Making boot environment <newBE-disk1> bootable.
Population of boot environment <newBE-disk1> successful.
Creation of boot environment <newBE-disk1> successful.
While the new BE was being populated from the old one, I checked the BE status:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0                yes      yes    yes       no     -
newBE-disk1                no       no     no        no     ACTIVE
I could also follow the progress of the new BE creation:
# df -h -F ufs
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0      3.9G   1.6G   2.3G    40%    /
/dev/dsk/c1t0d0s3      3.9G    37M   3.9G     1%    /var
/dev/dsk/c1t0d0s4       20G    20M    19G     1%    /.0
/dev/dsk/c1t0d0s5       36G   1.7G    33G     5%    /backup
/dev/dsk/c1t1d0s0       20G   1.3G    18G     7%    /.alt.tmp.b-y4.mnt
/dev/dsk/c1t1d0s3       20G    49M    19G     1%    /.alt.tmp.b-y4.mnt/var
After the duplication is done, the BE status is:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0                yes      yes    yes       no     -
newBE-disk1                yes      no     no        yes    -
Okay, now we are ready to upgrade the new BE to Solaris 10. The ISO image is in a shared directory on another machine, hostname unixlab (accessed through the /net directory). Let's first check the media from which we want to perform the OS upgrade.
# luupgrade -c -l /tmp/error-checkmedia.txt -o /tmp/output-checkmedia.txt -s /net/unixlab/export/jumpstart/distrib/sparc/5.10u7
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains a standard media installer which can be run.
The media contains <Solaris> version <10>.
Running luupgrade with no arguments prints all its options and serves as a very good help. Let's first do a dry run (option -N) to see a 'projection' of upgrading the new BE to Solaris 10. Note: I use -N for the dry run and place the error and output log files on a shared FS (/.0).
# luupgrade -u -n newBE-disk1 -l /.0/error-upgrade.txt -o /.0/output-upgrade.txt -N -s /net/unixlab/export/jumpstart/distrib/sparc/5.10u7

42126 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot>
Validating the contents of the media </net/unixlab/export/jumpstart/distrib/sparc/5.10u7>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <newBE-disk1>.
Performing the operating system upgrade of the BE <newBE-disk1>.
Execute Command: </net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot/usr/sbin/install.d/pfinstall -L /a -c /net/unixlab/export/jumpstart/distrib/sparc/5.10u7 /tmp/.luupgrade.profile.upgrade.5743>.
Adding operating system patches to the BE <newBE-disk1>.
Execute Command </net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot/usr/sbin/install.d/install_config/patch_finish -R "/a" -c "/net/unixlab/export/jumpstart/distrib/sparc/5.10u7">.
This looks okay. Let's run it for real.
# luupgrade -u -n newBE-disk1 -l /.0/error-upgrade.txt -o /.0/output-upgrade.txt -s /net/unixlab/export/jumpstart/distrib/sparc/5.10u7

42126 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot>
Validating the contents of the media </net/unixlab/export/jumpstart/distrib/sparc/5.10u7>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <newBE-disk1>.
Determining packages to install or upgrade for BE <newBE-disk1>.
Performing the operating system upgrade of the BE <newBE-disk1>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <newBE-disk1>.
Package information successfully updated on boot environment <newBE-disk1>.
Adding operating system patches to the BE <newBE-disk1>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <newBE-disk1> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <newBE-disk1> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <newBE-disk1>. Before you activate boot
environment <newBE-disk1>, determine if any additional system maintenance
is required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <newBE-disk1> is complete.
Installing failsafe
Failsafe install is complete.
I checked the status during the upgrade (the upgrade itself can take up to a couple of hours):
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0                yes      yes    yes       no     -
newBE-disk1                yes      no     no        no     UPDATING
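Before activating, it is worth reviewing the log files that luupgrade pointed at. They live on the still-inactive new BE, so one way to read them (a sketch; /mnt is just an arbitrary mount point) is to mount that BE temporarily with lumount and release it with luumount:

# lumount newBE-disk1 /mnt
# more /mnt/var/sadm/system/logs/upgrade_log
# luumount newBE-disk1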
Okay, now we need to activate the new BE, which makes it the one booted on the next reboot. First check the BE status again and make sure everything is okay.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0                yes      yes    yes       no     -
newBE-disk1                yes      no     no        yes    -
Or, the command below also shows which BE will be active on the next reboot.
 
#  luactivate
oldBE-disk0
And finally, the activation of the new BE:
# luactivate newBE-disk1

A Live Upgrade Sync operation will be performed on startup of boot environment <newBE-disk1>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:
     setenv boot-device bootdisk
3. Boot to the original boot environment by typing:
     boot
**********************************************************************
Modifying boot archive service
Activation of boot environment <newBE-disk1> successful.
And the new Live Upgrade status:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0                yes      yes    no        no     -
newBE-disk1                yes      no     yes       no     -
Tip: make sure the OpenBoot parameter diag-switch? is set to false.
ok setenv diag-switch? false
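The same setting can also be checked (or changed) from the running OS with the eeprom command, for example:

# eeprom diag-switch?
diag-switch?=false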
Now reboot (# init 6) and watch the console.

INIT: New run level: 6
The system is coming down.  Please wait.
System services are now being stopped.
Dec  9 20:03:52 unixlab-1 nrpe[343]: Cannot remove pidfile '/var/run/nrpe.pid' - check your privileges.
Print services already stopped.
umount: /net/unixlab/export/jumpstart busy
umount: /net/unixlab/export busy
umount: /net busy
Live Upgrade: Deactivating current boot environment <oldBE-disk0>.
Live Upgrade: Executing Stop procedures for boot environment <oldBE-disk0>.
Live Upgrade: Current boot environment is <oldBE-disk0>.
Live Upgrade: New boot environment will be <newBE-disk1>.
Live Upgrade: Activating boot environment <newBE-disk1>.
Creating boot_archive for /.alt.tmp.b-xrh.mnt
updating /.alt.tmp.b-xrh.mnt/platform/sun4u/boot_archive
Live Upgrade: The boot device for boot environment <newBE-disk1> is
</dev/dsk/c1t1d0s0>.
Live Upgrade: Activation of boot environment <newBE-disk1> completed.
rm: /etc/lu/DelayUpdate/ is a directory
umount: /net/unixlab/export/jumpstart busy
umount: /net/unixlab/export busy
umount: /net busy
The system is down.
syncing file systems... done
rebooting...
Executing last command: boot
Boot device: mirrdisk  File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: unixlab-1
SUNW,eri0 : 100 Mbps full duplex link up
Configuring devices.
Loading smf(5) service descriptions:  39/170
Etc etc
A quick check of the Solaris 10 BE. The lufslist command shows a nice summary of a BE and its file systems.
# lufslist newBE-disk1
               boot environment name: newBE-disk1
               This boot environment is currently active.
               This boot environment will be active on next system boot.
Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t0d0s1       swap       4298342400 -                   -
/dev/dsk/c1t1d0s0       ufs       21476081664 /                   -
/dev/dsk/c1t1d0s3       ufs       21476081664 /var                -
/dev/dsk/c1t0d0s4       ufs       21476081664 /.0                 -
/dev/dsk/c1t0d0s5       ufs       38752813056 /backup             -


# lufslist oldBE-disk0
               boot environment name: oldBE-disk0
Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t0d0s1       swap       4298342400 -                   -
/dev/dsk/c1t0d0s0       ufs        4298342400 /                   -
/dev/dsk/c1t0d0s3       ufs        4298342400 /var                -
/dev/dsk/c1t0d0s4       ufs       21476081664 /.0                 -
/dev/dsk/c1t0d0s5       ufs       38752813056 /backup             -
You can also compare the current BE with the one specified in the command below.
# lucompare newBE-disk1
ERROR: newBE-disk1 is the active boot environment; cannot compare with itself

# lucompare oldBE-disk0
Determining the configuration of oldBE-disk0 ...
zoneadm: global: could not get state: No such zone configured
zoneadm: failed to get zone data
        < newBE-disk1
        > oldBE-disk0
Processing Global Zone
Comparing / ...
 Links differ
 01 < /:root:root:31:16877:DIR:
 02 > /:root:root:25:16877:DIR:
 Permissions, Links, Group differ
 01 < /lib:root:bin:7:16877:DIR:
 02 > /lib:root:root:1:41471:SYMLINK:9:
 02 > /lib/svc does not exist
 02 > /lib/svc/bin does not exist
 02 > /lib/svc/bin/lsvcrun does not exist
Etc etc
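The comparison output can get long; lucompare also takes an output file, so a variation like this (a sketch, the file name is arbitrary) saves the result instead of scrolling it on the console:

# lucompare -o /.0/compare-oldBE.txt oldBE-disk0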
I checked the status during the comparison:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0                yes      no     no        no     COMPARING
newBE-disk1                yes      yes    yes       no     -
Another check:
# cat /etc/release
                       Solaris 10 5/09 s10s_u7wos_08 SPARC
           Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                             Assembled 30 March 2009

# cat /etc/release~9
                        Solaris 9 9/05 s9s_u8wos_05 SPARC
           Copyright 2005 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 04 August 2005
Tue Dec  8 09:15:23 PST 2009 : 01 : supplementary software
Tue Dec  8 09:15:24 PST 2009 : 02 : prototype tree
Note that both BEs share the shareable file systems (/.0 and /backup):
# df -h -F ufs
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t1d0s0       20G   4.3G    15G    22%    /
/dev/dsk/c1t1d0s3       20G    86M    19G     1%    /var
/dev/dsk/c1t0d0s4       20G    20M    19G     1%    /.0
/dev/dsk/c1t0d0s5       36G   1.7G    33G     5%    /backup
/dev/dsk/c1t0d0s0      3.9G   1.6G   2.3G    41%    /.alt.tmp.b-t2b.mnt
/dev/dsk/c1t0d0s3      3.9G    33M   3.9G     1%    /.alt.tmp.b-t2b.mnt/var

# df -h -F lofs
Filesystem             size   used  avail capacity  Mounted on
/.0                     20G    20M    19G     1%    /.alt.tmp.b-t2b.mnt/.0
/backup                 36G   1.7G    33G     5%    /.alt.tmp.b-t2b.mnt/backup
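
Once you are happy with the new Solaris 10 BE and no longer need the Solaris 9 one, the old BE can be removed with ludelete (only after lustatus shows 'Can Delete' as yes for it), for example:

# ludelete oldBE-disk0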