
Migrating from UFS Root File System to a ZFS Root File System (Without Zones)


Okay, say I have a system with two disks.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SEAGATE-SX318203LC-B90C cyl 9770 alt 2 hd 12 sec 303>
          /pci@1f,0/pci@1/scsi@8/sd@0,0
       1. c1t1d0 <SEAGATE-SX318203LC-B90C cyl 9770 alt 2 hd 12 sec 303>
          /pci@1f,0/pci@1/scsi@8/sd@1,0
Disk 0 is formatted with UFS and is the boot disk. I want to migrate the UFS root file system to a ZFS one (the zpool will live on disk 1). A ZFS root pool can only be created on slices, not on a whole disk, which also means the disk label must be SMI, not EFI. So I partition disk 1 as shown below (a quick way to check or fix the label follows the partition table).
partition> p
Current partition table (original):
Total disk cylinders available: 9770 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 9229       16.00GB    (9229/0/0) 33556644
  1 unassigned    wu       0               0         (0/0/0)           0
  2     backup    wm       0 - 9769       16.94GB    (9770/0/0) 35523720
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
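By the way, if the disk ever had an EFI label (e.g. from an earlier whole-disk zpool), format's expert mode can put an SMI label back on it, and prtvtoc is a quick way to double-check the slice layout. A minimal sketch, assuming c1t1d0 is the target disk:
# format -e c1t1d0
(choose label, then pick the SMI label type)
# prtvtoc /dev/rdsk/c1t1d0s2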
Then I create a zpool named pool-0.
# zpool create pool-0 c1t1d0s0

# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
pool-0   106K  15.6G    18K  /pool-0

# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool-0  15.9G   111K  15.9G     0%  ONLINE  -
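Before handing the pool to Live Upgrade it does not hurt to confirm it is healthy and really built on the slice (c1t1d0s0), not the whole disk:
# zpool status pool-0
(the config section should list c1t1d0s0 and the pool state should be ONLINE)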
The current boot environment (BE) is named ufsBE (-c names it; I chose the name) and the new one, zfsBE, will be created (-n names it). Obviously, the zpool has to be created beforehand (-p places the new BE on that ZFS pool).
# lucreate -c ufsBE -n zfsBE -p pool-0

Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <pool-0/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </var>.
Creating compare database for file system </pool-0/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-0fc.mnt
updating /.alt.tmp.b-0fc.mnt/platform/sun4u/boot_archive
15+0 records in
15+0 records out
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
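It is also easy to confirm that lucreate really placed the new root dataset under pool-0/ROOT:
# zfs list -r pool-0/ROOT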
Now check the status of the BEs.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
Check the new ZFS file systems that have been created.
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
pool-0             4.06G  11.6G  92.5K  /pool-0
pool-0/ROOT        1.56G  11.6G    18K  /pool-0/ROOT
pool-0/ROOT/zfsBE  1.56G  11.6G  1.56G  /
pool-0/dump         512M  12.1G    16K  -
pool-0/swap        2.00G  13.6G    16K  -
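Note that lucreate also created ZFS volumes for swap (pool-0/swap) and dump (pool-0/dump). Once the system is running from zfsBE they can be verified with the usual tools (output will differ per system):
# swap -l
(the swap device should be /dev/zvol/dsk/pool-0/swap)
# dumpadm
(the dump device should point at the pool-0/dump volume)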
Let's now activate the newly created ZFS BE.
# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment .

/usr/sbin/luactivate: /etc/lu/DelayUpdate/: cannot create
Okay, this is a known issue and the fix follows. For the tcsh shell, set an environment variable:
# setenv BOOT_MENU_FILE menu.lst
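If you are in sh, ksh or bash instead of tcsh, the equivalent would be:
# BOOT_MENU_FILE=menu.lst
# export BOOT_MENU_FILE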
Try again:
# luactivate zfsBE

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@8/disk@0,0:a

3. Boot to the original boot environment by typing:
     boot
**********************************************************************
Modifying boot archive service
Activation of boot environment <zfsBE> successful.
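Before rebooting it is worth noting down the current OBP boot-device (the value the fallback instructions above tell you to restore). On SPARC it can be read from the running system:
# eeprom boot-device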
Reboot (but read the previous message to know which command to use).
# init 6
Watch the console during boot...
Sun Fire V120 (UltraSPARC-IIe 648MHz), No Keyboard
OpenBoot 4.0, 1024 MB memory installed, Serial #53828024.
Ethernet address 0:3:ba:35:59:b8, Host ID: 833559b8.
Executing last command: boot
Boot device: /pci@1f,0/pci@1/scsi@8/disk@1,0:a  File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: counterstrike2
SUNW,eri0 : 100 Mbps full duplex link up
Configuring devices.
/dev/rdsk/c1t0d0s4 is clean
/dev/rdsk/c1t0d0s5 is clean
Reading ZFS config: done.
Mounting ZFS filesystems: (3/3)
NOTICE: setting nrnode to max value of 57843
The new status of the BEs follows.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
Both the UFS and ZFS file systems are visible. This is it!
# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
pool-0/ROOT/zfsBE       16G   1.6G    12G    12%    /
pool-0                  16G    97K    12G     1%    /pool-0
pool-0/ROOT             16G    18K    12G     1%    /pool-0/ROOT
pool-0/.0               16G    50M    12G     1%    /pool-0/.0
pool-0/backup           16G    18K    12G     1%    /pool-0/backup
# df -h -F ufs
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s4      2.0G   130M   1.8G     7%    /.0
/dev/dsk/c1t0d0s5      4.6G   1.6G   3.0G    35%    /backup
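Once you are sure you will never need to fall back to the UFS root, the old BE can be removed; a minimal cleanup sketch:
# ludelete ufsBE
# lustatus
(ufsBE should no longer be listed)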