Upgrade of an SVM-based system to ZFS

Here is a real-life production example of upgrading a SunFire V240 from a system based on Solaris Volume Manager (SVM) to one based on ZFS.
The system runs ClearCase, software that supports configuration management of source code.
The SVM system runs Solaris 9; the new one will of course be Solaris 10.

So here is the file system layout (the V240 has 2 x 73G and 2 x 146G disks).

The info about operational environment file systems:
{host}/> df -F ufs -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0         486M    84M   353M    20%    /
/dev/md/dsk/d3         1.5G   905M   529M    64%    /usr
/dev/md/dsk/d4         1.5G   639M   795M    45%    /var
/dev/md/dsk/d5         194G   174G    19G    91%    /.0
The info about file systems for ClearCase:
{host}/> df -F lofs -h
Filesystem             size   used  avail capacity  Mounted on
/export/vobstore       194G   174G    19G    91%    /vobstore
/export/viewstore      194G   174G    19G    91%    /viewstore
/export/shipping       194G   174G    19G    91%    /shipping
/export/ccase_rls      194G   174G    19G    91%    /ccase_rls
And the status of the SVM metadevices:
{host}/> metastat -p
d5 -m d25 d15 1
d25 2 1 c1t1d0s5 \
         1 c1t3d0s0
d15 2 1 c1t0d0s5 \
         1 c1t2d0s0
d1 -m d11 d21 1
d11 1 1 c1t0d0s1
d21 1 1 c1t1d0s1
d4 -m d14 d24 1
d14 1 1 c1t0d0s4
d24 1 1 c1t1d0s4
d3 -m d13 d23 1
d13 1 1 c1t0d0s3
d23 1 1 c1t1d0s3
d0 -m d10 d20 1
d10 1 1 c1t0d0s0
d20 1 1 c1t1d0s0
And the info about the replicas of the metadevice state database:
{host}/> metadb -i
        flags           first blk       block count
     a m  p  luo   r    16              8192            /dev/dsk/c1t0d0s7
     a    p  luo   r    8208            8192            /dev/dsk/c1t0d0s7
     a    p  luo   r    16              8192            /dev/dsk/c1t1d0s7
     a    p  luo   r    8208            8192            /dev/dsk/c1t1d0s7

In short, the idea is to break the SVM mirrors one half at a time, replace the disks with larger ones, install Solaris 10 with a ZFS root, migrate the ClearCase data, and re-establish the mirrors under ZFS.

The upgrade process:

1. Delete SVM DB from disk0 (c1t0d0)

> metadb -d /dev/dsk/c1t0d0s7
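
If it worked, the replicas should now remain only on c1t1d0s7; a quick check:

> metadb -i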

2. Detach/clear disks 0 and 2 from SVM mirror

Detach/clear disk 0 (c1t0d0) from the SVM (only slices 0, 1, 3 and 4, which hold the Operational Environment):
>foreach i (0 1 3 4)
>metadetach d${i} d1${i}
>metaclear d1${i}
>end
d0: submirror d10 is detached
d10: Concat/Stripe is cleared
d1: submirror d11 is detached
d11: Concat/Stripe is cleared
d3: submirror d13 is detached
d13: Concat/Stripe is cleared
d4: submirror d14 is detached
d14: Concat/Stripe is cleared
Detach/clear disk0 slice 5 and disk 2 (c1t2d0) from SVM (these hold the ClearCase data):
>metadetach d5 d15
d5: submirror d15 is detached

>metaclear d15
d15: Concat/Stripe is cleared

> metastat -p
d5 -m d25 1
d25 2 1 c1t1d0s5 \
         1 c1t3d0s0
d4 -m d24 1
d24 1 1 c1t1d0s4
d3 -m d23 1
d23 1 1 c1t1d0s3
d1 -m d21 1
d21 1 1 c1t1d0s1
d0 -m d20 1
d20 1 1 c1t1d0s0

3. Insert 300G disks in slots 0 and 2

4. Disk0 slice 0 (c1t0d0s0) is 32G and has Solaris 10 pre-installed (the ZFS root pool space0)

5. Disk0 slice 1 (c1t0d0s1) takes the rest of the disk space; the disk is formatted with an SMI label.
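
A quick, read-only check of the new slice layout on disk0:

> prtvtoc /dev/rdsk/c1t0d0s2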

6. Go to OpenBoot prompt: init 0

7. Boot from the disk0: {ok} boot

8. Change hostname, reboot

The hostname is left over from the installation on a test machine, so verify these files:
/etc/nodename, /etc/inet/hosts, /etc/netmasks, /etc/hostname.bge0, /etc/defaultrouter, /etc/defaultdomain;
also make sure all disks are visible (command: devfsadm).
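
A minimal sketch of the cleanup, where newhost is a placeholder for the production hostname:

> echo newhost > /etc/nodename
> vi /etc/inet/hosts /etc/hostname.bge0     (replace the test-machine name)
> devfsadm
> init 6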

9. Install NIS slave
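
A minimal sketch, assuming the NIS domain name is already configured and nismaster is a placeholder for the NIS master's hostname:

> ypinit -s nismaster
> svcadm enable network/nis/server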

10. Create zpool space-CC on c1t0d0s1 with the ClearCase file system layout.

The file systems vobstore and viewstore have compression off. The options for the ClearCase file systems are roughly as in the table below; a creation sketch follows the table.
File system                       Mount point        Quota (GB)  Owner           Perms  NFS options
space-CC/cc-pri/var_adm_rational  /var/adm/rational  1           root:root       0755   off
space-CC/cc-pri/vobstore          /vobstore          400         vobadm:ccusers  0755   rw=usershosts (one of our NIS netgroups)
space-CC/cc-pri/viewstore         /viewstore         20          vobadm:ccusers  2775   rw=usershosts
space-CC/cc-pri/ccase_rls         /ccase_rls         20          vobadm:ccusers  0755   ro
space-CC/cc-pri/vobadm            /vobadm            5           vobadm:ccusers  0755   off
space-CC/cc-pri/buildstore        /buildstore        20          vobadm:ccusers  2775   rw=usershosts
space-CC/cc-pri/shipping          /shipping          5           root:root       0555   off
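
A minimal sketch of how the pool and one of these datasets could be created (names and values from the table; the sharenfs string is site-specific, since usershosts is one of our NIS netgroups):

> zpool create space-CC c1t0d0s1
> zfs create space-CC/cc-pri
> zfs create -o quota=400g -o mountpoint=/vobstore space-CC/cc-pri/vobstore
> zfs set sharenfs=rw=usershosts space-CC/cc-pri/vobstore
> chown vobadm:ccusers /vobstore ; chmod 0755 /vobstore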

Set vobstore and viewstore compression to OFF.
> zfs set compression=off space-CC/cc-pri/vobstore 
> zfs set compression=off space-CC/cc-pri/viewstore

> zfs get compression space-CC/cc-pri/viewstore
NAME                       PROPERTY     VALUE     SOURCE
space-CC/cc-pri/viewstore  compression  off       local
> zfs get compression space-CC/cc-pri/vobstore
NAME                      PROPERTY     VALUE     SOURCE
space-CC/cc-pri/vobstore  compression  off       local

11. Configure access to legacy UFS data

- If some disk(s) are not visible, run devfsadm.

- The services metainit, meta and mdmonitor have to be running:
        > svcadm enable -r metainit meta mdmonitor
        
- Create SVM database replicas on the swap slice of disk1:
        > metadb -afc 3 c1t1d0s1 
        
- Create the file /etc/lvm/md.tab and create the metadevices from it with the command:
        >metainit -a

>cat /etc/lvm/md.tab
# /.0
d25 2 1 c1t1d0s5 \
         1 c1t3d0s0
# /var
d24 1 1 c1t1d0s4
# /usr
d23 1 1 c1t1d0s3
# /
d20 1 1 c1t1d0s0

> metainit -a
d25: Concat/Stripe is setup
d24: Concat/Stripe is setup
d23: Concat/Stripe is setup
d20: Concat/Stripe is setup

Create the legacy mount points:
        >umask 22
        >mkdir -p /legacy/root ; mkdir -p /legacy/usr
        >mkdir -p /legacy/var ; mkdir -p /legacy/.0

Add lines to /etc/vfstab:
/dev/md/dsk/d20  /dev/md/rdsk/d20 /legacy/root   ufs     1   no   ro
/dev/md/dsk/d23  /dev/md/rdsk/d23 /legacy/usr    ufs     1   no   ro
/dev/md/dsk/d24  /dev/md/rdsk/d24 /legacy/var    ufs     1   no   ro
/dev/md/dsk/d25  /dev/md/rdsk/d25 /legacy/.0     ufs     2   no   ro

Run: 
>mount /legacy/root
>mount /legacy/usr
>mount /legacy/var
>mount /legacy/.0
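
The four legacy file systems should now be visible, mounted read-only:

> df -F ufs -h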

12. Restore /ccase_rls and /vobadm from legacy to space-CC (needed for CC installation)

> rsync -avH /legacy/.0/ccase_rls/ /ccase_rls/
> rsync -avH /legacy/root/home/vobadm/ /vobadm/

13. Now let the developers install ClearCase on zpool space0

14. Transfer CC data from legacy to space-CC

>rsync -avH /legacy/var/adm/rational/ /var/adm/rational/
>rsync -avH /legacy/.0/viewstore/ /viewstore/
>rsync -avH /legacy/.0/vobstore/ /vobstore/
>rsync -avH /legacy/.0/shipping/ /shipping/

15. Configure NFS

Adjust the NFS startup script to get verbose mountd logging:
> egrep mountd /lib/svc/method/nfs-server
# Start up mountd and nfsd if anything is exported.
/usr/lib/nfs/mountd -v
Enable the NFS client and server.
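
On Solaris 10 both are SMF services; with the standard FMRIs:

> svcadm enable -r nfs/server
> svcadm enable -r nfs/client
> svcs nfs/server nfs/client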

16. Configure Samba

Use the existing host account; restore the file:
/legacy/usr/local/samba/private/secrets.tdb

Join the ourwindomain domain:
>net rpc join -U ourwindomain/zarko

Verify Samba is running.
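
A sketch of the verification, assuming Samba lives in the same /usr/local/samba tree as on the legacy system:

> ps -ef | egrep 'smbd|nmbd'
> /usr/local/samba/bin/smbclient -L localhost -U%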

17. Wait until the ClearCase people verify ClearCase functionality and data on space-CC

18. Unmount /legacy/root (and /legacy/usr, /legacy/var, /legacy/.0)
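
For example:

> umount /legacy/.0 ; umount /legacy/var
> umount /legacy/usr ; umount /legacy/root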

19. Delete the temporary SVM DB replicas from disk1 slice 1

>metadb -df /dev/dsk/c1t1d0s1

20. Move file /etc/lvm/md.tab to /etc/lvm/.md.tab
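
That is:

> mv /etc/lvm/md.tab /etc/lvm/.md.tab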

21. Unconfigure disks 1 and 3

>cfgadm -c unconfigure c1::dsk/c1t1d0
>cfgadm -c unconfigure c1::dsk/c1t3d0

22. Replace disks 1 and 3 with 300G ones.

23. Format disk1 with SMI label
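
The label type can be picked in format's expert mode (the session is interactive; choose the SMI label from the label menu):

> format -e c1t1d0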

24. Copy VTOC from disk0 to disk1

>prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

25. Attach disk1 slice0 to space0

>zpool attach -f space0 c1t0d0s0 c1t1d0s0

26. Wait for space0 to finish resilvering
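
Resilver progress can be watched with zpool status (the same check applies to space-CC in step 32):

> zpool status space0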

27. Install bootblock on c1t1d0s0

> installboot -F zfs /usr/platform/SUNW,Sun-Fire-V240/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

28. Check/verify the OpenBoot parameters, test booting from disk1 ({ok} boot mirrdisk)

boot-device=bootdisk mirrdisk 
use-nvramrc?=true 
nvramrc=
devalias bootdisk /pci@1c,600000/scsi@2/disk@0,0:a
devalias mirrdisk /pci@1c,600000/scsi@2/disk@1,0:a
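
From the running system the same parameters can be inspected and set with eeprom, for example:

> eeprom boot-device
> eeprom boot-device="bootdisk mirrdisk"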

29. Restore, if needed, the vobadm cronjob from backup

30. Reboot and verify services cron/nfs/samba/CC
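
A quick health check after the reboot:

> svcs -xv
> svcs cron nfs/server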

31. Attach disk1 slice1 (c1t1d0s1) to space-CC

>zpool attach -f space-CC c1t0d0s1 c1t1d0s1

32. Wait for space-CC to resilver

33. Add disks 2 and 3 as a mirror to space-CC

>zpool add space-CC mirror c1t2d0 c1t3d0

34. Done

