
HP-UX: Adding SAN Storage on Veritas (VxVM)

Note: In these examples, the site-to-site cluster (Metrocluster) uses Hitachi TrueCopy, formerly known as Hitachi Open Remote Copy (HORC) or Hitachi Remote Copy (HRC).

Examine the current system

[Local/S2S Cluster] - Run on all nodes

Run the commands below as root to collect information before the new SAN LUNs are allocated.

# vxdg list > vxdg_list.`date +'%Y-%m-%d_%H:%M:%S'`
# vxdg free > vxdg_free.`date +'%Y-%m-%d_%H:%M:%S'`
# vxprint -hrt > vxprint-hrt.`date +'%Y-%m-%d_%H:%M:%S'`
# vxdisk -o alldgs list > vxdisk_alldgs.`date +'%Y-%m-%d_%H:%M:%S'`
# vxddladm list > vxddladm_list.`date +'%Y-%m-%d_%H:%M:%S'`
# vxddladm list hbas > vxddladm_list_hbas.`date +'%Y-%m-%d_%H:%M:%S'`
# vxddladm list ports > vxddladm_list_ports.`date +'%Y-%m-%d_%H:%M:%S'`
# vxddladm list devices > vxddladm_list_devices.`date +'%Y-%m-%d_%H:%M:%S'`
# vxddladm listsupport all > vxddladm_listsupport_all.`date +'%Y-%m-%d_%H:%M:%S'`
# cp -p /etc/fstab fstab.`date +'%Y-%m-%d_%H:%M:%S'`
# bdfv > bdfv.`date +'%Y-%m-%d_%H:%M:%S'`
# cp -p /etc/horcm50.conf horcm50.conf.`date +'%Y-%m-%d_%H:%M:%S'`

Also note that VxVM keeps configuration backup files in the directory /etc/vx/cbr/bk/

The configuration of a Veritas disk group (DG) can also be backed up explicitly; specify the directory for the backup and the DG name:
# /usr/lib/vxvm/bin/vxconfigbackup -l /home/zarko/tmp/ dg02
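
If a restore is ever needed, vxconfigrestore can stage and then commit the recorded configuration. A minimal sketch, reusing the backup directory and DG name from the example above:
# /usr/lib/vxvm/bin/vxconfigrestore -l /home/zarko/tmp/ -p dg02   # precommit: stage the restore for inspection
# /usr/lib/vxvm/bin/vxconfigrestore -l /home/zarko/tmp/ -c dg02   # commit the restore
# /usr/lib/vxvm/bin/vxconfigrestore -d dg02                       # or abort a precommitted restore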

Discovery of new LUNs

[Local/S2S Cluster] - Run on all nodes

Scan the I/O system for new LUNs (-f = show full listing, -C class = restrict the listing to the given class, disk in this case)
# ioscan -fC disk  

Install special device files
# insf -vC disk

Make the VxVM configuration daemon (vxconfigd) rescan for new disks
# vxdctl enable
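
To confirm that VxVM now sees the new LUNs (uninitialized disks typically show up in the "online invalid" state), list the disks again:
# vxdisk -o alldgs list | grep invalid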

[CFS cluster]
To find which server is master in the CFS cluster:
#  vxdctl -c mode  
mode: enabled: cluster active - MASTER
master: hostname-node1

The new LUNs should now be visible on the system; say one of them is disk33:
$ ioscan -fN | grep lunpath | grep disk33
lunpath  32  0/0/0/5/0/0/0.0x50060e8006ff6540.0x4002000000000000  eslpt  CLAIMED  LUN_PATH  LUN path for disk33
lunpath  15  0/0/0/5/0/0/1.0x50060e8006ff6550.0x4002000000000000  eslpt  CLAIMED  LUN_PATH  LUN path for disk33

If this is not the case, the fix (on HP-UX 11.31) is to look for LUN paths with no hardware:
# ioscan -fN | grep lunpath | grep NO_HW 

Validate the change of the LUN associated with a LUN path (-I takes the lunpath instance number from the ioscan output):
# scsimgr -f replace_wwid -C lunpath -I <lunpath_instance_number>

Example of fixing LUNs that are not visible:
# ioscan -fN | grep lunpath | grep NO_HW 
lunpath  90  0/0/0/5/0/0/0.0x50060e8006fe2328.0x0 eslpt  NO_HW  LUN_PATH  LUN path for ctl24
lunpath 302  1/0/0/5/0/0/1.0x50060e8006fe2338.0x0 eslpt  NO_HW  LUN_PATH  LUN path for ctl24

# scsimgr -f replace_wwid -C lunpath -I 90
Binding of LUN path 0/0/0/5/0/0/0.0x50060e8006fe2328.0x0 with new LUN validated successfully

# scsimgr -f replace_wwid -C lunpath -I 302
Binding of LUN path 1/0/0/5/0/0/1.0x50060e8006fe2338.0x0 with new LUN validated successfully

Creating a new DG with new LUNs

[Two-node Local Cluster, Active/Passive, with shared LUNs]
Create the DG on the active node.
[Two-node S2S Cluster, Active/Passive, with replicated LUNs]
Create the DG on both nodes, in both sites.
[>2-node CFS Local Cluster, Active/Active, with CFS LUNs]
Create the DG on the CFS master node.
[>2-node CFS S2S Cluster, Active/Passive, with CFS and replicated LUNs]
Create the DG on the CFS master node in each site. Once the DG is created on the passive site, deport it (see the deport/import sketch below).
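
A minimal sketch of the deport/import commands referenced above (dg01 is the example DG name used below):
# vxdg deport dg01       # deport the DG, e.g. on the passive site after creation
# vxdg import dg01       # regular (non-shared) import
# vxdg -s import dg01    # shared import for CFS, run on the CFS master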

# /etc/vx/bin/vxdisksetup -f -i <disk1>   # example: /etc/vx/bin/vxdisksetup -f -i c32t0d4 

# vxdg init <dgname> <dgname_disk1name>=<disk1>   # example: vxdg init dg01 dg01_disk01=c32t0d4 

# /etc/vx/bin/vxdisksetup -f -i <disk2>     # adding second disk to DG 

# vxdg -g <dgname> adddisk <dgname_disk2name>=<disk2>   # example: vxdg -g dg01 adddisk dg01_disk02=c32t0d5

# vxprint -g <dgname> -hrt   # example: vxprint -g dg01 -hrt

To remove a disk from a DG:
#  vxdg -g <dgname> rmdisk <dgname_disk_name>  # example: vxdg -g dg01 rmdisk dg01_disk03
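
Note that rmdisk fails if the disk still holds subdisks. In that case, first evacuate the data to other disks in the DG with vxevac; a sketch, assuming dg01_disk03 still carries data:
# vxevac -g dg01 dg01_disk03         # move all subdisks off dg01_disk03 to other disks in the DG
# vxdg -g dg01 rmdisk dg01_disk03    # now the disk can be removed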

Adding new LUNs to an existing DG

[Two-node Local Cluster, Active/Passive, with shared LUNs]
Add LUNs only on the active node where the DG is imported, since the DG is deported on the passive node.
[Two-node S2S Cluster, Active/Passive, with replicated LUNs]
Add LUNs on a node in the Active site where the DG is imported, since the DG is deported in the Passive site. The LUNs in the Active site replicate to the LUNs in the Passive site (HORC replication works at the block level).
[>2-node CFS Local Cluster, Active/Active, with CFS LUNs]
Add LUNs on the CFS master node.
[>2-node CFS S2S Cluster, Active/Passive, with CFS and replicated LUNs]
Add LUNs on the CFS master in the Active site. The LUNs in the Active site replicate to the LUNs in the Passive site. The DG is deported in the Passive site.

If needed, rename LUNs that are already members of the DG:
# vxedit -g <dg_name> rename <old_LUN_name> <new_LUN_name>

# /etc/vx/bin/vxdisksetup -f -i <disk2>     # prepare disk for vxvm 
# vxdg -g <dgname> adddisk <dgname_disk2name>=<disk2>   # example: vxdg -g dg10 adddisk dg10_disk05=c32t0d5
# vxprint -g <dgname> -hrt

Creating a new LVOL in an existing DG

Create the lvol (in this example a four-disk stripe) and a mount point, make a new file system on top of it, mount it, and set ownership and permissions.

# vxassist -g dgA make lvolN 500000m layout=stripe stripeunit=128k ncol=4
# mkdir -p /opt/mountpoint
# newfs -F vxfs /dev/vx/rdsk/dgA/lvolN
# mount -F vxfs /dev/vx/dsk/dgA/lvolN /opt/mountpoint
# chown user:group /opt/mountpoint
# chmod 755 /opt/mountpoint
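
On a standalone (non-clustered) system, you would also add the mount to /etc/fstab so it survives a reboot; in a cluster, the mount is managed by the package instead. A hedged example entry:
/dev/vx/dsk/dgA/lvolN /opt/mountpoint vxfs delaylog 0 2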

[CFS Cluster, legacy packages]

On the CFS master, create the new LVOL, make the mount point, and create the new file system:
# vxassist -g dg05 make lvol8 500000m layout=stripe stripeunit=128k ncol=4
# mkdir -p /opt/mountpoint
# newfs -F vxfs -o largefiles /dev/vx/rdsk/dg05/lvol8

Add the CFS mount point, mount it, and verify:
# cfsmntadm add dg05 lvol8 /opt/mountpoint SG-CFS-OPT-MOUNTPOINT all=rw
# cfsmount /opt/mountpoint
# bdfv

Verify the file /etc/cmcluster/cfs/SG-CFS-OPT-MOUNTPOINT.env
$ cat SG-CFS-OPT-MOUNTPOINT.env
CFS_MOUNT_POINT[0]="/opt/mountpoint"
CFS_VOLUME[0]="dg05/lvol8"
CFS_MOUNT_OPTIONS[0]="g2u0271c=rw g2u0272c=rw"
CFS_PRIMARY_POLICY[0]=""

To alter mount options:
# cfsmntadm modify /opt/mountpoint all+=nodatainlog,mincache=direct,convosync=direct

The /etc/cmcluster/cfs/SG-CFS-OPT-MOUNTPOINT.env is now:
$ cat SG-CFS-OPT-MOUNTPOINT.env
CFS_MOUNT_POINT[0]="/opt/mountpoint"
CFS_VOLUME[0]="dg05/lvol8"
CFS_MOUNT_OPTIONS[0]="g2u0271c=rw,nodatainlog,mincache=direct,convosync=direct g2u0272c=rw,nodatainlog,mincache=direct,convosync=direct"
CFS_PRIMARY_POLICY[0]=""

Go to the directory /etc/cmcluster/crsp/, back up the file crsp.conf, and add the dependency lines:
DEPENDENCY_NAME          SG-CFS-OPT-MOUNTPOINT
DEPENDENCY_CONDITION     SG-CFS-OPT-MOUNTPOINT=UP
DEPENDENCY_LOCATION      SAME_NODE

Verify and apply the crsp configuration:
# cmcheckconf -P crsp.conf
# cmapplyconf -P crsp.conf
# cmviewcl -vp crsp

Synchronize the configuration file to all cluster nodes:
# cmsync crsp.conf

Mirroring LVOLs to new LUNs and releasing the old LUNs to the SAN team

Sometimes you need to mirror LVOLs to new LUNs and release the old ones, for example when you need more space or want to replace one huge LUN with several smaller, striped ones. To start mirroring (in the background, with -b) to the new disks using RAID0 (striping):
# vxassist -b -g <dg_name> mirror <lvol_name> layout=stripe stripeunit=128k nstripe=4 alloc=<disk1>,<disk2>,<etc>

Example (mirror to 8 new LUNs):
# vxassist -b -g dg10 mirror lvol1 layout=stripe stripeunit=128k nstripe=4 alloc=dg10_disk001,dg10_disk002,dg10_disk003,dg10_disk004,dg10_disk005,dg10_disk006,dg10_disk007,dg10_disk008

Use vxtask to monitor progress of mirroring:
# vxtask list
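
vxtask can also follow the task continuously instead of being polled:
# vxtask monitor    # prints progress updates until the tasks complete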

Once mirroring is done on all the needed LVOLs, dissociate (dis) and remove (-o rm) the old plexes:
# vxplex -g <dg_name> -o rm dis <plex_name>
Example:
# vxplex -g dg10 -o rm dis lvol1-01
# vxplex -g dg10 -o rm dis lvol2-01

Then remove the disks from the DG:
# vxdg -g <dg_name> rmdisk <disk_name>
Example:
# vxdg -g dg10 rmdisk dg10_disk1

Initialize the disk again; it's now ready to be returned to the SAN team:
# /opt/VRTS/bin/vxdisksetup -i <disk_name> 
Example:
# /opt/VRTS/bin/vxdisksetup -i c3t0d5 
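
Before handing the LUN back, confirm it no longer belongs to any DG:
# vxdisk -o alldgs list | grep c3t0d5    # the disk should show no DG association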

Resizing (mostly growing) an existing LVOL

[Two-node Local Cluster, Active/Passive, with shared LUNs] Run on the active node where the DG is imported, since the DG is deported on the passive node.
[Two-node S2S Cluster, Active/Passive, with replicated LUNs] Run on a node in the Active site where the DG is imported, since the DG is deported in the Passive site. The LUNs in the Active site replicate to the LUNs in the Passive site.
[>2-node CFS Local Cluster, Active/Active, with CFS LUNs] Run on the CFS master node.
[>2-node CFS S2S Cluster, Active/Passive, with CFS and replicated LUNs] Run on the CFS master in the Active site. The LUNs in the Active site replicate to the LUNs in the Passive site.

To add 50 GB to an lvol, execute the following (the file system grows automatically):
# /opt/VRTS/bin/vxresize -g <dg_name> <lvol_name> +50G

Or, to resize to a desired size, execute:
# /opt/VRTS/bin/vxresize -g <dg_name> <lvol_name> <desired_size><M|G>
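
For example, to grow lvol1 in dg10 to exactly 800 GB:
# /opt/VRTS/bin/vxresize -g dg10 lvol1 800G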

Verify the result with the bdfv command.

HORCM (only for S2S cluster)

Example of an S2S cluster with HORCM instance 50. The file /etc/horcm50.conf exists on both nodes.
#  cmviewcl 

CLUSTER        STATUS
gcu42069       up

 SITE_NAME     Houston1_pri

  NODE           STATUS       STATE
  g4u2069c       up           running

    PACKAGE        STATUS           STATE            AUTO_RUN    NODE
    my_pkg           up               running          enabled     g4u2069c

 SITE_NAME     Houston2_sec

  NODE           STATUS       STATE
  g9u0943c       up           running

$  cmdo "cat /etc/horcm50.conf" 

 ## Executing on node g4u2069c:
HORCM_MON
#ip_address     service         poll(10ms)      timeout(10ms)
g4u2069c        horcm50         1000            3000

HORCM_CMD
#Primary-dev_name       Alternate-dev_name
/dev/rdsk/c10t0d1       /dev/rdsk/c2t0d1

HORCM_LDEV
#group_name     dev_name        XPSerial        CU:Ldev MU#
#Disk Dev: /dev/rdsk/c2t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_1      00085580        01:4d
#Disk Dev: /dev/rdsk/c5t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_2      00085580        01:4c
#Disk Dev: /dev/rdsk/c6t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_3      00085580        01:4a
#Disk Dev: /dev/rdsk/c8t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_4      00085580        01:4b

HORCM_INST
#group_name     ip_address      service
gcu42069_CA             g9u0943c        horcm50 

## Executing on node g9u0943c:
HORCM_MON
#ip_address     service         poll(10ms)      timeout(10ms)
g9u0943c        horcm50         1000            3000

HORCM_CMD
#Primary-dev_name       Alternate-dev_name
/dev/rdsk/c14t0d1       /dev/rdsk/c6t0d1

HORCM_LDEV
#group_name     dev_name        XPSerial        CU:Ldev MU#
#Disk Dev: /dev/rdsk/c2t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_1      00066749        20:5e
#Disk Dev: /dev/rdsk/c4t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_2      00066749        20:5d
#Disk Dev: /dev/rdsk/c6t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_3      00066749        20:5f
#Disk Dev: /dev/rdsk/c8t0d0      - Disk Group: dg02
gcu42069_CA             gcu42069_CA_dg02_4      00066749        20:5c

HORCM_INST
#group_name     ip_address      service
gcu42069_CA             g4u2069c        horcm50
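
After editing /etc/horcm50.conf, the HORCM instance has to be restarted to pick up the change. A sketch, assuming the standard RAID Manager scripts are in the PATH (50 is the instance number):
# horcmshutdown.sh 50    # stop HORCM instance 50
# horcmstart.sh 50       # start it again with the new configuration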

Before adding LUNs to or removing them from HORCM, check the current pairing. Example after adding two (2) new LUNs; the new LUNs show up as SMPL (simplex, not yet paired):
$  pairdisplay -g dbciGRP_CA -I50 -CLI -fcx  
dbciGRP_CA      dbciGRP_CA_dg13grp_1 L   CL1-J-12  0    0 17892   138 P-VOL PAIR ASYNC      0    5e -
dbciGRP_CA      dbciGRP_CA_dg13grp_1 R   CL3-K-6  0    2 45232    5e S-VOL PAIR ASYNC      0   138 -
dbciGRP_CA      dbciGRP_CA_dg13grp_2 L   CL1-J-12  0    1 17892   139 P-VOL PAIR ASYNC      0    5f -
dbciGRP_CA      dbciGRP_CA_dg13grp_2 R   CL3-K-6  0    3 45232    5f S-VOL PAIR ASYNC      0   139 -
dbciGRP_CA      dbciGRP_CA_dg13grp_3 L   CL1-J-12  0    2 17892   13a P-VOL PAIR ASYNC      0    60 -
dbciGRP_CA      dbciGRP_CA_dg13grp_3 R   CL3-K-6  0    4 45232    60 S-VOL PAIR ASYNC      0   13a -
dbciGRP_CA      dbciGRP_CA_dg13grp_4 L   CL1-J-12  0    3 17892   13b P-VOL PAIR ASYNC      0    61 -
dbciGRP_CA      dbciGRP_CA_dg13grp_4 R   CL3-K-6  0    5 45232    61 S-VOL PAIR ASYNC      0   13b -
dbciGRP_CA      dbciGRP_CA_dg13grp_5 L   CL3-P-12  0    2 17892   137 P-VOL PAIR ASYNC      0    62 -
dbciGRP_CA      dbciGRP_CA_dg13grp_5 R   CL3-K-6  0    6 45232    62 S-VOL PAIR ASYNC      0   137 -
dbciGRP_CA      dbciGRP_CA_dg13grp_6 L   CL3-B-4  2    0 17892    47 SMPL     -      -     -     - -
dbciGRP_CA      dbciGRP_CA_dg13grp_6 R   CL3-B-6  2    0 45232    ae SMPL     -      -     -     - -
dbciGRP_CA      dbciGRP_CA_dg13grp_7 L   CL3-B-4  2    1 17892    48 SMPL     -      -     -     - -
dbciGRP_CA      dbciGRP_CA_dg13grp_7 R   CL3-B-6  2    1 45232   13c SMPL     -      -     -     - -

Then ask the SAN team to pair the new LUNs and follow the copy progress in % (the COPY state):
$  pairdisplay -g dbciGRP_CA -I50 -CLI -fcx 
dbciGRP_CA      dbciGRP_CA_dg13grp_1 L   CL1-J-12  0    0 17892   138 P-VOL PAIR ASYNC      0    5e -
dbciGRP_CA      dbciGRP_CA_dg13grp_1 R   CL3-K-6  0    2 45232    5e S-VOL PAIR ASYNC      0   138 -
dbciGRP_CA      dbciGRP_CA_dg13grp_2 L   CL1-J-12  0    1 17892   139 P-VOL PAIR ASYNC      0    5f -
dbciGRP_CA      dbciGRP_CA_dg13grp_2 R   CL3-K-6  0    3 45232    5f S-VOL PAIR ASYNC      0   139 -
dbciGRP_CA      dbciGRP_CA_dg13grp_3 L   CL1-J-12  0    2 17892   13a P-VOL PAIR ASYNC      0    60 -
dbciGRP_CA      dbciGRP_CA_dg13grp_3 R   CL3-K-6  0    4 45232    60 S-VOL PAIR ASYNC      0   13a -
dbciGRP_CA      dbciGRP_CA_dg13grp_4 L   CL1-J-12  0    3 17892   13b P-VOL PAIR ASYNC      0    61 -
dbciGRP_CA      dbciGRP_CA_dg13grp_4 R   CL3-K-6  0    5 45232    61 S-VOL PAIR ASYNC      0   13b -
dbciGRP_CA      dbciGRP_CA_dg13grp_5 L   CL3-P-12  0    2 17892   137 P-VOL PAIR ASYNC      0    62 -
dbciGRP_CA      dbciGRP_CA_dg13grp_5 R   CL3-K-6  0    6 45232    62 S-VOL PAIR ASYNC      0   137 -
dbciGRP_CA      dbciGRP_CA_dg13grp_6 L   CL3-B-4  2    0 17892    47 P-VOL COPY ASYNC     16    ae -
dbciGRP_CA      dbciGRP_CA_dg13grp_6 R   CL3-B-6  2    0 45232    ae S-VOL COPY ASYNC      -    47 -
dbciGRP_CA      dbciGRP_CA_dg13grp_7 L   CL3-B-4  2    1 17892    48 P-VOL COPY ASYNC     16   13c -
dbciGRP_CA      dbciGRP_CA_dg13grp_7 R   CL3-B-6  2    1 45232   13c S-VOL COPY ASYNC      -    48 -
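
Instead of re-running pairdisplay, you can block until the new pairs reach PAIR state. A sketch with pairevtwait; check the -t units (timeout/interval) against your CCI documentation:
# pairevtwait -g dbciGRP_CA -I50 -s pair -t 3600    # wait until the group reaches PAIR state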
