
Solaris Volume Manager

Basically, what you want and what is most important is that active data remains available even after, say, a hardware failure.

RAID (Redundant Array of Independent Disks) Levels

1. RAID 0 (stripes and concatenation) - no redundancy here, but it provides fast I/O
2. RAID 1 (mirroring) - data is mirrored on two or more disks, and data can be read from the drives simultaneously. SVM also supports RAID 1+0 and 0+1
3. RAID 5 (striping with parity) - each disk holds both data and parity stripes. If possible, use hot spares here.

RAID 0 - Concatenation: writes data to the first disk until it is full, then moves to the next one
RAID 0 - Stripe (without parity): spreads data equally across all disks
RAID 1 - Mirror
RAID 5 - Stripe set with parity
Soft Partition: divides a volume into smaller volumes

Requirement                      Concat  Stripe  Mirror  RAID 5  Soft Partition
Redundancy                       No      No      Yes     Yes     No
Improved read performance        No      Yes     Yes     Yes     No
Improved write performance       No      Yes     No      No      No
More than 8 slices per device    No      No      No      No      Yes
Larger available storage space   Yes     Yes     No      Yes     No
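For orientation, these volume types are created with metainit. A minimal sketch follows; the volume names (d10-d50), slices, interlace and size are only example values, and the state database replicas described below must exist before metainit can be used.

RAID 0 concatenation of two slices:
# metainit d10 2 1 c1t1d0s0 1 c1t2d0s0
RAID 0 stripe over two slices with a 32 KB interlace:
# metainit d20 1 2 c1t1d0s1 c1t2d0s1 -i 32k
RAID 1 mirror (create both submirrors, build a one-way mirror, then attach the second submirror):
# metainit d31 1 1 c1t1d0s3
# metainit d32 1 1 c1t2d0s3
# metainit d30 -m d31
# metattach d30 d32
RAID 5 over three slices:
# metainit d40 -r c1t1d0s4 c1t2d0s4 c1t3d0s4
1 GB soft partition on top of d40:
# metainit d50 -p d40 1g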
SVM uses virtual disks to manage physical ones. A virtual disk is called a volume (or metadevice). The file system sees a volume as a physical disk; SVM translates I/O directed to or from the volume into I/O to or from the underlying physical member disks. So basically, SVM is a layer of software that sits between the physical disks and the file system. SVM volumes are built from disk slices or from other volumes. SVM is integrated into SMF:
# svcs -a |egrep "md|meta"
disabled       Jun_10   svc:/network/rpc/mdcomm:default
disabled       Jun_10   svc:/network/rpc/metamed:default
disabled       Jun_10   svc:/network/rpc/metamh:default
online         Jun_10   svc:/system/metainit:default
online         Jun_10   svc:/network/rpc/meta:default
online         Jun_10   svc:/system/mdmonitor:default
online         Jun_10   svc:/system/fmd:default
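As an illustration of the file system treating a volume like a physical disk, a volume is addressed through /dev/md/dsk and /dev/md/rdsk just like an ordinary slice. A minimal sketch, assuming a volume d10 and a mount point /data (both example names):
# newfs /dev/md/rdsk/d10
# mount /dev/md/dsk/d10 /data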
State Database and Replicas

The SVM state database stores configuration and status information for all volumes, hot spares and disk sets. For redundancy, multiple replicas (copies) of the database are maintained. A minimum of three replicas is needed: in case of failure, SVM uses a majority consensus algorithm to determine which replicas are valid, and a majority is half + 1. Replica facts:
1. The maximum number of replicas per disk set is 50
2. By default, a replica is 4 MB (8192 disk sectors)
3. Do not store replicas on external storage
4. Replicas cannot be placed on the root (/), /usr or swap slices
5. Even if replicas fail, the system continues to run as long as at least half of them are available; if fewer than half are available, the system panics

Creating State Database Replicas

A Solaris hard drive is traditionally limited to 8 slices (partitions), 0-7. Use slice 7 for replicas. Use "format" to see how the disks are partitioned.
# metadb -afc 3 c1t1d0s7
-a = add replicas
-f = force creation of the initial replicas
-c = number of replicas to add
# metadb -i
        flags           first blk       block count
     a        u    r    16              8192            /dev/dsk/c1t1d0s7
     a        u    r    8208 (8192+16)  8192            /dev/dsk/c1t1d0s7
     a        u    r    16400 (8208+8192)8192           /dev/dsk/c1t1d0s7
 r - replica does not have device relocation information
 o - replica active prior to last mddb configuration change
 u - replica is up to date
 l - locator for this replica was read successfully
 c - replica's location was in /etc/lvm/mddb.cf
 p - replica's location was patched in kernel
 m - replica is master, this is replica selected as input
 W - replica has device write errors
 a - replica is active, commits are occurring to this replica
 M - replica had problem with master blocks
 D - replica had problem with data blocks
 F - replica had format problems
 S - replica is too small to hold current data base
 R - replica had device read errors
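To survive the loss of an entire disk, spread the replicas over at least two disks. One way the five extra replicas shown in the next listing could have been added (the 200-block size implies an explicit -l length; the slice name matches the output below):
# metadb -a -c 5 -l 200 c1t0d0s7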
Deleting Replicas

There are eight replicas now:
# metadb
        flags           first blk       block count
     a        u    r    16              8192            /dev/dsk/c1t1d0s7
     a        u    r    8208            8192            /dev/dsk/c1t1d0s7
     a        u    r    16400           8192            /dev/dsk/c1t1d0s7
     a        u    r    16              200             /dev/dsk/c1t0d0s7
     a        u    r    216             200             /dev/dsk/c1t0d0s7
     a        u    r    416             200             /dev/dsk/c1t0d0s7
     a        u    r    616             200             /dev/dsk/c1t0d0s7
     a        u    r    816             200             /dev/dsk/c1t0d0s7
To delete the five replicas on c1t0d0s7:
# metadb -d -f /dev/dsk/c1t0d0s7
Three replicas remain:
# metadb
        flags           first blk       block count
     a        u    r    16              8192            /dev/dsk/c1t1d0s7
     a        u    r    8208            8192            /dev/dsk/c1t1d0s7
     a        u    r    16400           8192            /dev/dsk/c1t1d0s7
Tips:
1. Take good care of (i.e. back up) the files /etc/lvm/mddb.cf (locations of the SVM state database replicas) and /etc/lvm/md.cf (metadevice configuration)
2. Back up information about the disks/partitions: keep a copy of the output of prtvtoc and metastat -p
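A minimal sketch of such a backup, assuming one disk c1t1d0 and /var/tmp as the target directory (both only examples):
# prtvtoc /dev/rdsk/c1t1d0s2 > /var/tmp/c1t1d0.vtoc
# metastat -p > /var/tmp/md.cf.backup
The saved VTOC can later be written back with fmthard -s, and the metastat -p output is in md.tab format, so it can be used with metainit to recreate the metadevices.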