ZFS snapshot, clone, volume
ZFS Snapshot
A ZFS snapshot is a read-only copy of a file system (FS). When taken, it consumes no additional disk space; as data later changes, the snapshot grows, because it keeps references to the old data (blocks now unique to the snapshot), so that space cannot be freed.
Let's take a snapshot of the FS mypool/home/user1 and name it Monday, so you can see the command's syntax.
# zfs snapshot mypool/home/user1@Monday
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.49G 527M 528M /mypool
mypool/home 993M 527M 20K /mypool/home
mypool/home/user1 993M 527M 993M /mypool/home/user1
mypool/home/user1@Monday 0 - 993M - (consumes no additional space within the zpool)
You cannot even destroy an FS while it has snapshots.
# zfs destroy mypool/home/user1
cannot destroy 'mypool/home/user1': filesystem has children
use '-r' to destroy the following datasets:
mypool/home/user1@Monday
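For reference, if you really did want to destroy the FS together with all of its snapshots, -r does exactly what the error message suggests (destructive, so double-check the dataset name first):
# zfs destroy -r mypool/home/user1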
Let's change some data and see that the snapshot's used space is now growing.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.49G 527M 528M /mypool
mypool/home 993M 527M 20K /mypool/home
mypool/home/user1 993M 527M 993M /mypool/home/user1
mypool/home/user1@Monday 28K - 993M -
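For reference, a snapshot that is no longer needed can be destroyed on its own, freeing the space held by its unique blocks (we keep Monday around for the examples that follow):
# zfs destroy mypool/home/user1@Monday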
Snapshots are stored in a hidden directory named .zfs/snapshot/, located in the root of the FS.
So you can go there and restore files if needed; remember, it is a read-only copy of the FS.
# /mypool/home/user1/.zfs/snapshot> ls -al
total 3
dr-xr-xr-x 2 root root 2 Nov 16 13:23 .
dr-xr-xr-x 3 root root 3 Nov 16 13:23 ..
drwxr-xr-x 3 root root 6 Nov 16 17:09 Monday
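Restoring a single accidentally deleted file is then an ordinary copy out of the snapshot directory (lost-file.txt is just a hypothetical name):
# cp /mypool/home/user1/.zfs/snapshot/Monday/lost-file.txt /mypool/home/user1/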
Besides restoring individual files from a snapshot, you can roll back the whole FS to a previously taken snapshot.
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
mypool/home/user1@Monday 993M - 993M -
# zfs rollback mypool/home/user1@Monday
All changes made since snapshot Monday was taken are discarded, and the FS is the same as it was on that Monday.
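Note that zfs rollback by default only rolls back to the most recent snapshot; to roll back to an older one, -r must be given, and it destroys any snapshots taken after the one you roll back to:
# zfs rollback -r mypool/home/user1@Monday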
ZFS Clone
Clones can only be created from a snapshot. A clone is actually a new FS whose initial content is that of the original FS, and unlike a snapshot it is writable.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.49G 527M 528M /mypool
mypool/home 993M 527M 20K /mypool/home
mypool/home/user1 993M 527M 993M /mypool/home/user1
mypool/home/user1@Monday 0 - 993M -
# zfs clone mypool/home/user1@Monday mypool/home/user2
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.49G 527M 528M /mypool
mypool/home 993M 527M 20K /mypool/home
mypool/home/user1 993M 527M 993M /mypool/home/user1
mypool/home/user1@Monday 0 - 993M -
mypool/home/user2 0 527M 993M /mypool/home/user2
Once a clone has been created from a snapshot, the snapshot cannot be destroyed for as long as the clone exists.
# zfs get origin mypool/home/user2
NAME PROPERTY VALUE SOURCE
mypool/home/user2 origin mypool/home/user1@Monday -
# zfs destroy mypool/home/user1@Monday
cannot destroy 'mypool/home/user1@Monday': snapshot has dependent clones
use '-R' to destroy the following datasets:
mypool/home/user2
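If you want to keep the clone but get rid of the dependency, zfs promote reverses the parent-child relationship: the snapshot moves under the clone, and the original FS can then be destroyed if desired:
# zfs promote mypool/home/user2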
The command below saves a snapshot as a stream; the file /tmp/Tuesday is a ZFS snapshot stream of the FS mypool/home/user1 (the snapshot mypool/home/user1@Tuesday must already have been taken).
# zfs send mypool/home/user1@Tuesday > /tmp/Tuesday
Now I can use this snapshot stream to create a new FS.
# zfs receive mypool/home/user3 < /tmp/Tuesday
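The stream can also be piped directly into zfs receive on another machine over ssh (backuphost and backup/user1 below are hypothetical names), and with -i only the changes between two snapshots are sent (the receiving side must already have the earlier snapshot):
# zfs send mypool/home/user1@Tuesday | ssh backuphost zfs receive backup/user1
# zfs send -i mypool/home/user1@Monday mypool/home/user1@Tuesday | ssh backuphost zfs receive backup/user1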
ZFS Volume
A ZFS volume is a dataset that represents a block device.
# zfs create -V 2g mypool/volume-1
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 4.46G 12.2G 528M /mypool
mypool/home 1.94G 12.2G 23K /mypool/home
mypool/home/user1 993M 2.03G 993M /mypool/home/user1
mypool/home/user3 993M 12.2G 993M /mypool/home/user3
mypool/home/user3@Tuesday 0 - 993M -
mypool/volume-1 2G 14.2G 16K -
# zfs get all mypool/volume-1
NAME PROPERTY VALUE SOURCE
mypool/volume-1 type volume -
mypool/volume-1 creation Mon Nov 16 18:38 2009 -
mypool/volume-1 used 2G -
mypool/volume-1 available 14.2G -
mypool/volume-1 referenced 16K -
mypool/volume-1 compressratio 1.00x -
mypool/volume-1 reservation none default
mypool/volume-1 volsize 2G -
mypool/volume-1 volblocksize 8K -
mypool/volume-1 checksum on default
mypool/volume-1 compression on inherited from mypool
mypool/volume-1 readonly off inherited from mypool
mypool/volume-1 shareiscsi off default
mypool/volume-1 copies 1 default
mypool/volume-1 refreservation 2G local
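On Solaris the volume appears under /dev/zvol/, so it can be used like any other block device, for example to carry a UFS file system (/mnt below is just an example mount point):
# newfs /dev/zvol/rdsk/mypool/volume-1
# mount /dev/zvol/dsk/mypool/volume-1 /mnt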
ZFS data integrity
The fsck tool is not needed any more: ZFS is a transactional FS, so forget about on-disk inconsistency.
fsck also validates data integrity; for that task ZFS uses scrubbing, which, unlike fsck, does not require unmounting the FS.
You may want to run the following command from a cron job once per day (a sample crontab entry is shown after the status output below).
# zpool scrub mypool
# zpool status
pool: mypool
state: ONLINE
scrub: scrub in progress for 0h0m, 6.80% done, 0h2m to go
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
c1t1d0 ONLINE 0 0 0
errors: No known data errors
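For example, a root crontab entry like this (the schedule is just an illustration) would start a scrub every night at 03:00:
0 3 * * * /usr/sbin/zpool scrub mypool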
To stop scrubbing:
# zpool scrub -s mypool
# zpool status
pool: mypool
state: ONLINE
scrub: scrub stopped after 0h0m with 0 errors on Mon Nov 16 18:52:24 2009
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
c1t1d0 ONLINE 0 0 0
errors: No known data errors
Moving a zpool between two machines
Say you have two disks in a Sun Fire V120, and a zpool consisting of one of them.
You want to take that disk out of the system and install it in another V120.
The pool must be explicitly exported so the first system knows it is ready for migration.
Exporting also flushes all data to the disk, after which the system loses any knowledge of the exported pool.
After exporting the zpool, take the disk out and import it on the other system.
first_v120# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
cs2-pool 16.9G 94K 16.9G 0% ONLINE -
first_v120# zfs list
NAME USED AVAIL REFER MOUNTPOINT
cs2-pool 89.5K 16.6G 1K /cs2-pool
first_v120# zpool export cs2-pool
first_v120# zfs list
no datasets available
first_v120# zpool list
no pools available
Take the disk out and install it in the other V120.
The following command shows zpools available for import but DOES NOT import anything.
second_v120# zpool import
pool: cs2-pool
id: 8561877967037688236
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
cs2-pool ONLINE
c1t1d0 ONLINE
second_v120# zpool list
no pools available
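As the action line above says, the pool can also be imported by its numeric identifier, which helps when two exported pools carry the same name; you can even give the pool a new name on import (some-other-name below is just a placeholder):
second_v120# zpool import 8561877967037688236
second_v120# zpool import cs2-pool some-other-name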
Now import the specific zpool (the pool, its file systems and their files will all be imported).
second_v120# zpool import cs2-pool
second_v120# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
cs2-pool 16.9G 161K 16.9G 0% ONLINE -
second_v120# zfs list
NAME USED AVAIL REFER MOUNTPOINT
cs2-pool 152K 16.6G 64K /cs2-pool