10.1 + ZFS snapshot eating diskspace
Rick Romero
rick at havokmon.com
Mon Apr 27 16:15:16 UTC 2015
Try number two. I built another new system, no encryption this time.
I replicated ONE snapshot that is about 562GB of data.
(I just found Ronald's reply in my Spam folder, sorry!)
This new 10.1 system has the exact same 3 drives in RAIDZ1 as the original
source (9.2). What's confusing is that the original RAIDZ1 replicates
correctly to a 10-drive RAIDZ2 (10.1), but the RAIDZ2 source cannot
replicate correctly to a new 3-drive RAIDZ1.
So not only is this a problem with the new system, it also concerns me
that if the old system ever had a problem, a full restore from backup
would eat all the disk space.
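For reference, the replication itself was nothing exotic; roughly this,
with 'newbox' standing in for the actual receiving host and the exact
flags approximate:

# zfs send sysvol/primessd_home@remrep-Week16 | ssh newbox zfs receive -v sysvol/home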
Source:
# zfs get all sysvol/primessd_home | grep -i used
sysvol/primessd_home  used                  822G  -
sysvol/primessd_home  usedbysnapshots       260G  -
sysvol/primessd_home  usedbydataset         562G  -
sysvol/primessd_home  usedbychildren        0     -
sysvol/primessd_home  usedbyrefreservation  0     -
sysvol/primessd_home  logicalused           811G  -
Right? 562G is the 'current' amount of space used?
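(A sanity check, assuming the source snapshot carries that same
remrep-Week16 name: a dry-run send should print an estimated stream size
in the 562G ballpark.)

# zfs send -nv sysvol/primessd_home@remrep-Week16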
So I sent it to a new box, and this is the result:
# zfs list -t all
NAME                       USED   AVAIL  REFER  MOUNTPOINT
sysvol                     919G       0  12.5G  /sysvol
sysvol/home                906G       0   898G  /sysvol/home
sysvol/home@remrep-Week16  8.53G      -   898G  -
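(The per-snapshot 'written' property might also show where the space is
being charged; as far as I know it's a stock property in 10.1.)

# zfs get written sysvol/home@remrep-Week16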
I can see a possible sector-size or recordsize difference affecting a few
bytes, but 400G is a bit excessive. The fact that it more closely matches
the full dataset+snapshots total (822G), IMHO, is much more telling.
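To rule out the obvious, I'd compare the block parameters on both ends,
something like (zdb syntax from memory):

# zdb -C sysvol | grep ashift
# zfs get recordsize,compression,copies sysvol/home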
# zfs get all sysvol/home | grep used
sysvol/home  used                  906G   -
sysvol/home  usedbysnapshots       8.53G  -
sysvol/home  usedbydataset         898G   -
sysvol/home  usedbychildren        0      -
sysvol/home  usedbyrefreservation  0      -
sysvol/home  logicalused           574G   -
logicalused is the actual amount of data, correct? So why is 'used' the
'full' amount, when only one snapshot was replicated?
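(If compression or copies explained the gap, the ratio properties ought
to show it; these should all exist on a stock 10.1 pool.)

# zfs get used,logicalused,compressratio,refcompressratio sysvol/home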
So I thought maybe it's just not reporting correctly:
# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
sysvol       907G  12.3G   256M  /sysvol
sysvol/home  906G  12.3G   898G  /sysvol/home
# dd bs=1M count=12560 if=/dev/zero of=test2
dd: test2: No space left on device
12558+0 records in
12557+1 records out
13167886336 bytes transferred in 33.499157 secs (393081126 bytes/sec)
# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
sysvol       919G      0  12.5G  /sysvol
sysvol/home  906G      0   898G  /sysvol/home
# dd bs=1M count=12560 if=/dev/zero of=test3
dd: test3: No space left on device
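Another cross-check would be du's apparent-size flag, which counts file
bytes rather than allocated blocks (FreeBSD's du supports -A):

# du -sh /sysvol/home
# du -sAh /sysvol/home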
So what's going on? Is this a known issue?
I suppose I could take the new server down to the colo and replicate from
the original, but that doesn't resolve the 'restore from backup' problem
that I can see happening...
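(Before hauling it down there, I could at least measure the real stream
size on the source; if this comes out near 562G, the inflation is
happening on the receive side.)

# zfs send sysvol/primessd_home@remrep-Week16 | wc -c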