Adding to a zpool -- different redundancies and risks

David Christensen dpchrist at holgerdanske.com
Sat Dec 14 07:54:43 UTC 2019


On 2019-12-13 06:49, Norman Gray wrote:
> 
> David, hello.
> 
> On 13 Dec 2019, at 4:49, David Christensen wrote:
> 
>> On 2019-12-12 04:42, Norman Gray wrote:

>>> # zpool list pool
>>> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>>> pool    98T  75.2T  22.8T        -         -    29%    76%  1.00x  ONLINE  -
>>
>> So, your pool is 75.2 TB / 77 TB = 97.7% full.
> 
> Well, I have compression turned on, so I take it that the 98TB quoted
> here is an estimate of the capacity in that case, and that the 76%
> capacity quoted in this output is the effective capacity, i.e.,
> alloc/size.
> 
> The zpool(8) manpage documents these two properties as
> 
>        alloc       Amount of storage space within the pool that has been
>                    physically allocated.
> 
>        capacity    Percentage of pool space used.  This property can also
>                    be referred to by its shortened column name, "cap".
> 
>        size        Total size of the storage pool.
> 
> The term 'physically allocated' is a bit confusing.  I'm guessing that
> it takes compression into account, rather than bytes-in-sectors.
> 
> I could be misinterpreting this output, though.

I believe the 'SIZE 98T' corresponds to eighteen 5.5 TB drives.


My bad -- I agree the 'CAP 76%' should be correct and my '97.7% full' 
calculation is wrong.
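
For the record, CAP = ALLOC / SIZE = 75.2T / 98T, or about 76.7%, which 
is consistent with the 'CAP 76%' shown above, allowing for rounding.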


I use the following command to get compression information:

     # zfs get -t filesystem compressratio | grep POOLNAME
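
If the pool name is known (I'll assume it is 'pool' here), asking for the 
property recursively avoids the grep:

     # zfs get -r -t filesystem compressratio pool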


I am still trying to understand how to reconcile 'zpool list', 'zfs 
list', etc., against df(1) and du(1).
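
My (possibly incomplete) understanding is that zpool(8) counts raw space 
across all vdevs, parity included, whereas zfs(8), df(1), and du(1) see 
usable space after parity and reservations, which is why the numbers 
never quite line up.  Running something like the following side by side, 
with a hypothetical pool named 'pool' mounted at /pool, makes the 
difference visible ('-p' prints exact byte counts rather than rounded 
units):

     # zpool list -p pool
     # zfs list -p -o name,used,avail,refer pool
     # df -k /pool
     # du -sk /pool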


>> https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
> 
> Thanks for the reminder of this.  I'm familiar with that article, and
> it's an interesting point of view.  I don't find it completely
> convincing, though: I'm not persuaded that the speed of resilvering
> fully compensates for the fact that a pool of mirrors has a less than
> 100% probability of surviving two simultaneous disk failures.

I haven't done the benchmarking to find out, but I have read similar 
assertions and recommendations elsewhere.  STFW might yield data to 
support or refute the claims.
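
To make that concrete, under the independent-failure assumption: with 
(say) eighteen drives arranged as nine two-way mirrors, once one disk 
has died, a simultaneous second failure lands on the surviving half of 
the degraded mirror with probability 1/17, or about 6%, whereas a raidz2 
vdev survives any two failures.  Whether faster resilvering outweighs 
that risk is the open question.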


> In the last couple of years I've had
> problems with water ingress over a rack, and with a failed AC which
> baked a room, so failure modes that affect multiple disks
> simultaneously are fairly prominent in my thinking about this sort of
> issue.  Independent, Poisson-style failures are not the only mode to
> worry about!

Agreed.  I am working towards implementing offsite scheduled replication.
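
As a sketch of what I have in mind (the host, pool, and snapshot names 
here are hypothetical, and a real deployment would need an incremental 
scheme and some housekeeping):

     # zfs snapshot -r pool@replica-20191214
     # zfs send -R pool@replica-20191214 | \
           ssh offsitehost zfs receive -duF backuppool

Subsequent runs would use 'zfs send -R -i' with the previous snapshot, so 
that only the changes cross the wire.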


David

