[Bug 237807] ZFS: ZVOL writes fast, ZVOL reads abysmal...

bugzilla-noreply at freebsd.org bugzilla-noreply at freebsd.org
Mon May 13 13:58:34 UTC 2019


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237807

--- Comment #7 from Nils Beyer <nbe at renzel.net> ---
(In reply to crest from comment #6)

The ZVOLs are used as ReFS-formatted (64 kB cluster size) Veeam backup
repositories, connected via iSCSI over 10 GBit Ethernet:
==============================================================================
morsleben-grube2/dshyp02-veeam  type                  volume                 -
morsleben-grube2/dshyp02-veeam  creation              Fri Mar 29 11:01 2019  -
morsleben-grube2/dshyp02-veeam  used                  40.2T                  -
morsleben-grube2/dshyp02-veeam  available             12.8T                  -
morsleben-grube2/dshyp02-veeam  referenced            30.4T                  -
morsleben-grube2/dshyp02-veeam  compressratio         1.00x                  -
morsleben-grube2/dshyp02-veeam  reservation           none                   default
morsleben-grube2/dshyp02-veeam  volsize               40T                    local
morsleben-grube2/dshyp02-veeam  volblocksize          64K                    -
morsleben-grube2/dshyp02-veeam  checksum              on                     default
morsleben-grube2/dshyp02-veeam  compression           lz4                    inherited from morsleben-grube2
morsleben-grube2/dshyp02-veeam  readonly              off                    default
morsleben-grube2/dshyp02-veeam  createtxg             76                     -
morsleben-grube2/dshyp02-veeam  copies                1                      default
morsleben-grube2/dshyp02-veeam  refreservation        40.2T                  local
morsleben-grube2/dshyp02-veeam  guid                  10198386066639651165   -
morsleben-grube2/dshyp02-veeam  primarycache          metadata               local
morsleben-grube2/dshyp02-veeam  secondarycache        none                   local
morsleben-grube2/dshyp02-veeam  usedbysnapshots       0                      -
morsleben-grube2/dshyp02-veeam  usedbydataset         30.4T                  -
morsleben-grube2/dshyp02-veeam  usedbychildren        0                      -
morsleben-grube2/dshyp02-veeam  usedbyrefreservation  9.78T                  -
morsleben-grube2/dshyp02-veeam  logbias               latency                default
morsleben-grube2/dshyp02-veeam  dedup                 off                    default
morsleben-grube2/dshyp02-veeam  mlslabel                                     -
morsleben-grube2/dshyp02-veeam  sync                  standard               default
morsleben-grube2/dshyp02-veeam  refcompressratio      1.00x                  -
morsleben-grube2/dshyp02-veeam  written               30.4T                  -
morsleben-grube2/dshyp02-veeam  logicalused           30.5T                  -
morsleben-grube2/dshyp02-veeam  logicalreferenced     30.5T                  -
morsleben-grube2/dshyp02-veeam  volmode               default                default
morsleben-grube2/dshyp02-veeam  snapshot_limit        none                   default
morsleben-grube2/dshyp02-veeam  snapshot_count        none                   default
morsleben-grube2/dshyp02-veeam  redundant_metadata    all                    default
==============================================================================
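One detail worth flagging in the output above: primarycache=metadata means zvol data blocks are never cached in ARC, so every data read has to go to disk. A quick way to pull out just the cache-related settings (sketched here against the pasted report; on the live system one would run `zfs get primarycache,secondarycache` against the dataset directly):

```shell
# Extract the cache-related properties from the `zfs get` output above.
# On the live system: zfs get -H primarycache,secondarycache <dataset>
# The heredoc simply replays the two relevant lines from this report.
zfs_output=$(cat <<'EOF'
morsleben-grube2/dshyp02-veeam  primarycache    metadata  local
morsleben-grube2/dshyp02-veeam  secondarycache  none      local
EOF
)
# Fields are: name, property, value, source -- print property=value pairs.
printf '%s\n' "$zfs_output" | awk '{print $2 "=" $3}'
# → primarycache=metadata
# → secondarycache=none
```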



Using CTL as the iSCSI target (4k blocksize) for the Windows Server 2016 hosts:
==============================================================================
        lun 0 {
                path /dev/zvol/morsleben-grube2/dshyp02-veeam
                blocksize 4k
        }
==============================================================================
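To rule out iSCSI/CTL as the bottleneck, one could compare against a raw sequential read of the zvol device on the storage host itself. A minimal sketch (a temp file stands in for the zvol device so the commands are safe to copy; on the real system DEV would be /dev/zvol/morsleben-grube2/dshyp02-veeam):

```shell
# Local sequential-read check; DEV would be the zvol device on the
# real system -- a temp file stands in here for illustration.
DEV=$(mktemp)
dd if=/dev/zero of="$DEV" bs=1048576 count=64 2>/dev/null  # 64 MiB of test data
# Sequential read pass; dd's summary line reports the throughput.
dd if="$DEV" of=/dev/null bs=1048576 2>&1 | tail -1
rm -f "$DEV"
```

If the local read is fast but the iSCSI read is not, the problem is in the transport path rather than in ZFS itself.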



Nothing else besides "ctld" is running on the storage system...



More information about the freebsd-fs mailing list