Different size after zfs send/receive

Mark Saad nonesuch at longcount.org
Thu May 18 21:53:29 UTC 2017


Hi kc,
  This has to do with how data blocks are stored on a raidzN; moving them to a mirror removes the parity blocks. This is way oversimplified, but imagine you store a 10 GB file on a raidz1. The system splits the file into smaller chunks of, say, 1 MB, and stores one extra parity chunk for each stripe of chunks written across the raidz1. Those parity chunks count toward the space the dataset uses. Storing on a mirror just writes each chunk once to each disk, but since a mirror only presents half the raw disk space, you never see the duplicate copies in the USED field.
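As a rough back-of-the-envelope illustration of that model (it ignores recordsize rounding, padding, compression, and metadata, so the real numbers will differ):

   10 GB file on a 4-disk raidz1:
     each stripe = 3 data chunks + 1 parity chunk
     space charged to the dataset = 10 GB * 4/3 = ~13.3 GB

   10 GB file on a 2-disk mirror:
     raw space written = 10 GB * 2 = 20 GB
     space charged to the dataset = ~10 GB (each block counted once)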

Hope this helps.
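
P.S. If you want to compare the pools without the allocation overhead, the logicalused property reports the size of the data before compression and before raidz parity and padding are charged, so it should be nearly identical across all four pools. Assuming your ZFS version has that property, something like:

   zfs list -o name,used,logicalused,refer storage/datas/ISO b1/datas/ISO b2/datas/ISO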

---
Mark Saad | nonesuch at longcount.org

> On May 18, 2017, at 3:36 PM, kc atgb <kisscoolandthegangbang at hotmail.fr> wrote:
> 
> Hi,
> 
> A few days ago I needed to back up my current pool and restore it after destroying and recreating the pool. 
> 
> The pool in my home server is a raidz1 with 4 disks. To back up this pool I grabbed two 4 TB disks (as single-disk pools) to have a double backup (I have
> just one SATA port left that I can use to plug in a disk). 
> 
> The whole backup and restore process went well, as far as I can tell. But looking at the sizes reported by zfs list made me a little curious. 
> 
> NAME                                                                     USED         AVAIL           REFER        MOUNTPOINT
> 
> (initial raidz1, before destroy/recreate)
> storage/datas/ISO                                                        35420869824  381747995136    35420726976  /datas/ISO
> storage/datas/ISO at backup_send                                                 142848             -    35420726976  -
> storage/datas/ISO at backup_sync                                                      0             -    35420726976  -
> 
> b1/datas/ISO                                                        35439308800  2176300351488    35439210496  /datas/ISO
> b1/datas/ISO at backup_send                                                  98304              -    35439210496  -
> b1/datas/ISO at backup_sync                                                      0              -    35439210496  -
> 
> b2/datas/ISO                                                        35439308800  2176298991616    35439210496  /datas/ISO
> b2/datas/ISO at backup_send                                                  98304              -    35439210496  -
> b2/datas/ISO at backup_sync                                                      0              -    35439210496  -
> 
> (new raidz1, after restore)
> storage/datas/ISO                                                        35421024576  381303470016    35420715072  /datas/ISO
> storage/datas/ISO at backup_send                                                 142848             -    35420715072  -
> storage/datas/ISO at backup_sync                                                  11904             -    35420715072  -
> 
> 
> (initial raidz1, before destroy/recreate)
> storage/usrobj                                                            5819085888  381747995136     5816276544  legacy
> storage/usrobj at create                                                         166656             -         214272  -
> storage/usrobj at backup_send                                                   2642688             -     5816228928  -
> storage/usrobj at backup_sync                                                         0             -     5816276544  -
> 
> b1/usrobj                                                            5675081728  2176300351488     5673222144  legacy
> b1/usrobj at create                                                         114688              -         147456  -
> b1/usrobj at backup_send                                                   1744896              -     5673222144  -
> b1/usrobj at backup_sync                                                         0              -     5673222144  -
> 
> b2/usrobj                                                            5675188224  2176298991616     5673328640  legacy
> b2/usrobj at create                                                         114688              -         147456  -
> b2/usrobj at backup_send                                                   1744896              -     5673328640  -
> b2/usrobj at backup_sync                                                         0              -     5673328640  -
> 
> (new raidz1, after restore)
> storage/usrobj                                                            5820359616  381303470016     5815098048  legacy
> storage/usrobj at create                                                         166656             -         214272  -
> storage/usrobj at backup_send                                                   2535552             -     5815098048  -
> storage/usrobj at backup_sync                                                     11904             -     5815098048  -
> 
> As you can see, the numbers in the USED column are different for each pool (the initial raidz1, the backup1 disk, the backup2 disk, and the new raidz1).
> Nearly all my datasets are in the same situation (those with fixed data that have not changed between the beginning of the process and now). backup1 and
> backup2 are identical disks with exactly the same configuration, yet they show different numbers. I used the same commands for all my transfers, except for
> the name of the destination pool. 
> 
> So, I wonder what can cause these differences? Is it something I have to worry about? Can I consider this normal behavior? 
> 
> Thanks for your enlightenment,
> K.

