Regarding regular zfs

Joar Jegleim joar.jegleim at gmail.com
Mon Apr 8 07:18:52 UTC 2013


The rsync was running from the live system.
As I wrote earlier, the problem seems to occur only while the backup server
is rsync'ing from the slave (the zfs receiving side), so I was actually trying
to figure out whether this is to be expected (as in zfs sync, where the
receiving end gets a diff and rolls 'back' to version = latest snapshot from
the 'master') with a setup of over 1 TB of data and over 2 million files.
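As a minimal sketch of the rsync-from-snapshot approach (dataset paths here are hypothetical; snapshots created by zfs receive stay frozen and read-only under the dataset's .zfs/snapshot directory, so rsync can read a consistent tree even while the live filesystem is being updated):

```shell
# Sketch only: /tank/data and the backup destination are illustrative names.
# Print the path of the newest snapshot of a mounted ZFS dataset, suitable
# as an rsync source instead of the live (changing) filesystem.
snap_source() {
    local ds="$1"
    local snap
    # .zfs/snapshot lists one directory per snapshot; take the last entry.
    snap=$(ls -1 "$ds/.zfs/snapshot" | tail -n 1)
    printf '%s/.zfs/snapshot/%s/\n' "$ds" "$snap"
}

# Usage (illustrative):
#   rsync -a "$(snap_source /tank/data)" backup:/backups/data/
```

Note that `ls -1 | tail -n 1` picks the lexically last snapshot name, which matches creation order only if snapshot names sort chronologically (e.g. timestamped names).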




On 5 April 2013 16:07, Ronald Klop <ronald-freebsd8 at klop.yi.org> wrote:

> On Fri, 05 Apr 2013 15:02:12 +0200, Joar Jegleim <joar.jegleim at gmail.com>
> wrote:
>
>> You make some interesting points.
>> I don't _think_ the script causes more than 1 zfs write at a time, and I'm
>> sure 'nothing else' is doing that either. But I'm going to check that out
>> because it does sound like a logical explanation.
>> I'm wondering if the rsync from the receiving server (that is: the backup
>> server doing rsync from the zfs receive server) could cause the same
>> problem; it's only reading, though...
>>
>>
>>
>>
> Do you run the rsync from a snapshot or from the 'live' filesystem? The
> live one changes during zfs receive. I don't know if that has anything to
> do with your problem, but rsync from a snapshot gives a consistent backup
> anyway.
>
> BTW: It is probably simpler for you to test whether the rsync is related
> to the problem than for other people to theorize about it here.
>
> Ronald.
>



-- 
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode

----------------------