ZFS on 10-STABLE r281159: programs accessing ZFS pause for minutes in state [*kmem arena]

Steven Hartland killing at multiplay.co.uk
Thu Jul 30 11:49:27 UTC 2015


On 30/07/2015 12:30, Lev Serebryakov wrote:
> Hello Freebsd-fs,
>
>
>    I'm migrating my NAS from geom_raid5 + UFS to ZFS raidz. My main storage
> is 5x2TB HDDs. Additionally, I have 2x3TB HDDs attached to hold my data while
> I re-make my main storage.
>
>   So, I have now two ZFS pools:
>
> ztemp mirror ada0 ada1 [both are 3TB HDDs]
> zstor raidz ada3 ada4 ada5 ada6 ada7 [all of them are 2TB]
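(Side note for readers: pools laid out like this would typically have been
created with something along the lines of the commands below; the device names
come from the listing above, and any extra options such as ashift are left out
as an assumption.)

    zpool create ztemp mirror ada0 ada1
    zpool create zstor raidz ada3 ada4 ada5 ada6 ada7
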
>
>   ztemp contains one filesystem with 2.1TB of my data. ztemp was populated
> with my data from the old geom_raid5 + UFS installation via "rsync", and it
> was FAST (HDD speed).
>
>   zstor contains several empty file systems (one per user), like:
>
> zstor/home/lev
> zstor/home/sveta
> zstor/home/nsvn
> zstor/home/torrents
> zstor/home/storage
>
>   Deduplication IS TURNED OFF. atime is turned off. Record size is set to 1M
> as I have a lot of big files (movies, RAW photos from a DSLR, etc.).
> Compression is turned off.
You don't need to do that: recordsize is a maximum block size, not a minimum, 
so even if you don't force it, large files will still be stored efficiently.
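A minimal sketch of checking and adjusting it, in case it helps; the dataset
name is taken from the listing above and the values are only illustrative:

    # show the current (possibly inherited) recordsize
    zfs get recordsize zstor/home/storage
    # revert to the 128K default, or set 1M explicitly if you prefer
    zfs inherit recordsize zstor/home/storage
    zfs set recordsize=1M zstor/home/storage

Keep in mind recordsize only applies to blocks written after the change;
existing files keep the block size they were written with.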
>   When I try to copy all my data from temporary HDDs (ztemp pool) to my new
> shiny RAID (zstor pool) with
>
> cd /ztemp/fs && rsync -avH lev sveta nsvn storage /usr/home/
>
>   rsync pauses for tens of minutes (!) after several hundred files. ^T and
> top show state "[*kmem arena]". When I stop rsync with ^C and try to run
> "zfs list", it waits forever, in state "[*kmem arena]" again.
>
>   This server is equipped with 6GiB of RAM.
>
>   It looks like FreeBSD contained a bug about a year ago which led to this
> behavior, but the mailing lists say that it was fixed in r272221, 10 months ago.
When this happens, what is the state of memory on the machine?

Top will give a good overview, while sysctl vm.stats.vm and vmstat -z 
will provide some detail.
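
A rough sketch of the sort of output worth collecting, using only base-system
tools (the pgrep lookup of the rsync PID is just for illustration):

    # overall memory picture and current ARC size
    sysctl vm.stats.vm
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
    # per-zone kernel memory usage
    vmstat -z
    # kernel stack of the stuck process, to see what it sleeps on
    procstat -kk $(pgrep -x rsync)

procstat -kk isn't strictly needed, but the kernel stack usually makes it
obvious whether the process really is waiting for kmem arena space.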

If you're seeing significant memory pressure, which could well be the 
case with a mixed ZFS/UFS system during this transfer (they use 
competing memory resource pools), then you could try limiting the ARC via 
vfs.zfs.arc_max.
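
A minimal sketch of doing that; 2G is only an example value, pick something
that leaves headroom for UFS buffers and userland on a 6GiB box:

    # /boot/loader.conf -- takes effect at the next boot
    vfs.zfs.arc_max="2G"

If I remember right, recent 10-STABLE also lets you lower it at runtime with
sysctl vfs.zfs.arc_max=<bytes>; if your revision doesn't, the loader.conf
route plus a reboot is the safe option.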

You could also see if the patch on 
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594 helps.

     Regards
     Steve
