[Fwd: Re: Large ZFS arrays?]

Tom Evans tevans.uk at googlemail.com
Tue Jun 17 22:07:13 UTC 2014


On Tue, Jun 17, 2014 at 4:47 PM, Dennis Glatting <dg at pki2.com> wrote:
> On Sun, 2014-06-15 at 11:00 -0500, Kevin Day wrote:
>> 4) “zfs destroy” can be excruciatingly expensive on large datasets.
>> http://blog.delphix.com/matt/2012/07/11/performance-of-zfs-destroy/
>> It’s a bit better now, but don’t assume you can “zfs destroy” without
>> killing performance to everything.
>>
>
> Is that still a problem? Both FreeBSD and ZFS-on-Linux had a significant
> problem with destroy, but I am under the impression that it is now
> backgrounded on FreeBSD (ZoL, however, destroyed the pool with dedup
> data). It's been several months since I deleted TBs of files, but I
> seem to recall that non-dedup is now fine while dedup will forever suck.
>
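
For what it's worth, the backgrounding mentioned above is (as far as I
know) the async_destroy pool feature: on pools where it's enabled, a
large destroy returns quickly and the freed space is reclaimed in the
background. You can check for it and watch the backlog drain with
something like this (the pool name "tank" is illustrative):

    zpool get feature@async_destroy tank   # enabled/active on this pool?
    zpool get freeing tank                 # bytes still queued for release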

I had a 9-stable (9.1ish) box that I was migrating data from to a
newer box, and got caught out by zfs destroy requiring an insane
amount of memory. I idly ran "zfs destroy" on a 5 TB filesystem to
clean up some free space, and the box churned to a near halt as it
exhausted all memory before finally giving out and panicking.

I had to transfer the disks to the new host, boot from my rescue USB
stick, force-import the pool and let the destroy proceed. It still
used *all* the memory on the newer box, but whenever it got to the
point of running out it would force a cleanup and start over. I
can't imagine how deadly that would be to regular processes.
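
For anyone who ends up in the same spot, the recovery amounted to
roughly this from the rescue environment (pool name illustrative
again):

    zpool import           # scan the attached disks for importable pools
    zpool import -f tank   # force it; the old host never cleanly exported
    # the interrupted destroy resumes on its own once the pool is imported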

Still, it showed me that moving the disks first and the data second
was quicker than trying to move the data over the network, and now I
arrange my file systems a little more prudently!
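
For reference, the network route I would otherwise have taken is a
plain recursive send/receive over ssh, something like this (snapshot,
pool and host names are made up):

    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | ssh newhost zfs receive -F newtank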

Cheers

Tom

