zfs send/receive: is this slow?
Artem Belevich
fbsdlist at src.cx
Mon Oct 4 02:06:21 UTC 2010
On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille <dan at langille.org> wrote:
> I'm rerunning my test after I had a drive go offline[1]. But I'm not
> getting anything like the previous test:
>
> time zfs send storage/bacula at transfer | mbuffer | zfs receive
> storage/compressed/bacula-buffer
>
> $ zpool iostat 10 10
>                 capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     6.83T  5.86T      8     31  1.00M  2.11M
> storage     6.83T  5.86T    207    481  25.7M  17.8M
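As an aside on the quoted pipeline: mbuffer is usually run with explicit block-size and memory options so it actually smooths out bursts. A sketch, using the pool and snapshot names from the quote above; the 128k/1G buffer sizes are illustrative assumptions, not values from the original post:

```shell
# Illustrative only: 128 kB blocks, 1 GB in-memory buffer.
# Pool/snapshot names are the ones from the quoted command.
zfs send storage/bacula@transfer \
  | mbuffer -s 128k -m 1G \
  | zfs receive storage/compressed/bacula-buffer
```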
It may be worth checking individual disk activity using gstat -f 'da.$'.
Some time back I had one drive that was noticeably slower than the
rest of the drives in a RAID-Z2 vdev and was holding everything back.
SMART looked OK and there were no obvious errors, yet performance was
much worse than what I'd expect. gstat clearly showed that one drive
was almost constantly busy, with a much lower number of reads and writes
per second than its peers.
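The "one busy drive" pattern is easy to spot by eye in gstat, but it can also be picked out mechanically. A minimal sketch, assuming you have already massaged gstat's batch output into "device %busy" pairs (the helper name and input format are hypothetical, not from gstat itself):

```shell
#!/bin/sh
# Hypothetical helper: feed it "device %busy" pairs (e.g. extracted
# from `gstat -b` output) and it prints the busiest device -- the
# likely bottleneck when its peers in the vdev are comparatively idle.
busiest() {
    sort -k2 -rn | head -n 1 | awk '{print $1}'
}

printf '%s\n' 'da0 22' 'da1 97' 'da2 25' 'da3 21' | busiest
# prints: da1
```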
Perhaps the previously fast transfer rates were due to caching effects.
I.e., if all the metadata had already made it into the ARC, subsequent
"zfs send" commands would avoid a lot of random seeks and would show
much better throughput.
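Whether the ARC is doing the work can be checked from its counters. On FreeBSD these are exposed as sysctls under kstat.zfs.misc.arcstats; a sketch that computes the overall hit ratio (a high ratio on a repeated "zfs send" would be consistent with the caching explanation):

```shell
#!/bin/sh
# Sketch: read ARC hit/miss counters (FreeBSD sysctls) and print the
# overall hit ratio. A high ratio suggests metadata is already cached.
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
awk -v h="$hits" -v m="$misses" \
    'BEGIN { printf "ARC hit ratio: %.1f%%\n", 100 * h / (h + m) }'
```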
--Artem
More information about the freebsd-stable mailing list