Strange ZFS performance
morganw at chemikals.org
Mon Apr 5 10:59:52 UTC 2010
On Mon, 5 Apr 2010, Mikle Krutov wrote:
> On Sun, Apr 04, 2010 at 10:08:21PM -0500, Wes Morgan wrote:
> > On Sun, 4 Apr 2010, Mikle wrote:
> > > Hello, list! I've got some strange problem with one-disk zfs-pool:
> > > read/write performance for the files on the fs (dd if=/dev/zero
> > > of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading
> > > from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me
> > > ~70MB/s. pool is about 80% full; PC with the pool has 2GB of ram (1.5 of
> > > which is free); i've done no tuning in loader.conf and sysctl.conf for
> > > zfs. In dmesg there is no error-messages related to the disk (dmesg|grep
> > > ^ad12); s.m.a.r.t. seems OK. Some time ago disk was OK, nothing in
> > > software/hardware has changed from that day. Any ideas what could have
> > > happen to the disk?
> > Has it ever been close to 100% full? How long has it been 80% full and
> > what kind of files are on it, size wise?
> No, it was never full. It is at 80% for about a week maybe. Most of the files are the video of the 200MB - 1.5GB size per file.
I'm wondering if your pool is fragmented. What does gstat or iostat -x
output for the device look like when you're accessing the raw device
versus going through the filesystem? A very interesting experiment (to
me) would be to try these things:
1) using dd to replicate the disk to another disk, block for block
2) zfs send to a newly created, empty pool (could take a while!)
Then, without rebooting, compare the performance of the "new" pools. For
#1 you would need to export the pool first and detach the original device
before importing the duplicate.
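A rough sketch of the monitoring and the two experiments, assuming the pool is named tank, the original disk is ad12, and a spare disk ad14 is available (all of those names, and the mount point, are placeholders to adjust for your system):

```shell
# In a separate terminal, watch per-device activity while each dd runs.
# 1-second samples for the assumed device ad12:
iostat -x -w 1 ad12        # or the GEOM view: gstat -f 'ad12'

# Raw-device read vs. filesystem write, as in the original test:
dd if=/dev/ad12 of=/dev/null bs=4m count=100
dd if=/dev/zero of=/tank/testfile bs=4m count=100

# Experiment 1: block-for-block copy to the spare disk, then swap it in.
# The pool must be exported first, and the original device detached,
# before importing the duplicate.
zpool export tank
dd if=/dev/ad12 of=/dev/ad14 bs=1m
# (detach or offline ad12 here, then:)
zpool import tank

# Experiment 2: replicate into a freshly created, empty pool
# (this is the step that could take a while):
zpool create newtank ad14
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -dF newtank
```

Since experiment 1 preserves the on-disk block layout exactly while experiment 2 rewrites everything sequentially, comparing the two afterwards should show whether layout (fragmentation) is what's costing you the throughput.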
There might be a script out there somewhere to parse the output from zdb
and turn it into a block map to identify fragmentation, but I'm not aware
of one. If you do find that fragmentation is the culprit, currently the
only fix is to rebuild the pool.
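For what it's worth, a crude starting point for such a script might look like the following. The dataset name tank/video is an assumption, and zdb's output format varies between versions, so treat this strictly as a sketch:

```shell
# Dump block pointers for the dataset at maximum verbosity and pull out
# the DVA entries, which encode <vdev:offset:size> for each block.
# Consecutive blocks of a single large file landing at widely scattered
# offsets would suggest fragmentation.
zdb -ddddd tank/video | grep 'DVA\[0\]' | head -50
```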
More information about the freebsd-fs mailing list