ZFS + NFS poor performance after restarting from 100 day uptime
Steven Hartland
killing at multiplay.co.uk
Thu Mar 21 16:14:50 UTC 2013
----- Original Message -----
From: "Josh Beard" <josh at signalboxes.net>
To: <freebsd-fs at freebsd.org>
Sent: Thursday, March 21, 2013 3:53 PM
Subject: ZFS + NFS poor performance after restarting from 100 day uptime
> Hello,
>
> I have a system with 12 disks spread between 2 raidz1 vdevs. I'm using
> the native ("new") NFS server to export a pool on this. It has worked
> very well all along, but since a reboot it has performed horribly -
> unusably so under load.
>
> The system was running 9.1-RC3 and I upgraded it to 9.1-RELEASE-p1
> (GENERIC kernel) after ~110 days of uptime (with zero performance
> issues). After rebooting from the upgrade, I'm finding the disks
> constantly slammed: gstat reports 90-100% busy most of the day with
> only ~100-130 ops/s.
>
> I didn't change any settings in /etc/sysctl.conf or /boot/loader.conf.
> No ZFS tuning, etc. I've looked at the commits between 9.1-RC3 and
> 9.1-RELEASE-p1 and I can't see any reason why simply upgrading would
> cause this.
...
> A snip of gstat:
> dT: 1.002s  w: 1.000s
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0      0      0      0    0.0      0      0    0.0    0.0| cd0
>     0      1      0      0    0.0      1     32    0.2    0.0| da0
>     0      0      0      0    0.0      0      0    0.0    0.0| da0p1
>     0      1      0      0    0.0      1     32    0.2    0.0| da0p2
>     0      0      0      0    0.0      0      0    0.0    0.0| da0p3
>     4    160    126   1319   31.3     34    100    0.1  100.3| da1
>     4    146    110   1289   33.6     36     98    0.1   97.8| da2
>     4    142    107   1370   36.1     35    101    0.2  101.9| da3
>     4    121     95   1360   35.6     26     19    0.1   95.9| da4
>     4    151    117   1409   34.0     34    102    0.1  100.1| da5
>     4    141    109   1366   35.9     32    101    0.1   97.9| da6
>     4    136    118   1207   24.6     18     13    0.1   87.0| da7
>     4    118    102   1278   32.2     16     12    0.1   89.8| da8
>     4    138    116   1240   33.4     22     55    0.1  100.0| da9
>     4    133    117   1269   27.8     16     13    0.1   86.5| da10
>     4    121    102   1302   53.1     19     51    0.1  100.0| da11
>     4    120     99   1242   40.7     21     51    0.1   99.7| da12
Your ops/s may be maxing out your disks. You say "only", but ~190 ops/s
is about what HDs will peak at, so whatever your machine is doing is
saturating the available IO on your disks.
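For context, here is a hypothetical back-of-the-envelope estimate of a
spinning disk's random-IO ceiling (the seek times below are illustrative
assumptions, not measurements from your system): each random op costs
roughly one seek plus half a rotation, so in Python:

    # Hypothetical IOPS ceiling for a 7200 RPM disk; all figures assumed.
    RPM = 7200
    ROT_LATENCY_MS = 60000.0 / RPM / 2  # half a revolution on average, ~4.2 ms

    def iops_ceiling(avg_seek_ms):
        """Approximate random IOPS: one seek plus half a rotation per op."""
        return 1000.0 / (avg_seek_ms + ROT_LATENCY_MS)

    # Long random seeks vs. short, queue-sorted seeks (NCQ reordering helps):
    for seek_ms in (8.5, 4.0, 1.5):
        print("avg seek %4.1f ms -> ~%3.0f IOPS" % (seek_ms, iops_ceiling(seek_ms)))

    # avg seek  8.5 ms -> ~ 79 IOPS
    # avg seek  4.0 ms -> ~122 IOPS
    # avg seek  1.5 ms -> ~176 IOPS

With r/s around 95-126 at 24-53 ms per read, your spindles look
seek-bound, which points at a random-read-heavy workload rather than a
failing disk.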
If you boot back to your previous kernel, does the problem go away?
If so, you could look at the changes between the two kernel revisions
for possible causes and, if needed, do a binary chop with kernel builds
to narrow down the cause.
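The binary chop is just a binary search over the revision range; a
minimal sketch in Python, assuming a build_and_test() hook you would
fill in yourself (check out the revision with svn, build and install
the kernel, reboot, benchmark):

    def build_and_test(revision):
        """Return True if the kernel at this revision performs acceptably.
        Placeholder: check out the rev, build, boot, and measure here."""
        raise NotImplementedError

    def bisect(good_rev, bad_rev):
        """Narrow (good_rev, bad_rev] down to the first regressed revision."""
        while bad_rev - good_rev > 1:
            mid = (good_rev + bad_rev) // 2
            if build_and_test(mid):
                good_rev = mid  # still fast: the regression landed later
            else:
                bad_rev = mid   # slow: the regression is at or before mid
        return bad_rev

Each halving costs one build/boot/benchmark cycle, so even a few
hundred revisions take only ~8-9 iterations.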
Regards
Steve