Imposing ZFS latency limits

Dustin Wenz dustinwenz at ebureau.com
Thu Oct 18 21:39:53 UTC 2012


The way I can usually detect this is by watching the operation queues with gstat. If a disk is running slower than the others, I/O ops tend to pile up. When that happens, I can restore performance by taking the disk offline. It's a manual process; I think the filesystem should do better than that.
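
Roughly, the by-hand version looks like this (the pool and device names
below are just placeholders for whatever your setup uses):

    # watch the per-provider queue length (L(q)) and per-op latency columns
    gstat

    # once one disk is clearly lagging its peers, pull it out of the pool
    zpool offline tank da6

    # bring it back (or replace it) after the disk has been dealt with
    zpool online tank da6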

	- .Dustin

On Oct 17, 2012, at 8:38 AM, Steven Hartland <killing at multiplay.co.uk> wrote:

> ----- Original Message -----
> From: "Mark Felder" <feld at feld.me>
>> On Tue, 16 Oct 2012 06:25:57 -0500, Olivier Smedts <olivier at gid0.org> wrote:
>>> 
>>> That would be great - no need for TLER drives. But if you want to
>>> "drop" the drive from the bus, that would be a GEOM thing. Don't know
>>> if that's possible to implement.
>> This would be GREATLY appreciated. I've seen this happen on my own ZFS boxes as well as on a custom-made SAN. It's painful but easy to detect when you notice the symptoms...
> 
> Interesting, what metrics were you using that made it easy to detect?
> Would be nice to know your process there, Mark.
> 
>   Regards
>   Steve
> 


