possible zfs bug? lost all pools

JoaoBR joao at matik.com.br
Sat Jun 28 04:18:42 UTC 2008


On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:

...


>>> and if necessary /etc/rc.d/zfs should start hostid, or at least set its 
>>> REQUIRE line differently and warn

...

>>
>> I've been in the same boat you are, and I was told the same thing.  I've
>> documented the situation on my Wiki, and the necessary workarounds.
>>
>> http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issue

> so I changed the rcorder as you can see in the attached files

> http://suporte.matik.com.br/jm/zfs.rcfiles.tar.gz


I'm coming back to this because I am more convinced by ZFS every day, and 
I'd like to express my gratitude not only to those who made ZFS but also, and 
especially, to the people who brought it to FreeBSD - and: thank you guys for 
making it public, this is really a step forward!

My ZFS-related rc file changes (above) made my problems go away, and I'd like 
to share some other experience here.
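For reference, the essential part of those rc file changes is the rcorder 
header of /etc/rc.d/zfs: hostid has to be listed in REQUIRE so the host ID 
is set before any pool import is attempted. A rough sketch only - the exact 
dependency list may differ from what is in the tarball above:

    # rcorder header of /etc/rc.d/zfs (sketch)
    # PROVIDE: zfs
    # REQUIRE: hostid mountcritlocal

    # verify the resulting boot order:
    rcorder /etc/rc.d/* /usr/local/etc/rc.d/* | grep -nE 'hostid|zfs'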

As explained on Jeremy's page, I had similar problems with ZFS, but it seems 
I could work around them by setting the vm.kmem_size* tunables to 500M, 1000M, 
or 1500M, depending on the machine's load. The main problem on FreeBSD, 
though, seems to be the ZFS recordsize: on UFS-like partitions I set it to 
64k and never got panics any more, even with several zpools (which, as said, 
is supposedly dangerous). cache_dirs for Squid or MySQL partitions might need 
lower values to reach their new and impressive peaks.
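For the record, vm.kmem_size and vm.kmem_size_max are loader tunables, so 
they go into /boot/loader.conf and need a reboot; the values below are just 
one of the three settings I mention above, as an illustration:

    # /boot/loader.conf - ZFS kernel memory tuning (illustrative values)
    vm.kmem_size="1024M"
    vm.kmem_size_max="1024M"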

This even seems to solve the panics when copying large files from NFS or UFS 
to or from ZFS ...

So it seems that FreeBSD does not like recordsize > 64k ...
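In practice that means setting the property before loading data, since 
recordsize only applies to newly written files; the pool and dataset names 
here are just placeholders:

    # keep general-purpose (UFS-like) datasets at or below 64k
    zfs set recordsize=64k tank/data
    zfs get recordsize tank/data      # verify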

I now have a mail server that has been running for almost two months with N 
ZFS filesystems (one per user) to simulate quotas (roughly 1000 users), 
completely stable, and performance is outstanding under all loads.

The web server (Apache/PHP/MySQL) gave me major stability problems, but 
distributing the data across zpools with recordsizes chosen per workload, and 
never >64k, solved my problems and I am apparently panic-free now.
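As an illustration, with hypothetical dataset names; the 16k for MySQL is my 
own choice because it matches the InnoDB page size, not something measured in 
this thread:

    # one dataset per workload, recordsize never above 64k
    zfs create -o recordsize=64k tank/www       # apache/php document roots
    zfs create -o recordsize=16k tank/mysql     # InnoDB uses 16k pages
    zfs create -o recordsize=32k tank/squid     # cache_dir; tune lower per load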

I run almost SCSI-only; only my test machines are SATA. The lowest 
configuration is an X2 with 4G, the rest are X4s or Opterons with 8G or more, 
and I am extremely satisfied and happy with ZFS.

My backups run twice as fast as on UFS, mirroring is incredibly fast compared 
to gmirror, and the ZFS snapshot feature deserves an Oscar! ... and zfs 
send|receive another.
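For anyone who has not tried it, the snapshot and send|receive combination 
looks like this (host and dataset names are placeholders):

    # instant, cheap snapshot
    zfs snapshot tank/home@2008-06-28
    # replicate it to a backup machine over ssh
    zfs send tank/home@2008-06-28 | ssh backuphost zfs receive backup/home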

So thank you to everyone who had a hand in ZFS! (Sometimes I press reset on 
my home server just to see how fast it comes back up.) ... just kidding, but 
it's true: thanks again! ZFS is thE fs.


-- 

João








