zfs, raidz, spare and jbod
Kris Kennaway
kris at FreeBSD.org
Fri Jul 25 09:18:41 UTC 2008
Claus Guttesen wrote:
> Hi.
>
> I installed FreeBSD 7 a few days ago and upgraded to the latest
> stable release using the GENERIC kernel. I also added these entries
> to /boot/loader.conf:
>
> vm.kmem_size="1536M"
> vm.kmem_size_max="1536M"
> vfs.zfs.prefetch_disable=1
>
> Initially prefetch was enabled and I would experience hangs, but
> after disabling prefetch, copying large amounts of data went along
> without problems. To see whether FreeBSD 8 (current) had better
> (copy) performance I upgraded to current as of yesterday. After
> upgrading and rebooting, the server responded fine.
>
> The server is a Supermicro with a quad-core Harpertown E5405, two
> internal SATA drives and 8 GB of RAM. I installed an Areca ARC-1680
> SAS controller and configured it in JBOD mode. I attached an external
> SAS cabinet with 16 SAS disks of 1 TB (931 GiB) each.
>
> I created a raidz2 pool with 10 disks and added one spare. I copied
> approx. 1 TB of small files (each approx. 1 MB), and during the copy
> I simulated a disk crash by pulling one of the disks out of the
> cabinet. ZFS did not activate the spare, and the copying stalled; I
> rebooted after 5-10 minutes. A 'zpool status' command would not
> complete. I did not see any messages in /var/log/messages. The state
> column in top showed 'ufs-'.
That means that it was UFS that hung, not ZFS. What was the process
backtrace, and what role does UFS play on this system?
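[For reference, a minimal sketch of how such a backtrace can be
gathered on a live 8-current box with procstat; the PID and the
process names grepped for below are placeholders for the actual hung
process:]

```shell
# Find the stuck process (here assumed to be cp or zpool).
ps -axl | grep -E 'cp|zpool'

# Dump its in-kernel thread stacks; 1234 is a placeholder PID.
procstat -kk 1234
```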
Kris
> A similar test on Solaris Express Developer Edition b79 activated the
> spare after ZFS had tried to write to the missing disk enough times
> and then marked it as faulted. Has anyone else tried to simulate a
> disk crash in raidz(2) and succeeded?
>
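[For anyone wanting to reproduce this test, a rough sketch of the
setup described above; the da0-da10 device names are hypothetical, and
'zpool offline -t' only approximates a physical pull:]

```shell
# Create a 10-disk raidz2 pool with one hot spare
# (hypothetical device names; adjust to the actual da(4) units).
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    spare da10

# Software-only approximation of pulling a disk:
zpool offline -t tank da3

# The spare should then resilver in; verify with:
zpool status tank
```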
More information about the freebsd-stable mailing list