ZFS on Hardware RAID
andy thomas
andy at time-domain.co.uk
Tue Jan 22 11:19:35 UTC 2019
On Sun, 20 Jan 2019, Ireneusz Pluta wrote:
> On 2019-01-20 at 09:45, andy thomas wrote:
>> I run a number of very busy webservers (Dell PowerEdge 2950 with LSI
>> MegaRAID SAS 1078 controllers) with the first two disks in RAID 1 as the
>> FreeBSD system disk and the remaining 4 disks configured as RAID 0 virtual
>> disks making up a ZFS RAIDz1 pool with 3 disks plus one hot spare.
> In this configuration, have you ever made a test of causing a drive failure,
> to see the hot spare activated?
Yesterday I set up a spare Dell 2950 with Perc 5/i Integrated HBA and six
73 GB SAS disks, with the first two disks configured as a RAID 1 system
disk (/dev/mfid0) and the remaining 4 disks as RAID 0 (mfid1-mfid4).
After adding a freebsd-zfs GPT partition to each of these 4 disks I then
created a RAIDz1 pool using mfid1p1, mfid2p1 and mfid3p1 with mfid4p1 as a
spare, followed by creating a simple ZFS filesystem.
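The layout described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the exact commands used; the pool name 'tank' and the gpart alignment are my assumptions:

```shell
# Put a single freebsd-zfs GPT partition on each RAID 0 virtual disk
# (repeat for mfid2, mfid3 and mfid4; device names come from the mfi(4) driver)
gpart create -s gpt mfid1
gpart add -t freebsd-zfs -a 1m mfid1

# Three-disk RAIDz1 pool with the fourth partition as a hot spare
# ('tank' is an assumed pool name)
zpool create tank raidz1 mfid1p1 mfid2p1 mfid3p1 spare mfid4p1

# A simple filesystem on top
zfs create tank/data
```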
After copying a few hundred MB of files to the ZFS filesystem, I yanked
/dev/mfid3 out to simulate a disk failure. I was then able to manually
detach the failed disk and replace it with the spare. Later, after pushing
/dev/mfid3 back in, rebooting and scrubbing the pool, mfid4 remained in
place as a permanent replacement for the mfid3 that was pulled out, and
mfid3 became the new spare.
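The manual step described here corresponds roughly to this sequence (again hypothetical; pool name 'tank' assumed):

```shell
# After the pull, the pool reports the member as REMOVED/UNAVAIL
zpool status tank

# Swap the hot spare in for the failed member...
zpool replace tank mfid3p1 mfid4p1

# ...then detach the failed device so the spare becomes a permanent member
zpool detach tank mfid3p1
```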
So a spare disk replacing a failed disk seems to be semi-automatic in
FreeBSD (this was version 10.3), although I have seen fully automatic
replacement on a Solaris SPARC platform.
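For what it's worth, later FreeBSD releases can do the replacement automatically as well: zfsd(8), shipped from FreeBSD 11.0, activates a hot spare on its own when a pool member fails. A hedged sketch ('tank' is again an assumed pool name):

```shell
# Enable the ZFS fault management daemon (FreeBSD 11.0 and later)
sysrc zfsd_enable="YES"
service zfsd start

# Let ZFS automatically replace a device that reappears in the same slot
zpool set autoreplace=on tank
```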
Andy
More information about the freebsd-fs mailing list