benefit of GEOM labels for ZFS, was Hard drive device names... serial numbers
Freddie Cash
fjwcash at gmail.com
Fri Mar 1 03:30:32 UTC 2013
You label the drive with something that tells you:
- enclosure
- column
- row
In other words, something that definitively tells you where the drive is
located, without having to pull it to find out.
To do so, you have to install one drive at a time, and label it at that point.
For example, we use the following pattern: encX-A-#
Where X tells you which enclosure the drive is in, A tells you which column
it's in (letters start at A and increase to the right), and # tells you the
position of the disk in that column, numbered top to bottom.
Whether you label the entire drive using glabel or just a GPT partition is
up to you. We use GPT labels.
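As a concrete (and purely hypothetical) illustration, labeling a disk just
installed in enclosure 0, column A, slot 1 might look like the following;
the device name da5, partition index 1, and pool name tank are only
examples, so adjust to taste:

    # GPT label on the partition; it shows up as /dev/gpt/enc0-A-1
    gpart modify -l enc0-A-1 -i 1 da5

    # ...or label the whole disk with glabel; it shows up as /dev/label/enc0-A-1
    glabel label enc0-A-1 da5

    # The pool then references the physical location, not the da number:
    zpool replace tank da5 gpt/enc0-A-1

Either way, `zpool status' then reads as a map of the chassis instead of a
list of driver-assigned device numbers.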
On 2013-02-28 3:49 PM, "Graham Allan" <allan at physics.umn.edu> wrote:
> Sorry to come in late on this thread, but I've been wrestling with the
> same issue from a different perspective.
>
> Several months ago we created our first "large" ZFS storage system, using
> 42 drives plus a few SSDs in one of the oft-used Supermicro 45-drive
> chassis. It has been working really nicely but has led to some puzzling
> over the best way to do some things when we build more.
>
> We made our pool using geom drive labels. Ever since, I've been wondering
> if this really gives any advantage - at least for this type of system. If
> you need to replace a drive, you don't really know which enclosure slot any
> given da device is in, so our answer has been to dig around using
> sg3_utils commands wrapped in a bit of perl, to correlate the da
> device to the slot via the drive serial number.
>
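For what it's worth, a minimal sketch of that da-to-serial correlation in
plain sh, without the perl wrapper. This assumes diskinfo(8) on your
release reports the "Disk ident" field; the script is only an example:

    #!/bin/sh
    # Print a "da device -> serial number" table for every da disk.
    for disk in $(sysctl -n kern.disks); do
        case "$disk" in
        da*)
            # The "Disk ident" line in diskinfo -v output is the drive serial.
            serial=$(diskinfo -v "$disk" | awk '/Disk ident/ {print $1}')
            printf '%s\t%s\n' "$disk" "${serial:-unknown}"
            ;;
        esac
    done

Cross-referencing that table against the slot/serial listing from the
enclosure (e.g. via sg_ses) gives the physical location of each da device.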
> At this point, having a geom label just seems like an extra layer of
> indirection that adds to my confusion :-) Although setting the geom label
> to the drive serial number might be a serious improvement...
>
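That could be as simple as reading the serial back and stamping it on as a
label before the disk joins the pool (same assumptions and hypothetical
device name as the sketch above):

    # Label da5 with its own serial; it then appears as /dev/label/<serial>
    serial=$(diskinfo -v da5 | awk '/Disk ident/ {print $1}')
    glabel label "$serial" da5

Note that glabel stores its metadata in the last sector of the disk, so
apply the label before the disk is added to the pool.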
> We're about to add a couple more of these shelves to the system, for a
> total of 135 drives (although each shelf would be a separate pool), and
> since they will be standard consumer-grade drives, some frequency of
> replacement is a given.
>
> Does anyone have any good tips on how to manage a large number of drives
> in a zfs pool like this?
>
> Thanks,
>
> Graham
> --
> -------------------------------------------------------------------------
> Graham Allan
> School of Physics and Astronomy - University of Minnesota
> -------------------------------------------------------------------------