ZFS/NVMe layout puzzle
Mel Pilgrim
list_freebsd at bluerosetech.com
Sat Oct 6 15:42:22 UTC 2018
On 2018-10-04 11:43, Garrett Wollman wrote:
> Say you're using an all-NVMe zpool with PCIe switches to multiplex
> drives (e.g., 12 4-lane NVMe drives on one side, 1 PCIe x8 slot on the
> other). Does it make more sense to spread each vdev across switches
> (and thus CPU sockets) or to have all of the drives in a vdev on the
> same switch? I have no intuition about this at all, and it may not
> even matter. (You can be sure I'll be doing some benchmarking.)
>
> I'm assuming the ZFS code doesn't have any sort of CPU affinity that
> would allow it to take account of the PCIe topology even if that
> information were made available to it.
In this scenario, the PCIe switch takes the role of an HBA in terms of
fault vulnerability: put every drive of a vdev behind one switch and that
switch becomes a single point of failure for the whole vdev (and, since
losing a top-level vdev faults the pool, for the pool). Spread each vdev
across switches and a dead switch only degrades the vdevs, provided the
per-vdev redundancy covers however many of a vdev's drives sit behind any
one switch.
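
To make the difference concrete, here is a rough sketch (Python; the device
names, the 6-drives-per-switch split, and the choice of 2-way mirrors are
all made-up assumptions for illustration, not from the original post). It
builds the two layouts and counts how many members each vdev loses if one
switch dies:

    #!/usr/bin/env python3
    # Hypothetical sketch: device names, switch assignments, and the use of
    # 2-way mirrors are assumptions for illustration; adjust to the real box.

    # 12 NVMe drives, 6 behind each of two PCIe switches (assumed topology).
    SWITCHES = {
        "switch0": ["nvd0", "nvd1", "nvd2", "nvd3", "nvd4", "nvd5"],
        "switch1": ["nvd6", "nvd7", "nvd8", "nvd9", "nvd10", "nvd11"],
    }
    WIDTH = 2  # drives per mirror vdev

    def spread_layout():
        """Pair drives from different switches so every mirror spans both."""
        return [list(pair) for pair in zip(*SWITCHES.values())]

    def local_layout():
        """Pair adjacent drives so every mirror sits behind a single switch."""
        flat = [d for drives in SWITCHES.values() for d in drives]
        return [flat[i:i + WIDTH] for i in range(0, len(flat), WIDTH)]

    def zpool_cmd(vdevs, pool="tank"):
        """Render the equivalent 'zpool create' command line."""
        parts = ["zpool", "create", pool]
        for vdev in vdevs:
            parts.append("mirror")
            parts.extend(vdev)
        return " ".join(parts)

    if __name__ == "__main__":
        lost = set(SWITCHES["switch0"])
        for name, layout in (("spread", spread_layout()),
                             ("local", local_layout())):
            worst = max(len(lost & set(v)) for v in layout)
            print(zpool_cmd(layout))
            print(f"  {name}: worst-case members lost per vdev if switch0 "
                  f"dies: {worst} of {WIDTH}")

With the mirrors split across switches, a dead switch costs each vdev one of
two members; with both halves behind one switch, it takes three vdevs out
entirely. The same bookkeeping applies to raidz, where spreading only helps
if the parity level is at least the number of drives a vdev has behind any
one switch.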