questions using zfs on raid controllers without jbod option

Josh Carter josh at multipart-mixed.com
Thu Dec 3 17:00:23 UTC 2009


Kai,

Does your controller have the option of creating a "volume" rather than a RAID0? On the Adaptec and LSI cards I've tested, there was an option to create a simple concatenated volume of disks, thus bypassing any re-chunking of data. I created one volume per drive and performance was on par with using a non-RAID card. (As a side note, ZFS could push the drives harder through separate volumes than the card could when it handled the striping itself in hardware.)
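For illustration, here's roughly what that looks like once the per-drive volumes are visible to FreeBSD. This is only a sketch: the pool name "tank" is made up, and the da2-da6 device names are taken from Kai's setup; substitute whatever your volumes actually show up as.

```shell
# One single-drive volume per physical disk on the controller,
# then let ZFS build the redundancy itself:
zpool create tank raidz1 da2 da3 da4 da5 da6

# Confirm ZFS sees all five devices individually:
zpool status tank
```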

The spikes you see in write performance are normal. ZFS gathers individual writes into transaction groups and commits them to disk in batches; when a transaction group flushes, you see the spike in iostat.
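If you want to watch the pattern, a one-second interval makes the flushes obvious ("tank" is an assumed pool name):

```shell
# Per-second I/O stats: write bandwidth will sit near zero,
# punctuated by large spikes as each transaction group commits.
zpool iostat tank 1
```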

As for caching, I'd go ahead and turn on write caching on the RAID card if you've got a battery. To make a dedicated ZFS intent log (ZIL) device pay off, you need a very fast write device or you'll slow the system down. STEC Zeus solid-state drives make good ZIL devices, but they're super-expensive. On the read side, I would let ZFS do its own caching.
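Should you add a fast log device later, it's a one-liner. Again a sketch only: "tank" and da7 are placeholders for your pool and the SSD's device node.

```shell
# Attach a fast SSD as a separate intent log (slog) device;
# synchronous writes then land on the SSD instead of the pool disks.
zpool add tank log da7
```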

Best regards,
Josh


On Dec 3, 2009, at 1:38 AM, Kai Gallasch wrote:

> 
> Hi list.
> 
> What's the best way to deploy ZFS on a server with a built-in RAID
> controller that lacks JBOD functionality?
> 
> I am currently testing an HP/Compaq ProLiant server with a battery-backed
> SmartArray P400 controller (ciss) and 5 SAS disks, which I use for a
> raidz1 pool.
> 
> What I did was create a RAID0 array on the controller for each disk,
> with the RAID0 chunk size set to 32K (those RAID0 drives show up as
> da2-da6 in FreeBSD), and used them for a raidz1 pool.
> 
> Watching zpool iostat I can see that there are almost never
> continuous writes; instead, most of the copied data is written in
> spikes of write operations. My guess is that this behaviour is
> caching related and might be caused by the ZFS ARC and the RAID
> controller cache not playing well together.
> 
> questions:
> 
> "raid0 drives":
> 
> - What's the best chunk size for a single RAID0 drive that is used as a
>  device for a pool? (I use 32K)
> 
> - Should the write cache on the physical disks that are used as RAID0
>  drives for ZFS be enabled if the RAID controller has a battery
>  backup unit? (I enabled the disk write cache for all disks)
> 
> raid controller cache:
> 
> My current setting for the RAID controller cache is "cache 50% reads
> and 50% writes".
> 
> - Does it make sense to have caching of read and write operations
>  enabled with this setup? I wonder: shouldn't it be the job of the
>  ZFS ARC to do the caching?
> 
> - Does ZFS prefetch make any sense if your RAID controller already
>  caches read operations?
> 
> 
> Cheers,
> Kai.
> 
> 
> 
> 
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"


