Write cache, is write cache, is write cache?

Olivier Smedts olivier at gid0.org
Mon Jan 24 14:22:49 UTC 2011


Hello,

2011/1/22 Jeremy Chadwick <freebsd at jdc.parodius.com>:
> On Sat, Jan 22, 2011 at 10:39:13AM +0000, Karl Pielorz wrote:
>> I've a small HP server I've been using recently (an NL36). I've got
>> ZFS set up on it, and it runs quite nicely.
>>
>> I was using the server for zeroing some drives the other day - and
>> noticed that:
>>
>>  dd if=/dev/zero of=/dev/ada0 bs=2m
>>
>> gives around 12Mbyte/sec throughput when that's all that's running
>> on the machine.
>>
>> Looking in the BIOS, there's an "Enabled drive write cache" option - which
>> was set to 'No'. Changing it to 'Yes' - I now get around
>> 90-120Mbyte/sec doing the same thing.
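
(Side note: you can also check what the drive itself reports about its
write cache from within FreeBSD, without rebooting into the BIOS. The
exact names below are from memory, so treat them as a sketch:

  # columns show whether the write cache is supported / enabled
  camcontrol identify ada0 | grep -i "write cache"

  # ada(4) has its own write-cache knob as well (sysctl/loader tunable)
  sysctl kern.cam.ada.write_cache

camcontrol asks the drive directly, so it should reflect whatever the
BIOS option actually did.)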
>>
>> Knowing all the issues with IDE drives and write caches - is there
>> any way of telling if this would be safe to enable with ZFS? (i.e.
>> if the option is likely to be making the drive completely ignore
>> flush requests?) - or if it's still honouring the various 'write
>> through' options if set on data to be written?
>>
>> I'm presuming dd won't by default be writing the data with the
>> 'flush' bit set - as it probably doesn't know about it.
>>
>> Is there any way of testing this? (say using some tool to write the
>> data using either lots of 'cache flush' or 'write through' stuff) -
>> and seeing if the performance drops back to nearer the 12Mbyte/sec?
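
One rough way to test that, assuming bonnie++ from ports and a scratch
filesystem on that disk (/mnt/test below is just a placeholder): run it
once normally and once with -b, which makes it fsync() after every
write. With the BIOS option enabled, a drive that really honours flushes
should fall back towards the slow figure on the -b run; if both runs
look much the same, the flushes are probably being ignored somewhere:

  # normal buffered run (-s is in megabytes; use at least twice your RAM)
  bonnie++ -d /mnt/test -s 8192 -n 0 -u root

  # same again, but with an fsync() after every write
  bonnie++ -d /mnt/test -s 8192 -n 0 -u root -b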
>>
>> I've not enabled the option with the ZFS drives in the machine - I
>> suppose I could test it.
>>
>> Write performance on the unit isn't that bad [it's not stunning] -
>> though with 4 drives in a mirrored set, that probably helps hide some
>> of the impact this option might have.
>
> I'm stating the below with the assumption that you have SATA disks with
> some form of AHCI-based controller (possibly Intel ICHxx or ESBx
> on-board), and *not* a hardware RAID controller with cache/RAM of its
> own:
>
> Keep write caching *enabled* in the system BIOS.  ZFS will take care of
> any underlying "issues" in the case the system abruptly loses power
> (hard disk cache contents lost), since you're using ZFS mirroring.  The
> same would apply if you were using raidz{1,2}, but not if you were using
> ZFS on a single device (no mirroring/raidz).  In that scenario, expect
> data loss; but the same could be said of any non-journalling filesystem.

Could you explain this behavior? I don't see why ZFS would not ask a
single disk to flush its caches just as it does for a mirror/raidz. That's
necessary for the ZIL, and to avoid filesystem corruption.
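
For what it's worth, ZFS on FreeBSD only stops sending those flushes if
you explicitly ask it to; going from memory, the knob is the
vfs.zfs.cache_flush_disable sysctl, so double-check the name:

  # 0 (the default) means ZFS issues a cache flush with every ZIL
  # commit and transaction group
  sysctl vfs.zfs.cache_flush_disable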

> I have no idea why your BIOS setting for this option was disabled.  I do
> not know if it's the factory default either; you would have to talk to
> HP about that, or spend the time figuring out who was in the system BIOS
> last and how/if/why they messed around (the number of possibilities for
> why the option is disabled is endless).
>
> You can use bonnie++ (ports/benchmarks/bonnie++) if you wish to do
> throughput and/or benchmark testing of sorts.
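
For a quick first pass before a full bonnie++ run, diskinfo(8) has a
naive built-in benchmark; it only reads, so it won't show the write-cache
effect, but it gives a baseline for the drives:

  # non-destructive seek/transfer test of the raw device
  diskinfo -v -t ada0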
>
> --
> | Jeremy Chadwick                                   jdc at parodius.com |
> | Parodius Networking                       http://www.parodius.com/ |
> | UNIX Systems Administrator                  Mountain View, CA, USA |
> | Making life hard for others since 1977.               PGP 4BD6C0CB |



-- 
Olivier Smedts                                                 _
                                        ASCII ribbon campaign ( )
e-mail: olivier at gid0.org        - against HTML email & vCards  X
www: http://www.gid0.org    - against proprietary attachments / \

  "Il y a seulement 10 sortes de gens dans le monde :
  ceux qui comprennent le binaire,
  et ceux qui ne le comprennent pas."

