ufs2 / softupdates / ZFS / disk write cache

Dan Naumov dan.naumov at gmail.com
Sun Jun 21 02:18:41 UTC 2009


I decided to do some performance tests of my own; "bonnie -s 4096" was
used to obtain the results. Note that these results should be used to
compare "write cache on" against "write cache off", and not to compare
UFS2 against ZFS, as the testing was done on different parts of the
same physical disk (the UFS2 partition resides on the first 16 GB of
the disk and the ZFS pool takes the remaining ~1.9 TB) and I am also
using rather conservative ZFS tunables.
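
For completeness, here is roughly what that tuning boils down to in
/boot/loader.conf (the ARC and kmem values are the ones quoted with
the ZFS results below; the hw.ata.wc line is the usual way to toggle
the write cache for an ata(4)-attached disk -- a CAM/SCSI disk would
need camcontrol(8) instead -- so treat this as a sketch rather than an
exact recipe):

    # conservative ZFS memory tuning (matches the 384M ARC / 1GB kmem
    # noted with the ZFS results below)
    vm.kmem_size="1024M"
    vm.kmem_size_max="1024M"
    vfs.zfs.arc_max="384M"

    # ATA write cache: set to 0 for the "without write cache" runs,
    # leave at 1 (the default) otherwise
    hw.ata.wc="0"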


UFS2 with write cache:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         4096 55457 95.9 91630 46.7 36264 37.5 46565 74.0 84751 33.7 164.3 10.3

UFS2 without write cache:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         4096  4938 46.9  4685 18.0  4288 21.8 17453 34.0 74232 31.6 165.0  9.9


As we can clearly see, the performance difference between having the
disk cache enabled and disabled is _ENORMOUS_. In the case of
sequential block writes on UFS2, the loss is a staggering 94.89%
(91630 K/sec with the cache on versus 4685 K/sec with it off). More
surprisingly, even reading is affected in a noticeable way: per-char
reads suffer a 62.52% penalty, while block reads take a 12.41% hit.
Moving on to testing ZFS with and without the disk cache enabled:


ZFS with write cache (384M ARC, 1GB max kmem):
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         4096 25972 66.1 45026 40.6 34269 36.0 46371 86.5 93973 34.6  84.5  8.5

ZFS without write cache (384M ARC, 1GB max kmem):
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         4096  2399  6.7  2258  3.5  2290  3.9 34380 66.1 85971 32.8  56.7  6.1

		
Uh oh... After some digging around, I found the following quote in the
ZFS Evil Tuning Guide at
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide:
"ZFS is designed to work with storage devices that manage a disk-level
cache. ZFS commonly asks the storage device to ensure that data is
safely placed on stable storage by requesting a cache flush." I guess
this might be somewhat related to why ZFS suffers bigger losses than
UFS2 in the "disk cache disabled" scenario.

It is quite obvious at this point that disabling the disk cache, in
order to have softupdates live in harmony with disks that "lie" about
whether the cache contents have actually been committed to the
platters, is not in any way, shape or form a viable solution to the
problem. On a sidenote, is there any way I can test whether *MY* disk
is truthful about committing its cache to disk or not?
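
The closest thing to a test I can think of is timing small synchronous
writes on a filesystem that actually issues cache flushes on fsync(2)
(which ZFS does, per the quote above). A 7200 RPM disk can only commit
on the order of 100-200 discrete writes per second to the platters, so
if a loop like the sketch below reports thousands of fsync() calls per
second with the write cache enabled, the flush requests are being
acknowledged from cache somewhere along the way. This is only a rough
heuristic I am assuming works, not a proof, and error handling is
minimal:

    /*
     * fsync-rate.c -- rough check for write-cache "lying".
     * Writes small blocks to a test file, fsync()ing after each one,
     * and reports how many synchronous writes per second the disk
     * acknowledges.
     *
     * Build: cc -o fsync-rate fsync-rate.c
     * Usage: ./fsync-rate /path/on/the/filesystem/under/test
     */
    #include <sys/time.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define WRITES 1000

    int
    main(int argc, char **argv)
    {
            char buf[512];
            struct timeval start, end;
            double elapsed;
            int fd, i;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s testfile\n", argv[0]);
                    exit(1);
            }
            fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0600);
            if (fd < 0) {
                    perror("open");
                    exit(1);
            }
            memset(buf, 'x', sizeof(buf));

            gettimeofday(&start, NULL);
            for (i = 0; i < WRITES; i++) {
                    if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                            perror("write");
                            exit(1);
                    }
                    if (fsync(fd) != 0) {
                            perror("fsync");
                            exit(1);
                    }
            }
            gettimeofday(&end, NULL);

            elapsed = (end.tv_sec - start.tv_sec) +
                (end.tv_usec - start.tv_usec) / 1e6;
            printf("%d fsync() calls in %.2f seconds = %.0f per second\n",
                WRITES, elapsed, WRITES / elapsed);
            close(fd);
            unlink(argv[1]);
            return (0);
    }

Of course, the only truly conclusive test is still pulling the power
in the middle of a write-heavy workload and checking afterwards
whether the data the disk claimed to have committed actually survived.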

In the past (during my previous foray into the FreeBSD world, circa
2001/2002) I suffered severe data corruption (leading to an unbootable
system) using UFS2 + softupdates on two different occasions due to
power losses, and that past experience has me very worried about the
proper way to configure my system to avoid such incidents in the
future.


- Sincerely,
Dan Naumov








On Sun, Jun 21, 2009 at 4:08 AM, Kip Macy<kip.macy at gmail.com> wrote:
>>
>> My guess is that it will be quite noticeable, but that is only a guess.
>> (Keep in mind that UFS+softupdates does quite a bit of write-caching on its
>> own, so just switching to ZFS is unlikely to improve write performance
>> significantly compared to using UFS.)
>
>
> That all depends on how much the drive relies on the write cache for
> batching writes to disk. Soft updates does a lot of small random
> writes for metadata updates which will likely be heavily penalized by
> the absence of write caching. On my SSD, which unfortunately turned
> out to be camera grade flash, with FFS the system was unusable when
> doing large numbers of metadata updates, svn checkouts would take
> hours. I postulated that ZFS would map well to the large erase blocks,
> so I destroyed /usr and recreated a zpool in its place. I now get
> random write performance better than FFS, "I lived happily ever
> after."
>
> I don't know if ZFS will provide the same benefit in your situation.
> My point is just that FFS+SU and ZFS are "apples and oranges."
>
> Please note that I've taken -stable off of the CC, ZFS has been
> getting a lot of mailing list traffic lately and I've been hearing
> groans from certain quarters about it drowning out other discussions.
> Let's try to keep the discussions to freebsd-fs.
>
>
> Thanks,
> Kip

