zvol + raidz issue?
Niki Hammler
mailinglists at nobaq.net
Tue Aug 28 09:10:29 UTC 2012
On 26.08.2012 at 22:13, Freddie Cash wrote:
> (Sorry for top-post, sending from phone.)
>
> Please show the command-line used to create the zvol. Especially the
> recordsize option. When using zvols, you have to make sure to match the
> recordsize of the zvol to that of the filesystem used above it.
> Otherwise, performance will be atrocious.
Hi,
Sorry for my third posting on this.
Now I strictly followed your suggestion and used
zfs create -b 128k -V 500g plvl5i0/zvtest
(with 128k being the recordsize of the dataset in the zpool).
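For anyone following along, the recordsize can be read back with zfs get, so the -b value does not have to be guessed. A minimal sketch using the pool name from this thread:

```shell
# Read the recordsize of the pool's top-level dataset (plvl5i0 as above),
# then create the zvol with a matching volblocksize.
zfs get -H -o value recordsize plvl5i0
zfs create -b 128k -V 500g plvl5i0/zvtest
```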
Suddenly the write performance increased from 2.5 MB/s to 250 MB/s
(or 78 MB/s when using bs=4096 with dd).
1.) How can this be explained?
2.) Is there any problem with choosing -b 128k (can I always blindly
choose -b 128k)?
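One way to check point 2 empirically rather than blindly would be a loop over candidate volblocksize values (just a sketch: the bstest zvol name is made up, and dd from /dev/zero only gives a rough sequential upper bound):

```shell
#!/bin/sh
# Try a few volblocksize values on the raidz pool and compare
# sequential write throughput; the test zvol is destroyed each round.
for bs in 8k 16k 32k 64k 128k; do
    zfs create -b $bs -V 10g plvl5i0/bstest
    dd if=/dev/zero of=/dev/zvol/plvl5i0/bstest bs=2048k count=500
    zfs destroy plvl5i0/bstest
    echo "volblocksize=$bs done"
done
```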
Remember again that the problem occurs ONLY with the combination
raidz1 + zvol + forced 4096-byte alignment, and in no other case!
Regards
Niki
> On Aug 26, 2012 11:50 AM, "Niki Hammler" <mailinglists at nobaq.net> wrote:
>
> Hi,
>
> Given: new HP Proliant Microserver N40L (4 GB RAM) and 3x2TB SATA drives
> (SAMSUNG HD204UI, ST32000542AS, WDC WD20EARX-00PASB0).
>
> Goal: RAIDz1 containing datasets and zvols to be exported via iSCSI.
>
> Issue: When I create a zvol on a RAIDz1, I get horrible performance (a
> few MB/s or less).
>
> First test: 500G zvol on a mirror (freshly created):
>
> # zpool list
> NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> plvl1i0  1.81T  1.97G  1.81T   0%  ONLINE   /mnt
> # zfs list
> NAME             USED  AVAIL  REFER  MOUNTPOINT
> plvl1i0          500G  1.30T   112K  /mnt/plvl1i0
> plvl1i0/zvtest   500G  1.78T  1.97G  -
> # dd if=/dev/zero of=/dev/zvol/plvl1i0/zvtest bs=2048k count=1000
> 1000+0 records in
> 1000+0 records out
> 2097152000 bytes transferred in 17.318348 secs (121094230 bytes/sec)
> #
>
> Corresponds to 115.48 MB/s, which is good (similar results for a
> single drive).
>
> Second test: 500G zvol on the 3x2TB raidz1 (freshly created):
>
> # dd if=/dev/zero of=/dev/zvol/plvl5i0/zvtest bs=2048k count=1000
>
> 1000+0 records in
> 1000+0 records out
> 2097152000 bytes transferred in 700.126725 secs (2995389 bytes/sec)
> #
>
> which is only 2.85 MB/s.
>
> Remark: Both pools were created with the forced 4096-byte alignment
> option (since I have a mix of 512-byte and 4096-byte sector drives).
>
> This is the point where you might say the problem is related to
> raidz1. But it is not: I created a 500G dataset in the same RAIDz pool
> and copied about 100G of data onto it with rsync+ssh. Result: about
> 28 MB/s end-to-end performance, which is reasonable.
>
> Are there any known issues with zvol + raidz1? Googling turned up
> nothing.
>
> I am running a minimal FreeBSD 8.2 (FreeNAS):
>
> # uname -a
> FreeBSD zetta 8.2-RELEASE-p9 FreeBSD 8.2-RELEASE-p9 #0: Thu Jul 19
> 12:39:10 PDT 2012
> root at build.ixsystems.com:/build/home/jpaetzel/8.2.0/os-base/amd64/build/home/jpaetzel/8.2.0/FreeBSD/src/sys/FREENAS.amd64
> amd64
>
> Regards,
> Niki
>
>
> PS: This is also posted on
> http://forums.freenas.org/showthread.php?p=35590
>