FreeBSD 9.1 and ZFS v28 performance
Davide D'Amico
davide.damico at contactlab.com
Mon Mar 18 16:13:21 UTC 2013
On 18/03/13 16:31, Steven Hartland wrote:
>
> ----- Original Message ----- From: "Davide D'Amico"
> <davide.damico at contactlab.com>
> To: <freebsd-fs at freebsd.org>
> Sent: Monday, March 18, 2013 2:50 PM
> Subject: FreeBSD 9.1 and ZFS v28 performance
>
>
>> Hi all,
>> I'm trying to use ZFS on a DELL R720 with 2x 6-core CPUs, 32GB RAM, an H710
>> controller (no JBOD) and 15K rpm SAS disks. It will host a MySQL 5.6
>> server, so I am trying to use ZFS to get the L2ARC and ZIL benefits.
>>
>> I created a RAID10 and used zpool to create a pool on top:
>>
>> # zpool create DATA mfid3
>> # zpool add DATA cache mfid1 log mfid2
>>
>> I have a question on zfs performances. Using:
>>
>> dd if=/dev/zero of=file.out bs=16k count=1M
>>
>> I cannot get above 400MB/s, so I think I'm missing something; I
>> tried removing the ZIL and the L2ARC, but the result is the same.
>>
>> Here my configuration details:
>>
>> OS: FreeBSD 9.1 amd64 GENERIC
>>
>> /boot/loader.conf
>> vfs.zfs.arc_min="4096M"
>> vfs.zfs.arc_max="15872M"
>> vm.kmem_size_max="64G"
>> vm.kmem_size="49152M"
>> vfs.zfs.write_limit_override=1073741824
>>
>> /etc/sysctl.conf:
>> kern.ipc.somaxconn=32768
>> kern.threads.max_threads_per_proc=16384
>> kern.maxfiles=262144
>> kern.maxfilesperproc=131072
>> kern.ipc.nmbclusters=65536
>> kern.corefile="/var/coredumps/%U.%N.%P.core"
>> vfs.zfs.prefetch_disable="1"
>> kern.maxvnodes=250000
>>
>> mfiutil show volumes:
>> mfi0 Volumes:
>> Id Size Level Stripe State Cache Name
>> mfid0 ( 278G) RAID-1 64k OPTIMAL Disabled <BASE>
>> mfid1 ( 118G) RAID-0 64k OPTIMAL Disabled <L2ARC0>
>> mfid2 ( 118G) RAID-0 64k OPTIMAL Disabled <ZIL0>
>> mfid3 ( 1116G) RAID-10 64k OPTIMAL Disabled <DATA>
>>
>> zpool status:
>> pool: DATA
>> state: ONLINE
>> scan: none requested
>> config:
>>
>> NAME STATE READ WRITE CKSUM
>> DATA ONLINE 0 0 0
>> mfid3 ONLINE 0 0 0
>> logs
>> mfid2 ONLINE 0 0 0
>> cache
>> mfid1 ONLINE 0 0 0
>>
>> errors: No known data errors
>>
>> zfs get all DATA
>> NAME PROPERTY VALUE SOURCE
>> DATA type filesystem -
>> DATA creation Mon Mar 18 13:41 2013 -
>> DATA used 53.0G -
>> DATA available 1.02T -
>> DATA referenced 53.0G -
>> DATA compressratio 1.00x -
>> DATA mounted yes -
>> DATA quota none default
>> DATA reservation none default
>> DATA recordsize 16K local
>> DATA mountpoint /DATA default
>> DATA sharenfs off default
>> DATA checksum on default
>> DATA compression off default
>> DATA atime off local
>> DATA devices on default
>> DATA exec on default
>> DATA setuid on default
>> DATA readonly off default
>> DATA jailed off default
>> DATA snapdir hidden default
>> DATA aclmode discard default
>> DATA aclinherit restricted default
>> DATA canmount on default
>> DATA xattr off temporary
>> DATA copies 1 default
>> DATA version 5 -
>> DATA utf8only off -
>> DATA normalization none -
>> DATA casesensitivity sensitive -
>> DATA vscan off default
>> DATA nbmand off default
>> DATA sharesmb off default
>> DATA refquota none default
>> DATA refreservation none default
>> DATA primarycache metadata local
>> DATA secondarycache all default
>> DATA usedbysnapshots 0 -
>> DATA usedbydataset 53.0G -
>> DATA usedbychildren 242K -
>> DATA usedbyrefreservation 0 -
>> DATA logbias latency default
>> DATA dedup off default
>> DATA mlslabel -
>> DATA sync standard default
>> DATA refcompressratio 1.00x -
>> DATA written 53.0G -
>> DATA zfs:zfs_nocacheflush 1 local
>>
>>
>> I'm using recordsize=16k because of mysql.
>>
>> I am trying to use sysbench (0.5, not in the ports yet) with the oltp test
>> suite, and my performance is not so good.
>
> First off ideally you shouldn't use RAID controllers for ZFS, let it
> have the raw disks and use a JBOD controller e.g. mps not a HW RAID
> controller like mfi.
I tried removing the hardware RAID10, leaving the 4 disks unconfigured,
and then:
# mfiutil create jbod mfid3 mfid4 mfid5 mfid6
Same behaviour/performance (probably because the PERC H710 'sees' them as
single-disk RAID-0 devices).
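If the controller does end up exposing the four drives individually, a
ZFS-native equivalent of RAID10 is a pool of striped mirrors. A sketch,
assuming the single-drive volumes still appear as mfid3..mfid6 and the SSDs
as mfid1/mfid2 (device names are assumptions, not verified):

```shell
# Let ZFS handle mirroring and striping itself instead of the H710
# firmware; checksum errors can then be self-healed from the mirror copy,
# which a hardware RAID10 hides from ZFS.
zpool create DATA mirror mfid3 mfid4 mirror mfid5 mfid6
# Re-attach the SSD log and cache devices as before.
zpool add DATA log mfid2 cache mfid1
```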
Here my controller details:
mfi0 Firmware Package Version: 21.0.2-0001
mfi0 Firmware Images:
Name Version Date Time Status
BIOS 5.30.00_4.12.05.00_0x05110000 1/ 7/2012 active
CTLR 4.00-0014 Aug 04 2011 12:49:17 active
PCLI 05.00-03:#%00008 Feb 17 2011 14:03:12 active
APP 3.130.05-1587 Apr 03 2012 09:36:13 active
NVDT 2.1108.03-0076 Dec 02 2011 22:55:02 active
BTBL 2.03.00.00-0003 Dec 16 2010 17:31:28 active
BOOT 06.253.57.219 9/9/2010 15:32:25 active
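One thing worth checking: a sequential dd from /dev/zero issues
asynchronous writes, which do not go through the separate log device at
all, so adding or removing the ZIL would not change that number. A sketch
of how this could be confirmed (the sync=always line is for testing only):

```shell
# Run during the dd test: per-vdev I/O statistics every second.
# If the 'logs' row stays at zero, the workload is async and the
# SLOG is not in the write path.
zpool iostat -v DATA 1
# Forcing synchronous writes exercises the log device instead:
# zfs set sync=always DATA
```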
>
> HEAD has some significant changes for the mfi driver specifically:-
> http://svnweb.freebsd.org/base?view=revision&revision=247369
>
> This fixes lots of bugs but also enables full queue support on TBOLT
> cards, so if your mfi is a TBOLT card you may see some speed-up in
> random IO, not that this would affect your test here.
>
> While having a separate ZIL disk is good, your benefits may well be
> limited if said disk is a traditional HD; better to look at enterprise
> SSDs for this. The same, and then some, applies to your L2ARC disks.
I'm using SSD disks for zfs cache and zfs log:
mfi0 Physical Drives:
0 ( 279G) ONLINE <SEAGATE ST3300657SS ES65 serial=6SJ5JWFD> SAS E1:S0
1 ( 279G) ONLINE <SEAGATE ST3300657SS ES65 serial=6SJ5JW8S> SAS E1:S1
2 ( 558G) ONLINE <SEAGATE ST3600057SS ES65 serial=6SL45EB8> SAS E1:S2
3 ( 558G) ONLINE <SEAGATE ST3600057SS ES65 serial=6SL44ZV5> SAS E1:S3
4 ( 558G) ONLINE <SEAGATE ST3600057SS ES65 serial=6SL462QV> SAS E1:S4
5 ( 558G) ONLINE <SEAGATE ST3600057SS ES65 serial=6SL42YQY> SAS E1:S5
6 ( 119G) ONLINE <OCZ-VERTEX4 1.4 serial=OCZ-17D56E1KT4PW8MX> SATA E1:S6
7 ( 119G) ONLINE <OCZ-VERTEX4 1.4 serial=OCZ-605IWNB3XLKQ6CP> SATA E1:S7
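The recordsize=16k setting (tuned for InnoDB pages) also penalizes a
sequential dd benchmark, since every write is split into small records. A
way to isolate that effect, assuming a hypothetical child dataset named
DATA/seqtest just for the test:

```shell
# Compare against the default 128k recordsize on a scratch dataset;
# a large gap here would point at recordsize rather than the controller.
zfs create -o recordsize=128k DATA/seqtest
dd if=/dev/zero of=/DATA/seqtest/file.out bs=1m count=4096
zfs destroy DATA/seqtest
```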
Thanks,
d.