Freebsd8.1 ZFS with mfi driver performance
jackie wang
jackieit at gmail.com
Thu Jan 20 14:27:53 UTC 2011
Hello, I have a question. Sorry, I have googled a lot and found no answer.
I run FreeBSD 8.1 with the ZFS file system and the mfi driver.
My problem is that when the system starts, it is fast, but after one
or two days it begins to slow down.
The server is mainly used as a web server and has 8 GB of RAM.
My uname -a output:
FreeBSD xxx 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Mon Dec 20 20:50:20
CST 2010 root at VS001.vlongbiz.com:/usr/obj/usr/src/sys/xxxCore
amd64
VS001# zpool status -v
  pool: backup
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        backup          ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            gpt/disk7   ONLINE       0     0     0
            gpt/disk8   ONLINE       0     0     0
            gpt/disk9   ONLINE       0     0     0
            gpt/disk10  ONLINE       0     0     0
            gpt/disk11  ONLINE       0     0     0

errors: No known data errors

  pool: wwwroot
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        wwwroot         ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            gpt/disk2   ONLINE       0     0     0
            gpt/disk3   ONLINE       0     0     0
            gpt/disk4   ONLINE       0     0     0
            gpt/disk5   ONLINE       0     0     0
            gpt/disk6   ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors
-------------------------------------------------------------
VS001# mfiutil show config
mfi0 Configuration: 12 arrays, 12 volumes, 0 spares
    array 0 of 1 drives:
        drive 0 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1PYA0> SATA enclosure 1, slot 0
    array 1 of 1 drives:
        drive 1 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1QE0P> SATA enclosure 1, slot 1
    array 2 of 1 drives:
        drive 2 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1LXZE> SATA enclosure 1, slot 2
    array 3 of 1 drives:
        drive 3 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1QDPS> SATA enclosure 1, slot 3
    array 4 of 1 drives:
        drive 4 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1JGKZ> SATA enclosure 1, slot 4
    array 5 of 1 drives:
        drive 5 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1NQBS> SATA enclosure 1, slot 5
    array 6 of 1 drives:
        drive 6 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1QHVC> SATA enclosure 1, slot 6
    array 7 of 1 drives:
        drive 7 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1Q8N4> SATA enclosure 1, slot 7
    array 8 of 1 drives:
        drive 8 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1JVD6> SATA enclosure 1, slot 8
    array 9 of 1 drives:
        drive 9 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1NJ9K> SATA enclosure 1, slot 9
    array 10 of 1 drives:
        drive 10 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1QHM1> SATA enclosure 1, slot 10
    array 11 of 1 drives:
        drive 11 ( 932G) ONLINE <ST31000524NS KA05 serial=9WK1PELL> SATA enclosure 1, slot 11
    volume mfid0 (931G) RAID-0 64K OPTIMAL <vdisk0> spans:
        array 0
    volume mfid1 (931G) RAID-0 64K OPTIMAL <vdisk1> spans:
        array 1
    volume mfid2 (931G) RAID-0 64K OPTIMAL <vdisk2> spans:
        array 2
    volume mfid3 (931G) RAID-0 64K OPTIMAL <vdisk3> spans:
        array 3
    volume mfid4 (931G) RAID-0 64K OPTIMAL <vdisk4> spans:
        array 4
    volume mfid5 (931G) RAID-0 64K OPTIMAL <vdisk5> spans:
        array 5
    volume mfid6 (931G) RAID-0 64K OPTIMAL <vdisk6> spans:
        array 6
    volume mfid7 (931G) RAID-0 64K OPTIMAL <vdisk7> spans:
        array 7
    volume mfid8 (931G) RAID-0 64K OPTIMAL <vdisk8> spans:
        array 8
    volume mfid9 (931G) RAID-0 64K OPTIMAL <vdisk9> spans:
        array 9
    volume mfid10 (931G) RAID-0 64K OPTIMAL <vdisk10> spans:
        array 10
    volume mfid11 (931G) RAID-0 64K OPTIMAL <vdisk11> spans:
        array 11
VS001# gstat -f mfid.p1
dT: 1.002s w: 1.000s filter: mfid.p1
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
0 0 0 0 0.0 0 0 0.0 0.0| mfid0p1
0 0 0 0 0.0 0 0 0.0 0.0| mfid1p1
0 63 63 2946 26.7 0 0 0.0 90.6| mfid2p1
0 62 62 2824 22.8 0 0 0.0 78.0| mfid3p1
0 64 64 2949 22.3 0 0 0.0 84.9| mfid4p1
0 66 66 3012 19.4 0 0 0.0 77.4| mfid5p1
2 68 68 3266 16.3 0 0 0.0 68.9| mfid6p1
0 0 0 0 0.0 0 0 0.0 0.0| mfid7p1
0 0 0 0 0.0 0 0 0.0 0.0| mfid8p1
0 0 0 0 0.0 0 0 0.0 0.0| mfid9p1
VS001# more /boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"
geom_mirror_load="YES"
#vm.kmem_size="4096M"
#vm.kmem_size_max="3072M"
vfs.zfs.arc_min="1024M"
vfs.zfs.arc_max="4096M"
#vfs.zfs.vdev.cache.size="5M"
vfs.zfs.vdev.min_pending="1"
vfs.zfs.vdev.max_pending="1"
vfs.zfs.prefetch_disable="1"
vfs.zfs.txg.timeout="5"
vfs.zfs.txg.synctime="1"
vfs.zfs.txg.write_limit_override="524288000"
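(For comparison, a commonly cited starting point for an 8 GB amd64 box on FreeBSD 8.x caps kmem and the ARC together, something like the fragment below. These values are examples I have seen suggested, not settings tested on this machine:)

```
# Example starting point only -- assumed values, not verified here.
# Cap the ARC well below total RAM so web-server processes keep memory:
vm.kmem_size="6144M"
vfs.zfs.arc_max="3072M"
```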
When the system is slow, about 5 GB of memory is in the wired state,
and the system begins to use swap.
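To see how much of that wired memory is the ARC itself, the kstat.zfs.misc.arcstats sysctls (the stock ZFS ARC counters, which I believe 8.1 exposes) can be sampled; they report bytes, so a small awk filter makes them readable:

```shell
# On the FreeBSD box, sample the ARC counters (sysctl names are the
# standard ZFS arcstats; assumed present on 8.1):
#   sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
#
# Converting the byte count in such a line to megabytes with awk,
# shown here against a sample line so the pipe is self-contained:
echo "kstat.zfs.misc.arcstats.size: 4294967296" |
    awk -F': ' '{printf "%d MB\n", $2 / 1048576}'
```

If the reported size sits at arc_max while swap is in use, the ARC cap is competing with the web-server processes for RAM.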
So my question is: why does ZFS slow down? Please help; the box is in
a production environment.