strange zfs behavior

Thiago Damas tdamas at gmail.com
Sat Jun 12 15:36:43 UTC 2010


  Hi,
  I'm testing some configurations using ZFS with 4 Seagate disks:
ad4: 953869MB <Seagate ST31000528AS CC38> at ata2-master UDMA100 SATA 3Gb/s
ad6: 953869MB <Seagate ST31000528AS CC38> at ata3-master UDMA100 SATA 3Gb/s
ad8: 953869MB <Seagate ST31000528AS CC38> at ata4-master UDMA100 SATA 3Gb/s
ad10: 953869MB <Seagate ST31000528AS CC38> at ata5-master UDMA100 SATA 3Gb/s

  The system is amd64 8.1-BETA1 (also tested on 8.0-p3).
  My only tuning is (in /boot/loader.conf):
vm.kmem_size_scale="2"
vfs.zfs.txg.timeout=5

  The machine has 4 GB RAM, and the SATA controller is an LSI53C1020/1030
(Adaptec 1020).

  At first, I used the following:
zpool create -f -m /storage tank mirror /dev/ad4 /dev/ad6 mirror /dev/ad8 /dev/ad10
 and I noticed ad10 was slower than the others:
svc_t:
http://i48.tinypic.com/34s1ndd.gif
http://i45.tinypic.com/m9x6ra.gif
 wait:
http://i47.tinypic.com/2uqksv5.gif
http://i49.tinypic.com/200qza9.gif

  Now, I swapped the configuration:
zpool create -f -m /storage tank mirror /dev/ad10 /dev/ad8 mirror /dev/ad6 /dev/ad4
 and now ad4 is slower than the others.
 svc_t:
http://i49.tinypic.com/2uxtqww.gif
http://i50.tinypic.com/10dbcix.gif
 wait:
http://i46.tinypic.com/331f5lf.gif
http://i46.tinypic.com/2lc7c5k.gif

  Will the last disk in the ZFS configuration always perform like that?
  Any comments?
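
  In case it helps isolate whether it's the drive or something above it, here
is the kind of raw sequential-read check I can run on each disk outside the
pool (just a sketch; the bs/count values are arbitrary, and it should only be
run while the pool is idle or exported):

```shell
# rawread: time a sequential read from each device given (sketch).
# bs = 1 MiB; count caps the read at 1 GiB per device.
rawread() {
    for d in "$@"; do
        echo "== $d =="
        # dd prints its throughput summary on stderr; keep only that line
        dd if="$d" of=/dev/null bs=1048576 count=1024 2>&1 | tail -1
    done
}

# e.g.: rawread /dev/ad4 /dev/ad6 /dev/ad8 /dev/ad10
```

  If one disk is consistently slower here too, the problem would be below ZFS
(disk, cable, or controller channel) rather than the vdev ordering.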


More information about the freebsd-hackers mailing list