FreeBSD 10.0-BETA1 #8 r256765M spend too much time in locks

Vitalij Satanivskij satan at ukr.net
Thu Oct 24 07:48:36 UTC 2013


Hello.

After upgrading the system from an old CURRENT (r245701) to a fresh CURRENT r255173 (then switching to stable/10 r256765M),
I found some strange system behavior:

The diff between r256765M and r256765 is:
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (revision 256765)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        (working copy)
@@ -5147,7 +5147,7 @@
        len = l2hdr->b_asize;
        cdata = zio_data_buf_alloc(len);
        csize = zio_compress_data(ZIO_COMPRESS_LZ4, l2hdr->b_tmp_cdata,
-           cdata, l2hdr->b_asize, (size_t)SPA_MINBLOCKSIZE);
+           cdata, l2hdr->b_asize, (size_t)(1ULL << l2hdr->b_dev->l2ad_vdev->vdev_ashift));
 
        if (csize == 0) {
                /* zero block, indicate that there's nothing to write */


But the same situation existed before this patch.


The system load is too high:
CPU:  6.8% user,  0.0% nice, 57.3% system,  0.8% interrupt, 35.1% idle

hotkernel (a DTrace script) says:

kernel`__mtx_unlock_flags                                 292   0.3%
kernel`__mtx_lock_flags                                   315   0.3%
zfs.ko`lzjb_compress                                      349   0.3%
kernel`__rw_wlock_hard                                    686   0.7%
kernel`spinlock_exit                                     1050   1.0%
kernel`vmem_xalloc                                       7516   7.3%
kernel`_sx_xlock_hard                                    8664   8.5%
kernel`acpi_cpu_c1                                       9737   9.5%
kernel`cpu_idle                                         20189  19.7%
kernel`__mtx_lock_sleep                                 45952  44.9%
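For anyone wanting to reproduce the sampling: hotkernel essentially boils down to a profile-provider one-liner. This is a minimal sketch of what it measures (requires root and DTrace support loaded), not the full DTraceToolkit script:

```shell
# Sample the on-CPU kernel PC at 997 Hz for 30 seconds and count
# samples per kernel function -- roughly what hotkernel reports.
dtrace -n 'profile-997 /arg0/ { @[func(arg0)] = count(); }
           tick-30s { exit(0); }'
```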



Trying to understand where the problem is, I built a kernel with lock profiling

and collected some data (for one minute).

(file attached) 
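For reference, the one-minute capture was done roughly as follows. This is a sketch: the kernel must be built with `options LOCK_PROFILING`, and the commands use the standard debug.lock.prof sysctls:

```shell
sysctl debug.lock.prof.reset=1     # clear any previous counters
sysctl debug.lock.prof.enable=1    # start collecting
sleep 60                           # the one-minute window
sysctl debug.lock.prof.enable=0    # stop collecting
sysctl -n debug.lock.prof.stats > lockprof.txt
```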

Aggregating it, most of the lock time is in:

14,159818        /usr/src/sys/kern/subr_vmem.c:1128(sleep mutex:kmem arena) 
9,553523         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1597(sx:buf_hash_table.ht_locks[i].ht_lock) 
8,386943         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3541(sx:l2arc_buflist_mtx) 
8,110206         /usr/src/sys/kern/subr_vmem.c:1230(sleep mutex:kmem arena) 
5,909429         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1969(sx:arc_mru->arcs_locks[i].arcs_lock) 
5,452206         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1969(sx:arc_mfu->arcs_locks[i].arcs_lock) 
5,050224         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:303(sx:tx->tx_cpu[c].tc_open_lock) 
4,232819         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:891(sx:buf_hash_table.ht_locks[i].ht_lock) 
4,211348         /usr/src/sys/kern/vfs_subr.c:2101(lockmgr:zfs) 
4,011656         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:862(sx:buf_hash_table.ht_locks[i].ht_lock) 
3,823698         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2009(sx:arc_eviction_mtx) 
2,697344         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:126(sx:h->hash_mutexes[i]) 
2,343242         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1256(sx:arc_mfu->arcs_locks[i].arcs_lock) 
1,752713         /usr/src/sys/kern/vfs_lookup.c:707(lockmgr:zfs) 
1,580856         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_znode.c:1136(sx:zfsvfs->z_hold_mtx[i]) 
1,534242         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1291(sx:arc_mfu->arcs_locks[i].arcs_lock) 
1,331583         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:129(sx:db->db_mtx) 
1,105058         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:427(sx:vq->vq_lock) 
1,080855         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c:396(sx:vq->vq_lock) 
0,858568         /usr/src/sys/kern/vfs_cache.c:488(rw:Name Cache) 
0,831652         /usr/src/sys/vm/vm_kern.c:344(rw:kmem vm object) 
0,815439         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1845(sx:buf_hash_table.ht_locks[i].ht_lock) 
0,812613         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1256(sx:arc_mru->arcs_locks[i].arcs_lock) 
0,794274         /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:1529(lockmgr:zfs) 
0,669845         /usr/src/sys/vm/uma_core.c:2097(sleep mutex:zio_cache)
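The per-call-site aggregation above can be reproduced with a short pipeline over the profiling output. The column positions are an assumption based on the default debug.lock.prof.stats layout (wait_total in the fourth field, lock name last), and lock names may contain spaces, so treat this as a rough sketch:

```shell
# Sum the wait_total column (assumed field 4) per call site (assumed
# last field) and print the 20 busiest lock points.
sysctl -n debug.lock.prof.stats |
    awk 'NR > 1 { wait[$NF] += $4 } END { for (n in wait) print wait[n], n }' |
    sort -rn | head -20
```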




Short system description:
CPU: E5-1650
RAM: 128 GB DDR3-1600

Storage subsystem:

36 x 1 TB WD RE4 drives on an LSI SAS2308 controller
3 x 180 GB Intel SSD 530 series drives as L2ARC cache


The pool is 18 mirrors (two drives in each mirror) plus 3 cache devices,

e.g.:
....
          mirror-14     ONLINE       0     0     0
            gpt/disk28  ONLINE       0     0     0
            gpt/disk29  ONLINE       0     0     0
          mirror-15     ONLINE       0     0     0
            gpt/disk30  ONLINE       0     0     0
            gpt/disk31  ONLINE       0     0     0
          mirror-16     ONLINE       0     0     0
            gpt/disk32  ONLINE       0     0     0
            gpt/disk33  ONLINE       0     0     0
          mirror-17     ONLINE       0     0     0
            gpt/disk34  ONLINE       0     0     0
            gpt/disk35  ONLINE       0     0     0
        cache
          ada1          ONLINE       0     0     0
          ada2          ONLINE       0     0     0
          ada3          ONLINE       0     0     0


The pool has two ZFS datasets.

The main one, with options (diffs from default):
compression           lz4
secondarycache        all
sync                  disabled

Data size for it is around 6 TB (compressed), e.g.:
disk1  refcompressratio      1.56x                                          -
disk1  written               5,99T                                          -
disk1  logicalused           10,8T                                          -
disk1  logicalreferenced     9,32T                                          -


and another one with options:
recordsize            4K (it was 32K before, but the internal software mostly
                      uses 4K data blocks, so we tried changing it without
                      reallocating the existing data)
compression           off
sync                  disabled
secondarycache        all

Data size on it is around 1.4 TB.

The ARC is configured to use at most 80 GB.
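The ARC limits in use here would normally be set via the standard loader tunables; a sketch of the corresponding /boot/loader.conf lines for the values in this report (80 GB max, 40 GB min):

```shell
# /boot/loader.conf
vfs.zfs.arc_max="80G"
vfs.zfs.arc_min="40G"
```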

top usually looks like this:

Mem: 14G Active, 13G Inact, 94G Wired, 497M Cache, 3297M Buf, 2214M Free
ARC: 80G Total, 2010M MFU, 70G MRU, 49M Anon, 3822M Header, 4288M Other


Load averages are around 10-20, depending on the time of day.


zpool iostat disk1 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
disk1       7,45T  8,86T    546  1,49K  16,4M  13,4M
disk1       7,45T  8,86T    272  3,91K  11,7M  27,4M
disk1       7,45T  8,86T    344  2,94K  7,26M  25,2M
disk1       7,45T  8,86T    224  2,02K  9,91M  21,8M
disk1       7,45T  8,86T    244  2,35K  8,23M  18,4M
disk1       7,45T  8,86T    245  2,54K  6,44M  23,4M
disk1       7,45T  8,86T    114  2,94K  3,28M  13,3M
disk1       7,45T  8,86T    288  4,43K  6,09M  33,5M
disk1       7,45T  8,86T    157  1,26K  2,98M  15,7M
disk1       7,45T  8,86T     94    842  3,07M  13,6M
disk1       7,45T  8,86T    327  1,71K  9,63M  8,21M
disk1       7,45T  8,86T    237  1,81K  5,73M  29,3M
disk1       7,45T  8,86T    247  3,47K  5,17M  29,6M
disk1       7,45T  8,86T    165  2,37K  3,22M  16,7M
disk1       7,45T  8,86T    155  3,23K  3,27M  23,9M

This is strange, as the timeout is set to 10 seconds.

What is interesting: after a reboot the system works fine for some time, at least until the ARC reaches its 80 GB maximum.
The low limit for the ARC is 40 GB; strangely, the old system could reclaim memory from the ARC, e.g. like this:


Mem: 32G Active, 8797M Inact, 78G Wired, 2370M Cache, 209M Buf, 3977M Free
ARC: 43G Total, 2204M MFU, 28G MRU, 135M Anon, 7898M Header, 5301M Other

On the new system, the ARC grows to its maximum allowed size and stays there.

So for now the question is: what could this be, and what can we try in order to improve system performance?




