zfs/bsd conf call minutes
Tushar Tambay
tushar.tambay at gmail.com
Tue Jun 22 20:23:23 UTC 2010
date : 6/22/10
attendees : justin, pawel, xin, ganesh, samir, steve, tushar, john
development branch & checkin logistics
- /project/zfs/stable/8 will be our primary collaborative branch
- everyone should have checkin privileges for this already
- justin will do a weekly merge of changes/fixes from the bsd/head and
bsd/stable branches to /project/zfs/stable/8 and /project/zfs/current
- pawel will ask martin to do a weekly merge of changes/fixes from the
/user/pjd/zfs branch to /project/zfs/stable/8 and /project/zfs/current
- ganesh will send out a description of zfs test suite layout to
zfs-devel at freebsd.org alias and run the discussion to decide where the
test suite will be checked in
- samir will establish checkin criteria based on the zfs test suite
- we will delay discussing the details of the translation layer required
to support zfs v24 on the official 8.x BSD release until we get a better
feel for the stability/performance of zfs
zfs / python
- on opensolaris, the zfs utilities have been re-written in python (they
were previously in C)
- python is not available in the base bsd distribution (only in ports)
- we will keep the zfs utilities in python and include zfs in the base bsd
distribution, but users will need to install the python port to get full
zfs functionality
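Given that python lives only in ports, an admin can check for it before expecting the full utility set. A minimal sketch (the lang/python port origin shown is an assumption about the era's ports tree):

```shell
# sketch: detect whether python is available, since the python-based
# zfs utilities depend on it for full functionality
if command -v python >/dev/null 2>&1; then
    zfs_py_status="present"
else
    # port origin lang/python is an assumed path; adjust to the local tree
    zfs_py_status="missing (install via: cd /usr/ports/lang/python && make install clean)"
fi
echo "python port: ${zfs_py_status}"
```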
zfs/dedupe
- suspected dedupe performance issue needs to be quantified with further
testing
- dedupe performance is primarily tied to how much of the dedupe index can
be cached in memory.
- zfs can introduce a second-level cache (l2arc) using (say) SSD storage,
but this needs testing/verification.
- pawel suspects a priority/scheduling issue with dedupe - under heavy
dedupe load, system becomes unresponsive/deadlocked.
- no dedupe tests exist in the recently ported test suite
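For the cache-size testing discussed above, the second-level cache and dedupe are enabled with ordinary zpool/zfs commands. An illustrative sketch, not runnable without a real pool (pool name tank and device ada1 are hypothetical):

```shell
# hypothetical pool/device names; illustrative only
zpool add tank cache ada1    # attach an ssd as a second-level (l2arc) cache device
zfs set dedup=on tank        # enable dedupe so its index exercises the cache
zpool iostat -v tank 5       # watch per-device (incl. cache) utilization during tests
```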
zfs/memory issue
- KVA exhaustion is a problem, but one limited to 32-bit platforms
- besides KVA, there are other memory usage issues (though not very
severe). pawel's recent fix to reclaim memory used by the name cache should
address some of these
- bsd memory tuning is primarily geared toward ufs, so assumptions about
inode and other structure sizes need to be revisited for zfs.
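For the 32-bit KVA problem above, the usual workaround is to enlarge the kernel address space and cap the ARC via loader tunables. A sketch of /boot/loader.conf values (the tunable names are real freebsd ones; the specific sizes are assumptions to be tuned per machine):

```
# /boot/loader.conf -- example values only; tune per machine
vm.kmem_size="512M"        # enlarge kernel memory (KVA) on 32-bit
vm.kmem_size_max="512M"
vfs.zfs.arc_max="256M"     # cap the ARC so it cannot exhaust KVA
```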
zfs / io
- (pawel) currently zfs commits changes in transaction groups every 10
(previously 30) seconds and flushes whenever the ARC is full.
- (justin) zfs flushing to disk is very bursty even when the applied i/o
load is very uniform.
- zfs needs better flush-behind algorithms for (application) i/o
- justin will fix the broken disk elevator code. he has seen data
corruption / unmountable zfs file systems caused by it.
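The commit interval mentioned above is exposed as a tunable on freebsd, so it can be adjusted when experimenting with the bursty flush behavior. A config sketch (the value shown is illustrative, not a recommendation):

```
# /etc/sysctl.conf -- illustrative; vfs.zfs.txg.timeout is the
# transaction-group commit interval, in seconds
vfs.zfs.txg.timeout=5
```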
zfs / zvols
- bioflush has no sync/async flag; this is a big problem for zfs/zvols
- currently all i/o is treated as async and callers need to call a
separate interface to force a flush
- ... pretty sure i've got some details wrong on this. justin - can you
please correct this ?
next conf call
- 7/20/2010
cheers,
--
tyt
phone: 408 203 9736 (c)