svn commit: r292361 - in projects/zfsd/head/tests/sys/cddl/zfs: include tests/bootfs tests/cache tests/cachefile tests/clean_mirror tests/cli_root tests/cli_root/zfs_get tests/cli_root/zfs_receive ...

Alan Somers asomers at FreeBSD.org
Wed Dec 16 20:34:35 UTC 2015


Author: asomers
Date: Wed Dec 16 20:34:32 2015
New Revision: 292361
URL: https://svnweb.freebsd.org/changeset/base/292361

Log:
  Another round of ZFS test reliability/performance improvements.
  
  tests/sys/cddl/zfs/include/libtest.kshlib:
  	- Add create_vdevs, which creates sparse files for use as vdevs.
  	  Set the default size to 10GB to encourage ZFS to allocate
  	  blocks in large chunks for performance.
  	- Simplify fill_fs, which was overly complicated.
  	- Add get_tvd_prop, which uses zdb to determine the value of the
  	  requested top-level vdev's property.
  	- Add {raidz_,}dva_to_block_addr, which take a DVA and convert
  	  it to the block offset it maps to.
  	- Add vdevs_for_tvd, which returns the type, the total number of
  	  children, and the names of the usable children for the given
  	  top-level vdev.
  	- Add dva_to_vdev_off, which figures out what vdev and block
  	  offset the requested DVA maps to.
  	- Add file_dva, which uses the new zdb -o function to determine
  	  the DVA for the requested file's block given the requested
  	  offset and level in its block hierarchy.
  	- Add corrupt_file, which uses all of the above together to
  	  figure out exactly where to scribble random data onto a vdev
  	  (see the sketch after this list).
  	- Add busy_path and reap_children, which provide a generic
  	  interface for making a path busy in the background and for
  	  reaping the child processes generated that way.
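
  	  For example, the new helpers compose roughly as follows.  This
  	  is an illustrative sketch, not a test from this commit; the
  	  pool and path names are placeholders, and a real test would
  	  also sync before corrupting and wait for the scrub to finish:

  		# Mirror two sparse file vdevs and write a test file.
  		log_must create_vdevs $TMPDIR/vd0 $TMPDIR/vd1
  		log_must $ZPOOL create $TESTPOOL mirror $TMPDIR/vd0 $TMPDIR/vd1
  		log_must $FILE_WRITE -o create -f /$TESTPOOL/f -b 131072 -c 8

  		# Scribble on the exact block backing the file on the
  		# first mirror child, then verify that a scrub sees
  		# errors on that vdev only.
  		corrupt_file $TESTPOOL /$TESTPOOL/f 0
  		log_must $ZPOOL scrub $TESTPOOL
  		log_must vdev_has_errors $TESTPOOL $TMPDIR/vd0

  		# Keep the mountpoint busy in the background; reap later.
  		busy_path /$TESTPOOL
  		reap_children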
  
  tests/sys/cddl/zfs/tests/utils_test/utils_test.kshlib:
  tests/sys/cddl/zfs/include/libtest.kshlib:
  	- Move populate_dir from utils_test to libtest and generalize it
  	  to let the caller specify the file count, write size, and data
  	  pattern, and whether to snapshot after each file, as sketched
  	  below.
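
  	  For example (illustrative values), the following writes ten
  	  1MB files, each as 8 x 128KB blocks with the iteration number
  	  as the data pattern, snapshotting after each file:

  		populate_dir $TESTDIR/file 10 8 131072 ITER \
  		    $TESTPOOL/$TESTFS@snap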
  
  tests/sys/cddl/zfs/include/libtest.kshlib:
  tests/sys/cddl/zfs/tests/redundancy/redundancy.kshlib:
  tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_001_pos.ksh:
  	- Add {vdev,pool}_has_errors, originally from the redundancy
  	  suite & zpool_clear_001_pos.  These return whether the pool
  	  (or vdev) has the given number of errors, or any errors if no
  	  count is given; see the examples below.  The underlying
  	  implementation counts the number of errors reported in 'zpool
  	  status'.
  	- Move is_healthy from redundancy to libtest, renaming it
  	  is_pool_healthy.
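
  	  For example (illustrative; $TMPDIR/vd0 is a placeholder):

  		log_must pool_has_errors $TESTPOOL	# any errors at all
  		log_must pool_has_errors $TESTPOOL 2	# exactly two errors
  		log_must vdev_has_errors $TESTPOOL $TMPDIR/vd0 0 # vdev clean
  		log_must is_pool_healthy $TESTPOOL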
  
  tests/sys/cddl/zfs/tests/redundancy/redundancy.kshlib:
  	- replace_missing_devs(): Don't recreate the vdev if it already
  	  exists.  In the case where damage_devs() is used (as opposed
  	  to remove_devs()), the vdev file already exists and its state
  	  has been reconciled (though it is not healthy until it is
  	  replaced later in this function).
  
  tests/sys/cddl/zfs/tests/slog/...:
  	- Refactor device state table generation to a separate function.
  	  Simplify the awk logic.
  	- Refactor repeated loop patterns.
  
  tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_004_pos.ksh:
  tests/sys/cddl/zfs/tests/redundancy/...:
  	- Reduce the runtime of these tests by >90% by using sparse
  	  vdevs.
  	- For tests that intentionally corrupt data, this is also
  	  accomplished by using the new mechanism that targets the
  	  exact block required to generate corruption rather than
  	  scribbling on entire vdevs.
  
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/...:
  	- Fix several tests that were inadvertently testing with one
  	  fewer vdev than they were supposed to ("$VDIV3" instead of
  	  "$VDEV3").
  	- Rewrite several tests to simply regenerate pools from scratch
  	  instead of using a complicated mechanism of "saving" vdevs via
  	  tar archives, which doesn't work well with large sparse vdevs.
  	- More refactoring.
  
  tests/sys/cddl/zfs/tests/compression/compress_001_pos.ksh:
  	- Replace a silly "sleep 60" with a sync call to force the just
  	  written file data to be pushed to disk.
  
  tests/sys/cddl/zfs/tests/inheritance/inheritance_test.sh:
  	- Increase timeout from 15 minutes to 30.  This test (among
  	  others) seems to expose ZFS sync task latency issues that need
  	  to be fixed.
  
  tests/sys/cddl/zfs/tests/hotspare/hotspare_detach_001_pos.ksh:
  tests/sys/cddl/zfs/tests/hotspare/hotspare_scrub_001_pos.ksh:
  	- Replace a silly "write a file 3/4ths the size of the pool"
  	  routine with a fixed "write a 100MB file"; these tests aren't
  	  dependent on the file size, they're just trying to generate
  	  some pool I/O.
  
  tests/sys/cddl/zfs/tests/clean_mirror/clean_mirror_common.kshlib:
  	- Don't bother creating a files array since file names are
  	  predictable.
  
  tests/sys/cddl/zfs/tests/...:
  	- Remove unnecessary cleanup steps; many tests cleaned up things
  	  in a pool only to destroy it in the next step.  Many tests
  	  also "restored" temp files created for the test instead of
  	  just destroying them.
  	- Replace most uses of mkfile or dd to create file vdevs with
  	  calls to create_vdevs(), using VDEV_SIZE where necessary (see
  	  the snippet after this list).  Some tests still create vdev
  	  files the old way because they can't be changed quite so
  	  easily.  Most tests don't care about the vdev size; for those
  	  tests it improves ZFS performance to run on large sparse
  	  vdevs.
  	- Refactor many excessively indented code sections.
  	- Replace scores of duplicated code blocks with calls to the
  	  various newly introduced library functions.
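
  	  For example, a test that needs a specific vdev size can
  	  temporarily override the default, following the pattern used
  	  in zpool_add_006_pos:

  		VDEV_SIZE=64m
  		log_must create_vdevs $TESTDIR/file.0 $TESTDIR/file.1
  		unset VDEV_SIZE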
  
  Submitted by:	Will
  Sponsored by:	Spectra Logic Corp

Added:
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/replacement/replacement.kshlib
Modified:
  projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_004_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_008_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cache/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cachefile/cachefile_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/clean_mirror_common.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/default.cfg
  projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_get/zfs_get_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_receive/zfs_receive_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool/zpool_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_002_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_003_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_005_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_010_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_export/zpool_export_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/Makefile
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import.cfg
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_005_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_007_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_008_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_010_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_011_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_set/zpool_set_002_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_set/zpool_set_003_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_upgrade/zpool_upgrade.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_user/misc/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/compression/Makefile
  projects/zfsd/head/tests/sys/cddl/zfs/tests/compression/compress_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/history/history_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/hotspare/hotspare.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/hotspare/hotspare_detach_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/hotspare/hotspare_replace_002_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/hotspare/hotspare_scrub_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/inheritance/inherit_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/inheritance/inheritance_test.sh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/online_offline/online_offline_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/online_offline/online_offline_002_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/quota/quota.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/redundancy/redundancy.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/redundancy/redundancy_004_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/replacement/Makefile
  projects/zfsd/head/tests/sys/cddl/zfs/tests/replacement/replacement_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/replacement/replacement_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/replacement/replacement_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/scrub_mirror/default.cfg
  projects/zfsd/head/tests/sys/cddl/zfs/tests/scrub_mirror/scrub_mirror_common.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_005_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_007_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_008_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_009_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_011_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_012_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_013_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/slog/slog_014_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/clone_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/rollback_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/rollback_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_007_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_008_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_011_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/snapshot/snapshot_013_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/userquota/userquota_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_005_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_007_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_008_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/utils_test/utils_test_009_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_degrade_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_degrade_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_fault_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_replace_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_replace_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_replace_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zvol/zvol_ENOSPC/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zvol/zvol_ENOSPC/zvol_ENOSPC_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zvol/zvol_misc/zvol_misc_002_pos.ksh

Modified: projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib	Wed Dec 16 20:34:32 2015	(r292361)
@@ -745,49 +745,37 @@ function size_of_file # fname
 #	destdir:    is the directory where everything is to be created under
 #	dirnum:	    the maximum number of subdirectories to use, -1 no limit
 #	filenum:    the maximum number of files per subdirectory
-#	bytes:	    number of bytes to write
-#	num_writes: numer of types to write out bytes
-#	data:	    the data that will be writen
+#	blocksz:    number of bytes per block
+#	num_writes: number of blocks to write
+#	data:	    the data that will be written
 #
 #	E.g.
 #	file_fs /testdir 20 25 1024 256 0
 #
-# Note: bytes * num_writes equals the size of the testfile
+# Note: blocksz * num_writes equals the size of the testfile
 #
-function fill_fs # destdir dirnum filenum bytes num_writes data
+function fill_fs # destdir dirnum filenum blocksz num_writes data
 {
 	typeset destdir=${1:-$TESTDIR}
 	typeset -i dirnum=${2:-50}
 	typeset -i filenum=${3:-50}
-	typeset -i bytes=${4:-8192}
+	typeset -i blocksz=${4:-8192}
 	typeset -i num_writes=${5:-10240}
 	typeset -i data=${6:-0}
 
-	typeset -i odirnum=1
-	typeset -i idirnum=0
-	typeset -i fn=0
 	typeset -i retval=0
-
-	log_must $MKDIR -p $destdir/$idirnum
-	while (( $odirnum > 0 )); do
-		if (( dirnum >= 0 && idirnum >= dirnum )); then
-			odirnum=0
-			break
-		fi
-		$FILE_WRITE -o create -f $destdir/$idirnum/$TESTFILE.$fn \
-		    -b $bytes -c $num_writes -d $data
-		retval=$?
-		if (( $retval != 0 )); then
-			odirnum=0
-			break
-		fi
-		if (( $fn >= $filenum )); then
-			fn=0
-			(( idirnum = idirnum + 1 ))
-			log_must $MKDIR -p $destdir/$idirnum
-		else
-			(( fn = fn + 1 ))
-		fi
+	typeset -i dn=0 # current dir number
+	typeset -i fn=0 # current file number
+	while (( retval == 0 )); do
+		(( dirnum >= 0 && dn >= dirnum )) && break
+		typeset curdir=$destdir/$dn
+		log_must $MKDIR -p $curdir
+		for (( fn = 0; $fn < $filenum && $retval == 0; fn++ )); do
+			log_cmd $FILE_WRITE -o create -f $curdir/$TESTFILE.$fn \
+			    -b $blocksz -c $num_writes -d $data
+			retval=$?
+		done
+		(( dn = dn + 1 ))
 	done
 	return $retval
 }
@@ -806,8 +794,7 @@ function get_prop # property dataset
 
 	prop_val=$($ZFS get -pH -o value $prop $dataset 2>/dev/null)
 	if [[ $? -ne 0 ]]; then
-		log_note "Unable to get $prop property for dataset " \
-		"$dataset"
+		log_note "Unable to get $prop property for dataset $dataset"
 		return 1
 	fi
 
@@ -1160,6 +1147,19 @@ function destroy_pool #pool
 }
 
 #
+# Create file vdevs.
+# By default this generates sparse vdevs 10GB in size, for performance.
+#
+function create_vdevs # vdevs
+{
+	typeset vdsize=10G
+
+	[ -n "$VDEV_SIZE" ] && vdsize=$VDEV_SIZE
+	rm -f $@ || return 1
+	truncate -s $vdsize $@
+}
+
+#
 # Firstly, create a pool with 5 datasets. Then, create a single zone and 
 # export the 5 datasets to it. In addition, we also add a ZFS filesystem
 # and a zvol device to the zone.
@@ -1195,7 +1195,7 @@ function zfs_zones_setup #zone_name zone
 	#
 	if verify_slog_support ; then
 		typeset sdevs="$TMPDIR/sdev1 $TMPDIR/sdev2"
-		log_must $MKFILE 100M $sdevs
+		log_must create_vdevs $sdevs
 		log_must $ZPOOL add $pool_name log mirror $sdevs
 	fi
 
@@ -1307,6 +1307,7 @@ function wait_for_checked # timeout dt <
 	typeset -i start=$(date '+%s')
 	typeset -i endtime
 
+	log_note "Waiting $timeout seconds (checked every $dt seconds) for: $*"
 	((endtime = start + timeout))
 	while :; do
 		$*
@@ -1632,6 +1633,147 @@ function check_pool_status # pool token 
 	return $?
 }
 
+function vdev_pool_error_count
+{
+	typeset errs=$1
+	if [ -z "$2" ]; then
+		test $errs -gt 0; ret=$?
+	else
+		test $errs -eq $2; ret=$?
+	fi
+	log_debug "vdev_pool_error_count: errs='$errs' \$2='$2' ret='$ret'"
+	return $ret
+}
+
+#
+# Generate a pool status error file suitable for pool_errors_from_file.
+# If the pool is healthy, the file will show no errors.  Otherwise, the
+# caller must handle the returned temporary file appropriately.
+#
+function pool_error_file # <pool>
+{
+	typeset pool="$1"
+
+	typeset tmpfile=$TMPDIR/pool_status.${TESTCASE_ID}
+	$ZPOOL status -x $pool > ${tmpfile}
+	echo $tmpfile
+}
+
+#
+# Evaluates <file> counting the number of errors.  If vdev specified, only
+# that vdev's errors are counted.  Returns the total number.  <file> will be
+# deleted on exit.
+#
+function pool_errors_from_file # <file> [vdev]
+{
+	typeset file=$1
+	shift
+	typeset checkvdev="$1"
+
+	typeset line
+	typeset -i fetchbegin=1
+	typeset -i errnum=0
+	typeset -i c_read=0
+	typeset -i c_write=0
+	typeset -i c_cksum=0
+
+	cat ${file} | $EGREP -v "pool:" | while read line; do 
+	 	if (( $fetchbegin != 0 )); then
+                        $ECHO $line | $GREP "NAME" >/dev/null 2>&1
+                        (( $? == 0 )) && (( fetchbegin = 0 ))
+                         continue
+                fi
+
+		if [[ -n $checkvdev ]]; then 
+			$ECHO $line | $GREP $checkvdev >/dev/null 2>&1
+			(( $? != 0 )) && continue
+			c_read=`$ECHO $line | $AWK '{print $3}'`
+			c_write=`$ECHO $line | $AWK '{print $4}'`
+			c_cksum=`$ECHO $line | $AWK '{print $5}'`
+			if [ $c_read != 0 ] || [ $c_write != 0 ] || \
+		   	   [ $c_cksum != 0 ]
+			then
+				(( errnum = errnum + 1 ))
+			fi
+			break
+		fi
+
+		c_read=`$ECHO $line | $AWK '{print $3}'`
+		c_write=`$ECHO $line | $AWK '{print $4}'`
+		c_cksum=`$ECHO $line | $AWK '{print $5}'`
+		if [ $c_read != 0 ] || [ $c_write != 0 ] || \
+		    [ $c_cksum != 0 ]
+		then
+			(( errnum = errnum + 1 ))
+		fi
+	done
+
+	rm -f $file
+	echo $errnum
+}
+
+#
+# Returns whether the vdev has the given number of errors.
+# If the number is unspecified, any non-zero number returns true.
+#
+function vdev_has_errors # pool vdev [errors]
+{
+	typeset pool=$1
+	typeset vdev=$2
+	typeset tmpfile=$(pool_error_file $pool)
+	log_note "Original pool status:"
+	cat $tmpfile
+
+	typeset -i errs=$(pool_errors_from_file $tmpfile $vdev)
+	vdev_pool_error_count $errs $3
+}
+
+#
+# Returns whether the pool has the given number of errors.
+# If the number is unspecified, any non-zero number returns true.
+#
+function pool_has_errors # pool [errors]
+{
+	typeset pool=$1
+	typeset tmpfile=$(pool_error_file $pool)
+	log_note "Original pool status:"
+	cat $tmpfile
+
+	typeset -i errs=$(pool_errors_from_file $tmpfile)
+	vdev_pool_error_count $errs $2
+}
+
+#
+# Return whether the pool is healthy
+#
+function is_pool_healthy # pool
+{
+	typeset pool=$1
+
+	typeset healthy_output="pool '$pool' is healthy"
+	typeset real_output=$($ZPOOL status -x $pool)
+
+	if [[ "$real_output" == "$healthy_output" ]]; then
+		return 0
+	else
+		typeset -i ret
+		$ZPOOL status -x $pool | $GREP "state:" | \
+			$GREP "FAULTED" >/dev/null 2>&1
+		ret=$?
+		(( $ret == 0 )) && return 1
+		typeset l_scan
+		typeset errnum
+		l_scan=$($ZPOOL status -x $pool | $GREP "scan:")
+		l_scan=${l_scan##*"with"}
+		errnum=$($ECHO $l_scan | $AWK '{print $1}')
+		if [ "$errnum" != "0" ]; then
+		 	return 1
+		else
+			return 0
+		fi
+	fi
+}
+
 #
 # These 5 following functions are instance of check_pool_status()
 #	is_pool_resilvering - to check if the pool is resilver in progress
@@ -2503,7 +2645,7 @@ function verify_slog_support
 	typeset sdev=$dir/b
 
 	$MKDIR -p $dir
-	$MKFILE 64M $vdev $sdev
+	log_must create_vdevs $vdev $sdev
 
 	typeset -i ret=0
 	if ! $ZPOOL create -n $pool $vdev log $sdev > /dev/null 2>&1; then
@@ -2997,3 +3139,351 @@ function restart_zfsd
 	fi
 	$RM -f $TMPDIR/.zfsd_enabled_during_stf_zfs_tests
 }
+
+#
+# Using the given <vdev>, obtain the value of the property <propname> for
+# the given <tvd> identified by numeric id.
+#
+function get_tvd_prop # vdev tvd propname
+{
+	typeset vdev=$1
+	typeset -i tvd=$2
+	typeset propname=$3
+
+	$ZDB -l $vdev | $AWK -v tvd=$tvd -v prop="${propname}:" '
+		BEGIN { start = 0; }
+		/^        id:/ && ($2==tvd) { start = 1; next; }
+		(start==0) { next; }
+		/^        [a-z]+/ && ($1==prop) { print $2; exit; }
+		/^        children/ { exit; }
+		'
+}
+
+#
+# Convert a DVA into a physical block address, printed as an offset in
+# 512-byte blocks (for use with dd).  This takes the usual printed form, in
+# which offsets are left-shifted so they represent bytes rather than sectors.
+#
+function dva_to_block_addr # dva
+{
+	typeset dva=$1
+
+	typeset offcol=$(echo $dva | cut -f2 -d:)
+	typeset -i offset="0x${offcol}"
+	# First add 4MB to skip the boot blocks and first two vdev labels,
+	# then convert to 512 byte blocks (for use with dd).  Note that this
+	# differs from simply adding 8192 blocks, since the input offset is
+	# given in bytes and has the actual ashift baked in.
+	(( offset += 4*1024*1024 ))
+	(( offset >>= 9 ))
+	echo "$offset"
+}
+
+#
+# Convert a RAIDZ DVA into a physical block address.  This has the same
+# output as dva_to_block_addr, but is more complicated due to RAIDZ.  ashift
+# is normally always 9, but RAIDZ uses the actual tvd ashift instead.
+# Furthermore, the number of vdevs changes the actual block for each device.
+# This is also tricky because ksh93 requires special effort to round up.
+#
+function raidz_dva_to_block_addr # dva ncols ashift
+{
+	typeset dva=$1
+	typeset -F ncols="${2}.0"
+	typeset -i ashift=$3
+
+	typeset -i offset=0x$(echo $dva | cut -f2 -d:)
+	(( offset >>= ashift ))
+
+	# Calculate the floating point offset after dividing by #columns.
+	typeset -F foff=$offset
+	(( foff /= ncols ))
+
+	# Convert the calculation to integer, then figure out if the
+	# remainder requires rounding up the integer calculation.
+	typeset -i ioff=$foff
+	(( foff -= ioff ))
+	[[ $foff -ge 0.5 ]] && (( ioff += 1 ))
+
+	# Now add the front 4MB and return.
+	(( ioff += 8192 ))
+	echo "$ioff"
+}
+
+#
+# Return the vdevs for the given toplevel vdev number.
+# Child vdevs will only be included if they are ONLINE.  Output format:
+#
+#   <toplevel vdev type> <nchildren> <child1>[:<child2> ...]
+#
+# Valid toplevel vdev types are mirror, raidz[1-3], leaf (which can be a
+# disk or a file).  Note that 'nchildren' can be larger than the number of
+# returned children; it represents the number of children regardless of how
+# many are actually online.
+#
+function vdevs_for_tvd # pool tvd
+{
+	typeset pool=$1
+	typeset -i tvd=$2
+
+	$ZPOOL status $pool | $AWK -v want_tvd=$tvd '
+		BEGIN {
+			 start = 0; tvd = -1; lvd = -1;
+			 type = "UNKNOWN"; disks = ""; disk = "";
+			 nchildren = 0;
+		}
+		/NAME.*STATE/ { start = 1; next; }
+		(start==0) { next; }
+
+		(tvd > want_tvd) { exit; }
+		END { print type " " nchildren " " disks; }
+
+		length(disk) > 0 {
+			if (length(disks) > 0) { disks = disks " "; }
+			if (substr(disk, 1, 1) == "/") {
+				disks = disks disk;
+			} else {
+				disks = disks "/dev/" disk;
+			}
+			disk = "";
+		}
+
+		/^\t(spares|logs)/ { tvd = want_tvd + 1; next; }
+		/^\t  (mirror|raidz[1-3])-[0-9]+/ { 
+			tvd += 1;
+			if (tvd == want_tvd) { type = substr($1, 1, 6); }
+			next;
+		}
+		/^\t  [\/A-Za-z]+/ {
+			tvd += 1;
+			if (tvd == want_tvd) {
+				(( nchildren += 1 ))
+				type = "leaf";
+				if ($2 == "ONLINE") { disk = $1; }
+			}
+			next;
+		}
+
+		(tvd < want_tvd) { next; }
+
+		/^\t    spare-[0-9]+/ { next; }
+		/^\t      [\/A-Za-z]+/ {
+			(( nchildren += 1 ))
+			if ($2 == "ONLINE") { disk = $1; }
+			next;
+		}
+
+		/^\t    [\/A-Za-z]+/ {
+			(( nchildren += 1 ))
+			if ($2 == "ONLINE") { disk = $1; }
+			next;
+		}
+		'
+}
+
+#
+# Get a vdev path & offset for a given pool/dataset and DVA.
+# If desired, can also select the toplevel vdev child number.
+#
+function dva_to_vdev_off # pool/dataset dva [leaf_vdev_num]
+{
+	typeset poollike=$1
+	typeset dva=$2
+	typeset -i leaf_vdev_num=$3
+
+	# vdevs are normally 0-indexed while arguments are 1-indexed.
+	(( leaf_vdev_num += 1 ))
+
+	# Strip any child datasets or snapshots.
+	pool=$(echo $poollike | sed -e 's,[/@].*,,g')
+	tvd=$(echo $dva | cut -d: -f1)
+
+	set -- $(vdevs_for_tvd $pool $tvd)
+	log_debug "vdevs_for_tvd: $* <EOM>"
+	tvd_type=$1; shift
+	nchildren=$1; shift
+
+	lvd=$(eval echo \$$leaf_vdev_num)
+	log_debug "type='$tvd_type' children='$nchildren' lvd='$lvd' dva='$dva'"
+	case $tvd_type in
+	raidz*)
+		ashift=$(get_tvd_prop $lvd $tvd ashift)
+		log_debug "raidz: ashift='${ashift}'"
+		off=$(raidz_dva_to_block_addr $dva $nchildren $ashift)
+		;;
+	*)
+		off=$(dva_to_block_addr $dva)
+		;;
+	esac
+	echo "${lvd}:${off}"
+}
+
+#
+# Get the DVA for the specified dataset's given filepath.
+#
+function file_dva # dataset filepath [level] [offset] [dva_num]
+{
+	typeset dataset=$1
+	typeset filepath=$2
+	typeset -i level=$3
+	typeset -F offset=$4
+	typeset -i dva_num=$5
+
+	# A lot of these numbers can be larger than 32-bit, so we have to
+	# use floats to manage them...  :(
+	typeset -F blksz=0
+	typeset -i blknum=0
+	typeset -F startoff
+
+	# The inner match is for 'DVA[0]=<0:1b412600:200>', in which the
+	# text surrounding the actual DVA is a fixed size with 8 characters
+	# before it and 1 after.
+	$ZDB -P -vvvvv -o "ZFS plain file" $dataset $filepath | \
+	    $AWK -v level=${level} -v dva_num=${dva_num} '
+		BEGIN { stage = 0; }
+		(stage == 0) && ($1=="Object") { stage = 1; next; }
+
+		(stage == 1) {
+			print $3 " " $4;
+			stage = 2; next;
+		}
+
+		(stage == 2) && /^Indirect blocks/ { stage=3; next; }
+		(stage < 3) { next; }
+
+		match($2, /L[0-9]/) {
+			if (substr($2, RSTART+1, RLENGTH-1) != level) { next; }
+		}
+		match($3, /DVA\[.*>/) {
+			dva = substr($3, RSTART+8, RLENGTH-9);
+			if (substr($3, RSTART+4, 1) == dva_num) {
+				print $1 " " dva;
+			}
+		}
+		' | \
+	while read line; do
+		log_debug "params='$blksz/$blknum/$startoff' line='$line'"
+		if (( blksz == 0 )); then
+			typeset -i iblksz=$(echo $line | cut -d " " -f1)
+			typeset -i dblksz=$(echo $line | cut -d " " -f2)
+
+			# Calculate the actual desired block starting offset.
+			if (( level > 0 )); then
+				typeset -i nbps_per_level
+				typeset -F indsz
+				typeset -i i=0
+
+				(( nbps_per_level = iblksz / 128 ))
+				(( blksz = dblksz ))
+				for (( i = 0; $i < $level; i++ )); do
+					(( blksz *= nbps_per_level ))
+				done
+			else
+				blksz=$dblksz
+			fi
+
+			(( blknum = offset / blksz ))
+			(( startoff = blknum * blksz ))
+			continue
+		fi
+
+		typeset lineoffstr=$(echo $line | cut -d " " -f1)
+		typeset -F lineoff=$(printf "%d" "0x${lineoffstr}")
+		typeset dva="$(echo $line | cut -d " " -f2)"
+		log_debug "str='$lineoffstr' lineoff='$lineoff' dva='$dva'"
+		if [[ -n "$dva" ]] && (( lineoff == startoff )); then
+			echo $line | cut -d " " -f2
+			return 0
+		fi
+	done
+	return 1
+}
+
+#
+# Corrupt the given dataset's filepath file.  This will obtain the first
+# level 0 block's DVA and scribble random bits on it.
+#
+function corrupt_file # dataset filepath [leaf_vdev_num]
+{
+	typeset dataset=$1
+	typeset filepath=$2
+	typeset -i leaf_vdev_num="$3"
+
+	dva=$(file_dva $dataset $filepath)
+	[ $? -ne 0 ] && log_fail "ERROR: Can't find file $filepath on $dataset"
+
+	vdoff=$(dva_to_vdev_off $dataset $dva $leaf_vdev_num)
+	vdev=$(echo $vdoff | cut -d: -f1)
+	off=$(echo $vdoff | cut -d: -f2)
+
+	log_note "Corrupting ${dataset}'s $filepath on $vdev at DVA $dva"
+	log_must $DD if=/dev/urandom of=$vdev seek=$off count=1 conv=notrunc
+}
+
+#
+# Given a number of files, this function will iterate through
+# the loop creating the specified number of files, whose names
+# will start with <basename>.
+#
+# The <data> argument is special: it can be "ITER", in which case
+# the -d argument will be the value of the current iteration.  It
+# can be 0, in which case it will always be 0.  Otherwise, it will
+# always be the given value.
+#
+# If <snapbase> is specified, a snapshot will be taken using the
+# argument as the snapshot basename.
+#
+function populate_dir # basename num_files write_count blocksz data snapbase
+{
+	typeset basename=$1
+	typeset -i num_files=$2
+	typeset -i write_count=$3
+	typeset -i blocksz=$4
+	typeset data=$5
+	typeset snapbase="$6"
+
+	log_note "populate_dir: data='$data'"
+	for (( i = 0; i < num_files; i++ )); do
+		case "$data" in
+		0)	d=0	;;
+		ITER)	d=$i ;;
+		*)	d=$data	;;
+		esac
+
+		log_must $FILE_WRITE -o create -c $write_count \
+		    -f ${basename}.$i -b $blocksz -d $d
+
+		[ -n "$snapbase" ] && log_must $ZFS snapshot ${snapbase}.${i}
+	done
+}
+
+# Reap all children registered in $child_pids.
+function reap_children
+{
+	[ -z "$child_pids" ] && return
+	for wait_pid in $child_pids; do
+		log_must $KILL $wait_pid
+	done
+	child_pids=""
+}
+
+# Busy a path.  Expects to be reaped via reap_children.  Tries to run as
+# long and slowly as possible.  [num] is taken as a hint; if such a file
+# already exists a different one will be chosen.
+function busy_path # <path> [num]
+{
+	typeset busypath=$1
+	typeset -i num=$2
+
+	while [ -f "$busypath/busyfile.${num}" ]; do
+		(( num += 1 ))
+	done
+	busyfile="$busypath/busyfile.${num}"
+
+	cmd="$DD if=/dev/urandom of=$busyfile bs=512"
+	( cd $busypath && $cmd ) &
+	typeset pid=$!
+	$SLEEP 1
+	log_must $PS -p $pid
+	child_pids="$child_pids $pid"
+}

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_001_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_001_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_001_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -74,7 +74,7 @@ log_onexit cleanup
 
 typeset VDEV=$TMPDIR/bootfs_001_pos_a.${TESTCASE_ID}.dat
 
-log_must $MKFILE 400m $VDEV
+log_must create_vdevs $VDEV
 create_pool "$TESTPOOL" "$VDEV"
 log_must $ZFS create $TESTPOOL/$FS
 

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_003_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_003_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_003_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -78,7 +78,7 @@ fi
 log_onexit cleanup
 
 log_assert "Valid pool names are accepted by zpool set bootfs"
-$MKFILE 64m $VDEV
+create_vdevs $VDEV
 
 typeset -i i=0;
 

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_004_neg.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_004_neg.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_004_neg.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -92,9 +92,7 @@ done
 pools[${#pools[@]}]="$bigname"
 
 
-
-$MKFILE 64m $VDEV
-
+create_vdevs $VDEV
 typeset -i i=0;
 
 while [ $i -lt "${#pools[@]}" ]

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_006_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_006_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_006_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -109,7 +109,7 @@ log_assert "Pools of correct vdev types 
 
 
 log_onexit cleanup
-log_must $MKFILE 64m $VDEV1 $VDEV2 $VDEV3 $VDEV4
+log_must create_vdevs $VDEV1 $VDEV2 $VDEV3 $VDEV4
 
 
 ## the following configurations are supported bootable pools

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_008_neg.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_008_neg.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/bootfs/bootfs_008_neg.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -76,7 +76,7 @@ typeset COMP_FS=$TESTPOOL/COMP_FS
 log_onexit cleanup
 log_assert $assert_msg
 
-log_must $MKFILE 300m $VDEV
+log_must create_vdevs $VDEV
 log_must $ZPOOL create $TESTPOOL $VDEV
 log_must $ZFS create $COMP_FS
 

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cache/setup.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cache/setup.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cache/setup.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -46,6 +46,6 @@ if [[ -d $VDEV2 ]]; then
 	log_must $RM -rf $VDIR2
 fi
 log_must $MKDIR -p $VDIR $VDIR2
-log_must $MKFILE $SIZE $VDEV $VDEV2
+log_must create_vdevs $VDEV $VDEV2
 
 log_pass

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cachefile/cachefile_004_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cachefile/cachefile_004_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cachefile/cachefile_004_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -93,7 +93,7 @@ log_must $ZPOOL create $TESTPOOL $DISKS
 mntpnt=$(get_prop mountpoint $TESTPOOL)
 typeset -i i=0
 while ((i < 2)); do
-	log_must $MKFILE 64M $mntpnt/vdev$i
+	log_must create_vdevs $mntpnt/vdev$i
 	eval vdev$i=$mntpnt/vdev$i
 	((i += 1))
 done

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/clean_mirror_common.kshlib
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/clean_mirror_common.kshlib	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/clean_mirror_common.kshlib	Wed Dec 16 20:34:32 2015	(r292361)
@@ -38,15 +38,12 @@ function overwrite_verify_mirror
 	typeset OVERWRITING_DEVICE=$2
 
 	typeset atfile=0
-	set -A files
 	set -A cksums
 	set -A newcksums
 
+	populate_dir $TESTDIR/file $FILE_COUNT $WRITE_COUNT $BLOCKSZ 0
 	while (( atfile < FILE_COUNT )); do
-		files[$atfile]=$TESTDIR/file.$atfile
-		log_must $FILE_WRITE -o create -f $TESTDIR/file.$atfile \
-			-b $FILE_SIZE -c 1
-		cksums[$atfile]=$($CKSUM ${files[$atfile]})
+		cksums[$atfile]=$($CKSUM ${TESTDIR}/file.${atfile})
 		(( atfile = atfile + 1 ))
 	done
 
@@ -62,11 +59,9 @@ function overwrite_verify_mirror
 	log_must $ZPOOL import $TESTPOOL
 
 	atfile=0
-
 	typeset -i failedcount=0
 	while (( atfile < FILE_COUNT )); do
-		files[$atfile]=$TESTDIR/file.$atfile
-		newcksum=$($CKSUM ${files[$atfile]})
+		newcksum=$($CKSUM $TESTDIR/file.${atfile})
 		if [[ $newcksum != ${cksums[$atfile]} ]]; then
 			(( failedcount = failedcount + 1 ))
 		fi

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/default.cfg
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/default.cfg	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/default.cfg	Wed Dec 16 20:34:32 2015	(r292361)
@@ -45,10 +45,10 @@ fi
 
 export MIRROR_PRIMARY MIRROR_SECONDARY SINGLE_DISK SIDE_PRIMARY SIDE_SECONDARY
 
-export FILE_COUNT=30
-export FILE_SIZE=$(( 1024 * 1024 ))
-export MIRROR_MEGS=70
-export MIRROR_SIZE=${MIRROR_MEGS}m # default mirror size
+export FILE_COUNT=10
+export BLOCKSZ=131072
+export WRITE_COUNT=8
+export FILE_SIZE=$(( BLOCKSZ * WRITE_COUNT ))
+export MIRROR_SIZE=70
 export DD_BLOCK=$(( 64 * 1024 ))
-export DD_COUNT=$(( MIRROR_MEGS * 1024 * 1024 / DD_BLOCK ))
-
+export DD_COUNT=$(( MIRROR_SIZE * 1024 * 1024 / DD_BLOCK ))

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -41,8 +41,8 @@ else
 	log_note "Partitioning disks ($MIRROR_PRIMARY $MIRROR_SECONDARY)"
 fi
 wipe_partition_table ${SINGLE_DISK} ${MIRROR_PRIMARY} ${MIRROR_SECONDARY}
-log_must set_partition ${SIDE_PRIMARY##*p} "" $MIRROR_SIZE $MIRROR_PRIMARY
-log_must set_partition ${SIDE_SECONDARY##*p} "" $MIRROR_SIZE $MIRROR_SECONDARY
+log_must set_partition ${SIDE_PRIMARY##*p} "" ${MIRROR_SIZE}m $MIRROR_PRIMARY
+log_must set_partition ${SIDE_SECONDARY##*p} "" ${MIRROR_SIZE}m $MIRROR_SECONDARY
 
 default_mirror_setup $SIDE_PRIMARY $SIDE_SECONDARY
 

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_get/zfs_get_004_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_get/zfs_get_004_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_get/zfs_get_004_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -137,7 +137,7 @@ while (( availspace > DFILESIZE )) && ((
 	(( i += 1 ))
 
 	if [[ -n $globalzone ]] ; then
-		log_must $MKFILE $FILESIZE ${file}$i
+		log_must create_vdevs ${file}$i
 		eval pool=\$TESTPOOL$i
 		log_must $ZPOOL create $pool ${file}$i
 	else

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_receive/zfs_receive_001_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_receive/zfs_receive_001_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_receive/zfs_receive_001_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -120,10 +120,8 @@ for orig_fs in $datasets ; do
 
 	typeset -i i=0
 	while (( i < ${#orig_snap[*]} )); do
-		$FILE_WRITE -o create -f ${orig_data[$i]} -b $BLOCK_SIZE \
-			-c $WRITE_COUNT >/dev/null 2>&1
-		(( $? != 0 )) && \
-			log_fail "Writing data into zfs filesystem fails."
+		log_must $FILE_WRITE -o create -f ${orig_data[$i]} \
+			-b $BLOCK_SIZE -c $WRITE_COUNT
 		log_must $ZFS snapshot ${orig_snap[$i]}
 		if (( i < 1 )); then
 			log_must eval "$ZFS send ${orig_snap[$i]} > ${bkup[$i]}"

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_001_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_001_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_001_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -106,7 +106,7 @@ log_must $ZFS create $rst_root
 	log_must $MKDIR -p $TESTDIR1
 log_must $ZFS set mountpoint=$TESTDIR1 $rst_root
 
-$FILE_WRITE -o create -f $init_data -b $BLOCK_SIZE -c $WRITE_COUNT
+log_must $FILE_WRITE -o create -f $init_data -b $BLOCK_SIZE -c $WRITE_COUNT
 
 log_must $ZFS snapshot $init_snap
 $ZFS send $init_snap > $full_bkup
@@ -121,7 +121,7 @@ compare_cksum $init_data $rst_data
 
 log_note "Verify 'zfs send -i' can create incremental send stream."
 
-$FILE_WRITE -o create -f $inc_data -b $BLOCK_SIZE -c $WRITE_COUNT -d 0
+log_must $FILE_WRITE -o create -f $inc_data -b $BLOCK_SIZE -c $WRITE_COUNT -d 0
 
 log_must $ZFS snapshot $inc_snap
 $ZFS send -i $init_snap $inc_snap > $inc_bkup

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_002_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_002_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zfs_send/zfs_send_002_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -78,7 +78,7 @@ function do_testing # <prop> <prop_value
 	typeset prop_val=$2
 
 	log_must $ZFS set $property=$prop_val $fs
-	$FILE_WRITE -o create -f $origfile -b $BLOCK_SIZE -c $WRITE_COUNT
+	log_must $FILE_WRITE -o create -f $origfile -b $BLOCK_SIZE -c $WRITE_COUNT
 	log_must $ZFS snapshot $snap
 	$ZFS send $snap > $stream
 	(( $? != 0 )) && \

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool/zpool_002_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool/zpool_002_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool/zpool_002_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -80,9 +80,7 @@ pool=pool.${TESTCASE_ID}
 vdev1=$TESTDIR/file1
 vdev2=$TESTDIR/file2
 vdev3=$TESTDIR/file3
-for vdev in $vdev1 $vdev2 $vdev3; do
-	$MKFILE 64m $vdev
-done
+log_must create_vdevs $vdev1 $vdev2 $vdev3
 
 set -A cmds "create $pool mirror $vdev1 $vdev2" "list $pool" "iostat $pool" \
 	"status $pool" "upgrade $pool" "get delegation $pool" "set delegation=off $pool" \

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -123,15 +123,18 @@ function setup_vdevs #<disk> 
 # Create a pool first using the first file, and make subsequent files ready
 # as vdevs to add to the pool
 
-	log_must $MKFILE ${file_size}m ${TESTDIR}/file.$count
+	vdev=${TESTDIR}/file.$count
+	VDEV_SIZE=${file_size}m
+	log_must create_vdevs ${TESTDIR}/file.$count
 	create_pool "$TESTPOOL1" "${TESTDIR}/file.$count"
 	log_must poolexists "$TESTPOOL1"
 
 	while (( count < vdevs_num )); do # minus 1 to avoid space non-enough
 		(( count = count + 1 ))
-		log_must $MKFILE ${file_size}m ${TESTDIR}/file.$count  
+		log_must create_vdevs ${TESTDIR}/file.$count
 		vdevs_list="$vdevs_list ${TESTDIR}/file.$count"
 	done
+	unset VDEV_SIZE
 }
 
 log_assert " 'zpool add [-f]' can add large numbers of vdevs to the specified" \
@@ -152,11 +155,5 @@ setup_vdevs $disk
 log_must $ZPOOL add -f "$TESTPOOL1" $vdevs_list
 log_must iscontained "$TESTPOOL1" "$vdevs_list"
 
-(( file_size = file_size * (vdevs_num/20 + 1 ) ))
-log_mustnot $MKFILE ${file_size}m ${TESTDIR}/broken_file 
-
-log_mustnot $ZPOOL add -f "$TESTPOOL1" ${TESTDIR}/broken_file
-log_mustnot iscontained "$TESTPOOL1" "${TESTDIR}/broken_file"
-
 log_pass "'zpool successfully add [-f]' can add large numbers of vdevs to the" \
 	 "specified pool without any errors."

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_001_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_001_pos.ksh	Wed Dec 16 20:33:47 2015	(r292360)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_clear/zpool_clear_001_pos.ksh	Wed Dec 16 20:34:32 2015	(r292361)
@@ -69,161 +69,39 @@ log_assert "Verify 'zpool clear' can cle
 log_onexit cleanup
 
 #make raw files to create various configuration pools
-typeset -i i=0
-while (( i < 3 )); do
-	log_must $MKFILE $FILESIZE $TMPDIR/file.$i
-
-	(( i = i + 1 ))
-done
-
 fbase=$TMPDIR/file
+log_must create_vdevs $fbase.0 $fbase.1 $fbase.2
 set -A poolconf "mirror $fbase.0 $fbase.1 $fbase.2" \
                 "raidz1 $fbase.0 $fbase.1 $fbase.2" \
                 "raidz2 $fbase.0 $fbase.1 $fbase.2" 
 
-function check_err # <pool> [<vdev>]
+function test_clear
 {
-	typeset pool=$1
-	shift
-	if (( $# > 0 )); then
-		typeset	checkvdev=$1
-	else
-		typeset checkvdev=""
-	fi
-	typeset -i errnum=0
-	typeset c_read=0
-	typeset c_write=0
-	typeset c_cksum=0
-	typeset tmpfile=$TMPDIR/file.${TESTCASE_ID}
-	typeset healthstr="pool '$pool' is healthy"
-	typeset output="`$ZPOOL status -x $pool`"
-
-	[[ "$output" ==  "$healthstr" ]] && return $errnum
-
-	$ZPOOL status -x $pool | $GREP -v "^$" | $GREP -v "pool:" \
-			| $GREP -v "state:" | $GREP -v "config:" \
-			| $GREP -v "errors:" > $tmpfile
-	typeset line
-	typeset -i fetchbegin=1
-	while read line; do 
-	 	if (( $fetchbegin != 0 )); then
-                        $ECHO $line | $GREP "NAME" >/dev/null 2>&1
-                        (( $? == 0 )) && (( fetchbegin = 0 ))
-                         continue
-                fi
-
-		if [[ -n $checkvdev ]]; then 
-			$ECHO $line | $GREP $checkvdev >/dev/null 2>&1
-			(( $? != 0 )) && continue
-			c_read=`$ECHO $line | $AWK '{print $3}'`
-			c_write=`$ECHO $line | $AWK '{print $4}'`
-			c_cksum=`$ECHO $line | $AWK '{print $5}'`
-			if [ $c_read != 0 ] || [ $c_write != 0 ] || \
-		   	   [ $c_cksum != 0 ]
-			then
-				(( errnum = errnum + 1 ))
-			fi
-			break
-		fi
-
-		c_read=`$ECHO $line | $AWK '{print $3}'`
-		c_write=`$ECHO $line | $AWK '{print $4}'`
-		c_cksum=`$ECHO $line | $AWK '{print $5}'`
-		if [ $c_read != 0 ] || [ $c_write != 0 ] || \
-		    [ $c_cksum != 0 ]
-		then
-			(( errnum = errnum + 1 ))
-		fi
-	done <$tmpfile
+	typeset type="$1"
+	typeset vdev_arg=""
 
-	return $errnum
-}
+	log_note "Testing ${type} clear type ..."
+	[ "$type" = "device" ] && vdev_arg="${fbase}.0"
 
-function do_testing #<clear type> <vdevs>
-{
-	typeset FS=$TESTPOOL1/fs
-	typeset file=/$FS/f
-	typeset type=$1
-	shift
-	typeset vdev="$@"
-
-	log_note "Testing with vdevs ${vdev} ..."
-
-	log_must $ZPOOL create -f $TESTPOOL1 $vdev
-	log_must $ZFS create $FS
-	#
-	# Fully fill up the zfs filesystem in order to make data block errors
-	# zfs filesystem
-	# 
-	typeset -i ret=0
-	typeset -i i=0
-	while $TRUE ; do
-        	$FILE_WRITE -o create -f $file.$i \
-            		-b $BLOCKSZ -c $NUM_WRITES
-        	ret=$?
-        	(( $ret != 0 )) && break
-        	(( i = i + 1 ))
-	done
-	(( $ret != 28 )) && log_fail "ERROR: $FILE_WRITE failed with error $ret"
-	log_note "$FILE_WRITE has filled up $FS."
-	
-	#
-	# Make errors to the testing pool by overwrite the vdev device with  
-	# the dd command, taking care to skip the first and last labels.
-	#
-	(( i = $RANDOM % 3 ))
-	typeset -i wcount=0
-	typeset -i size
-	case $FILESIZE in 
-		*g|*G) 
-			(( size = ${FILESIZE%%[g|G]} ))
-			(( wcount = size*1024*1024 - 512 ))
-			;;
-		*m|*M)
-			(( size = ${FILESIZE%%[m|M]} ))
-			(( wcount = size*1024 - 512 ))
-			;;

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

