svn commit: r292346 - in projects/zfsd/head/tests/sys/cddl/zfs: include tests/clean_mirror tests/cli_root/zpool_import tests/cli_root/zpool_upgrade tests/grow_pool tests/grow_replicas tests/hotspar...

Alan Somers asomers at FreeBSD.org
Wed Dec 16 18:29:56 UTC 2015


Author: asomers
Date: Wed Dec 16 18:29:54 2015
New Revision: 292346
URL: https://svnweb.freebsd.org/changeset/base/292346

Log:
  Fix various ZFS test suite issues, improving reliability of the tests.
  
  This change actually addresses several different bugs and is not being broken
  apart, primarily because the fixes are intertwined and often spread across
  many files.  Descriptions are given below, per bug.
  
  Make ZFS tests involving phy cycling more deterministic
  
  tests/sys/cddl/zfs/include/libsas.kshlib:
  tests/sys/cddl/zfs/tests/zfsd/zfsd.kshlib:
  	- Replace blind sleeps with functions that check phy state and accept
  	  timeout parameters (see the sketch below).
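
  As a hedged illustration only (the new helpers themselves are not part of
  the diff shown below, so the function name, arguments, and default timeout
  here are assumptions), a timeout-based check of this kind can be sketched
  in ksh as:

	# Sketch: poll until a disk's device node disappears, instead of
	# sleeping blindly.  Name and default timeout are illustrative.
	function wait_for_disk_gone
	{
		typeset disk=${1##*/}
		typeset -i timeout=${2:-60}
		typeset -i elapsed=0

		while (( elapsed < timeout )); do
			[ ! -c /dev/$disk ] && return 0
			$SLEEP 1
			(( elapsed += 1 ))
		done
		return 1
	}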
  
  tests/sys/cddl/zfs/tests/hotspare/hotspare_replace_003_neg.ksh:
  	- Ensure that PHYs are always reenabled in the cleanup routine, even
  	  if destroying a pool fails.
  
  tests/sys/cddl/zfs/tests/hotspare/hotspare.kshlib:
  	- Only rmdir $HOTSPARE_TMPDIR if it is both specified and actually
  	  a directory (sketched below).
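
  A minimal sketch of that guard (the hotspare.kshlib hunk itself falls
  beyond the truncated diff, so the exact wording is an assumption):

	# Only remove the temporary directory if the variable is set and
	# really points at a directory.
	[ -n "$HOTSPARE_TMPDIR" -a -d "$HOTSPARE_TMPDIR" ] && \
		rmdir $HOTSPARE_TMPDIR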
  
  tests/sys/cddl/zfs/tests/sas_phy_thrash/sas_phy_thrash_001_pos.ksh:
  	- Update test to check for disk failing to return, since
  	  find_disk_by_phy just returns the state without judgment.
  
  tests/sys/cddl/zfs/tests/hotspare/hotspare_replace_003_neg.ksh:
  tests/sys/cddl/zfs/tests/zfsd/zfsd.kshlib:
  tests/sys/cddl/zfs/tests/zfsd/zfsd_replace_002_pos.ksh:
  	- Update test code to rescan disks after reenabling PHYs.  Instead
  	  of issuing a rescan for each PHY, the tests now enable all of the
  	  PHYs they intend to, then initiate a single rescan (sketched
  	  below).
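
  Using the helpers added to libsas.kshlib below, the new pattern looks
  roughly like this (the loop variables are illustrative, not taken from
  any one test):

	# Enable every PHY first, then issue one rescan for the affected
	# devices instead of rescanning once per PHY.
	for phy in $PHYS_TO_ENABLE; do
		enable_sas_disk $EXPANDER $phy
	done
	rescan_disks $DISKS_TO_RESCAN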
  
  Handle partitions more carefully during ZFS tests
  
  tests/sys/cddl/zfs/include/libtest.kshlib:
  	- set_partition(): Don't create the partition table when adding
  	  partitions; this hides bugs.
  	- wipe_partition_table(): Re-create the GPT partition table after
  	  destroying it.  Commands must succeed.
  	- cleanup_devices():
  	  - Perform a ZFS labelclear before wiping the partition table,
  	    since labelclear may destroy the partition table.
  	  - Only wipe the partition table if the device is a physical disk,
  	    as opposed to a slice; slices are still passed in by some tests
  	    that only want the ZFS label cleared.
  
  tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh:
  tests/sys/cddl/zfs/tests/grow_pool/setup.ksh:
  tests/sys/cddl/zfs/tests/grow_replicas/setup.ksh:
  tests/sys/cddl/zfs/tests/interop/setup.ksh:
  tests/sys/cddl/zfs/tests/inuse/inuse_008_pos.ksh:
  tests/sys/cddl/zfs/tests/inuse/inuse_009_pos.ksh:
  tests/sys/cddl/zfs/tests/migration/setup.ksh:
  tests/sys/cddl/zfs/tests/no_space/setup.ksh:
  tests/sys/cddl/zfs/tests/scrub_mirror/setup.ksh:
  tests/sys/cddl/zfs/tests/write_dirs/setup.ksh:
          - Update tests to ensure that their startup precondition is met:
            each test's disk partitions are cleared.
  
  ZFS tests need to check every scenario, not a random subset
  
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_004_pos.ksh:
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_005_pos.ksh:
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_006_pos.ksh:
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_007_pos.ksh:
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_008_pos.ksh:
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_001_pos.ksh:
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_002_pos.ksh:
  tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_rename_001_pos.ksh:
          - Refactor several tests so that they check every scenario instead
            of picking one at random.  The random selection caused spurious
            test failures.
  
  tests/sys/cddl/zfs/tests/cli_root/zpool_upgrade/zpool_upgrade.kshlib:
  tests/sys/cddl/zfs/include/libtest.kshlib:
          - Fix assumptions about how the cachefile is managed.
  
  zfsd(8) tests should ensure that zfsd is running
  
  tests/sys/cddl/zfs/include/libtest.kshlib:
  tests/sys/cddl/zfs/tests/zfsd/zfsd_*.ksh:
  	- Add ensure_zfsd_running: Check to make sure the zfsd service is
  	  running, and start it if possible.  If this is not possible, fail
  	  the test as unsupported.  This is aimed at identifying when a
  	  system might not be set up properly.
  	- Update tests to call ensure_zfsd_running (usage sketched below).
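
  The call pattern in each zfsd test is simply the following (placement near
  the top of the test script is an assumption):

	# Start zfsd if needed; log the test as unsupported if that fails.
	ensure_zfsd_running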
  
  Reduce code duplication in ZFS test suite
  
  tests/sys/cddl/zfs/...
  	- Replace repeated patterns of checking command results and failing
  	  by hand with calls to log_must/log_mustnot (sketched below).
  	- Replace copy-pasted code in various places with subroutines.
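
  As a before/after sketch of the pattern being replaced (the command shown
  is illustrative):

	# Before: check $? by hand and fail explicitly.
	$ZPOOL export $TESTPOOL
	if [ $? -ne 0 ]; then
		log_fail "zpool export $TESTPOOL failed"
	fi

	# After: let the framework assert success (or use log_mustnot for
	# commands that are expected to fail).
	log_must $ZPOOL export $TESTPOOL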
  
  Submitted by:	Will
  Sponsored by:	Spectra Logic Corp

Modified:
  projects/zfsd/head/tests/sys/cddl/zfs/include/libsas.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_005_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_007_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_008_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_rename_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_upgrade/zpool_upgrade.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/grow_pool/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/grow_replicas/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/hotspare/hotspare.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/hotspare/hotspare_replace_003_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/interop/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/inuse/inuse_008_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/inuse/inuse_009_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/migration/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/no_space/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/sas_phy_thrash/sas_phy_thrash_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/scrub_mirror/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/write_dirs/setup.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd.kshlib
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_autoreplace_001_neg.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_autoreplace_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_autoreplace_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_degrade_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_degrade_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_fault_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_003_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_004_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_005_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_006_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_007_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_import_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_replace_001_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_replace_002_pos.ksh
  projects/zfsd/head/tests/sys/cddl/zfs/tests/zfsd/zfsd_replace_003_pos.ksh

Modified: projects/zfsd/head/tests/sys/cddl/zfs/include/libsas.kshlib
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/include/libsas.kshlib	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/include/libsas.kshlib	Wed Dec 16 18:29:54 2015	(r292346)
@@ -31,6 +31,15 @@
 # $FreeBSD$
 #
 
+# Get all PHYs for a given expander.
+# Returns formatting suitable for iteration.
+function get_all_phys
+{
+	typeset expander=$1
+	camcontrol smpphylist $expander -q | awk '{print $1" "$NF}' | \
+		tr -d '()' | tr ',' ' ' | tr '\n' ','
+}
+
 #
 # Given a disk (e.g. /dev/da0 or da0), determine the following:
 #	- Does it exist in CAM?
@@ -45,9 +54,7 @@ function find_verify_sas_disk
 	typeset DISK=${1##*/}
 	typeset i
 
-	if [ ! -c /dev/$DISK ]; then
-		log_fail "Cannot find device \"/dev/$DISK\", arg is \"$1\""
-	fi
+	[ ! -c /dev/$DISK ] && log_fail "\"/dev/$DISK\" is not a char device"
 
 	# Make sure this device exists.  An inquiry should always succeed,
 	# even if there is a pending error.
@@ -61,14 +68,11 @@ function find_verify_sas_disk
 
 	typeset FOUND=0
 
-	for i in $PASSLIST;
-	do
+	for i in $PASSLIST; do
 		# Make sure this particular device supports SMP.  If not,
 		# no big deal, keep going.
 		camcontrol smprg $i > /dev/null 2>&1
-		if [ $? != 0 ]; then
-			continue
-		fi
+		[ $? -ne 0 ] && continue
 
 		# Make sure this particular device is an Enclosure Services
 		# device.  That way, we won't wind up removing the device
@@ -78,17 +82,13 @@ function find_verify_sas_disk
 		# advantage of the fact that most (all?) expanders include
 		# a SES device.
 		camcontrol inquiry $i |grep "Enclosure Services" > /dev/null
-		if [ $? != 0 ]; then
-			continue
-		fi
+		[ $? -ne 0 ] && continue
 
 		# For every peripheral, we go through and pull out the
 		# list of devices and their phys that we can see via this
 		# peripheral.
 		IFS=","
-		for j in `camcontrol smpphylist $i -q |awk '{print $1" "$NF}' |\
-			tr -d '()' |tr ',' ' ' |tr '\n' ','`
-		do
+		for j in $(get_all_phys $i); do
 			IFS=", \n"
 			set -A PERIPHLIST $j
 			unset IFS
@@ -108,21 +108,15 @@ function find_verify_sas_disk
 				break;
 			fi
 		done
-
 		unset IFS
 
-		if [ $FOUND != 0 ]; then
-			break;
-		fi
+		[ $FOUND -ne 0 ] && break
 	done
-
-	if [ $FOUND = 0 ]; then
-		log_fail "Could not find PHY for disk $DISK"
-	fi
-
+	[ $FOUND -eq 0 ] && log_fail "Could not find PHY for disk $DISK"
 }
 
-# Given an expander and phy number, find the 
+#
+# Given an expander and phy number, find the disk device name.
 # 
 function find_disk_by_phy
 {
@@ -133,35 +127,25 @@ function find_disk_by_phy
 	unset FOUNDDISK
 
 	IFS=","
-	for j in `camcontrol smpphylist $1 -q |awk '{print $1" "$NF}' |\
-		tr -d '()' |tr ',' ' ' |tr '\n' ','`
-	do
+	for j in $(get_all_phys $EXPANDER); do
 		IFS=", \n"
 		set -A PERIPHLIST $j
 		unset IFS
-		if [ ${PERIPHLIST[0]} != $2 ]; then
-			continue;
-		fi
+		[ "${PERIPHLIST[0]}" != "$2" ] && continue
 
 		typeset NUMPERIPHS=${#PERIPHLIST[*]}
 
-		((k=1))
-		while [ $k -lt $NUMPERIPHS ]; do 
+		for ((k=1; $k < $NUMPERIPHS; k=$k + 1)); do
 			((PSTOP=$NUMPERIPHS-1))
 			if [ "${PERIPHLIST[$k]%%[0-9]*}" != "pass" ] || \
 			   [ "$k" -eq $PSTOP ]; then
 				export FOUNDDISK=${PERIPHLIST[$k]}
 				break;
 			fi
-			((k=k+1))
 		done
 		FOUND=1
 		break;
 	done
-
-	if [ "$FOUND" = 0 ]; then
-		log_fail "could not find peripheral for phy $2 on expander $1"
-	fi
 }
 
 # Given an expander and phy on that expander, disable the phy.
@@ -173,36 +157,29 @@ function disable_sas_disk
 	typeset PHY=$2
 
 	# Disable the phy for this particular device
-	camcontrol smppc $EXPANDER -v -p $PHY -o disable
-	if [ $? != 0 ]; then
-		log_fail "error disabling phy $PHY on $EXPANDER to remove $DISK"
-	fi
-
-	# Wait for the rescan to happen
-	if [ -z $NO_SAS_DISK_WAIT ]; then
-		log_note "waiting 10 seconds for device to go away"
-		$SLEEP 10
-	fi
+	log_must camcontrol smppc $EXPANDER -v -p $PHY -o disable
 }
 
 # Given an expander and phy on that expander, enable the phy.
 # This function will exit (via log_fail) if it can't send the link reset
 # request.
-#
-# XXX KDM in tests, the link reset doesn't always cause the disk to show
-# up.  A rescan does seem to work in that situation, however.
 function enable_sas_disk
 {
 	typeset EXPANDER=$1
 	typeset PHY=$2
 
 	# Send a link reset to bring the device back
-	camcontrol smppc $EXPANDER -p $PHY -o linkreset
-	if [ $? != 0 ]; then
-		log_fail "error sending a linkreset to phy $PHY on $EXPANDER"
-	fi
+	log_must camcontrol smppc $EXPANDER -p $PHY -o linkreset
+}
 
-	if [ -z $NO_SAS_DISK_WAIT ]; then
-		$SLEEP 10
+function rescan_disks
+{
+	if [[ -z "$1" ]]; then
+		log_must camcontrol rescan all >/dev/null
+		return
 	fi
+
+	for device in $(echo $* | sort -u); do
+		log_must camcontrol rescan $device >/dev/null
+	done
 }

Modified: projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/include/libtest.kshlib	Wed Dec 16 18:29:54 2015	(r292346)
@@ -629,7 +629,13 @@ function wipe_partition_table #<whole_di
 {
 	while [[ -n $* ]]; do
 		typeset diskname=$1
-		$GPART destroy -F $diskname >/dev/null 2>&1
+		[ ! -e $diskname ] && log_fail "ERROR: $diskname doesn't exist"
+		if gpart list $(basename $diskname) >/dev/null 2>&1; then
+			log_must $GPART destroy -F $diskname
+		else
+			log_note "No GPT partitions detected on $diskname"
+		fi
+		log_must $GPART create -s gpt $diskname
 		shift
 	done
 }
@@ -655,8 +661,6 @@ function set_partition #<slice_num> <sli
 	size=`$ECHO $size| sed s/gb/G/`
 	size=`$ECHO $size| sed s/g/G/`
 	[[ -n $start ]] && start="-b $start"
-	# Ignore the return value; it will fail if $disk is already partitioned
-	$GPART create -s GPT $disk > /dev/null 2>&1
 	log_must $GPART add -t efi $start -s $size -i $slicenum $disk
 	return 0
 }
@@ -1617,8 +1621,14 @@ function is_pool_scrub_stopped #pool
 function cleanup_devices #vdevs
 {
 	for device in $@; do
-		wipe_partition_table $device
-		$ZPOOL labelclear -f $device
+		# Labelclear must happen first, otherwise it may interfere
+		# with the teardown/setup of GPT labels.
+		log_must $ZPOOL labelclear -f $device
+		# Only wipe partition tables for arguments that are disks,
+		# as opposed to slices (which are valid arguments here).
+		if camcontrol inquiry $device >/dev/null 2>&1; then
+			wipe_partition_table $device
+		fi
 	done
 	return 0
 }
@@ -2284,6 +2294,74 @@ function get_disk_guid
 }
 
 #
+# Get cachefile for a pool.
+# Prints the cache file, if there is one.
+# Returns 0 for a default zpool.cache, 1 for an explicit one, and 2 for none.
+#
+function cachefile_for_pool
+{
+	typeset pool=$1
+
+	cachefile=$(get_pool_prop cachefile $pool)
+	[[ $? != 0 ]] && return 1
+
+	case "$cachefile" in
+		none)	ret=2 ;;
+		"-")
+			ret=2
+			for dir in /boot/zfs /etc/zfs; do
+				if [[ -f "${dir}/zpool.cache" ]]; then
+					cachefile="${dir}/zpool.cache"
+					ret=0
+					break
+				fi
+			done
+			;;
+		*)	ret=1;
+	esac
+	[[ $ret -eq 0 || $ret -eq 1 ]] && print "$cachefile"
+	return $ret
+}
+
+#
+# Assert that the pool is in the appropriate cachefile.
+#
+function assert_pool_in_cachefile
+{
+	typeset pool=$1
+
+	cachefile=$(cachefile_for_pool $pool)
+	[ $? -ne 0 ] && log_fail "ERROR: Cachefile not created for '$pool'?"
+	log_must test -e "${cachefile}"
+	log_must zdb -U ${cachefile} -C ${pool}
+}
+
+#
+# Get the zdb options given the cachefile state of the pool.
+#
+function zdb_cachefile_opts
+{
+	typeset pool=$1
+	typeset vdevdir=$2
+	typeset opts
+
+	if poolexists "$pool"; then
+		cachefile=$(cachefile_for_pool $pool)
+		typeset -i ret=$?
+		case $ret in
+			0)	opts="-C" ;;
+			1)	opts="-U $cachefile -C" ;;
+			2)	opts="-eC" ;;
+			*)	log_fail "Unknown return '$ret'" ;;
+		esac
+	else
+		opts="-eC"
+		[[ -n "$vdevdir" ]] && opts="$opts -p $vdevdir"
+	fi
+	echo "$opts"
+}
+
+#
 # Get configuration of pool
 # $1 pool name
 # $2 config name
@@ -2296,35 +2374,7 @@ function get_config
 	typeset alt_root
 	typeset zdb_opts
 
-	if poolexists "$pool"; then
-		cachefile=$(get_pool_prop cachefile $pool)
-		if [[ $? != 0 ]]; then
-			# Shouldn't get here.  Try treating as an exported pool
-			zdb_opts="-eC"
-		else
-			case $cachefile in
-				none)
-					# Treat as exported pool
-					zdb_opts="-eC"
-					break
-					;;
-				"-")
-					# Normal pool
-					zdb_opts="-C"
-					break
-					;;
-				*)
-					zdb_opts="-U $cachefile -C"
-					break
-					;;
-			esac
-		fi
-	else
-		# use -e for exported pools
-		zdb_opts="-eC"
-		[[ -n "$vdevdir" ]] && zdb_opts="$zdb_opts -p $vdevdir"
-	fi
-
+	zdb_opts=$(zdb_cachefile_opts $pool $vdevdir)
 	value=$($ZDB $zdb_opts $pool | $GREP "$config:" | $AWK -F: '{print $2}')
 	if [[ -n $value ]] ; then
 		value=${value#'}
@@ -2332,7 +2382,7 @@ function get_config
 	else
 		return 1
 	fi
-	print $value
+	echo $value
 
 	return 0
 }
@@ -2844,6 +2894,17 @@ function get_zpool_version
 	echo $ZPOOL_VERSION
 }
 
+# Ensures that zfsd is running, starting it if necessary.  Every test that
+# interacts with zfsd must call this at startup.  This is intended primarily
+# to eliminate interference from outside the test suite.
+function ensure_zfsd_running
+{
+	if ! service zfsd status > /dev/null 2>&1; then
+		service zfsd start || service zfsd onestart
+		service zfsd status > /dev/null 2>&1 ||
+			log_unsupported "Test requires zfsd"
+	fi
+}
 
 # Temporarily stops ZFSD, because it can interfere with some tests.  If this
 # function is used, then restart_zfsd _must_ be called in the cleanup routine.

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/clean_mirror/setup.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -40,6 +40,7 @@ if [[ -n $SINGLE_DISK ]]; then
 else
 	log_note "Partitioning disks ($MIRROR_PRIMARY $MIRROR_SECONDARY)"
 fi
+wipe_partition_table ${SINGLE_DISK} ${MIRROR_PRIMARY} ${MIRROR_SECONDARY}
 log_must set_partition ${SIDE_PRIMARY##*p} "" $MIRROR_SIZE $MIRROR_PRIMARY
 log_must set_partition ${SIDE_SECONDARY##*p} "" $MIRROR_SIZE $MIRROR_SECONDARY
 

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_002_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_002_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_002_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -106,6 +106,31 @@ typeset -i i=0
 typeset -i j=0
 typeset basedir
 
+function inner_test
+{
+	typeset pool=$1
+	typeset target=$2
+	typeset devs=$3
+	typeset opts=$4
+	typeset mtpt=$5
+
+	log_must $ZPOOL import ${devs} ${opts} $target
+	log_must poolexists $pool
+	log_must ismounted $pool/$TESTFS
+
+	basedir=$mtpt
+	[ -n "$opts" ] && basedir="$ALTER_ROOT/$mtpt"
+
+	[ ! -e "$basedir/$TESTFILE0" ] && \
+		log_fail "ERROR: $basedir/$TESTFILE0 missing after import."
+
+	checksum2=$($SUM $basedir/$TESTFILE0 | $AWK '{print $1}')
+	[[ "$checksum1" != "$checksum2" ]] && \
+		log_fail "ERROR: Checksums differ ($checksum1 != $checksum2)"
+
+	log_mustnot $ZPOOL import $devs $target
+}
+
 while (( i < ${#pools[*]} )); do
 	log_must $CP $MYTESTFILE ${mtpts[i]}/$TESTFILE0
 
@@ -114,40 +139,20 @@ while (( i < ${#pools[*]} )); do
 	j=0
 	while (( j <  ${#options[*]} )); do
 		typeset pool=${pools[i]}
-		k=0
-		while (( k < 2 )); do
-			typeset target=$pool
-			log_must $ZPOOL export $pool
-
-			if (( k == 1 )); then
-				typeset vdevdir=""
-				if [[ "$pool" = "$TESTPOOL1" ]]; then
-					vdevdir="$DEVICE_DIR"
-				fi
-				target=$(get_config $pool pool_guid $vdevdir)
-				log_must test -n "$target"
-				log_note "Importing '$pool' by guid '$target'."
-			fi
-
-			log_must $ZPOOL import ${devs[i]} ${options[j]} $target
-			log_must poolexists $pool
-			log_must ismounted $pool/$TESTFS
-
-			basedir=${mtpts[i]}
-			[[ -n ${options[j]} ]] && \
-				basedir=$ALTER_ROOT/${mtpts[i]}
-
-			[[ ! -e $basedir/$TESTFILE0 ]] && log_fail \
-				"$basedir/$TESTFILE0 missing after import."
-
-			checksum2=$($SUM $basedir/$TESTFILE0 | $AWK '{print $1}')
-			[[ "$checksum1" != "$checksum2" ]] && log_fail \
-				"Checksums differ ($checksum1 != $checksum2)"
+		typeset vdevdir=""
+
+		log_must $ZPOOL export $pool
+
+		[ "$pool" = "$TESTPOOL1" ] && vdevdir="$DEVICE_DIR"
+		guid=$(get_config $pool pool_guid $vdevdir)
+		log_must test -n "$guid"
+		log_note "Importing '$pool' by guid '$guid'"
+		inner_test $pool $guid "${devs[i]}" "${options[j]}" ${mtpts[i]}
 
-			log_mustnot $ZPOOL import ${devs[i]} $target
+		log_must $ZPOOL export $pool
 
-			(( k = k + 1 ))
-		done
+		log_note "Importing '$pool' by name."
+		inner_test $pool $pool "${devs[i]}" "${options[j]}" ${mtpts[i]}
 
 		((j = j + 1))
 	done

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_004_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_004_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_004_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -60,13 +60,34 @@ verify_runnable "global"
 function cleanup
 {
 	destroy_pool $TESTPOOL1
-
 	log_must $RM -rf $DEVICE_DIR/*
-	typeset i=0
-	while (( i < $MAX_NUM )); do
-		log_must $MKFILE $FILE_SIZE ${DEVICE_DIR}/${DEVICE_FILE}$i
-		((i += 1))
-	done
+}
+
+function perform_test
+{
+	target=$1
+
+	assert_pool_in_cachefile $TESTPOOL1
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_note "Devices was moved to different directories."
+	log_must $MKDIR -p $DEVICE_DIR/newdir1 $DEVICE_DIR/newdir2
+	log_must $MV $VDEV1 $DEVICE_DIR/newdir1
+	log_must $MV $VDEV2 $DEVICE_DIR/newdir2
+	log_must $ZPOOL import -d $DEVICE_DIR/newdir1 -d $DEVICE_DIR/newdir2 \
+		-d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy -f $TESTPOOL1
+
+	log_note "Devices was moved to same directory."
+	log_must $MV $VDEV0 $DEVICE_DIR/newdir2
+	log_must $MV $DEVICE_DIR/newdir1/* $DEVICE_DIR/newdir2
+	log_must $ZPOOL import -d $DEVICE_DIR/newdir2 -D -f $target
+	log_must $ZPOOL destroy -f $TESTPOOL1
+
+	# Revert at the end so this test can be rerun.
+	log_must $MV $DEVICE_DIR/newdir2/$(basename $VDEV0) $VDEV0
+	log_must $MV $DEVICE_DIR/newdir2/$(basename $VDEV1) $VDEV1
+	log_must $MV $DEVICE_DIR/newdir2/$(basename $VDEV2) $VDEV2
 }
 
 log_assert "Destroyed pools devices was moved to another directory," \
@@ -74,26 +95,14 @@ log_assert "Destroyed pools devices was 
 log_onexit cleanup
 
 log_must $ZPOOL create $TESTPOOL1 $VDEV0 $VDEV1 $VDEV2
+log_note "Testing import by name '$TESTPOOL1'."
+perform_test $TESTPOOL1
+
+log_must $ZPOOL create $TESTPOOL1 $VDEV0 $VDEV1 $VDEV2
+log_must $ZPOOL status $TESTPOOL1
+log_must $ZDB -C $TESTPOOL1
 typeset guid=$(get_config $TESTPOOL1 pool_guid)
-typeset target=$TESTPOOL1
-if (( RANDOM % 2 == 0 )) ; then
-	target=$guid
-	log_note "Import by guid."
-fi
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_note "Devices was moved to different directories."
-log_must $MKDIR $DEVICE_DIR/newdir1 $DEVICE_DIR/newdir2
-log_must $MV $VDEV1 $DEVICE_DIR/newdir1
-log_must $MV $VDEV2 $DEVICE_DIR/newdir2
-log_must $ZPOOL import -d $DEVICE_DIR/newdir1 -d $DEVICE_DIR/newdir2 \
-	-d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy -f $TESTPOOL1
-
-log_note "Devices was moved to same directory."
-log_must $MV $VDEV0 $DEVICE_DIR/newdir2
-log_must $MV $DEVICE_DIR/newdir1/* $DEVICE_DIR/newdir2
-log_must $ZPOOL import -d $DEVICE_DIR/newdir2 -D -f $target
-log_must $ZPOOL destroy -f $TESTPOOL1
+log_note "Testing import by GUID '${guid}'."
+perform_test $guid
 
 log_pass "Destroyed pools devices was moved, 'zpool import -D' passed."

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_005_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_005_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_005_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -71,26 +71,40 @@ log_assert "Destroyed pools devices was 
 	"correctly."
 log_onexit cleanup
 
+function perform_test
+{
+	typeset target=$1
+
+	assert_pool_in_cachefile $TESTPOOL1
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_note "Testing some devices renamed in the same directory."
+	log_must $MV $VDEV0 $DEVICE_DIR/vdev0-new
+	log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy -f $TESTPOOL1
+
+	log_note "Testing all devices moved to different directories."
+	log_must $MKDIR -p $DEVICE_DIR/newdir1 $DEVICE_DIR/newdir2
+	log_must $MV $VDEV1 $DEVICE_DIR/newdir1/vdev1-new
+	log_must $MV $VDEV2 $DEVICE_DIR/newdir2/vdev2-new
+	log_must $ZPOOL import -d $DEVICE_DIR/newdir1 -d $DEVICE_DIR/newdir2 \
+		-d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy -f $TESTPOOL1
+
+	# Restore the vdevs to their old location so this can be re-run
+	log_note "Restoring vdev files for any further runs."
+	log_must $MV $DEVICE_DIR/vdev0-new $VDEV0
+	log_must $MV $DEVICE_DIR/newdir1/vdev1-new $VDEV1
+	log_must $MV $DEVICE_DIR/newdir2/vdev2-new $VDEV2
+}
+
+log_note "Testing import by name."
+log_must $ZPOOL create $TESTPOOL1 $VDEV0 $VDEV1 $VDEV2
+perform_test $TESTPOOL1
+
+log_note "Testing import by GUID."
 log_must $ZPOOL create $TESTPOOL1 $VDEV0 $VDEV1 $VDEV2
 typeset guid=$(get_config $TESTPOOL1 pool_guid)
-typeset target=$TESTPOOL1
-if (( RANDOM % 2 == 0 )) ; then
-	target=$guid
-	log_note "Import by guid."
-fi
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_note "Part of devices was renamed in the same directory."
-log_must $MV $VDEV0 $DEVICE_DIR/vdev0-new
-log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy -f $TESTPOOL1
-
-log_note "All of devices was rename to different directories."
-log_must $MKDIR $DEVICE_DIR/newdir1 $DEVICE_DIR/newdir2
-log_must $MV $VDEV1 $DEVICE_DIR/newdir1/vdev1-new
-log_must $MV $VDEV2 $DEVICE_DIR/newdir2/vdev2-new
-log_must $ZPOOL import -d $DEVICE_DIR/newdir1 -d $DEVICE_DIR/newdir2 \
-	-d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy -f $TESTPOOL1
+perform_test $guid
 
 log_pass "Destroyed pools devices was renamed, 'zpool import -D' passed."

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_006_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_006_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_006_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -74,21 +74,32 @@ log_assert "For mirror, N-1 destroyed po
 	"by other pool, it still can be imported correctly."
 log_onexit cleanup
 
-log_must $ZPOOL create $TESTPOOL1 mirror $VDEV0 $VDEV1 $VDEV2
+function perform_test
+{
+	typeset target=$1
+
+	assert_pool_in_cachefile $TESTPOOL1
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	create_pool $TESTPOOL2 $VDEV0 $VDEV2
+	log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_must $ZPOOL destroy $TESTPOOL2
+	log_must $RM -rf $VDEV2
+	log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+
+	# Restore the vdev.
+	log_must $MKFILE $FILE_SIZE $VDEV2
+}
+
+log_note "Testing import by name."
+create_pool $TESTPOOL1 mirror $VDEV0 $VDEV1 $VDEV2
+perform_test $TESTPOOL1
+
+log_note "Testing import by GUID."
+create_pool $TESTPOOL1 mirror $VDEV0 $VDEV1 $VDEV2
 typeset guid=$(get_config $TESTPOOL1 pool_guid)
-typeset target=$TESTPOOL1
-if (( RANDOM % 2 == 0 )) ; then
-	target=$guid
-	log_note "Import by guid."
-fi
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_must $ZPOOL create $TESTPOOL2 $VDEV0 $VDEV2
-log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_must $ZPOOL destroy $TESTPOOL2
-log_must $RM -rf $VDEV2
-log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+perform_test $guid
 
 log_pass "zpool import -D mirror passed."

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_007_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_007_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_007_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -74,28 +74,36 @@ log_assert "For raidz, one destroyed poo
 	"other pool, it still can be imported correctly."
 log_onexit cleanup
 
+function perform_test
+{
+	typeset target=$1
+
+	assert_pool_in_cachefile $TESTPOOL1
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_must $ZPOOL create $TESTPOOL2 $VDEV0 
+	log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_must $ZPOOL destroy $TESTPOOL2
+	log_must $RM -rf $VDEV0
+	log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_note "For raidz, two destroyed pool's devices were used, import failed."
+	log_must $MKFILE $FILE_SIZE $VDEV0
+	log_must $ZPOOL create $TESTPOOL2 $VDEV0 $VDEV1
+	log_mustnot $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy $TESTPOOL2
+}
+
+log_note "Testing import by name."
+log_must $ZPOOL create $TESTPOOL1 raidz $VDEV0 $VDEV1 $VDEV2 $VDIV3
+perform_test $TESTPOOL1
+
+log_note "Testing import by GUID."
 log_must $ZPOOL create $TESTPOOL1 raidz $VDEV0 $VDEV1 $VDEV2 $VDIV3
 typeset guid=$(get_config $TESTPOOL1 pool_guid)
-typeset target=$TESTPOOL1
-if (( RANDOM % 2 == 0 )) ; then
-	target=$guid
-	log_note "Import by guid."
-fi
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_must $ZPOOL create $TESTPOOL2 $VDEV0 
-log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_must $ZPOOL destroy $TESTPOOL2
-log_must $RM -rf $VDEV0
-log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_note "For raidz, two destroyed pool's devices were used, import failed."
-log_must $MKFILE $FILE_SIZE $VDEV0
-log_must $ZPOOL create $TESTPOOL2 $VDEV0 $VDEV1
-log_mustnot $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy $TESTPOOL2
+perform_test $guid
 
 log_pass "zpool import -D raidz passed."

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_008_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_008_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_008_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -70,33 +70,41 @@ function cleanup
 	done
 }
 
+function perform_test
+{
+	typeset target=$1
+
+	assert_pool_in_cachefile $TESTPOOL1
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_must $ZPOOL create $TESTPOOL2 $VDEV0 $VDEV1
+	log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_must $ZPOOL destroy $TESTPOOL2
+	log_must $RM -rf $VDEV0 $VDEV1
+	log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy $TESTPOOL1
+
+	log_note "For raidz2, more than two destroyed pool's devices were used, " \
+		"import failed."
+	log_must $MKFILE $FILE_SIZE $VDEV0 $VDEV1
+	log_must $ZPOOL create $TESTPOOL2 $VDEV0 $VDEV1 $VDEV2
+	log_mustnot $ZPOOL import -d $DEVICE_DIR -D -f $target
+	log_must $ZPOOL destroy $TESTPOOL2
+}
+
 log_assert "For raidz2, two destroyed pools devices was removed or used by " \
 	"other pool, it still can be imported correctly."
 log_onexit cleanup
 
+log_note "Testing import by name."
+log_must $ZPOOL create $TESTPOOL1 raidz2 $VDEV0 $VDEV1 $VDEV2 $VDIV3
+perform_test $TESTPOOL1
+
+log_note "Testing import by GUID."
 log_must $ZPOOL create $TESTPOOL1 raidz2 $VDEV0 $VDEV1 $VDEV2 $VDIV3
 typeset guid=$(get_config $TESTPOOL1 pool_guid)
-typeset target=$TESTPOOL1
-if (( RANDOM % 2 == 0 )) ; then
-	target=$guid
-	log_note "Import by guid."
-fi
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_must $ZPOOL create $TESTPOOL2 $VDEV0 $VDEV1
-log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_must $ZPOOL destroy $TESTPOOL2
-log_must $RM -rf $VDEV0 $VDEV1
-log_must $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy $TESTPOOL1
-
-log_note "For raidz2, more than two destroyed pool's devices were used, " \
-	"import failed."
-log_must $MKFILE $FILE_SIZE $VDEV0 $VDEV1
-log_must $ZPOOL create $TESTPOOL2 $VDEV0 $VDEV1 $VDEV2
-log_mustnot $ZPOOL import -d $DEVICE_DIR -D -f $target
-log_must $ZPOOL destroy $TESTPOOL2
+perform_test $guid
 
 log_pass "zpool import -D raidz2 passed."

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_001_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_001_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_001_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -104,12 +104,40 @@ function recreate_files
 	done
 }
 
+function perform_inner_test
+{
+	typeset action=$1
+	typeset import_opts=$2
+	typeset target=$3
+
+	$action $ZPOOL import -d $DEVICE_DIR ${import_opts} $target
+	[[ $action == "log_mustnot" ]] && return
+
+	log_must poolexists $TESTPOOL1
+
+	health=$($ZPOOL list -H -o health $TESTPOOL1)
+	[[ "$health" == "DEGRADED" ]] || \
+		log_fail "ERROR: $TESTPOOL1: Incorrect health '$health'"
+	log_must ismounted $TESTPOOL1/$TESTFS
+
+	basedir=$TESTDIR1
+	[[ -n "${import_opts}" ]] && basedir=$ALTER_ROOT/$TESTDIR1
+	[[ ! -e "$basedir/$TESTFILE0" ]] && \
+		log_fail "ERROR: $basedir/$TESTFILE0 missing after import."
+
+	checksum2=$($SUM $basedir/$TESTFILE0 | $AWK '{print $1}')
+	[[ "$checksum1" != "$checksum2" ]] && \
+		log_fail "ERROR: Checksums differ ($checksum1 != $checksum2)"
+
+	log_must $ZPOOL export $TESTPOOL1
+}
+
 log_onexit cleanup
 
 log_assert "Verify that import could handle damaged or missing device."
 
 CWD=$PWD
-cd $DEVICE_DIR || log_fail "Unable change directory to $DEVICE_DIR"
+cd $DEVICE_DIR || log_fail "ERROR: Unable change directory to $DEVICE_DIR"
 
 checksum1=$($SUM $MYTESTFILE | $AWK '{print $1}')
 
@@ -171,35 +199,11 @@ while (( i < ${#vdevs[*]} )); do
 					;;
  			esac
 
-			typeset target=$TESTPOOL1
-			if (( RANDOM % 2 == 0 )) ; then
-				target=$guid
-				log_note "Import by guid."
-			fi
-			$action $ZPOOL import \
-				-d $DEVICE_DIR ${options[j]} $target
-
-			[[ $action == "log_mustnot" ]] && continue
-
-			log_must poolexists $TESTPOOL1
-
-			health=$($ZPOOL list -H -o health $TESTPOOL1)
-
-			[[ $health == "DEGRADED" ]] || \
-				log_fail "$TESTPOOL1: Incorrect health($health)" 
-			log_must ismounted $TESTPOOL1/$TESTFS
-
-			basedir=$TESTDIR1
-			[[ -n ${options[j]} ]] && \
-				basedir=$ALTER_ROOT/$TESTDIR1
-
-			[[ ! -e $basedir/$TESTFILE0 ]] && \
-				log_fail "$basedir/$TESTFILE0 missing after import."
-
-			checksum2=$($SUM $basedir/$TESTFILE0 | $AWK '{print $1}')
-			[[ "$checksum1" != "$checksum2" ]] && \
-				log_fail "Checksums differ ($checksum1 != $checksum2)"
+			log_note "Testing import by name."
+			perform_inner_test $action "${options[j]}" $TESTPOOL1
 
+			log_note "Testing import by GUID."
+			perform_inner_test $action "${options[j]}" $guid
 		done
 
 		((j = j + 1))

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_002_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_002_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_missing_002_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -97,9 +97,7 @@ function cleanup_all
 	typeset i=0
 	while (( i < $MAX_NUM )); do
 		typeset dev_file=${DEVICE_DIR}/${DEVICE_FILE}$i
-		if [[ ! -e ${dev_file} ]]; then
-			log_must $MKFILE $FILE_SIZE ${dev_file}
-		fi
+		[ ! -e ${dev_file} ] && log_must $MKFILE $FILE_SIZE ${dev_file}
 		((i += 1))
 	done
 
@@ -186,14 +184,18 @@ while (( i < ${#vdevs[*]} )); do
 					;;
  			esac
 
-			typeset target=$TESTPOOL1
-			if (( RANDOM % 2 == 0 )) ; then
-				target=$guid
-				log_note "Import by guid."
-			fi
+			log_note "Testing import by name."
 			$action $ZPOOL import \
-				-d $DEVICE_DIR ${options[j]} $target
+				-d $DEVICE_DIR ${options[j]} $TESTPOOL1
 
+			# We have to test for pool existence since action
+			# may be 'log_mustnot'.
+			poolexists $TESTPOOL1 && \
+				log_must $ZPOOL export $TESTPOOL1
+
+			log_note "Testing import by GUID."
+			$action $ZPOOL import \
+				-d $DEVICE_DIR ${options[j]} $guid
 		done
 
 		((j = j + 1))

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_rename_001_pos.ksh
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_rename_001_pos.ksh	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_rename_001_pos.ksh	Wed Dec 16 18:29:54 2015	(r292346)
@@ -104,6 +104,40 @@ function cleanup
 		log_must $RM -rf $ALTER_ROOT
 }
 
+function perform_inner_test
+{
+	target=$1
+
+	log_must $ZPOOL import ${devs[i]} ${options[j]} \
+		$target ${pools[i]}-new
+
+	log_must poolexists "${pools[i]}-new"
+
+	log_must ismounted ${pools[i]}-new/$TESTFS
+
+	basedir=${mtpts[i]}
+	[[ -n ${options[j]} ]] && \
+		basedir=$ALTER_ROOT/${mtpts[i]}
+
+	[[ ! -e $basedir/$TESTFILE0 ]] && \
+		log_fail "$basedir/$TESTFILE0 missing after import."
+
+	checksum2=$($SUM $basedir/$TESTFILE0 | $AWK '{print $1}')
+	[[ "$checksum1" != "$checksum2" ]] && \
+		log_fail "Checksums differ ($checksum1 != $checksum2)"
+
+	log_must $ZPOOL export "${pools[i]}-new"
+
+	[[ -d /${pools[i]}-new ]] && \
+		log_must $RM -rf /${pools[i]}-new
+
+	target=${pools[i]}-new
+	if (( RANDOM % 2 == 0 )) ; then
+		target=$guid
+	fi
+	log_must $ZPOOL import ${devs[i]} $target ${pools[i]}
+}
+
 log_onexit cleanup
 
 log_assert "Verify that an imported pool can be renamed."
@@ -128,40 +162,13 @@ while (( i < ${#pools[*]} )); do
 		[[ -d /${pools[i]} ]] && \
 			log_must $RM -rf /${pools[i]}
 
-		typeset target=${pools[i]}
-		if (( RANDOM % 2 == 0 )) ; then
-			target=$guid
-			log_note "Import by guid."
-		fi
-		
-		log_must $ZPOOL import ${devs[i]} ${options[j]} \
-			$target ${pools[i]}-new
-
-		log_must poolexists "${pools[i]}-new"
-
-		log_must ismounted ${pools[i]}-new/$TESTFS
-
-		basedir=${mtpts[i]}
-		[[ -n ${options[j]} ]] && \
-			basedir=$ALTER_ROOT/${mtpts[i]}
-	
-		[[ ! -e $basedir/$TESTFILE0 ]] && \
-			log_fail "$basedir/$TESTFILE0 missing after import."
-
-		checksum2=$($SUM $basedir/$TESTFILE0 | $AWK '{print $1}')
-		[[ "$checksum1" != "$checksum2" ]] && \
-			log_fail "Checksums differ ($checksum1 != $checksum2)"
-
-		log_must $ZPOOL export "${pools[i]}-new"
-
-		[[ -d /${pools[i]}-new ]] && \
-			log_must $RM -rf /${pools[i]}-new
-
-		target=${pools[i]}-new
-		if (( RANDOM % 2 == 0 )) ; then
-			target=$guid
-		fi
-		log_must $ZPOOL import ${devs[i]} $target ${pools[i]}
+		log_note "Testing import by name."
+		perform_inner_test ${pools[i]}
+
+		log_must $ZPOOL export ${pools[i]}
+
+		log_note "Testing import by GUID."
+		perform_inner_test $guid
 
 		((j = j + 1))
 	done

Modified: projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_upgrade/zpool_upgrade.kshlib
==============================================================================
--- projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_upgrade/zpool_upgrade.kshlib	Wed Dec 16 17:45:03 2015	(r292345)
+++ projects/zfsd/head/tests/sys/cddl/zfs/tests/cli_root/zpool_upgrade/zpool_upgrade.kshlib	Wed Dec 16 18:29:54 2015	(r292346)
@@ -126,21 +126,15 @@ function check_poolversion { # pool vers
 	VERSION=$2
 
 	# check version using zdb
-	ACTUAL=$($ZDB -eC -p $TMPDIR $POOL | $GREP version: | \
-	 $SED -e 's/ //g' -e 's/version://g')
-
-	if [ "$ACTUAL" != "$VERSION" ]
-	then
-		log_fail "$POOL not upgraded, ver. $ACTUAL, expected $VERSION"

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

