zfs on nvme: gnop breaks pool, zfs gets stuck

Gerrit Kühn gerrit.kuehn at aei.mpg.de
Thu Apr 28 05:48:50 UTC 2016


On Wed, 27 Apr 2016 15:14:36 +0100 Gary Palmer <gpalmer at freebsd.org> wrote
about Re: zfs on nvme: gnop breaks pool, zfs gets stuck:

GP> vfs.zfs.min_auto_ashift
GP> 
GP> which lets you manage the ashift on a new pool without having to try
GP> the gnop trick
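
(For the archives: the sysctl only affects vdevs created after it is set, so it has to be in place before "zpool create". Something along these lines; the /etc/sysctl.conf entry is just the standard FreeBSD way of keeping it across reboots, not something from this thread:)

---
# require at least 4k sectors (2^12 bytes) for newly created vdevs
sysctl vfs.zfs.min_auto_ashift=12

# make the setting persistent across reboots
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf
---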

I just tried this, and it appears to work fine:

---
root@storage:~ # sysctl vfs.zfs.min_auto_ashift
vfs.zfs.min_auto_ashift: 12

root@storage:~ # zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	data            ONLINE       0     0     0
	  raidz2-0      ONLINE       0     0     0
	    gpt/disk0   ONLINE       0     0     0
	    gpt/disk1   ONLINE       0     0     0
	    gpt/disk2   ONLINE       0     0     0
	    gpt/disk3   ONLINE       0     0     0
	    gpt/disk4   ONLINE       0     0     0
	    gpt/disk5   ONLINE       0     0     0
	    gpt/disk6   ONLINE       0     0     0
	    gpt/disk7   ONLINE       0     0     0
	    gpt/disk8   ONLINE       0     0     0
	    gpt/disk9   ONLINE       0     0     0
	    gpt/disk10  ONLINE       0     0     0
	    gpt/disk11  ONLINE       0     0     0

errors: No known data errors

  pool: flash
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	flash           ONLINE       0     0     0
	  raidz1-0      ONLINE       0     0     0
	    gpt/flash0  ONLINE       0     0     0
	    gpt/flash1  ONLINE       0     0     0
	    gpt/flash2  ONLINE       0     0     0

errors: No known data errors

root@storage:~ # zdb | grep ashift
            ashift: 12
            ashift: 12

root@storage:~ # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data     65T  1.88M  65.0T         -     0%     0%  1.00x  ONLINE  -
flash  1.39T   800K  1.39T         -     0%     0%  1.00x  ONLINE  -

---


I still wonder why the gnop workaround went so terribly wrong. Anyway,
thanks again for pointing out this new sysctl to me!
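
For readers finding this in the archives: the gnop trick means creating fake 4k-sector providers on top of the real devices, building the pool on those, and then removing the gnop layer again. Roughly like this (using the provider names from my flash pool above; my actual invocation may have differed):

---
# create 4k-sector gnop providers on top of the real partitions
gnop create -S 4096 gpt/flash0 gpt/flash1 gpt/flash2

# create the pool on the .nop devices so ZFS picks ashift=12
zpool create flash raidz1 gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop

# export, drop the gnop layer, and import again on the raw providers
zpool export flash
gnop destroy gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop
zpool import flash
---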

And for the record: this is what I get with a simple linear write test:

---
root@storage:~ # dd if=/dev/zero of=/flash/Z bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 3.912829 secs (2679840997 bytes/sec)
---
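
One caveat: if compression is enabled on the dataset, a stream of zeros compresses to almost nothing, so a run with incompressible data would be more telling. An untested sketch of that:

---
# same test with data that cannot be compressed away;
# FreeBSD's /dev/random does not block once seeded
dd if=/dev/random of=/flash/R bs=1024k count=10000
---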


That works out to roughly 2.7GB/s, so I guess I won't complain...


cu
  Gerrit

