degraded zfs slowdown

Randy Bush randy at psg.com
Sun Mar 14 03:26:48 UTC 2010


i lost a drive on a remote server.  i had to use tw_cli from single-user
mode at boot time to remove the failed drive from the controller, as it
was making the whole controller unusable.
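
fwiw, the controller-side removal was roughly along these lines; the
controller and port numbers below are illustrative rather than an exact
transcript:

    # tw_cli /c0 show
    # tw_cli /c0/p1 remove

(the show step lists the controller's ports and units, so you can pick
out which port the dead drive is on before removing it.)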

so now, while waiting for the replacement drive to ship in, i have

    # df
    Filesystem                         1024-blocks     Used     Avail Capacity  Mounted on
    /dev/twed0s1a                           253678   198102     35282    85%    /
    /dev/twed0s1h                            63254     2414     55780     4%    /root
    tank                                 154191872    16256 154175616     0%    /tank
    tank/usr                             173331328 19155712 154175616    11%    /usr
    tank/usr/home                        213014784 58839168 154175616    28%    /usr/home
    tank/var                             157336192  3160576 154175616     2%    /var
    tank/var/spool                       154475392   299776 154175616     0%    /var/spool
    /dev/md0                                126702      156    116410     0%    /tmp
    devfs                                        1        1         0   100%    /dev
    procfs                                       4        4         0   100%    /proc

and

    # zpool status
      pool: tank
     state: DEGRADED
    status: One or more devices has experienced an unrecoverable error.  An
	    attempt was made to correct the error.  Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
	    using 'zpool clear' or replace the device with 'zpool replace'.
       see: http://www.sun.com/msg/ZFS-8000-9P
     scrub: none requested
    config:

	    NAME        STATE     READ WRITE CKSUM
	    tank        DEGRADED     0     0     0
	      mirror    DEGRADED     0     0     0
		twed1   REMOVED      0     2     0
		twed2   ONLINE       0     0     0

    errors: No known data errors

but the system is extremely soggy and hard to light; everything is
painfully slow.
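
presumably the way to see how much the degraded mirror is dragging i/o
down would be something like

    # zpool iostat -v tank 5
    # gstat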

do i need to do some sort of remove at the zfs layer?
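
i assume the zfs-layer options boil down to something like the following
(device names as they are now; the replacement may well show up under a
different name):

    # zpool offline tank twed1
    # zpool detach tank twed1
    # zpool replace tank twed1

offline just quiets the missing disk while keeping its slot in the
mirror, detach drops twed1 from the mirror entirely (a new disk can be
attached later with 'zpool attach tank twed2 <newdisk>'), and replace is
for when the new drive is in on the same port and should kick off a
resilver.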

randy
