ZFS help!
Mike Tancsa
mike at sentex.net
Mon Jan 31 19:15:55 UTC 2011
On 1/29/2011 10:13 PM, James R. Van Artsdalen wrote:
> On 1/28/2011 4:46 PM, Mike Tancsa wrote:
>>
>> I had just added another set of disks to my zfs array. It looks like the
>> drive cage with the new drives is faulty. I had added a couple of files
>> to the main pool, but not much. Is there any way to restore the pool
>> below? I have a lot of files on ad0,1,4,6 and ada4,5,6,7, and perhaps
>> one file on the new drives in the bad cage.
>
> Get another enclosure and verify it works OK. Then move the disks from
> the suspect enclosure to the tested enclosure and try to import the pool.
>
> The problem may be cabling or the controller instead - you didn't
> specify how the disks were attached or which version of FreeBSD you're
> using.
>
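(For anyone following along: the move-and-import step would look roughly like the sketch below. The -f flag is only my assumption, for the case where the pool was never cleanly exported before the cage failed.)

zpool import             # with no arguments: list pools visible for import
zpool import -f tank1    # force the import if the pool still shows as in use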
OK, good news (for me) it seems. With the new cage, everything appears to be recognized correctly. The pool history is:
...
2010-04-22.14:27:38 zpool add tank1 raidz /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7
2010-06-11.13:49:33 zfs create tank1/argus-data
2010-06-11.13:49:41 zfs create tank1/argus-data/previous
2010-06-11.13:50:38 zfs set compression=off tank1/argus-data
2010-08-06.12:20:59 zpool replace tank1 ad1 ad1
2010-09-16.10:17:51 zpool upgrade -a
2011-01-28.11:45:43 zpool add tank1 raidz /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3
FreeBSD RELENG_8 from last week, 8G of RAM, amd64.
zpool status -v
  pool: tank1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad0     ONLINE       0     0     0
            ad1     ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada8    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
            ada6    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /tank1/argus-data/previous/argus-sites-radium.2011.01.28.16.00
        tank1/argus-data:<0xc6>
        /tank1/argus-data/argus-sites-radium
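If I read zpool(8) right, the tank1/argus-data:<0xc6> entry is a damaged object that ZFS can no longer map back to a pathname (e.g. a file that has since been removed). The usual recipe seems to be: restore or delete the named files, then scrub so the persistent error log gets re-evaluated. A sketch, assuming copies of the two files exist elsewhere:

rm /tank1/argus-data/previous/argus-sites-radium.2011.01.28.16.00
rm /tank1/argus-data/argus-sites-radium
zpool scrub tank1    # re-checks every block and re-evaluates the error log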
0(offsite)# zpool get all tank1
NAME   PROPERTY       VALUE                SOURCE
tank1  size           14.5T                -
tank1  used           7.56T                -
tank1  available      6.94T                -
tank1  capacity       52%                  -
tank1  altroot        -                    default
tank1  health         ONLINE               -
tank1  guid           7336939736750289319  default
tank1  version        15                   default
tank1  bootfs         -                    default
tank1  delegation     on                   default
tank1  autoreplace    off                  default
tank1  cachefile      -                    default
tank1  failmode       wait                 default
tank1  listsnapshots  on                   local
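One possible out: listsnapshots is on, so if any snapshots of tank1/argus-data predate the bad cage, the two named files might be recoverable without going to backups. Worth a quick check:

zfs list -t snapshot -r tank1/argus-data    # any snapshots covering the damaged files?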
Do I just want to do a scrub?
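What I have in mind, unless someone warns me off, is roughly:

zpool scrub tank1        # verify every block now that the cage is replaced
zpool status -v tank1    # re-check the error list once the scrub completes
zpool clear tank1        # then reset the error counters if the list is clean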
Unfortunately, http://www.sun.com/msg/ZFS-8000-8A gives a 503.
---Mike