gvinum - problem on hard disk
Felipe Neuwald
felipe at neuwald.biz
Mon Oct 22 03:57:51 PDT 2007
Hi Ulf,
Thank you for your information. As you can see, it worked:
[root at fileserver ~]# gvinum list
4 drives:
D a State: up /dev/ad4 A: 0/238474 MB (0%)
D b State: up /dev/ad5 A: 0/238475 MB (0%)
D c State: up /dev/ad6 A: 0/238475 MB (0%)
D d State: up /dev/ad7 A: 0/238475 MB (0%)
1 volume:
V data State: up Plexes: 1 Size: 931 GB
1 plex:
P data.p0 S State: up Subdisks: 4 Size: 931 GB
4 subdisks:
S data.p0.s3 State: up D: d Size: 232 GB
S data.p0.s2 State: up D: c Size: 232 GB
S data.p0.s1 State: up D: b Size: 232 GB
S data.p0.s0 State: up D: a Size: 232 GB
[root at fileserver ~]# fsck -t ufs -y /dev/gvinum/data
** /dev/gvinum/data
** Last Mounted on /data
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
258700 files, 419044280 used, 53985031 free (39599 frags, 6743179 blocks, 0.0% fragmentation)
***** FILE SYSTEM MARKED CLEAN *****
[root at fileserver ~]# mount -t ufs /dev/gvinum/data /data
[root at fileserver ~]# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /tmp (ufs, local, soft-updates)
/dev/ad0s1e on /usr (ufs, local, soft-updates)
/dev/ad0s1f on /var (ufs, local, soft-updates)
/dev/gvinum/data on /data (ufs, local)
[root at fileserver ~]#
Now I have to advise the customer again to set up a backup file server.
Thank you very much,
Felipe Neuwald.
Ulf Lilleengen wrote:
On Fri, Oct 19, 2007 at 03:43:14 -0200, Felipe Neuwald wrote:
Hi folks,
I have one gvinum raid on a FreeBSD 6.1-RELEASE machine. There are 4
disks running, as you can see:
[root at fileserver ~]# gvinum list
4 drives:
D a State: up /dev/ad4 A: 0/238474 MB (0%)
D b State: up /dev/ad5 A: 0/238475 MB (0%)
D c State: up /dev/ad6 A: 0/238475 MB (0%)
D d State: up /dev/ad7 A: 0/238475 MB (0%)
1 volume:
V data State: down Plexes: 1 Size: 931 GB
1 plex:
P data.p0 S State: down Subdisks: 4 Size: 931 GB
4 subdisks:
S data.p0.s3 State: stale D: d Size: 232 GB
S data.p0.s2 State: up D: c Size: 232 GB
S data.p0.s1 State: up D: b Size: 232 GB
S data.p0.s0 State: up D: a Size: 232 GB
But, as you can see, data.p0.s3 is "stale". What should I do to try to recover it and get the RAID up again (and recover the information)?
Hello,
Since your plex organization is RAID-0 (striping), recovering after a drive
failure is a problem, since you don't have any redundancy. But if you didn't
replace any drives or make other hardware changes, this could just be gvinum
marking the subdisk stale unnecessarily. In that case, running
'gvinum setstate -f up data.p0.s3' should bring the volume up again.
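For anyone hitting the same situation, the recovery sequence from this thread can be sketched as follows. This assumes, as above, that the striped volume is named "data", the stale subdisk is data.p0.s3, the filesystem is UFS, and no drive was actually replaced; a forced setstate does not verify the on-disk data, so run fsck before mounting.

```shell
# Force the stale subdisk back up. Safe here only because no drive was
# replaced -- on RAID-0 there is no redundancy to rebuild from.
gvinum setstate -f up data.p0.s3

# Confirm the subdisk, plex, and volume now all show "up".
gvinum list

# Check the filesystem before mounting, answering yes to repairs.
fsck -t ufs -y /dev/gvinum/data

# Mount the volume.
mount -t ufs /dev/gvinum/data /data
```

Note that this only helps when the "stale" flag is spurious; if the disk genuinely failed, the striped data is unrecoverable without a backup.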
More information about the freebsd-geom mailing list