can snapshots become corrupted ? Is fsck'ing /dev/md0 sensible ?

Joe Schmoe non_secure at
Sat Jan 21 09:26:45 PST 2006


Thank you very much for your response.

On Sat, 21 Jan 2006, Oliver Fromme wrote:

> Joe Schmoe <non_secure at> wrote:
>  > Let's say I have a running filesystem, and the
>  > system crashes, and (for whatever reason) I mount
>  > and run the filesystem in an unclean state.  While
>  > in this unclean, running state, I create a snapshot
>  > on it.
>  >
>  > Now let's say I unmount the filesystem and fsck it
>  > for real.  It gets marked clean.  Is the snapshot
>  > that resides on that filesystem still dirty ?
>
> Disclaimer:  I haven't tried that, so this is just a
> guess.
>
> Yes, the snapshot is probably still "dirty".  But it
> shouldn't matter, because you can only mount it
> read-only anyway.
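For reference, the way I have been getting at a snapshot (hence the /dev/md0 in the subject line) is by attaching the snapshot file to a vnode-backed memory disk with mdconfig(8) and then fsck'ing or mounting the resulting device.  A sketch of that procedure - the snapshot path, mount point, and md unit number are placeholders, and this is FreeBSD-specific:

```shell
# Attach the snapshot file to a vnode-backed memory disk.
# /fs/.snap/snap0 is a placeholder path to an existing UFS snapshot.
mdconfig -a -t vnode -f /fs/.snap/snap0 -u 0

# Check the snapshot image without modifying it:
# -n answers "no" to all questions, so nothing is written.
fsck_ffs -n /dev/md0

# Mount the snapshot read-only (snapshots cannot be mounted read-write).
mount -o ro /dev/md0 /mnt/snap

# When finished, unmount and detach the memory disk.
umount /mnt/snap
mdconfig -d -u 0
```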

Ok, yes, the snapshot can only be mounted read-only -
this is true.  However, the snapshot itself (whether
mounted or not) is continually being updated as files
are changed or deleted on the filesystem in question.
So if the snapshot is corrupt, and I start making
changes/deletions on the (now clean) filesystem,
wouldn't there be problems ?

> It probably depends how "dirty" it is.  If you had
> soft updates enabled and the disk is reliable (i.e.
> not an IDE/ATA disk with write-cache enabled), then
> there are only unused blocks not marked as free.  It
> is safe to mount such a filesystem.  But in all other
> cases, mounting a dirty filesystem read/write
> (forcibly) can indeed cause instability.  It doesn't
> matter if snapshots are involved or not.

Ok, understood.  However, once I do a full and
successful fsck on that filesystem, it is completely
safe again, regardless of how long or how often I ran
it while it was dirty, right ?
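By "full and successful fsck" I mean a forced
foreground check of the unmounted filesystem, along
these lines (the device name and mount point are
placeholders):

```shell
# Unmount the filesystem, then force a full check even if the
# superblock is marked clean (-f).  /dev/ad0s1f is a placeholder.
umount /fs
fsck -t ufs -f /dev/ad0s1f
mount /fs
```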

Here are some further, chronological details of what I
did:

- I had a perfectly clean filesystem
- I made 2-3 snapshots on that filesystem
- the system crashed
- I _did not_ fsck the filesystem completely
- I made 2-3 _more_ snapshots on that filesystem
- suddenly, all attempts to rsync the files from this
filesystem to a remote host caused an almost immediate
hard lock of the system
- I then did a full, successful fsck of the
filesystem.  It is now totally clean.
- rsync of that filesystem to a remote system still
causes almost immediate crashes
- I mounted the filesystem read-only
- the rsync _succeeds_ (note, when the filesystem is
read-only, softupdates are disabled)
- I mount it read-write again, and the rsync crashes
the system again
- I delete all snapshots on the filesystem, and now
the rsync works perfectly, as expected, whether the
filesystem is read-write or read-only.
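The last step amounted to nothing more than removing
the snapshot files, which by convention live in the
.snap directory at the filesystem root (the paths and
names here are placeholders):

```shell
# UFS snapshots are ordinary (if special) files; deleting
# them releases the snapshot.  Names are placeholders.
ls /fs/.snap
rm /fs/.snap/snap0 /fs/.snap/snap1
```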

So the first question is the one I am asking, and
requesting further clarification about, above: the
question of corrupt snapshots residing on otherwise
clean filesystems.

The second question I need to ask is, when I am
rsyncing this filesystem to a remote host, why is it
not a read-only operation ?  My rsync process, because
this filesystem was the _source_, and not the
destination, should not have written anything to this
filesystem.  However, it succeeded when the fs was
read-only (softupdates were off) and it failed when
the filesystem was read-write (softupdates on).  Is
there some kind of manipulation of the source
filesystem that rsync performs that would be
equivalent to a lot of writing to the source disk ?

It is my understanding that soft-updates only deal
with writes to the disk, so I am very confused about
that behavior.

Thank you very much for your help.


More information about the freebsd-fs mailing list