I'm not against this in principle, but in practice it is almost useless.
Modern disk drives do bad sector remapping at write time, so unless something
is terribly wrong you will never see a write error (which is exactly when
the filesystem could do such remapping). Normally, you will only see
an error like this at read time, at which point it is too late to fix.
If you do an fsck, it will normally detect the read error, try to write
back the repaired data, and thereby cause the device to do the remapping.
It will not normally be possible to regenerate metadata with anything less
than a full fsck (if at all).
> > > Fault tolerance should be done at a lower level than the filesystem.
> >
> > I know it _should_, in a nice and easy world. Unfortunately
> > real life is different. The simple question is: you have tons of
> > low-level drivers for all kinds of storage media, but you have
> > comparably few filesystems. To me this sounds like the preferred
> > place for this type of behaviour is the fs, because all drivers
> > inherit the feature if it lives in the fs.
>
> The sort of feature you are describing would really belong in a
> separate layer, somewhat analogous to the MD driver, but for defect
> management. You could create a virtual block device that has 90% of
> the capacity of the real block device, then allocate spare blocks from
> the real device if and when blocks failed.
Hmm, like the "bad blocks relocation" plugin for EVMS?
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/