Essentially I believe the idea of a redundant array sounds safer than it
really is in practice, especially when dealing with very large arrays
and with level 5 arrays. The reasons why this is so are manifold;
suffice it to say that a few years of actually using such devices shows
that they have much more potential for catastrophic failure and latent
failure (you don't know it's broken until you go to use it and find out
it's broken) than a well designed tape archive or backup.
Not that disk-to-disk backups are a completely bad idea. In my
experience a combination works best: for example, automatic backups to
reserved disks or disk arrays on remote systems every night, plus
weekly tape snapshots of that data. It's a lot of tapes, but over time
it will prove to be worthwhile. If the data volume is too high, simple
backup scripts that write every file only once (essentially an archive)
to tape can make it more practical.
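As a rough illustration of that last idea, here's a minimal sketch of a
"write every file only once" archive script. Everything in it (the
manifest file name, using content checksums to decide whether a file
has already been written, appending to a tar file as a stand-in for the
tape device) is my own assumption about how one might do it, not a
description of any particular tool:

```python
#!/usr/bin/env python3
# Sketch: append only never-before-archived files to a tar archive.
# The manifest path and checksum scheme are illustrative assumptions.
import hashlib
import os
import tarfile

MANIFEST = "archived.txt"  # hypothetical record of checksums already on tape

def file_digest(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def archive_new_files(src_dir, archive_path):
    """Append files not yet listed in MANIFEST; return the paths added."""
    seen = set()
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            seen = {line.strip() for line in f}
    added = []
    # "a" mode appends to an uncompressed tar, creating it if missing.
    with tarfile.open(archive_path, "a") as tar, open(MANIFEST, "a") as mf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                digest = file_digest(path)
                if digest in seen:
                    continue  # already archived once; never write it again
                tar.add(path)
                mf.write(digest + "\n")
                seen.add(digest)
                added.append(path)
    return added
```

Run nightly against the disk-to-disk copy, each pass writes only what
is new, so the tape side stays a compact one-copy-per-file archive.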
-Kanoa
Roy Sigurd Karlsbakk wrote:
>On Thursday 03 October 2002 13:20, jbradford@dial.pipex.com wrote:
>
>>Might it not be a good idea to DD the raw contents of each disk to a tape
>>drive, just incase you fubar the array? It would be time consuming, but at
>>least you could restore your data in the event that it gets corrupted.
>>
>
>er
>
>16 120GB disks?
>
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/