If file writes are in progress and power is lost, there's real potential for data corruption or even losing the array. Without a battery-backed hardware RAID controller, users really should have servers running RAID on a UPS. (In my opinion, a UPS is good insurance in any case.)

SNAPRAID does a few things you may not be aware of:

- SNAPRAID can restore a disk (just like mdadm RAID).
- It's a type of backup for files and folders.
- Deleted files and folders can be restored, as of their state during the last SYNC.
- It can scrub for silent errors, which means it protects from bit rot.
- If you lose a disk and decide not to recover it, you've only lost the data on that particular disk. The same applies if you lose two data disks: any remaining data disks still have their data. If two disks are lost in RAID 5, it's over.

mdadm RAID can't do any of the bulleted features listed above. While the loss of a disk is not to be taken lightly (it will happen eventually), it is still an infrequent event. In such an event, with SNAPRAID, you'd only lose files added since the last SYNC. If you feel like you've done a lot of work, you can always run a manual sync at the end of the day. Realistically, how many files are you adding between SYNC operations?

What mdadm RAID does provide:

- an aggregation of disks - the group appears as one large drive
- recovery of a single drive (with notable risk)
- increased parallel I/O (which can't be used)

Where mdadm RAID is concerned, its only usable features are the common mount point and the ability to recreate a dead disk. Unfortunately, when recreating a disk, it thrashes the remaining disks, putting them through a torture test as it re-silvers the new disk. Given the size of today's disks, this may go on for several hours to days. This may result in the loss of an old or weak second disk, which means losing the entire data store. The third feature mdadm RAID provides is increased I/O, which is not usable in most home environments. (The bottleneck is the network, at 1Gb/s.)
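To make the SNAPRAID side concrete, here is a minimal sketch of a `snapraid.conf` for a small home server. The device names, mount points, and exclude patterns are assumptions for illustration, not a recommended layout:

```
# Hypothetical /etc/snapraid.conf -- paths are examples only.
# One parity file protects the data disks below.
parity /mnt/parity1/snapraid.parity

# Keep multiple copies of the content (state) file on different disks.
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# Data disks to protect; each appears under its own mount point.
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# Files not worth protecting.
exclude *.tmp
exclude /lost+found/
```

With a config like this, `snapraid sync` records the current state, `snapraid scrub` checks existing data for silent errors, and `snapraid fix` restores lost or deleted files from parity.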
The write hole you're talking about is with regard to mdadm and hardware RAID controllers. With SNAPRAID, the exposure occurs when there is new data on a data disk and another disk dies before the new data is synced.
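That window between writing new files and the next SYNC can be kept small by syncing on a schedule. A hypothetical root crontab entry for a nightly sync (the binary path, time, and log location are assumptions):

```
# Hypothetical crontab entry: sync the SnapRAID array at 03:00 daily.
0 3 * * * /usr/bin/snapraid sync >> /var/log/snapraid-sync.log 2>&1
```

Anything added after the last scheduled or manual sync is still at risk until the next run, which is why a manual `snapraid sync` after a big batch of changes is a reasonable habit.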