Large-scale storage arrays are in constant demand from universities, government agencies, web search engines, and research laboratories. This unrelenting need for data storage has pushed storage arrays to unprecedented scales. As storage systems outgrow the terabyte class and move into the petabyte range, these colossal arrays begin to expose design limitations. This thesis focuses primarily on disk drives as the building blocks of reliable large-scale storage arrays. As a feasibility baseline, the overall reliability of a large-scale storage array should exceed that of a single disk. However, petabyte- and exabyte-sized systems, which require thousands to millions of disk drives, present a serious reliability challenge. Multi-level redundancy schemes must therefore be used to mitigate this decline in reliability. This work, building on the redundant arrays of independent disks (RAID) research of Patterson et al., introduces a reliability analysis of dual- and tri-level Grouped RAID (GRAID) configurations. As storage arrays rapidly increase in size, multi-level redundancy becomes essential. Design recommendations for large-scale storage arrays ranging from 100 tebibytes (TiB) to 100 exbibytes (EiB) can be generated with a custom reliability calculator tool written in MATLAB. Analysis of these design recommendations shows that dual-level GRAID configurations are recommended only for array magnitudes up to 5 PiB. Beyond this threshold, tri-level GRAID remains feasible for storage magnitudes up to 100 EiB and beyond.
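The case for multi-level redundancy made above can be illustrated with a simple binomial reliability model. The following is a minimal sketch in Python rather than the thesis's MATLAB calculator; the group size, group count, failure tolerances, and per-disk survival probability are illustrative assumptions, not values taken from this work.

```python
from math import comb

def k_of_n_reliability(n, f, p):
    """Probability that a group of n independent components survives
    when up to f of them may fail; p is the per-component survival
    probability over the mission time."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(f + 1))

# Assumed per-disk survival probability over the mission time (illustrative).
p_disk = 0.99

# Single-level redundancy: 100 groups of 10 disks, each group tolerating
# one disk failure, and every group must survive.
r_group = k_of_n_reliability(10, 1, p_disk)
r_flat = r_group ** 100

# Two-level (GRAID-like) redundancy: the same 100 groups, but a top-level
# redundancy layer tolerates the loss of any one whole group.
r_grouped = k_of_n_reliability(100, 1, r_group)

print(f"group reliability:      {r_group:.6f}")
print(f"single-level array:     {r_flat:.6f}")
print(f"two-level (GRAID-like): {r_grouped:.6f}")
```

Under these assumptions the two-level array is markedly more reliable than the flat one, which mirrors the abstract's claim that stacking redundancy levels slows the reliability decline as disk counts grow.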