Automatic Managed Filestores - Practice and Experience (Abstract)

D. J. Rigby
{"title":"自动管理文件存储-实践与经验(摘要)","authors":"D. J. Rigby","doi":"10.1109/MASS.1994.373033","DOIUrl":null,"url":null,"abstract":"At the Rutherford Appleton Laboratory (RAL), we have over many years run a range of different filestore systems with automatic migration and hierarchy management. All provide transparent automatic-recall facilities from tape-style media to disk when data are referenced by a user or an application; therefore, these systems provide more or less transparent support for a filestore larger than the available disk space. However, the systems on IBM MVS (ASM2), IBM VM (local), Cray COS (archiver), and Cray Unicos (dm) have many major differences, and some of these systems have been more satisfactory than others. In this paper, I shall briefly describe the system we developed and still run in VM. It has some features that are unique among those systems mentioned. Firstly, it was designed from the start as a total filestore management system (for the standard IBM W C M S minidisk system) that happens to store data in devices at different parts of a hierarchy rather than as a system compromised by being grafted onto a separate disk filestore. Secondly, it migrates and recalls files always in groups (minidisks). In general, I shall attempt to highlight good and bad points of the various systems that are relevant to the design considerations of new managed storage systems. I shall describe the managed storage and network access system we have recently developed based on our and others’ previous experiences. Our system is now managing over 10 terabytes (Tbytes) of data that were traditionally stored on unmanaged tapes and processed in a central mainframe complex. The system completely separates the issues related to managing the data, the hierarchy of the hardware, and resilience from the issues related to user and application interfaces to the data. In addition, the system attempts to provide both a traditional-style (“virtual-tape”) interface which is still the only widely understood bulk-data interface natural to the full range of systems (Unix, VMS, VM, and MVS) that are still in use and more abstract “modem”-style interfaces. We have taken the issue of data integrity very seriously, and I shall describe how we have approached this issue. I shall also describe the extra features that we have considered essential but are not generally found on other mass storage systems. While our design has benefited from the IEEE mass storage model, the system is above all a working one, and ideals have at times been compromised. The system has been running for over 2 years, handling over 10 Tbits of data. It is allowing us, with little disruption to users, to phase out large holdings of unmanaged nine-track and 18-track tapes and to give a much more reliable, more secure, and faster service, using a hierarchy of modem 0.5-inch tape robots and cost-effective, manual, 4-millimeter (mm) cassettes. Our system interfaces to any network-attached system and is used by users and applications on all our types of computer (Unix, VMS, VM, and DOS) for both shared and unshared data.","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. 
Toward Distributed Storage and Data Management Systems","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automatic Managed Filestores - Practice and Experience (Abstract)\",\"authors\":\"D. J. Rigby\",\"doi\":\"10.1109/MASS.1994.373033\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"At the Rutherford Appleton Laboratory (RAL), we have over many years run a range of different filestore systems with automatic migration and hierarchy management. All provide transparent automatic-recall facilities from tape-style media to disk when data are referenced by a user or an application; therefore, these systems provide more or less transparent support for a filestore larger than the available disk space. However, the systems on IBM MVS (ASM2), IBM VM (local), Cray COS (archiver), and Cray Unicos (dm) have many major differences, and some of these systems have been more satisfactory than others. In this paper, I shall briefly describe the system we developed and still run in VM. It has some features that are unique among those systems mentioned. Firstly, it was designed from the start as a total filestore management system (for the standard IBM W C M S minidisk system) that happens to store data in devices at different parts of a hierarchy rather than as a system compromised by being grafted onto a separate disk filestore. Secondly, it migrates and recalls files always in groups (minidisks). In general, I shall attempt to highlight good and bad points of the various systems that are relevant to the design considerations of new managed storage systems. I shall describe the managed storage and network access system we have recently developed based on our and others’ previous experiences. Our system is now managing over 10 terabytes (Tbytes) of data that were traditionally stored on unmanaged tapes and processed in a central mainframe complex. The system completely separates the issues related to managing the data, the hierarchy of the hardware, and resilience from the issues related to user and application interfaces to the data. In addition, the system attempts to provide both a traditional-style (“virtual-tape”) interface which is still the only widely understood bulk-data interface natural to the full range of systems (Unix, VMS, VM, and MVS) that are still in use and more abstract “modem”-style interfaces. We have taken the issue of data integrity very seriously, and I shall describe how we have approached this issue. I shall also describe the extra features that we have considered essential but are not generally found on other mass storage systems. While our design has benefited from the IEEE mass storage model, the system is above all a working one, and ideals have at times been compromised. The system has been running for over 2 years, handling over 10 Tbits of data. It is allowing us, with little disruption to users, to phase out large holdings of unmanaged nine-track and 18-track tapes and to give a much more reliable, more secure, and faster service, using a hierarchy of modem 0.5-inch tape robots and cost-effective, manual, 4-millimeter (mm) cassettes. 
Our system interfaces to any network-attached system and is used by users and applications on all our types of computer (Unix, VMS, VM, and DOS) for both shared and unshared data.\",\"PeriodicalId\":436281,\"journal\":{\"name\":\"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MASS.1994.373033\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MASS.1994.373033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

At the Rutherford Appleton Laboratory (RAL), we have over many years run a range of different filestore systems with automatic migration and hierarchy management. All provide transparent automatic-recall facilities from tape-style media to disk when data are referenced by a user or an application; therefore, these systems provide more or less transparent support for a filestore larger than the available disk space. However, the systems on IBM MVS (ASM2), IBM VM (local), Cray COS (archiver), and Cray Unicos (dm) have many major differences, and some of these systems have been more satisfactory than others. In this paper, I shall briefly describe the system we developed and still run in VM. It has some features that are unique among those systems mentioned. Firstly, it was designed from the start as a total filestore management system (for the standard IBM VM/CMS minidisk system) that happens to store data in devices at different parts of a hierarchy, rather than as a system compromised by being grafted onto a separate disk filestore. Secondly, it always migrates and recalls files in groups (minidisks). In general, I shall attempt to highlight good and bad points of the various systems that are relevant to the design considerations of new managed storage systems. I shall describe the managed storage and network access system we have recently developed based on our and others’ previous experiences. Our system is now managing over 10 terabytes (Tbytes) of data that were traditionally stored on unmanaged tapes and processed in a central mainframe complex. The system completely separates the issues related to managing the data, the hierarchy of the hardware, and resilience from the issues related to user and application interfaces to the data. In addition, the system attempts to provide both a traditional-style (“virtual-tape”) interface, which is still the only widely understood bulk-data interface natural to the full range of systems (Unix, VMS, VM, and MVS) that are still in use, and more abstract “modern”-style interfaces. We have taken the issue of data integrity very seriously, and I shall describe how we have approached this issue. I shall also describe the extra features that we have considered essential but are not generally found on other mass storage systems. While our design has benefited from the IEEE mass storage model, the system is above all a working one, and ideals have at times been compromised. The system has been running for over 2 years, handling over 10 Tbytes of data. It is allowing us, with little disruption to users, to phase out large holdings of unmanaged nine-track and 18-track tapes and to give a much more reliable, more secure, and faster service, using a hierarchy of modern 0.5-inch tape robots and cost-effective, manual, 4-millimeter (mm) cassettes. Our system interfaces to any network-attached system and is used by users and applications on all our types of computer (Unix, VMS, VM, and DOS) for both shared and unshared data.
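
The two design points the abstract singles out, transparent recall when data are referenced and migration/recall of whole file groups ("minidisks") rather than individual files, can be illustrated with a minimal sketch. This is not the paper's implementation: the MinidiskStore class, the two-directory tier layout, and all method names below are hypothetical, standing in for the real disk and tape levels of the hierarchy.

# Illustrative sketch only (not from the paper): a toy two-tier hierarchy
# manager in which a "minidisk" (a group of files) is migrated and recalled
# as a unit, and recall happens transparently when a file is referenced.
# All class, path, and method names here are hypothetical.

import shutil
from pathlib import Path


class MinidiskStore:
    def __init__(self, disk_tier: Path, tape_tier: Path):
        # disk_tier plays the role of the online disk filestore;
        # tape_tier stands in for the slower tape-style level of the hierarchy.
        # Both directories are assumed to exist.
        self.disk_tier = disk_tier
        self.tape_tier = tape_tier

    def migrate(self, minidisk: str) -> None:
        # Free disk space by moving the whole group, never single files.
        src = self.disk_tier / minidisk
        if src.exists():
            shutil.move(str(src), str(self.tape_tier / minidisk))

    def _recall(self, minidisk: str) -> None:
        # Bring the whole group back; callers never address the tape tier.
        src = self.tape_tier / minidisk
        if src.exists():
            shutil.move(str(src), str(self.disk_tier / minidisk))

    def open(self, minidisk: str, filename: str, mode: str = "rb"):
        # Transparent automatic recall: if the referenced file is not on
        # disk, recall its minidisk first, then open it as usual.
        path = self.disk_tier / minidisk / filename
        if not path.exists():
            self._recall(minidisk)
        return path.open(mode)

In the system the abstract describes, the lower tier would be robot-mounted 0.5-inch tape or manual 4 mm cassettes rather than a second directory, and the recall-on-reference hook would live inside the filestore itself so that users and applications see only an ordinary minidisk.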