Automatic Managed Filestores - Practice and Experience (Abstract)
D. J. Rigby
Proceedings Thirteenth IEEE Symposium on Mass Storage Systems: Toward Distributed Storage and Data Management Systems, 12 June 1994. DOI: 10.1109/MASS.1994.373033
At the Rutherford Appleton Laboratory (RAL), we have over many years run a range of different filestore systems with automatic migration and hierarchy management. All provide transparent automatic-recall facilities from tape-style media to disk when data are referenced by a user or an application; therefore, these systems provide more or less transparent support for a filestore larger than the available disk space. However, the systems on IBM MVS (ASM2), IBM VM (local), Cray COS (archiver), and Cray Unicos (dm) have many major differences, and some of these systems have been more satisfactory than others. In this paper, I shall briefly describe the system we developed and still run in VM. It has some features that are unique among those systems mentioned. Firstly, it was designed from the start as a total filestore management system (for the standard IBM VM/CMS minidisk system) that happens to store data in devices at different parts of a hierarchy, rather than as a system compromised by being grafted onto a separate disk filestore. Secondly, it always migrates and recalls files in groups (minidisks). In general, I shall attempt to highlight good and bad points of the various systems that are relevant to the design considerations of new managed storage systems. I shall describe the managed storage and network access system we have recently developed based on our and others' previous experiences. Our system is now managing over 10 terabytes (Tbytes) of data that were traditionally stored on unmanaged tapes and processed in a central mainframe complex. The system completely separates the issues related to managing the data, the hierarchy of the hardware, and resilience from the issues related to user and application interfaces to the data. 
In addition, the system attempts to provide both a traditional-style ("virtual-tape") interface, which is still the only widely understood bulk-data interface natural to the full range of systems still in use (Unix, VMS, VM, and MVS), and more abstract "modern"-style interfaces. We have taken the issue of data integrity very seriously, and I shall describe how we have approached this issue. I shall also describe the extra features that we have considered essential but that are not generally found on other mass storage systems. While our design has benefited from the IEEE mass storage model, the system is above all a working one, and ideals have at times been compromised. The system has been running for over 2 years, handling over 10 Tbytes of data. It is allowing us, with little disruption to users, to phase out large holdings of unmanaged nine-track and 18-track tapes and to give a much more reliable, more secure, and faster service, using a hierarchy of modern 0.5-inch tape robots and cost-effective, manual, 4-millimeter (mm) cassettes. Our system interfaces to any network-attached system and is used by users and applications on all our types of computer (Unix, VMS, VM, and DOS) for both shared and unshared data.
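The two design points the abstract highlights — whole-group (minidisk) migration and transparent recall when data are referenced — can be sketched as a toy model. This is purely illustrative and not RAL's actual implementation; the `Minidisk` and `HSM` names, the file-count disk budget, and the migration order are all hypothetical simplifications.

```python
# Illustrative sketch of hierarchy management with group (minidisk)
# migration and transparent recall-on-reference. All names and the
# file-count "capacity" measure are hypothetical simplifications.

class Minidisk:
    """A group of files that is always migrated or recalled as one unit."""
    def __init__(self, name, files):
        self.name = name
        self.files = dict(files)   # filename -> contents
        self.on_disk = True        # False => resident on tape-style media

class HSM:
    """Toy hierarchical storage manager over a fixed disk budget."""
    def __init__(self, disk_capacity):
        self.disk_capacity = disk_capacity   # budget in number of files
        self.minidisks = {}

    def add(self, md):
        self.minidisks[md.name] = md
        self._make_room(keep=md)

    def read(self, md_name, filename):
        md = self.minidisks[md_name]
        if not md.on_disk:
            md.on_disk = True      # transparent recall of the whole group:
            self._make_room(keep=md)  # the caller never sees the tape tier
        return md.files[filename]

    def _disk_usage(self):
        return sum(len(m.files) for m in self.minidisks.values() if m.on_disk)

    def _make_room(self, keep):
        # Migrate other whole minidisks until the disk budget is met.
        for m in self.minidisks.values():
            if self._disk_usage() <= self.disk_capacity:
                break
            if m.on_disk and m is not keep:
                m.on_disk = False  # the group leaves disk as one unit
```

A caller simply invokes `read()`; whether the minidisk happens to be on disk or on the tape tier at that moment is invisible, which is the "filestore larger than the available disk space" property the abstract describes.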