{"title":"Automated optical mass storage systems with 3-beam magneto-optical disk drives","authors":"I. Yamada, M. Saito, Akinori Watanabe, K. Itao","doi":"10.1109/MASS.1991.160227","DOIUrl":"https://doi.org/10.1109/MASS.1991.160227","url":null,"abstract":"An automated optical mass storage system (optical MSS) with high-speed magneto-optical (MO) disk drives has been developed. It features a high data transfer rate for writing with the use of the 130-mm ISO standard MO disk, and a high storage efficiency of disk cartridges. As the key device, a high-speed MO disk drive has been developed that provides a data writing speed about 10 times that of conventional MO disk drives. The optical MSS provides a data transfer rate for reading and writing of 2.1 MB/s, a storage capacity of 250 GB to 1 TB, and an average cartridge handling time of 5 s. Performance simulations show that the optical MSS is applicable both to a low-traffic, random-access file that stores multimedia data and to a high-speed direct access storage device (DASD) backup file.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125990496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed storage management in high energy physics","authors":"J. Shiers","doi":"10.1109/MASS.1991.160220","DOIUrl":"https://doi.org/10.1109/MASS.1991.160220","url":null,"abstract":"To cope with the large quantities of data produced in high energy physics, CERN has developed a system for the management of, and access to, data in a fully distributed environment. The principal user interface is via a package known as FATMEN (file and tape management: experimental needs), which provides a worldwide distributed file catalog and offers system- and medium-independent access to data. The software runs on a large variety of platforms, including VM/CMS, MVS, VAX/VMS, and UNIX systems. TCP/IP, DECnet, and Bitnet networks are currently supported for the transfer of catalog updates. Particular attention is given to the FATMEN catalogs, the FATMEN naming scheme, access to data, migration, and security and reliability.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126029400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design for a transparent, distributed file system","authors":"D. Mecozzi, J. Minton","doi":"10.1109/MASS.1991.160215","DOIUrl":"https://doi.org/10.1109/MASS.1991.160215","url":null,"abstract":"At the Lawrence Livermore National Laboratory (LLNL), caching and migration protocols have been designed to integrate distributed UniTree File Management System servers running on separate machines to create a single file system. These protocols allow files to migrate between levels of a storage hierarchy to create a unified distributed storage system. The design provides clients with a single method for accessing files, regardless of file location. File caching provides clients with optimal performance, while file migration enables file servers to optimize their space utilization. The key features of the system include use of unique, location-independent file caching, and a locking mechanism to synchronize access to the system's files and manage conflicts related to multiple copies of the files. A shift in LLNL policy to acquire vendor-supported software prevented the completion of the implementation of this unified storage system. However, the design solves many problems that can occur when providing a transparent distributed file system.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125673775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mass storage requirements in the intelligence community","authors":"Tom Myers, E. Williams","doi":"10.1109/MASS.1991.160203","DOIUrl":"https://doi.org/10.1109/MASS.1991.160203","url":null,"abstract":"The major requirements established among a significant portion of mass storage system workloads in the intelligence community are large numbers (10^7-10^9) of objects and a range of small (10 bits) to large (10^8-10^10 bits) data objects. Two very different but representative workloads are presented, one centralized and the other distributed.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126499980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Swift: a storage architecture for large objects","authors":"L. Cabrera, D. Long","doi":"10.1109/MASS.1991.160223","DOIUrl":"https://doi.org/10.1109/MASS.1991.160223","url":null,"abstract":"The authors describe an input/output architecture called Swift that addresses the problem of storing and retrieving very large data objects from slow secondary storage at very high data rates. Swift provides the data rates required by digital video by exploiting the available interconnection capacity and by using several slower storage devices in parallel. Two studies have been performed to validate the Swift architecture: a simulation study and an Ethernet-based, proof-of-concept implementation. Both studies indicate that the aggregation principle proposed in Swift can yield very high data rates.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114733547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The HADES file server","authors":"H. Reuter","doi":"10.1109/MASS.1991.160221","DOIUrl":"https://doi.org/10.1109/MASS.1991.160221","url":null,"abstract":"HADES (Heidelberg Automatic Data management and Editor System), a file server for the IBM/370 world under the operating systems VM or MVS, is described. It may be used from CMS, MVS batch, and TSO directly and from UNIX systems via FTP and NFS. HADES has its own data management system independent of the host operating system. It uses automatic data migration onto tape to save disk space and keeps two tape copies of each file. Since there are nearly no limitations on file size and number of files, HADES files can be used as easily as bitfiles. Some of the requirements defined in the IEEE Reference Model are fulfilled by an automatic backup process for CMS minidisks that was designed on top of the HADES file system.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128093413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing the I/O behavior of supercomputer applications","authors":"E. L. Miller, R. Katz","doi":"10.1109/MASS.1991.160208","DOIUrl":"https://doi.org/10.1109/MASS.1991.160208","url":null,"abstract":"The authors describe the collection and analysis of supercomputer I/O (input/output) traces on a Cray Y-MP. Analyzing these traces, which came primarily from programs with high I/O requirements, shows the file system I/O patterns that these applications exhibit. The authors classify application I/Os into three categories (required, checkpoint, and data staging) and show how memory size and CPU speed are likely to affect each category. An analysis of the data shows that data staging I/O dominates when it is present.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"22 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133106925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmarking a network storage service","authors":"S. M. Kelly, R. A. Haynes, M.J. Ernest","doi":"10.1109/MASS.1991.160206","DOIUrl":"https://doi.org/10.1109/MASS.1991.160206","url":null,"abstract":"Benchmarking a network file server introduces some unique considerations beyond traditional benchmarking scenarios. Since users execute on client systems interconnected to the file server, the benchmark must account for both the client and the network. During a recent procurement action, Sandia National Laboratories (SNL) was challenged to develop a benchmark suite that would accurately test the network requirements. The authors describe the benchmark design and summarize the experience gained from the benchmark execution. SNL offered three possible benchmark configurations; different vendors chose different options, so all three were exercised and each successfully executed the benchmark tests. Since the tests employed the actual commands the end users will execute, SNL feels that it has obtained a high level of assurance that end-user functionality and performance have been achieved.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125068374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Local area gigabit networking","authors":"D. Tolmie","doi":"10.1109/MASS.1991.160197","DOIUrl":"https://doi.org/10.1109/MASS.1991.160197","url":null,"abstract":"It is noted that networks operating at gigabit speeds are just starting to become available and present a whole new set of problems and possibilities. The author addresses what the higher speeds are being used for, the standards efforts specifying the higher-speed channels, the network architectures being proposed, and some of the open problems requiring extensive further work. It is noted that HIPPI (High Performance Parallel Interface) and FC (Fibre Channel) will provide some of the basic building blocks for these networks. Further work needs to be done in higher-layer protocols and long-distance networks to achieve national goals.","PeriodicalId":158477,"journal":{"name":"[1991] Digest of Papers Eleventh IEEE Symposium on Mass Storage Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115315826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}