{"title":"Automatic Managed Filestores - Practice and Experience (Abstract)","authors":"D. J. Rigby","doi":"10.1109/MASS.1994.373033","DOIUrl":"https://doi.org/10.1109/MASS.1994.373033","url":null,"abstract":"At the Rutherford Appleton Laboratory (RAL), we have over many years run a range of different filestore systems with automatic migration and hierarchy management. All provide transparent automatic-recall facilities from tape-style media to disk when data are referenced by a user or an application; therefore, these systems provide more or less transparent support for a filestore larger than the available disk space. However, the systems on IBM MVS (ASM2), IBM VM (local), Cray COS (archiver), and Cray Unicos (dm) have many major differences, and some of these systems have been more satisfactory than others. In this paper, I shall briefly describe the system we developed and still run in VM. It has some features that are unique among those systems mentioned. Firstly, it was designed from the start as a total filestore management system (for the standard IBM W C M S minidisk system) that happens to store data in devices at different parts of a hierarchy rather than as a system compromised by being grafted onto a separate disk filestore. Secondly, it migrates and recalls files always in groups (minidisks). In general, I shall attempt to highlight good and bad points of the various systems that are relevant to the design considerations of new managed storage systems. I shall describe the managed storage and network access system we have recently developed based on our and others’ previous experiences. Our system is now managing over 10 terabytes (Tbytes) of data that were traditionally stored on unmanaged tapes and processed in a central mainframe complex. The system completely separates the issues related to managing the data, the hierarchy of the hardware, and resilience from the issues related to user and application interfaces to the data. In addition, the system attempts to provide both a traditional-style (“virtual-tape”) interface which is still the only widely understood bulk-data interface natural to the full range of systems (Unix, VMS, VM, and MVS) that are still in use and more abstract “modem”-style interfaces. We have taken the issue of data integrity very seriously, and I shall describe how we have approached this issue. I shall also describe the extra features that we have considered essential but are not generally found on other mass storage systems. While our design has benefited from the IEEE mass storage model, the system is above all a working one, and ideals have at times been compromised. The system has been running for over 2 years, handling over 10 Tbits of data. It is allowing us, with little disruption to users, to phase out large holdings of unmanaged nine-track and 18-track tapes and to give a much more reliable, more secure, and faster service, using a hierarchy of modem 0.5-inch tape robots and cost-effective, manual, 4-millimeter (mm) cassettes. Our system interfaces to any network-attached system and is used by users and applications on all our types of computer (Unix, VMS, VM, and DOS) for both shared and unshared data","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. 
Toward Distributed Storage and Data Management Systems","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125030506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
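The distinctive design point in the abstract above is that files always migrate and recall in groups (minidisks) rather than individually. A minimal sketch of such a group-based hierarchy manager, in Python; all names (MinidiskGroup, Hsm, etc.) are hypothetical, and the least-recently-referenced eviction rule is an assumption the abstract does not specify:

```python
# Hedged sketch of group-based hierarchical storage management (HSM):
# whole minidisk groups move between disk and a tape tier, and any
# reference to a migrated file transparently recalls the whole group.
# Names and the LRU policy are illustrative, not the RAL system's.
import itertools

_clock = itertools.count()  # deterministic stand-in for reference times

class MinidiskGroup:
    """A group of files that migrates and recalls as one unit."""
    def __init__(self, name, files):
        self.name = name
        self.files = files              # file name -> byte size
        self.on_disk = True
        self.last_ref = next(_clock)

    def size(self):
        return sum(self.files.values())

class Hsm:
    def __init__(self, disk_capacity):
        self.disk_capacity = disk_capacity
        self.groups = {}

    def add(self, group):
        self.groups[group.name] = group
        self._make_room(exclude=group.name)

    def reference(self, group_name, file_name):
        """Transparent access: recall the whole group if it was migrated."""
        g = self.groups[group_name]
        if not g.on_disk:
            g.on_disk = True            # stand-in for a tape read
            self._make_room(exclude=g.name)
        g.last_ref = next(_clock)
        return g.files[file_name]

    def _disk_used(self):
        return sum(g.size() for g in self.groups.values() if g.on_disk)

    def _make_room(self, exclude=None):
        """Migrate least-recently-referenced groups until disk fits."""
        while self._disk_used() > self.disk_capacity:
            candidates = [g for g in self.groups.values()
                          if g.on_disk and g.name != exclude]
            if not candidates:
                break
            victim = min(candidates, key=lambda g: g.last_ref)
            victim.on_disk = False      # stand-in for a tape write

hsm = Hsm(disk_capacity=100)
hsm.add(MinidiskGroup("user1.191", {"a.txt": 60}))
hsm.add(MinidiskGroup("user2.191", {"b.txt": 60}))  # migrates user1.191
hsm.reference("user1.191", "a.txt")                 # recalls it as a group
```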
{"title":"Magnetic storage technology-the 1990s-evolution or revolution?","authors":"A. Hoagland","doi":"10.1109/MASS.1994.373018","DOIUrl":"https://doi.org/10.1109/MASS.1994.373018","url":null,"abstract":"Summary form only given. The rate of progress in disk drive technology, as measured by the increase in areal density, has been advancing at somewhat better than a 60-percent compound growth rate (CGR), starting with this decade, in comparison with the historic CGR of nearly 32 percent over the previous 40 years. If we look at the CGR of areal density over relatively shorter time periods, we find that in the 1950s and early 1960s, a CGR of as high as 90 percent was reached. This CGR is not surprising for the introductory phase of a technology being exploited for data storage for the first time. Based on the extrapolation of the historic rate, we would have anticipated products with densities of 1 to 2 gigabits per square inch shipping in 1998. However, if the current 60-percent growth rate is sustained, we should see the availability of drives in the 10-gigabits-per-square-inch range by the year 2000. This dramatic difference in projected storage densities carries profound implications on the use of storage devices, the applications that will be developed, and the form that the devices take. This tutorial covers the current status of magnetic storage technology and future trends, highlighting the as yet untapped potential for further advances. >","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126601983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimedia and Image Storage Requirements","authors":"R. Hersch","doi":"10.1109/MASS.1994.373035","DOIUrl":"https://doi.org/10.1109/MASS.1994.373035","url":null,"abstract":"The expansion of multimedia networks and systems depends on real-time support for media streams and interactive multimedia services. Multimedia data are essentially continuous, heterogeneous, and isochronous, three characteristics with strong real-time implications when combined. At the same time, some multimedia services, like video-on-demand or distributed simulation, are real-time applications with sophisticated temporal functionalities in their user inter$ace. In this paper, we analyze the main problems in building such real-time multimedia systems, and we discuss under an architectural prospect some technological solutions, especially those regarding determinism and eficient synchronization in the storage, processing, and communication of audio and video data.","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115302805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequoia 2000: a next-generation information system for the study of global change","authors":"J. Dozier, M. Stonebraker, J. Frew","doi":"10.1109/MASS.1994.373028","DOIUrl":"https://doi.org/10.1109/MASS.1994.373028","url":null,"abstract":"Better data management is crucial to the success of scientific investigations of global change. New modes of research about the Earth, especially the synergistic interactions between observations and models, require massive amounts of diverse data to be stored, organized, accessed, distributed, visualized, and analyzed. To address technical issues of better data management, participants in Sequoia 2000, a collaborative effort between computer scientists and Earth scientists at several campuses of the University of California and at Digital Equipment Corporation (DEC), apply refinements in computing to specific applications. The software architecture includes layers for a common device interface, the file system, the database management system (DBMS), applications, and the network. Early prototype applications of this software include a global-change data schema, integration of a general circulation model (GCM), remote sensing, and a data system for climate studies. Longer range efforts include transfer protocols for moving elements of the database, controllers for secondary and tertiary storage, distributed file system, and a distributed DBMS.<<ETX>>","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115129448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The National Storage Laboratory (NSL): overview and status","authors":"R. Watson, R. Coyne","doi":"10.1109/MASS.1994.373025","DOIUrl":"https://doi.org/10.1109/MASS.1994.373025","url":null,"abstract":"The National Storage Laboratory (NSL) was organized to investigate, demonstrate, and commercialize high-performance hardware and software storage technologies that promise to remove network computing bottlenecks and to provide critically needed new storage system functionality. This paper briefly outlines the NSL's goals, the NSL collaboration, the NSL's current status and organization, and the applications drive for the NSL.<<ETX>>","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114548569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practical Experiences","authors":"W. Sell","doi":"10.1109/MASS.1994.373044","DOIUrl":"https://doi.org/10.1109/MASS.1994.373044","url":null,"abstract":"The European Laboratory for Particle Physics (CERN) is a high-energy particle physics laboratory based at a site shared between France and Switzerland, near Geneva. Altogether, an average of 4,000 staff and visitors are on site, using a lot of data currently about 80 terabytes (Tbytes) and a lot of computing power currently about 5,000 workstations of many types. Efforts have been made by CERN’s Computing and Networks Division (CN) to contain the growth in manpower requirements for manipulating data on tapes and cartridges. Initially, this involved making the manual arrangements as efficient as possible, but recent efforts have been directed toward trying to make use of automatic libraries. Beginning with a large protoope Haushahn system, we have now installed two IBM 3495 L50s, an Exabyte 120, an IGM-ATL, and an IBM 3494. Each of these machines has had both excellent and deplorable features, and each has provoked serious problems when exposed to the user community. Not all of these difficulties were expected, and some seem unlikely to be remedied. However, some may be avoidable.","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134311848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emerging Technologies","authors":"W. Sell","doi":"10.1109/MASS.1994.373045","DOIUrl":"https://doi.org/10.1109/MASS.1994.373045","url":null,"abstract":"Powerful but nevertheless easy-to-operate systems for backing up and reconstructing data from clients are necessary in a distributed heterogeneous environment. The concept of the Siemens Nixdorf Informationssysteme (SNI) AG is based on a clientherver model. It runs under the control of a SINIX server and includes backup clients for various UNIX systems, PCs such as those with MSDOS/Windows, and PCs in the Novell NetWare network. Logical data backup and restore actions for any file or disk partition in a UNIX system and for any file in a PC system can be performed automatically and at defined times during an operation. The backup media are tapes, DAT cartridges, magneto-optical disks, and associated autochangers and jukeboxes. Backup and storage system The backup and storage management solution at SNI for all SINIX V5.41 systems includes base technology from Legato Networker@. Networker supports heterogeneous clients in a open distributed environment. Clientherver concept Networker, which is based on a client/server concept (see Figure l), provides networkwide backup and recovery capabilities. In this concept, the clients are all machines in the network, whose data need to be protected against accidental loss or deletion. The server normally is equipped with the backup devices and automatically ensures backup and restore actions at utmost performance for all files in the network. The Networker server is based on SINIX V5.41. At SNI, the server runs on all RISC and CISC machines, such as RM600s, RM~OOS, MX300s, MXSOOs, and PCs. Available clients are: all SINIX systems; the UNIX systems from Digital Equipment Corporation (DEC), Hewlett-Packard (HP), IBM, ICL, SCO, SGI, Sony, Sun Microsystems, and Univel; PCs such as those with MSDOS/Windows; and PCs in the Novell NetWare network. Networker is the de facto industry standard for UNIX network backup. Further backup clients for example, those for OS12 and Windows-NT will soon be available and, because of the standardized communication protocol, may easily be added to any Networker server. To send and receive data, the client and the server use remote procedure call (RPC), an industry-standard networking protocol.","PeriodicalId":436281,"journal":{"name":"Proceedings Thirteenth IEEE Symposium on Mass Storage Systems. Toward Distributed Storage and Data Management Systems","volume":"118 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116701242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}