{"title":"Data management requirements for high energy physics in the year 2000","authors":"J. Shiers","doi":"10.1109/MASS.1993.289784","DOIUrl":"https://doi.org/10.1109/MASS.1993.289784","url":null,"abstract":"It is noted that the data storage and management requirements of future high energy physics (HEP) experiments, such as those planned for the Large Hadron Collider or the Superconducting Supercollider, will greatly exceed those of current experiments. A global requirement for the storage of 10 to 100 petabytes of new HEP data per year is foreseen. The author discusses the lessons learned from existing home-grown solutions, such as those described at previous symposia, current trends in data management and storage, and future requirements. Particular emphasis is placed on the specific needs of HEP, integration with user-level code, and the suitability of the IEEE Mass Storage System Reference Model and commercial solutions in such an environment. The specific needs of end users, data managers, and central support staff are addressed.<<ETX>>","PeriodicalId":225568,"journal":{"name":"[1993] Proceedings Twelfth IEEE Symposium on Mass Storage systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130688936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Architecture and implementation of an on-line data archive and distribution system","authors":"B. Bhasker, M. E. V. Steenberg, B. Jacobs","doi":"10.1109/MASS.1993.289762","DOIUrl":"https://doi.org/10.1109/MASS.1993.289762","url":null,"abstract":"The authors present a layered architecture of an on-line data archive and distribution system. An operational system based on this architecture has been developed at NASA's National Space Science Data Center to distribute space science data to the world scientific community. The implemented architecture stores the data files in CYGNET optical-disk juke boxes using Sony's 6.5 GByte optical disks. The architecture utilizes meta-data to locate and deliver the data. The system also supports the use of catalogs to search and identify the relevant data.<<ETX>>","PeriodicalId":225568,"journal":{"name":"[1993] Proceedings Twelfth IEEE Symposium on Mass Storage systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124091336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel storage and retrieval of pixmap images","authors":"R. Hersch","doi":"10.1109/MASS.1993.289756","DOIUrl":"https://doi.org/10.1109/MASS.1993.289756","url":null,"abstract":"To fulfill the requirement of rapid access to huge amounts of uncompressed pixmap image data, a parallel image server architecture is proposed, based on arrays of intelligent disk nodes, with each disk node composed of one processor and one disk. It is shown how images can be partitioned into extents and efficiently distributed among available intelligent disk nodes. The image server's performance is analyzed according to various parameters such as the number of cooperating disk nodes, the sizes of image file extents, the available communication throughput, and the processing power of disk node and image server processors. Important image access speed improvements are obtained by image extent caching and image part extraction in disk nodes. With T800 transputer-based technology, a system composed of eight disk nodes offers access to three full-color 512*512 pixmap image parts per second (2.4 megabytes per second). For the same configuration but with the recently announced T9000 transputer, image access throughput is eight images per second (6.8 megabytes per second).<<ETX>>","PeriodicalId":225568,"journal":{"name":"[1993] Proceedings Twelfth IEEE Symposium on Mass Storage systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124511873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spacecraft mass storage optical disk system","authors":"Glenn D. Hines, S. Jurczyk, R. Hodson","doi":"10.1109/MASS.1993.289748","DOIUrl":"https://doi.org/10.1109/MASS.1993.289748","url":null,"abstract":"NASA (the US National Aeronautics and Space Administration) has established a program to develop a high-performance (high-rate, large-capacity) optical disk recorder. An expandable, adaptable system concept is proposed based on disk drive modules and a modular controller. Drive performance goals are ten gigabyte capacity, 300 megabit per second transfer rate, 10/sup -12/ corrected bit error rate, and 150 millisecond access time. This performance is achieved by writing eight data tracks in parallel on both sides of a 14-inch optical disk using two independent heads. System goals are 160 gigabyte capacity, 1.2 gigabit per second data rate with concurrent input/output (I/O), 250 millisecond access time, and two- to five-year operating life on orbit. The system can be configured to meet various applications. This versatility is provided by the controller, which provides command processing, multiple drive synchronization, data buffering, basic file management, error processing, and status reporting.<<ETX>>","PeriodicalId":225568,"journal":{"name":"[1993] Proceedings Twelfth IEEE Symposium on Mass Storage systems","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124652919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of a mass storage system to science data management","authors":"O. Graf, Merritt E. Jones, Fayne Sisco","doi":"10.1109/MASS.1993.289760","DOIUrl":"https://doi.org/10.1109/MASS.1993.289760","url":null,"abstract":"Approaches to the issues of data ingestion, data restructuring, physical data models, relationship file structure to data system performance, data product generation, data transfer to remote users, data subset extraction, data browsing, and user interface have been examined. The High Performance Data System architecture provides an environment for bringing together the technologies of mass storage, large bandwidth data networks, high-performance data processing, and intelligent data access. The prototype system demonstrates an approach to these issues. In addition, the design process has defined some important requirements for the mass storage file system, such as logical grouping of files, aggregate file writes, and multiple dynamic storage device hierarchies.<<ETX>>","PeriodicalId":225568,"journal":{"name":"[1993] Proceedings Twelfth IEEE Symposium on Mass Storage systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131682947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Electronic archiving for radiology image management systems","authors":"S. Dwyer, B. K. Stewart, D. Aberle, M. Boechat, Lawrence Yao, D. Marciano","doi":"10.1109/MASS.1993.289783","DOIUrl":"https://doi.org/10.1109/MASS.1993.289783","url":null,"abstract":"It is pointed out that the use of electronic archiving in a radiology department must be supported by image acquisition modes, high data rate local area networks, ultrahigh-resolution gray-scale display workstations, and hard-copy image recording stations. The requirements for mass storage of radiographic images and the required support system are presented. The electronic archiving of all radiographic images requires the following technologies: (1) modality interfaces; (2) film digitizers; (3) networks; (4) magnetic and optical archival systems; (5) film printers; and (6) gray-scale display stations. Estimates of the necessary number of film digitizers and film printers are discussed. A typical image management network is presented.<<ETX>>","PeriodicalId":225568,"journal":{"name":"[1993] Proceedings Twelfth IEEE Symposium on Mass Storage systems","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130498654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Striped tape arrays","authors":"A. Drapeau, R. Katz","doi":"10.1109/MASS.1993.289751","DOIUrl":"https://doi.org/10.1109/MASS.1993.289751","url":null,"abstract":"How data striping ideas apply to arrays of magnetic tape drives is being investigated. Data striping increases throughput and reduces response time for large accesses to a storage system. Striped magnetic tape systems are particularly appealing because many inexpensive magnetic tape drives have low bandwidth. Striping may offer dramatic performance improvements for these systems. Several important issues in designing striped tape systems are considered: the choice of tape drives and robots, whether to stripe within or between robots, and the choice of the best scheme for distributing data on cartridges. One of the most troublesome problems in striped-tape arrays is the synchronization of transfers across tape drives. Another issue is how improved devices will affect the desirability of striping in the future. The results of simulations comparing the performance of striped-tape systems to nonstriped systems are presented.<<ETX>>","PeriodicalId":225568,"journal":{"name":"[1993] Proceedings Twelfth IEEE Symposium on Mass Storage systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126617058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}