Compactor: Optimization Framework at Staging I/O Nodes

V. Venkatesan, M. Chaarawi, Q. Koziol, E. Gabriel
2014 IEEE International Parallel & Distributed Processing Symposium Workshops
Published: 2014-05-19
DOI: 10.1109/IPDPSW.2014.188
Citations: 2
Abstract
Data-intensive applications are strongly influenced by I/O performance on HPC systems, and the scalability of such applications to exascale depends primarily on the future scalability of I/O performance on those systems. To mitigate the I/O bottleneck, recent HPC systems use staging nodes, to which I/O requests and in-situ data analysis are delegated. In this paper, we present the Compactor framework along with three optimizations that improve I/O performance at the data staging nodes. The first optimization performs collective buffering across requests from multiple processes. In the second optimization, we present a way to steal writes to service read requests at the staging node. Finally, we also provide a way to "morph" write requests from the same process. All optimizations were implemented as part of the Exascale FastForward I/O stack. We evaluated the optimizations over a PVFS2 file system using a micro-benchmark and the Flash I/O benchmark. Our results indicate significant performance benefits from our framework. In the best case, the Compactor provides up to a 70% improvement in performance.
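To make two of the abstract's ideas concrete, here is a minimal toy sketch of what "morphing" (coalescing contiguous write requests from one process) and write stealing (servicing a read directly from a buffered write) could look like at a staging node. All names and data structures below are hypothetical illustrations, not the actual FastForward implementation.

```python
# Toy model of two Compactor-style optimizations (illustrative only):
#  - morph_writes: coalesce contiguous (offset, data) write requests
#  - steal_write: answer a read from a buffered write without hitting storage

def morph_writes(requests):
    """Merge write requests that are contiguous in file-offset order."""
    merged = []
    for offset, data in sorted(requests):
        if merged and merged[-1][0] + len(merged[-1][1]) == offset:
            # Request starts exactly where the previous one ends: "morph" them.
            prev_off, prev_data = merged.pop()
            merged.append((prev_off, prev_data + data))
        else:
            merged.append((offset, data))
    return merged

def steal_write(pending_writes, read_offset, read_len):
    """Service a read from a buffered write if the range is fully covered."""
    for offset, data in pending_writes:
        if offset <= read_offset and read_offset + read_len <= offset + len(data):
            start = read_offset - offset
            return data[start:start + read_len]
    return None  # Miss: the read must go to the backing file system.

writes = [(0, b"aaaa"), (4, b"bbbb"), (16, b"cccc")]
merged = morph_writes(writes)    # two requests instead of three
hit = steal_write(merged, 2, 4)  # read served from the write buffer
```

In a real staging node the pending writes would live in an interval structure and partial overlaps would also need handling; this sketch only shows the happy path of both optimizations.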