2019 IEEE International Conference on Networking, Architecture and Storage (NAS): Latest Publications

NAS 2019 Cover Page
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/nas.2019.8834710
Citations: 0
Exploring Transfer Learning to Reduce Training Overhead of HPC Data in Machine Learning
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/NAS.2019.8834723
Tong Liu, Shakeel Alibhai, Jinzhen Wang, Qing Liu, Xubin He, Chentao Wu
Abstract: Nowadays, scientific simulations on high-performance computing (HPC) systems can generate large amounts of data (on the scale of terabytes or petabytes) per run. When this huge amount of HPC data is processed by machine learning applications, the training overhead will be significant. Typically, the training process for a neural network can take several hours to complete, if not longer. When machine learning is applied to HPC scientific data, the training time can take several days or even weeks. Transfer learning, an optimization usually used to save training time or achieve better performance, has potential for reducing this large training overhead. In this paper, we apply transfer learning to a machine learning HPC application. We find that transfer learning can reduce training time without, in most cases, significantly increasing the error. This indicates that transfer learning can be very useful for working with HPC datasets in machine learning applications.
Citations: 7
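The transfer-learning idea the abstract describes, reusing a model trained on earlier data and retraining only part of it, can be sketched as follows. This is a minimal illustrative example with a toy dataset and a closed-form linear head, not the paper's actual network or HPC data; all names and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: a fixed matrix standing in
# for the frozen early layers of a network trained on a previous run.
W_frozen = rng.normal(size=(8, 16))

def features(x):
    # Frozen layers are reused as-is and never updated.
    return np.tanh(x @ W_frozen)

# Toy stand-in for the new HPC dataset; only the final head is trained.
X = rng.normal(size=(200, 8))
y = X.sum(axis=1, keepdims=True)

Phi = features(X)
# Train only the linear head, in closed form (ridge regression), instead
# of retraining the whole network from scratch.
lam = 1e-3
head = np.linalg.solve(Phi.T @ Phi + lam * np.eye(16), Phi.T @ y)
mse = float(np.mean((Phi @ head - y) ** 2))
```

Because only the small head is fitted, the "training" step here is a single linear solve, which mirrors why transfer learning can cut training time on large datasets.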
NAS 2019 Committees
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/nas.2019.8834718
Citations: 0
HCMonitor: An Accurate Measurement System for High Concurrent Network Services
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/NAS.2019.8834716
Hui Song, Wenli Zhang, Ke Liu, Yifan Shen, Mingyu Chen
Abstract: As user-interactive services grow explosively in datacenters, latency has become one of the most decisive factors in user experience. Estimating latency and detecting deviations from the expected latency is therefore essential for evaluating service performance. Although many existing tools are widely used, their estimation methods fall into two categories. First, traffic-sample-based approaches sample the network traffic to accelerate estimation rather than measure every response time. Second, full-traffic-based approaches, such as tcpdump and wrk, analyze data from the kernel and leave the latency computation to the client side. In this paper, we attempt to compute the server-side latency of applications for every request in real time, eliminating kernel processing delay. We propose a system named HCMonitor. It monitors all traffic by switch mirroring, which yields high throughput and more accurate server-side latency estimation. The latency measurement is transparent to network services and can be displayed in real time. Our evaluations show that HCMonitor achieves over 1000 times higher throughput than tcpdump. Compared to wrk, the tail latency accuracy estimated by HCMonitor improves by up to 72%-76% under high concurrency, by eliminating the delays introduced by packet transfer, the kernel network stack, and packet queuing on the client side.
Citations: 3
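The core computation behind such a monitor, pairing each request with its response to get a per-request server-side latency and then reading off tail percentiles, can be sketched as below. The ids, timestamps, and percentile rule are illustrative assumptions, not HCMonitor's implementation.

```python
# Toy reconstruction of per-request server-side latency from mirrored
# traffic: pair each request timestamp with its response timestamp by a
# request id. Values here are invented for illustration.
requests = {1: 0.000, 2: 0.001, 3: 0.002}   # id -> request first seen (s)
responses = {1: 0.004, 2: 0.015, 3: 0.005}  # id -> response seen (s)

latencies = sorted(responses[i] - requests[i] for i in requests)

def percentile(sorted_vals, p):
    # Nearest-rank style percentile, as commonly used for tail latency.
    k = int(round(p / 100 * (len(sorted_vals) - 1)))
    return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)
```

Measuring every request (rather than a sample) is what lets the tail percentile reflect rare slow responses like request 2 above.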
Towards Cluster-wide Deduplication Based on Ceph
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/NAS.2019.8834729
Jinpeng Wang, Yang Wang, Hekang Wang, Kejiang Ye, Chengzhong Xu, Shuibing He, Lingfang Zeng
Abstract: In this paper, we design an efficient deduplication algorithm based on the distributed storage architecture of Ceph. The algorithm uses online block-level data deduplication to perform data slicing, which neither affects the data storage process in Ceph nor alters other interfaces and functions in Ceph. Without relying on any central node, the algorithm preserves the characteristics of Ceph by designing a special hash object to store the data fingerprint, and uses the CRUSH algorithm to detect duplicate data by calculation instead of a global search. The algorithm replaces duplicate data with deduplicated objects, which store their fingerprints using less storage space. We compare the effects of different block sizes on performance and deduplication rates through experimental studies, and select the most appropriate block size in our prototype implementation. The experimental results show that the algorithm can not only effectively save storage space but also improve bandwidth utilization when reading and writing duplicate data.
Citations: 3
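The block-level fingerprinting step that such a scheme relies on can be sketched as follows: slice the data into fixed-size blocks, hash each block, and store a block only if its fingerprint is new. This is a generic single-node sketch, not the paper's Ceph/CRUSH integration; the block size and helper names are invented.

```python
import hashlib

BLOCK_SIZE = 4  # bytes, for illustration; a real system would use e.g. 4 KiB

def dedup_write(data, store):
    """Split data into fixed-size blocks, fingerprint each block, and keep
    each unique block only once. Returns the list of fingerprints (the
    "recipe") needed to reassemble the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # a duplicate block is not stored again
        recipe.append(fp)
    return recipe

store = {}
recipe = dedup_write(b"abcdabcdxyz1", store)  # "abcd" appears twice
```

In the paper's design, the fingerprint-to-location mapping is computed with CRUSH rather than looked up in a central index, which is what avoids the global search.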
NAS 2019 Program Committee
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/nas.2019.8834715
Citations: 0
NAS 2019 Index
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/nas.2019.8834711
Citations: 0
LT-TCO: A TCO Calculation Model of Data Centers for Long-Term Data Preservation
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/NAS.2019.8834714
Wenrui Yan, Jie Yao, Q. Cao, Yifan Zhang
Abstract: Data centers have become public utilities that provide large-scale computing and storage services. Total Cost of Ownership (TCO) models for such data centers are paramount for deeply understanding their cost of investment and maintenance, the cost composition of internal components, and further directions for cost optimization. Existing data center TCO models focus on either high-performance data centers or key subsystems such as the IT facility, lacking a holistic analysis of data centers designed for long-term data preservation. Long-term data centers can be built with different combinations of storage media such as HDDs, tapes, and optical discs. Meanwhile, during the long operation period, device replacement and data migration are necessary, and their cost is not negligible. In order to comprehensively and quantitatively understand the cost of long-term data preservation, we propose LT-TCO, a TCO calculation model for data centers over time. LT-TCO simulates the construction and operation of a data center to calculate the expenditure of each year. It also introduces the cost of device replacement and data migration during the long running period. For optical discs, tapes, HDDs, and SSDs as storage media, LT-TCO evaluates the corresponding capital and operational expenditure under different development rates. The simulation results show that in long-term preservation, data migration accounts for more than 96% of the operational expenditure, and the TCO of optical-disc data centers could be the lowest among the four storage media.
Citations: 0
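The year-by-year accumulation the abstract describes, including the replacement and migration costs that recur over a long operation period, can be sketched with a toy model. The cost parameters and the simple fixed replacement schedule below are illustrative assumptions, not LT-TCO's actual formulas or figures.

```python
def lt_tco(years, capex_per_tb, opex_per_tb_year, tb, device_life_years,
           migrate_cost_per_tb):
    """Accumulate a toy yearly TCO: an initial purchase, a yearly operating
    cost, and a device replacement plus a full data migration every
    device_life_years. All cost inputs are illustrative, not the paper's."""
    total = capex_per_tb * tb  # initial build-out
    for year in range(1, years + 1):
        total += opex_per_tb_year * tb
        if year % device_life_years == 0 and year < years:
            total += capex_per_tb * tb         # replacement hardware
            total += migrate_cost_per_tb * tb  # copy data onto new media
    return total
```

Running this with different media parameters (device lifetime, capex, migration cost) is the kind of comparison the paper makes across optical discs, tapes, HDDs, and SSDs.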
An Adaptive SSD Cache Architecture Simultaneously Using Multiple Caches
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/NAS.2019.8834724
Nikolaus Jeremic, Helge Parzyjegla, Gero Mühl
Abstract: Due to a notably higher cost per bit of storage capacity, NAND flash memory solid state drives (SSDs) are not expected to completely replace hard disk drives (HDDs) in the near future. Using SSDs as a cache for HDDs, however, has proved very effective in increasing the performance of storage systems. The performance increase depends strongly on both the SSD cache design and the workload applied to the storage system. However, distinct parts of the data may exhibit significantly different access patterns that can change rapidly and unpredictably over time. This particularly applies to complex dynamic systems, such as virtualized environments. Existing SSD cache architectures are not able to exploit such differences in access patterns, as they use only a single cache design, even when adapting certain cache parameters. In this paper, we propose a novel generic architecture for adaptive block-level SSD caches that simultaneously employs multiple SSD caches with complementary designs. The goal is to use, for each kind of access pattern, the SSD cache design that fits the pattern best. Results of our experimental evaluation show that the proposed SSD cache architecture adapts well to different workloads. For a broad range of workloads, it provides overall throughput comparable to the respective best single cache design, and it is able to outperform these cache designs for superimposed mixed workloads and workloads with changing characteristics.
Citations: 0
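The multi-cache idea, running several caches side by side and steering each access to the one that fits its pattern, can be sketched with a toy dispatcher. The sequential-vs-random classifier and the use of two LRU instances below are illustrative assumptions; the paper's architecture uses complementary cache designs and a more general dispatching scheme.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache tracking only hit/miss, for illustration."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, key):
        hit = key in self.entries
        if hit:
            self.entries.move_to_end(key)
        else:
            self.entries[key] = True
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
        return hit

class MultiCache:
    """Hypothetical dispatcher in the spirit of the paper's architecture:
    sequential-looking accesses go to one cache instance, random-looking
    accesses to another, so each cache sees the pattern it handles best."""
    def __init__(self, capacity):
        self.seq = LRUCache(capacity)
        self.rand = LRUCache(capacity)
        self.last_block = None

    def access(self, block):
        sequential = self.last_block is not None and block == self.last_block + 1
        self.last_block = block
        return (self.seq if sequential else self.rand).access(block)

lru = LRUCache(2)
hits = [lru.access(k) for k in ["a", "a", "b", "c", "a"]]
```

Partitioning by pattern keeps a sequential scan from evicting the random hot set, which is one way a multi-cache design can beat any single cache on mixed workloads.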
CCPNC: A Cooperative Caching Strategy Based on Content Popularity and Node Centrality
2019 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2019-08-01 DOI: 10.1109/NAS.2019.8834733
Yunming Mo, Jinxing Bao, Shaobing Wang, Yaxiong Ma, Han Liang, Jiabao Huang, Ping Lu, Jincai Chen
Abstract: The in-network caching mechanism is one of the core technologies of the Content Centric Network (CCN) and has attracted increasing attention. To improve the cache hit ratio and the content diversity of the CCN cache system, this paper proposes a cooperative caching strategy based on content popularity and node centrality, called CCPNC. The CCPNC caching strategy comprehensively considers content popularity and node distribution rules. It caches content objects separately based on their popularity and mobilizes the core routing nodes in the network to work together with non-core routing nodes. The CCPNC caching strategy not only uses core routing node cache resources to provide faster popular-content services for a wide range of users, but also avoids unnecessary high-frequency cache replacement at the core routing nodes. Meanwhile, it utilizes the cache resources of non-core routing nodes to provide more convenient non-popular content services. Simulation experiments show that the CCPNC caching strategy can effectively balance the distribution of content objects in the cache system and improve the cache hit ratio of the content centric network, while reducing the average routing hop count and average request latency of content backhaul.
Citations: 3
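The placement split the abstract describes, popular content at core nodes and less popular content at edge nodes, can be sketched as a simple decision rule. The threshold and the exact rule below are illustrative assumptions, not the paper's algorithm.

```python
def should_cache(popularity, centrality, threshold=0.5):
    """Toy placement rule in the spirit of CCPNC (threshold and rule are
    illustrative): popular content is cached at high-centrality core nodes,
    less popular content at low-centrality edge nodes, which spreads
    content diversity across the network instead of replicating the same
    hot objects everywhere."""
    is_core = centrality >= 0.5
    if is_core:
        return popularity >= threshold
    return popularity < threshold
```

Separating the two classes is what keeps core nodes from churning through high-frequency replacements while edge nodes still serve the long tail.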