2021 IEEE International Conference on Networking, Architecture and Storage (NAS): Latest Publications

Deflection-Aware Routing Algorithm in Network on Chip against Soft Errors and Crosstalk Faults
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605392
Hadi Zamani, Z. Shirmohammadi, Ali Jahanshahi
{"title":"Deflection-Aware Routing Algorithm in Network on Chip against Soft Errors and Crosstalk Faults","authors":"Hadi Zamani, Z. Shirmohammadi, Ali Jahanshahi","doi":"10.1109/nas51552.2021.9605392","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605392","url":null,"abstract":"Marching into nano-scale technology, probability of soft errors and crosstalk faults has increased by about 6-7 times. Since buffers occupy about 40-90% of the switch area, the probability of soft errors in switches is significant. We propose a deflection-aware routing algorithm (DAR) combined with an information redundancy technique to cover the soft errors and crosstalk faults in the header flow control units (FLIT). We also introduce an interleaving method along with a simple hamming code to tolerate the errors in data and tail FLITs. The proposed methods have been evaluated in both circuit and simulation level through a simulator written in C++, Booksim 2, and Synopsys Design Compiler. The evaluation results show that we can cover the soft errors and crosstalk faults with reasonable power and performance overhead of 3% and 6.5% respectively.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114590353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
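As an illustration of the error-tolerance idea in the abstract above, the following Python sketch combines a Hamming(7,4) code with bit interleaving so that a burst of adjacent bit flips (the typical effect of crosstalk) lands in different codewords and stays correctable. The 16-bit flit width, the Hamming(7,4) construction, and the interleaving depth are illustrative assumptions, not the exact scheme of the paper.

def hamming74_encode(d):              # d: four data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):             # c: seven code bits, corrects one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3        # 1-based position of the flipped bit, 0 if none
    if pos:
        c = c[:]
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

def interleave(codewords):
    # Transmit bit 0 of every codeword, then bit 1 of every codeword, and so on,
    # so physically adjacent wires/cycles carry bits of different codewords.
    return [cw[i] for i in range(7) for cw in codewords]

def deinterleave(bits, n):
    return [[bits[i * n + j] for i in range(7)] for j in range(n)]

# Example: a 16-bit data flit protected as four interleaved Hamming(7,4) codewords.
flit = [1, 0, 1, 1,  0, 0, 1, 0,  1, 1, 1, 0,  0, 1, 0, 1]
cws = [hamming74_encode(flit[i:i + 4]) for i in range(0, 16, 4)]
line = interleave(cws)
line[8] ^= 1
line[9] ^= 1          # two adjacent flips (crosstalk-like burst) hit two different codewords
rec = [b for cw in deinterleave(line, 4) for b in hamming74_correct(cw)]
assert rec == flit    # both errors corrected
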
GO: Out-Of-Core Partitioning of Large Irregular Graphs
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605433
Gurneet Kaur, Rajesh K. Gupta
{"title":"GO: Out-Of-Core Partitioning of Large Irregular Graphs","authors":"Gurneet Kaur, Rajesh K. Gupta","doi":"10.1109/nas51552.2021.9605433","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605433","url":null,"abstract":"Single-PC, disk-based processing of large irregular graphs has recently gained much popularity. At the core of a disk-based system is a static graph partitioning that must be created before the processing starts. By handling one partition at a time, graphs that do not fit in memory are processed on a single machine. However, the multilevel graph partitioning algorithms used by the most sophisticated partitioners cannot be run on the same machine as their memory requirements far exceed the size of the graph. The popular memory efficient Mt-Metis graph partitioner requires 4.8× to 13.8× the memory needed to hold the entire graph in memory. To overcome this problem, we present the GO out-of-core graph partitioner that can successfully partition large graphs on a single machine. GO performs just two passes over the entire input graph, partition creation pass that creates balanced partitions and partition refinement pass that reduces edgecuts. Both passes function in a memory constrained manner via disk-based processing. GO successfully partitions large graphs for which Mt-Metis runs out of memory. For graphs that can be successfully partitioned by Mt-Metis on a single machine, GO produces balanced 8-way partitions with 11.8× to 76.2× fewer edgecuts using 1.9× to 8.3× less memory in comparable runtime.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128731448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
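The two-pass, memory-constrained structure described above can be pictured with a much-simplified streaming heuristic. The Python sketch below performs a single creation pass over an on-disk edge list, greedily co-locating each newly seen vertex with an already-placed neighbor subject to a balance cap (an LDG-style rule). GO's actual creation and refinement passes are more involved; the file format, capacity bound, and omission of the refinement pass are assumptions made here for brevity.

def create_partitions(edge_file, k, num_vertices):
    """One streaming pass over a whitespace-separated edge list ('u v' per line).
    Each vertex is placed on first sight: next to an already-placed neighbor if
    that partition still has room, otherwise on the least-loaded partition."""
    part = {}                                  # vertex -> partition id
    load = [0] * k
    cap = (num_vertices + k - 1) // k          # hard balance bound
    with open(edge_file) as f:
        for line in f:
            if not line.strip():
                continue
            u, v = map(int, line.split())
            for x, y in ((u, v), (v, u)):
                if x in part:
                    continue
                if y in part and load[part[y]] < cap:
                    p = part[y]                # co-locate with its neighbor
                else:
                    p = min(range(k), key=load.__getitem__)
                part[x] = p
                load[p] += 1
    return part

# A second, refinement pass (moving boundary vertices toward the partition that
# holds most of their neighbors, again under the balance cap) would reduce
# edgecuts; it is omitted here for brevity.
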
A New PIS Accelerator for Text Searching
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605387
Yunxin Huang, Aiguo Song, Yafei Yang
{"title":"A New PIS Accelerator for Text Searching","authors":"Yunxin Huang, Aiguo Song, Yafei Yang","doi":"10.1109/nas51552.2021.9605387","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605387","url":null,"abstract":"We propose a new design of a hardware accelerator for processing regular expression to speedup text search inside SSD storage (Processing in Storage: PIS). The unique features include parallel processing of 32 streams to quickly identify the first matched character under scan mode and match four characters concurrently under matching mode. In addition, we present a new approach of combining forward and backward scan to accomplish the first character search efficiently. Our experimental results show that the new parallel algorithm reduces the depth of logic circuit and the hybrid architecture performs as well as the Linux Grep algorithm does.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129086920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
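The 32-stream first-character scan can be mimicked in software by splitting the buffer into chunks, scanning each chunk independently for the pattern's first character, and taking the earliest hit. The Python sketch below uses a thread pool as a stand-in for the accelerator's parallel streams; the chunk count and the threading model are illustrative assumptions, not the hardware design.

from concurrent.futures import ThreadPoolExecutor

def first_match(buf: bytes, first_char: int, streams: int = 32) -> int:
    """Return the index of the first occurrence of first_char in buf, or -1.
    Each 'stream' scans one contiguous chunk, like the accelerator's scan mode."""
    chunk = (len(buf) + streams - 1) // streams

    def scan(i):
        return buf.find(first_char, i * chunk, (i + 1) * chunk)

    with ThreadPoolExecutor(max_workers=streams) as ex:
        hits = [p for p in ex.map(scan, range(streams)) if p != -1]
    return min(hits) if hits else -1

# Example: locate the first 'G' before attempting a full regex match at that offset.
data = b"...log text...GET /index.html..."
print(first_match(data, ord("G")))
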
Design of A Multi-Path Reconfigurable Traffic Monitoring System
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605385
Liang-Min Wang, Timothy Miskell, J. Morgan, Edwin Verplanke
{"title":"Design of A Multi-Path Reconfigurable Traffic Monitoring System","authors":"Liang-Min Wang, Timothy Miskell, J. Morgan, Edwin Verplanke","doi":"10.1109/nas51552.2021.9605385","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605385","url":null,"abstract":"As network bandwidth consumption continues to grow exponentially, real-time traffic data analysis becomes increasingly challenging and expensive. In many cases, network traffic monitoring can only be achieved via hardware Test Access Point (TAP) devices. Due to the intrusiveness and inflexibility of deploying hardware devices, this approach is intractable within an SDN environment where dynamic network resource allocation is key to the orchestration of network services. This paper presents a novel mirror tunnel design to achieve near hardware level TAP-as-a-Service (TaaS) performance through network device mirror offloading, while retaining resource reconfigurability. Mirror tunneling is a hybrid approach whereby a software TAP transports traffic from a source device to a mirror tunnel device. Traffic is then mirrored in place and sent to the destination device. The combination of a software TAP with the mirroring capabilities of the underlying hardware empowers system administrators to create a dynamically reconfigurable multi-path traffic mirroring system. As demonstrated in the benchmark results, this approach is efficient in terms of network bandwidth consumption and computational resources. In addition, this methodology is designed to mirror traffic in high-throughput environments with minimal to no impact on the source Virtual Network Functions (VNFs).","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129673568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Edges: Evenly Distributing Garbage-Collections for Enterprise SSDs via Stochastic Optimization
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605402
Shuyi Pei, Jing Yang, Bin Li
{"title":"Edges: Evenly Distributing Garbage-Collections for Enterprise SSDs via Stochastic Optimization","authors":"Shuyi Pei, Jing Yang, Bin Li","doi":"10.1109/nas51552.2021.9605402","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605402","url":null,"abstract":"Solid-state drives (SSDs) have been widely used in various computing systems owing to their significant advantages over hard disk drives (HDDs). One critical challenge that hinders its further adoption in enterprise systems is to resolve the performance variability issue caused by the garbage collection (GC) process that frees flash memory containing invalid data. To overcome this challenge, we formulate a stochastic optimization model that characterizes the nature of the GC process and considers both total GC count and GC distribution over time. Based on the optimization model, we propose Edges, an innovative self-adaptive GC strategy that evenly distributes GCs for enterprise SSDs. The key insight behind Edges is that the number of invalid pages is a finer-grained metric of triggering GCs than the number of free blocks. By testing various traces from practical applications, we show that Edges is able to reduce the total GC counts by as high as 70.17% and GC variance by up to 57.29%, compared to the state-of-the-art GC algorithm.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127001581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
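The key insight quoted above, triggering GC on the number of invalid pages rather than on the number of free blocks, can be seen in a toy flash-translation-layer model. The geometry, thresholds, and idealized victim selection in this Python sketch are illustrative assumptions and do not reproduce Edges' stochastic optimization.

import random

class ToyGCTrigger:
    """Contrast two GC trigger policies over a synthetic stream of page writes."""
    PAGES_PER_BLOCK = 64

    def __init__(self, num_blocks, policy="invalid_pages"):
        self.capacity = num_blocks * self.PAGES_PER_BLOCK
        self.free_pages = self.capacity
        self.invalid_pages = 0
        self.policy = policy
        self.gc_times = []                     # write index at which each GC fired

    def should_gc(self):
        if self.policy == "free_blocks":
            # coarse trigger: wait until the drive is almost out of free blocks
            return self.free_pages // self.PAGES_PER_BLOCK <= 2
        # finer-grained trigger: react to accumulated invalid pages
        return self.invalid_pages >= 0.05 * self.capacity

    def write(self, t, is_update):
        self.free_pages -= 1                   # out-of-place write consumes a free page
        if is_update:
            self.invalid_pages += 1            # the overwritten copy becomes invalid
        if self.should_gc():
            reclaimed = min(self.PAGES_PER_BLOCK, self.invalid_pages)  # idealized victim
            self.invalid_pages -= reclaimed
            self.free_pages += reclaimed
            self.gc_times.append(t)

random.seed(0)
ftl = ToyGCTrigger(num_blocks=128, policy="invalid_pages")
for t in range(20000):
    ftl.write(t, is_update=random.random() < 0.7)
print(len(ftl.gc_times), "GCs")
# With "invalid_pages" the GCs fire at a steady rhythm; with "free_blocks" they
# cluster once the drive runs low on space, which is what causes latency spikes.
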
Locality-aware Thread Block Design in Single and Multi-GPU Graph Processing
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605484
Quan Fan, Zizhong Chen
{"title":"Locality-aware Thread Block Design in Single and Multi-GPU Graph Processing","authors":"Quan Fan, Zizhong Chen","doi":"10.1109/nas51552.2021.9605484","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605484","url":null,"abstract":"Graphics Processing Unit (GPU) has been adopted to process graphs effectively. Recently, multi-GPU systems are also exploited for greater performance boost. To process graphs on multiple GPUs in parallel, input graphs should be partitioned into parts using partitioning schemes. The partitioning schemes can impact the communication overhead, locality of memory accesses, and further improve the overall performance. We found that both intra-GPU data sharing and inter-GPU communication can be summarized as inter-TB communication. Based on this key idea, we propose a new graph partitioning scheme by redefining the input graph as a TB Graph with calculated vertex and edge weights, and then partition it to reduce intra & inter-GPU communication overhead and improve the locality at the granularity of Thread Blocks (TB). We also propose to develop a partitioning and mapping scheme for heterogeneous architectures including physical links with different bandwidths. The experimental results on graph partitioning show that our scheme is effective to improve the overall performance of the Breadth First Search (BFS) by up to 33%.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126465430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
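The coarsening step described above, turning the input graph into a weighted "TB Graph", can be sketched as follows: vertices are grouped into thread-block-sized chunks, each chunk becomes a node weighted by its vertex count, cross-chunk edges are accumulated into weighted edges, and a partitioner then runs on the much smaller graph. The chunking rule and the greedy partitioner in this Python sketch are illustrative assumptions, not the paper's weighting or mapping scheme.

from collections import defaultdict

def build_tb_graph(edges, num_vertices, tb_size):
    """Coarsen: vertex v belongs to thread block v // tb_size.
    Node weight = vertices per TB, edge weight = cross-TB edge count."""
    tb = lambda v: v // tb_size
    node_w = defaultdict(int)
    edge_w = defaultdict(int)
    for v in range(num_vertices):
        node_w[tb(v)] += 1
    for u, v in edges:
        a, b = tb(u), tb(v)
        if a != b:
            edge_w[(min(a, b), max(a, b))] += 1
    return node_w, edge_w

def greedy_partition(node_w, edge_w, k):
    """Assign TB nodes to k GPUs, preferring the GPU that already holds the
    most heavily connected neighbors, subject to a weight cap."""
    cap = sum(node_w.values()) / k * 1.05
    nbrs = defaultdict(list)
    for (a, b), w in edge_w.items():
        nbrs[a].append((b, w))
        nbrs[b].append((a, w))
    place, load = {}, [0.0] * k
    for n in sorted(node_w, key=node_w.get, reverse=True):
        gain = [0.0] * k
        for m, w in nbrs[n]:
            if m in place:
                gain[place[m]] += w
        feasible = [g for g in range(k) if load[g] + node_w[n] <= cap] or list(range(k))
        best = max(feasible, key=lambda g: (gain[g], -load[g]))
        place[n] = best
        load[best] += node_w[n]
    return place

# Example: coarsen a small edge list with 256 vertices per thread block, 4 GPUs.
edges = [(0, 1), (1, 2), (2, 3), (300, 301), (301, 900)]
node_w, edge_w = build_tb_graph(edges, num_vertices=1024, tb_size=256)
print(greedy_partition(node_w, edge_w, k=4))
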
Machine Learning-based Vulnerability Study of Interpose PUFs as Security Primitives for IoT Networks
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605405
Bipana Thapaliya, Khalid T. Mursi, Yu Zhuang
{"title":"Machine Learning-based Vulnerability Study of Interpose PUFs as Security Primitives for IoT Networks","authors":"Bipana Thapaliya, Khalid T. Mursi, Yu Zhuang","doi":"10.1109/nas51552.2021.9605405","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605405","url":null,"abstract":"Security is of importance for communication networks, and many network nodes, like sensors and IoT devices, are resource-constrained. Physical Unclonable Functions (PUFs) leverage physical variations of the integrated circuits to produce responses unique to individual circuits and have the potential for delivering security for low-cost networks. But before a PUF can be adopted for security applications, all security vulnerabilities must be discovered. Recently, a new PUF known as Interpose PUF (IPUF) was proposed, which was tested to be secure against reliability-based modeling attacks and machine learning attacks when the attacked IPUF is of small size. A recent study showed IPUFs succumbed to a divide-and-conquer attack, and the attack method requires the position of the interpose bit known to the attacker, a condition that can be easily obfuscated by using a random interpose position. Thus, large IPUFs may still remain secure against all known modeling attacks if the interpose position is unknown to attackers. In this paper, we present a new modeling attack method of IPUFs using multilayer neural networks, and the attack method requires no knowledge of the interpose position. Our attack was tested on simulated IPUFs and silicon IPUFs implemented on FPGAs, and the results showed that many IPUFs which were resilient against existing attacks cannot withstand our new attack method, revealing a new vulnerability of IPUFs by re-defining the boundary between secure and insecure regions in the IPUF parameter space.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128812212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
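A minimal neural-network modeling attack on a delay-based PUF can be set up as follows: challenges are mapped to the standard parity ("Phi") feature vector of an arbiter PUF, responses are collected (here from a simulated linear delay model rather than a real IPUF), and a multilayer perceptron is trained on the challenge-response pairs. The simulated PUF, the feature transform, the network size, and the use of scikit-learn are illustrative assumptions; the paper's attack uses its own network architecture against simulated and FPGA IPUFs.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_stages, n_crps = 64, 60000

# Parity ("Phi") features of an arbiter-style PUF: phi_i = prod_{j>=i} (1 - 2*c_j).
challenges = rng.integers(0, 2, size=(n_crps, n_stages))
signed = 1 - 2 * challenges
phi = np.cumprod(signed[:, ::-1], axis=1)[:, ::-1]
phi = np.hstack([phi, np.ones((n_crps, 1))])

# Simulated PUF: a linear delay model stands in for the real (I)PUF under attack.
weights = rng.normal(size=phi.shape[1])
responses = (phi @ weights > 0).astype(int)

# The attack itself: fit a multilayer perceptron to the collected CRPs.
X_tr, X_te, y_tr, y_te = train_test_split(phi, responses, test_size=0.2, random_state=0)
attack_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200)
attack_model.fit(X_tr, y_tr)
print("prediction accuracy:", attack_model.score(X_te, y_te))
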
CALC: A Content-Aware Learning Cache for Storage Systems
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605381
Maher Kachmar, D. Kaeli
{"title":"CALC: A Content-Aware Learning Cache for Storage Systems","authors":"Maher Kachmar, D. Kaeli","doi":"10.1109/nas51552.2021.9605381","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605381","url":null,"abstract":"In today’s enterprise storage systems, supported services such as data deduplication are becoming a common feature adopted in the data center, especially as new storage technologies mature. Static partitioning of storage system resources, including CPU cores and memory caches, may lead to missing Service Level Agreement (SLAs) thresholds, such as the Data Reduction Rate (DRR) or IO latency. However, typical storage system applications exhibit a workload pattern that can be learned. By learning these pattern, we are better equipped to address several storage system resource partitioning challenges, issues that cannot be overcome with traditional manual tuning and primitive feedback mechanisms.We propose a Content-Aware Learning Cache (CALC) that uses online reinforcement learning models (Q-Learning, SARSA and Actor-Critic) to actively partition the storage system cache between a data digest cache, content cache, and address-based data cache to improve cache hit performance, while maximizing data reduction rates. Using traces from popular storage applications, we show how our machine learning approach is robust and can out-perform an iterative search method for various datasets and cache sizes. Our content-aware learning cache improves hit rates by 7.1% when compared to iterative search methods, and 18.2% when compared to traditional LRU-based data cache implementation.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115713822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
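The online reinforcement-learning partitioning described above can be pictured as a tabular Q-learning agent whose state is the current split of a fixed cache budget among the digest, content, and address-based caches, whose actions move one slice of capacity between caches, and whose reward blends hit rate and data reduction rate. The state and action encoding, the reward weights, and the simulated measurement function in this Python sketch are illustrative assumptions, not CALC's formulation.

import random
from collections import defaultdict

SLICES = 10                               # cache budget split into 10 slices
ACTIONS = [(i, j) for i in range(3) for j in range(3) if i != j]  # move one slice i -> j

def step(split, action):
    src, dst = action
    if split[src] == 0:
        return split
    new = list(split)
    new[src] -= 1
    new[dst] += 1
    return tuple(new)

def measure(split):
    """Stand-in for running the storage workload with this partition and
    reporting a combined reward of cache hit rate and data reduction rate."""
    digest, content, addr = (s / SLICES for s in split)
    hit_rate = 0.5 * addr + 0.3 * content + 0.1 * digest
    drr = 0.6 * digest + 0.3 * content
    return 0.7 * hit_rate + 0.3 * drr + random.gauss(0, 0.01)

Q = defaultdict(float)
alpha, gamma, eps = 0.2, 0.9, 0.1
state = (4, 3, 3)                          # (digest, content, address) slices
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    nxt = step(state, a)
    r = measure(nxt)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(state, a)])
    state = nxt
print("learned split (digest, content, address):", state)
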
World's #1 CRM Scale Challenges
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605424
{"title":"World's #1 CRM Scale Challenges","authors":"","doi":"10.1109/nas51552.2021.9605424","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605424","url":null,"abstract":"In this talk, we will first describe scale challenges of the world's #1 CRM (Customer Relationship Management) platform that operates from the Cloud, executes billions of business tractions daily for hundreds of thousands of customer companies around the world. We will then describe how Salesforce researchers and engineers utilize computer science principles such as Amdahl's Law, temporal and spatial locality, plus big data and machine learning, to make software execute fast and efficiently on various types of compute, network, and storage architectures to meet the ever growing scale challenges.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123372119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Implementation of a High-Throughput Virtual Switch Port Monitoring System
2021 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date : 2021-10-01 DOI: 10.1109/nas51552.2021.9605360
Liang-Min Wang, Timothy Miskell, Patrick Fu, Cunming Liang, Edwin Verplanke
{"title":"Implementation of a High-Throughput Virtual Switch Port Monitoring System","authors":"Liang-Min Wang, Timothy Miskell, Patrick Fu, Cunming Liang, Edwin Verplanke","doi":"10.1109/nas51552.2021.9605360","DOIUrl":"https://doi.org/10.1109/nas51552.2021.9605360","url":null,"abstract":"As SDN-based networking infrastructure continues to evolve, an increasing number of traditional network functions are deployed over virtualized networks. Similar to fixed function switching networks, traffic monitoring in a Software Defined Network is critical in order to ensure the security and performance of the underlying infrastructure. In the context of virtualized networks, deployment of a virtualized TAP service has been reported as an effective VNF that can provide the same monitoring capabilities as a physical TAP. For most virtual switch implementations, e.g., OvS, network device virtualization is based upon a para-virtualization technology, i.e., VIRTIO. One of the primary use cases for port mirroring is inter-VM communication, i.e., packet streams that exist between virtual network devices, which remains prohibitively expensive for TAP devices. Specifically, it has been observed that virtual TAPs can contribute up to 70% performance degradation to the source VNF(s). With reference to prior work, we previously presented a feasibility study that included a novel approach towards the reduction of port-mirroring overhead. In this paper we present our latest contributions, in which we integrate our design into OvS and develop a VLAN based filtering scheme to pass traffic from a source device to a monitoring device. In this case, both devices may reside either within the same or different switch domains. Furthermore, we present an improvement over RSPAN and discuss its feasibility in delivering mirrored traffic across switch domains, which, in contrast to ERSPAN, does not require an L3 overlay network.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122813353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
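The VLAN-based filtering idea, tagging mirrored frames with a dedicated VLAN ID so that only the monitoring destination accepts them, can be illustrated in user space with a raw socket that inspects the 802.1Q header of each frame and forwards only frames carrying the mirror VLAN. The interface names, the VLAN ID, and the use of Linux AF_PACKET sockets (which require root) are illustrative assumptions; the paper's mechanism is implemented inside OvS.

import socket
import struct

MIRROR_VLAN = 100
ETH_P_ALL = 0x0003

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    if len(frame) >= 18 and frame[12:14] == b"\x81\x00":   # TPID 0x8100
        tci, = struct.unpack("!H", frame[14:16])
        return tci & 0x0FFF
    return None

def forward_mirrored(capture_if="eth0", monitor_if="tap-mon"):
    rx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    rx.bind((capture_if, 0))
    tx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    tx.bind((monitor_if, 0))
    while True:
        frame = rx.recv(65535)
        if vlan_id(frame) == MIRROR_VLAN:      # only mirrored traffic reaches the monitor
            tx.send(frame)

if __name__ == "__main__":
    forward_mirrored()
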