2018 IEEE International Conference on Networking, Architecture and Storage (NAS): Latest Publications

A Logic-Based Attack Graph for Analyzing Network Security Risk Against Potential Attack
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515733
Feng Yi, Huang Yi Cai, F. Z. Xin
Abstract: In this paper, we present LAPA, a framework for automatically analyzing network security risk and generating attack graphs for potential attacks. The key novelty of our work is that we represent the properties of networks and zero-day vulnerabilities and use a logical reasoning algorithm to generate potential attack paths, determining whether an attacker can exploit these vulnerabilities. To demonstrate its efficacy, we implemented the LAPA framework and compared it with three previous network vulnerability analysis methods. Our analysis results have a low false-negative rate and lower processing time, owing to the worst-case assumption and to logical property specification and reasoning. We also conducted a detailed study of attack-graph generation efficiency under different values of attack path number, attack path depth, and network size, which affect processing time the most. We estimate that LAPA can produce high-quality results for a large portion of networks.
Citations: 1
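The logical-reasoning step the abstract describes can be illustrated with a minimal fixed-point computation. Everything below (the hosts, the connectivity, the single derivation rule) is an invented toy example, not LAPA's actual rule set:

```python
# Minimal fixed-point reasoning over attack facts, in the spirit of
# logic-based attack-graph generation. Hosts, edges, and vulnerable
# sets below are illustrative, not from the paper.

def reachable_compromises(start, connects, vulnerable):
    """Return the set of hosts an attacker starting at `start` can
    compromise, by repeatedly applying the rule
      compromised(H2) :- compromised(H1), connects(H1, H2), vulnerable(H2)
    until no new facts are derived (a fixed point)."""
    compromised = {start}
    changed = True
    while changed:
        changed = False
        for (h1, h2) in connects:
            if h1 in compromised and h2 in vulnerable and h2 not in compromised:
                compromised.add(h2)
                changed = True
    return compromised

connects = {("internet", "web"), ("web", "db"), ("web", "files"), ("files", "db")}
vulnerable = {"web", "db"}  # hosts with an exploitable vulnerability
print(sorted(reachable_compromises("internet", connects, vulnerable)))
# "web" is reached directly and "db" through it; "files" is patched
```

The fixed-point loop is the simplest possible stand-in for the paper's logical reasoning; a real engine would also record the derivation steps to emit the attack graph itself.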
Performance Evaluation and Analysis for MPI-Based Data Movement in Virtual Switch Network
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515734
Dan Huang, Jun Wang, Dezhi Han
Abstract: Virtualization technologies have been widely deployed in data centers and private clusters to provide highly efficient and elastic resource provisioning. Virtualization has also been extended to the network layer, known as network virtualization. For example, independent virtual switches have become the primary providers of network services for virtualization platforms such as VMware, Xen, and Docker. This approach allows the physical network to be decoupled from the overlying virtual switch networks. However, network virtualization introduces performance degradation and scalability bottlenecks for communication-intensive frameworks such as MPI. We quantify and analyze the performance degradation of collective communications as well as bursty asynchronous transmission (BAT) in virtual switch (vswitch) network environments. Our experiments illustrate that MPI communication performance can be degraded by up to 5x in the virtual environment.
Citations: 0
Tier-Code: An XOR-Based RAID-6 Code with Improved Write and Degraded-Mode Read Performance
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515729
Bingzhe Li, Meng Yang, S. Mohajer, Weikang Qian, D. Lilja
Abstract: The RAID-6 configuration is more tolerant of disk failures than other RAID levels because it can tolerate two simultaneous disk failures. However, previous RAID-6 codes suffer from two major overheads: the time of the encoding and decoding processes, and the need to access multiple blocks when updating parities or recovering failed blocks. For example, the PS and Reed-Solomon codes do not have optimal computation complexity, while P-code, X-code, and RDP-code must access multiple blocks to update parities during write operations. This work proposes a new XOR-based RAID-6 code, called Tier-code, which not only achieves optimal parity computation complexity but also improves write and degraded-mode read performance compared to previous codes. It uses two tiers of coding, one at the block level and the other at the chunk level. Experimental results from software testing, simulation, and ASIC synthesis for this new hierarchical code demonstrate that Tier-code can outperform previous RAID-6 codes in both write performance and degraded-mode read performance while maintaining optimal computation complexity in both hardware and software implementations.
Citations: 0
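Tier-code's construction is not spelled out in the abstract, but the XOR primitive that every XOR-based RAID-6 code builds on is easy to show: the parity block is the bytewise XOR of the data blocks, so any single lost block is rebuilt by XOR-ing the survivors. A sketch with arbitrary example block contents:

```python
# XOR parity, the generic primitive behind XOR-based RAID codes
# (this is not Tier-code itself; block contents are examples).

def xor_parity(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_parity(data)            # P parity over the three data blocks

# Recover block 1 from the surviving data blocks plus parity:
# XOR-ing everything except the lost block cancels all other terms.
recovered = xor_parity([data[0], data[2], p])
assert recovered == data[1]
```

RAID-6 adds a second, independently computed parity (Q) on top of this so that any two failures are recoverable; the codes compared in the paper differ in how that second parity is laid out and how many blocks a parity update must touch.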
Performance Analysis of Different Convolution Algorithms in GPU Environment
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515695
Rui Xu, Sheng Ma, Yang Guo
Abstract: Convolutional neural networks (CNNs) have a wide range of applications in image and video recognition, recommender systems, and natural language processing. But CNNs are computationally intensive, and their computational cost can be prohibitive. To speed up the calculations, much effort has focused on optimizing the convolution layers, which account for most of a CNN's computation, and many algorithms have been proposed to accelerate them. However, each algorithm has its advantages and disadvantages, and no single algorithm handles all situations. In this paper, we examine the performance of various convolution algorithms in a GPU environment. By building a customized CNN model, we fully explore the impact of the network structure on the performance of these algorithms, including inference/training speed, memory consumption, and power consumption. Beyond the algorithms themselves, we also study how their GPU implementations affect performance, tracing the kernel functions of these implementations to further generalize their characteristics. Finally, we summarize the characteristics of each algorithm and design a strategy that assigns the appropriate implementation to each convolutional layer in a CNN. With our strategy, AlexNet runs 1.2x to 2.8x faster than with other strategies in a GPU environment. This work is important for understanding these algorithms and may provide insights for further optimization of GPU and accelerator architectures.
Citations: 1
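For reference, the operation all such convolution algorithms accelerate (direct, GEMM/im2col, FFT, and Winograd variants are the ones typically compared, though the abstract does not name them) is plain 2-D convolution. A naive pure-Python version of the "valid" cross-correlation that most CNN frameworks actually compute; shapes and values are illustrative:

```python
# Direct (naive) 2-D convolution: the baseline the optimized GPU
# algorithms replace. Computes "valid" cross-correlation, i.e. the
# kernel slides only over positions where it fits entirely.

def conv2d(image, kernel):
    """image, kernel: 2-D lists of numbers. Returns the valid
    cross-correlation as a 2-D list."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            row.append(sum(image[y + dy][x + dx] * kernel[dy][dx]
                           for dy in range(kh) for dx in range(kw)))
        out.append(row)
    return out

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, -1]]  # simple diagonal-difference filter
print(conv2d(image, kernel))
# each output is top-left minus bottom-right of a 2x2 window
```

This O(output · kernel) loop nest is exactly what im2col-plus-GEMM reshapes into a matrix multiply, and what FFT and Winograd methods beat asymptotically for large or numerous kernels, which is why the best choice depends on layer shape.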
MemBrain: Automated Application Guidance for Hybrid Memory Systems
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515694
Matthew Ben Olson, Tong Zhou, Michael R. Jantz, K. Doshi, M. G. Lopez, Oscar R. Hernandez
Abstract: Computer systems with multiple tiers of memory devices with different latency, bandwidth, and capacity characteristics are quickly becoming mainstream. Due to cost and physical limitations, device tiers that enable better performance typically include less capacity. Such heterogeneous memory systems require alternative data management strategies to utilize the capacity-constrained resources efficiently. However, current techniques are often limited because they rely on inflexible hardware caching or manual modifications to source code. This paper introduces MemBrain, a new memory management framework that automates the production and use of data-tiering guidance for applications on hybrid memory systems. MemBrain employs program profiling and source code analysis to enable transparent and efficient data placement across different types of memory. It automatically clusters data with similar expected usage patterns into page-aligned regions of virtual addresses (arenas) and uses offline profile feedback to direct low-level tier assignments for each region. We evaluate MemBrain on an Intel Knights Landing server with an upper tier of limited-capacity (but higher-bandwidth) MCDRAM and a lower tier of conventional DDR4, using a selection of high-bandwidth benchmarks from SPEC CPU 2017 as well as two proxy apps (Lulesh and AMG) and one full-scale scientific application (QMCPACK). Our evaluation shows that MemBrain can achieve significant performance and efficiency improvements compared to current guided and unguided management strategies.
Citations: 15
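The profile-guided tier assignment MemBrain automates can be caricatured as a greedy packing over profiled access density: put the hottest regions in the small fast tier until it fills. The region names, numbers, and the greedy policy below are illustrative assumptions, not MemBrain's actual algorithm:

```python
# Toy profile-guided tier assignment: sort regions (arenas) by
# profiled accesses per byte and pack the hottest into the fast tier
# until its capacity is exhausted. Regions and numbers are invented.

def assign_tiers(regions, fast_capacity):
    """regions: {name: (size_bytes, profiled_accesses)}.
    Returns {name: "fast" | "slow"}, chosen greedily by access
    density (accesses per byte)."""
    order = sorted(regions,
                   key=lambda r: regions[r][1] / regions[r][0],
                   reverse=True)
    assignment, used = {}, 0
    for name in order:
        size, _ = regions[name]
        if used + size <= fast_capacity:
            assignment[name] = "fast"
            used += size
        else:
            assignment[name] = "slow"
    return assignment

regions = {"weights": (64, 6400), "scratch": (32, 160), "logs": (128, 128)}
print(assign_tiers(regions, fast_capacity=96))
```

Density-ordered packing is the natural baseline when the fast tier is bandwidth-rich but capacity-poor, as with MCDRAM over DDR4; the framework's contribution is producing this guidance automatically from profiles rather than hand annotations.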
Tolerating Soft Errors in Deep Learning Accelerators with Reliable On-Chip Memory Designs
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515692
Arash AziziMazreah, Yongbin Gu, Xiang Gu, Lizhong Chen
Abstract: Deep learning neural network (DNN) accelerators have been increasingly deployed in many fields recently, including safety-critical applications such as autonomous vehicles and unmanned aircraft. Meanwhile, the vulnerability of DNN accelerators to soft errors (e.g., caused by high-energy particle strikes) rapidly increases as manufacturing technology continues to scale down. A failure in the operation of a DNN accelerator may lead to catastrophic consequences. Among the existing reliability techniques that can be applied to DNN accelerators, fully-hardened SRAM cells are attractive due to their low overhead in area, power, and delay. However, current fully-hardened SRAM cells can only tolerate soft errors produced by single-node upsets (SNUs) and cannot fully resist soft errors caused by multiple-node upsets (MNUs). In this paper, a Zero-Biased MNU-Aware SRAM Cell (ZBMA) is proposed for DNN accelerators based on two observations: first, the data (feature maps, weights) in DNNs is strongly biased toward zero; second, a flip from zero to one is more likely to cause a failure of DNN outputs. The proposed memory cell provides robust immunity against node upsets and reduces leakage current dramatically when zero is stored in the cell. Evaluation results show that when the proposed memory cell is integrated into a DNN accelerator, the total static power of the accelerator is reduced by 2.6x and 1.79x compared with accelerators based on conventional and state-of-the-art fully-hardened memory cells, respectively. In terms of reliability, the DNN accelerator based on the proposed memory cell eliminates 99.99% of the false outputs caused by soft errors across different DNNs.
Citations: 37
Multi-User Optimal Offloading: Leveraging Mobility and Allocating Resources in Mobile Edge Cloud Computing
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515725
Hongyan Yu, Jiadi Liu, Songtao Guo
Abstract: Mobile cloud computing (MCC), as a prospective computing paradigm, can significantly enhance computation capability and save energy for smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich center cloud. Compared to a center cloud, an edge cloud can provide services to nearby SMDs with lower latency. However, an edge cloud may be mobile, and its resources are shared among multiple nearby users. In this paper, we aim to minimize the total execution cost of multiple devices by offloading computation from SMDs onto edge clouds in an edge cloud computing (ECC) system. Considering the mobility of both SMDs and edge clouds, we first formulate the total cost minimization problem under the constraints of application completion deadlines, connection time between SMDs and edge clouds, and the limited computing resources of both edge clouds and SMDs. Then, by solving the minimization problem, we propose an optimal offloading selection strategy based on a game model, together with an edge cloud payoff competition algorithm that optimally allocates edge cloud resources to SMDs to achieve the minimum total execution cost. Experimental results show that our offloading strategy can effectively reduce energy consumption and application completion time compared with state-of-the-art methods.
Citations: 3
Energy-Efficient Dynamic Task Offloading for Energy Harvesting Mobile Cloud Computing
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515736
Yongqiang Zhang, Jianbo He, Songtao Guo
Abstract: Mobile-edge cloud computing (MEC), an emerging and prospective computing paradigm, can significantly enhance computation capability and prolong the lifetime of mobile devices (MDs) by offloading computation-intensive tasks to the cloud. This paper applies the simultaneous wireless information and power transfer (SWIPT) technique to a multi-user computation offloading problem for mobile-edge cloud computing, where energy-limited MDs harvest energy from the ambient radio-frequency (RF) signal. We investigate partial computation offloading by jointly optimizing the MDs' clock frequency, transmit power, and offloading ratio, with the design objective of minimizing the energy cost of mobile devices. To this end, we first formulate an energy cost minimization problem constrained by task completion time and finite mobile-edge cloud computation capacity. Then, exploiting alternating optimization (AO) based on difference-of-convex-functions (DC) programming and linear programming, we design an iterative algorithm for clock frequency control, transmission power allocation, offloading ratio, and power splitting ratio to solve the non-convex optimization problem. Our simulation results reveal that the proposed algorithm converges within a few iterations and yields the minimum system energy cost.
Citations: 49
An Optimized Implementation for Concurrent LSM-Structured Key-Value Stores
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515730
Li Liu, Hua Wang, Ke Zhou
Abstract: Log-structured merge tree (LSM) based key-value (KV) stores such as LevelDB and HyperLevelDB use a compaction strategy, which brings frequent compaction operations, to store key-value items in sorted order. However, large numbers of compactions have a negative impact on write and read performance for random data-intensive workloads. To remedy this problem, this paper presents OHDB, an optimization of HyperLevelDB for random data-intensive workloads. OHDB implements two stand-alone techniques in the disk component of the LSM structure to optimize concurrent compactions. One is dividing KV items by prefix at the first level of the disk component, which reduces the frequency of overlapping key ranges among data files and thus the number of compactions. The other is separating the first level of the disk component from the remaining levels and placing them on two separate disks, which increases the parallelism of compaction disk writes. We evaluate three OHDB variants, one with each technique alone and one combining both, using micro-benchmarks with random write-intensive and read-intensive workloads. Experimental results show that OHDB reduces the number of compactions by a factor of up to 4x and improves write and read performance for random data-intensive workloads under various settings.
Citations: 0
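The first OHDB technique, prefix partitioning at the first disk level, can be sketched in a few lines: keys with different prefixes land in disjoint partitions whose files can never overlap in key range, so they are never pulled into the same compaction. The prefix length and keys below are illustrative:

```python
# Sketch of prefix partitioning in the OHDB style: group KV items
# into disjoint partitions by key prefix so files from different
# partitions never overlap in key range. Prefix length and keys are
# illustrative choices, not OHDB's actual parameters.

def partition_by_prefix(items, prefix_len=1):
    """Group (key, value) pairs into partitions keyed by the first
    `prefix_len` characters; each partition can be flushed and
    compacted independently of the others."""
    partitions = {}
    for key, value in items:
        partitions.setdefault(key[:prefix_len], []).append((key, value))
    return partitions

items = [("apple", 1), ("avocado", 2), ("banana", 3), ("cherry", 4)]
parts = partition_by_prefix(items)
# keys in different partitions cannot overlap, so a compaction in one
# partition never has to merge files from another
assert set(parts) == {"a", "b", "c"}
```

Shrinking each compaction's key range is what cuts the compaction count; the paper's second technique then parallelizes the remaining compaction I/O across two disks.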
Optimal Travel Route Designing in Wireless Sensor Networks with Mobile Sink
2018 IEEE International Conference on Networking, Architecture and Storage (NAS) Pub Date: 2018-10-01 DOI: 10.1109/NAS.2018.8515720
Jiqiang Tang, Songtao Guo, Yuanyuan Yang
Abstract: In this paper, we propose a shortest-travel-route planning scheme that takes into account the spatial characteristics of wireless transmissions for mobile data gathering in wireless sensor networks. We formulate the shortest travel route problem (STRP) as a covering salesman problem (CSP), which is a mixed-integer nonlinear program and also a non-convex programming problem. To solve the STRP, we propose a heuristic decomposition algorithm (DA) that splits the STRP into two subproblems: an access sequence problem and a position determining problem. We conduct extensive simulations to verify the effectiveness of the proposed algorithm and show that DA can plan short travel routes in large-scale WSNs, beyond the small-scale WSNs that classical traveling salesman problem (TSP) algorithms can handle.
Citations: 2
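The access-sequence subproblem that DA solves is a tour-construction problem. As a stand-in (this is the classic nearest-neighbour TSP heuristic, not the paper's DA algorithm, and the coordinates are invented), one can greedily visit the closest unvisited sensor next:

```python
# Nearest-neighbour tour construction: a classic TSP heuristic used
# here only to illustrate the access-sequence subproblem. This is not
# the paper's decomposition algorithm; points are invented.
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy tour over 2-D points: from the current position, always
    move to the nearest unvisited point. Returns the visit order as a
    list of indices."""
    unvisited = set(range(len(points))) - {start}
    tour, cur = [start], start
    while unvisited:
        cur = min(unvisited,
                  key=lambda j: math.dist(points[cur], points[j]))
        tour.append(cur)
        unvisited.remove(cur)
    return tour

points = [(0, 0), (5, 0), (1, 1), (6, 1)]
print(nearest_neighbor_tour(points))  # -> [0, 2, 1, 3]
```

In the CSP setting of the paper, the sink additionally need not visit every sensor exactly, only a point within each sensor's transmission range, which is what the separate position-determining subproblem resolves.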