Proceedings of the ACM on Measurement and Analysis of Computing Systems: Latest Publications

A First Look at Wi-Fi 6 in Action: Throughput, Latency, Energy Efficiency, and Security
Ruofeng Liu, Nakjung Choi
DOI: https://doi.org/10.1145/3579451 | Published: 2023-02-27
Abstract: This paper presents a first-of-its-kind performance measurement of Wi-Fi 6 (IEEE 802.11ax) using real experiments. Our experiments focus on multi-client scenarios. The results reveal the impact of the new channel access mechanisms (i.e., OFDMA and TWT) on spectrum efficiency, energy consumption, latency, and network security. (i) A comparison with the legacy CSMA/CA scheme shows that commodity Wi-Fi 6 achieves 3× overall throughput and dramatically reduces latency (5×) when coexisting with a legacy Wi-Fi network. (ii) However, the current OFDMA implementation significantly increases power consumption (6×), implying a design tradeoff between the throughput and latency gains and the cost of energy consumption. (iii) Finally, the TWT negotiation procedure is vulnerable to various malicious attacks. We believe that our findings provide critical insights for the scheduling algorithm design, power optimization, and security protection of next-generation WLANs.
Citations: 2
DaeMon: Architectural Support for Efficient Data Movement in Fully Disaggregated Systems
Christina Giannoula, Kailong Huang, Jonathan Tang, N. Koziris, G. Goumas, Zeshan A. Chishti, Nandita Vijaykumar
DOI: https://doi.org/10.1145/3579445 | Published: 2023-02-27
Abstract: Resource disaggregation offers a cost-effective solution to resource scaling, utilization, and failure handling in data centers by physically separating the hardware devices of a server. Servers are architected as pools of processor, memory, and storage devices, organized as independent, failure-isolated components interconnected by a high-bandwidth network. A critical challenge, however, is the high performance penalty of accessing data on a remote memory module over the network. Addressing this challenge is difficult, as disaggregated systems have high runtime variability in network latency and bandwidth, and page migration can significantly delay critical-path cache-line accesses in other pages. This paper conducts a characterization analysis of different data movement strategies in fully disaggregated systems, evaluates their performance overheads on a variety of workloads, and introduces DaeMon, the first software-transparent mechanism to significantly alleviate data movement overheads in fully disaggregated systems. First, to enable scalability to multiple hardware components in the system, we enhance each compute and memory unit with specialized engines that transparently handle data migrations. Second, to achieve high performance and provide robustness across various network, architecture, and application characteristics, we implement a synergistic approach of bandwidth partitioning, link compression, decoupled data movement at multiple granularities, and adaptive granularity selection in data movements. We evaluate DaeMon on a wide variety of workloads under different network and architecture configurations using a state-of-the-art simulator. DaeMon improves system performance and data access costs by 2.39× and 3.06×, respectively, over the widely adopted approach of moving data at page granularity.
Citations: 2
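DaeMon's adaptive granularity selection decides whether to move a whole page or only the requested cache line. As a toy illustration of that kind of decision rule (the function name, threshold, and policy below are my own sketch, not DaeMon's actual mechanism):

```python
def pick_granularity(access_bitmap, page_lines=64, dense_frac=0.5):
    """Toy granularity selector: if a large fraction of a page's cache lines
    were touched recently, migrate the whole page; otherwise fetch only the
    requested cache line. Threshold is illustrative, not DaeMon's policy."""
    touched = sum(access_bitmap)          # recently accessed lines in the page
    return "page" if touched >= dense_frac * page_lines else "cacheline"

# Densely accessed page -> amortize the move; sparse page -> avoid delaying
# critical-path cache-line accesses behind a bulk migration.
print(pick_granularity([1] * 64))          # densely touched page
print(pick_granularity([1] + [0] * 63))    # single touched line
```

The point of the sketch is the tradeoff the abstract names: page-granularity moves amortize network cost but can stall critical-path line accesses, so granularity should adapt to access density.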
Gacha Game Analysis and Design
Canhui Chen, Zhixuan Fang
DOI: https://doi.org/10.1145/3579438 | Published: 2023-02-27
Abstract: A gacha game is a special opaque selling approach in which the seller sells gacha pulls to the buyer, and each pull gives the buyer a certain probability of winning the game's reward. Gacha games have been enthusiastically embraced by numerous online video games and have a wide range of potential applications. In this work, we model the complex interaction between the seller and the buyer as a Stackelberg game, in which the buyer's sequential decision is modeled as a Markov Decision Process (MDP). We define the whale property in the context of gacha games and show that it is a necessary condition for achieving optimal revenue. Moreover, we provide the revenue-optimal gacha game design and show that it is equivalent to the single-item, single-bidder Myerson auction. We further explore two popular multi-item gacha games, namely the sequential multi-item gacha game and the banner-based multi-item gacha game. We also discuss subsidies in gacha games and demonstrate how subsidies may encourage the buyer to engage in grinding behavior. Finally, we provide a case study of blockchain systems as gacha games.
Citations: 0
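The abstract models the buyer's sequential pull decisions as an MDP. As a toy illustration of the kind of quantity such a model reasons about, here is the expected number of pulls to win under a fixed per-pull probability, with an optional "pity" cap that guarantees a win; the pity mechanism is a common gacha design used here for illustration, not something the paper specifies:

```python
def expected_pulls(p, pity=None):
    """Expected number of pulls until the reward is won.
    p    -- per-pull win probability
    pity -- optional pull count at which a win is guaranteed (toy mechanism)
    """
    if pity is None:
        return 1.0 / p  # geometric distribution mean
    # Sum k * P(first win on pull k) for pulls before the pity cap ...
    e = sum(k * p * (1 - p) ** (k - 1) for k in range(1, pity))
    # ... plus the forced win exactly at the pity cap.
    e += pity * (1 - p) ** (pity - 1)
    return e

print(expected_pulls(0.5))          # no pity: 1/p = 2 pulls on average
print(expected_pulls(0.5, pity=2))  # win at pull 1 w.p. 0.5, else forced at 2
```

A pity cap bounds the buyer's worst case, which is exactly the kind of design lever a revenue-optimal mechanism has to price.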
Duo: A High-Throughput Reconfigurable Datacenter Network Using Local Routing and Control
Johannes Zerwas, Csaba Györgyi, Andreas Blenk, Stefan Schmid, C. Avin
DOI: https://doi.org/10.1145/3579449 | Published: 2023-02-27
Abstract: The performance of many cloud-based applications critically depends on the capacity of the underlying datacenter network. A particularly innovative approach to improving throughput in datacenters is enabled by emerging optical technologies, which allow the physical network topology to be adjusted dynamically, in either an oblivious or a demand-aware manner. However, such topology engineering, i.e., the operation and control of dynamic datacenter networks, is considered complex and currently comes with restrictions and overheads. We present Duo, a novel demand-aware reconfigurable rack-to-rack datacenter network design realized with a simple and efficient control plane. Duo is based on the well-known de Bruijn topology (implemented using a small number of optical circuit switches) and the key observation that this topology can be enhanced with dynamic ("opportunistic") links between its nodes. In contrast to previous systems, Duo has several desirable features: (i) it makes effective use of the network capacity by supporting integrated and multi-hop routing (paths that combine both static and dynamic links); (ii) it uses work-conserving queue scheduling, which enables out-of-the-box TCP support; (iii) it employs greedy routing implemented with standard IP longest-prefix match and small forwarding tables; and (iv) during topological reconfigurations, routing tables require only local updates, making the approach ideal for dynamic networks. We evaluate Duo in end-to-end packet-level simulations, comparing it to state-of-the-art static and dynamic network designs. We show that Duo provides higher throughput, shorter paths, lower flow completion times for high-priority flows, and minimal packet reordering, all using existing network and transport layer protocols. We also report on a proof-of-concept implementation of Duo's control and data plane.
Citations: 7
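Greedy routing on the de Bruijn topology that Duo builds on can be sketched in a few lines: each static link shifts one symbol into the node label, so the next hop toward a destination is computable locally. A minimal sketch over string labels (Duo's real design adds dynamic links and IP longest-prefix-match tables; this shows only the static greedy path):

```python
def debruijn_path(u, v):
    """Greedy de Bruijn routing over string node labels of equal length.
    Each hop u -> u[1:] + c is a static de Bruijn link that shifts in one
    symbol of the destination. Illustrative sketch, not Duo's control plane."""
    assert len(u) == len(v)
    n = len(u)
    # Longest suffix of u that is already a prefix of v: those symbols
    # are in place, so only the remaining ones need to be shifted in.
    k = max(i for i in range(n + 1) if u[n - i:] == v[:i])
    path = [u]
    for c in v[k:]:
        u = u[1:] + c
        path.append(u)
    return path

print(debruijn_path("000", "111"))  # shifts in 1,1,1: three hops
print(debruijn_path("011", "110"))  # overlap "11" saves two hops: one hop
```

Because the next hop depends only on the local label and the destination, reconfigurations need only local routing-table updates, which is the property the abstract highlights.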
Network Monitoring on Multi-Pipe Switches
Marco Chiesa, F. Verdi
DOI: https://doi.org/10.1145/3579321 | Published: 2023-02-27
Abstract: Programmable switches have been widely used to design network monitoring solutions that operate at the fast data-plane level, e.g., detecting heavy hitters and super-spreaders, and computing flow size distributions and their entropy. Many existing works on network monitoring assume switches deploy a single memory that is accessible to each processed packet. However, high-speed ASIC switches increasingly deploy multiple independent pipes, each equipped with its own independent memory that cannot be accessed by other pipes. In this work, we initiate the study of deploying existing heavy-hitter data-plane monitoring solutions on multi-pipe switches, where packets of a "flow" may spread over multiple pipes, i.e., be stored in distinct memories. We first quantify the accuracy degradation due to splitting a monitoring data structure across multiple pipes (e.g., up to 3000× worse average flow-size estimation error). We then present PipeCache, a system that adapts existing data-plane mechanisms to multi-pipe switches by carefully storing all the monitoring information of each traffic class in exactly one specific pipe (as opposed to replicating the information across multiple pipes). PipeCache relies on the idea of briefly storing monitoring information in a per-pipe cache and then piggybacking this information onto existing data packets headed to the correct pipe, entirely at data-plane speed. We implement PipeCache on ASIC switches and evaluate it using a real-world trace. We show that, when augmented with PipeCache, existing data-plane mechanisms achieve accuracy levels and memory requirements similar to single-pipe deployments (i.e., up to 16× lower memory requirements).
Citations: 0
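The core PipeCache idea, counting each traffic class in exactly one owner pipe rather than splitting it across per-pipe memories, can be sketched as follows (class and method names are illustrative; the real system caches updates per pipe and piggybacks them onto data packets at ASIC speed):

```python
import hashlib

class PipeCache:
    """Sketch: packets of a flow may enter via any pipe, but every update
    lands in the flow's single owner pipe, so counts are never split."""

    def __init__(self, n_pipes):
        self.n_pipes = n_pipes
        self.counters = [dict() for _ in range(n_pipes)]  # per-pipe memory

    def owner(self, flow):
        # Deterministic owner pipe per traffic class.
        digest = hashlib.md5(flow.encode()).hexdigest()
        return int(digest, 16) % self.n_pipes

    def packet(self, flow, ingress_pipe):
        # Regardless of the ingress pipe, count in the owner pipe's memory.
        mem = self.counters[self.owner(flow)]
        mem[flow] = mem.get(flow, 0) + 1

    def estimate(self, flow):
        return self.counters[self.owner(flow)].get(flow, 0)

pc = PipeCache(4)
for ingress in range(4):          # flow "A" arrives on four different pipes
    pc.packet("A", ingress)
print(pc.estimate("A"))           # one consolidated counter, not four shards
```

Without the owner-pipe indirection, each pipe would hold a partial count of "A", which is exactly the accuracy-splitting problem the paper measures.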
Go-to-Controller is Better: Efficient and Optimal LPM Caching with Splicing
Itamar Gozlan, C. Avin, Gil Einziger, Gabriel Scalosub
DOI: https://doi.org/10.1145/3579441 | Published: 2023-02-27
Abstract: Modern data center networks are required to support huge and complex forwarding policies as they handle the traffic of their various tenants. However, these policies cannot be stored in their entirety within the limited memory available at commodity switches. The common approach in such scenarios is to have SDN controllers manage the memory available at the switch as a fast cache, updating and changing the forwarding rules in the cache according to the workload dynamics and the global policy at hand. Many such policies, such as longest-prefix-match (LPM) policies, introduce dependencies between the forwarding rules. Ensuring that the cache content is always consistent with the global policy often requires the switch to store (potentially many) superfluous rules, which may lead to suboptimal performance in terms of delay and throughput. To overcome these deficiencies, previous work suggested the concept of splicing, where modified go-to-controller rules can be inserted into the cache to improve performance while maintaining consistency. That work focused mostly on heuristics, and it was conjectured that the problem is computationally intractable. As our main result, we show that the problem of determining the optimal set of rules, with splicing, can in fact be solved efficiently: we present a polynomial-time algorithm that produces an optimal solution, i.e., for a given cache size it finds an optimal set of rules, some of which are go-to-controller, that maximizes the total weight of the cache while maintaining consistency. Such optimality comes at a cost, however: our algorithm has a significantly larger running time than state-of-the-art solutions that do not employ splicing. We therefore further present a heuristic exhibiting close-to-optimal performance with significantly improved running time, matching that of the best algorithm that does not employ splicing. In addition, we present the results of an evaluation study that compares the performance of our solutions with that of state-of-the-art approaches, showing that splicing can reduce the cache miss ratio by as much as 30% without increasing the cache size. Lastly, we propose a simple, fast-to-compute, consistency-oblivious metric for evaluating the potential benefit of splicing over classical LPM caching for a given policy and traffic distribution. We show that our metric is highly correlated with this benefit, and thus serves as an indication of whether splicing should be incorporated into the system architecture.
Citations: 1
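The consistency constraint behind splicing can be made concrete with bit-string prefixes: caching a short prefix without its more-specific descendants would make LPM silently apply the short prefix's action to packets that the full policy routes differently, so a spliced go-to-controller rule is installed to cover them. A minimal sketch of that constraint (this illustrates consistency only, not the paper's optimal polynomial-time algorithm):

```python
def splice_cache(policy, wanted):
    """Build an LPM cache holding the 'wanted' prefixes consistently.
    policy -- dict mapping bit-string prefixes to actions
    wanted -- subset of policy prefixes we want cached with real actions
    Any more-specific policy prefix under a cached prefix that would be
    mis-matched is covered by a go-to-controller rule. Illustrative sketch."""
    cache = {p: policy[p] for p in wanted}
    for p in sorted(policy, key=len):  # shortest prefixes first
        if p in cache:
            continue
        # Longest cached ancestor that p's packets would LPM-match.
        anc = max((q for q in cache if p != q and p.startswith(q)),
                  key=len, default=None)
        if anc is not None and cache[anc] != "go-to-controller":
            # Packets for p would wrongly get anc's action: splice instead
            # of pulling p's whole subtree into the cache.
            cache[p] = "go-to-controller"
    return cache

policy = {"0": "A", "01": "B", "011": "C"}
print(splice_cache(policy, {"0"}))
```

Note that one spliced rule ("01" above) covers its whole subtree: packets for "011" LPM-match the go-to-controller rule and are resolved correctly by the controller, without caching "011" itself.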
Detecting and Measuring Aggressive Location Harvesting in Mobile Apps via Data-flow Path Embedding
Haoran Lu, Qingchuan Zhao, Yongliang Chen, Xiaojing Liao, Zhiqiang Lin
DOI: https://doi.org/10.1145/3579447 | Published: 2023-02-27
Abstract: Today, location-based services have become prevalent on the mobile platform, where mobile apps provide specific services to a user based on his or her location. Unfortunately, mobile apps can aggressively harvest location data with much higher accuracy and frequency than they need, because the coarse-grained access control mechanism currently implemented in mobile operating systems (e.g., Android) cannot regulate such behavior. This unnecessary data collection violates the data minimization policy, yet no previous studies have investigated privacy violations from this perspective, and existing techniques are insufficient to address this violation. To fill this knowledge gap, we take the first step toward detecting and measuring this privacy risk in mobile apps at scale. In particular, we annotate and release the first dataset to characterize aggressive location harvesting apps and understand the challenges of automatic detection and classification. Next, we present a novel system, LocationScope, which addresses these challenges by (i) uncovering how an app collects locations and how it uses such data, through a fine-tuned value-set analysis technique; (ii) recognizing the fine-grained location-based services an app provides by embedding the data-flow paths (a combination of program analysis and machine learning techniques) extracted from its location data usage; and (iii) identifying aggressive apps with an outlier detection technique, achieving a precision of 97% in aggressive app detection. Our technique has further been applied to millions of free Android apps from Google Play as of 2019 and 2021. Highlights of our measurements on detected aggressive apps include their growing trend from 2019 to 2021 and app generators' significant contribution to aggressive location harvesting apps.
Citations: 2
Fiat Lux: Illuminating IPv6 Apportionment with Different Datasets
Amanda Hsu, Frank H. Li, P. Pearce
DOI: https://doi.org/10.1145/3579334 | Published: 2023-02-27
Abstract: IPv6 adoption continues to grow, making up more than 40% of client traffic to Google globally. While the ubiquity of the IPv4 address space makes it comparatively easy to understand, the vast and less-studied IPv6 address space has motivated a variety of works detailing methodology to collect and analyze IPv6 properties, many of which use knowledge from specific data sources as a lens for answering research questions. Despite such work, questions remain about basic properties such as the appropriate prefix size for different research tasks. Our work fills this knowledge gap by presenting a ground-up analysis of the apportionment of the IPv6 address space, using data and knowledge from numerous sources simultaneously, aimed at identifying how to leverage IPv6 address information for a variety of research tasks. Utilizing WHOIS data from RIRs, routing data, and hitlists, we highlight fundamental differences in apportionment sizes and structural properties depending on the data source and examination method. We focus on the different perspectives each dataset offers and the disjoint, heterogeneous nature of these datasets when taken together. We additionally leverage a graph-based analysis method for these datasets that allows us to draw conclusions about when and how to intersect the datasets, and about their utility. The differences in each dataset's perspective are not due to dataset problems but rather stem from a variety of differing structural and deployment behaviors across RIRs and IPv6 providers alike. In light of these inconsistencies, we discuss network address partitioning, best practices, and considerations for future IPv6 measurement and analysis projects.
Citations: 1
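Intersecting per-dataset views of the IPv6 space amounts to asking, for a given address, which apportionment prefix each dataset assigns it. A small sketch using Python's ipaddress module (the dataset names and prefixes are invented for illustration; real WHOIS, routing, and hitlist data are far messier):

```python
import ipaddress

def covering_prefixes(addr, prefix_sets):
    """For one address, return the most specific prefix (if any) that each
    dataset apportions it to. prefix_sets maps dataset name -> list of
    prefix strings. Toy version of intersecting multi-dataset views."""
    ip = ipaddress.ip_address(addr)
    out = {}
    for name, prefixes in prefix_sets.items():
        hits = [p for p in prefixes if ip in ipaddress.ip_network(p)]
        # Most specific covering prefix, or None if the dataset misses it.
        out[name] = max(hits, default=None,
                        key=lambda p: ipaddress.ip_network(p).prefixlen)
    return out

datasets = {
    "whois":   ["2001:db8::/32"],                      # RIR allocation view
    "routing": ["2001:db8::/48", "2001:db8:1::/48"],   # BGP-announced view
}
print(covering_prefixes("2001:db8::1", datasets))
```

Disagreement between the returned prefixes per dataset, e.g., a /32 allocation announced as multiple /48s, is the kind of structural difference the paper measures across sources.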
DiffForward: On Balancing Forwarding Traffic for Modern Cloud Block Services via Differentiated Forwarding
Wenzheng Zhu, Yongkun Li, Erci Xu, Fei Li, Yinlong Xu, John C.S. Lui
DOI: https://doi.org/10.1145/3579444 | Published: 2023-02-27
Abstract: A modern cloud block service provides cloud users with virtual block disks (VDisks), and it usually relies on a forwarding layer consisting of multiple proxy servers to forward block-level writes from applications to the underlying distributed storage. However, we discover that severe traffic imbalance exists among the proxy servers at the forwarding layer, creating a performance bottleneck that severely prolongs the latency of accessing VDisks. Worse yet, due to the diverse access patterns of VDisks, stable traffic and burst traffic coexist at the forwarding layer, making existing load-balancing designs inefficient for balancing the traffic at the forwarding layer of VDisks, as they are unaware of, and lack the ability to differentiate, the decomposable burst and stable traffic. To this end, we propose DiffForward, a novel traffic forwarding scheme for cloud block services. DiffForward differentiates burst traffic from stable traffic in an accurate and efficient way at the client side; it then forwards the burst traffic to a decentralized distributed log store, realizing real-time load balance by writing the data in a round-robin manner, and balances the stable traffic by segmentation. DiffForward also judiciously coordinates the stable and burst traffic and preserves strong consistency under differentiated forwarding. Extensive experiments with real-life workloads on our prototype show that DiffForward effectively balances the traffic at the forwarding layer at a fine-grained, sub-second level, significantly reducing the write latency of VDisks.
Citations: 1
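The differentiated-forwarding idea can be sketched as a classifier in front of two placement policies: stable writes stay pinned to a per-segment proxy, while writes classified as burst are spread round-robin across all proxies. The rate-threshold classifier and segment hash below are toy stand-ins for DiffForward's actual detection and segmentation logic:

```python
class DiffForward:
    """Sketch of differentiated forwarding for block writes.
    Stable traffic: per-segment pinning keeps locality on one proxy.
    Burst traffic: round-robin over all proxies (the log-store role)
    balances load in real time. Classifier and hash are illustrative."""

    def __init__(self, n_proxies, burst_threshold=8):
        self.n = n_proxies
        self.threshold = burst_threshold  # writes/window before 'burst'
        self.rr = 0                       # round-robin cursor
        self.window = {}                  # per-VDisk write count this window

    def forward(self, vdisk, offset, seg_size=1024):
        self.window[vdisk] = self.window.get(vdisk, 0) + 1
        if self.window[vdisk] > self.threshold:
            # Burst: spread writes round-robin, ignoring locality.
            self.rr = (self.rr + 1) % self.n
            return self.rr
        # Stable: pin each segment of the VDisk to a fixed proxy.
        return (vdisk * 31 + offset // seg_size) % self.n

df = DiffForward(4, burst_threshold=2)
print([df.forward(1, 0) for _ in range(4)])  # two pinned, then round-robin
```

The real system additionally has to keep reads consistent once a block's latest write may live in the log store, which is why the abstract stresses coordinating the two traffic classes under strong consistency.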
Asynchronous Automata Processing on GPUs
Hongyuan Liu, Sreepathi Pai, Adwait Jog
DOI: https://doi.org/10.1145/3579453 | Published: 2023-02-27
Abstract: Finite-state automata serve as compute kernels in many application domains, such as pattern matching and data analytics. Existing approaches on GPUs exploit three levels of parallelism in automata processing tasks: 1) input-stream level, 2) automaton level, and 3) state level. Among these, only state-level parallelism is intrinsic to automata, while the other two levels depend on the number of automata and input streams to be processed. As GPU resources increase, a parallelism-limited automata processing task can underutilize GPU compute resources. To this end, we propose AsyncAP, a low-overhead approach that optimizes for both scalability and throughput. Our insight is that most automata processing tasks have an additional source of parallelism, originating from the input symbols, which has not been leveraged before. Making the matching process asynchronous, i.e., having parallel GPU threads start processing an input stream from different locations instead of processing it serially, improves throughput significantly and scales with input length. When a task does not have enough parallelism to utilize all the GPU cores, detailed evaluation across 12 applications shows that AsyncAP achieves up to 58× speedup on average over the state-of-the-art GPU automata processing engine. When tasks have enough parallelism to utilize the GPU cores, AsyncAP still achieves a 2.4× speedup.
Citations: 1
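AsyncAP's extra parallelism comes from starting the automaton at many input offsets at once rather than scanning serially. A serial Python sketch of that decomposition for a plain pattern automaton (real AsyncAP runs these as GPU threads over nondeterministic automata; the striped worker assignment here is only illustrative):

```python
def find_matches(pattern, text, n_workers=4):
    """Each simulated worker starts the match at a different stripe of input
    offsets; the union of per-worker results equals the serial scan, so the
    work parallelizes over input symbols. Illustrative sketch of AsyncAP."""
    matches = set()
    last_start = len(text) - len(pattern)
    for w in range(n_workers):
        # Worker w handles start offsets w, w + n_workers, w + 2*n_workers, ...
        for i in range(w, last_start + 1, n_workers):
            if text[i:i + len(pattern)] == pattern:
                matches.add(i)
    return sorted(matches)

print(find_matches("ab", "ababab"))  # same result for any worker count
```

Because every start offset is an independent unit of work, throughput scales with input length even when there is only one automaton and one stream, which is the underutilization case the abstract targets.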