Proceedings of the Seventh ACM Symposium on Cloud Computing: Latest Publications

STeP: Scalable Tenant Placement for Managing Database-as-a-Service Deployments
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987575
Rebecca Taft, Willis Lang, Jennie Duggan, Aaron J. Elmore, M. Stonebraker, D. DeWitt
Public cloud providers with Database-as-a-Service offerings must efficiently allocate computing resources to each of their customers. An effective assignment of tenants both reduces the number of physical servers in use and meets customer expectations at a price point that is competitive in the cloud market. For public cloud vendors like Microsoft and Amazon, this means packing millions of users' databases onto hundreds or thousands of servers. This paper studies tenant placement by examining a publicly released dataset of anonymized customer resource usage statistics from Microsoft's Azure SQL Database production system over a three-month period. We implemented the STeP framework to ingest and analyze this large dataset. STeP allowed us to use this production dataset to evaluate several new algorithms for packing database tenants onto servers. These techniques produce highly efficient packings by collocating tenants with compatible resource usage patterns. The evaluation shows that under a production-sourced customer workload, these techniques are robust to variations in the number of nodes, keeping performance objective violations to a minimum even for high-density tenant packings. In comparison to the algorithm used in production at the time of data collection, our algorithms produce up to 90% fewer performance objective violations and save up to 32% of total operational costs for the cloud provider.
Citations: 29
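The co-location idea above — packing tenants whose resource-usage patterns are complementary so their combined peak stays low — can be illustrated with a toy greedy placer. This is our sketch, not one of the paper's actual algorithms:

```python
# Illustrative greedy placement: co-locate tenants whose usage time series
# are complementary, i.e. minimise the peak combined load per server.
# Toy sketch, not the actual STeP algorithms.

def place_tenants(tenants, capacity, num_servers):
    """tenants: list of hourly usage vectors; returns server -> tenant ids."""
    servers = [[0.0] * len(tenants[0]) for _ in range(num_servers)]
    placement = {s: [] for s in range(num_servers)}
    # Place heaviest tenants first (by peak usage).
    order = sorted(range(len(tenants)), key=lambda t: -max(tenants[t]))
    for t in order:
        best, best_peak = None, float("inf")
        for s in range(num_servers):
            combined = [a + b for a, b in zip(servers[s], tenants[t])]
            peak = max(combined)
            if peak <= capacity and peak < best_peak:
                best, best_peak = s, peak
        if best is None:
            raise ValueError("no server can host tenant %d" % t)
        servers[best] = [a + b for a, b in zip(servers[best], tenants[t])]
        placement[best].append(t)
    return placement
```

With four tenants whose load peaks alternate between two hours, the placer pairs morning-heavy tenants with evening-heavy ones on the same server.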
Job-aware Scheduling in Eagle: Divide and Stick to Your Probes
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987563
Pamela Delgado, Diego Didona, Florin Dinu, W. Zwaenepoel
We present Eagle, a new hybrid data center scheduler for data-parallel programs. Eagle dynamically divides the nodes of the data center into partitions for the execution of long and short jobs, thereby avoiding head-of-line blocking. Furthermore, it provides job awareness and avoids stragglers by a new technique, called Sticky Batch Probing (SBP). The dynamic partitioning of the data center nodes is accomplished by a technique called Succinct State Sharing (SSS), in which the distributed schedulers are informed of the locations where long jobs are executing. SSS is particularly easy to implement with a hybrid scheduler, in which the centralized scheduler places long jobs. With SBP, when a distributed scheduler places a probe for a job on a node, the probe stays there until all tasks of the job have been completed. When finishing the execution of a task corresponding to probe P, rather than executing a task corresponding to the next probe P' in its queue, the node may choose to execute another task corresponding to P. We use SBP in combination with a distributed approximation of Shortest Remaining Processing Time (SRPT) with starvation prevention. We have implemented Eagle as a Spark plugin, and we have measured job completion times for a subset of the Google trace on a 100-node cluster for a variety of cluster loads. We provide simulation results for larger clusters, different traces, and for comparison with other scheduling disciplines. We show that Eagle outperforms other state-of-the-art scheduling solutions at most percentiles, and is more robust against mis-estimation of task duration.
Citations: 90
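A minimal sketch of the Sticky Batch Probing idea, under the simplifying assumption that a node always drains every remaining task of a probe's job before moving to the next probe (a hypothetical simplification of the paper's policy, which lets the node choose):

```python
from collections import deque

# Toy sketch of Sticky Batch Probing (SBP): when a node finishes a task for
# a probe's job, it keeps serving that job's remaining tasks before moving
# to the next probe in its queue. Simplified illustration.

def run_node(probe_queue, job_tasks):
    """probe_queue: deque of job ids; job_tasks: job id -> remaining tasks.
    Returns the execution order as (job, remaining-after-task) pairs."""
    executed = []
    while probe_queue:
        job = probe_queue.popleft()
        # Stick to this probe: drain every remaining task of the job.
        while job_tasks[job] > 0:
            job_tasks[job] -= 1
            executed.append((job, job_tasks[job]))
        # Drop later duplicate probes for an already-finished job.
        probe_queue = deque(j for j in probe_queue if job_tasks[j] > 0)
    return executed
```

Sticking to job A's probe means both of A's tasks run before B's, even though a second probe for A sits later in the queue.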
STYX: Stream Processing with Trustworthy Cloud-based Execution
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987574
J. Stephen, Savvas Savvides, V. Sundaram, Masoud Saeida Ardekani, P. Eugster
With the advent of the Internet of Things (IoT), billions of devices are expected to continuously collect and process sensitive data (e.g., location, personal health). Due to the limited computational capacity available on IoT devices, the current de facto model for building IoT applications is to send the gathered data to the cloud for computation. While private cloud infrastructures for handling large amounts of data streams are expensive to build, using low-cost public (untrusted) cloud infrastructures for processing continuous queries, including over sensitive data, leads to concerns about data confidentiality. This paper presents STYX, a novel programming abstraction and managed runtime system that ensures confidentiality of IoT applications whilst leveraging the public cloud for continuous query processing. The key idea is to intelligently utilize partially homomorphic encryption to perform as many computationally intensive operations as possible in the untrusted cloud. STYX provides a simple abstraction to the IoT developer to hide the complexities of (1) applying complex cryptographic primitives, (2) reasoning about the performance of such primitives, (3) deciding which computations can be executed in an untrusted tier, and (4) optimizing cloud resource usage. An empirical evaluation with benchmarks and case studies shows the feasibility of our approach.
Citations: 26
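The partially homomorphic encryption STYX builds on can be demonstrated with a toy Paillier cryptosystem, whose ciphertexts support addition of plaintexts without decryption. The tiny primes below are for illustration only and are wildly insecure; this is not the STYX implementation:

```python
from math import gcd
import random

# Toy Paillier cryptosystem: multiplying two ciphertexts modulo n^2
# yields a ciphertext of the *sum* of the plaintexts (additive homomorphism).

p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
L = lambda x: (x - 1) // n                      # Paillier's L function
mu = pow(L(pow(g, lam, n2)), -1, n)             # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:                       # r must be coprime with n
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

def add_encrypted(c1, c2):
    # Dec(c1 * c2 mod n^2) = m1 + m2 mod n
    return c1 * c2 % n2
```

An untrusted server can thus sum encrypted sensor readings and return a ciphertext the client decrypts locally.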
Automating Failure Testing Research at Internet Scale
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987555
P. Alvaro, K. Andrus, Chris Sanden, Casey Rosenthal, Ali Basiri, L. Hochstein
Large-scale distributed systems must be built to anticipate and mitigate a variety of hardware and software failures. In order to build confidence that fault-tolerant systems are correctly implemented, Netflix (and similar enterprises) regularly run failure drills in which faults are deliberately injected in their production system. The combinatorial space of failure scenarios is too large to explore exhaustively. Existing failure testing approaches either explore the space of potential failures randomly or exploit the "hunches" of domain experts to guide the search. Random strategies waste resources testing "uninteresting" faults, while programmer-guided approaches are only as good as human intuition and only scale with human effort. In this paper, we describe how we adapted and implemented a research prototype called lineage-driven fault injection (LDFI) to automate failure testing at Netflix. Along the way, we describe the challenges that arose adapting the LDFI model to the complex and dynamic realities of the Netflix architecture. We show how we implemented the adapted algorithm as a service atop the existing tracing and fault injection infrastructure, and present early results.
Citations: 29
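The lineage-driven idea — use the lineage of a successful outcome to pick which faults to inject next — can be sketched as a minimal hitting-set computation over the alternative paths that support the outcome. This is our illustration, not Netflix's service:

```python
from itertools import combinations

# Toy lineage-driven search: given the support of a successful outcome
# (alternative paths, each a set of services), find a smallest set of
# faults that disables every path. Illustrative, not the LDFI service.

def candidate_faults(paths, max_size=3):
    """Returns the smallest fault hypothesis worth injecting, or None."""
    services = sorted(set().union(*paths))
    for k in range(1, max_size + 1):
        for faults in combinations(services, k):
            # A hypothesis must intersect (break) every supporting path.
            if all(set(faults) & path for path in paths):
                return set(faults)
    return None
```

If every path depends on service A, killing A alone is the first hypothesis to test; fully redundant paths require a larger fault set.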
Ako: Decentralised Deep Learning with Partial Gradient Exchange
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987586
Pijika Watcharapichat, V. Morales, R. Fernandez, P. Pietzuch
Distributed systems for the training of deep neural networks (DNNs) with large amounts of data have vastly improved the accuracy of machine learning models for image and speech recognition. DNN systems scale to large cluster deployments by having worker nodes train many model replicas in parallel; to ensure model convergence, parameter servers periodically synchronise the replicas. This raises the challenge of how to split resources between workers and parameter servers so that the cluster CPU and network resources are fully utilised without introducing bottlenecks. In practice, this requires manual tuning for each model configuration or hardware type. We describe Ako, a decentralised dataflow-based DNN system without parameter servers that is designed to saturate cluster resources. All nodes execute workers that fully use the CPU resources to update model replicas. To synchronise replicas as often as possible subject to the available network bandwidth, workers exchange partitioned gradient updates directly with each other. The number of partitions is chosen so that the used network bandwidth remains constant, independently of cluster size. Since workers eventually receive all gradient partitions after several rounds, convergence is unaffected. For the ImageNet benchmark on a 64-node cluster, Ako does not require any resource allocation decisions, yet converges faster than deployments with parameter servers.
Citations: 83
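The partial gradient exchange can be sketched as a static rotation schedule; the rotation rule below is a hypothetical choice that merely demonstrates the key property, namely that every peer receives all p partitions after p rounds:

```python
# Sketch of partial gradient exchange: each worker sends only one gradient
# partition per peer per round, rotating partitions round-robin so every
# peer accumulates the full gradient after p rounds. Simplified illustration.

def partition_schedule(num_workers, num_partitions, num_rounds):
    """Returns schedule[round][sender][peer] -> partition index sent."""
    schedule = []
    for r in range(num_rounds):
        round_plan = {}
        for w in range(num_workers):
            round_plan[w] = {
                peer: (r + w + peer) % num_partitions
                for peer in range(num_workers) if peer != w
            }
        schedule.append(round_plan)
    return schedule
```

Because the partition index advances by one each round, per-round traffic stays constant while full synchronisation is only delayed, not lost.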
Characterizing Private Clouds: A Large-Scale Empirical Analysis of Enterprise Clusters
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987584
Ignacio Cano, Srinivas Aiyar, A. Krishnamurthy
There is an increasing trend in the use of on-premise clusters within companies. Security, regulatory constraints, and enhanced service quality push organizations to work in these so-called private cloud environments. On the other hand, the deployment of private enterprise clusters requires careful consideration of what will be necessary or may happen in the future, both in terms of compute demands and failures, as they lack the public cloud's flexibility to immediately provision new nodes in case of demand spikes or node failures. In order to better understand the challenges and tradeoffs of operating in private settings, we perform, to the best of our knowledge, the first extensive characterization of on-premise clusters. Specifically, we analyze data ranging from hardware failures to typical compute/storage requirements and workload profiles, from a large number of Nutanix clusters deployed at various companies. We show that private cloud hardware failure rates are lower, and that load/demand needs are more predictable than in other settings. Finally, we demonstrate the value of the measurements by using them to provide an analytical model for computing durability in private clouds, as well as a machine learning-driven approach for characterizing private clouds' growth.
Citations: 28
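For flavor, an analytical durability model might resemble the generic back-of-the-envelope calculation below (our illustration, not the paper's actual model):

```python
from math import comb

# Back-of-the-envelope durability: probability that at least `rf` of `n`
# nodes fail within one rebuild window, which can lose data replicated
# rf-ways across exactly those nodes. Assumes independent failures.

def p_data_loss(n, rf, p_fail_window):
    """P(>= rf simultaneous failures among n nodes)."""
    return sum(
        comb(n, k) * p_fail_window**k * (1 - p_fail_window)**(n - k)
        for k in range(rf, n + 1)
    )
```

Lower measured failure rates in private clouds shrink `p_fail_window` and hence the loss probability, which is why empirical failure data feeds directly into durability estimates.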
The Case for RackOut: Scalable Data Serving Using Rack-Scale Systems
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987577
Stanko Novakovic, Alexandros Daglis, Edouard Bugnion, Babak Falsafi, Boris Grot
To provide low latency and high throughput guarantees, most large key-value stores keep the data in the memory of many servers. Despite the natural parallelism across lookups, the load imbalance, introduced by heavy skew in the popularity distribution of keys, limits performance. To avoid violating tail latency service-level objectives, systems tend to keep server utilization low and organize the data in micro-shards, which provides units of migration and replication for the purpose of load balancing. These techniques reduce the skew, but incur additional monitoring, data replication and consistency maintenance overheads. In this work, we introduce RackOut, a memory pooling technique that leverages the one-sided remote read primitive of emerging rack-scale systems to mitigate load imbalance while respecting service-level objectives. In RackOut, the data is aggregated at rack-scale granularity, with all of the participating servers in the rack jointly servicing all of the rack's micro-shards. We develop a queuing model to evaluate the impact of RackOut at the datacenter scale. In addition, we implement a RackOut proof-of-concept key-value store, evaluate it on two experimental platforms based on RDMA and Scale-Out NUMA, and use these results to validate the model. Our results show that RackOut can increase throughput up to 6x for RDMA and 8.6x for Scale-Out NUMA compared to a scale-out deployment, while respecting tight tail latency service-level objectives.
Citations: 35
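The core routing idea — any server in a rack can serve any of the rack's micro-shards via one-sided reads, so hot keys go to whichever rack member is least loaded — might be sketched as follows (hypothetical helper names, not the RackOut code):

```python
# Sketch of rack-scale request routing: the rack, not the server, owns a
# micro-shard, so a request is sent to the least-loaded server in the
# owning rack. Illustrative only.

def hash_key(key):
    # Deterministic stand-in for a real shard hash function.
    return sum(key.encode())

def route(key, racks, load):
    """racks: list of server lists; load: server -> outstanding requests.
    Picks the owning rack by hash, then its least-loaded member."""
    rack = racks[hash_key(key) % len(racks)]
    server = min(rack, key=lambda s: load[s])
    load[server] += 1          # track outstanding work on that server
    return server
```

Two back-to-back requests for the same hot key land on different servers of the same rack, which is exactly how rack-scale pooling absorbs skew.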
Disciplined Inconsistency with Consistency Types
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987559
B. Holt, James Bornholt, Irene Zhang, Dan R. K. Ports, M. Oskin, L. Ceze
Distributed applications and web services, such as online stores or social networks, are expected to be scalable, available, responsive, and fault-tolerant. To meet these steep requirements in the face of high round-trip latencies, network partitions, server failures, and load spikes, applications use eventually consistent datastores that allow them to weaken the consistency of some data. However, making this transition is highly error-prone because relaxed consistency models are notoriously difficult to understand and test. In this work, we propose a new programming model for distributed data that makes consistency properties explicit and uses a type system to enforce consistency safety. With the Inconsistent, Performance-bound, Approximate (IPA) storage system, programmers specify performance targets and correctness requirements as constraints on persistent data structures and handle uncertainty about the result of datastore reads using new consistency types. We implement a prototype of this model in Scala on top of an existing datastore, Cassandra, and use it to make performance/correctness tradeoffs in two applications: a ticket sales service and a Twitter clone. Our evaluation shows that IPA prevents consistency-based programming errors and adapts consistency automatically in response to changing network conditions, performing comparably to weak consistency and 2-10× faster than strong consistency.
Citations: 32
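A consistency-type API in this spirit might look like the sketch below, in which weakly consistent read results cannot be used until the programmer explicitly endorses them (hypothetical names, not IPA's Scala API):

```python
# Sketch of consistency types: a read returns a value wrapped in a type
# recording the consistency it was served with; strong values are usable
# directly, weak ones must be explicitly endorsed first.

class Consistent:
    def __init__(self, value):
        self.value = value
    def get(self):
        return self.value

class Inconsistent:
    def __init__(self, value):
        self.value = value
    def get(self):
        raise TypeError("weak read: call endorse() to accept the risk")
    def endorse(self):
        # The programmer explicitly accepts possible staleness.
        return Consistent(self.value)

def read(store, key, strong=True):
    v = store[key]
    return Consistent(v) if strong else Inconsistent(v)
```

The type wrapper turns an easy-to-miss consistency bug into an error at the point of use, which is the "disciplined" part of disciplined inconsistency.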
Radiatus: a Shared-Nothing Server-Side Web Architecture
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987571
Raymond Cheng, W. Scott, Paul Ellenbogen, Jon Howell, Franziska Roesner, A. Krishnamurthy, T. Anderson
Web applications are a frequent target of successful attacks. In most web frameworks, the damage is amplified by the fact that application code is responsible for security enforcement. In this paper, we design and evaluate Radiatus, a shared-nothing web framework where application-specific computation and storage on the server is contained within a sandbox with the privileges of the end-user. By strongly isolating users, user data and service availability can be protected from application vulnerabilities. To make Radiatus practical at the scale of modern web applications, we introduce a distributed capabilities system to allow fine-grained secure resource sharing across the many distributed services that compose an application. We analyze the strengths and weaknesses of a shared-nothing web architecture, which protects applications from a large class of vulnerabilities, but adds an overhead of 60.7% per server and requires an additional 31MB of memory per active user. We demonstrate that the system can scale to 20K operations per second on a 500-node AWS cluster.
Citations: 7
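A distributed capability can be approximated by an HMAC-signed token that any service holding a shared secret can verify without contacting a central authority. This is a toy scheme of ours, not Radiatus's actual implementation:

```python
import hashlib
import hmac
import os

# Toy unforgeable capability: a token authorising one user's sandbox to
# access one resource, verifiable by any service sharing the secret.

SECRET = os.urandom(32)

def grant(user, resource):
    mac = hmac.new(SECRET, f"{user}:{resource}".encode(), hashlib.sha256)
    return (user, resource, mac.hexdigest())

def check(token, user, resource):
    # Recompute the MAC and compare in constant time.
    expected = grant(user, resource)[2]
    return token[:2] == (user, resource) and hmac.compare_digest(token[2], expected)
```

A token granted to one user's sandbox is useless to another user, which is what makes fine-grained sharing safe in a shared-nothing design.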
Availability Knob: Flexible User-Defined Availability in the Cloud
Proceedings of the Seventh ACM Symposium on Cloud Computing Pub Date: 2016-10-05 DOI: 10.1145/2987550.2987556
Mohammad Shahrad, D. Wentzlaff
Failure is inevitable in cloud environments. Finding the root cause of a failure can be very complex or at times nearly impossible. Different cloud customers have varying availability demands as well as a diverse willingness to pay for availability. In contrast to existing solutions that try to provide higher and higher availability in the cloud, we propose the Availability Knob (AK). AK provides flexible, user-defined, availability in IaaS clouds, allowing the IaaS cloud customer to express their desire for availability to the cloud provider. Complementary to existing high-reliability solutions and not requiring hardware changes, AK enables more efficient markets. This leads to reduced provider costs, increased provider profit, and improved user satisfaction when compared to an IaaS cloud with no ability to convey availability needs. We leverage game theory to derive incentive compatible pricing, which not only enables AK to function with no knowledge of the root cause of failure but also function under adversarial situations where users deliberately cause downtime. We develop a high-level stochastic simulator to test AK in large-scale IaaS clouds over long time periods. We also prototype AK in OpenStack to explore availability-API tradeoffs and to provide a grounded, real-world, implementation. Our results show that deploying AK leads to more than 10% cost reduction for providers and improves user satisfaction. It also enables providers to set variable profit margins based on the risk of not meeting availability guarantees and the disparity in availability supply/demand. Variable profit margins enable cloud providers to improve their profit by as much as 20%.
Citations: 24
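Availability-dependent pricing with a shortfall penalty can be sketched as below; the formula and constants are purely illustrative, not the paper's incentive-compatible scheme:

```python
# Sketch of availability-dependent pricing: the user picks a target
# availability, the provider charges a premium for higher targets and
# refunds a penalty when delivered availability falls short.

def monthly_bill(base_price, target, delivered, penalty_rate=10.0):
    """target/delivered are availabilities in [0, 1]; returns the charge."""
    price = base_price * (1 + 10 * (target - 0.9))   # premium above 90%
    if delivered < target:
        # Refund proportional to the availability shortfall.
        price -= base_price * penalty_rate * (target - delivered)
    return max(price, 0.0)
```

With the penalty scaled to the shortfall, over-promising availability costs the provider money, which is the lever that makes truthful pricing possible.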