2014 IEEE International Parallel & Distributed Processing Symposium Workshops: Latest Publications

Revisiting Edge and Node Parallelism for Dynamic GPU Graph Analytics
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.157
Adam McLaughlin, David A. Bader
Abstract: Betweenness centrality is a widely used graph analytic with applications such as finding influential people in social networks, analyzing power grids, and studying protein interactions. However, its complexity makes exact computation infeasible for large graphs of interest. Furthermore, networks tend to change over time, invalidating previously calculated results and motivating new analyses of how centrality metrics vary with time. While GPUs have dominated regular, structured application domains, their high memory throughput and massive parallelism make them a suitable target architecture for irregular, unstructured applications as well. In this paper we compare and contrast two GPU implementations of an algorithm for dynamic betweenness centrality. We show that typical network updates affect the centrality scores of a surprisingly small subset of the vertices in the graph. By efficiently mapping threads to units of work we achieve up to a 110x speedup over a CPU implementation of the algorithm and can update the analytic 45x faster on average than a static recomputation on the GPU.
Citations: 22
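The dynamic GPU implementations in this paper build on Brandes' static algorithm, which computes betweenness centrality with one BFS plus a reverse dependency-accumulation pass per source vertex. A minimal sequential sketch (the adjacency-dict representation and function name are illustrative, not taken from the paper):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: one BFS + dependency accumulation per source."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Forward BFS from s, counting shortest paths (sigma) and predecessors.
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On an undirected path a-b-c this assigns the middle vertex a score of 2.0 (each ordered source/target pair counted once); these per-vertex scores are what the paper's dynamic algorithm incrementally repairs for the small affected subset after an edge update, rather than recomputing from scratch.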
Scalable Fast Multipole Accelerated Vortex Methods
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.110
Qi Hu, N. Gumerov, Rio Yokota, L. Barba, R. Duraiswami
Abstract: The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods for simulating incompressible flows. To evaluate the most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, which automatically ensures divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures that distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures that dynamically manage inter-node communication and load balancing efficiently, with only a small parallel construction overhead, and it scales to large clusters with both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity-plus-stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
Citations: 7
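For vortex particles at positions y_j with strengths α_j, the Biot-Savart velocity that the FMM accelerates is u(x) = -1/(4π) Σ_j (x - y_j) × α_j / |x - y_j|³. A direct O(NM) reference evaluation, the kind one would check a far-field approximation against (names are illustrative; the paper's kernels run on GPUs, not in Python):

```python
import math

def cross(a, b):
    """3D cross product of two tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def biot_savart(targets, sources):
    """Direct evaluation of u(x) = -1/(4*pi) * sum_j (x - y_j) x alpha_j / |x - y_j|^3.
    sources is a list of (position, alpha) pairs; singular self-terms are skipped."""
    out = []
    for x in targets:
        u = [0.0, 0.0, 0.0]
        for y, alpha in sources:
            r = (x[0] - y[0], x[1] - y[1], x[2] - y[2])
            d = math.dist(x, y)
            if d == 0.0:
                continue  # skip self-interaction
            c = cross(r, alpha)
            k = -1.0 / (4.0 * math.pi * d**3)
            u = [u[i] + k * c[i] for i in range(3)]
        out.append(tuple(u))
    return out
```

A single unit-strength vortex along z at the origin induces velocity of magnitude 1/(4π) in the +y direction at the point (1, 0, 0), which is a quick sanity check on sign conventions.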
Hybrid Metaheuristic for Annual Hydropower Generation Optimization
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.53
A. Nakib, E. Talbi, A. Fuser
Abstract: In this paper, a hybrid metaheuristic-based solution is proposed for the annual optimal hydro generation scheduling problem. The problem is formulated as a continuous non-linear optimization problem and solved using an enhanced combination of metaheuristics: random greedy search, an evolutionary algorithm, and pseudo dynamic programming. Results obtained by applying the proposed method over a one-year hydropower generation horizon (whereas most authors in the literature limit themselves to one week) demonstrate the efficiency of the proposed algorithm.
Citations: 4
Compactor: Optimization Framework at Staging I/O Nodes
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.188
V. Venkatesan, M. Chaarawi, Q. Koziol, E. Gabriel
Abstract: Data-intensive applications are strongly influenced by I/O performance on HPC systems, and the scalability of such applications to exascale depends primarily on the scalability of I/O performance on future HPC systems. To improve I/O performance, recent HPC systems use staging nodes to delegate I/O requests and perform in-situ data analysis. In this paper, we present the Compactor framework and three optimizations that improve I/O performance at the data staging nodes. The first optimization performs collective buffering across requests from multiple processes. In the second, we present a way to steal writes to service read requests at the staging node. Finally, we provide a way to "morph" write requests from the same process. All optimizations were implemented as part of the Exascale FastForward I/O stack. We evaluated them over a PVFS2 file system using a micro-benchmark and the Flash I/O benchmark. Our results indicate significant performance benefits from the framework; in the best case the compactor provides up to a 70% improvement in performance.
Citations: 2
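The "morphing" of write requests from a single process can be pictured as coalescing byte ranges that are contiguous in file space into one larger request before it is issued to the file system, reducing the number of operations the backend sees. A hypothetical sketch (the framework's actual request representation is not described in the abstract):

```python
def morph_writes(requests):
    """Merge (offset, data) write requests that are contiguous in file space.

    Requests are sorted by offset; a request starting exactly where the
    previous merged request ends is appended to it, otherwise it starts a
    new merged request.
    """
    merged = []
    for off, data in sorted(requests):
        if merged and merged[-1][0] + len(merged[-1][1]) == off:
            prev_off, prev_data = merged.pop()
            merged.append((prev_off, prev_data + data))
        else:
            merged.append((off, data))
    return merged
```

Two adjacent 2-byte writes at offsets 0 and 2 collapse into one 4-byte write, while a write at offset 10 stays separate.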
Process Simulation of Complex Biochemical Pathways in Explicit 3D Space Enabled by Heterogeneous Computing Platform
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.199
Jie Li, A. Salighehdar, N. Ganesan
Abstract: Biological pathways typically consist of dozens of reacting chemical species and hundreds of equations describing reactions within the biological system. Modeling and simulation of such pathways in explicit process space is computationally intensive due to the size and complexity of the system and the nature of the interactions. These pathways exhibit considerable behavioral complexity across multiple fundamental cellular processes, so there is a strong need for new underlying simulation algorithms as well as for new computing platforms, systems, and techniques. In this work we present a novel heterogeneous computing platform to accelerate the simulation of complex biochemical pathways in 3D reaction process space. The tasks involved in the simulation have been carefully partitioned to run on a combination of reconfigurable hardware and a massively parallel processor such as the GPU. This paper also presents an implementation that accelerates one of the most compute-intensive tasks: sifting through the reaction space to determine reacting particles. Finally, we present the new heterogeneous computing framework integrating an FPGA and a GPU to accelerate the computation, obtaining better performance than either platform alone: a 5x total speedup compared to a GPU-only platform. Moreover, the extensible architecture is general enough to be used to study a variety of biological pathways, in order to gain deeper insights into biomolecular systems.
Citations: 1
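"Sifting through the reaction space to determine reacting particles" amounts to a fixed-radius neighbor search. A standard CPU-side formulation bins particles into cells the size of the reaction radius so that only neighboring cells need pairwise distance tests; a sketch under that assumption (function and names are illustrative — the paper offloads this step to the FPGA/GPU):

```python
from collections import defaultdict
import math

def reacting_pairs(particles, radius):
    """Cell-list neighbor search: bin 3D positions into cubic cells of side
    `radius`, then test distances only between particles in neighboring cells."""
    cells = defaultdict(list)
    for i, p in enumerate(particles):
        cells[tuple(math.floor(c / radius) for c in p)].append(i)
    pairs = []
    for cell, idxs in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
                    # Visit each cell pair once: only neighbors >= current cell.
                    if nb < cell or nb not in cells:
                        continue
                    for i in idxs:
                        for j in cells[nb]:
                            if (nb != cell or i < j) and \
                               math.dist(particles[i], particles[j]) <= radius:
                                pairs.append((min(i, j), max(i, j)))
    return sorted(pairs)
```

This turns an O(N²) all-pairs scan into work proportional to the number of occupied neighboring cells, which is also the structure that maps naturally onto parallel hardware.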
Adaptive N to P Portfolio for Solving Constraint Programming Problems on Top of the Parallel Bobpp Framework
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.171
Tarek Menouer, B. L. Cun
Abstract: This paper presents a parallelization of a Constraint Programming (CP) solver, based on the portfolio principle, to quickly solve constraint satisfaction and optimization problems. The portfolio principle is widely used in the parallelization of Boolean SATisfiability (SAT) and CP solvers: N search strategies for the same problem are run on N computing cores, each core using its own strategy to perform a search different from the others, and the first strategy that satisfies the user's request stops all the others. In the usual portfolio approach, the number of search strategies is limited compared to the number of computing cores available in current parallel machines. The idea of this article is to run N search strategies for the same CP problem and schedule them on P computing cores (P > N). The novelty is that the scheduling of these N strategies across the cores is performed dynamically, with the goal of favouring the strategy that finds a solution quickly. The performance obtained with this adaptive portfolio solver is illustrated by solving CP problems modeled in the FlatZinc format with the OR-Tools solver on top of the parallel Bobpp framework.
Citations: 8
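The basic portfolio control flow, "first strategy to finish wins and cancels the rest", can be sketched with a thread pool; here the pool size plays the role of the available cores and strategies are plain callables (the names and the use of Python threads are illustrative assumptions, not Bobpp's actual mechanism):

```python
import concurrent.futures as cf

def portfolio_solve(problem, strategies, n_cores):
    """Submit competing search strategies to a pool of workers; return the
    first finisher's result and cancel strategies that have not started."""
    with cf.ThreadPoolExecutor(max_workers=n_cores) as pool:
        futures = [pool.submit(s, problem) for s in strategies]
        done, pending = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        for f in pending:
            f.cancel()  # already-running strategies cannot be interrupted here
        return next(iter(done)).result()
```

The paper's contribution goes beyond this static picture: rather than a one-shot assignment of strategies to workers, it reschedules the strategies dynamically so that the most promising one gets more core time.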
ParLearning Keynote
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.229
E. Xing
Bio: Dr. Eric Xing is an associate professor in the School of Computer Science at Carnegie Mellon University. His principal research interests lie in the development of machine learning and statistical methodology, especially for solving problems involving automated learning, reasoning, and decision-making in high-dimensional and dynamic possible worlds, and for building quantitative models and predictive understandings of biological systems. Professor Xing received a Ph.D. in Molecular Biology from Rutgers University and another Ph.D. in Computer Science from UC Berkeley. His current work involves: 1) foundations of statistical learning, including theory and algorithms for estimating time/space varying-coefficient models, sparse structured input/output models, and nonparametric Bayesian models; 2) computational and statistical analysis of gene regulation, genetic variation, and disease associations; and 3) applications of statistical learning in social networks, data mining, and vision. Professor Xing has published over 150 peer-reviewed papers and is an associate editor of the Journal of the American Statistical Association, the Annals of Applied Statistics, the IEEE Transactions on Pattern Analysis and Machine Intelligence, and the PLoS Journal of Computational Biology, and an action editor of the Machine Learning journal. He is a recipient of the NSF CAREER Award, the Alfred P. Sloan Research Fellowship in Computer Science, the United States Air Force Young Investigator Award, and the IBM Open Collaborative Research Faculty Award.
Citations: 0
LSPP Introduction and Committees
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.226
D. Kerbyson, R. Rajamony, C. Weems
Workshop Theme: The workshop on Large-Scale Parallel Processing is a forum that focuses on computer systems that utilize thousands of processors and beyond. Large-scale systems, referred to by some as extreme-scale or ultra-scale, have many important research aspects that need detailed examination for their effective design, deployment, and utilization to take place. These include handling the substantial increase in multi-core on a chip and the ensuing interconnection hierarchy, communication, and synchronization mechanisms. Increasingly this is becoming an issue of co-design involving performance, power, and reliability. The workshop aims to bring together researchers from different communities working on challenging problems in this area for a dynamic exchange of ideas. Work at early stages of development as well as work that has been demonstrated in practice is equally welcome. Of particular interest are papers that identify and analyze novel ideas rather than provide incremental advances in the following areas:
• Large-scale systems: exploiting parallelism at large scale, coordination of large numbers of processing elements, synchronization and communication at large scale, programming models and productivity.
• Novel architectures and experimental systems: the design of novel systems, the use of emerging technologies such as non-volatile memory, silicon photonics, and application-specific accelerators, and future trends.
• Monitoring, analysis, and modeling: tools and techniques for gathering performance, power, thermal, reliability, and other data from existing large-scale systems, analyzing such data offline or in real time for system tuning, and modeling of similar factors in projected system installations.
• Multi-core: utilization of increased parallelism on a single chip, the possible integration of these into large-scale systems, and dealing with the resulting hierarchical connectivity.
• Energy management: techniques, strategies, and experiences relating to the energy management and optimization of large-scale systems.
• Applications: novel algorithmic and application methods, experiences in the design and use of applications that scale to large sizes, overcoming of limitations, performance analysis and insights gained.
• Warehouse computing: issues in advanced datacenters that are increasingly moving from co-locating many servers to having large numbers of servers working cohesively, and the impact of both software and hardware designs and optimizations on achieving the best cost-performance efficiency.
Citations: 0
ABC2: Adaptively Balancing Computation and Communication in a DSM Cluster of Multicores for Irregular Applications
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.51
S. C. Koduru, Keval Vora, Rajiv Gupta
Abstract: Graph-based applications have become increasingly important in many application domains. Large graph sizes offer data-level parallelism at a scale that makes it attractive to run such applications on distributed shared memory (DSM) based modern clusters composed of multicore machines. Our analysis of several graph applications that rely on speculative or asynchronous parallelism shows that the balance between computation and communication differs between applications. In this paper, we study this balance in the context of DSMs and exploit the multiple cores of modern machines by creating three kinds of threads that let us dynamically balance computation and communication: compute threads that exploit data-level parallelism in the computation, fetch threads that replicate data into object stores before it is accessed by compute threads, and update threads that make results computed by compute threads visible to all compute threads by writing them to the DSM. We observe that the best configuration of these mechanisms varies across inputs as well as across applications. To this end, we design ABC2, a runtime algorithm that automatically configures the DSM using simple runtime information such as observed object prefetch and update queue lengths. This runtime algorithm achieves speedups close to those of the best hand-optimized configurations.
Citations: 5
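The three thread roles can be sketched with queues standing in for the DSM machinery. This is a toy model under stated assumptions: a plain dict stands in for the DSM object store, the fetch stage is collapsed to a single producer, and the adaptive part of ABC2 (resizing the thread mix from queue lengths) is omitted:

```python
import threading
import queue

def run(tasks, compute, n_compute=4):
    """Pipeline with the paper's three thread roles: a fetcher stages work,
    compute threads process it, and an update thread publishes results."""
    fetched, results = queue.Queue(), queue.Queue()
    store = {}                        # stands in for the DSM object store

    def fetcher():                    # replicate data before compute needs it
        for t in tasks:
            fetched.put(t)
        for _ in range(n_compute):
            fetched.put(None)         # one poison pill per compute thread

    def worker():
        while (t := fetched.get()) is not None:
            results.put(compute(t))
        results.put(None)             # signal this worker is done

    def updater():                    # make results visible to all
        finished = 0
        while finished < n_compute:
            r = results.get()
            if r is None:
                finished += 1
            else:
                store[r[0]] = r[1]

    threads = ([threading.Thread(target=fetcher)]
               + [threading.Thread(target=worker) for _ in range(n_compute)]
               + [threading.Thread(target=updater)])
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return store
```

ABC2's insight is that the right split between these roles is input- and application-dependent, which is why it tunes the configuration at runtime from the observed queue lengths rather than fixing `n_compute` up front.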
Predicting an Optimal Sparse Matrix Format for SpMV Computation on GPU
Pub Date: 2014-05-19 | DOI: 10.1109/IPDPSW.2014.160
B. Neelima, G. R. M. Reddy, Prakash S. Raghavendra
Abstract: Many-threaded Graphics Processing Units (GPUs) achieve high performance for general-purpose computations. The processor hides memory access time through a latency-hiding mechanism: while one warp (a group of 32 threads) is computing, other warps perform memory-bound accesses. But for memory-access-bound irregular applications such as Sparse Matrix-Vector Multiplication (SpMV), memory access times are high, and improving the performance of such applications on the GPU is therefore a challenging research issue. Further, optimizing SpMV time on the GPU is important for iterative applications like Jacobi and conjugate gradient solvers. However, the overheads incurred while computing SpMV on the GPU must be considered: transforming the input matrix to a desired format and communicating the data from CPU to GPU are non-trivial, and if the chosen format is not suitable for the given input sparse matrix, the desired performance improvements cannot be achieved. Motivated by this observation, this paper proposes a method to choose an optimal sparse matrix format, focusing on applications where CPU-to-GPU communication time and pre-processing time are non-trivial. The experimental results show that the format predicted by the model matches the actual best-performing format when total SpMV time (pre-processing time, CPU-to-GPU communication time, and SpMV computation time on the GPU) is taken into account, and the model predicts an optimal format for any given input sparse matrix with a very small prediction overhead within an application. Compared to choosing a format for high performance on the GPU alone, our approach is more comprehensive and valuable. This paper also proposes a sparse matrix format that optimizes communication and pre-processing overheads, to be used when these overheads are non-trivial.
Citations: 18
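The trade-off the model navigates can be illustrated with a toy heuristic: the ELL format pads every row to the longest row's length (good for coalesced GPU access, wasteful for skewed rows), while CSR stores only the nonzeros. Comparing the padding ratio against a threshold gives a crude format chooser; the 0.6 threshold and function name below are illustrative assumptions, not the paper's predictor, which also accounts for transformation and CPU-to-GPU transfer overheads:

```python
def pick_format(row_lengths):
    """Toy format chooser: prefer ELL when little storage is wasted on
    padding, CSR when row lengths are skewed."""
    n = len(row_lengths)
    nnz = sum(row_lengths)
    padded = n * max(row_lengths)   # ELL stores every row at max length
    return "ELL" if nnz / padded > 0.6 else "CSR"
```

A matrix with uniform rows of 4 nonzeros has no padding waste and maps to ELL; a matrix where one row holds 20 nonzeros and the rest hold 1 would waste most of the ELL storage, so CSR is chosen.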