Proceedings of the IEEE/ACM SC98 Conference: Latest Publications

TFLOPS PFS: Architecture and Design of a Highly Efficient Parallel File System
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10003
Sharad Garg
Abstract: In recent years, many commercial Massively Parallel Processor (MPP) systems have become available to the computing community. These systems provide very high processing power (up to hundreds of GFLOPS) and scale efficiently with the number of processors. However, many scientific and commercial applications that run on these multiprocessors see little benefit in terms of speedup because they are bottlenecked by their I/O requirements. Although these multiprocessors may be configured with sufficient I/O hardware, the file system software often fails to deliver the available I/O bandwidth to the application, causing severe performance degradation for I/O-intensive applications. A highly efficient parallel file system has been implemented on Intel's Teraflops (TFLOPS) machine, providing a sustained I/O bandwidth of 1 GB/s. This file system delivers almost 95% of the available raw hardware I/O bandwidth, and its I/O bandwidth scales proportionally with the number of I/O nodes. Intel's TFLOPS machine is the first Accelerated Strategic Computing Initiative (ASCI) machine that the DOE has acquired. This computer is 10 times more powerful than the fastest machine today and will be used primarily to simulate nuclear testing and to ensure the safety and effectiveness of the nation's nuclear weapons stockpile. The machine contains over 9,000 Intel Pentium Pro processors and provides a peak CPU performance of 1.8 teraflops.
This paper presents the I/O design and architecture of Intel's TFLOPS supercomputer and describes the Cougar OS I/O and its interface with Intel's Parallel File System.
Citations: 9
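The abstract's claim that bandwidth scales with the number of I/O nodes reflects the standard mechanism of parallel file systems: striping a file's data round-robin across I/O nodes so that reads and writes proceed in parallel. The following is a minimal hedged sketch of that general idea in Python, purely illustrative and not the TFLOPS PFS implementation; the stripe size is an assumed value.

```python
# Minimal sketch of round-robin file striping across I/O nodes: the general
# mechanism by which a parallel file system aggregates the bandwidth of many
# I/O nodes. Illustrative only; not the TFLOPS PFS implementation.

STRIPE_SIZE = 64 * 1024  # bytes per stripe unit (assumed value)

def stripe(data: bytes, n_io_nodes: int):
    """Split `data` into stripe units and assign them round-robin to I/O nodes."""
    nodes = [bytearray() for _ in range(n_io_nodes)]
    for i in range(0, len(data), STRIPE_SIZE):
        unit = data[i:i + STRIPE_SIZE]
        nodes[(i // STRIPE_SIZE) % n_io_nodes].extend(unit)
    return nodes

def unstripe(nodes, total_len: int) -> bytes:
    """Reassemble the original byte stream from the per-node stripes."""
    out = bytearray()
    offsets = [0] * len(nodes)
    i = 0
    while len(out) < total_len:
        node = i % len(nodes)
        out.extend(nodes[node][offsets[node]:offsets[node] + STRIPE_SIZE])
        offsets[node] += STRIPE_SIZE
        i += 1
    return bytes(out)
```

Because the per-node transfers can proceed concurrently, aggregate bandwidth grows roughly with the node count, which is the scaling behavior the abstract reports.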
Agent Middleware for Heterogeneous Scientific Simulations
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10014
S. Ho, S. Itoh, S. Ihara, R. Schlichting
Abstract: The current technology of parallel and distributed systems allows users to exploit a variety of resources across networks. However, the support provided is often insufficient for computational scientists to simulate complicated real-world scenarios in which different kinds of scientific applications must be combined to perform high-fidelity simulations. As a result, users waste a large amount of time and effort developing custom techniques for semantic-level communication between heterogeneous scientific simulations. This paper describes a new middleware system that provides high-level transparency in the form of agents that automatically transfer and transform data between simulations that use different mathematical and physical modeling approaches. Based on a specification that correlates different discrete points in finite difference method (FDM), finite element method (FEM), or particle simulations, the agents provide a variety of techniques for semantically transforming the values associated with correlated points and automatically determine to which processes the values must be transferred. To facilitate use and minimize impact on user programs, the agent system includes three types of library calls that manage task identification, register different kinds of discrete points and construct a correlation table according to the specification, and transfer messages that incorporate extraction and transformation of the values on the correlated points. Another library, specially optimized for parallel simulations that use an SPMD (Single Program Multiple Data stream) structure, is also offered to control communication through the agents.
A prototype system has been developed on the Hitachi SR2201 parallel machine as well as on workstation clusters, and applied to several example applications. These include an advanced device simulation that combines quantum transport simulation with electric potential simulation, and a simulation of thermal flow resulting from high-frequency device operation that hybridizes molecular dynamics simulation with macroscopic continuum simulation. These combinations can be realized efficiently using the small number of library calls within the agent system together with additional routines that change the data formats of discrete points. The time overhead of the agent calculations is shown experimentally to agree closely with theoretically predicted values modeled as a function of the number of discrete points and the domain decomposition in parallel simulations. This overhead becomes insignificant compared with that of the simulation processes for heterogeneous large-scale simulations.
Citations: 3
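The correlation-table idea described in the abstract can be sketched generically: given the discrete points of two coupled simulations, build a table of nearest-neighbor correspondences, then transfer values along it while applying a semantic transformation. The following is a hypothetical illustration of that idea, not the paper's agent library API.

```python
# Sketch of a correlation table between the discrete points of two coupled
# simulations (e.g. an FDM grid and a set of FEM nodes). Hypothetical
# illustration of the concept; not the paper's agent middleware.

def build_correlation_table(src_points, dst_points):
    """Map each destination point index to the index of its nearest source point."""
    table = {}
    for j, q in enumerate(dst_points):
        table[j] = min(range(len(src_points)),
                       key=lambda i: sum((a - b) ** 2 for a, b in zip(src_points[i], q)))
    return table

def transfer(src_values, table, transform=lambda v: v):
    """Move values along the table, applying a semantic transformation
    (e.g. a unit or model conversion between the two simulations) on the way."""
    return {j: transform(src_values[i]) for j, i in table.items()}
```

In the paper's terms, building the table corresponds to registering discrete points against the specification, and `transfer` corresponds to the message-transfer calls that extract and transform values on correlated points.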
Efficient Selection Algorithms on Distributed Memory Computers
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10054
E. Saukas, S.W. Song
Abstract: Consider the selection problem of determining the k-th smallest element of a sequence of n elements. Under the CGM (Coarse Grained Multicomputer) model with p processors and O(n/p) local memory, we present a deterministic parallel algorithm for the selection problem that requires O(log p) communication rounds. Besides requiring a low number of communication rounds, the algorithm also attempts to minimize the total amount of data transmitted in each round (only O(p), except in the last round). The basic algorithm is then extended to solve the problem of q simultaneous selections over the same input sequence, also in O(log p) communication rounds and asymptotically the same local computing time (if q = O(p)). The simultaneous selection algorithm gives rise to a communication-efficient sorting algorithm, with O(log p) communication rounds and a total of O(p^2) data transmitted in each round except the last. In addition to the theoretical complexities, we present very promising experimental results obtained on two parallel machines that show almost linear speedup, indicating the efficiency and scalability of the proposed algorithms.
To our knowledge, this is the best deterministic CGM algorithm in the literature for the selection problem.
Citations: 16
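The structure behind such a CGM selection algorithm can be illustrated by simulating the p processors as lists: in each round, every processor contributes only its local median and element count (O(p) data in total), the weighted median of those medians serves as a pivot, and a constant fraction of the candidates is discarded, which yields the logarithmic round count. The following is a hedged single-process toy simulation of that scheme, not the authors' implementation.

```python
# Toy single-process simulation of selection via a weighted median of local
# medians, in the spirit of a CGM algorithm: per round, only one
# (median, count) pair per "processor" is exchanged, and the candidate set
# shrinks geometrically. Illustrative sketch; not the authors' code.

def cgm_select(parts, k):
    """Return the k-th smallest (0-indexed) element of the union of `parts`,
    where each sublist plays the role of one processor's local memory."""
    while True:
        active = [p for p in parts if p]
        total = sum(len(p) for p in active)
        if total <= 16:  # small remainder: gather and finish sequentially
            return sorted(x for p in active for x in p)[k]
        # Each "processor" reports its local median and its element count.
        meds = sorted((sorted(p)[len(p) // 2], len(p)) for p in active)
        # Pivot = weighted median of the local medians.
        acc, pivot = 0, meds[-1][0]
        for m, w in meds:
            acc += w
            if 2 * acc >= total:
                pivot = m
                break
        less = sum(1 for p in active for x in p if x < pivot)
        equal = sum(1 for p in active for x in p if x == pivot)
        if k < less:
            parts = [[x for x in p if x < pivot] for p in active]
        elif k < less + equal:
            return pivot
        else:
            k -= less + equal
            parts = [[x for x in p if x > pivot] for p in active]
```

Each iteration of the loop corresponds to one communication round, and only the (median, count) pairs, the pivot, and the two counts would cross the network, matching the abstract's O(p)-per-round bound.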
Supporting Runtime Tool Interaction for Parallel Simulations
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10009
C. Harrop, Steven T. Hackstadt, J. Cuny, A. Malony, Laura S. Magde
Abstract: Scientists from many disciplines now routinely use modeling and simulation techniques to study physical and biological phenomena. Advances in high-performance architectures and networking have made it possible to build complex simulations with parallel and distributed interacting components. Unfortunately, the software needed to support such complex simulations has lagged behind hardware developments. We focus here on one aspect of such support: runtime program interaction. We have developed a runtime interaction framework and implemented a specific instance of it for an application in seismic tomography. That instance, called TierraLab, extends the geoscientists' existing (legacy) tomography code with runtime interaction capabilities accessed through a MATLAB interface. Scientists can stop a program, retrieve data, analyze and visualize that data with existing MATLAB routines, modify the data, and resume execution. They can do all of this within a familiar MATLAB-like environment without having to be concerned with any of the low-level details of parallel or distributed data distribution. Data distribution is handled transparently by the Distributed Array Query and Visualization (DAQV) system.
Our framework allows scientists to construct and maintain their own customized runtime interaction systems.
Citations: 5
An Infrastructure for Efficient Parallel Job Execution in Terascale Computing Environments
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10026
J. Moreira, W. Chan, L. Fong, H. Franke, M. Jette
Abstract: Recent terascale computing environments, such as those in the Department of Energy Accelerated Strategic Computing Initiative, present a new challenge to job scheduling and execution systems. The traditional way to concurrently execute multiple jobs in such large machines is through space-sharing: each job is given dedicated use of a pool of processors. Previous work in this area has demonstrated the benefits of sharing the parallel machine's resources not only spatially but also temporally. Time-sharing creates virtual processors for the execution of jobs. The scheduling is typically performed cyclically, and each time-slice of the cycle can be considered an independent virtual machine. When all tasks of a parallel job are scheduled to run on the same time-slice (the same virtual machine), gang-scheduling is accomplished. Research has shown that gang-scheduling can greatly improve system utilization and job response time in large parallel systems. We are developing GangLL, a research prototype system for performing gang-scheduling on the ASCI Blue-Pacific machine, an IBM RS/6000 SP to be installed at Lawrence Livermore National Laboratory. This machine consists of several hundred nodes interconnected by a high-speed communication switch. GangLL is organized as a centralized scheduler that performs global decision-making, and a local daemon in each node that controls job execution according to those decisions. The centralized scheduler builds an Ousterhout matrix that precisely defines the temporal and spatial allocation of tasks in the system. Once the matrix is built, it is distributed to each of the local daemons using a scalable hierarchical distribution scheme.
A two-phase commit is used in the distribution scheme to guarantee that all local daemons have consistent information. The local daemons enforce the schedule dictated by the Ousterhout matrix in their corresponding nodes. This requires suspending and resuming execution of tasks and multiplexing access to the communication switch. Large supercomputing centers tend to have their own job scheduling systems to handle site-specific conditions. Therefore, we are designing GangLL so that it can interact with an external site scheduler. The goal is to let the site scheduler control the spatial allocation of jobs, if so desired, and decide when jobs run. GangLL then performs the detailed temporal allocation and controls the actual execution of jobs. The site scheduler can control the fraction of a shared processor that a job receives through an execution-factor parameter. To quantify the benefits of our gang-scheduling system for job execution in a large parallel system, we simulate the system with a realistic workload. We measure performance parameters under various degrees of time-sharing, characterized by the multiprogramming level. Our results show that higher multiprogramming levels lead to higher system utilization and lower job response times. We also report some results from the initial d…
Citations: 33
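The Ousterhout matrix at the heart of the scheme above can be sketched as a simple rows-by-columns allocation: rows are time slices (virtual machines), columns are processors, and gang-scheduling means all tasks of a job occupy one row so they run concurrently. Below is a hypothetical first-fit construction of such a matrix, an illustration of the data structure only, not GangLL's actual allocation algorithm.

```python
# Minimal sketch of building an Ousterhout matrix: rows = time slices,
# columns = processors. Gang-scheduling requires that all tasks of a job
# land in the same row. First-fit placement; a hypothetical illustration,
# not GangLL's scheduler.

def build_ousterhout_matrix(jobs, n_procs, n_slices):
    """jobs: dict of job_id -> number of tasks. Returns the matrix, with
    None marking an idle (time-slice, processor) slot."""
    matrix = [[None] * n_procs for _ in range(n_slices)]
    for job_id, n_tasks in jobs.items():
        for row in matrix:
            free = [c for c, slot in enumerate(row) if slot is None]
            if len(free) >= n_tasks:
                for c in free[:n_tasks]:
                    row[c] = job_id  # all tasks of the job share this time-slice
                break
        else:
            raise RuntimeError(f"no time slice can hold job {job_id}")
    return matrix
```

In this picture, the multiprogramming level is simply the number of rows, and utilization is the fraction of non-None slots; the abstract's finding is that adding rows raises utilization at the cost of time-slicing each processor.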
Fine-Grain Cycle Stealing for Networks of Workstations
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10011
K. D. Ryu, J. Hollingsworth
Abstract: Studies have shown that workstations are idle a significant fraction of the time. In this paper we present a new scheduling policy called Linger-Longer that exploits the fine-grained availability of workstations to run sequential and parallel jobs. We present a two-level workload characterization study and use it to simulate a cluster of workstations running our new policy. We compare two variations of our policy to two previous policies: Immediate-Eviction and Pause-and-Migrate. Our study shows that the Linger-Longer policy can improve the throughput of foreign jobs on a cluster by 60% with only a 0.5% slowdown of foreground jobs. For parallel computing, we show that the Linger-Longer policy outperforms reconfiguration strategies when processor utilization by the local process is 20% or less, in both synthetic bulk-synchronous and real data-parallel applications.
Citations: 27
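The trade-off behind Linger-Longer can be illustrated with a toy slot model: the workstation's owner is busy in some time slots and idle in others; an eviction policy pays a migration penalty each time the owner returns, while a lingering policy keeps the guest job resident at low priority and simply runs it during the idle slots. The model below, including the migration cost, is purely an assumed illustration, not the paper's workload study.

```python
# Toy model of guest-job progress under two policies over a busy/idle owner
# timeline. Assumed, illustrative costs: a lingering guest does 1 unit of
# work per idle slot; an evicted guest additionally loses MIGRATE_COST slots
# of work each time the owner returns. Not the paper's simulation.

MIGRATE_COST = 3  # slots of guest work lost per eviction/migration (assumed)

def guest_work(timeline, policy):
    """timeline: sequence of 'busy'/'idle' owner states. Returns guest work done."""
    work, prev = 0, "idle"
    for state in timeline:
        if state == "idle":
            work += 1  # guest steals this otherwise-wasted cycle
        elif policy == "evict" and prev == "idle":
            work -= MIGRATE_COST  # owner returned: pay to move the guest away
        prev = state
    return max(work, 0)
```

The shorter and more frequent the owner's busy bursts, the more the eviction penalty dominates, which is the fine-grained availability the paper's policy is designed to exploit.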
A Grid-Enabled MPI: Message Passing in Heterogeneous Distributed Computing Systems
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10051
Ian T. Foster, N. Karonis
Abstract: Application development for high-performance distributed computing systems, or computational grids as they are sometimes called, requires "grid-enabled" tools that hide mundane aspects of the heterogeneous grid environment without compromising performance. As part of an investigation of these issues, we have developed MPICH-G, a grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers at different sites using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus grid toolkit. In this paper, we describe the MPICH-G implementation and present preliminary performance results.
Citations: 294
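A central idea in a grid-enabled MPI is multimethod communication: behind one uniform send call, the library picks the fastest transport available for each destination, for instance a vendor's native MPI within a single parallel computer and TCP between sites. The sketch below illustrates that dispatch with made-up site and transport names; it is not MPICH-G's internals.

```python
# Sketch of per-destination transport selection in a grid-enabled
# message-passing layer: one send() API, transport chosen automatically.
# The rank-to-site map and transport names are hypothetical.

SITE_OF = {0: "siteA", 1: "siteA", 2: "siteB", 3: "siteB"}  # rank -> site (assumed)

def choose_transport(src_rank, dst_rank):
    """Prefer the vendor's native MPI inside a site; fall back to TCP across sites."""
    if SITE_OF[src_rank] == SITE_OF[dst_rank]:
        return "vendor-mpi"
    return "tcp"

def send(src_rank, dst_rank, payload, log):
    """Uniform send call; the user never names a transport explicitly."""
    log.append((src_rank, dst_rank, choose_transport(src_rank, dst_rank)))
    return payload  # stand-in for the actual wire transfer
```

This is what lets the same MPI program run unmodified across multiple sites: the heterogeneity is absorbed below the API.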
Pthreads for Dynamic and Irregular Parallelism
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10005
G. Narlikar, G. Blelloch
Abstract: High-performance applications on shared memory machines have typically been written in a coarse-grained style, with one heavyweight thread per processor. In comparison, programming with a large number of lightweight, parallel threads has several advantages, including simpler coding for programs with irregular and dynamic parallelism and better adaptability to a changing number of processors. The programmer can express a new thread to execute each individual parallel task; the implementation dynamically creates and schedules these threads onto the processors and effectively balances the load. However, unless the threads scheduler is designed carefully, the parallel program may suffer poor space and time performance. In this paper, we study the performance of a native, lightweight POSIX threads (Pthreads) library on a shared memory machine running Solaris; to our knowledge, the Solaris library is one of the most efficient user-level implementations of the Pthreads standard available today. To evaluate this Pthreads implementation, we use a set of parallel programs that dynamically create a large number of threads. The programs include dense and sparse matrix multiplies, two N-body codes, a data classifier, a volume rendering benchmark, and a high-performance FFT package. We find the existing threads scheduler to be unsuitable for executing such programs. We show how simple modifications to the Pthreads scheduler can result in significantly improved space and time performance for the programs; the modified scheduler results in as much as 44% less running time and 63% less memory requirement compared to the original Pthreads implementation.
Our results indicate that, provided we use a good scheduler, the rich functionality and standard API of Pthreads can be combined with the advantages of dynamic, lightweight threads to achieve high performance.
Citations: 43
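One way to see why the scheduling order matters for space is a toy model of nested thread creation: a FIFO (breadth-first) ready queue expands the whole tree of threads before finishing any of them, while a LIFO (depth-first) queue finishes children before starting siblings, keeping far fewer threads live at once. This is a hedged toy model of the general phenomenon, not the authors' modified Solaris scheduler.

```python
# Toy model: each "thread" spawns `branch` children down to depth `depth`.
# We measure the peak number of created-but-unfinished threads under a FIFO
# (breadth-first) vs LIFO (depth-first) ready queue. Illustrative only;
# not the paper's scheduler modification.

def peak_live_threads(depth, branch, order):
    ready, live, peak = [(0,)], 1, 1
    while ready:
        node = ready.pop(0) if order == "fifo" else ready.pop()
        if len(node) <= depth:  # this thread spawns children before finishing
            children = [node + (i,) for i in range(branch)]
            ready.extend(children)
            live += len(children)
            peak = max(peak, live)
        live -= 1  # this thread has now run to completion
    return peak
```

For a binary spawn tree of depth 6, the depth-first order keeps the live-thread count near the tree depth while the breadth-first order inflates it to near the tree size, which mirrors the kind of memory blow-up a naive threads scheduler can cause for fine-grained programs.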
CCF: Collaborative Computing Frameworks
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10040
V. Sunderam, S. Y. Cheung, M. D. Hirsch, S. Chodrow, M. Grigni, A. Krantz, I. Rhee, Paul A. Gray, Soeren Olesen, P. Hutto, Julie Sult
Abstract: CCF (Collaborative Computing Frameworks) is a suite of software systems, communications protocols, and tools that enable collaborative, computer-based cooperative work. CCF constructs a virtual work environment on multiple computer systems connected over the Internet to form a Collaboratory. In this setting, participants interact with each other, simultaneously access and operate computer applications, refer to global data repositories or archives, collectively create and manipulate documents or other artifacts, perform computational transformations, and conduct a number of other activities via telepresence. Research issues addressed in this project include problem-solving environments and methodologies for laboratory and instrument-based scientific disciplines, and computer science issues in heterogeneous distributed systems. New approaches are being investigated and developed for fast multiway communication, robust geographically distributed data management methodologies, high-performance computational transforms inlined within collaboration sessions, and related auxiliary issues such as active documents, security, archival storage, and experiment management and control.
In this paper, we discuss the design philosophy and systems rationale behind CCF, describe the major subsystems of the collaborative computing environment, and discuss the salient features of the system.
Citations: 15
A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000
Proceedings of the IEEE/ACM SC98 Conference · Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10010
M. Frumkin, Michelle R. Hribar, Haoqiang Jin, A. Waheed, Jerry C. Yan
Abstract: Porting applications to new high-performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is time-consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of three parallelization tools and compilers based on the NAS Parallel Benchmarks and a CFD application, ARC3D, on the SGI Origin2000 multiprocessor. The tools and compilers compared include: 1) CAPTools, an interactive computer-aided parallelization toolkit; 2) the Portland Group's HPF compiler; and 3) the MIPSPro FORTRAN compiler available on the Origin2000, with support for shared memory multiprocessing directives and the MP runtime library. The tools and compilers are evaluated in four areas: 1) required user interaction, 2) limitations, 3) portability, and 4) performance. Based on these results, a discussion of the feasibility of computer-aided parallelization of aerospace applications is presented, along with suggestions for future work.
Citations: 15