TeraGrid Conference: Latest Publications

An efficient parallelized discrete particle model for dense gas-solid flows on unstructured mesh
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016752
C. L. Wu, K. Nandakumar
Abstract: An efficient, parallelized implementation of a discrete particle/element model (DPM or DEM) coupled with a computational fluid dynamics (CFD) model has been developed. Two parallelization strategies are used to partly overcome the poor load balancing caused by the heterogeneous spatial distribution of particles. First, at the coarse-grained level, the solution domain is decomposed into partitions using a bisection algorithm that minimizes the number of faces on partition boundaries while keeping a nearly equal number of cells in each partition. The gas-phase governing equations are solved on these partitions. Particles, and the solution of their dynamics, are assigned to partitions according to their host cells, so no data exchange between processors is needed when calculating the hydrodynamic forces on particles. By introducing proper data mapping between partitions, the cell void fraction is calculated accurately even when a particle is shared by several partitions. Neighboring partitions are grouped by a gross evaluation before the simulation, with each group holding a similar number of particles. The computation for a group of partitions is assigned to a compute node with multiple cores or processors and shared memory; each core or processor in a node handles the gas governing equations for one partition. At this coarse-grained level, processors communicate and exchange data through the Message Passing Interface (MPI). Second, multithreading is used to parallelize the computation of particle dynamics within each partition. The number of compute threads is determined by the number of particles in each partition and the number of cores in a compute node, so threads in a compute node almost never wait. Since the particle counts in all compute nodes are nearly the same, this strategy yields efficient load balancing across compute nodes. Numerical experiments on the TeraGrid HPC cluster Queen Bee show that the code is efficient and scales to dense gas-solid flows with more than 10 million particles on 128 compute nodes. Bubbling in a middle-scale fluidized bed and the granular Rayleigh-Taylor instability are well captured by the parallel code.
Citations: 0
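The per-partition thread-count rule described in the abstract above can be sketched in a few lines. The function name and the proportional-share formula below are illustrative assumptions; the paper does not give its exact heuristic:

```python
def threads_for_partition(n_particles, total_particles, n_cores):
    """Pick a thread count proportional to a partition's share of the
    particles, capped by the cores available on the node (hypothetical
    heuristic, not the paper's actual formula)."""
    if total_particles == 0:
        return 1
    share = n_particles / total_particles
    # At least one thread, never more threads than cores.
    return max(1, min(n_cores, round(share * n_cores)))
```

Under this rule a partition holding half the particles on an 8-core node would receive 4 threads, which matches the abstract's goal of keeping per-node particle work, and hence thread occupancy, roughly balanced.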
Performance metrics and auditing framework for high performance computer systems
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016759
T. Furlani, Matthew D. Jones, S. Gallo, Andrew E. Bruno, Charng-Da Lu, Amin Ghadersohi, Ryan J. Gentner, A. Patra, R. L. Deleon, G. Laszewski, Lizhe Wang, Ann Zimmerman
Abstract: This paper describes a comprehensive auditing framework, XDMoD, for use by high performance computing centers to readily provide metrics on resource utilization (CPU hours, job size, wait time, etc.), resource performance, and the center's impact in terms of scholarship and research. This role-based auditing framework is designed to meet the following objectives: (1) provide the user community with an easy-to-use tool to oversee their allocations and optimize their use of resources; (2) provide staff with easy access to performance metrics and diagnostics to monitor and tune resource performance for the benefit of users; (3) provide senior management with a tool to easily monitor the utilization, user base, and performance of resources; and (4) help ensure that the resources are effectively enabling research and scholarship. XDMoD is initially focused on the NSF TeraGrid (TG) and the follow-on XSEDE (XD) program, where it will become a key component of the TG/XSEDE User Portal. However, the auditing system is intended to be generally applicable to any HPC system or center.
The XDMoD auditing system is architected as a set of modular components that facilitate the use of community-contributed components. It includes an active and reactive (as opposed to passive) service set accessible through a variety of endpoints, such as a web-based user interface, RESTful web services, and provided development tools. One component also provides a computationally lightweight and flexible application-kernel auditing system that uses best-in-class performance kernels to measure overall system performance with respect to the applications actually being run by users. This allows continuous resource auditing that monitors all aspects of system performance, most critically from a completely user-centric point of view.
Citations: 3
Subset removal on massive data with Dash
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016750
Jonathan Myers, M. Tatineni, R. Sinkovits
Abstract: Ongoing efforts by the Large Synoptic Survey Telescope (LSST) involve the study of asteroid search algorithms and their performance on both real and simulated data. Images of the night sky reveal large numbers of events caused by the reflection of sunlight from asteroids. Detections from consecutive nights can then be grouped into tracks that potentially represent small portions of the asteroids' sky-plane motion. The analysis of these tracks is extremely time consuming, and there is strong interest in developing techniques that eliminate unnecessary tracks, thereby rendering the problem more manageable. One such approach is to collectively examine sets of tracks and discard those that are subsets of others. Our implementation of a subset removal algorithm has proven fast and accurate on modest-sized collections of tracks, but it has extremely large memory requirements for realistic data sets and cannot effectively use conventional high performance computing resources. We report our experience running the subset removal algorithm on the TeraGrid Appro Dash system, which uses the vSMP software developed by ScaleMP to aggregate memory across multiple compute nodes into a large, logical shared memory space. Our results show that Dash is ideally suited to this algorithm, with performance comparable or superior to that obtained on specialized, heavily demanded, large-memory systems such as the SGI Altix UV.
Citations: 2
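The subset-removal operation described above (discard any track whose detections are wholly contained in another track) can be illustrated with a naive in-memory sketch. This is a small illustration of the operation itself, not the paper's memory-hungry large-scale implementation:

```python
def remove_subsets(tracks):
    """Given tracks as sets of detection IDs, keep only the tracks
    that are not subsets of some other track."""
    kept = []
    # Visiting tracks largest-first guarantees a potential superset
    # is already in `kept` before any of its subsets is examined.
    for track in sorted(tracks, key=len, reverse=True):
        if not any(track <= k for k in kept):
            kept.append(track)
    return kept
```

Because `<=` on sets includes equality, exact duplicates are also pruned. The quadratic pairwise comparison in this sketch is precisely what drives the memory and time cost on realistic track counts, motivating the large shared-memory system the paper evaluates.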
An information architecture based on publish/subscribe messaging
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016770
Warren Smith
Abstract: Cyberinfrastructures such as the TeraGrid often have information systems based on querying. While this pull-style information system is appropriate in some circumstances, there are many others where a push style is more appropriate. This paper describes an information system based on push-style publish/subscribe messaging and evaluates the suitability of this approach.
Citations: 5
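The push-style publish/subscribe pattern that the paper contrasts with pull-style querying can be shown with a toy in-process broker. The class and method names here are invented for illustration; the paper's system is a networked messaging service, not an in-process dispatcher:

```python
from collections import defaultdict

class MessageBus:
    """Minimal push-style publish/subscribe broker (illustrative sketch)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Consumers register interest once, up front.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Producers push each message to every current subscriber;
        # consumers never have to poll for updates.
        for callback in self._subscribers[topic]:
            callback(message)
```

The contrast with a query-based system is visible in the control flow: here the information source drives delivery the moment data changes, whereas a pull-style system leaves consumers polling on a schedule and seeing stale data in between.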
Securing science gateways
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016781
Victor Hazlewood, M. Woitaszek
Abstract: Science gateways began to emerge and evolve on the NSF-sponsored national HPC cyberinfrastructure, known today as the TeraGrid, in the early 2000s. Currently, the TeraGrid supports twenty-five science gateways that use a diverse collection of software and methods for integrating with the TeraGrid. This paper surveys TeraGrid science gateway implementations and security models, details a pilot study highlighting the changes employed by the GridAMP science gateway to securely access the Kraken supercomputer, and describes possible solutions and recommendations for improving the security posture of science gateway implementations across the TeraGrid. Securing TeraGrid science gateways with one or more methods that balance security, developer ease of use, and end-user ease of use improves the overall security posture of science gateway implementations across the TeraGrid.
Citations: 4
Early experiences with the Intel Many Integrated Cores accelerated computing technology
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016764
L. Koesterke, J. Boisseau, J. Cazes, K. Milfeld, D. Stanzione
Abstract: We report on early programming experiences with the Intel® Many Integrated Core (Intel® MIC) co-processor. This new, x86-based technology is Intel's answer to GPU-based accelerators from NVIDIA, AMD, and others. Accelerators have sparked interest in the HPC community because they have the potential to significantly increase the compute power of the next generation of supercomputers. The merits of accelerators for general HPC purposes are still very much under debate. Undoubtedly, accelerators add more complexity to an already very complex cluster, and the programmability of accelerators will be the key to enticing the diverse HPC user community to this new technology, even if the performance promise is large.
The study presented here is part of a much broader activity at the Texas Advanced Computing Center (TACC) that focuses on a wide range of accelerators (GPUs, FPGAs, the Intel MIC co-processor, etc.). The Intel MIC architecture is x86 based and supports the languages and parallel programming paradigms commonly found on x86 CPUs, including OpenMP, which is widely accepted in the HPC community for thread-parallel programming. The scope of this initial study is limited to the Intel MIC programming environment, and particularly to the offload-OpenMP model.
Our initial experience with the Intel MIC platform has been very positive. The code modifications required to handle data transfer and the offloading of parallel sections onto the Intel MIC co-processor are small and are conveniently implemented as directives/pragmas on OpenMP constructs. (We use "accelerators" as a generic reference to Intel MIC co-processors, GPUs, FPGAs, etc.)
Citations: 29
Enabling online geospatial isotopic model development and analysis
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016783
Hyojeong Lee, Lan Zhao, G. Bowen, Christopher C. Miller, A. Kalangi, Tonglin Zhang, J. West
Abstract: In recent years, there has been rapid growth in the amount of environmental data collected over large spatial and temporal scales. This presents unprecedented opportunities for new scientific discovery, while at the same time posing significant challenges for the research community in effectively identifying and integrating these datasets into their research models and tools. In this paper, we describe the design and implementation of IsoMAP, a gateway for Isoscapes (isotopic landscapes) modeling, analysis, and prediction. IsoMAP provides an online workspace that helps researchers access and integrate a number of disparate and diverse datasets, develop Isoscapes models over selected spatio-temporal domains using geo-statistical algorithms, and predict maps of the stable isotope ratios of water, plants, and soils. The IsoMAP system leverages the computational resources available on the TeraGrid to perform geospatial data operations and geostatistical model calculations. It builds on a variety of open source technologies for GIS, geospatial data management and processing, grid computing, and gateway development. The system was successfully used to teach a tutorial at the 2011 conference on the Roles of Stable Isotopes in Water Cycle Research, and a post-tutorial survey was conducted. We review the users' feedback and present a future development plan based on it.
Citations: 6
Educational virtual clusters for on-demand MPI/Hadoop/Condor in FutureGrid
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016793
R. Figueiredo, D. Wolinsky, Panoat Chuchaisri
Abstract: FutureGrid provides unique capabilities that enable researchers to deploy customized environments for their experiments in grid and cloud computing, and educators to deploy customized virtual private clusters for hands-on activities. A key enabling technology for this is virtualization and the provisioning of Infrastructure-as-a-Service (IaaS) through cloud computing middleware. This extended abstract describes educational virtual appliances that are automatically self-configured to enable on-demand deployment of three popular distributed/cloud computing stacks (Condor, MPI, and Hadoop) within and/or across FutureGrid sites.
Citations: 3
An OAuth service for issuing certificates to science gateways for TeraGrid users
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016776
J. Basney, Jeff Gaynor
Abstract: In this paper, we present a TeraGrid OAuth service, integrated with the TeraGrid User Portal and the TeraGrid MyProxy service, that provides certificates to science gateways. The OAuth service eliminates the need for TeraGrid users to disclose their TeraGrid passwords to science gateways when accessing their individual TeraGrid accounts via gateway interfaces. Instead, TeraGrid users authenticate at the TeraGrid User Portal to approve issuance of a certificate by MyProxy to the science gateway they are using. We present the design and implementation of the TeraGrid OAuth service, describe the underlying network protocol, and discuss the design decisions and security considerations we made while developing the service in consultation with TeraGrid working groups and staff.
Citations: 18
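The approval flow summarized above (the user authorizes the request at the portal, after which MyProxy issues a certificate to the gateway, so the gateway never sees a password) can be mimicked with a toy in-memory broker. Every name below is hypothetical, and this sketch omits the real network protocol, MyProxy integration, and token signing entirely:

```python
import secrets

class OAuthCertBroker:
    """Toy sketch of an OAuth-style certificate-approval flow
    (hypothetical names; not the TeraGrid service's actual API)."""

    def __init__(self):
        self._pending = {}     # request token -> requesting gateway id
        self._approved = set()

    def request_token(self, gateway_id):
        # The gateway starts the flow by obtaining a request token.
        token = secrets.token_hex(8)
        self._pending[token] = gateway_id
        return token

    def user_approve(self, token):
        # The user authenticates at the portal and approves this request;
        # the gateway never receives the user's password.
        if token in self._pending:
            self._approved.add(token)

    def issue_certificate(self, token):
        # Only user-approved requests receive a credential.
        if token not in self._approved:
            raise PermissionError("request not approved by user")
        gateway_id = self._pending.pop(token)
        self._approved.discard(token)
        return f"cert-for-{gateway_id}-{secrets.token_hex(4)}"
```

The design point the paper makes is visible in the call sequence: the approval step happens at the portal, out of band from the gateway, so a compromised gateway can at worst hold a short-lived certificate, never the user's long-term password.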
UltraScan gateway enhancements: in collaboration with TeraGrid advanced user support
TeraGrid Conference Pub Date: 2011-07-18 DOI: 10.1145/2016741.2016778
B. Demeler, Raminderjeet Singh, M. Pierce, E. Brookes, S. Marru, Bruce Dubbs
Abstract: The UltraScan gateway provides a user-friendly web interface for evaluating experimental analytical ultracentrifuge data using the UltraScan modeling software. The analysis tasks are executed on TeraGrid and campus computational resources. The gateway has been highly successful in providing this service to end users and is consistently listed among the top five gateways in community account usage. This continued growth, and the challenge of sustaining it, required additional support to revisit the job management architecture.
In this paper we describe enhancements to the UltraScan gateway middleware infrastructure provided through the TeraGrid Advanced User Support program. The advanced support efforts primarily focused on: (a) expanding the set of TeraGrid resources used to incorporate new machines; (b) upgrading UltraScan's job management interfaces to use GRAM5 in place of the deprecated WS-GRAM; (c) providing realistic usage scenarios to the GRAM5 and INCA resource testing and monitoring teams; (d) creating general-purpose, resource-specific, and UltraScan-specific error handling and fault tolerance strategies; and (e) providing forward and backward compatibility for the job management system between UltraScan version 2 (currently in production) and version 3 (expected to be released in mid-2011).
Citations: 5