2009 DoD High Performance Computing Modernization Program Users Group Conference: Latest Publications

Use of the NRL DHPI System to Automate the Generation of Nomografs
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.81
K. Obenschain, G. Patnaik, J. Boris
{"title":"Use of the NRL DHPI System to Automate the Generation of Nomografs","authors":"K. Obenschain, G. Patnaik, J. Boris","doi":"10.1109/HPCMP-UGC.2009.81","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.81","url":null,"abstract":"The objective of the paper is to automate generation of high-resolution Dispersion Nomografs for CT Analyst and to add the capability of generating the Nomografs using only high-end commodity hardware. CT-Analyst provides near-instantaneous urban plume prediction with unprecedented accuracy and ease of use. It has additional important features like backtrack from contamination reports and sensor readings to unknown, upwind source locations. These capabilities arise from using Dispersion Nomografs that are precomputed from detailed high-resolution three dimensional (3D) computational fluid dynamics (CFD) calculations.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"1 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125690274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tool and Process Improvement for High-Fidelity Compressor Simulations
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.22
Michael G. List, D. Car
{"title":"Tool and Process Improvement for High-Fidelity Compressor Simulations","authors":"Michael G. List, D. Car","doi":"10.1109/HPCMP-UGC.2009.22","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.22","url":null,"abstract":"Compressors for modern gas turbine engines are challenging to simulate. Disparate length and time scales exist in an aggressive adverse pressure gradient environment amongst a wide array of physical phenomena requiring refinement in both space and time. The resulting mesh sizes and CPU time required to complete time-accurate simulations have become staggering, though they will only continue to increase as the simulation strategy switches from Unsteady Reynolds-Averaged Navier-Stokes (URANS) to Detached Eddy Simulation (DES) and Large Eddy Simulation (LES). For the complex compressor flows, this transition has long been necessary. In order to more effectively simulate compressor flows, several tool developments have taken place, which result in better process and reduced engineer effort. Utilizing the Air Force Research Laboratory Department of Defense (DoD) Supercomputing Resource Center (AFRL DSRC) at Wright-Patterson AFB, improvements in geometry handling, grid generation methodologies, and solver features have reduced workload while benefiting simulation quality. Available applications such as Doxygen, Python, VTK, and Subversion created a productive collaboration environment suitable for both development and testing.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131480381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
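The abstract above names Doxygen, Python, VTK, and Subversion as the toolchain but includes no code. As a minimal, hypothetical sketch of the kind of Python/VTK glue a grid-handling workflow like this might involve, the following loads a legacy-format structured grid and reports basic mesh statistics; the filename is invented and this is not the authors' tooling.

```python
# Hypothetical Python/VTK snippet: inspect a structured CFD grid block.
# Not from the paper; filename and workflow are illustrative only.
import vtk

reader = vtk.vtkStructuredGridReader()      # legacy .vtk format reader
reader.SetFileName("compressor_block.vtk")  # invented example file
reader.Update()

grid = reader.GetOutput()
print("points:", grid.GetNumberOfPoints())
print("cells: ", grid.GetNumberOfCells())
print("bounds:", grid.GetBounds())
```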
Solution of Ultra-Large Structural Mechanics Problems during CAP-I 2008 on the DaVinci System
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.75
B. Andersson, Urban Falk, S. Fawaz
{"title":"Solution of Ultra-Large Structural Mechanics Problems during CAP-I 2008 on the DaVinci System","authors":"B. Andersson, Urban Falk, S. Fawaz","doi":"10.1109/HPCMP-UGC.2009.75","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.75","url":null,"abstract":"The finite element (FE)-code STRIPE was used during Capability Applications Project I (CAP-I) 2008 to efficiently and reliably solve ultra-large structures and materials problems having the order of a billion degrees of freedom on the IBM Power P6-system DaVinci. Problems of this size cannot be solved on the NAVY/Babbage system due to memory requirements or on the AFRL/Hawk system due to long execution wall-time requirements. STRIPE performance on the DaVinci and Babbage systems was compared and a factor of up to eight in speedup in wall-time was observed solving the largest problems solvable on Babbage. This significant increase in computational performance is due to a STRIPE code rewrite, a new finite element modeling (FEM)-approach, and the superior CPU and input/output (I/O) performance of the DaVinci system relative to Babbage. This paper describes the various techniques adopted during CAP-I on DaVinci to achieve high system scalability when solving, in short time, the world’s largest strength of materials problem related to aircraft maintenance and design. Support from major software vendors IBM, DoD Supercomputing Resource Center’s (DSRC’s) support specialists, as well as I/O specialists at ParaTools (as a part of the PET-program) have contributed to the successful CAP-I activities described here.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134258489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Scalability Study (as a Guide for HPC Operations at a Remote Test Facility) on DSRC HPC Systems of Radio Frequency Tomography Code Written for MATLAB® and Parallelized via Star-P®
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.67
B. Elton, S. Samsi, H. Smith, L. Humphrey, B. Guilfoos, S. Ahalt, A. Chalker, K. Magde, Niraj Srivastava, A. H. Abdullah, P. Boyle
{"title":"A Scalability Study (as a Guide for HPC Operations at a Remote Test Facility) on DSRC HPC Systems of Radio Frequency Tomography Code Written for MATLAB® and Parallelized via Star-P®","authors":"B. Elton, S. Samsi, H. Smith, L. Humphrey, B. Guilfoos, S. Ahalt, A. Chalker, K. Magde, Niraj Srivastava, A. H. Abdullah, P. Boyle","doi":"10.1109/HPCMP-UGC.2009.67","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.67","url":null,"abstract":"A team of researchers at the Air Force Research Laboratory in Rome, NY is building a remote test facility for developing a radio frequency (RF) tomography imaging capability. While at the test site and via batch reservations, they plan on employing the Army Research Laboratory Department of Defense Supercomputer Resource Center MJM distributed memory architecture system, while conducting operations at the test site. We present a scalability study of example RF tomography code, written in the M language of MATLAB and parallelized via Star-P, on the MJM system. The team can use the study to help guide operations while at the remote test facility. We are not attempting to show that the RF tomography code scales well; indeed, it suffers from communication bottlenecks in parts of the algorithms. Nonetheless, this is the code the team uses and, for planning purposes, the team needs to know how long it takes to produce images of a given size for a given number of processors with the existing algorithms.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132929689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
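The study's purpose is planning: given a processor count, how long does one image take? A generic way to summarize such strong-scaling measurements is to tabulate speedup and parallel efficiency against a single-processor baseline. The Python sketch below does that with invented timings; neither the numbers nor the code come from the paper, and the flattening efficiency merely mimics the communication-bottleneck behavior the authors describe.

```python
# Strong-scaling summary: fixed image size, varying processor count.
# Wall times (seconds) are invented for illustration.
timings = {1: 5120.0, 2: 2700.0, 4: 1490.0, 8: 930.0, 16: 760.0}

t1 = timings[1]
print(f"{'procs':>5} {'time (s)':>9} {'speedup':>8} {'efficiency':>11}")
for p in sorted(timings):
    speedup = t1 / timings[p]
    print(f"{p:>5} {timings[p]:>9.1f} {speedup:>8.2f} {speedup / p:>11.1%}")
```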
Eddy Resolving Global Ocean Prediction
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.42
A. Wallcraft, E. Metzger, O. Smedstad
{"title":"Eddy Resolving Global Ocean Prediction","authors":"A. Wallcraft, E. Metzger, O. Smedstad","doi":"10.1109/HPCMP-UGC.2009.42","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.42","url":null,"abstract":"This is the first year of a three-year Challenge Project with the principal goal of performing the necessary research and development to prepare to provide real time depiction of the three-dimensional global ocean state at fine resolution (1/25° on the equator, 3.5 km at mid-latitudes, and 2 km in the Arctic). The prediction system won’t run in real time until FY12, since this is when the first computer large enough to run it in real time is expected to be available at NAVOCEANO. A major sub-goal of this effort is to test new capabilities in the existing 1/12° global HYbrid Coordinate Ocean Model (HYCOM) nowcast/forecast system and to transition some of these capabilities to NAVOCEANO in the existing 1/12° global system, and others in the 1/25° system. The new capabilities support (1) increased nowcast and forecast skill, the latter out to 30 days in many deep water regions, including regions of high Navy interest, such as the Western Pacific and the Arabian Sea/ Gulf of Oman, (2) boundary conditions for coastal models in very shallow water (to zero depth with wetting and drying), and (3) external and internal tides, the latter with initial testing at 1/12° but transition to NAVOCEANO only in the 1/25° system (all these will greatly benefit from the increase to 1/25° resolution). At 1/25°, the entire first year will be spent on initial climatologically forced non-assimilative simulations that are necessary before we can start data assimilation hindcasts. At 1/12°, we have started exploring improved model configurations with climatologically forced runs and testing improved data assimilation with hindcast cases.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133995793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Enabling High-Productivity SIP Application Development: Modeling and Simulation of Superconducting Quantum Interference Filters
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.49
J. C. Chaves, A. Chalker, D. Hudak, V. Gadepally, F. Escobar, P. Longhini
{"title":"Enabling High-Productivity SIP Application Development: Modeling and Simulation of Superconducting Quantum Interference Filters","authors":"J. C. Chaves, A. Chalker, D. Hudak, V. Gadepally, F. Escobar, P. Longhini","doi":"10.1109/HPCMP-UGC.2009.49","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.49","url":null,"abstract":"The inherent complexity in utilizing and programming high performance computing (HPC) systems is the main obstacle to widespread exploitation of HPC resources and technologies in the Department of Defense (DoD). Consequently, there is the persistent need to simplify the programming interface for the generic user. This need is particularly acute in the Signal/Image Processing (SIP), Integrated Modeling and Test Environments (IMT), and related DoD communities where typical users have heterogeneous unconsolidated needs. Mastering the complexity of traditional programming tools (C, MPI, etc.) is often seen as a diversion of energy that could be applied to the study of the given scientific domain. Many SIP users instead prefer high-level languages (HLLs) within integrated development environments, such as MATLAB. We report on our collaborative effort to use a HLL distribution for HPC systems called ParaM to optimize and parallelize a compute-intensive Superconducting Quantum Interference Filter (SQIF) application provided by the Navy SPAWAR Systems Center in San Diego, CA. ParaM is an open-source HLL distribution developed at the Ohio Supercomputer Center (OSC), and includes support for processor architectures not supported by MATLAB (e.g., Itanium and POWER5) as well as support for high-speed interconnects (e.g., InfiniBand and Myrinet). We make use of ParaM installations available at the Army Research Laboratory (ARL) DoD Supercomputing Resource Center (DSRC) and OSC to perform a successful optimization/parallelization of the SQIF application. This optimization/parallelization may be used to assess the feasibility of using SQIF devices as extremely sensitive detectors for electromagnetic radiation which is of great importance to the Navy and DoD in general.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123854812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
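The paper's parallelization used ParaM, an HLL environment this listing does not document in detail. As a rough, generic stand-in for the same idea (farming a compute-intensive kernel over many parameter values), here is a Python sketch using only the standard library and NumPy; sqif_response is a placeholder kernel, not the actual SQIF model.

```python
# Generic parameter-sweep parallelization; a stand-in for the HLL approach
# described above, not the ParaM/SQIF code itself.
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def sqif_response(bias: float) -> float:
    """Placeholder for an expensive per-parameter simulation kernel."""
    x = np.linspace(0.0, 20.0, 200_000)
    y = np.cos(bias * x) ** 2        # toy integrand standing in for the model
    return float(y.mean())           # toy scalar "response"

if __name__ == "__main__":
    biases = np.linspace(0.0, 2.0, 64)      # sweep over an applied parameter
    with ProcessPoolExecutor() as pool:     # one task per parameter value
        responses = list(pool.map(sqif_response, biases))
    print("peak response:", max(responses))
```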
Application of Multi-block Grid and Parallelization Techniques in Hydrodynamic Modelling
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.78
P. Luong, R. Chapman
{"title":"Application of Multi-block Grid and Parallelization Techniques in Hydrodynamic Modelling","authors":"P. Luong, R. Chapman","doi":"10.1109/HPCMP-UGC.2009.78","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.78","url":null,"abstract":"The Curvilinear Hydrodynamic 3-D (CH3D-WES) model is routinely applied in three-dimensional (3D) hydrodynamic and water quality modeling studies at the Engineering Research and Development Center (ERDC), Mississippi. Recent model improvements include the implementation of multiple grain size class sediment transport, grid wetting/drying, spatially and temporally varying wind and wave radiation stress gradient forcing. The practical application of the original single-block version of CH3D, which include the aforementioned model improvements have been limited to small computational domains and short simulation time periods, due to long computational processing time as well as large memory requirements. Critical to elimination of these restrictions was the implementation of data decomposition and Message Passing Interface (MPI), or a multi-block grid capability. The advantages of the multi-block grid parallel version of CH3D include: 1) the flexibility of site specific horizontal and vertical grid resolution assigned to each grid block, 2) block specific application of the sediment transport, wave radiation stress gradient forcing and computational cell wetting/drying model options, and 3) reduced memory and computational time requirements allowing larger computational domains and longer simulation time periods. To demonstrate the advantages of the multiblock capability, hydrodynamic and salinity transport simulations were performed utilizing the existing Mississippi Sound and Berwick Bay computational domains. A comparison of single block and multi-block predictions of salinity time series is presented. CPU wallclock times and load balancing between single-grid and multi-block grid on several high performance computer systems is discussed.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123924024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
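The key enabler described above is data decomposition over MPI: each block computes locally and exchanges boundary data with its neighbors. As a minimal sketch of that halo-exchange pattern (not the CH3D implementation; the block shape and a 1-D neighbor layout are assumptions), in Python with mpi4py:

```python
# Minimal 1-D halo exchange between grid blocks, one block per MPI rank.
# Illustrative only; CH3D's actual multi-block scheme is more general.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, ny = 64, 32                                  # interior rows x columns
block = np.full((nx + 2, ny), float(rank))       # rows 0 and -1 are halos

left = rank - 1 if rank > 0 else MPI.PROC_NULL   # PROC_NULL makes the
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL  # exchange a no-op

# Pass our first interior row to the left neighbor; fill our right halo.
comm.Sendrecv(sendbuf=np.ascontiguousarray(block[1]), dest=left, sendtag=0,
              recvbuf=block[-1], source=right, recvtag=0)
# Symmetric exchange: last interior row to the right; fill our left halo.
comm.Sendrecv(sendbuf=np.ascontiguousarray(block[-2]), dest=right, sendtag=1,
              recvbuf=block[0], source=left, recvtag=1)

print(f"rank {rank}: halo values {block[0, 0]:.0f}, {block[-1, 0]:.0f}")
```

Run with, e.g., `mpiexec -n 4 python halo.py`; after the two exchanges, each interior rank's halo rows hold its neighbors' boundary data.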
Million-Atom Count Simulations of the Effects of Carbon Nanotube Length Distributions on Fiber Mechanical Properties
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.33
C. Cornwell, R. Haskins, J. Allen, C. R. Welch, R. Kirgan
{"title":"Million-Atom Count Simulations of the Effects of Carbon Nanotube Length Distributions on Fiber Mechanical Properties","authors":"C. Cornwell, R. Haskins, J. Allen, C. R. Welch, R. Kirgan","doi":"10.1109/HPCMP-UGC.2009.33","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.33","url":null,"abstract":"The extraordinary mechanical properties of carbon nanotubes (CNTs) make them prime candidates as a basis for super infrastructure materials. Ab initio, tight binding, and molecular dynamics simulations and recent experiments have shown that CNTs have tensile strengths up to about 15.5 million psi (110 GPa), Young’s modulus of 150 million psi (1 TPa), and density of about 80 lbs/ft3 (1.3 g/cm3). These qualities provide tensile strength-toweight and stiffness-to-weight ratios about 900 times and 30 times, respectively, those of high-strength (100,000- psi) steel. Building macromaterials that maintain these properties is challenging. Molecular defects, voids, foreign inclusions, and, in particular, weak intermolecular bonds have, to date, prevented macromaterials formed from CNTs from having the remarkable strength and stiffness characteristics of CNTs. The van der Waals forces associated with CNTsrepresent a force per unit length between CNTs. Accordingly, one would expect the bond strength between aligned CNTs to increase with overlap length. Real filaments are likely composed of CNTs with some distribution of lengths. To understand the effects that CNT length distributions have on the tensile strength of neat filaments of aligned CNTs, we performed a series of quenched molecular dynamics simulations on high performance computers using Sandia Laboratory’s Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code. The cross-section of each filament was composed of hexagonal closest-packed (HCP) array CNT strands that formed two HCP rings. The filaments were constructed by placing (5,5) chirality CNTs end to end. While the choice of a single-chirality CNT fiber is currently unrealizable, the use of a singlechirality fiber allowed us to focus only on the effects of CNT lengths on filament response. The lengths of the CNTs were randomly selected to have Gaussian distribution with the average length ranging from 100 to 1,600Å. A series of simulations were performed on filament with lengths ranging from 400 to 6,400Å. For each filament, the strain was increased in small increments and quenched between strain increments. The total tensile force on the filament was recorded and used to determine the uniaxial stress-strain response of the filaments. The results of the simulations quantified the improvements in Young’s modulus, tensile strength, and critical strain as a function of the increase in the average component CNT lengths. These are the first molecular dynamics simulations that the authors are aware of that treat statistical qualities of realistic CNT structures. The simulation results are being used to guide the molecular design of CNT filaments to achieve super (1 million psi) strength. 
The simulations would be impractical, and perhaps impossible, without massively parallel, highperformance computational platforms and molecular dynamics simulation tools optimized to run on such platforms.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124626246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
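The statistical setup above (segment lengths drawn from a Gaussian, strands laid end to end to form a filament) is easy to illustrate outside the MD code. The Python sketch below is an assumption-laden toy: the strand count, mean length, spread, and filament length are invented stand-ins for the paper's inputs, and the LAMMPS mechanics are not reproduced.

```python
# Toy construction of CNT strand length sequences with Gaussian-distributed
# segment lengths; all parameters are illustrative, not the paper's inputs.
import numpy as np

rng = np.random.default_rng(2009)

def build_strand(mean_len, std_len, filament_len):
    """Draw positive segment lengths until the strand spans the filament."""
    segments = []
    total = 0.0
    while total < filament_len:
        length = rng.normal(mean_len, std_len)
        if length <= 0.0:            # reject unphysical draws
            continue
        segments.append(length)
        total += length
    return segments

n_strands = 19   # center + two hexagonal rings (1 + 6 + 12); an assumption
strands = [build_strand(800.0, 80.0, 6400.0) for _ in range(n_strands)]
print(f"segments per strand: "
      f"{np.mean([len(s) for s in strands]):.1f} on average")
```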
Scalability of the CTH Shock Physics Code on the Cray XT
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.74
S. Schraml, T. M. Kendall
{"title":"Scalability of the CTH Shock Physics Code on the Cray XT","authors":"S. Schraml, T. M. Kendall","doi":"10.1109/HPCMP-UGC.2009.74","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.74","url":null,"abstract":"This paper presents an overview of an explicit message-passing paradigm for an Eulerian finite volume method for modeling solid dynamics problems involving shock wave propagation, multiple materials, and large deformations. Three-dimensional simulations of highvelocity impact were conducted on two scalable high performance computing systems to evaluate the performance of the message-passing code. Simulations were performed using greater than three billion computational cells running on more than 8,000 processor cores. The performance of the messagepassing code was found to scale linearly on the two computer systems evaluated across the range of cases considered.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124714724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
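Linear scaling at billions of cells across thousands of cores is, in effect, a weak-scaling claim: with roughly constant work per core, efficiency is the baseline wall time divided by the wall time at higher core counts, and ideal behavior is a flat curve. A small Python sketch with invented numbers (not the paper's data) shows the bookkeeping:

```python
# Weak-scaling efficiency: problem size grows with core count, so ideal
# wall time is flat. All numbers below are invented for illustration.
cells_per_core = 400_000
wall_times = {1024: 812.0, 2048: 815.0, 4096: 828.0, 8192: 846.0}

base = wall_times[min(wall_times)]
for cores in sorted(wall_times):
    cells = cores * cells_per_core
    eff = base / wall_times[cores]
    print(f"{cores:>5} cores  {cells / 1e9:5.2f}B cells  efficiency {eff:6.1%}")
```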
Airflow Simulation over a Vegetated Soil Surface
2009 DoD High Performance Computing Modernization Program Users Group Conference. Pub Date: 2009-06-15. DOI: 10.1109/HPCMP-UGC.2009.9
P. Luong, R. Bernard, S. Howington
{"title":"Airflow Simulation over a Vegetated Soil Surface","authors":"P. Luong, R. Bernard, S. Howington","doi":"10.1109/HPCMP-UGC.2009.9","DOIUrl":"https://doi.org/10.1109/HPCMP-UGC.2009.9","url":null,"abstract":"The performance of infrared sensors under various meteorological and soil-surface conditions is a perennial concern for remote characterization of local environments. To aid in the testing and improvement of these sensors, computational fluid dynamics (CFD) models can provide realistic simulations of ambient airflow and temperature conditions. High CFD grid resolution is generally required for capturing the physical properties of a given region of interest, which may contain rocks, bushes, grasses, and other vegetation. In this study, the PAR3D model is used to compute spatially variable wind speeds and air temperatures, which will be coupled (in future work) with surface heat-exchange functions in ground-water and vegetation models. The resulting soil, rock, and vegetation temperatures can then be used to compute infrared images for these features, and the synthetic images can ultimately be used to test sensor performance. Thus, the eventual aim of the airflow, heat-transfer, and infrared computations is the production of high-resolution, synthetic infrared imagery for realistic surface environments.","PeriodicalId":268639,"journal":{"name":"2009 DoD High Performance Computing Modernization Program Users Group Conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122013790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0