Accelerating advection via approximate block exterior flow maps

R. Bleile, L. Sugiyama, C. Garth, H. Childs
{"title":"通过近似块外部流图加速平流","authors":"R. Bleile, L. Sugiyama, C. Garth, H. Childs","doi":"10.2352/ISSN.2470-1173.2017.1.VDA-397","DOIUrl":null,"url":null,"abstract":"Flow visualization techniques involving extreme advection workloads are becoming increasingly popular. While these techniques often produce insightful images, the execution times to carry out the corresponding computations are lengthy. With this work, we introduce an alternative to traditional advection, which improves on performance at the cost of decreased accuracy. Our approach centers around block exterior flow maps (BEFMs), which can be used to accelerate flow computations by reducing redundant calculations. Our algorithm uses Lagrangian interpolation, but falls back to Eulerian advection whenever regions of high error are encountered. In our study, we demonstrate that the BEFM-based approach can lead to significant savings in time, with limited loss in accuracy. Introduction A myriad of scientific simulations, including those modeling fluid flow, astrophysics, fusion, thermal hydraulics, and others, model phenomena where constituents move through their volume. This movement is captured by a velocity field stored at every point on the mesh. Further, other vector fields, such as force fields for electricity, magnetism, and gravity, also govern movement and interaction. A wide range of flow visualization techniques are used to understand such vector fields. The large majority of these techniques rely on placing particles in the volume and analyzing the trajectories they follow. Traditionally, the particles are displaced through the volume using an advection step, i.e., solving an ordinary differential equation using a Runge-Kutta integrator. As computational power on modern desktops has increased, flow visualization algorithms have been empowered to consider designs that include more and more particles advecting for longer and longer periods. Techniques such as Line Integral Convolution and Finite-Time Lyapunov Exponents (FTLE) seed particles densely in a volume and examine where these particles end up. For these operations, and many others, only the ending position of the particle is needed, and not the details of the path the particle took to get there. Despite seemingly abundant computational power, some techniques have excessively long running times. For example, ocean modelers often study the FTLE within an ocean with both high seeding density and very long durations for the particles (years of simulation time) [2, 3]. As another example, fusion scientists are interested in FTLE computations inside a tokamak where particles travel for hundreds of rotations [1]. In both cases, FTLE calculations, even on supercomputers, can take tens of minutes. With this work, we consider an alternative to traditional Eulerian advection. The key observation that motivates the work is that, in conditions with dense seeding and long durations, particles will tread the same (or very similar) paths over and over. Where the current paradigm carries out the same computation over and over, we consider a new paradigm where a computation can be carried out a single time, and then reused. That said, we find that, while particle trajectories do often travel quite close to each other, they typically follow their own (slightly) unique paths. Therefore, to effectively reuse computations, we consider a method where we interpolate new trajectories from existing ones, effectively trading accuracy for speed. 
Our method depends on Block Exterior Flow Maps, or BEFMs. The idea behind BEFMs is to pre-compute known trajectories that lie on block boundaries. It assumes data in blockdecomposed, but this assumption is common when dealing with parallel, distributed-memory computations. When a computeintensive flow visualization algorithm is then calculated, it consults with the BEFMs and does Lagrangian-style interpolation from its known trajectories. While this approach introduces error, it can be considerably faster, since it avoids Eulerian advection steps inside each block. The contributions of the paper are as follows: • Introduction of BEFMs as an operator for accelerating dense particle advection calculations; • A novel method for generating an approximate BEFM that can be used in practice; • A study that evaluates the approximate BEFM approach, including comparisons with traditional advection. Related Work McLouglin et al. recently surveyed the state of the art in flow visualization [4], and the large majority of techniques they described incorporate particle advection. Any of these techniques could possibly benefit from the BEFM approach, although the tradeoff in accuracy is only worthwhile for those that have extreme computational costs, e.g., Line Integral Convolution [5], finite-time Lyapunov exponents [6], and Poincare analysis [7]. One solution for dealing with extreme advection workloads is parallelization. A summary of strategies for parallelizing particle advection problems on CPU clusters can be found in [8]. The basic approaches are to parallelize-over-data, parallelizeover-particles, or a hybrid of the two [9]. Recent results using parallelization-over-data demonstrated streamline computation on up to 32,768 processors and eight billion cells [11]. These parallelization approaches are complementary with our own. That is, traditional parallel approaches can be used in the current way, but the phase where they advect particles through a region could be replaced by our BEFM approach. In terms of precomputation, the most notable related work comes from Nouanesengsy et al. [10]. They precomputed flow patterns within a region and used the resulting statistics to decide which regions to load. While their precomputation and ours have similar elements, we are using the results of the precomputation in different ways: Nouanesengsy et al. for load balancing and ourselves to replace multiple integrations with one interpolation. In terms of accelerating particle advection through approximation, two works stand out. Brunton et al. [18] also looked at accelerating FTLE calculation, but they considered the unsteady state problem, and used previous calculations to accelerate new ones. While this is a compelling approach, it does not help with the steady state problem we consider. Hlwatsch et al. [15] employ an approach where flow is calculated by following hierarchical lines. This approach is well-suited for their use case, where all data fits within the memory of a GPU, but it is not clear how to build and connect hierarchical lines within a distributed memory parallel setting. In contrast, our method, by focusing on flow between exteriors of blocks, is well-suited for this type of parallelism. Bhatia et al. [19] studied edge maps, and the properties of flow across edge maps. While this work clearly has some similar elements to our, their focus was more on topology and accuracy, and less on accelerating particle advection workloads. 
Scientific visualization algorithms are increasingly using Lagrangian calculations of flow. Jobard et al. [12] presented a Lagrangian-Eulerian advection scheme which incorporated forward advection with a backward tracing Lagrangian step to more accurately shift textures during animation. Salzbrunn et al. delivered a technique for analyzing circulation and detecting vortex cores given predicates from pre-computed sets of streamlines [14] and pathlines [13]. Agranovsky et al. [16] focused on extracting a basis of Lagrangian flows as an in situ compression operator, while Chandler at al. [17] focused on how to interpolate new pathlines from arbitrary existing sets. Of these works, none share our focus on accelerating advection. Method Our method makes use of block exterior flow maps (BEFM). We begin by defining this mapping, in Section . We then describe our method, and how it incorporates these maps, in Section . Block Exterior Flow Map Definition In scientific computing, parallel simulation codes often partition their spatial volume over their compute nodes. Restated, each compute node will operate on one spatial region, and that compute node will be considered the “owner” of that region. Such a region is frequently referred to as a block. For example, a simulation over the spatial region X: [0-1], Y: [0-1], and Z: [0-1] and having N compute nodes could have N blocks, with each block covering a volume of 1 N . Consider a point P that lies on the exterior of a block B. If the velocity field points toward the interior of B at point P, then Eulerian advection of a particle originating at P will take the particle through the interior of B until it exits. In this case, the particle will exit B at some location P′, where P′ is also located on the exterior of B. The BEFM captures this mapping. The BEFM’s domain is all spatial locations on the exterior of blocks, and its range is also spatial locations on the exteriors of blocks. Further, for any given P in the BEFM’s domain, BEFM(P,B) will produce a location that is on B’s exterior. Saying it concisely, the BEFM is the mapping from particles at exteriors of blocks to the locations where those particles will exit the block under Eulerian advection. Figure 1 illustrates an example of a BEFM.","PeriodicalId":89305,"journal":{"name":"Visualization and data analysis","volume":"95 1","pages":"140-148"},"PeriodicalIF":0.0000,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Accelerating advection via approximate block exterior flow maps\",\"authors\":\"R. Bleile, L. Sugiyama, C. Garth, H. Childs\",\"doi\":\"10.2352/ISSN.2470-1173.2017.1.VDA-397\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Flow visualization techniques involving extreme advection workloads are becoming increasingly popular. While these techniques often produce insightful images, the execution times to carry out the corresponding computations are lengthy. With this work, we introduce an alternative to traditional advection, which improves on performance at the cost of decreased accuracy. Our approach centers around block exterior flow maps (BEFMs), which can be used to accelerate flow computations by reducing redundant calculations. Our algorithm uses Lagrangian interpolation, but falls back to Eulerian advection whenever regions of high error are encountered. 
In our study, we demonstrate that the BEFM-based approach can lead to significant savings in time, with limited loss in accuracy. Introduction A myriad of scientific simulations, including those modeling fluid flow, astrophysics, fusion, thermal hydraulics, and others, model phenomena where constituents move through their volume. This movement is captured by a velocity field stored at every point on the mesh. Further, other vector fields, such as force fields for electricity, magnetism, and gravity, also govern movement and interaction. A wide range of flow visualization techniques are used to understand such vector fields. The large majority of these techniques rely on placing particles in the volume and analyzing the trajectories they follow. Traditionally, the particles are displaced through the volume using an advection step, i.e., solving an ordinary differential equation using a Runge-Kutta integrator. As computational power on modern desktops has increased, flow visualization algorithms have been empowered to consider designs that include more and more particles advecting for longer and longer periods. Techniques such as Line Integral Convolution and Finite-Time Lyapunov Exponents (FTLE) seed particles densely in a volume and examine where these particles end up. For these operations, and many others, only the ending position of the particle is needed, and not the details of the path the particle took to get there. Despite seemingly abundant computational power, some techniques have excessively long running times. For example, ocean modelers often study the FTLE within an ocean with both high seeding density and very long durations for the particles (years of simulation time) [2, 3]. As another example, fusion scientists are interested in FTLE computations inside a tokamak where particles travel for hundreds of rotations [1]. In both cases, FTLE calculations, even on supercomputers, can take tens of minutes. With this work, we consider an alternative to traditional Eulerian advection. The key observation that motivates the work is that, in conditions with dense seeding and long durations, particles will tread the same (or very similar) paths over and over. Where the current paradigm carries out the same computation over and over, we consider a new paradigm where a computation can be carried out a single time, and then reused. That said, we find that, while particle trajectories do often travel quite close to each other, they typically follow their own (slightly) unique paths. Therefore, to effectively reuse computations, we consider a method where we interpolate new trajectories from existing ones, effectively trading accuracy for speed. Our method depends on Block Exterior Flow Maps, or BEFMs. The idea behind BEFMs is to pre-compute known trajectories that lie on block boundaries. It assumes data in blockdecomposed, but this assumption is common when dealing with parallel, distributed-memory computations. When a computeintensive flow visualization algorithm is then calculated, it consults with the BEFMs and does Lagrangian-style interpolation from its known trajectories. While this approach introduces error, it can be considerably faster, since it avoids Eulerian advection steps inside each block. 
The contributions of the paper are as follows: • Introduction of BEFMs as an operator for accelerating dense particle advection calculations; • A novel method for generating an approximate BEFM that can be used in practice; • A study that evaluates the approximate BEFM approach, including comparisons with traditional advection. Related Work McLouglin et al. recently surveyed the state of the art in flow visualization [4], and the large majority of techniques they described incorporate particle advection. Any of these techniques could possibly benefit from the BEFM approach, although the tradeoff in accuracy is only worthwhile for those that have extreme computational costs, e.g., Line Integral Convolution [5], finite-time Lyapunov exponents [6], and Poincare analysis [7]. One solution for dealing with extreme advection workloads is parallelization. A summary of strategies for parallelizing particle advection problems on CPU clusters can be found in [8]. The basic approaches are to parallelize-over-data, parallelizeover-particles, or a hybrid of the two [9]. Recent results using parallelization-over-data demonstrated streamline computation on up to 32,768 processors and eight billion cells [11]. These parallelization approaches are complementary with our own. That is, traditional parallel approaches can be used in the current way, but the phase where they advect particles through a region could be replaced by our BEFM approach. In terms of precomputation, the most notable related work comes from Nouanesengsy et al. [10]. They precomputed flow patterns within a region and used the resulting statistics to decide which regions to load. While their precomputation and ours have similar elements, we are using the results of the precomputation in different ways: Nouanesengsy et al. for load balancing and ourselves to replace multiple integrations with one interpolation. In terms of accelerating particle advection through approximation, two works stand out. Brunton et al. [18] also looked at accelerating FTLE calculation, but they considered the unsteady state problem, and used previous calculations to accelerate new ones. While this is a compelling approach, it does not help with the steady state problem we consider. Hlwatsch et al. [15] employ an approach where flow is calculated by following hierarchical lines. This approach is well-suited for their use case, where all data fits within the memory of a GPU, but it is not clear how to build and connect hierarchical lines within a distributed memory parallel setting. In contrast, our method, by focusing on flow between exteriors of blocks, is well-suited for this type of parallelism. Bhatia et al. [19] studied edge maps, and the properties of flow across edge maps. While this work clearly has some similar elements to our, their focus was more on topology and accuracy, and less on accelerating particle advection workloads. Scientific visualization algorithms are increasingly using Lagrangian calculations of flow. Jobard et al. [12] presented a Lagrangian-Eulerian advection scheme which incorporated forward advection with a backward tracing Lagrangian step to more accurately shift textures during animation. Salzbrunn et al. delivered a technique for analyzing circulation and detecting vortex cores given predicates from pre-computed sets of streamlines [14] and pathlines [13]. Agranovsky et al. [16] focused on extracting a basis of Lagrangian flows as an in situ compression operator, while Chandler at al. 
[17] focused on how to interpolate new pathlines from arbitrary existing sets. Of these works, none share our focus on accelerating advection. Method Our method makes use of block exterior flow maps (BEFM). We begin by defining this mapping, in Section . We then describe our method, and how it incorporates these maps, in Section . Block Exterior Flow Map Definition In scientific computing, parallel simulation codes often partition their spatial volume over their compute nodes. Restated, each compute node will operate on one spatial region, and that compute node will be considered the “owner” of that region. Such a region is frequently referred to as a block. For example, a simulation over the spatial region X: [0-1], Y: [0-1], and Z: [0-1] and having N compute nodes could have N blocks, with each block covering a volume of 1 N . Consider a point P that lies on the exterior of a block B. If the velocity field points toward the interior of B at point P, then Eulerian advection of a particle originating at P will take the particle through the interior of B until it exits. In this case, the particle will exit B at some location P′, where P′ is also located on the exterior of B. The BEFM captures this mapping. The BEFM’s domain is all spatial locations on the exterior of blocks, and its range is also spatial locations on the exteriors of blocks. Further, for any given P in the BEFM’s domain, BEFM(P,B) will produce a location that is on B’s exterior. Saying it concisely, the BEFM is the mapping from particles at exteriors of blocks to the locations where those particles will exit the block under Eulerian advection. Figure 1 illustrates an example of a BEFM.\",\"PeriodicalId\":89305,\"journal\":{\"name\":\"Visualization and data analysis\",\"volume\":\"95 1\",\"pages\":\"140-148\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-01-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Visualization and data analysis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2352/ISSN.2470-1173.2017.1.VDA-397\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visualization and data analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2352/ISSN.2470-1173.2017.1.VDA-397","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Flow visualization techniques involving extreme advection workloads are becoming increasingly popular. While these techniques often produce insightful images, the execution times to carry out the corresponding computations are lengthy. With this work, we introduce an alternative to traditional advection, which improves performance at the cost of decreased accuracy. Our approach centers around block exterior flow maps (BEFMs), which can be used to accelerate flow computations by reducing redundant calculations. Our algorithm uses Lagrangian interpolation, but falls back to Eulerian advection whenever regions of high error are encountered. In our study, we demonstrate that the BEFM-based approach can lead to significant savings in time, with limited loss in accuracy.

Introduction

A myriad of scientific simulations, including those modeling fluid flow, astrophysics, fusion, thermal hydraulics, and others, model phenomena where constituents move through their volume. This movement is captured by a velocity field stored at every point on the mesh. Further, other vector fields, such as force fields for electricity, magnetism, and gravity, also govern movement and interaction. A wide range of flow visualization techniques are used to understand such vector fields. The large majority of these techniques rely on placing particles in the volume and analyzing the trajectories they follow. Traditionally, the particles are displaced through the volume using an advection step, i.e., by solving an ordinary differential equation with a Runge-Kutta integrator. As computational power on modern desktops has increased, flow visualization algorithms have been empowered to consider designs that include more and more particles advecting for longer and longer periods. Techniques such as Line Integral Convolution and Finite-Time Lyapunov Exponents (FTLE) seed particles densely in a volume and examine where these particles end up. For these operations, and many others, only the ending position of the particle is needed, not the details of the path the particle took to get there.

Despite seemingly abundant computational power, some techniques have excessively long running times. For example, ocean modelers often study the FTLE within an ocean with both high seeding density and very long durations for the particles (years of simulation time) [2, 3]. As another example, fusion scientists are interested in FTLE computations inside a tokamak, where particles travel for hundreds of rotations [1]. In both cases, FTLE calculations, even on supercomputers, can take tens of minutes.

With this work, we consider an alternative to traditional Eulerian advection. The key observation that motivates the work is that, in conditions with dense seeding and long durations, particles tread the same (or very similar) paths over and over. Where the current paradigm carries out the same computation repeatedly, we consider a new paradigm where a computation can be carried out a single time and then reused. That said, we find that, while particle trajectories do often travel quite close to each other, they typically follow their own (slightly) unique paths. Therefore, to effectively reuse computations, we consider a method that interpolates new trajectories from existing ones, effectively trading accuracy for speed.
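For reference, the traditional advection described above advances each particle one integration step at a time. The following is a minimal sketch of that baseline, assuming a steady velocity field exposed through a hypothetical `velocity(p)` function and a fixed step size; it is illustrative only, not the paper's implementation.

```python
import numpy as np

def velocity(p):
    """Hypothetical steady velocity field sampled at position p (3-vector).
    A real workload would interpolate mesh data instead of this analytic stand-in."""
    x, y, z = p
    return np.array([-y, x, 0.1 * np.sin(z)])

def rk4_step(p, h):
    """Advance one particle by a single fourth-order Runge-Kutta step of size h."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def advect(p, h, num_steps):
    """Traditional advection: every particle pays for every integration step."""
    for _ in range(num_steps):
        p = rk4_step(p, h)
    return p  # FTLE-style analyses only need this ending position
```

The cost of this per-particle, per-step loop is what grows with dense seeding and long durations, and it is exactly the work the BEFM approach aims to reuse rather than repeat.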
Our method depends on Block Exterior Flow Maps, or BEFMs. The idea behind BEFMs is to pre-compute known trajectories that lie on block boundaries. It assumes the data is block-decomposed, but this assumption is common when dealing with parallel, distributed-memory computations. When a compute-intensive flow visualization algorithm is then run, it consults the BEFMs and performs Lagrangian-style interpolation from their known trajectories. While this approach introduces error, it can be considerably faster, since it avoids Eulerian advection steps inside each block.

The contributions of the paper are as follows:
• Introduction of BEFMs as an operator for accelerating dense particle advection calculations;
• A novel method for generating an approximate BEFM that can be used in practice;
• A study that evaluates the approximate BEFM approach, including comparisons with traditional advection.
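To make the interplay between Lagrangian-style interpolation and the Eulerian fallback concrete, here is a schematic sketch of how a per-block lookup might work. The `ApproxBefm` container, the inverse-distance interpolation, and the distance-based error proxy are illustrative assumptions, not the paper's actual data structure or error criterion.

```python
import numpy as np

class ApproxBefm:
    """Schematic block exterior flow map: entry points on a block's faces
    mapped to the exit points reached under Eulerian advection."""

    def __init__(self, entries, exits):
        self.entries = np.asarray(entries)  # (n, 3) seed locations on the block exterior
        self.exits = np.asarray(exits)      # (n, 3) where those seeds left the block

    def nearest(self, p, k=4):
        """Indices and distances of the k precomputed entries closest to p."""
        d = np.linalg.norm(self.entries - p, axis=1)
        idx = np.argsort(d)[:k]
        return idx, d[idx]

def map_through_block(p, befm, advect_through_block, max_gap=0.05):
    """Return an approximate block-exit location for a particle entering at p.

    Interpolates from precomputed BEFM trajectories when nearby entries exist;
    otherwise falls back to Eulerian advection through the block."""
    idx, dist = befm.nearest(p)
    if dist[0] > max_gap:                  # crude proxy for a "region of high error"
        return advect_through_block(p)     # fall back to step-by-step advection
    w = 1.0 / np.maximum(dist, 1e-12)      # inverse-distance weights
    w /= w.sum()
    return (w[:, None] * befm.exits[idx]).sum(axis=0)
```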
Related Work

McLoughlin et al. recently surveyed the state of the art in flow visualization [4], and the large majority of techniques they describe incorporate particle advection. Any of these techniques could potentially benefit from the BEFM approach, although the tradeoff in accuracy is only worthwhile for those with extreme computational costs, e.g., Line Integral Convolution [5], finite-time Lyapunov exponents [6], and Poincaré analysis [7].

One solution for dealing with extreme advection workloads is parallelization. A summary of strategies for parallelizing particle advection problems on CPU clusters can be found in [8]. The basic approaches are to parallelize over data, parallelize over particles, or a hybrid of the two [9]. Recent results using parallelization over data demonstrated streamline computation on up to 32,768 processors and eight billion cells [11]. These parallelization approaches are complementary to our own. That is, traditional parallel approaches can be used in their current form, but the phase where they advect particles through a region could be replaced by our BEFM approach.

In terms of precomputation, the most notable related work comes from Nouanesengsy et al. [10]. They precomputed flow patterns within a region and used the resulting statistics to decide which regions to load. While their precomputation and ours have similar elements, we use the results of the precomputation in different ways: Nouanesengsy et al. for load balancing, and ourselves to replace multiple integrations with one interpolation.

In terms of accelerating particle advection through approximation, two works stand out. Brunton et al. [18] also looked at accelerating FTLE calculation, but they considered the unsteady state problem and used previous calculations to accelerate new ones. While this is a compelling approach, it does not help with the steady state problem we consider. Hlawatsch et al. [15] employ an approach where flow is calculated by following hierarchical lines. This approach is well-suited for their use case, where all data fits within the memory of a GPU, but it is not clear how to build and connect hierarchical lines within a distributed-memory parallel setting. In contrast, our method, by focusing on flow between the exteriors of blocks, is well-suited for this type of parallelism. Bhatia et al. [19] studied edge maps and the properties of flow across them. While this work clearly has some elements similar to ours, their focus was more on topology and accuracy, and less on accelerating particle advection workloads.

Scientific visualization algorithms are increasingly using Lagrangian calculations of flow. Jobard et al. [12] presented a Lagrangian-Eulerian advection scheme which combined forward advection with a backward-tracing Lagrangian step to more accurately shift textures during animation. Salzbrunn et al. delivered a technique for analyzing circulation and detecting vortex cores given predicates from pre-computed sets of streamlines [14] and pathlines [13]. Agranovsky et al. [16] focused on extracting a basis of Lagrangian flows as an in situ compression operator, while Chandler et al. [17] focused on how to interpolate new pathlines from arbitrary existing sets. Of these works, none share our focus on accelerating advection.

Method

Our method makes use of block exterior flow maps (BEFMs). We begin by defining this mapping; we then describe our method and how it incorporates these maps.

Block Exterior Flow Map Definition

In scientific computing, parallel simulation codes often partition their spatial volume over their compute nodes. Restated, each compute node operates on one spatial region, and that compute node is considered the "owner" of that region. Such a region is frequently referred to as a block. For example, a simulation over the spatial region X: [0,1], Y: [0,1], and Z: [0,1] with N compute nodes could have N blocks, with each block covering a volume of 1/N.

Consider a point P that lies on the exterior of a block B. If the velocity field points toward the interior of B at point P, then Eulerian advection of a particle originating at P will take the particle through the interior of B until it exits. In this case, the particle will exit B at some location P′, where P′ is also located on the exterior of B. The BEFM captures this mapping. The BEFM's domain is all spatial locations on the exteriors of blocks, and its range is also spatial locations on the exteriors of blocks. Further, for any given P in the BEFM's domain, BEFM(P, B) will produce a location that is on B's exterior. Stated concisely, the BEFM is the mapping from particles at the exteriors of blocks to the locations where those particles will exit the block under Eulerian advection. Figure 1 illustrates an example of a BEFM.
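To ground the definition, the sketch below builds an approximate BEFM for a single axis-aligned block by seeding its six faces and advecting each seed with Eulerian steps until it leaves the block. The face-sampling resolution, step size, and reuse of an `rk4_step`-style integrator are assumptions for illustration; the paper's method for generating approximate BEFMs may differ.

```python
import numpy as np

def face_seeds(block_min, block_max, n=8):
    """Sample an n-by-n grid of entry points on each of the block's six faces."""
    lo, hi = np.asarray(block_min, float), np.asarray(block_max, float)
    u = np.linspace(0.0, 1.0, n)
    seeds = []
    for axis in range(3):
        others = [a for a in range(3) if a != axis]
        for fixed in (lo[axis], hi[axis]):        # the two faces normal to this axis
            for a in u:
                for b in u:
                    p = lo.copy()
                    p[axis] = fixed
                    p[others[0]] = lo[others[0]] + a * (hi[others[0]] - lo[others[0]])
                    p[others[1]] = lo[others[1]] + b * (hi[others[1]] - lo[others[1]])
                    seeds.append(p)
    return np.array(seeds)

def build_approx_befm(block_min, block_max, step, h=0.01, max_steps=100000):
    """Advect each face seed through the block with Eulerian steps until it exits,
    recording (entry, exit) pairs; `step` is an integrator such as rk4_step above."""
    lo, hi = np.asarray(block_min, float), np.asarray(block_max, float)
    entries = face_seeds(lo, hi)
    exits = []
    for p0 in entries:
        p = p0.copy()
        for _ in range(max_steps):
            p = step(p, h)
            if np.any(p < lo) or np.any(p > hi):
                break  # the particle has left the block; p approximates BEFM(p0, B)
        exits.append(p)  # seeds that never exit within max_steps keep their last position
    return entries, np.array(exits)
```

The resulting (entries, exits) arrays form the kind of boundary-to-boundary mapping that the earlier lookup sketch interpolates from.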