A parallel framework for simplification of massive meshes

D. Brodsky, J. Pedersen
{"title":"A parallel framework for simplification of massive meshes","authors":"D. Brodsky, J. Pedersen","doi":"10.1109/PVGS.2003.1249038","DOIUrl":null,"url":null,"abstract":"As polygonal models rapidly grow to sizes orders of magnitudes bigger than the memory of commodity workstations, a viable approach to simplifying such models is parallel mesh simplification algorithms. A naive approach that divides the model into a number of equally sized chunks and distributes them to a number of potentially heterogeneous workstations is bound to fail. In severe cases the computation becomes virtually impossible due to significant slow downs because of memory thrashing. We present a general parallel framework for simplification of very large meshes. This framework ensures a near optimal utilization of the computational resources in a cluster of workstations by providing an intelligent partitioning of the model. This partitioning ensures a high quality output, low runtime due to intelligent load balancing, and high parallel efficiency by providing total memory utilization of each machine, thus guaranteeing not to trash the virtual memory system. To test the usability of our framework we have implemented a parallel version of R-Simp [Brodsky and Watson 2000].","PeriodicalId":307148,"journal":{"name":"IEEE Symposium on Parallel and Large-Data Visualization and Graphics, 2003. PVG 2003.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Symposium on Parallel and Large-Data Visualization and Graphics, 2003. PVG 2003.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PVGS.2003.1249038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

As polygonal models rapidly grow to sizes orders of magnitude larger than the memory of commodity workstations, parallel mesh simplification algorithms become a viable approach to simplifying such models. A naive approach that divides the model into equally sized chunks and distributes them across a number of potentially heterogeneous workstations is bound to fail; in severe cases the computation becomes virtually impossible due to significant slowdowns caused by memory thrashing. We present a general parallel framework for the simplification of very large meshes. The framework achieves near-optimal utilization of the computational resources in a cluster of workstations by intelligently partitioning the model. This partitioning yields high-quality output, low runtime through intelligent load balancing, and high parallel efficiency by fully using each machine's memory without thrashing its virtual memory system. To test the usability of our framework we have implemented a parallel version of R-Simp [Brodsky and Watson 2000].
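The key idea the abstract describes is memory-aware partitioning: each workstation receives a share of the mesh proportional to the memory it can dedicate to the job, so no node is pushed into virtual-memory thrashing. The sketch below is only an illustration of that idea under assumed names (`Workstation`, `partition_faces`) and an assumed proportional-allocation rule; it is not the paper's actual partitioning algorithm.

```python
from dataclasses import dataclass

@dataclass
class Workstation:
    name: str
    free_memory_bytes: int  # physical memory available for mesh data (assumption)

def partition_faces(num_faces: int, bytes_per_face: int,
                    nodes: list[Workstation]) -> dict[str, int]:
    """Illustrative heuristic: give each node a chunk of faces proportional to
    its free memory, capped so the chunk always fits in physical memory."""
    total_mem = sum(n.free_memory_bytes for n in nodes)
    assignment: dict[str, int] = {}
    remaining = num_faces
    for n in nodes:
        share = round(num_faces * n.free_memory_bytes / total_mem)
        cap = n.free_memory_bytes // bytes_per_face  # never exceed RAM -> no thrashing
        chunk = min(share, cap, remaining)
        assignment[n.name] = chunk
        remaining -= chunk
    if remaining > 0:
        raise MemoryError("cluster memory is too small to hold the mesh in core")
    return assignment

# Example: three heterogeneous nodes, 200M faces at an assumed 64 bytes per face
nodes = [Workstation("a", 8 << 30), Workstation("b", 4 << 30), Workstation("c", 16 << 30)]
print(partition_faces(200_000_000, 64, nodes))
```

In this sketch the node with 16 GB of free memory receives roughly four times as many faces as the 4 GB node, which captures the load-balancing intent described in the abstract while keeping every chunk resident in physical memory.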