Parallel Tracking and Reconstruction of States in Heuristic Optimization Systems on GPUs

M. Köster, J. Groß, A. Krüger
{"title":"Parallel Tracking and Reconstruction of States in Heuristic Optimization Systems on GPUs","authors":"M. Köster, J. Groß, A. Krüger","doi":"10.1109/PDCAT46702.2019.00016","DOIUrl":null,"url":null,"abstract":"Modern heuristic optimization systems leverage the parallel processing power of Graphics Processing Units (GPUs). Many states are maintained and evaluated in parallel to improve runtime by orders of magnitudes in comparison to purely CPUbased approaches. A well known example is the parallel Monte Carlo tree search, which is often used in combination with more advanced machine-learning methods these days. However, all approaches require different optimization states in memory to update or manipulate variables and observe their behavior over time. Large real-world problems often require a large number of states that are typically limited by the amount of available memory. This is particularly challenging in cases in which older states (that are not currently being evaluated) are still required for backtracking purposes. In this paper, we propose a new general high-level approach to track and reconstruct states in the scope of heuristic optimization systems on GPUs. Our method has a considerably lower memory consumption compared to traditional approaches and scales well with the complexity of the optimization problem.","PeriodicalId":166126,"journal":{"name":"2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PDCAT46702.2019.00016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Modern heuristic optimization systems leverage the parallel processing power of Graphics Processing Units (GPUs). Many states are maintained and evaluated in parallel to improve runtime by orders of magnitude in comparison to purely CPU-based approaches. A well-known example is the parallel Monte Carlo tree search, which is nowadays often used in combination with more advanced machine-learning methods. However, all approaches require different optimization states in memory to update or manipulate variables and observe their behavior over time. Large real-world problems often require a large number of states, which are typically limited by the amount of available memory. This is particularly challenging in cases in which older states (that are not currently being evaluated) are still required for backtracking purposes. In this paper, we propose a new, general, high-level approach to track and reconstruct states in the scope of heuristic optimization systems on GPUs. Our method has a considerably lower memory consumption compared to traditional approaches and scales well with the complexity of the optimization problem.
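The abstract describes the approach only at a high level. As a purely illustrative aid (not the authors' actual method), the minimal CUDA sketch below shows one general way such tracking and reconstruction can trade memory for recomputation: each optimization state is kept as a compact trace of decisions rather than a full variable snapshot, and a kernel rebuilds the concrete state on demand by replaying that trace from a shared root. All names (DecisionTrace, reconstructStates, kMaxDepth, kNumStates) and the placeholder state transition are assumptions made for this sketch.

// Illustrative sketch only; assumes decision-trace replay, not the paper's actual scheme.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int kMaxDepth = 8;   // hypothetical maximum search depth
constexpr int kNumStates = 4;  // hypothetical number of tracked states

// One tracked state = a compact decision history instead of a full snapshot.
struct DecisionTrace {
    int depth;
    int decisions[kMaxDepth];  // e.g. branch indices chosen along the search path
};

// Reconstruct each state by replaying its decisions from a shared root value.
// A "decision" here simply adds (decision + 1) to the running value, standing in
// for whatever state transition a real optimizer would apply.
__global__ void reconstructStates(const DecisionTrace* traces, int rootValue,
                                  int* reconstructed, int numStates) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numStates) return;

    int value = rootValue;
    for (int d = 0; d < traces[idx].depth; ++d) {
        value += traces[idx].decisions[d] + 1;  // placeholder transition
    }
    reconstructed[idx] = value;
}

int main() {
    // Build a few example traces on the host.
    DecisionTrace hostTraces[kNumStates] = {};
    for (int i = 0; i < kNumStates; ++i) {
        hostTraces[i].depth = i + 1;
        for (int d = 0; d <= i; ++d) hostTraces[i].decisions[d] = d;
    }

    DecisionTrace* devTraces;
    int* devOut;
    cudaMalloc(&devTraces, sizeof(hostTraces));
    cudaMalloc(&devOut, kNumStates * sizeof(int));
    cudaMemcpy(devTraces, hostTraces, sizeof(hostTraces), cudaMemcpyHostToDevice);

    // One thread per tracked state reconstructs its value in parallel.
    reconstructStates<<<1, kNumStates>>>(devTraces, /*rootValue=*/0, devOut, kNumStates);

    int hostOut[kNumStates];
    cudaMemcpy(hostOut, devOut, sizeof(hostOut), cudaMemcpyDeviceToHost);
    for (int i = 0; i < kNumStates; ++i)
        printf("state %d reconstructed value: %d\n", i, hostOut[i]);

    cudaFree(devTraces);
    cudaFree(devOut);
    return 0;
}

The point of the sketch is the memory trade-off the abstract claims: storing a bounded decision trace per state is far cheaper than storing full snapshots, at the cost of recomputing a state whenever it is needed again, for example for backtracking.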