Differential Performance Analysis Workflow for Algorithmic Changes

Isabel Thärigen, Joachim Protze, F. Orland, Marc-André Hermanns
DOI: 10.1109/ProTools54808.2021.00007
Published in: 2021 IEEE/ACM International Workshop on Programming and Performance Visualization Tools (ProTools), November 2021
Citations: 0

Abstract

Most performance analysis tools used in HPC focus on the analysis of a single configuration of an application. In this work, we instead present a novel performance analysis workflow, supporting the comparison of varied code versions and running conditions. Different code versions exist for many applications because they comprise parts that can be implemented in various ways or that already exist in third-party libraries, like linear solvers. Additionally, varied running conditions, like scaling the number of execution units or exchanging the input data, can influence the performance behavior. Performance comparison of different application configurations helps determine the best configuration and understand differences in behavior. Such measurements are often not supported directly and are cumbersome to handle manually with current performance measurement and analysis tools. This work presents a workflow based on the JUelich Benchmarking Environment (JUBE) that automatically handles the multitude of measurements and data collation after an initial manual configuration. Furthermore, we introduce diagrams suited for a clear and precise presentation of the collected performance data. The proposed workflow is showcased using two applications, CalculiX and Jukkr. Our application studies highlight that our workflow allows a detailed performance analysis while still being easy to use. We therefore encourage integrating our approach of multi-configuration diagrams into broadly used HPC visual performance exploration tools.
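The kind of multi-configuration sweep the abstract describes can be expressed as a JUBE benchmark definition in JUBE's XML format. The following is a minimal sketch, not the authors' actual configuration: the parameter names (`solver`, `nodes`), the application command line, and the timing pattern are all hypothetical. Comma-separated parameter values make JUBE expand the step into one run per combination, which is exactly the "multitude of measurements" the workflow then collates.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <benchmark name="diff_perf" outpath="bench_runs">
    <!-- Each comma-separated value list is expanded by JUBE into
         separate runs: 2 solver variants x 4 node counts = 8 runs. -->
    <parameterset name="configs">
      <parameter name="solver">builtin,thirdparty</parameter>
      <parameter name="nodes" type="int">1,2,4,8</parameter>
    </parameterset>
    <step name="run">
      <use>configs</use>
      <do>./app --solver $solver --nodes $nodes &gt; out.log</do>
    </step>
    <!-- Extract the wall time from each run's output
         ($jube_pat_fp is JUBE's predefined float pattern) ... -->
    <patternset name="timings">
      <pattern name="walltime" type="float">time: $jube_pat_fp s</pattern>
    </patternset>
    <analyser name="collect">
      <use>timings</use>
      <analyse step="run"><file>out.log</file></analyse>
    </analyser>
    <!-- ... and collate all configurations into a single table. -->
    <result>
      <use>collect</use>
      <table name="overview" style="pretty">
        <column>solver</column>
        <column>nodes</column>
        <column>walltime</column>
      </table>
    </result>
  </benchmark>
</jube>
```

With such a definition, `jube run`, `jube analyse`, and `jube result` would produce one collated table across all configurations; the one-time manual effort is writing this file, after which measurement and data collation are automatic, matching the division of labor the abstract describes.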