Parallel sorting algorithm classification: is manual instrumentation necessary?

IF 6.2 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods)
Michael McKinsey, Dewi Yokelson, Stephanie Brink, Tom Scogland, Olga Pearce
{"title":"Parallel sorting algorithm classification: is manual instrumentation necessary?","authors":"Michael McKinsey ,&nbsp;Dewi Yokelson ,&nbsp;Stephanie Brink ,&nbsp;Tom Scogland ,&nbsp;Olga Pearce","doi":"10.1016/j.future.2025.108170","DOIUrl":null,"url":null,"abstract":"<div><div>Understanding parallel algorithms is crucial for accelerating scientific simulations on complex, distributed memory, high-performance computers. Modern algorithm classification approaches learn semantics directly from source code to differentiate between algorithms, however, accessing source code is not always possible. We can learn about parallel algorithms from observing their performance, as programs running the same algorithms and using the same hardware should exhibit similar performance characteristics. We present an approach to learn algorithm classes from parallel performance data directly in order to classify algorithms without access to the source code. We extend previous work to enable classifying parallel sorting algorithms using automatic instrumentation instead of requiring manual region annotations in the source code. In this work, we design and demonstrate a study for classification of parallel sorting algorithms using parallel performance data collected from automatic instrumentation, and evaluate the performance of our new methodology on classification. We leverage Caliper to collect the performance data, Thicket for our exploratory data analysis (EDA), and PyTorch and Scikit-learn to evaluate the effectiveness of random forests, support vector machines (SVMs), decision trees, neural networks, and logistic regressions on parallel performance data. Additionally, we study noise in parallel performance data, whether the removal of noise and pre-processing of the data is necessary to accurately classify parallel sorting algorithms, and determine the effectiveness of features created from performance data. We demonstrate classification accuracy for these five different models of up to 97.7% across four different parallel algorithm classes.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"176 ","pages":"Article 108170"},"PeriodicalIF":6.2000,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25004649","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Understanding parallel algorithms is crucial for accelerating scientific simulations on complex, distributed-memory, high-performance computers. Modern algorithm classification approaches learn semantics directly from source code to differentiate between algorithms; however, accessing source code is not always possible. We can learn about parallel algorithms by observing their performance, as programs running the same algorithms on the same hardware should exhibit similar performance characteristics. We present an approach that learns algorithm classes directly from parallel performance data in order to classify algorithms without access to the source code. We extend previous work to enable classifying parallel sorting algorithms using automatic instrumentation instead of requiring manual region annotations in the source code. In this work, we design and demonstrate a study for classifying parallel sorting algorithms using parallel performance data collected from automatic instrumentation, and we evaluate the classification performance of our new methodology. We leverage Caliper to collect the performance data, Thicket for our exploratory data analysis (EDA), and PyTorch and Scikit-learn to evaluate the effectiveness of random forests, support vector machines (SVMs), decision trees, neural networks, and logistic regressions on parallel performance data. Additionally, we study noise in parallel performance data, assess whether noise removal and pre-processing of the data are necessary to accurately classify parallel sorting algorithms, and determine the effectiveness of features created from performance data. We demonstrate classification accuracy of up to 97.7% for these five models across four parallel algorithm classes.
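To make the classifier-evaluation step concrete, the following is a minimal sketch of training the Scikit-learn models named in the abstract (random forest, SVM, decision tree, logistic regression) on a tabular feature set assumed to have already been extracted from Caliper profiles, e.g. via Thicket. The CSV path, column names, and label values are hypothetical placeholders, not the paper's actual feature set or pipeline.

```python
# Illustrative sketch only: evaluates several Scikit-learn classifiers on
# hypothetical performance-derived features. One row per run; numeric
# per-region timing features plus an "algorithm" label column.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical feature table exported from the performance-analysis step.
df = pd.read_csv("sorting_performance_features.csv")
X = df.drop(columns=["algorithm"])
y = df["algorithm"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```

Scaling is included inside the SVM and logistic-regression pipelines because those models are sensitive to feature magnitude, whereas the tree-based models are not; a neural-network baseline (as in the paper, via PyTorch) would follow the same train/test split.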
Source journal
CiteScore: 19.90
Self-citation rate: 2.70%
Articles per year: 376
Review time: 10.6 months
Journal introduction: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.