Evaluating Machine Learning Models for Disparate Computer Systems Performance Prediction

Amit Mankodi, Amit Bhatt, B. Chaudhury, Rajat Kumar, Aditya Amrutiya

2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), July 2020. DOI: 10.1109/CONECCT50063.2020.9198512
Performance prediction is an active area of research because of its role in advancing hardware-software co-development. Several empirical machine-learning models, such as linear models, non-linear models, probabilistic models, tree-based models, and neural networks, are used for performance prediction. Furthermore, a prediction model's accuracy may vary depending on the performance data gathered for different software types (compute-bound, memory-bound) and different hardware (simulation-based or physical systems). We have examined fourteen machine-learning models on simulation-based hardware and physical systems by executing several benchmark programs with different computation and data-access patterns. Our results show that the tree-based machine-learning models outperform all other models, with a median absolute percentage error (MedAPE) of less than 5%, followed by the bagging and boosting models that improve weak learners. We have also observed that prediction accuracy is higher on simulation-based hardware than on physical systems, owing to its deterministic nature. Moreover, on physical systems, the prediction accuracy for memory-bound algorithms is higher than for compute-bound algorithms due to manufacturer variability in processors.
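The abstract reports MedAPE below 5% for tree-based models. The sketch below is not the authors' code; it only illustrates how such a comparison could be set up in scikit-learn, scoring a linear and a tree-based regressor with MedAPE on synthetic performance data. The feature set, data generation, and model choices are assumptions made for illustration.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): compare a linear
# model and a tree-based model on synthetic "execution time" data, scored with
# median absolute percentage error (MedAPE).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical hardware/software features: core count, clock (GHz),
# cache size (MB), memory bandwidth (GB/s), problem size.
X = rng.uniform([1, 1.0, 1, 10, 1e3], [64, 4.0, 64, 200, 1e7], size=(500, 5))
# Synthetic execution time: nonlinear in the features, with multiplicative noise.
y = (X[:, 4] / (X[:, 0] * X[:, 1])) * (1 + 10 / X[:, 3]) * rng.normal(1.0, 0.05, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def medape(y_true, y_pred):
    """Median absolute percentage error, in percent."""
    return 100.0 * np.median(np.abs((y_true - y_pred) / y_true))

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    print(f"{name}: MedAPE = {medape(y_test, model.predict(X_test)):.2f}%")
```

On data with this kind of nonlinear structure, the tree-based model typically yields a much lower MedAPE than the linear baseline, which mirrors the ranking the abstract reports across its fourteen models.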