{"title":"MLPP:应用程序性能预测的迁移学习和模型蒸馏探索","authors":"J. Gunasekaran, Cyan Subhra Mishra","doi":"10.1109/nas51552.2021.9605431","DOIUrl":null,"url":null,"abstract":"Performance prediction for applications is quintessential towards detecting malicious hardware and software vulnerabilities. Typically application performance is predicted using the profiling data generated from hardware tools such as linux perf. By leveraging the data, prediction models, both machine learning (ML) based and non ML-based have been proposed. However a majority of these models suffer from either loss in prediction accuracy, very large model sizes, and/or lack of general applicability to different hardware types such as wearables, handhelds, desktops etc. To address the aforementioned inefficiencies, in this paper we proposed MLPP, a machine learning based performance prediction model which can accurately predict application performance, and at the same time be easily transferable to a wide both mobile and desktop hardware platforms by leveraging transfer learning technique. Furthermore, MLPP incorporates model distillation techniques to significantly reduce the model size. Through our extensive experimentation and evaluation we show that MLPP can achieve up to 92.5% prediction accuracy while reducing the model size by up to 3.5 ×.","PeriodicalId":135930,"journal":{"name":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","volume":"147 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MLPP: Exploring Transfer Learning and Model Distillation for Predicting Application Performance\",\"authors\":\"J. Gunasekaran, Cyan Subhra Mishra\",\"doi\":\"10.1109/nas51552.2021.9605431\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Performance prediction for applications is quintessential towards detecting malicious hardware and software vulnerabilities. Typically application performance is predicted using the profiling data generated from hardware tools such as linux perf. By leveraging the data, prediction models, both machine learning (ML) based and non ML-based have been proposed. However a majority of these models suffer from either loss in prediction accuracy, very large model sizes, and/or lack of general applicability to different hardware types such as wearables, handhelds, desktops etc. To address the aforementioned inefficiencies, in this paper we proposed MLPP, a machine learning based performance prediction model which can accurately predict application performance, and at the same time be easily transferable to a wide both mobile and desktop hardware platforms by leveraging transfer learning technique. Furthermore, MLPP incorporates model distillation techniques to significantly reduce the model size. 
Through our extensive experimentation and evaluation we show that MLPP can achieve up to 92.5% prediction accuracy while reducing the model size by up to 3.5 ×.\",\"PeriodicalId\":135930,\"journal\":{\"name\":\"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)\",\"volume\":\"147 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/nas51552.2021.9605431\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Networking, Architecture and Storage (NAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/nas51552.2021.9605431","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MLPP: Exploring Transfer Learning and Model Distillation for Predicting Application Performance
Performance prediction for applications is essential for detecting malicious hardware and software vulnerabilities. Typically, application performance is predicted using profiling data generated by hardware tools such as Linux perf. Leveraging this data, both machine learning (ML) based and non-ML-based prediction models have been proposed. However, a majority of these models suffer from loss in prediction accuracy, very large model sizes, and/or a lack of general applicability across hardware types such as wearables, handhelds, and desktops. To address these inefficiencies, in this paper we propose MLPP, a machine-learning-based performance prediction model that accurately predicts application performance and, by leveraging transfer learning, is easily transferable across a wide range of mobile and desktop hardware platforms. Furthermore, MLPP incorporates model distillation techniques to significantly reduce the model size. Through extensive experimentation and evaluation, we show that MLPP achieves up to 92.5% prediction accuracy while reducing the model size by up to 3.5×.
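The abstract names the building blocks (perf-counter features, model distillation, transfer learning across platforms) without detailing them. The sketch below is a minimal, hypothetical illustration of how such a pipeline could be wired up, not the paper's actual architecture: the network sizes, the blending weight `alpha`, and the synthetic data generator are all assumptions introduced for illustration.

```python
# Hypothetical sketch (not MLPP's actual architecture): train a large teacher
# on desktop profiling data, distill it into a smaller student, then adapt the
# student to a new (mobile) platform by fine-tuning only its output layer.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_FEATURES = 16  # e.g. counts from `perf stat` (cache misses, branches, ...)

def make_synthetic(n):
    """Stand-in for real profiling data: features and a runtime-like target."""
    x = torch.randn(n, N_FEATURES)
    y = (x @ torch.linspace(0.1, 1.0, N_FEATURES)).unsqueeze(1) + 0.1 * torch.randn(n, 1)
    return x, y

def mlp(sizes):
    """Simple fully connected regressor with ReLU hidden layers."""
    layers = []
    for i in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    layers += [nn.Linear(sizes[-2], sizes[-1])]
    return nn.Sequential(*layers)

def train(model, x, y, loss_fn, epochs=200, lr=1e-3, params=None):
    """Full-batch training loop; `params` restricts which parameters are updated."""
    opt = torch.optim.Adam(params if params is not None else model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# 1) Train a large teacher on (plentiful) desktop profiling data.
x_desktop, y_desktop = make_synthetic(2000)
teacher = train(mlp([N_FEATURES, 256, 256, 1]), x_desktop, y_desktop, nn.MSELoss())

# 2) Distill into a much smaller student: blend the ground-truth loss with a
#    loss against the teacher's (soft) predictions.
with torch.no_grad():
    soft_targets = teacher(x_desktop)

alpha = 0.5  # assumed blending weight, not taken from the paper
def distill_loss(pred, y_true):
    return alpha * nn.functional.mse_loss(pred, y_true) + \
           (1 - alpha) * nn.functional.mse_loss(pred, soft_targets)

student = train(mlp([N_FEATURES, 32, 1]), x_desktop, y_desktop, distill_loss)

# 3) Transfer to a mobile platform with little data: update only the student's
#    output layer while the earlier layers keep their desktop-trained weights.
x_mobile, y_mobile = make_synthetic(100)
head = student[-1]
student = train(student, x_mobile, y_mobile, nn.MSELoss(),
                epochs=100, params=head.parameters())
```

The design intent this sketch mirrors is that distillation shrinks the deployed predictor while transfer learning avoids retraining it from scratch for each hardware class; the specific losses, layer sizes, and fine-tuning strategy here are illustrative assumptions only.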