{"title":"Explainable machine learning-based cybersecurity detection using LIME and Secml","authors":"Sawsan Alodibat, Ashraf Ahmad, Mohammad Azzeh","doi":"10.1109/JEEIT58638.2023.10185893","DOIUrl":null,"url":null,"abstract":"The field of Explainable Artificial Intelligence (XAI) has gained significant momentum in recent years. This discipline is focused on developing novel approaches to explain and interpret the functioning of machine learning algorithms. As machine learning techniques increasingly adopt “black box” methods, there is growing confusion about how these algorithms work and make decisions. This uncertainty has made it challenging to implement machine learning in sensitive and critical fields. To address this issue, research in machine learning interpretability has become crucial. One particular area that requires attention is the detection process and classification of malware. Handling and preparing data for malware detection poses significant difficulties for machine learning algorithms. Thus, explainability is a critical requirement in current research. Our research leverages XAI, a novel design of explainable artificial intelligence that uses cybersecurity data to gain knowledge about the composition of malware from the Microsoft large benchmark dataset-Microsoft Malware Classification Challenge (BIG 2015). We use the LIME explainability technique and the Secml python library to develop explainable prediction results about the class of malware. We achieved 94% accuracy using Decision Tree classifier.","PeriodicalId":177556,"journal":{"name":"2023 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT)","volume":"79 11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/JEEIT58638.2023.10185893","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The field of Explainable Artificial Intelligence (XAI) has gained significant momentum in recent years. This discipline focuses on developing novel approaches to explain and interpret how machine learning algorithms work. As machine learning techniques increasingly rely on "black box" models, it has become harder to understand how these algorithms reach their decisions, which makes deploying them in sensitive and critical fields challenging. Research on machine learning interpretability is therefore crucial. One area in particular need of attention is the detection and classification of malware, where handling and preparing data poses significant difficulties for machine learning algorithms; explainability is thus a critical requirement in current research. Our research applies XAI to cybersecurity data to gain knowledge about the composition of malware in the Microsoft Malware Classification Challenge (BIG 2015) benchmark dataset. We use the LIME explainability technique together with the SecML Python library to produce explainable predictions of malware class, achieving 94% accuracy with a Decision Tree classifier.
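The abstract does not include code, but the pipeline it describes (a Decision Tree classifier on BIG 2015 features, explained per-sample with LIME) can be sketched as follows. This is a minimal, hypothetical illustration of the LIME side only: the random feature matrix, the `feat_*`/`family_*` names, and all hyperparameters are placeholders standing in for the paper's preprocessed BIG 2015 data, not the authors' actual setup, and the SecML portion of their workflow is omitted.

```python
# Hypothetical sketch: LIME tabular explanations for a Decision Tree
# malware classifier. Placeholder data stands in for preprocessed
# BIG 2015 features (e.g., byte/opcode statistics); the 9 labels mimic
# the challenge's 9 malware families.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.random((500, 20))                      # placeholder feature matrix
y = rng.integers(0, 9, size=500)               # placeholder family labels 0..8
feature_names = [f"feat_{i}" for i in range(20)]
class_names = [f"family_{i}" for i in range(9)]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train the Decision Tree classifier and report held-out accuracy.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Build a LIME explainer over the training distribution.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=class_names,
    mode="classification",
)

# Explain one test sample: which features pushed the classifier
# toward the malware family it predicted.
pred = int(clf.predict(X_test[:1])[0])
exp = explainer.explain_instance(
    X_test[0], clf.predict_proba, labels=[pred], num_features=5
)
print(exp.as_list(label=pred))
```

`exp.as_list(label=pred)` returns (feature condition, weight) pairs, where positive weights indicate features that locally increased the probability of the predicted family; this per-sample attribution is the kind of explainable prediction the abstract refers to.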