Through the static: Demystifying malware visualization via explainability

IF 3.8 | CAS Zone 2, Computer Science | JCR Q2, COMPUTER SCIENCE, INFORMATION SYSTEMS
Matteo Brosolo, Vinod P., Mauro Conti
{"title":"通过静态:通过可解释性揭开恶意软件可视化的神秘面纱","authors":"Matteo Brosolo,&nbsp;Vinod P.,&nbsp;Mauro Conti","doi":"10.1016/j.jisa.2025.104063","DOIUrl":null,"url":null,"abstract":"<div><div>Security researchers face growing challenges in rapidly identifying and classifying malware strains for effective protection. While Convolutional Neural Networks (CNNs) have emerged as powerful visual classifiers for this task, critical issues of robustness and explainability, well-studied in domains like medicine, remain underaddressed in malware analysis. Although these models achieve strong performance without manual feature engineering, their replicability and decision-making processes remain poorly understood. Two technical barriers have limited progress: first, the lack of obvious methods for selecting and evaluating explainability techniques due to their inherent complexity, and second the substantial computational resources required for replicating and tuning these models across diverse environments, which requires extensive computational power and time investments often beyond typical research constraints. Our study addresses these gaps through comprehensive replication of six CNN architectures, evaluating both performance and explainability using Class Activation Maps (CAMs) including GradCAM and HiResCAM. We conduct experiments across standard datasets (MalImg, Big2015) and our new VX-Zoo collection, systematically comparing how different models interpret inputs. Our analysis reveals distinct patterns in malware family identification while providing concrete explanations for CNN decisions. Furthermore, we demonstrate how these interpretability insights can enhance Visual Transformers, achieving F1-score yielding substantial improvements in F1 score, ranging from 2% to 8%, across the datasets compared to benchmark values.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"91 ","pages":"Article 104063"},"PeriodicalIF":3.8000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Through the static: Demystifying malware visualization via explainability\",\"authors\":\"Matteo Brosolo,&nbsp;Vinod P.,&nbsp;Mauro Conti\",\"doi\":\"10.1016/j.jisa.2025.104063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Security researchers face growing challenges in rapidly identifying and classifying malware strains for effective protection. While Convolutional Neural Networks (CNNs) have emerged as powerful visual classifiers for this task, critical issues of robustness and explainability, well-studied in domains like medicine, remain underaddressed in malware analysis. Although these models achieve strong performance without manual feature engineering, their replicability and decision-making processes remain poorly understood. Two technical barriers have limited progress: first, the lack of obvious methods for selecting and evaluating explainability techniques due to their inherent complexity, and second the substantial computational resources required for replicating and tuning these models across diverse environments, which requires extensive computational power and time investments often beyond typical research constraints. Our study addresses these gaps through comprehensive replication of six CNN architectures, evaluating both performance and explainability using Class Activation Maps (CAMs) including GradCAM and HiResCAM. 
We conduct experiments across standard datasets (MalImg, Big2015) and our new VX-Zoo collection, systematically comparing how different models interpret inputs. Our analysis reveals distinct patterns in malware family identification while providing concrete explanations for CNN decisions. Furthermore, we demonstrate how these interpretability insights can enhance Visual Transformers, achieving F1-score yielding substantial improvements in F1 score, ranging from 2% to 8%, across the datasets compared to benchmark values.</div></div>\",\"PeriodicalId\":48638,\"journal\":{\"name\":\"Journal of Information Security and Applications\",\"volume\":\"91 \",\"pages\":\"Article 104063\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Information Security and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214212625001000\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212625001000","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Security researchers face growing challenges in rapidly identifying and classifying malware strains for effective protection. While Convolutional Neural Networks (CNNs) have emerged as powerful visual classifiers for this task, critical issues of robustness and explainability, well studied in domains like medicine, remain underaddressed in malware analysis. Although these models achieve strong performance without manual feature engineering, their replicability and decision-making processes remain poorly understood. Two technical barriers have limited progress: first, the lack of obvious methods for selecting and evaluating explainability techniques, owing to their inherent complexity; and second, the substantial computational resources required to replicate and tune these models across diverse environments, demanding power and time investments often beyond typical research constraints. Our study addresses these gaps through a comprehensive replication of six CNN architectures, evaluating both performance and explainability using Class Activation Maps (CAMs), including GradCAM and HiResCAM. We conduct experiments across standard datasets (MalImg, Big2015) and our new VX-Zoo collection, systematically comparing how different models interpret inputs. Our analysis reveals distinct patterns in malware family identification while providing concrete explanations for CNN decisions. Furthermore, we demonstrate how these interpretability insights can enhance Visual Transformers, yielding substantial F1-score improvements of 2% to 8% across the datasets compared to benchmark values.
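To make the pipeline the abstract describes more concrete, below is a minimal sketch, assuming PyTorch, of its two core steps: a MalImg-style conversion of raw file bytes into a grayscale image, and a plain GradCAM pass over a CNN classifier. This is illustrative only and not the authors' code: ResNet18 stands in for the six replicated architectures, "sample.exe" is a hypothetical input path, and the image width and target layer are arbitrary choices.

```python
# Minimal sketch (not the paper's code) of malware visualization + GradCAM.
# Assumptions: PyTorch/torchvision installed; "sample.exe" is a hypothetical
# path; ResNet18 and layer4 are illustrative stand-ins for the paper's CNNs.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models

def bytes_to_image(path, width=256):
    """MalImg-style conversion: raw file bytes -> 2-D grayscale array."""
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    return data[: rows * width].reshape(rows, width)

def gradcam(model, target_layer, x, class_idx=None):
    """Plain GradCAM: weight each activation channel by its mean gradient."""
    acts, grads = [], []
    h_fwd = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out))
    h_bwd = target_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.append(gout[0]))
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h_fwd.remove(); h_bwd.remove()
    a, g = acts[0], grads[0]                      # both (1, C, H, W)
    weights = g.mean(dim=(2, 3), keepdim=True)    # global-average-pooled grads
    cam = F.relu((weights * a).sum(dim=1))        # (1, H, W)
    # HiResCAM variant: cam = F.relu((g * a).sum(dim=1))  (element-wise product)
    cam = cam / (cam.max() + 1e-8)
    return cam.squeeze(0).detach(), class_idx

model = models.resnet18(weights=None)  # would be fine-tuned on MalImg, etc.
model.eval()
img = bytes_to_image("sample.exe")                    # hypothetical sample path
x = torch.from_numpy(img.astype(np.float32) / 255.0)  # (H, W) scaled to [0, 1]
x = x.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)    # (1, 3, H, W) for ResNet
x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
cam, pred = gradcam(model, model.layer4, x)
print(f"predicted family index: {pred}, CAM shape: {tuple(cam.shape)}")
```

As the inline comment notes, switching to HiResCAM needs only a one-line change: the element-wise gradient-activation product replaces the globally pooled channel weights, which the HiResCAM authors argue produces maps more faithful to the regions the model actually used.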
Source journal
Journal of Information Security and Applications
Category: Computer Science - Computer Networks and Communications
CiteScore: 10.90
Self-citation rate: 5.40%
Articles per year: 206
Review time: 56 days
Journal introduction: Journal of Information Security and Applications (JISA) focuses on the original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view on modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.