Leveraging activation and optimisation layers as dynamic strategies in the multi-task fuzzing scheme

IF 4.1 · CAS Zone 2 (Computer Science) · Q1, Computer Science, Hardware & Architecture
Sadegh Bamohabbat Chafjiri, Phil Legg, Michail-Antisthenis Tsompanas, Jun Hong
{"title":"Leveraging activation and optimisation layers as dynamic strategies in the multi-task fuzzing scheme","authors":"Sadegh Bamohabbat Chafjiri,&nbsp;Phil Legg,&nbsp;Michail-Antisthenis Tsompanas,&nbsp;Jun Hong","doi":"10.1016/j.csi.2025.104011","DOIUrl":null,"url":null,"abstract":"<div><div>Fuzzing is a common technique for identifying vulnerabilities in software. Recent approaches, like She et al.’s Multi-Task Fuzzing (MTFuzz), use neural networks to improve fuzzing efficiency. However, key elements like network architecture and hyperparameter tuning are still not well-explored. Factors like activation layers, optimisation function design, and vanishing gradient strategies can significantly impact fuzzing results by improving test case selection. This paper delves into these aspects to improve neural network-driven fuzz testing.</div><div>We focus on three key neural network parameters to improve fuzz testing: the Leaky Rectified Linear Unit (LReLU) activation, Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimisation, and sensitivity analysis. LReLU adds non-linearity, aiding feature extraction, while Nadam helps to improve weight updates by considering both current and future gradient directions. Sensitivity analysis optimises layer selection for gradient calculation, enhancing fuzzing efficiency.</div><div>Based on these insights, we propose LMTFuzz, a novel fuzzing scheme optimised for these Machine Learning (ML) strategies. We explore the individual and combined effects of LReLU, Nadam, and sensitivity analysis, as well as their hybrid configurations, across six different software targets. Experimental results demonstrate that LReLU, individually or when paired with sensitivity analysis, significantly enhances fuzz testing performance. However, when combined with Nadam, LReLU shows improvement on some targets, though less pronounced than its combination with sensitivity analysis. This combination improves accuracy, reduces loss, and increases edge coverage, with improvements of up to 23.8%. Furthermore, it leads to a significant increase in unique bug detection, with some targets detecting up to 2.66 times more bugs than baseline methods.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104011"},"PeriodicalIF":4.1000,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Standards & Interfaces","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0920548925000406","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
引用次数: 0

Abstract

Fuzzing is a common technique for identifying vulnerabilities in software. Recent approaches, like She et al.’s Multi-Task Fuzzing (MTFuzz), use neural networks to improve fuzzing efficiency. However, key elements like network architecture and hyperparameter tuning are still not well-explored. Factors like activation layers, optimisation function design, and vanishing gradient strategies can significantly impact fuzzing results by improving test case selection. This paper delves into these aspects to improve neural network-driven fuzz testing.
We focus on three key neural network parameters to improve fuzz testing: the Leaky Rectified Linear Unit (LReLU) activation, Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimisation, and sensitivity analysis. LReLU adds non-linearity, aiding feature extraction, while Nadam helps to improve weight updates by considering both current and future gradient directions. Sensitivity analysis optimises layer selection for gradient calculation, enhancing fuzzing efficiency.
Based on these insights, we propose LMTFuzz, a novel fuzzing scheme optimised for these Machine Learning (ML) strategies. We explore the individual and combined effects of LReLU, Nadam, and sensitivity analysis, as well as their hybrid configurations, across six different software targets. Experimental results demonstrate that LReLU, individually or when paired with sensitivity analysis, significantly enhances fuzz testing performance. However, when combined with Nadam, LReLU shows improvement on some targets, though less pronounced than its combination with sensitivity analysis. This combination improves accuracy, reduces loss, and increases edge coverage, with improvements of up to 23.8%. Furthermore, it leads to a significant increase in unique bug detection, with some targets detecting up to 2.66 times more bugs than baseline methods.
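
To make the three ingredients above concrete, the sketch below wires them into a small coverage-prediction network: LeakyReLU activations in the hidden layers (identity for positive inputs, a small negative slope otherwise), the Nadam optimiser for training, and a simple gradient-based score that ranks hidden layers by how strongly the predicted edge coverage reacts to their activations. This is a minimal illustration assuming a TensorFlow/Keras setup; the layer sizes, layer names (hidden_1, act_1, ...), the single coverage head, and the mean-absolute-gradient sensitivity metric are placeholders chosen here for illustration, not the authors' MTFuzz or LMTFuzz implementation.

```python
# Minimal sketch (TensorFlow/Keras assumed): LReLU activations, the Nadam
# optimiser, and a gradient-based layer-sensitivity ranking. Layer sizes,
# layer names and the single coverage head are illustrative placeholders.
import numpy as np
import tensorflow as tf


def build_model(input_dim: int, n_edges: int) -> tf.keras.Model:
    """Map test-case bytes to a predicted edge-coverage bitmap."""
    inputs = tf.keras.Input(shape=(input_dim,), name="test_case_bytes")
    x = tf.keras.layers.Dense(512, name="hidden_1")(inputs)
    # LReLU: identity for positive inputs, a small negative slope otherwise,
    # so neurons keep a gradient instead of "dying" as with plain ReLU.
    x = tf.keras.layers.LeakyReLU(name="act_1")(x)
    x = tf.keras.layers.Dense(256, name="hidden_2")(x)
    x = tf.keras.layers.LeakyReLU(name="act_2")(x)
    outputs = tf.keras.layers.Dense(n_edges, activation="sigmoid",
                                    name="edge_coverage")(x)
    model = tf.keras.Model(inputs, outputs)
    # Nadam = Adam with Nesterov momentum: the step "looks ahead" along the
    # accumulated momentum direction before applying the adaptive update.
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model


def rank_layers_by_sensitivity(model, x_batch, layer_names=("act_1", "act_2")):
    """Score each hidden layer by how strongly the predicted coverage reacts
    to its activations (mean absolute gradient), highest first."""
    probe = tf.keras.Model(
        model.input,
        [model.get_layer(n).output for n in layer_names] + [model.output])
    with tf.GradientTape() as tape:
        *acts, preds = probe(x_batch)
        score = tf.reduce_sum(preds)  # scalar summary of predicted coverage
    grads = tape.gradient(score, acts)
    sens = {n: float(tf.reduce_mean(tf.abs(g)))
            for n, g in zip(layer_names, grads)}
    return sorted(sens.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    model = build_model(input_dim=1024, n_edges=4096)
    x = np.random.randint(0, 256, size=(32, 1024)).astype("float32") / 255.0
    print(rank_layers_by_sensitivity(model, x))  # e.g. select the top layer
```

In this sketch the most sensitive layer would be the one whose activations are then used for gradient-guided mutation of test cases; the actual selection criterion used in the paper may differ.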
Source Journal
Computer Standards & Interfaces (Engineering & Technology / Computer Science: Software Engineering)
CiteScore: 11.90
Self-citation rate: 16.00%
Articles published: 67
Review time: 6 months
Journal description: The quality of software, well-defined interfaces (hardware and software), the process of digitalisation, and accepted standards in these fields are essential for building and exploiting complex computing, communication, multimedia and measuring systems. Standards can simplify the design and construction of individual hardware and software components and help to ensure satisfactory interworking. Computer Standards & Interfaces is an international journal dealing specifically with these topics. The journal:
• Provides information about activities and progress on the definition of computer standards, software quality, interfaces and methods, at national, European and international levels
• Publishes critical comments on standards and standards activities
• Disseminates users' experiences and case studies in the application and exploitation of established or emerging standards, interfaces and methods
• Offers a forum for discussion on actual projects, standards, interfaces and methods by recognised experts
• Stimulates relevant research by providing a specialised refereed medium.