DeFinder: Error-sensitive testing of deep neural networks via vulnerability interpretation

Impact Factor: 8.0 · CAS Region 2 (Computer Science) · Q1, Computer Science, Hardware & Architecture
Aoshuang Ye, Shilin Zhang, Benxiao Tang, Jianpeng Ke, Yiru Zhao, Tao Peng
{"title":"DeFinder: Error-sensitive testing of deep neural networks via vulnerability interpretation","authors":"Aoshuang Ye ,&nbsp;Shilin Zhang ,&nbsp;Benxiao Tang ,&nbsp;Jianpeng Ke ,&nbsp;Yiru Zhao ,&nbsp;Tao Peng","doi":"10.1016/j.jnca.2025.104212","DOIUrl":null,"url":null,"abstract":"<div><div>DNN testing evaluates the vulnerability of neural networks through <em>adversarial test cases</em>. The developers implement minor perturbations to the seed inputs to generate test cases, which are guided by meticulously designed testing criteria. Nevertheless, current coverage-guided testing methods rely on covering model states rather than analyzing the influence of seed inputs on inducing erroneous behaviors. In this paper, we propose a novel DNN testing method called DeFinder, which generates error-sensitive tests by implementing an explainable framework for neural networks to establish correlations between model vulnerability and seed inputs. By systematically analyzing vulnerable regions within seed inputs, DeFinder significantly improves the test suite’s ability to maximize test coverage and expose errors. To validate the effectiveness of DeFinder, we conduct comprehensive experiments with nine deep neural network models from two popular computer vision datasets. We compare the proposed method with several state-of-the-art DNN testing tools. The experimental results demonstrate that DeFinder improves the error-triggering ratio by up to 58% and increases test coverage by up to 4.3%. For reproducibility, the artifact for this work is available at public repository: <span><span>https://github.com/Konatazz/DeFinder</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"241 ","pages":"Article 104212"},"PeriodicalIF":8.0000,"publicationDate":"2025-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Network and Computer Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1084804525001092","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

DNN testing evaluates the vulnerability of neural networks through adversarial test cases. Developers apply minor perturbations to seed inputs to generate test cases, guided by carefully designed testing criteria. However, current coverage-guided testing methods focus on covering model states rather than analyzing how seed inputs induce erroneous behaviors. In this paper, we propose a novel DNN testing method called DeFinder, which generates error-sensitive tests by applying an explainable framework for neural networks to establish correlations between model vulnerability and seed inputs. By systematically analyzing vulnerable regions within seed inputs, DeFinder significantly improves the test suite's ability to maximize test coverage and expose errors. To validate the effectiveness of DeFinder, we conduct comprehensive experiments with nine deep neural network models on two popular computer vision datasets, comparing the proposed method with several state-of-the-art DNN testing tools. The experimental results demonstrate that DeFinder improves the error-triggering ratio by up to 58% and increases test coverage by up to 4.3%. For reproducibility, the artifact for this work is available at a public repository: https://github.com/Konatazz/DeFinder.
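The abstract does not describe DeFinder's algorithm in detail, but the core idea it states, namely using an interpretation of the model to locate vulnerable regions within a seed input and concentrating perturbations there when generating test cases, can be conveyed with a minimal sketch. The sketch below is an illustration under stated assumptions, not the authors' implementation: it uses input-gradient saliency as the interpretation signal, and all names and parameters (vulnerable_region_mask, perturb_vulnerable_region, top_k_ratio, epsilon) are hypothetical.

```python
# Illustrative sketch only -- NOT the DeFinder implementation.
# Assumption: input-gradient saliency stands in for the paper's
# "vulnerability interpretation"; top_k_ratio and epsilon are made up.
import torch
import torch.nn.functional as F

def vulnerable_region_mask(model, seed, label, top_k_ratio=0.05):
    """Mark the top-k% pixels with the largest gradient magnitude."""
    seed = seed.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(seed.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    saliency = seed.grad.abs().sum(dim=0)           # aggregate over channels -> H x W
    k = max(1, int(top_k_ratio * saliency.numel()))
    threshold = saliency.flatten().topk(k).values.min()
    return (saliency >= threshold).float()          # binary H x W mask

def perturb_vulnerable_region(model, seed, label, epsilon=0.03):
    """Apply a signed-gradient perturbation restricted to the masked region."""
    mask = vulnerable_region_mask(model, seed, label)
    seed = seed.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(seed.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    test_case = seed + epsilon * seed.grad.sign() * mask   # perturb only vulnerable pixels
    return test_case.clamp(0.0, 1.0).detach()
```

In the actual method, the interpretation signal and the perturbation strategy are presumably more elaborate; the sketch only conveys the intuition of steering perturbations toward regions the interpretation marks as error-sensitive, rather than perturbing the whole seed uniformly.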
Source Journal
Journal of Network and Computer Applications
Category: Engineering & Technology - Computer Science: Interdisciplinary Applications
CiteScore: 21.50
Self-citation rate: 3.40%
Articles published: 142
Review time: 37 days
Journal description: The Journal of Network and Computer Applications welcomes research contributions, surveys, and notes in all areas relating to computer networks and their applications. Sample topics include new design techniques; interesting or novel applications, components, or standards; computer networks and tools such as the WWW; emerging standards for Internet protocols; wireless networks; mobile computing; emerging computing models such as cloud computing and grid computing; and applications of networked systems for remote collaboration and telemedicine. The journal is abstracted and indexed in Scopus, Engineering Index, Web of Science, Science Citation Index Expanded, and INSPEC.