DeepLogic: Priority Testing of Deep Learning Through Interpretable Logic Units

Impact Factor 1.6 · JCR Q3 (ENGINEERING, ELECTRICAL & ELECTRONIC) · CAS Tier 4, Computer Science
Chenhao Lin;Xingliang Zhang;Chao Shen
Chinese Journal of Electronics, vol. 33, no. 4, pp. 948–964, published 2024-07-22
DOI: 10.23919/cje.2022.00.451
https://ieeexplore.ieee.org/document/10606210/
Citations: 0

Abstract

With the increasing deployment of deep learning-based systems in various scenarios, it is becoming important to test and evaluate deep learning models thoroughly to improve their interpretability and robustness. Recent studies have proposed various criteria and strategies for deep neural network (DNN) testing. However, they rarely test the robustness of DNN models effectively and lack interpretability. This paper proposes a new priority testing criterion, called DeepLogic, to analyze the robustness of DNN models from the perspective of model interpretability. We first define the neural units in a DNN with the highest average activation probability as "interpretable logic units". We analyze the changes in these units under adversarial attacks to evaluate the model's robustness. The interpretable logic units of the inputs are then taken as context attributes, and the probability distribution of the model's softmax layer is taken as internal attributes, to establish a comprehensive test prioritization framework. The context and internal factors are fused with weights, and the test cases are sorted according to the resulting priority. Experimental results on four popular DNN models using eight testing metrics show that DeepLogic significantly outperforms existing state-of-the-art methods.
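The abstract describes two ingredients: selecting the neurons with the highest average activation probability as "interpretable logic units" (the context attribute), and using the softmax output distribution as the internal attribute, then fusing the two with weights to rank test cases. The paper's exact formulas are not given in the abstract, so the sketch below is illustrative only: the activation-probability estimate, the uncertainty measure, and the fusion weight `alpha` are all assumptions, not the authors' definitions.

```python
import numpy as np

def interpretable_logic_units(activations, k):
    """Pick the k neurons with the highest average activation probability
    over a reference set (a stand-in for the paper's 'interpretable logic
    units'; the probability estimate here is an assumption)."""
    # activations: (num_samples, num_neurons) post-activation values
    act_prob = (activations > 0).mean(axis=0)  # fraction of inputs that activate each neuron
    return np.argsort(act_prob)[-k:]

def priority_scores(unit_acts, softmax_probs, alpha=0.5):
    """Fuse a context score (activation of the logic units) with an
    internal score (softmax uncertainty) into one priority per test case."""
    # context attribute: mean activation of the selected logic units per input
    context = unit_acts.mean(axis=1)
    # internal attribute: 1 - max softmax probability (higher = less confident)
    internal = 1.0 - softmax_probs.max(axis=1)
    return alpha * context + (1.0 - alpha) * internal

# toy usage on random data
rng = np.random.default_rng(0)
acts = rng.random((100, 32))                  # activations of 32 neurons on 100 inputs
units = interpretable_logic_units(acts, k=5)
probs = rng.dirichlet(np.ones(10), size=100)  # stand-in softmax outputs, 10 classes
scores = priority_scores(acts[:, units], probs)
order = np.argsort(-scores)                   # test cases in descending priority
```

Sorting by the fused score surfaces inputs that both strongly engage the logic units and leave the model uncertain, which is the intuition the abstract gives for prioritizing test cases.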
Source journal: Chinese Journal of Electronics (Engineering: Electrical & Electronic)
CiteScore: 3.70
Self-citation rate: 16.70%
Articles per year: 342
Review time: 12.0 months
Journal description: CJE focuses on the emerging fields of electronics, publishing innovative and transformative research papers. Most of the papers published in CJE are from universities and research institutes, presenting their innovative research results. Both theoretical and practical contributions are encouraged, and original research papers reporting novel solutions to hot topics in electronics are strongly recommended.