MetaA: Multi-Dimensional Evaluation of Testing Ability via Adversarial Examples in Deep Learning

Siqi Gu, Jiawei Liu, Zhan-wei Hui, Wenhong Liu, Zhenyu Chen
{"title":"MetaA: Multi-Dimensional Evaluation of Testing Ability via Adversarial Examples in Deep Learning","authors":"Siqi Gu, Jiawei Liu, Zhan-wei Hui, Wenhong Liu, Zhenyu Chen","doi":"10.1109/QRS57517.2022.00104","DOIUrl":null,"url":null,"abstract":"Deep learning (DL) has shown superior performance in many areas, making the quality assurance of DL-based software particularly important. Adversarial examples are generated by deliberately adding subtle perturbations in input samples and can easily attack less reliable DL models. Most existing works only utilize a single metric to evaluate the generated adversarial examples, such as attacking success rate or structure similarity measure. The problem is that they cannot avoid extreme testing situations and provide multifaceted evaluation results.This paper presents MetaA, a multi-dimensional evaluation framework for testing ability of adversarial examples in deep learning. Evaluating the testing ability represents measuring the testing performance to make improvements. Specifically, MetaA performs comprehensive validation on generating adversarial examples from two horizontal and five vertical dimensions. We design MetaA according to the definition of the adversarial examples and the issue mentioned in [1] that how to enrich the evaluation dimension rather than merely quantifying the improvement of DL and software.We conduct several analyses and comparative experiments vertically and horizontally to evaluate the reliability and effectiveness of MetaA. The experimental results show that MetaA can avoid speculation and reach agreement among different indicators when they reflect inconsistencies. The detailed and comprehensive analysis of evaluation results can further guide the optimization of adversarial examples and the quality assurance of DL-based software.","PeriodicalId":143812,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QRS57517.2022.00104","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Deep learning (DL) has shown superior performance in many areas, making the quality assurance of DL-based software particularly important. Adversarial examples are generated by deliberately adding subtle perturbations to input samples and can easily attack less reliable DL models. Most existing works use only a single metric to evaluate the generated adversarial examples, such as the attack success rate or the structural similarity measure. The problem is that a single metric can neither guard against extreme testing situations nor provide multifaceted evaluation results. This paper presents MetaA, a multi-dimensional framework for evaluating the testing ability of adversarial examples in deep learning. Evaluating testing ability means measuring testing performance in order to make improvements. Specifically, MetaA performs comprehensive validation of generated adversarial examples along two horizontal and five vertical dimensions. We design MetaA according to the definition of adversarial examples and the issue raised in [1]: how to enrich the evaluation dimensions rather than merely quantify the improvement of DL models and software. We conduct several vertical and horizontal analyses and comparative experiments to evaluate the reliability and effectiveness of MetaA. The experimental results show that MetaA can avoid speculation and reconcile different indicators when they disagree. The detailed and comprehensive analysis of its evaluation results can further guide the optimization of adversarial examples and the quality assurance of DL-based software.
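The single-metric evaluation the abstract criticizes is easy to picture in code. The following is a minimal sketch, not taken from the paper: the model, the FGSM-style attack, and the epsilon budget are illustrative assumptions. It shows how adversarial examples are generated by adding a subtle gradient-sign perturbation and then scored by attack success rate alone.

    # Minimal sketch (not from the paper): FGSM adversarial examples
    # scored by a single metric, attack success rate -- the kind of
    # one-dimensional evaluation MetaA argues is insufficient.
    # The model, epsilon, and batch (x, y) are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Perturb x along the sign of the loss gradient to push the
        model toward misclassification (Fast Gradient Sign Method)."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()   # subtle, bounded perturbation
        return x_adv.clamp(0.0, 1.0).detach() # keep pixels in valid range

    def attack_success_rate(model, x, y, epsilon=0.03):
        """Single-metric evaluation: the fraction of originally correct
        inputs that the adversarial perturbation flips."""
        model.eval()
        with torch.no_grad():
            clean_pred = model(x).argmax(dim=1)
        correct = clean_pred == y
        x_adv = fgsm_attack(model, x[correct], y[correct], epsilon)
        with torch.no_grad():
            adv_pred = model(x_adv).argmax(dim=1)
        return (adv_pred != y[correct]).float().mean().item()

For the imperceptibility side, a structural similarity score (for example, skimage.metrics.structural_similarity from scikit-image) would grade the same examples on visual closeness alone. MetaA's point is that neither number in isolation characterizes testing ability, which motivates its two horizontal and five vertical evaluation dimensions.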