A dual boundary robust verification method for neural networks

Yueyue Yang, Qun Fang, Yajing Tang, Yuchen Feng, Yihui Yan, Yong Xu
The Journal of Supercomputing · DOI: 10.1007/s11227-024-06402-4 · Published 2024-08-15 · Journal Article · Citations: 0

Abstract


As a prominent and appealing technology, neural networks have been widely applied in numerous fields, one of the most notable applications being autonomous driving. However, the intrinsic structure of neural networks presents a black-box problem, leading to emergent security issues in driving and networking that remain unresolved. To this end, we introduce a novel method for the robust verification of neural networks, named Dual Boundary Robust (DBR). Specifically, we creatively integrate adversarial attack design, including perturbations such as outliers, with outer-boundary defenses, in which the inner and outer boundaries are combined with methods such as floating-point polyhedra and boundary intervals. We demonstrate the robustness of DBR's anti-interference ability and security performance, and show that it mitigates the black-box-induced emergent security problems of neural networks. Compared with traditional methods, the outer boundary of DBR, combined with convex relaxation theory, can appropriately tighten the boundary intervals used in neural networks, which significantly reduces the loose bounds that can lead to severe security issues and yields better robustness. Furthermore, extensive experimentation on individually trained neural networks validates the flexibility and scalability of DBR in safeguarding larger regions.
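The abstract does not give the details of DBR, but the "boundary interval" idea it builds on is standard interval bound propagation: an input perturbation region is pushed through the network layer by layer, and the classification is certified robust if the true logit's lower bound dominates every other logit's upper bound. The sketch below is a generic illustration of that baseline (not the authors' DBR method); the network shape, `eps`, and function names are illustrative assumptions.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    # Propagate an input interval [lo, hi] through an affine layer W @ x + b.
    # Splitting W into positive and negative parts keeps each bound sound.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so applying it to both endpoints preserves soundness.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def verify_robust(layers, x, eps, true_label):
    # Certify that every input in the L-infinity ball of radius eps around x
    # is classified as true_label, using interval bound propagation.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(W, b, lo, hi)
        if i < len(layers) - 1:  # ReLU on all hidden layers, not the logits
            lo, hi = interval_relu(lo, hi)
    # Robust if the true logit's lower bound exceeds every rival's upper bound.
    rivals = [hi[j] for j in range(len(hi)) if j != true_label]
    return bool(lo[true_label] > max(rivals))
```

Pure interval propagation like this gives sound but often loose bounds; convex relaxations, as invoked in the abstract, tighten them by bounding each ReLU with linear constraints instead of a box.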
