{"title":"神经网络的双边界稳健验证方法","authors":"Yueyue Yang, Qun Fang, Yajing Tang, Yuchen Feng, Yihui Yan, Yong Xu","doi":"10.1007/s11227-024-06402-4","DOIUrl":null,"url":null,"abstract":"<p>As a prominent and appealing technology, neural networks have been widely applied in numerous fields, with one of the most notable applications being autonomous driving. However, the intrinsic structure of neural networks presents a black box problem, leading to emergent security issues in driving and networking that remain unresolved. To this end, we introduce a novel method for robust validation of neural networks, named as Dual Boundary Robust (DBR). Specifically, we creatively integrate adversarial attack design, including perturbations like outliers, with outer boundary defenses, in which the inner and outer boundaries are combined with methods such as floating-point polyhedra and boundary intervals. Demonstrate the robustness of the DBR’s anti-interference ability and security performance, and to reduce the black box-induced emergent security problems of neural networks. Compared with the traditional method, the outer boundary of DBR combined with the theory of convex relaxation can appropriately tighten the boundary interval of DBR used in neural networks, which significantly reduces the over-tightening of the potential for severe security issues and has better robustness. Furthermore, extensive experimentation on individually trained neural networks validates the flexibility and scalability of DBR in safeguarding larger regions.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"32 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A dual boundary robust verification method for neural networks\",\"authors\":\"Yueyue Yang, Qun Fang, Yajing Tang, Yuchen Feng, Yihui Yan, Yong Xu\",\"doi\":\"10.1007/s11227-024-06402-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>As a prominent and appealing technology, neural networks have been widely applied in numerous fields, with one of the most notable applications being autonomous driving. However, the intrinsic structure of neural networks presents a black box problem, leading to emergent security issues in driving and networking that remain unresolved. To this end, we introduce a novel method for robust validation of neural networks, named as Dual Boundary Robust (DBR). Specifically, we creatively integrate adversarial attack design, including perturbations like outliers, with outer boundary defenses, in which the inner and outer boundaries are combined with methods such as floating-point polyhedra and boundary intervals. Demonstrate the robustness of the DBR’s anti-interference ability and security performance, and to reduce the black box-induced emergent security problems of neural networks. Compared with the traditional method, the outer boundary of DBR combined with the theory of convex relaxation can appropriately tighten the boundary interval of DBR used in neural networks, which significantly reduces the over-tightening of the potential for severe security issues and has better robustness. 
Furthermore, extensive experimentation on individually trained neural networks validates the flexibility and scalability of DBR in safeguarding larger regions.</p>\",\"PeriodicalId\":501596,\"journal\":{\"name\":\"The Journal of Supercomputing\",\"volume\":\"32 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Journal of Supercomputing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s11227-024-06402-4\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Supercomputing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11227-024-06402-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: As a prominent and appealing technology, neural networks have been widely applied in numerous fields, one of the most notable being autonomous driving. However, the intrinsic structure of a neural network is a black box, which gives rise to emergent security issues in driving and networking that remain unresolved. To this end, we introduce a novel method for the robust verification of neural networks, named Dual Boundary Robust (DBR). Specifically, we integrate adversarial attack design, including perturbations such as outliers, with outer-boundary defenses, combining the inner and outer boundaries through methods such as floating-point polyhedra and boundary intervals. We demonstrate the robustness of DBR's anti-interference ability and its security performance, and show that it reduces the emergent security problems induced by the black-box nature of neural networks. Compared with traditional methods, combining DBR's outer boundary with convex relaxation theory appropriately tightens the boundary intervals used in verification, which significantly reduces the potential for severe security issues caused by overly loose bounds and yields better robustness. Furthermore, extensive experiments on individually trained neural networks validate the flexibility and scalability of DBR in safeguarding larger regions.
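The abstract does not reproduce the paper's implementation, so the following is a minimal sketch, assuming a NumPy setting, of the kind of boundary-interval computation (interval bound propagation) it alludes to: elementwise lower and upper bounds are pushed through each affine and ReLU layer, and a perturbation region is certified if the predicted class's lower bound dominates every other class's upper bound. All function and variable names here are illustrative assumptions, not the authors' API.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate elementwise bounds [lo, hi] through y = W @ x + b.

    Splitting W into positive and negative parts gives the tightest
    interval bound for an affine map (standard interval arithmetic).
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify(x, eps, layers):
    """Sound-but-incomplete check that every input in the L-infinity
    ball of radius eps around x keeps the predicted class unchanged."""
    # Nominal forward pass to get the predicted class.
    z = x
    for W, b in layers[:-1]:
        z = np.maximum(W @ z + b, 0.0)
    pred = int(np.argmax(layers[-1][0] @ z + layers[-1][1]))

    # Interval propagation over the whole perturbation region.
    lo, hi = x - eps, x + eps
    for W, b in layers[:-1]:
        lo, hi = relu_bounds(*affine_bounds(lo, hi, W, b))
    lo, hi = affine_bounds(lo, hi, *layers[-1])

    # Robust if the predicted logit's lower bound beats every other
    # logit's upper bound.
    return lo[pred] > np.delete(hi, pred).max()

# Tiny randomly weighted 2-layer network, purely for illustration.
rng = np.random.default_rng(0)
layers = [(0.5 * rng.standard_normal((8, 4)), np.zeros(8)),
          (0.5 * rng.standard_normal((3, 8)), np.zeros(3))]
x = rng.standard_normal(4)
print(certify(x, eps=0.01, layers=layers))
```

This kind of check is sound but incomplete: a `False` result may simply mean the intervals were too loose, which is exactly the over-approximation that a convex-relaxation outer boundary is meant to tighten.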
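To make the tightening claim concrete, the sketch below, again an illustrative assumption rather than DBR's actual outer boundary, applies the standard triangle (convex) relaxation of ReLU to the toy function f(x) = ReLU(x) - ReLU(x), which is identically zero. Plain interval arithmetic forgets that both terms share the same input and bounds f by 1, while the linear relaxation keeps that dependence and provably tightens the bound.

```python
def relu_triangle(l, u):
    """Triangle relaxation of y = ReLU(z) for an unstable neuron
    (l < 0 < u):  a_l * z  <=  y  <=  a_u * z + b_u.
    Any lower slope a_l in [0, 1] is sound; we reuse the chord slope."""
    a_u = u / (u - l)
    b_u = -u * l / (u - l)
    return a_u, a_u, b_u  # (a_l, a_u, b_u)

l, u = -1.0, 1.0  # pre-activation interval of the shared input x

# Plain interval bound: each ReLU term lies in [max(l,0), max(u,0)],
# so f is bounded above by u - 0 = 1, even though f is really 0.
ibp_upper = max(u, 0.0) - max(l, 0.0)

# The linear relaxation keeps the shared dependence on x:
# f(x) <= (a_u * x + b_u) - a_l * x, maximised over x in [l, u].
a_l, a_u, b_u = relu_triangle(l, u)
slope = a_u - a_l
relaxed_upper = max(slope * l, slope * u) + b_u

print(f"interval: {ibp_upper}, triangle relaxation: {relaxed_upper}")
# interval: 1.0, triangle relaxation: 0.5 -- the convex relaxation
# strictly tightens the bound, the effect the abstract claims for
# DBR's outer boundary.
```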