{"title":"DDR-Defense:带有检测器、去噪器和转换器的3D防御网络","authors":"Yukun Zhao, Xinyun Zhang, Shuang Ren","doi":"10.1109/ICCC56324.2022.10065933","DOIUrl":null,"url":null,"abstract":"Recently, 3D deep neural networks have been fully developed and applied to many high-safety tasks. However, due to the uninterpretability of deep learning networks, adversarial examples can easily prompt a normally trained deep learning model to make wrong predictions. In this paper, we propose a new point cloud defense network named DDR-Defense, a framework for defending neural network classifiers against adversarial examples. DDR-Defense neither modifies the number of the points in the input samples nor the protected classifiers so that it can protect most classification models. DDR-Defense first distinguishes adversarial examples from normal examples through a reconstruction-based detector. The detector can prevent errors caused by processing the entire input samples, thereby improving the security of the defense network. For adversarial examples, we first use the statistical outlier removal (SOR) method for denoising, then use a reformer to rebuild them. In this paper, We design a new reformer based on FoldingNet and variational autoencoder, named Folding-VAE. We test DDR-Defense on the ModelNet40 dataset and find that it has a better defense effect than other existing 3D defense networks, especially in saliency maps attack and LG-GAN attack. The lightweight detector, denoiser, and reformer framework ensures the security and efficiency of 3D defense for most application scenarios. Our research will provide a basis for improving the robustness of deep learning models on 3D point clouds.","PeriodicalId":263098,"journal":{"name":"2022 IEEE 8th International Conference on Computer and Communications (ICCC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DDR-Defense: 3D Defense Network with a Detector, a Denoiser, and a Reformer\",\"authors\":\"Yukun Zhao, Xinyun Zhang, Shuang Ren\",\"doi\":\"10.1109/ICCC56324.2022.10065933\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, 3D deep neural networks have been fully developed and applied to many high-safety tasks. However, due to the uninterpretability of deep learning networks, adversarial examples can easily prompt a normally trained deep learning model to make wrong predictions. In this paper, we propose a new point cloud defense network named DDR-Defense, a framework for defending neural network classifiers against adversarial examples. DDR-Defense neither modifies the number of the points in the input samples nor the protected classifiers so that it can protect most classification models. DDR-Defense first distinguishes adversarial examples from normal examples through a reconstruction-based detector. The detector can prevent errors caused by processing the entire input samples, thereby improving the security of the defense network. For adversarial examples, we first use the statistical outlier removal (SOR) method for denoising, then use a reformer to rebuild them. In this paper, We design a new reformer based on FoldingNet and variational autoencoder, named Folding-VAE. We test DDR-Defense on the ModelNet40 dataset and find that it has a better defense effect than other existing 3D defense networks, especially in saliency maps attack and LG-GAN attack. 
The lightweight detector, denoiser, and reformer framework ensures the security and efficiency of 3D defense for most application scenarios. Our research will provide a basis for improving the robustness of deep learning models on 3D point clouds.\",\"PeriodicalId\":263098,\"journal\":{\"name\":\"2022 IEEE 8th International Conference on Computer and Communications (ICCC)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 8th International Conference on Computer and Communications (ICCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCC56324.2022.10065933\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 8th International Conference on Computer and Communications (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCC56324.2022.10065933","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Recently, 3D deep neural networks have matured rapidly and are being applied to many safety-critical tasks. However, because deep learning models are difficult to interpret, adversarial examples can easily cause a normally trained model to make wrong predictions. In this paper, we propose DDR-Defense, a new point cloud defense network: a framework for defending neural network classifiers against adversarial examples. DDR-Defense modifies neither the number of points in the input samples nor the protected classifiers, so it can protect most classification models. DDR-Defense first distinguishes adversarial examples from normal examples with a reconstruction-based detector. The detector prevents the errors that would arise from reforming every input sample, thereby improving the security of the defense network. For adversarial examples, we first denoise with statistical outlier removal (SOR) and then rebuild them with a reformer. We design a new reformer, named Folding-VAE, based on FoldingNet and a variational autoencoder. We evaluate DDR-Defense on the ModelNet40 dataset and find that it defends better than other existing 3D defense networks, especially against the saliency-map attack and the LG-GAN attack. The lightweight detector-denoiser-reformer framework ensures secure and efficient 3D defense in most application scenarios. Our research provides a basis for improving the robustness of deep learning models on 3D point clouds.
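The abstract names two classical building blocks: a reconstruction-error detector and SOR denoising. A minimal sketch of both, in plain NumPy/SciPy, is given below for orientation. The neighborhood size k, the cutoff factor alpha, the detection threshold tau, and the autoencoder callable are illustrative assumptions, not the paper's settings, and the Folding-VAE reformer itself is not reproduced here.

```python
# Hypothetical sketch of SOR denoising and reconstruction-based detection
# on a point cloud of shape (N, 3). Parameters are assumptions for
# illustration, not values from the DDR-Defense paper.
import numpy as np
from scipy.spatial import cKDTree

def sor_denoise(points: np.ndarray, k: int = 10, alpha: float = 1.1) -> np.ndarray:
    """Statistical outlier removal: drop points whose mean k-NN distance
    exceeds mu + alpha * sigma over the whole cloud."""
    tree = cKDTree(points)
    # Query k + 1 neighbors because each point's nearest neighbor is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    cutoff = mean_knn.mean() + alpha * mean_knn.std()
    return points[mean_knn <= cutoff]

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d_ab = cKDTree(b).query(a)[0]
    d_ba = cKDTree(a).query(b)[0]
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())

def is_adversarial(points: np.ndarray, autoencoder, tau: float) -> bool:
    """Reconstruction-based detection: flag an input the autoencoder
    reconstructs poorly, i.e. Chamfer distance above a threshold tau
    chosen on clean validation data. `autoencoder` is a stand-in for
    any point cloud autoencoder mapping (N, 3) -> (M, 3)."""
    recon = autoencoder(points)
    return chamfer_distance(points, recon) > tau
```

The design intuition matches the pipeline the abstract describes: an autoencoder trained only on clean data reconstructs normal examples well and adversarial ones poorly, so thresholding the reconstruction error separates the two without touching the protected classifier; flagged inputs are then denoised (SOR) and rebuilt by the reformer before classification.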