SLFB-CNN: An interpretable neural network privacy protection framework
De Li, Yuhang Hu, Jinyan Wang
2020 16th International Conference on Computational Intelligence and Security (CIS), November 2020
DOI: 10.1109/CIS52066.2020.00070
Citations: 1
Abstract
The feedforward-designed convolutional neural network (FF-CNN) method was recently proposed by Kuo et al.; it offers strong interpretability and low training complexity. In this paper, we propose two improvements. (1) We combine two algorithms, Layer-wise Relevance Propagation (LRP) and FF-CNN, to build an interpretable neural network framework called LFB-CNN. The back-propagation (BP) algorithm is used to train the fully connected layer of FF-CNN, while the LRP algorithm decomposes and computes the relevance between the input and output of the fully connected layer, further improving model performance without reducing interpretability. (2) We conduct a privacy analysis of the LFB-CNN framework: once the framework's parameters are disclosed, the privacy of the data provider is leaked. We therefore apply differential privacy to obtain a secure LFB-CNN (SLFB-CNN) algorithm. Finally, we verify the effectiveness of the proposed method on the MNIST, Fashion-MNIST and CIFAR-10 datasets.
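The two building blocks the abstract names can be sketched concretely: LRP redistributes a layer's output relevance back to its inputs (the epsilon-rule below is one standard variant), and differential privacy perturbs parameters before release (the Laplace mechanism is one standard way to do so). This is a minimal illustrative sketch, not the paper's actual algorithm; all function names, the epsilon-rule choice, and the assumed sensitivity value are my own assumptions.

```python
# Illustrative sketch only: LRP's epsilon-rule for a fully connected
# layer, plus the Laplace mechanism for releasing weights under
# differential privacy. Names and parameter values are hypothetical,
# not taken from the paper.
import numpy as np

def lrp_epsilon_fc(a, W, R_out, eps=1e-6):
    """Propagate relevance R_out through a fully connected layer.

    a:     (d_in,)       layer input activations
    W:     (d_in, d_out) weight matrix
    R_out: (d_out,)      relevance assigned to the layer's outputs
    Returns R_in of shape (d_in,); for small eps the epsilon-rule is
    approximately conservative (sum of R_in ~ sum of R_out).
    """
    z = a @ W                      # pre-activations, shape (d_out,)
    z = z + eps * np.sign(z)       # stabiliser avoids division by zero
    s = R_out / z                  # per-output relevance-to-activation ratio
    return a * (W @ s)             # redistribute to inputs, shape (d_in,)

def laplace_perturb(W, sensitivity, epsilon_dp, rng=None):
    """Release W under epsilon-DP via the Laplace mechanism.

    sensitivity is the L1 sensitivity of the weight computation to a
    single training record; its value depends on the training pipeline
    and is simply assumed given here.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon_dp
    return W + rng.laplace(loc=0.0, scale=scale, size=W.shape)

# Toy usage: a layer with 4 inputs and 3 outputs.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.standard_normal((4, 3))
R_out = np.abs(a @ W)              # stand-in output relevances
R_in = lrp_epsilon_fc(a, W, R_out)
W_private = laplace_perturb(W, sensitivity=1.0, epsilon_dp=0.5, rng=rng)
```

The stabiliser `eps * np.sign(z)` keeps the rule numerically safe when a pre-activation is near zero, at the cost of leaking a tiny fraction of relevance; a smaller Laplace scale (larger `epsilon_dp`) means less noise and weaker privacy.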