Saeed Samet, Ali Miri
2008 4th International IEEE Conference Intelligent Systems, published 2008-11-11
DOI: 10.1109/IS.2008.4670499
Privacy-preserving protocols for perceptron learning algorithm in neural networks
Neural networks have become increasingly important in areas such as medical diagnosis, bio-informatics, intrusion detection, and homeland security. In most of these applications, one major issue is preserving the privacy of individuals' personal information and sensitive data. In this paper, we propose two secure protocols for the perceptron learning algorithm, covering the cases where the input data is horizontally or vertically partitioned among the parties. These protocols can be applied to both linearly separable and non-separable datasets; not only does the data belonging to each party remain private, but the final learned model is also securely shared among those parties. The parties can then jointly and securely apply the constructed model to predict the outputs corresponding to their target data. These protocols can also be used incrementally, i.e., they can process newly arriving data and adjust the previously constructed network.
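For reference, the underlying computation that the paper's secure protocols distribute among parties is the classical perceptron learning rule. The abstract does not detail the cryptographic steps, so the sketch below is only a plain, single-party version of that rule (function names and data are illustrative, not from the paper); in the paper's setting, the weight updates and predictions would instead be computed jointly over partitioned data without revealing any party's inputs.

```python
# Plain (non-private) perceptron learning rule, for illustration only.
# The paper's contribution is performing these same updates securely over
# horizontally or vertically partitioned data; that part is not shown here.

def perceptron_train(samples, labels, lr=1.0, epochs=10):
    """Train a perceptron. samples: list of feature tuples; labels: +1/-1."""
    n = len(samples[0])
    w = [0.0] * n   # weight vector
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # misclassified: move the boundary toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def perceptron_predict(w, b, x):
    """Classify a sample with the trained weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

The incremental use described in the abstract corresponds to the fact that each update above depends only on the current weights and one new sample, so newly arriving data can adjust a previously constructed model without retraining from scratch.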