Privacy‐preserving credit risk analysis based on homomorphic encryption aware logistic regression in the cloud

V. V. L. Divakar Allavarpu, V. Naresh, A. Krishna Mohan
{"title":"Privacy‐preserving credit risk analysis based on homomorphic encryption aware logistic regression in the cloud","authors":"V. V. L. Divakar Allavarpu, V. Naresh, A. Krishna Mohan","doi":"10.1002/spy2.372","DOIUrl":null,"url":null,"abstract":"With the growing significance of Credit Risk Analysis (CRA) with a focus on privacy, there is a pressing demand for a Privacy Preserving Machine Learning (PPML) decision support system. In this context, we introduce a framework for privacy‐preserving credit risk analysis that utilizes Homomorphic Encryption aware Logistic Regression (HELR) on encrypted data. The implementation involves the use of TenSEAL and Torch libraries for Logistic Regression (LR), integrating the proposed HELR on polynomial degrees 3 and 5 across German, Taiwan, Japan, and Australian datasets. The presented model yields satisfactory results compared to non‐Homomorphic Encryption (HE) models, demonstrating a minimal accuracy difference ranging from 0.5% to 7.8%. Notably, HELR_g5 outperforms HELR_g3, exhibiting a higher Area Under Curve (AUC) value. Additionally, a thorough security analysis indicates the resilience of the proposed system against various privacy attacks, including poison attacks, evasion attacks, member inference attacks, model inversion attacks, and model extraction attacks at different stages of machine learning. 
Finally, in the comparative analysis, we highlight that the proposed model ensures data privacy, encompassing training privacy and model privacy during the training phase, as well as input and output privacy during the inference phase a level of privacy not achieved by existing systems.","PeriodicalId":506233,"journal":{"name":"SECURITY AND PRIVACY","volume":"407 25","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SECURITY AND PRIVACY","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/spy2.372","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

With the growing significance of Credit Risk Analysis (CRA) with a focus on privacy, there is a pressing demand for a Privacy Preserving Machine Learning (PPML) decision support system. In this context, we introduce a framework for privacy‐preserving credit risk analysis that utilizes Homomorphic Encryption aware Logistic Regression (HELR) on encrypted data. The implementation uses the TenSEAL and Torch libraries for Logistic Regression (LR), integrating the proposed HELR with polynomial degrees 3 and 5 across the German, Taiwan, Japan, and Australian datasets. The presented model yields satisfactory results compared to non‐Homomorphic Encryption (HE) models, with a minimal accuracy difference ranging from 0.5% to 7.8%. Notably, HELR_g5 outperforms HELR_g3, exhibiting a higher Area Under the Curve (AUC) value. Additionally, a thorough security analysis indicates the resilience of the proposed system against various privacy attacks, including poisoning attacks, evasion attacks, membership inference attacks, model inversion attacks, and model extraction attacks at different stages of machine learning. Finally, in the comparative analysis, we highlight that the proposed model ensures data privacy, encompassing training privacy and model privacy during the training phase, as well as input and output privacy during the inference phase, a level of privacy not achieved by existing systems.
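The polynomial degrees 3 and 5 mentioned above reflect a core constraint of HE schemes such as CKKS (the scheme TenSEAL provides): only additions and multiplications can be evaluated on ciphertexts, so the logistic sigmoid must be replaced by a low-degree polynomial approximation. The sketch below illustrates this idea by fitting degree-3 and degree-5 polynomials to the sigmoid by least squares; the fitting interval `[-5, 5]` and the use of `numpy.polyfit` are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sample the true sigmoid on an interval covering typical logit values.
xs = np.linspace(-5.0, 5.0, 1001)
ys = sigmoid(xs)

# Least-squares polynomial approximations (HELR_g3 / HELR_g5 analogues).
coef3 = np.polyfit(xs, ys, 3)
coef5 = np.polyfit(xs, ys, 5)

# Worst-case approximation error over the interval.
err3 = np.max(np.abs(np.polyval(coef3, xs) - ys))
err5 = np.max(np.abs(np.polyval(coef5, xs) - ys))

print(f"max |error|, degree 3: {err3:.4f}")
print(f"max |error|, degree 5: {err5:.4f}")
```

In TenSEAL, such a polynomial can be evaluated directly on an encrypted logit via `CKKSVector.polyval`. A higher degree consumes more multiplicative depth (and thus larger encryption parameters) but tracks the true sigmoid more closely, which is consistent with the abstract's observation that HELR_g5 attains a higher AUC than HELR_g3.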