Data Protection Issues in Automated Decision-Making Systems Based on Machine Learning: Research Challenges

Network · Pub Date: 2024-03-01 · DOI: 10.3390/network4010005
Paraskevi Christodoulou, Konstantinos Limniotis
{"title":"Data Protection Issues in Automated Decision-Making Systems Based on Machine Learning: Research Challenges","authors":"Paraskevi Christodoulou, Konstantinos Limniotis","doi":"10.3390/network4010005","DOIUrl":null,"url":null,"abstract":"Data protection issues stemming from the use of machine learning algorithms that are used in automated decision-making systems are discussed in this paper. More precisely, the main challenges in this area are presented, putting emphasis on how important it is to simultaneously ensure the accuracy of the algorithms as well as privacy and personal data protection for the individuals whose data are used for training the corresponding models. In this respect, we also discuss how specific well-known data protection attacks that can be mounted in processes based on such algorithms are associated with a lack of specific legal safeguards; to this end, the General Data Protection Regulation (GDPR) is used as the basis for our evaluation. In relation to these attacks, some important privacy-enhancing techniques in this field are also surveyed. Moreover, focusing explicitly on deep learning algorithms as a type of machine learning algorithm, we further elaborate on one such privacy-enhancing technique, namely, the application of differential privacy to the training dataset. In this respect, we present, through an extensive set of experiments, the main difficulties that occur if one needs to demonstrate that such a privacy-enhancing technique is, indeed, sufficient to mitigate all the risks for the fundamental rights of individuals. More precisely, although we manage—by the proper configuration of several algorithms’ parameters—to achieve accuracy at about 90% for specific privacy thresholds, it becomes evident that even these values for accuracy and privacy may be unacceptable if a deep learning algorithm is to be used for making decisions concerning individuals. The paper concludes with a discussion of the current challenges and future steps, both from a legal as well as from a technical perspective.","PeriodicalId":19145,"journal":{"name":"Network","volume":"113 41","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Network","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/network4010005","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

This paper discusses data protection issues stemming from the use of machine learning algorithms in automated decision-making systems. More precisely, the main challenges in this area are presented, with emphasis on how important it is to simultaneously ensure the accuracy of the algorithms and the privacy and personal data protection of the individuals whose data are used to train the corresponding models. In this respect, we also discuss how specific well-known data protection attacks that can be mounted against processes based on such algorithms are associated with a lack of specific legal safeguards, using the General Data Protection Regulation (GDPR) as the basis for our evaluation. In relation to these attacks, some important privacy-enhancing techniques in this field are also surveyed. Moreover, focusing explicitly on deep learning as a type of machine learning, we further elaborate on one such privacy-enhancing technique: the application of differential privacy to the training dataset. Through an extensive set of experiments, we present the main difficulties that arise when one needs to demonstrate that such a privacy-enhancing technique is indeed sufficient to mitigate all the risks to the fundamental rights of individuals. More precisely, although we manage, by properly configuring several algorithm parameters, to achieve accuracy of about 90% for specific privacy thresholds, it becomes evident that even these accuracy and privacy values may be unacceptable if a deep learning algorithm is to be used for making decisions concerning individuals. The paper concludes with a discussion of the current challenges and future steps, from both a legal and a technical perspective.
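The privacy-enhancing technique the abstract evaluates, differential privacy applied to the training process, is most commonly realized as DP-SGD (Abadi et al., 2016): each training example's gradient is clipped to a fixed norm, and Gaussian noise calibrated to that norm is added before the parameter update. The sketch below is a minimal, illustrative PyTorch rendering of that mechanism; the toy model, synthetic data, and hyperparameter values (clip_norm, noise_multiplier, learning rate) are placeholders and do not reflect the paper's actual experimental configuration.

```python
# Minimal DP-SGD sketch: per-example gradient clipping + Gaussian noise.
# All concrete values (model, data, clip_norm, noise_multiplier, lr) are
# illustrative placeholders, not the configuration used in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for a training set of personal data.
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

model = nn.Sequential(nn.Linear(20, 2))
loss_fn = nn.CrossEntropyLoss()

clip_norm = 1.0          # C: bound on each example's gradient norm
noise_multiplier = 1.0   # sigma: noise std relative to C; higher = more private
lr = 0.1
batch_size = 32

for step in range(50):
    idx = torch.randint(0, len(X), (batch_size,)).tolist()

    # Clip each example's gradient so that no single individual can move
    # the parameters by more than clip_norm.
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for i in idx:
        model.zero_grad()
        loss_fn(model(X[i:i + 1]), y[i:i + 1]).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale

    # Add Gaussian noise calibrated to the clipping bound, then update.
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.randn_like(p) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / batch_size

# NOTE: turning (noise_multiplier, batch_size, steps) into a formal
# (epsilon, delta) guarantee requires a privacy accountant, omitted here.
```

The per-example clipping is what makes the noise meaningful: it bounds any single individual's influence on each update, so the added Gaussian noise can mask that influence. The accuracy/privacy trade-off the abstract reports shows up here directly as the choice of noise_multiplier: larger values give stronger privacy (lower epsilon) at the cost of noisier updates and lower accuracy.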