To be forgotten or to be fair: unveiling fairness implications of machine unlearning methods

Dawen Zhang, Shidong Pan, Thong Hoang, Zhenchang Xing, Mark Staples, Xiwei Xu, Lina Yao, Qinghua Lu, Liming Zhu
{"title":"To be forgotten or to be fair: unveiling fairness implications of machine unlearning methods","authors":"Dawen Zhang,&nbsp;Shidong Pan,&nbsp;Thong Hoang,&nbsp;Zhenchang Xing,&nbsp;Mark Staples,&nbsp;Xiwei Xu,&nbsp;Lina Yao,&nbsp;Qinghua Lu,&nbsp;Liming Zhu","doi":"10.1007/s43681-023-00398-y","DOIUrl":null,"url":null,"abstract":"<div><p>The right to be forgotten (RTBF) allows individuals to request the removal of personal information from online platforms. Researchers have proposed machine unlearning algorithms as a solution for erasing specific data from trained models to support RTBF. However, these methods modify how data are fed into the model and how training is done, which may subsequently compromise AI ethics from the fairness perspective. To help AI practitioners make responsible decisions when adopting these unlearning methods, we present the first study on machine unlearning methods to reveal their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) as baseline using three fairness datasets under three different deletion strategies. Results show that non-uniform data deletion with the variant of SISA leads to better fairness compared to ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. This research can help practitioners make informed decisions when implementing RTBF solutions that consider potential trade-offs on fairness.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"83 - 93"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00398-y.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-023-00398-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The right to be forgotten (RTBF) allows individuals to request the removal of personal information from online platforms. Researchers have proposed machine unlearning algorithms as a solution for erasing specific data from trained models to support the RTBF. However, these methods modify how data are fed into the model and how training is done, which may subsequently compromise fairness, a key concern of AI ethics. To help AI practitioners make responsible decisions when adopting these unlearning methods, we present the first study of machine unlearning methods that reveals their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) as a baseline, using three fairness datasets under three different deletion strategies. Results show that non-uniform data deletion with a variant of SISA leads to better fairness than ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. This research can help practitioners make informed decisions when implementing RTBF solutions, taking the potential fairness trade-offs into account.
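For intuition, the sketch below illustrates the general shape of SISA-style unlearning (train one model per data shard, retrain only the shards that contain deleted points, aggregate by majority vote) together with a simple demographic-parity check of the kind used to quantify fairness. It is a minimal, hypothetical illustration assuming scikit-learn and synthetic data, not the authors' experimental code; the function names (train_shards, unlearn, demographic_parity_gap) are ours.

```python
# Minimal sketch of SISA-style unlearning with a fairness check.
# Illustrative only: not the authors' implementation of SISA/AmnesiacML/ORTR.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_shards(X, y, n_shards, seed=0):
    """Split the data into disjoint shards and train one model per shard."""
    idx = np.random.default_rng(seed).permutation(len(X))
    shards = list(np.array_split(idx, n_shards))
    models = [LogisticRegression(max_iter=1000).fit(X[s], y[s]) for s in shards]
    return shards, models

def unlearn(X, y, shards, models, delete_idx):
    """Erase requested points by retraining only the affected shards.
    (A real implementation must also handle shards emptied by deletion.)"""
    delete_set = set(int(i) for i in delete_idx)
    for i, shard in enumerate(shards):
        if delete_set & set(int(j) for j in shard):
            kept = np.array([j for j in shard if int(j) not in delete_set])
            shards[i] = kept
            models[i] = LogisticRegression(max_iter=1000).fit(X[kept], y[kept])
    return shards, models

def predict(models, X):
    """Aggregate shard models by majority vote (odd shard count avoids ties)."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)|; lower means fairer."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Example: non-uniform deletion, i.e. all removal requests come from one
# demographic group, as in one of the paper's deletion strategies.
X = np.random.default_rng(1).normal(size=(2000, 5))
group = (X[:, 0] > 0).astype(int)                  # synthetic protected attribute
y = ((X[:, 1] + 0.5 * group) > 0).astype(int)      # label correlated with group
shards, models = train_shards(X, y, n_shards=5)
print("gap before:", demographic_parity_gap(predict(models, X), group))
to_delete = np.where(group == 1)[0][:100]          # delete from one group only
shards, models = unlearn(X, y, shards, models, to_delete)
print("gap after :", demographic_parity_gap(predict(models, X), group))
```

Retraining only the affected shards is what makes SISA cheaper than full retraining, but it also means a non-uniform stream of deletion requests can skew the demographic composition of individual shards, which is one plausible mechanism behind the fairness effects the paper measures.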
