International Conference on Learning Representations — Latest Publications

Stochastic Differentially Private and Fair Learning
International Conference on Learning Representations | Pub Date: 2022-10-17 | DOI: 10.48550/arXiv.2210.08781
Andrew Lowy, Devansh Gupta, Meisam Razaviyayn
Abstract: Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups, such as individuals of a particular race, gender, or age. Another major concern is the violation of user privacy. While fair learning algorithms have been developed to mitigate discrimination, these algorithms can still leak sensitive information, such as individuals' health or financial records. Using the notion of differential privacy (DP), prior works aimed to develop learning algorithms that are both private and fair. However, existing algorithms for DP fair learning are either not guaranteed to converge or require a full batch of data in each iteration to converge. In this paper, we provide the first stochastic differentially private algorithm for fair learning that is guaranteed to converge. Here, the term "stochastic" refers to the fact that our proposed algorithm converges even when minibatches of data are used at each iteration (i.e., stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. As a byproduct of our convergence analysis, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly-concave min-max problems. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over state-of-the-art baselines, and can be applied to larger-scale problems with non-binary target/sensitive attributes.
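The abstract frames DP fair learning as a stochastic nonconvex-strongly-concave min-max problem. As a hypothetical illustration only — not the authors' actual algorithm — the general shape of a differentially private stochastic gradient descent-ascent update (clip the primal minibatch gradient, add Gaussian noise, then take an ascent step on the dual variable) might be sketched as follows; the function name, hyperparameters, and toy objective are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgda_step(theta, lam, g_theta, g_lam,
                 lr_theta=0.1, lr_lam=0.1, clip=10.0, sigma=0.5, batch=100):
    """One illustrative noisy descent-ascent step: clip the primal gradient
    to bound its sensitivity, add Gaussian noise for privacy, then take an
    (unnoised) ascent step on the dual/fairness variable lam."""
    # Rescale the gradient so its norm is at most `clip`.
    g = g_theta * min(1.0, clip / (np.linalg.norm(g_theta) + 1e-12))
    # Gaussian mechanism: noise scale proportional to clip / batch size.
    noise = rng.normal(0.0, sigma * clip / batch, size=np.shape(g))
    return theta - lr_theta * (g + noise), lam + lr_lam * g_lam

# Toy objective, strongly concave in lam:
#   f(theta, lam) = 0.5*theta**2 + lam*theta - 0.5*lam**2
theta, lam = np.array([1.0]), np.array([0.5])
for _ in range(200):
    theta, lam = dp_sgda_step(theta, lam, theta + lam, theta - lam)
final_gap = abs(theta[0]) + abs(lam[0])  # shrinks toward a noise floor
```

The clipping step is what makes the Gaussian noise meaningful: bounding each minibatch gradient's norm bounds the update's sensitivity to any single record, which is the standard precondition for the Gaussian mechanism in DP-SGD-style methods.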
Citations: 3
Towards Reverse-Engineering Black-Box Neural Networks
International Conference on Learning Representations | Pub Date: 2017-11-06 | DOI: 10.1007/978-3-030-28954-6_7
Seong Joon Oh, Maximilian Augustin, Mario Fritz, B. Schiele
Citations: 302
Pushing Stochastic Gradient towards Second-Order Methods — Backpropagation Learning with Transformations in Nonlinearities
International Conference on Learning Representations | Pub Date: 2013-01-15 | DOI: 10.1007/978-3-642-42054-2_55
T. Vatanen, T. Raiko, H. Valpola, Yann LeCun
Citations: 29