Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society: Latest Articles

FairPOT: Balancing AUC Performance and Fairness with Proportional Optimal Transport
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2025-10-01 Epub Date: 2025-10-15 DOI: 10.1609/aies.v8i2.36660
Pengxi Liu, Yi Shen, Matthew M Engelhard, Benjamin A Goldstein, Michael J Pencina, Nicoleta J Economou-Zavlanos, Michael M Zavlanos
Abstract: Fairness metrics utilizing the area under the receiver operating characteristic curve (AUC) have gained increasing attention in high-stakes domains such as healthcare, finance, and criminal justice. In these domains, fairness is often evaluated over risk scores rather than binary outcomes, and a common challenge is that enforcing strict fairness can significantly degrade AUC performance. To address this challenge, we propose Fair Proportional Optimal Transport (FairPOT), a novel, model-agnostic post-processing framework that strategically aligns risk score distributions across different groups using optimal transport, but does so selectively by transforming a controllable proportion, i.e., the top-λ quantile, of scores within the disadvantaged group. By varying λ, our method allows for a tunable trade-off between reducing AUC disparities and maintaining overall AUC performance. Furthermore, we extend FairPOT to the partial AUC setting, enabling fairness interventions to concentrate on the highest-risk regions. Extensive experiments on synthetic, public, and clinical datasets show that FairPOT consistently outperforms existing post-processing techniques in both global and partial AUC scenarios, often achieving improved fairness with only slight AUC degradation or even positive gains in utility. The computational efficiency and practical adaptability of FairPOT make it a promising solution for real-world deployment.
Vol. 8, No. 2, pp. 1611-1622. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12671453/pdf/
Citations: 0
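In one dimension, optimal transport between score distributions reduces to quantile matching, which makes the top-λ mechanism easy to illustrate. The sketch below is not the paper's algorithm, only a rough, hypothetical rendering of what the abstract describes: scores of the disadvantaged group at or above its (1−λ) quantile are mapped, in rank order, onto the advantaged group's corresponding quantile range, while the rest are left untouched.

```python
import numpy as np

def fairpot_sketch(scores_adv, scores_dis, lam):
    """Illustrative top-lambda quantile alignment (in 1-D, optimal
    transport between sorted score lists reduces to quantile matching)."""
    scores_dis = np.asarray(scores_dis, dtype=float)
    out = scores_dis.copy()
    # only scores at or above the (1 - lam) quantile are transformed
    thresh = np.quantile(scores_dis, 1.0 - lam)
    idx = np.where(scores_dis >= thresh)[0]
    if idx.size == 0:
        return out
    # map the selected scores, in rank order, onto the matching
    # quantiles of the advantaged group's top-lam region
    order = np.argsort(scores_dis[idx])
    qs = np.linspace(1.0 - lam, 1.0, idx.size)
    targets = np.sort(np.quantile(np.asarray(scores_adv, dtype=float), qs))
    out[idx[order]] = targets
    return out
```

Smaller λ transforms only the very top scores; λ = 1 aligns the full distributions, recovering the strict-fairness end of the trade-off the abstract mentions.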
Principles and Policy Recommendations for Comprehensive Genetic Data Governance
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2025-01-01 Epub Date: 2025-10-15 DOI: 10.1609/aies.v8i3.36701
Vivek Ramanan, Ria Vinod, Cole Williams, Sohini Ramachandran, Suresh Venkatasubramanian
Abstract: Genetic data collection has become ubiquitous, producing genetic information about health, ancestry, and social traits. However, unregulated use, especially amid evolving scientific understanding, poses serious privacy and discrimination risks. These risks are intensified by advancing AI, particularly multi-modal systems integrating genetic, clinical, behavioral, and environmental data. In this work, we organize the uses of genetic data along four distinct 'pillars' and develop a risk assessment framework that identifies key values any governance system must preserve. In doing so, we draw on current privacy scholarship concerning contextual integrity, data relationality, and the Belmont principles. We apply the framework to four real-world case studies and identify critical gaps in existing regulatory frameworks, as well as specific threats to privacy and personal liberties, particularly through genetic discrimination. Finally, we offer three policy recommendations for genetic data governance that safeguard individual rights in today's under-regulated ecosystem of large-scale genetic data collection and usage.
Vol. 8, No. 3, pp. 2136-2149. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13077651/pdf/
Citations: 0
Privacy Preserving Machine Learning Systems
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2022-01-01 DOI: 10.1145/3514094.3539530
Soumia Zohra El Mestari
Page 898.
Citations: 0
AIES '22: AAAI/ACM Conference on AI, Ethics, and Society, Oxford, United Kingdom, May 19 - 21, 2021
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2022-01-01 DOI: 10.1145/3514094
Citations: 0
Bias in Artificial Intelligence Models in Financial Services
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2022-01-01 DOI: 10.1145/3514094.3539561
Ángel Pavón Pérez
Page 908.
Citations: 0
To Scale: The Universalist and Imperialist Narrative of Big Tech
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2021-01-01 DOI: 10.1145/3461702.3462474
Jessica de Jesus de Pinho Pinhal
Pages 267-268.
Citations: 0
AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 19-21, 2021
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2021-01-01 DOI: 10.1145/3461702
Citations: 2
Toward Implementing the Agent-Deed-Consequence Model of Moral Judgment in Autonomous Vehicles
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375853
Veljko Dubljević
Abstract: Autonomous vehicles (AVs) and the accidents they are involved in attest to the urgent need to consider the ethics of AI. The question dominating the discussion has been whether we want AVs to behave in a 'selfish' or a utilitarian manner. Rather than modeling self-driving cars on a single moral system like utilitarianism, one possible approach to programming for AI would be to reflect recent work in neuroethics. The Agent-Deed-Consequence (ADC) model [1-4] provides a promising account while also lending itself well to implementation in AI. The ADC model explains moral judgments by breaking them down into positive or negative intuitive evaluations of the Agent, the Deed, and the Consequence in any given situation. These intuitive evaluations combine to produce a judgment of moral acceptability. This explains the considerable flexibility and stability of human moral judgment, which has yet to be replicated in AI. This paper examines the advantages and disadvantages of implementing the ADC model and how the model could inform future work on the ethics of AI in general.
Citations: 0
Trade-offs in Fair Redistricting
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375802
Zachary Schutzman
Abstract: What constitutes a 'fair' electoral districting plan is a discussion dating back to the founding of the United States and, in light of several recent court cases, mathematical developments, and the approaching 2020 U.S. Census, is still a fiercely debated topic today. Given the growing desire and ability to use algorithmic tools in drawing these districts, we discuss two prototypical formulations of fairness in this domain: drawing the districts by a neutral procedure, or drawing them to intentionally induce an equitable electoral outcome. We then generate a large sample of districting plans for North Carolina and Pennsylvania and consider empirically how compactness and partisan symmetry, as instantiations of these frameworks, trade off with each other: prioritizing one of these values necessarily comes at a cost in the other.
Citations: 6
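Compactness, one side of the trade-off measured in the abstract above, is commonly scored in the redistricting literature with the Polsby-Popper ratio. The function below is just that standard metric, not code from the paper, included to make the quantity concrete.

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness score: 4 * pi * A / P**2.
    Equals 1.0 for a circle and falls toward 0 for sprawling,
    tendril-like district shapes."""
    return 4.0 * math.pi * area / perimeter ** 2
```

A circle of radius r (area πr², perimeter 2πr) scores exactly 1; highly contorted districts score close to 0.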
A Fairness-aware Incentive Scheme for Federated Learning
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375840
Han Yu, Zelei Liu, Yang Liu, Tianjian Chen, Mingshu Cong, Xi Weng, D. Niyato, Qiang Yang
Abstract: In federated learning (FL), data owners "share" their local data in a privacy-preserving manner in order to build a federated model, which, in turn, can be used to generate revenue for the participants. However, in FL involving business participants, they might incur significant costs if several competitors join the same federation. Furthermore, the training and commercialization of the models take time, resulting in delays before the federation accumulates enough budget to pay back the participants. These issues of costs and the temporary mismatch between contributions and rewards have not been addressed by existing payoff-sharing schemes. In this paper, we propose the Federated Learning Incentivizer (FLI) payoff-sharing scheme. The scheme dynamically divides a given budget in a context-aware manner among the data owners in a federation by jointly maximizing the collective utility while minimizing inequality among the data owners, both in the payoff they gain and in the waiting time for receiving it. Extensive experimental comparisons with five state-of-the-art payoff-sharing schemes show that FLI is the most attractive to high-quality data owners and achieves the highest expected revenue for a data federation.
Citations: 143
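The "temporary mismatch between contributions and rewards" in the abstract suggests a debt-carrying division of each round's budget. The toy round below is a hypothetical illustration of that idea only, not the FLI scheme itself: each owner's claim is their unpaid past contribution plus the current one, the round's budget is split pro rata over claims, and any shortfall carries over as debt to be repaid in later rounds.

```python
def fli_round(budget, contributions, owed):
    """One toy payoff round: split `budget` pro rata over each
    owner's claim (carried-over debt + this round's contribution);
    unpaid remainders become next round's debt."""
    claims = [o + c for o, c in zip(owed, contributions)]
    total = sum(claims)
    if total == 0.0:
        return [0.0] * len(claims), list(owed)
    payouts = [budget * cl / total for cl in claims]
    new_owed = [max(cl - p, 0.0) for cl, p in zip(claims, payouts)]
    return payouts, new_owed
```

When the budget covers all claims, debts clear immediately; when it falls short, owners are repaid across later rounds, which is one simple way to trade off payoff inequality against waiting time.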