Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society: Latest Publications

A Geometric Solution to Fair Representations
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375864
Yuzi He, K. Burghardt, Kristina Lerman
{"title":"A Geometric Solution to Fair Representations","authors":"Yuzi He, K. Burghardt, Kristina Lerman","doi":"10.1145/3375627.3375864","DOIUrl":"https://doi.org/10.1145/3375627.3375864","url":null,"abstract":"To reduce human error and prejudice, many high-stakes decisions have been turned over to machine algorithms. However, recent research suggests that this does not remove discrimination, and can perpetuate harmful stereotypes. While algorithms have been developed to improve fairness, they typically face at least one of three shortcomings: they are not interpretable, their prediction quality deteriorates quickly compared to unbiased equivalents, and %the methodology cannot easily extend other algorithms they are not easily transferable across models% (e.g., methods to reduce bias in random forests cannot be extended to neural networks) . To address these shortcomings, we propose a geometric method that removes correlations between data and any number of protected variables. Further, we can control the strength of debiasing through an adjustable parameter to address the trade-off between prediction quality and fairness. The resulting features are interpretable and can be used with many popular models, such as linear regression, random forest, and multilayer perceptrons. The resulting predictions are found to be more accurate and fair compared to several state-of-the-art fair AI algorithms across a variety of benchmark datasets. Our work shows that debiasing data is a simple and effective solution toward improving fairness.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"49 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77737279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
How to Put the Data Subject's Sovereignty into Practice. Ethical Considerations and Governance Perspectives
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3377142
P. Dabrock
{"title":"How to Put the Data Subject's Sovereignty into Practice. Ethical Considerations and Governance Perspectives","authors":"P. Dabrock","doi":"10.1145/3375627.3377142","DOIUrl":"https://doi.org/10.1145/3375627.3377142","url":null,"abstract":"Ethical considerations and governance approaches of AI are at a crossroads. Either one tries to convey the impression that one can bring back a status quo ante of our given \"onlife\"-era [1,2], or one accepts to get responsibly involved in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old fashioned data protection principles of informed consent, purpose limitation and data economy [3,4,6]. The main focus of the talk is on how under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle [5]) can be strengthened without hindering innovation dynamics of digital economy and social cohesion of fully digitized societies. In order to put this approach into practice the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council [3] with recent research trends in effectively applying the AI ethics principles of explainability and enforceability [4-9].","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90805130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
"The Global South is everywhere, but also always somewhere": National Policy Narratives and AI Justice “全球南方无处不在,但也总是在某个地方”:国家政策叙事和人工智能正义
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date : 2020-02-07 DOI: 10.1145/3375627.3375859
Amba Kak
{"title":"\"The Global South is everywhere, but also always somewhere\": National Policy Narratives and AI Justice","authors":"Amba Kak","doi":"10.1145/3375627.3375859","DOIUrl":"https://doi.org/10.1145/3375627.3375859","url":null,"abstract":"There is more attention than ever on the social implications of AI. In contrast to universalized paradigms of ethics and fairness, a growing body of critical work highlights bias and discrimination in AI within the frame of social justice and human rights (\"AI justice\"). However, the geographical location of much of this critique in the West could be engendering its own blind spots. The global supply chain of AI (data, computational power, natural resources, labor) today replicates historical colonial inequities, and the continued subordination of Global South countries. This paper draws attention to official narratives from the Indian government and the United Nations Conference on Trade and Development (UNCTAD) advocating for the role (and place) of these regions in the AI economy. Domestically, these policies are being contested for their top-down formulation, and reflect narrow industry interests. This underscores the need to approach the political economy of AI from varying altitudes - global, national, and from the perspective of communities whose lives and livelihoods are most directly impacted in this economy. Without a deliberate effort at centering this conversation it is inevitable that mainstream discourse on AI justice will grow parallel to (and potentially undercut) demands emanating from Global South governments and communities","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77631689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
The AI-development Connection - A View from the South
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3377139
Anita Gurumurthy
{"title":"The AI-development Connection - A View from the South","authors":"Anita Gurumurthy","doi":"10.1145/3375627.3377139","DOIUrl":"https://doi.org/10.1145/3375627.3377139","url":null,"abstract":"The socialisation of Artificial Intelligence and the reality of an intelligence economy mark an epochal moment. The impacts of AI are now systemic - restructuring economic organisation and value chains, public sphere architectures and sociality. These shifts carry deep geo-political implications, reinforcing historical exclusions and power relations and disrupting the norms and rules that hold ideas of equality and justice together. At the centre of this rapid change is the intelligent corporation and its obsessive pursuit of data. Directly impinging on bodies and places, the de facto rules forged by the intelligent corporation are disenfranchising the already marginal subjects of development. Using trade deals to liberalise data flows, tighten trade secret rules and enclose AI-based innovation, Big Tech and their political masters have effectively taken away the economic and political autonomy of states in the global south. Big Tech's impunity extends to a brazen exploitation - enslaving labour through data over-reach and violating female bodies to universalise data markets. Thinking through the governance of AI needs new frameworks that can grapple with the fraught questions of data sovereignty, economic democracy, and institutional ethics in a global world with local aspirations. Any effort towards norm development in this domain will need to see the geo-economics of digital intelligence and the geo-politics of development ideologies as two sides of the same coin.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87618795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An Empirical Approach to Capture Moral Uncertainty in AI
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375805
Andreia Martinho, M. Kroesen, C. Chorus
{"title":"An Empirical Approach to Capture Moral Uncertainty in AI","authors":"Andreia Martinho, M. Kroesen, C. Chorus","doi":"10.1145/3375627.3375805","DOIUrl":"https://doi.org/10.1145/3375627.3375805","url":null,"abstract":"As AI Systems become increasingly autonomous they are expected to engage in complex moral decision-making processes. For the purpose of guidance of such processes theoretical and empirical solutions have been sought. In this research we integrate both theoretical and empirical lines of thought to address the matters of moral reasoning in AI Systems. We reconceptualize a metanormative framework for decision-making under moral uncertainty within the Discrete Choice Analysis domain and we operationalize it through a latent class choice model. The discrete choice analysis-based formulation of the metanormative framework is theory-rooted and practical as it captures moral uncertainty through a small set of latent classes. To illustrate our approach we conceptualize a society in which AI Systems are in charge of making policy choices. In the proof of concept two AI systems make policy choices on behalf of a society but while one of the systems uses a baseline moral certain model the other uses a moral uncertain model. It was observed that there are cases in which the AI Systems disagree about the policy to be chosen which we believe is an indication about the relevance of moral uncertainty.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"61 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74485270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375839
Frank A. Pasquale
{"title":"Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria","authors":"Frank A. Pasquale","doi":"10.1145/3375627.3375839","DOIUrl":"https://doi.org/10.1145/3375627.3375839","url":null,"abstract":"Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers [1]. Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles. This policy work is just beginning, as experts translate academic research and activist demands into statutes and regulations. Lawmakers are proposing bills requiring basic standards of algorithmic transparency and auditing. We are starting down on a long road toward ensuring that AI-based hiring practices and financial underwriting are not used if they have a disparate impact on historically marginalized communities. And just as this \"first wave\" of algorithmic accountability research and activism has targeted existing systems, an emerging \"second wave\" of algorithmic accountability has begun to address more structural concerns. Both waves will be essential to ensure a fairer, and more genuinely emancipatory, political economy of technology. Second wave work is particularly important when it comes to illuminating the promise & perils of formalizing evaluative criteria.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82858879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Normative Principles for Evaluating Fairness in Machine Learning
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375808
D. Leben
{"title":"Normative Principles for Evaluating Fairness in Machine Learning","authors":"D. Leben","doi":"10.1145/3375627.3375808","DOIUrl":"https://doi.org/10.1145/3375627.3375808","url":null,"abstract":"There are many incompatible ways to measure fair outcomes for machine learning algorithms. The goal of this paper is to characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe the possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles are based on various competing attributes within a distribution problem: intentions, compensation, desert, consent, and consequences. Each principle will be applied to a sample risk-assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86111238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
Algorithmized but not Atomized? How Digital Platforms Engender New Forms of Worker Solidarity in Jakarta
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375816
Rida Qadri
{"title":"Algorithmized but not Atomized? How Digital Platforms Engender New Forms of Worker Solidarity in Jakarta","authors":"Rida Qadri","doi":"10.1145/3375627.3375816","DOIUrl":"https://doi.org/10.1145/3375627.3375816","url":null,"abstract":"Jakarta's roads are green, filled as they are with the fluorescent green jackets, bright green logos and fluttering green banners of basecamps created by the city's digitized, 'online' motorbike-taxi drivers (ojol). These spaces function as waiting posts, regulatory institutions, information networks and spaces of solidarity for the ojol working for mobility-app companies, Grab and GoJek. Their existence though, presents a puzzle. In the world of on-demand matching, literature either predicts an isolated, atomized, disempowered digital worker or expects workers to have only temporary, online, ephemeral networks of mutual aid. Yet, Jakarta's ojol then introduce us to a new form of labor action that relies on an interface of the physical world and digital realm, complete with permanent shelters, quirky names, emblems, social media accounts and even their own emergency response service. This paper explores the contours of these labor formations and asks why digital workers in Jakarta are able to create collective structures of solidarity, even as app-mediated work may force them towards an individualized labor regime? I argue that these digital labor collectives are not accidental but a product of interactions between histories of social organization structures in Jakarta and affordances created by technological-mediation. Through participant observation and semi-structured interviews I excavate the bi-directional conversation between globalizing digital platforms and social norms, civic culture and labor market conditions in Jakarta which has allowed for particular forms of digital worker resistances to emerge. I recover power for the digital worker, who provides us with a path to resisting algorithmization of work while still participating in it through agentic labor actions rooted in shared identities, enabled by technological fluency and borne out of a desire for community.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81718799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 13
The Perils of Objectivity: Towards a Normative Framework for Fair Judicial Decision-Making
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375869
Andi Peng, Malina Simard-Halm
{"title":"The Perils of Objectivity: Towards a Normative Framework for Fair Judicial Decision-Making","authors":"Andi Peng, Malina Simard-Halm","doi":"10.1145/3375627.3375869","DOIUrl":"https://doi.org/10.1145/3375627.3375869","url":null,"abstract":"Fair decision-making in criminal justice relies on the recognition and incorporation of infinite shades of grey. In this paper, we detail how algorithmic risk assessment tools are counteractive to fair legal proceedings in social institutions where desired states of the world are contested ethically and practically. We provide a normative framework for assessing fair judicial decision-making, one that does not seek the elimination of human bias from decision-making as algorithmic fairness efforts currently focus on, but instead centers on sophisticating the incorporation of individualized or discretionary bias--a process that is requisitely human. Through analysis of a case study on social disadvantage, we use this framework to provide an assessment of potential features of consideration, such as political disempowerment and demographic exclusion, that are irreconcilable by current algorithmic efforts and recommend their incorporation in future reform.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75045485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Bayesian Sensitivity Analysis for Offline Policy Evaluation
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society Pub Date: 2020-02-07 DOI: 10.1145/3375627.3375822
Jongbin Jung, Ravi Shroff, A. Feller, Sharad Goel
{"title":"Bayesian Sensitivity Analysis for Offline Policy Evaluation","authors":"Jongbin Jung, Ravi Shroff, A. Feller, Sharad Goel","doi":"10.1145/3375627.3375822","DOIUrl":"https://doi.org/10.1145/3375627.3375822","url":null,"abstract":"On a variety of complex decision-making tasks, from doctors prescribing treatment to judges setting bail, machine learning algorithms have been shown to outperform expert human judgments. One complication, however, is that it is often difficult to anticipate the effects of algorithmic policies prior to deployment, as one generally cannot use historical data to directly observe what would have happened had the actions recommended by the algorithm been taken. A common strategy is to model potential outcomes for alternative decisions assuming that there are no unmeasured confounders (i.e., to assume ignorability). But if this ignorability assumption is violated, the predicted and actual effects of an algorithmic policy can diverge sharply. In this paper we present a flexible Bayesian approach to gauge the sensitivity of predicted policy outcomes to unmeasured confounders. In particular, and in contrast to past work, our modeling framework easily enables confounders to vary with the observed covariates. We demonstrate the efficacy of our method on a large dataset of judicial actions, in which one must decide whether defendants awaiting trial should be required to pay bail or can be released without payment.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"205 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72940005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9