Bridging the gap from AI ethics research to practice

Kathy Baxter, Yoav Schlesinger, Sarah E. Aerni, Lewis Baker, Julie Dawson, Krishnaram Kenthapadi, Isabel M. Kloumann, Hanna M. Wallach
{"title":"Bridging the gap from AI ethics research to practice","authors":"K. Baxter, Yoav Schlesinger, Sarah E. Aerni, Lewis Baker, Julie Dawson, K. Kenthapadi, Isabel M. Kloumann, Hanna M. Wallach","doi":"10.1145/3351095.3375680","DOIUrl":null,"url":null,"abstract":"The study of fairness in machine learning applications has seen significant academic inquiry, research and publication in recent years. Concurrently, technology companies have begun to instantiate nascent program in AI ethics and product ethics more broadly. As a result of these efforts, AI ethics practitioners have piloted new processes to evaluate and ensure fairness in their machine learning applications. In this session, six industry practitioners, hailing from LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, and Salesforce share insights from the work they have undertaken in the area of fairness, what has worked and what has not, lessons learned and best practices instituted as a result. • Krishnaram Kenthapadi presents LinkedIn's fairness-aware reranking for talent search. • Julie Dawson shares how Yoti applies ML fairness research to age estimation in their digital identity platform. • Hanna Wallach contributes how Microsoft is applying fairness principles in practice. • Lewis Baker presents Pymetric's fairness mechanisms in their hiring algorithm. • Isabel Kloumann presents Facebook's fairness assessment framework through a case study of fairness in a content moderation system. • Sarah Aerni contributes how Salesforce is building fairness features into the Einstein AI platform. Building on those insights, we discuss insights and brainstorm modalities through which to build upon the practitioners' work. Opportunities for further research or collaboration are identified, with the goal of developing a shared understanding of experiences and needs of AI ethics practitioners. Ultimately, the aim is to develop a playbook for more ethical and fair AI product development and deployment.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3375680","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

The study of fairness in machine learning applications has seen significant academic inquiry, research, and publication in recent years. Concurrently, technology companies have begun to instantiate nascent programs in AI ethics and product ethics more broadly. As a result of these efforts, AI ethics practitioners have piloted new processes to evaluate and ensure fairness in their machine learning applications. In this session, six industry practitioners, hailing from LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, and Salesforce, share insights from the work they have undertaken in the area of fairness: what has worked and what has not, lessons learned, and best practices instituted as a result.

• Krishnaram Kenthapadi presents LinkedIn's fairness-aware reranking for talent search (a simplified sketch follows this abstract).
• Julie Dawson shares how Yoti applies ML fairness research to age estimation in their digital identity platform.
• Hanna Wallach describes how Microsoft is applying fairness principles in practice.
• Lewis Baker presents Pymetrics's fairness mechanisms in their hiring algorithm.
• Isabel Kloumann presents Facebook's fairness assessment framework through a case study of fairness in a content moderation system.
• Sarah Aerni describes how Salesforce is building fairness features into the Einstein AI platform.

Building on these presentations, we discuss the practitioners' insights and brainstorm ways to build upon their work. Opportunities for further research or collaboration are identified, with the goal of developing a shared understanding of the experiences and needs of AI ethics practitioners. Ultimately, the aim is to develop a playbook for more ethical and fair AI product development and deployment.
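For readers unfamiliar with fairness-aware reranking, the sketch below illustrates the general idea behind the approach Kenthapadi presents: a greedy re-ranker that, at every prefix of the result list, keeps each group's representation at or above a target floor. This is a minimal illustration in the spirit of the published LinkedIn work (Geyik et al., KDD 2019), not LinkedIn's actual implementation; the candidate pool, group labels, and `target_proportions` parameter are invented for the example.

```python
from dataclasses import dataclass

# Minimal sketch of greedy fairness-aware re-ranking. NOT production code:
# names, scores, and target proportions below are illustrative assumptions.

@dataclass
class Candidate:
    name: str     # hypothetical identifier
    score: float  # relevance score from the upstream ranker
    group: str    # protected-attribute value for this candidate

def rerank(candidates, target_proportions):
    """At every prefix of length k, each group g should appear at least
    floor(target_proportions[g] * k) times. Among the remaining candidates,
    prefer the best-scoring one from an under-represented group."""
    remaining = sorted(candidates, key=lambda c: c.score, reverse=True)
    counts = {g: 0 for g in target_proportions}
    result = []
    for k in range(1, len(candidates) + 1):
        # Groups whose minimum representation would be violated at prefix k.
        needed = [g for g, p in target_proportions.items()
                  if counts[g] < int(p * k)]
        # Fall back to the best-scoring candidate overall if no remaining
        # candidate can repair the constraint.
        pick = next((c for c in remaining if c.group in needed), remaining[0])
        remaining.remove(pick)
        counts[pick.group] = counts.get(pick.group, 0) + 1
        result.append(pick)
    return result

if __name__ == "__main__":
    pool = [Candidate("a", 0.95, "M"), Candidate("b", 0.93, "M"),
            Candidate("c", 0.90, "M"), Candidate("d", 0.88, "F"),
            Candidate("e", 0.80, "F")]
    for c in rerank(pool, {"M": 0.5, "F": 0.5}):
        print(c.name, c.group, c.score)
```

Running the example interleaves the lower-scoring group into the top of the list while otherwise preserving score order; the utility/fairness trade-off is controlled entirely by `target_proportions`.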