K. Baxter, Yoav Schlesinger, Sarah E. Aerni, Lewis Baker, Julie Dawson, K. Kenthapadi, Isabel M. Kloumann, Hanna M. Wallach
{"title":"弥合人工智能伦理研究与实践之间的差距","authors":"K. Baxter, Yoav Schlesinger, Sarah E. Aerni, Lewis Baker, Julie Dawson, K. Kenthapadi, Isabel M. Kloumann, Hanna M. Wallach","doi":"10.1145/3351095.3375680","DOIUrl":null,"url":null,"abstract":"The study of fairness in machine learning applications has seen significant academic inquiry, research and publication in recent years. Concurrently, technology companies have begun to instantiate nascent program in AI ethics and product ethics more broadly. As a result of these efforts, AI ethics practitioners have piloted new processes to evaluate and ensure fairness in their machine learning applications. In this session, six industry practitioners, hailing from LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, and Salesforce share insights from the work they have undertaken in the area of fairness, what has worked and what has not, lessons learned and best practices instituted as a result. • Krishnaram Kenthapadi presents LinkedIn's fairness-aware reranking for talent search. • Julie Dawson shares how Yoti applies ML fairness research to age estimation in their digital identity platform. • Hanna Wallach contributes how Microsoft is applying fairness principles in practice. • Lewis Baker presents Pymetric's fairness mechanisms in their hiring algorithm. • Isabel Kloumann presents Facebook's fairness assessment framework through a case study of fairness in a content moderation system. • Sarah Aerni contributes how Salesforce is building fairness features into the Einstein AI platform. Building on those insights, we discuss insights and brainstorm modalities through which to build upon the practitioners' work. Opportunities for further research or collaboration are identified, with the goal of developing a shared understanding of experiences and needs of AI ethics practitioners. Ultimately, the aim is to develop a playbook for more ethical and fair AI product development and deployment.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Bridging the gap from AI ethics research to practice\",\"authors\":\"K. Baxter, Yoav Schlesinger, Sarah E. Aerni, Lewis Baker, Julie Dawson, K. Kenthapadi, Isabel M. Kloumann, Hanna M. Wallach\",\"doi\":\"10.1145/3351095.3375680\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The study of fairness in machine learning applications has seen significant academic inquiry, research and publication in recent years. Concurrently, technology companies have begun to instantiate nascent program in AI ethics and product ethics more broadly. As a result of these efforts, AI ethics practitioners have piloted new processes to evaluate and ensure fairness in their machine learning applications. In this session, six industry practitioners, hailing from LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, and Salesforce share insights from the work they have undertaken in the area of fairness, what has worked and what has not, lessons learned and best practices instituted as a result. • Krishnaram Kenthapadi presents LinkedIn's fairness-aware reranking for talent search. • Julie Dawson shares how Yoti applies ML fairness research to age estimation in their digital identity platform. 
• Hanna Wallach contributes how Microsoft is applying fairness principles in practice. • Lewis Baker presents Pymetric's fairness mechanisms in their hiring algorithm. • Isabel Kloumann presents Facebook's fairness assessment framework through a case study of fairness in a content moderation system. • Sarah Aerni contributes how Salesforce is building fairness features into the Einstein AI platform. Building on those insights, we discuss insights and brainstorm modalities through which to build upon the practitioners' work. Opportunities for further research or collaboration are identified, with the goal of developing a shared understanding of experiences and needs of AI ethics practitioners. Ultimately, the aim is to develop a playbook for more ethical and fair AI product development and deployment.\",\"PeriodicalId\":377829,\"journal\":{\"name\":\"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-01-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3351095.3375680\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3375680","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Bridging the gap from AI ethics research to practice
The study of fairness in machine learning applications has seen significant academic inquiry, research, and publication in recent years. Concurrently, technology companies have begun to instantiate nascent programs in AI ethics and product ethics more broadly. As a result of these efforts, AI ethics practitioners have piloted new processes to evaluate and ensure fairness in their machine learning applications. In this session, six industry practitioners, hailing from LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, and Salesforce, share insights from the work they have undertaken in the area of fairness: what has worked and what has not, lessons learned, and best practices instituted as a result.

• Krishnaram Kenthapadi presents LinkedIn's fairness-aware reranking for talent search (see the illustrative sketch below).
• Julie Dawson shares how Yoti applies ML fairness research to age estimation in its digital identity platform.
• Hanna Wallach describes how Microsoft is applying fairness principles in practice.
• Lewis Baker presents Pymetrics' fairness mechanisms in its hiring algorithm.
• Isabel Kloumann presents Facebook's fairness assessment framework through a case study of fairness in a content moderation system.
• Sarah Aerni describes how Salesforce is building fairness features into the Einstein AI platform.

Building on these presentations, we discuss and brainstorm modalities through which to build upon the practitioners' work. Opportunities for further research or collaboration are identified, with the goal of developing a shared understanding of the experiences and needs of AI ethics practitioners. Ultimately, the aim is to develop a playbook for more ethical and fair AI product development and deployment.
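To make the idea of fairness-aware reranking concrete, the sketch below shows one simple, hypothetical formulation: greedily re-rank candidates so that every top-k prefix contains at least a target minimum share of each group, while otherwise following relevance order. This is not LinkedIn's published implementation; the `Candidate` structure, the per-prefix constraint, and the assumption that every candidate's group appears in the target-proportion map are all illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    id: str
    score: float   # relevance score from the upstream ranker
    group: str     # protected-attribute value (assumed known for every candidate)

def fairness_aware_rerank(candidates, target_proportions):
    """Greedily re-rank so that every prefix of length k contains at least
    floor(target_proportions[g] * k) members of each group g; when no
    constraint is at risk, fall back to picking the most relevant candidate."""
    # One queue per group, each sorted by descending relevance.
    by_group = {g: sorted((c for c in candidates if c.group == g),
                          key=lambda c: -c.score)
                for g in target_proportions}
    reranked = []
    counts = {g: 0 for g in target_proportions}
    for k in range(1, len(candidates) + 1):
        # Groups whose minimum-representation constraint would be violated at prefix k.
        needy = [g for g in target_proportions
                 if by_group[g] and counts[g] < int(target_proportions[g] * k)]
        pool = needy if needy else [g for g in target_proportions if by_group[g]]
        # Among eligible groups, take the candidate with the highest relevance.
        g_best = max(pool, key=lambda g: by_group[g][0].score)
        chosen = by_group[g_best].pop(0)
        counts[g_best] += 1
        reranked.append(chosen)
    return reranked

if __name__ == "__main__":
    cands = [Candidate("a", 0.9, "m"), Candidate("b", 0.8, "m"),
             Candidate("c", 0.7, "f"), Candidate("d", 0.6, "m")]
    # With equal 50/50 targets the pure-relevance order a, b, d, c
    # becomes a, c, b, d, interleaving the under-represented group.
    print([c.id for c in fairness_aware_rerank(cands, {"m": 0.5, "f": 0.5})])
```

The design choice here, enforcing a floor on representation at every prefix rather than only in the full list, reflects the fact that recruiters typically inspect only the top of a ranked result page, so list-level parity alone would not guarantee exposure for under-represented groups.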