Leap of FATE: human rights as a complementary framework for AI policy and practice
Corinne Cath, Mark Latonero, Vidushi Marda, Roya Pakzad
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 27, 2020
DOI: 10.1145/3351095.3375665 (https://doi.org/10.1145/3351095.3375665)
Cited by: 7
Abstract
The premise of this translation tutorial is that human rights serves as a complementary framework, in addition to Fairness, Accountability, Transparency, and Ethics, for guiding and governing artificial intelligence (AI) and machine learning research and development. Attendees will participate in a case study that demonstrates how a human rights framework, grounded in international law, fundamental values, and global systems of accountability, can offer the technical community a practical approach to addressing global AI risks and harms. This tutorial discusses how human rights frameworks can inform, guide, and govern AI policy and practice in a manner that is complementary to Fairness, Accountability, Transparency, and Ethics (FATE) frameworks. Using the case study of researchers developing a facial recognition API at a tech company and its use by a law enforcement client, we will engage the audience in thinking through the benefits and challenges of applying human rights frameworks to AI system design and deployment. We will do so by providing a brief overview of international human rights law and various non-binding human rights frameworks in relation to current discussions around FATE, and then applying them to contemporary debates and case studies.