{"title":"隐私-公平-准确性前沿:用于算法权衡的计算法律和经济学工具包","authors":"Aniket Kesari","doi":"10.1145/3511265.3550437","DOIUrl":null,"url":null,"abstract":"Both law and computer science are concerned with developing frameworks for protecting privacy and ensuring fairness. Both fields often consider these two values separately and develop legal doctrines and machine learning metrics in isolation from one another. Yet, privacy and fairness values can conflict, especially when considered alongside the accuracy of an algorithm. The computer science literature often treats this problem as an \"impossibility theorem\" - we can have privacy or fairness but not both. Legal doctrine is similarly constrained by a focus on the inputs to a decision - did the decisionmaker intend to use information about protected attributes. Despite these challenges, there is a way forward. The law has integrated economic frameworks to consider tradeoffs in other domains, and a similar approach can clarify policymakers' thinking around balancing accuracy, privacy, and fairnesss. This piece illustrates this idea by using a law & economics lens to formalize the notion of a Privacy-Fairness-Accuracy frontier, and demonstrating this framework on a consumer lending dataset. An open-source Python software library and GUI will be made available.","PeriodicalId":254114,"journal":{"name":"Proceedings of the 2022 Symposium on Computer Science and Law","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"The Privacy-Fairness-Accuracy Frontier: A Computational Law & Economics Toolkit for Making Algorithmic Tradeoffs\",\"authors\":\"Aniket Kesari\",\"doi\":\"10.1145/3511265.3550437\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Both law and computer science are concerned with developing frameworks for protecting privacy and ensuring fairness. Both fields often consider these two values separately and develop legal doctrines and machine learning metrics in isolation from one another. Yet, privacy and fairness values can conflict, especially when considered alongside the accuracy of an algorithm. The computer science literature often treats this problem as an \\\"impossibility theorem\\\" - we can have privacy or fairness but not both. Legal doctrine is similarly constrained by a focus on the inputs to a decision - did the decisionmaker intend to use information about protected attributes. Despite these challenges, there is a way forward. The law has integrated economic frameworks to consider tradeoffs in other domains, and a similar approach can clarify policymakers' thinking around balancing accuracy, privacy, and fairnesss. This piece illustrates this idea by using a law & economics lens to formalize the notion of a Privacy-Fairness-Accuracy frontier, and demonstrating this framework on a consumer lending dataset. 
An open-source Python software library and GUI will be made available.\",\"PeriodicalId\":254114,\"journal\":{\"name\":\"Proceedings of the 2022 Symposium on Computer Science and Law\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 Symposium on Computer Science and Law\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3511265.3550437\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 Symposium on Computer Science and Law","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3511265.3550437","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The Privacy-Fairness-Accuracy Frontier: A Computational Law & Economics Toolkit for Making Algorithmic Tradeoffs
Both law and computer science are concerned with developing frameworks for protecting privacy and ensuring fairness. Both fields often consider these two values separately and develop legal doctrines and machine learning metrics in isolation from one another. Yet privacy and fairness can conflict, especially when considered alongside the accuracy of an algorithm. The computer science literature often treats this problem as an "impossibility theorem": we can have privacy or fairness, but not both. Legal doctrine is similarly constrained by a focus on the inputs to a decision: did the decisionmaker intend to use information about protected attributes? Despite these challenges, there is a way forward. The law has integrated economic frameworks to consider tradeoffs in other domains, and a similar approach can clarify policymakers' thinking about balancing accuracy, privacy, and fairness. This piece illustrates the idea by using a law & economics lens to formalize the notion of a Privacy-Fairness-Accuracy frontier and by demonstrating the framework on a consumer lending dataset. An open-source Python software library and GUI will be made available.
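The abstract does not specify the library's API, so the following is only a minimal sketch of what tracing a privacy-fairness-accuracy frontier might look like in practice. It fits a logistic regression on synthetic lending-style data, treats feature noise injection as a crude stand-in for a formal differential-privacy mechanism, and records accuracy alongside a demographic-parity gap at each privacy level. All variable names, the synthetic data, and the noise-based mechanism are assumptions for illustration, not the paper's actual toolkit.

```python
# Hypothetical sketch: one slice of a privacy-fairness-accuracy frontier on
# synthetic lending-style data. Noise injection here is a crude stand-in for a
# formal differential-privacy mechanism, not the paper's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "consumer lending" data: an income-like feature, a debt-like
# feature, a binary protected attribute, and a repayment outcome.
n = 5000
group = rng.integers(0, 2, n)                    # protected attribute (0/1)
income = rng.normal(50 + 10 * group, 15, n)      # feature correlated with group
debt = rng.normal(30, 10, n)
logits = 0.08 * income - 0.05 * debt - 3.0
repaid = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([income, debt])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, repaid, group, test_size=0.3, random_state=0)

def evaluate(noise_scale):
    """Fit on noised features; report (accuracy, demographic-parity gap)."""
    X_noisy = X_tr + rng.normal(0, noise_scale, X_tr.shape)  # more noise ~ more privacy
    model = LogisticRegression(max_iter=1000).fit(X_noisy, y_tr)
    preds = model.predict(X_te)
    acc = (preds == y_te).mean()
    # Demographic parity gap: difference in approval rates across groups.
    gap = abs(preds[g_te == 1].mean() - preds[g_te == 0].mean())
    return acc, gap

# Sweep the privacy "knob" to trace how accuracy and the fairness gap move.
for noise in [0.0, 5.0, 10.0, 20.0, 40.0]:
    acc, gap = evaluate(noise)
    print(f"noise={noise:5.1f}  accuracy={acc:.3f}  parity_gap={gap:.3f}")
```

Plotting the resulting (accuracy, parity gap) pairs for each noise level would trace one empirical frontier; the paper's framework presumably generalizes this by treating privacy, fairness, and accuracy as jointly optimized quantities rather than a single noise sweep.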