{"title":"公平还是不公平的算法微分?运气平均主义作为评估算法决策的镜头。","authors":"Laurens Naudts","doi":"10.2139/ssrn.3043707","DOIUrl":null,"url":null,"abstract":"Differentiation is often intrinsic to the functioning of algorithms. Within large data sets, ‘differentiating grounds’, such as correlations or patterns, are found, which in turn, can be applied by decision-makers to distinguish between individuals or groups of individuals. As the use of algorithms becomes more wide-spread, the chance that algorithmic forms of differentiation result in unfair outcomes increases. Intuitively, certain (random) algorithmic, classification acts, and the decisions that are based on them, seem to run counter to the fundamental notion of equality. It nevertheless remains difficult to articulate why exactly we find certain forms of algorithmic differentiation fair or unfair, vis-a-vis the general principle of equality. Concentrating on Dworkin’s notions brute and option luck, this discussion paper presents a luck egalitarian perspective as a potential approach for making this evaluation possible. The paper then considers whether this perspective can also inform us with regard to the interpretation of EU data protection legislation, and the General Data Protection Regulation in particular. Considering data protection’s direct focus on the data processes underlying algorithms, the GDPR might, when informed by egalitarian notions, form a more practically feasible way of governing algorithmic inequalities.","PeriodicalId":114865,"journal":{"name":"ERN: Neural Networks & Related Topics (Topic)","volume":"296 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Fair or Unfair Algorithmic Differentiation? Luck Egalitarianism As a Lens for Evaluating Algorithmic Decision-Making.\",\"authors\":\"Laurens Naudts\",\"doi\":\"10.2139/ssrn.3043707\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Differentiation is often intrinsic to the functioning of algorithms. Within large data sets, ‘differentiating grounds’, such as correlations or patterns, are found, which in turn, can be applied by decision-makers to distinguish between individuals or groups of individuals. As the use of algorithms becomes more wide-spread, the chance that algorithmic forms of differentiation result in unfair outcomes increases. Intuitively, certain (random) algorithmic, classification acts, and the decisions that are based on them, seem to run counter to the fundamental notion of equality. It nevertheless remains difficult to articulate why exactly we find certain forms of algorithmic differentiation fair or unfair, vis-a-vis the general principle of equality. Concentrating on Dworkin’s notions brute and option luck, this discussion paper presents a luck egalitarian perspective as a potential approach for making this evaluation possible. The paper then considers whether this perspective can also inform us with regard to the interpretation of EU data protection legislation, and the General Data Protection Regulation in particular. 
Considering data protection’s direct focus on the data processes underlying algorithms, the GDPR might, when informed by egalitarian notions, form a more practically feasible way of governing algorithmic inequalities.\",\"PeriodicalId\":114865,\"journal\":{\"name\":\"ERN: Neural Networks & Related Topics (Topic)\",\"volume\":\"296 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ERN: Neural Networks & Related Topics (Topic)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3043707\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ERN: Neural Networks & Related Topics (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3043707","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Differentiation is often intrinsic to the functioning of algorithms. Within large data sets, 'differentiating grounds', such as correlations or patterns, are found, which, in turn, decision-makers can apply to distinguish between individuals or groups of individuals. As the use of algorithms becomes more widespread, so does the chance that algorithmic forms of differentiation produce unfair outcomes. Intuitively, certain (random) algorithmic classification acts, and the decisions based on them, seem to run counter to the fundamental notion of equality. It nevertheless remains difficult to articulate why exactly we find certain forms of algorithmic differentiation fair or unfair vis-à-vis the general principle of equality. Concentrating on Dworkin's notions of brute luck and option luck, this discussion paper presents a luck egalitarian perspective as a potential approach for making this evaluation possible. The paper then considers whether this perspective can also inform the interpretation of EU data protection legislation, and the General Data Protection Regulation (GDPR) in particular. Given data protection's direct focus on the data processes underlying algorithms, the GDPR might, when informed by egalitarian notions, offer a more practically feasible way of governing algorithmic inequalities.
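To make the abstract's notion of a mined 'differentiating ground' concrete, the following minimal sketch (not taken from the paper; all names, postcodes, and thresholds are hypothetical) shows a toy decision rule that penalises applicants on two grounds: a circumstance they did not choose (brute luck) and a factor that partly reflects their own past choices (option luck).

```python
# Illustrative sketch only: a toy decision rule built on a hypothetical
# 'differentiating ground' mined from past data. All names, postcodes,
# and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class Applicant:
    postcode: str          # brute-luck circumstance: not chosen by the applicant
    missed_payments: int   # option-luck factor: partly the result of past choices


# Suppose historical data showed higher default rates in these postcodes,
# so the rule treats residence there as a proxy for risk (hypothetical values).
HIGH_RISK_POSTCODES = {"1000", "1030"}


def credit_decision(a: Applicant) -> str:
    """Differentiate between applicants using the mined pattern."""
    score = 100
    if a.postcode in HIGH_RISK_POSTCODES:
        score -= 50                    # penalty tied to a brute-luck trait
    score -= 15 * a.missed_payments    # penalty tied to option luck
    return "approve" if score >= 60 else "reject"


# Two applicants with identical payment histories receive different outcomes
# solely because of where they live.
print(credit_decision(Applicant(postcode="1000", missed_payments=0)))  # reject
print(credit_decision(Applicant(postcode="2040", missed_payments=0)))  # approve
```

On a luck-egalitarian reading of the kind the paper discusses, the postcode penalty differentiates on a brute-luck trait and is therefore suspect, whereas the penalty for missed payments tracks the applicant's own choices and is easier to defend.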