{"title":"Bias does not equal bias: a socio-technical typology of bias in data-based algorithmic systems","authors":"Paola Lopez","doi":"10.14763/2021.4.1598","DOIUrl":null,"url":null,"abstract":": This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. The typology is linked to the conceptualisations of legal antidiscrimination regulations, so that the concept of structural inequality—and, therefore, of undesirable bias—is defined accordingly. By analysing the controversial Austrian “AMS algorithm” as a case study as well as examples in the contexts of face detection, risk assessment and health care management, this paper defines the following three types of bias: firstly, purely technical bias as a systematic deviation of the datafied version of a phenomenon from reality; secondly, socio-technical bias as a systematic deviation due to structural inequalities, which must be strictly distinguished from, thirdly, societal bias, which depicts—correctly—the structural inequalities that prevail in society. This paper argues that a clear distinction must be made between different concepts of bias in such systems in order to analytically assess these systems and, subsequently, inform political action.","PeriodicalId":219999,"journal":{"name":"Internet Policy Rev.","volume":"269 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Policy Rev.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14763/2021.4.1598","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. The typology is linked to the conceptualisations of legal antidiscrimination regulations, so that the concept of structural inequality—and, therefore, of undesirable bias—is defined accordingly. By analysing the controversial Austrian “AMS algorithm” as a case study, as well as examples in the contexts of face detection, risk assessment and health care management, this paper defines the following three types of bias: firstly, purely technical bias as a systematic deviation of the datafied version of a phenomenon from reality; secondly, socio-technical bias as a systematic deviation due to structural inequalities, which must be strictly distinguished from, thirdly, societal bias, which depicts—correctly—the structural inequalities that prevail in society. This paper argues that a clear distinction must be made between different concepts of bias in such systems in order to analytically assess these systems and, subsequently, inform political action.