{"title":"人工智能应用中的公平性","authors":"C. Shelley","doi":"10.1109/istas52410.2021.9629140","DOIUrl":null,"url":null,"abstract":"Applications of Artificial Intelligence (AI) that have broad, social impact for many people have recently increased greatly in number. They will continue to increase in ubiquity and impact for some time to come. In conjunction with this increase, many scholars have studied the nature of these impacts, including problems of fairness. Here, fairness refers to conflicts of interest between social groups that result from the configuration of these AI systems. One focus of research has been to define these fairness problems and to quantify them in a way that lends itself to calculation of fair outcomes. The purpose of this presentation is to show that this issue of fairness in AI is consistent with fairness problems posed by technological design in general and that addressing these problems goes beyond what can be readily quantified and calculated. For example, many such problems may be best resolved by forms of public consultation. This point is clarified by presenting an analytical tool, the Fairness Impact Assessment, and examples from AI and elsewhere.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fairness in AI applications\",\"authors\":\"C. Shelley\",\"doi\":\"10.1109/istas52410.2021.9629140\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Applications of Artificial Intelligence (AI) that have broad, social impact for many people have recently increased greatly in number. They will continue to increase in ubiquity and impact for some time to come. In conjunction with this increase, many scholars have studied the nature of these impacts, including problems of fairness. Here, fairness refers to conflicts of interest between social groups that result from the configuration of these AI systems. One focus of research has been to define these fairness problems and to quantify them in a way that lends itself to calculation of fair outcomes. The purpose of this presentation is to show that this issue of fairness in AI is consistent with fairness problems posed by technological design in general and that addressing these problems goes beyond what can be readily quantified and calculated. For example, many such problems may be best resolved by forms of public consultation. 
This point is clarified by presenting an analytical tool, the Fairness Impact Assessment, and examples from AI and elsewhere.\",\"PeriodicalId\":314239,\"journal\":{\"name\":\"2021 IEEE International Symposium on Technology and Society (ISTAS)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Symposium on Technology and Society (ISTAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/istas52410.2021.9629140\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Symposium on Technology and Society (ISTAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/istas52410.2021.9629140","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Applications of Artificial Intelligence (AI) that have broad social impact on many people have recently increased greatly in number, and they will continue to grow in ubiquity and impact for some time to come. In conjunction with this increase, many scholars have studied the nature of these impacts, including problems of fairness. Here, fairness refers to conflicts of interest between social groups that result from the configuration of these AI systems. One focus of research has been to define these fairness problems and to quantify them in a way that lends itself to the calculation of fair outcomes. The purpose of this presentation is to show that the issue of fairness in AI is consistent with fairness problems posed by technological design in general, and that addressing these problems goes beyond what can be readily quantified and calculated. For example, many such problems may be best resolved through forms of public consultation. This point is clarified by presenting an analytical tool, the Fairness Impact Assessment, along with examples from AI and elsewhere.
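To make the abstract's reference to quantified fairness concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of one widely used metric, the demographic parity difference, i.e. the gap in positive-outcome rates between two social groups. The function name, data, and group labels are invented for illustration; the abstract's own argument is that such calculations alone do not settle questions of fairness.

```python
# Illustrative sketch only: a demographic parity difference between two groups.
# Decisions and group labels below are hypothetical example data.

def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Return the difference in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes produced by an AI system
    groups:    list of group labels, aligned with decisions
    """
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / max(1, len(members))

    return rate(group_a) - rate(group_b)


if __name__ == "__main__":
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "B", "B", "B", "B", "A"]
    # A value near 0 suggests similar positive-outcome rates across groups;
    # per the abstract, this number by itself does not establish fairness.
    print(demographic_parity_difference(decisions, groups, "A", "B"))
```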