{"title":"为机器编程:性别、种族、性、人工智能,以及可信度和欺骗的构建","authors":"Lucy Hall","doi":"10.14763/2021.4.1601","DOIUrl":null,"url":null,"abstract":"There is increasing recognition of the significance of the political, social, economic, and strategic effects of artificial intelligence (AI). This raises important ethical questions regarding the programming, use, and regulation of AI. This paper argues that both the programming and application of AI are inherently (cis)gendered, sexualised and racialised. AI is, after all, programmed by humans and the issue of who trains AI, teaches it to learn, and the ethics of doing so are therefore critical to avoiding the reproduction of (cis)gendered and racist stereotypes. The paper’s empirical focus is the EU-funded project iBorderCtrl, designed to manage security risks and enhance the speed of border crossings for third country nationals via the implementation of several AI-based technologies, including facial recognition and deception detection. By drawing together literature from 1) risk and security 2) AI and ethics/migration/asylum and 3) race, gender, (in)security, and AI, this paper explores the implications of lie detection for both regular border crossings and refugee protection with a conceptual focus on the intersections of gender, sexuality, and race. We argue here that AI border technologies such as iBorderCtrl pose a significant risk of both further marginalising and discriminating against LGBT persons, persons of colour, and asylum seekers and reinforcing existing non entree practices and policies. Issue 4 This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.","PeriodicalId":219999,"journal":{"name":"Internet Policy Rev.","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Programming the machine: gender, race, sexuality, AI, and the construction of credibility and deceit a t the border\",\"authors\":\"Lucy Hall\",\"doi\":\"10.14763/2021.4.1601\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is increasing recognition of the significance of the political, social, economic, and strategic effects of artificial intelligence (AI). This raises important ethical questions regarding the programming, use, and regulation of AI. This paper argues that both the programming and application of AI are inherently (cis)gendered, sexualised and racialised. AI is, after all, programmed by humans and the issue of who trains AI, teaches it to learn, and the ethics of doing so are therefore critical to avoiding the reproduction of (cis)gendered and racist stereotypes. The paper’s empirical focus is the EU-funded project iBorderCtrl, designed to manage security risks and enhance the speed of border crossings for third country nationals via the implementation of several AI-based technologies, including facial recognition and deception detection. By drawing together literature from 1) risk and security 2) AI and ethics/migration/asylum and 3) race, gender, (in)security, and AI, this paper explores the implications of lie detection for both regular border crossings and refugee protection with a conceptual focus on the intersections of gender, sexuality, and race. 
We argue here that AI border technologies such as iBorderCtrl pose a significant risk of both further marginalising and discriminating against LGBT persons, persons of colour, and asylum seekers and reinforcing existing non entree practices and policies. Issue 4 This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.\",\"PeriodicalId\":219999,\"journal\":{\"name\":\"Internet Policy Rev.\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet Policy Rev.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14763/2021.4.1601\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Policy Rev.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14763/2021.4.1601","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
There is increasing recognition of the significance of the political, social, economic, and strategic effects of artificial intelligence (AI). This raises important ethical questions regarding the programming, use, and regulation of AI. This paper argues that both the programming and the application of AI are inherently (cis)gendered, sexualised, and racialised. AI is, after all, programmed by humans; the questions of who trains AI and teaches it to learn, and the ethics of doing so, are therefore critical to avoiding the reproduction of (cis)gendered and racist stereotypes. The paper's empirical focus is the EU-funded project iBorderCtrl, designed to manage security risks and speed up border crossings for third-country nationals through the implementation of several AI-based technologies, including facial recognition and deception detection. By drawing together literature on 1) risk and security, 2) AI and ethics/migration/asylum, and 3) race, gender, (in)security, and AI, this paper explores the implications of lie detection for both regular border crossings and refugee protection, with a conceptual focus on the intersections of gender, sexuality, and race. We argue that AI border technologies such as iBorderCtrl pose a significant risk both of further marginalising and discriminating against LGBT persons, persons of colour, and asylum seekers, and of reinforcing existing non-entrée practices and policies.

This paper is part of "Feminist data protection", a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.