Programming the machine: gender, race, sexuality, AI, and the construction of credibility and deceit at the border

Lucy Hall
{"title":"Programming the machine: gender, race, sexuality, AI, and the construction of credibility and deceit a t the border","authors":"Lucy Hall","doi":"10.14763/2021.4.1601","DOIUrl":null,"url":null,"abstract":"There is increasing recognition of the significance of the political, social, economic, and strategic effects of artificial intelligence (AI). This raises important ethical questions regarding the programming, use, and regulation of AI. This paper argues that both the programming and application of AI are inherently (cis)gendered, sexualised and racialised. AI is, after all, programmed by humans and the issue of who trains AI, teaches it to learn, and the ethics of doing so are therefore critical to avoiding the reproduction of (cis)gendered and racist stereotypes. The paper’s empirical focus is the EU-funded project iBorderCtrl, designed to manage security risks and enhance the speed of border crossings for third country nationals via the implementation of several AI-based technologies, including facial recognition and deception detection. By drawing together literature from 1) risk and security 2) AI and ethics/migration/asylum and 3) race, gender, (in)security, and AI, this paper explores the implications of lie detection for both regular border crossings and refugee protection with a conceptual focus on the intersections of gender, sexuality, and race. We argue here that AI border technologies such as iBorderCtrl pose a significant risk of both further marginalising and discriminating against LGBT persons, persons of colour, and asylum seekers and reinforcing existing non entree practices and policies. Issue 4 This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.","PeriodicalId":219999,"journal":{"name":"Internet Policy Rev.","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Policy Rev.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14763/2021.4.1601","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

There is increasing recognition of the significance of the political, social, economic, and strategic effects of artificial intelligence (AI). This raises important ethical questions regarding the programming, use, and regulation of AI. This paper argues that both the programming and application of AI are inherently (cis)gendered, sexualised, and racialised. AI is, after all, programmed by humans; the questions of who trains AI and teaches it to learn, and the ethics of doing so, are therefore critical to avoiding the reproduction of (cis)gendered and racist stereotypes. The paper’s empirical focus is the EU-funded project iBorderCtrl, designed to manage security risks and enhance the speed of border crossings for third-country nationals via several AI-based technologies, including facial recognition and deception detection. By drawing together literature from 1) risk and security, 2) AI and ethics/migration/asylum, and 3) race, gender, (in)security, and AI, this paper explores the implications of lie detection for both regular border crossings and refugee protection, with a conceptual focus on the intersections of gender, sexuality, and race. We argue that AI border technologies such as iBorderCtrl pose a significant risk of further marginalising and discriminating against LGBT persons, persons of colour, and asylum seekers, and of reinforcing existing non-entrée practices and policies.

This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.