Contesting efficacy: Tensions between risk and inclusion in computer vision technology

Future Humanities. Pub Date: 2024-05-08. DOI: 10.1002/fhu2.12
Morgan Klaus Scheuerman
{"title":"Contesting efficacy: Tensions between risk and inclusion in computer vision technology","authors":"Morgan Klaus Scheuerman","doi":"10.1002/fhu2.12","DOIUrl":null,"url":null,"abstract":"<p>Machine learning (ML) methods are now commonly used to make automated predictions about human beings—their lives and their characteristics. Vast amounts of individual data are aggregated to make predictions about people's shopping preferences, health status, or likelihood to recommit a crime. <i>Computer vision</i>, an ML task for training a computer to metaphorically ‘see’ specific objects, is a pertinent domain for examining the interaction between ML and human identity. <i>Facial analysis (FA)</i>, a subset of computer vision trained to complete tasks like facial classification and facial recognition, is trained to read visual data to make classifications about innate human identities. Identities like age (Lin et al., <span>2006</span>), gender (Khan et al., <span>2013</span>), ethnicity (Lu &amp; Jain, <span>2004</span>) and even sexual orientation (Wang &amp; Kosinski, <span>2017</span>). Often, decisions about identity characteristics are made without explicit user input—or even user knowledge. Users, effectively, become ‘targets’ of the system, having no ability to contest these classifications. Surrounding these identity classifications are concerns about bias (e.g., Buolamwini &amp; Gebru, <span>2018</span>), representation (e.g., Hamidi et al., <span>2018</span>; Keyes, <span>2018</span>) and the embracing of pseudoscientific practices like physiognomy (e.g., Agüera y Arcas et al., <span>2017</span>).</p><p>In this short paper, I present several considerations for contestability for computer vision. By contestability, I refer to the agency that an individual has to contest the inputs and outputs of a computer vision system—including how one's data is collected, defined and used. I specifically focus on one identity trait for which to ground consideration: <i>gender</i>. Gender is a salient characteristic to consider given that criticisms of computer vision have stemmed from concerns of both sexism and cissexism, discrimination against transgender and nonbinary communities (Hibbs, <span>2014</span>). Gender in computer vision has largely been presented as binary (i.e., male vs. female) and has been exclusive of genders beyond the cisgender norm (e.g., in automatic gender recognition (AGR) systems that classify gender explicitly [Hamidi et al., <span>2018</span>; Keyes, <span>2018</span>; Scheuerman et al., <span>2019</span>]; in facial recognition systems that fail to properly recognise noncisgender male faces [Albiero et al., <span>2020</span>, <span>2022</span>; Urbi, <span>2018</span>]).</p><p>More specifically, I question whether the efficacy of AI technologies, like computer vision, are the correct pathway to ‘inclusivity’ for historically marginalised identities, like cisgender women and trans communities. By efficacy, I refer to the technical capability of a computer vision system to accurately classify or recognise diverse genders. Inclusivity thus refers to the inclusion of diverse genders in effective classification, rather than solely a cisgender male/female binary approach to gender. 
That is, I question whether a technology <i>working effectively</i> on marginalised populations is truly the form of inclusivity we should be striving towards, given increasing scrutiny that AI should even be used to accomplish tasks that may alter human lives.</p><p>Gender is a highly ubiquitous identity characteristic classified in computer vision. So much so, that there are computer vision models trained specifically for the task of classifying gender. <i>AGR</i> has been coined to describe gender classification methods in computer vision, like facial and body analysis (Hamidi et al., <span>2018</span>). ML researchers have contributed a great deal of effort into improving methods in pattern recognition for improving gender classification tasks—specifically, improving the accuracy of such tasks (e.g., Akbulut et al., <span>2017</span>). Proposed methods range from extracting facial morphology (Ramey &amp; Salichs, <span>2014</span>) to modelling gait (Yu et al., <span>2009</span>) to extracting hair features (Lee &amp; Wei, <span>2013</span>). Gender classification in computer vision has become so ubiquitous, it has been featured in almost every commercial FA service available for purchase (e.g., Amazon Rekognition – Video and Image—AWS, <span>2019</span>; Face API—Facial Recognition Software Microsoft Azure, <span>2019</span>; Watson Visual Recognition, <span>2019</span>).</p><p>As with most ML techniques that use human characteristics, concerns about fairness and bias have inundated AGR. Efforts to ensure that the predefined gender categories perform fairly on gender recognition targets has become a major of focus of this literature. For example, Buolamwini and Gebru (<span>2018</span>) notably found higher gender misclassification rates on women with darker skin than both men and women with lighter skin. Research on improving gender classification in computer vision has discussed that gender in AGR is performed solely on a binary—male or female, man or woman, masculine or feminine. This classification schema leaves out those who traverse the gender binary, or fall outside of it: trans and/or nonbinary1 people. In both academic AGR literature (Keyes, <span>2018</span>; Scheuerman et al., <span>2020</span>) and commercial settings (Scheuerman et al., <span>2019</span>), the possibility for gender to exist outside of the cisnormative2 has been largely erased. In fact, because AGR is largely trained on, presumably, cisgender, binary, normative faces, is it much more likely to perform poorly even on binary transgender individuals (Scheuerman et al., <span>2019</span>).</p><p>The only AGR work to date focused on improving AGR for trans individuals has been focused on recognising individual trans people across physical gender transition, using screenshots of educatory gender transition videos scraped from YouTube (Vincent, <span>2017b</span>). This work is arguably more focused on trans identity as a security concern than improving efficacy on behalf of transgender individuals. Thus, concerns about fairness in AGR have extended beyond bias auditing, raising questions about representation in technical systems and the harmful effects simplistic representations could have on individuals with marginalised genders—and larger community norms around gender. 
Even when companies claim to only encourage gender at the aggregate level, rather than the individual level (e.g., Amazon Rekognition), the continued use of stereotypical and reductive gender categories fuels an increasingly hostile sociopolitical climate against noncisgender individuals (see 2024 Anti-Trans Bills, <span>2024</span>).</p><p>Scrutiny of AGR's efficacy on women and trans people, and its reliance on sex-gender conflations broadly (Scheuerman et al., <span>2021</span>), has led to some effective changes in commercial AGR systems recently. Many of the commercial systems which explicitly classify gender presented in critical scholarship (Buolamwini &amp; Gebru, <span>2018</span>; Scheuerman et al., <span>2019</span>) have since removed AGR from their functionalities. For example, Microsoft's Azure no longer offers gender classification due to the potential risks (Bird, <span>2022</span>). Gender, at least for larger technology organisations, has shifted from an obvious classification feature to an ethically fraught one (Gustafson et al., <span>2023</span>). However, academic research on AGR continues ahead, largely unconcerned, still touting the importance of gender classification tasks to the field of computer vision (e.g., Patel &amp; Patel, <span>2023</span>; Reddy et al., <span>2023</span>). Further, smaller companies across the globe continue to develop and deploy AGR (e.g., Clarifai, Face++, SenseTime).</p><p>Gender in computer vision is also not limited to AGR tasks. Gender also shows up in the qualitative labels in image tagging models (Barlas et al., <span>2021</span>; Katzman et al., <span>2023</span>; Scheuerman et al., <span>2019</span>, <span>2020</span>). Now, in the age of generative AI, gendered imagery is something that can be generated using text prompts (Bianchi et al., <span>2023</span>; Bird et al., <span>2023</span>; Sun et al., <span>2024</span>). And of course, gender is a salient identity factor in facial recognition technology (Albiero et al., <span>2020</span>, <span>2022</span>; Urbi, <span>2018</span>).</p><p>A major facet of concerns about the deployment of gender in computer vision is around <i>agency</i>: the agency to contest what classification decisions are made, the agency to define one's own gender in the classification schema, the agency to determine how gender characteristics will be used, and the agency to participate in training and evaluating AGR techniques in the first place. But giving users agency over how their identities are classified by a computer vision system presents several challenges—technically and ethically.</p><p>Researchers critical of gender classifications in computer vision technologies suggest, among other considerations, that agency over representation in a system can help alleviate some concerns about inadequate gender constructions (Hamidi et al., <span>2018</span>; Keyes, <span>2018</span>; Scheuerman et al., <span>2019</span>). Allowing individuals with diverse genders to define more nuanced and inclusive schemas for defining gender in computer vision systems could ideally alleviate concerns about stereotypes and cisnormative binaries. However, there are a number of barriers to implementing user input and contestable interfaces when dealing with ML-based systems, like computer vision. 
In particular, I will focus on the <i>technical</i> and the <i>ethical</i> obstacles to increasing user agency with the goal of more inclusive gender systems, highlighting what the tradeoffs might be when attempting to implement them. These challenges are not exhaustive, but provide a brief overview of some of the technical and social considerations (which may also often intersect or diverge) to building effective and inclusive gender approaches in computer vision.</p><p>What does it mean for computer vision learning to be <i>more inclusive</i>? At present, creating more inclusive models is largely proposed as creating <i>more effective</i> models. That is, computer vision models which work well on a more diverse population are perceived as a successful measure of inclusivity. Yet, many approaches to increasing the inclusiveness of computer vision tasks, such as AGR including trans populations, decreases its efficacy. Computer vision models with a ‘non-binary’ category will likely fail more often on binary categories.</p><p>Further, incorporating mechanisms for user agency and contestability, both technically and ethically, may lead to fewer training data, decreased diversity in that training data (dependent on who opts in), larger variation of gender labels, and users who simply want nothing to do with AGR. Increasing the number of genders which might be classified in AGR may create less accurate, and thus effective, classification systems. And many trans people may not be wished to be classified by such systems at all, meaning <i>effective</i> is not an appropriate measure of <i>inclusive</i>.</p><p>All in all, there are many technical barriers and ethical risks (and tradeoffs) to be considered when trying to implement more diverse identity classifications, begging the question of whether ‘more inclusive’ is the right path for FA technologies in the first place. Do we <i>want</i> our FA systems to be <i>effectively</i> inclusive? Do we want trans populations to be more accurately classified by AGR—and other computer vision—models? Or do we approach inclusivity from a different perspective—one which centres the social and political risks faced by trans communities, rather than the efficacy of AI?</p><p>If the answer is yes, to centre affected populations like trans communities first and foremost, we may have to sacrifice efficacy. We may have to accept systems which do not work well, but centre diversity and agency. We may have to accept systems which do not work well <i>on trans communities</i>, but otherwise protect them from being classified explicitly as, for example, ‘non-binary’. We may have to regulate how specific use cases, such as AGR, can be used, so those people misclassified by ineffective systems are not penalised. We may, in the end, have to stop developing AGR models altogether.</p><p>But most importantly, whether we choose to prioritise <i>effective</i> inclusivity or <i>ineffective</i> inclusivity, we have to establish stringent ethical policies that prevent the misuse of computer vision on nonconsenting and marginalised individuals. 
Interdisciplinary researchers focused on ethical AI have the opportunity to shift the axis of power towards the most marginalised in society; we have the capability of ensuring our systems are effective at progressing collective goals, which may actually mean ensuring they are <i>ineffective</i>.</p><p>The author declares no conflict of interest.</p>","PeriodicalId":100563,"journal":{"name":"Future Humanities","volume":"2 1-2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/fhu2.12","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Humanities","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/fhu2.12","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Machine learning (ML) methods are now commonly used to make automated predictions about human beings—their lives and their characteristics. Vast amounts of individual data are aggregated to make predictions about people's shopping preferences, health status, or likelihood of recommitting a crime. Computer vision, an ML task for training a computer to metaphorically ‘see’ specific objects, is a pertinent domain for examining the interaction between ML and human identity. Facial analysis (FA), a subset of computer vision encompassing tasks like facial classification and facial recognition, is trained to read visual data to make classifications about innate human identities: identities like age (Lin et al., 2006), gender (Khan et al., 2013), ethnicity (Lu & Jain, 2004) and even sexual orientation (Wang & Kosinski, 2017). Often, decisions about identity characteristics are made without explicit user input—or even user knowledge. Users effectively become ‘targets’ of the system, with no ability to contest these classifications. Surrounding these identity classifications are concerns about bias (e.g., Buolamwini & Gebru, 2018), representation (e.g., Hamidi et al., 2018; Keyes, 2018) and the embrace of pseudoscientific practices like physiognomy (e.g., Agüera y Arcas et al., 2017).

In this short paper, I present several considerations for contestability in computer vision. By contestability, I refer to the agency that an individual has to contest the inputs and outputs of a computer vision system—including how one's data is collected, defined and used. I focus specifically on one identity trait to ground this consideration: gender. Gender is a salient characteristic to consider given that criticisms of computer vision have stemmed from concerns of both sexism and cissexism, discrimination against transgender and nonbinary communities (Hibbs, 2014). Gender in computer vision has largely been presented as binary (i.e., male vs. female) and has excluded genders beyond the cisgender norm (e.g., in automatic gender recognition (AGR) systems that classify gender explicitly [Hamidi et al., 2018; Keyes, 2018; Scheuerman et al., 2019]; in facial recognition systems that fail to properly recognise non-cisgender-male faces [Albiero et al., 2020, 2022; Urbi, 2018]).

More specifically, I question whether the efficacy of AI technologies, like computer vision, is the correct pathway to ‘inclusivity’ for historically marginalised identities, like cisgender women and trans communities. By efficacy, I refer to the technical capability of a computer vision system to accurately classify or recognise diverse genders. Inclusivity thus refers to the inclusion of diverse genders in effective classification, rather than solely a cisgender male/female binary approach to gender. That is, I question whether a technology working effectively on marginalised populations is truly the form of inclusivity we should be striving towards, given increasing scrutiny of whether AI should be used at all to accomplish tasks that may alter human lives.

Gender is one of the most ubiquitous identity characteristics classified in computer vision, so much so that there are computer vision models trained specifically for the task of classifying gender. The term AGR has been coined to describe gender classification methods in computer vision, like facial and body analysis (Hamidi et al., 2018). ML researchers have devoted a great deal of effort to improving pattern recognition methods for gender classification tasks—specifically, to improving the accuracy of such tasks (e.g., Akbulut et al., 2017). Proposed methods range from extracting facial morphology (Ramey & Salichs, 2014) to modelling gait (Yu et al., 2009) to extracting hair features (Lee & Wei, 2013). Gender classification in computer vision has become so ubiquitous that it is featured in almost every commercial FA service available for purchase (e.g., Amazon Rekognition – Video and Image—AWS, 2019; Face API—Facial Recognition Software Microsoft Azure, 2019; Watson Visual Recognition, 2019).
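
To make concrete how gender surfaces in these commercial services, the sketch below (not code from this paper) queries Amazon Rekognition's face-detection API, whose response exposes gender only as a male/female binary. The AWS credentials and the example image 'photo.jpg' are assumptions made for illustration.

```python
# Minimal sketch: requesting face attributes from Amazon Rekognition.
# Assumes boto3 is installed, AWS credentials are configured, and a local
# example image 'photo.jpg' exists (both hypothetical here).
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request the full attribute set, including Gender
    )

for face in response["FaceDetails"]:
    gender = face["Gender"]  # e.g., {"Value": "Female", "Confidence": 99.1}
    # The schema offers only 'Male' or 'Female': the binary critiqued above.
    print(gender["Value"], gender["Confidence"])
```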

As with most ML techniques that use human characteristics, concerns about fairness and bias have inundated AGR. Efforts to ensure that the predefined gender categories perform fairly on gender recognition targets have become a major focus of this literature. For example, Buolamwini and Gebru (2018) notably found higher gender misclassification rates for women with darker skin than for both men and women with lighter skin. Research on gender classification in computer vision has noted that gender in AGR is operationalised solely as a binary—male or female, man or woman, masculine or feminine. This classification schema leaves out those who traverse the gender binary, or fall outside of it: trans and/or nonbinary people. In both academic AGR literature (Keyes, 2018; Scheuerman et al., 2020) and commercial settings (Scheuerman et al., 2019), the possibility for gender to exist outside of the cisnormative has been largely erased. In fact, because AGR is largely trained on, presumably, cisgender, binary, normative faces, it is much more likely to perform poorly even on binary transgender individuals (Scheuerman et al., 2019).
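
The disaggregated evaluation underpinning audits of this kind can be sketched as follows; the records are invented placeholders rather than data from any real audit, and the point is simply that misclassification rates are computed per intersectional subgroup rather than over the whole test set.

```python
# Minimal sketch of a disaggregated bias audit: per-subgroup error rates.
# The records below are hypothetical placeholders, not real audit data.
from collections import defaultdict

records = [
    # (subgroup, true_label, predicted_label)
    ("darker-skinned woman", "female", "male"),
    ("darker-skinned woman", "female", "female"),
    ("lighter-skinned woman", "female", "female"),
    ("darker-skinned man", "male", "male"),
    ("lighter-skinned man", "male", "male"),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, true_label, predicted in records:
    totals[group] += 1
    errors[group] += int(predicted != true_label)

for group, total in totals.items():
    print(f"{group}: misclassification rate = {errors[group] / total:.2f}")
```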

The only AGR work to date aimed at improving AGR for trans individuals has focused on recognising individual trans people across physical gender transition, using screenshots of educational gender transition videos scraped from YouTube (Vincent, 2017b). This work is arguably more focused on trans identity as a security concern than on improving efficacy on behalf of transgender individuals. Thus, concerns about fairness in AGR have extended beyond bias auditing, raising questions about representation in technical systems and the harmful effects simplistic representations could have on individuals with marginalised genders—and on larger community norms around gender. Even when companies claim to encourage the use of gender predictions only at the aggregate level, rather than the individual level (e.g., Amazon Rekognition), the continued use of stereotypical and reductive gender categories fuels an increasingly hostile sociopolitical climate against noncisgender individuals (see 2024 Anti-Trans Bills, 2024).

Scrutiny of AGR's efficacy on women and trans people, and of its reliance on sex-gender conflations more broadly (Scheuerman et al., 2021), has recently led to some effective changes in commercial AGR systems. Many of the commercial systems that explicitly classified gender and were examined in critical scholarship (Buolamwini & Gebru, 2018; Scheuerman et al., 2019) have since removed AGR from their functionalities. For example, Microsoft's Azure no longer offers gender classification due to the potential risks (Bird, 2022). Gender, at least for larger technology organisations, has shifted from an obvious classification feature to an ethically fraught one (Gustafson et al., 2023). However, academic research on AGR continues apace, largely unconcerned, still touting the importance of gender classification tasks to the field of computer vision (e.g., Patel & Patel, 2023; Reddy et al., 2023). Further, smaller companies across the globe continue to develop and deploy AGR (e.g., Clarifai, Face++, SenseTime).

Gender in computer vision is not limited to AGR tasks. It also shows up in the qualitative labels of image tagging models (Barlas et al., 2021; Katzman et al., 2023; Scheuerman et al., 2019, 2020). Now, in the age of generative AI, gendered imagery can be generated from text prompts (Bianchi et al., 2023; Bird et al., 2023; Sun et al., 2024). And of course, gender is a salient identity factor in facial recognition technology (Albiero et al., 2020, 2022; Urbi, 2018).

A major facet of concerns about the deployment of gender in computer vision centres on agency: the agency to contest what classification decisions are made, the agency to define one's own gender in the classification schema, the agency to determine how gender characteristics will be used, and the agency to participate in training and evaluating AGR techniques in the first place. But giving users agency over how their identities are classified by a computer vision system presents several challenges, both technical and ethical.

Researchers critical of gender classifications in computer vision technologies suggest, among other considerations, that agency over one's representation in a system can help alleviate some concerns about inadequate gender constructions (Hamidi et al., 2018; Keyes, 2018; Scheuerman et al., 2019). Allowing individuals with diverse genders to shape more nuanced and inclusive schemas for defining gender in computer vision systems could, ideally, alleviate concerns about stereotypes and cisnormative binaries. However, there are a number of barriers to implementing user input and contestable interfaces in ML-based systems like computer vision. In particular, I will focus on the technical and the ethical obstacles to increasing user agency with the goal of more inclusive gender systems, highlighting what the tradeoffs might be when attempting to implement them. These challenges are not exhaustive, but they provide a brief overview of some of the technical and social considerations (which may often intersect or diverge) involved in building effective and inclusive gender approaches in computer vision.

What does it mean for computer vision to be more inclusive? At present, creating more inclusive models is largely framed as creating more effective models. That is, computer vision models which work well on a more diverse population are perceived as a successful measure of inclusivity. Yet many approaches to increasing the inclusiveness of computer vision tasks, such as including trans populations in AGR, decrease its efficacy. Computer vision models with a ‘non-binary’ category will likely fail more often on binary categories.
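
A minimal sketch of how this tradeoff might surface in evaluation is shown below, using invented placeholder predictions rather than outputs of any real AGR model: expanding the label schema to include a 'non-binary' class and reporting per-class accuracy can reveal failures both on the new class and, through added confusion, on the binary classes.

```python
# Minimal sketch: per-class accuracy under an expanded gender label schema.
# The labels and predictions are hypothetical placeholders for illustration.
import numpy as np

labels = ["male", "female", "non-binary"]
y_true = np.array(["male", "female", "non-binary", "male", "female", "non-binary"])
y_pred = np.array(["male", "female", "male", "female", "female", "female"])

for label in labels:
    mask = y_true == label
    accuracy = (y_pred[mask] == y_true[mask]).mean()
    print(f"{label}: per-class accuracy = {accuracy:.2f}")
# Lower accuracy can appear both on the added 'non-binary' class and, via
# increased confusion between classes, on the original binary categories.
```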

Further, incorporating mechanisms for user agency and contestability, both technically and ethically, may lead to less training data, decreased diversity in that training data (depending on who opts in), greater variation in gender labels, and users who simply want nothing to do with AGR. Increasing the number of genders that might be classified in AGR may create less accurate, and thus less effective, classification systems. And many trans people may not wish to be classified by such systems at all, meaning ‘effective’ is not an appropriate measure of ‘inclusive’.

All in all, there are many technical barriers and ethical risks (and tradeoffs) to be considered when trying to implement more diverse identity classifications, raising the question of whether ‘more inclusive’ is the right path for FA technologies in the first place. Do we want our FA systems to be effectively inclusive? Do we want trans populations to be more accurately classified by AGR—and other computer vision—models? Or do we approach inclusivity from a different perspective—one which centres the social and political risks faced by trans communities, rather than the efficacy of AI?

If the answer is yes, and we centre affected populations like trans communities first and foremost, we may have to sacrifice efficacy. We may have to accept systems which do not work well, but centre diversity and agency. We may have to accept systems which do not work well on trans communities, but otherwise protect them from being classified explicitly as, for example, ‘non-binary’. We may have to regulate how specific use cases, such as AGR, can be used, so those people misclassified by ineffective systems are not penalised. We may, in the end, have to stop developing AGR models altogether.

But most importantly, whether we choose to prioritise effective inclusivity or ineffective inclusivity, we have to establish stringent ethical policies that prevent the misuse of computer vision on nonconsenting and marginalised individuals. Interdisciplinary researchers focused on ethical AI have the opportunity to shift the axis of power towards the most marginalised in society; we have the capability of ensuring our systems are effective at advancing collective goals, which may actually mean ensuring they are ineffective.

The author declares no conflict of interest.
