Estimating divergent moral and diversity preferences between AI builders and AI users

IF 2.8 · CAS Zone 1 (Psychology) · JCR Q1 PSYCHOLOGY, EXPERIMENTAL
Zoe A. Purcell, Laura Charbit, Grégoire Borst, Anne-Marie Nussberger
DOI: 10.1016/j.cognition.2025.106198
Journal: Cognition, Volume 263, Article 106198
Published: 2025-06-05 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0010027725001386
Citations: 0

Abstract

AI builders' preferences influence AI technologies throughout the development cycle, yet the demographic homogeneity of the AI workforce raises concerns about potential misalignments with the more diverse population of AI users. This study examines whether demographic disparities between AI builders and AI users lead to systematic differences in two critical domains: personal moral beliefs and preferences for diversity-related machine outputs. Using a pseudo-experimental, cross-sectional design, we assessed the moral beliefs and diversity preferences of adults (N = 519, 20+ years) and adolescents (N = 395, 15–19 years) with varying levels of actual or projected AI engagement. In our sample, males and adults with higher AI engagement exhibited stronger endorsement of instrumental harm and weaker support for diversity. Given the largely male composition of the AI workforce, these findings suggest there may be critical value gaps between current builders and users. In contrast, our adolescent data indicated that—developmental changes notwithstanding—these differences may narrow in future cohorts, particularly with greater gender balance. Our results provide initial support for a broader concern: that demographic homogeneity in the AI workforce may contribute to belief and expectation gaps between AI builders and users, underscoring the critical need for a diverse AI workforce to ensure alignment with societal values.
Source journal: Cognition (PSYCHOLOGY, EXPERIMENTAL)
CiteScore: 6.40
Self-citation rate: 5.90%
Annual publications: 283
Journal description: Cognition is an international journal that publishes theoretical and experimental papers on the study of the mind. It covers a wide variety of subjects concerning all the different aspects of cognition, ranging from biological and experimental studies to formal analysis. Contributions from the fields of psychology, neuroscience, linguistics, computer science, mathematics, ethology and philosophy are welcome in this journal provided that they have some bearing on the functioning of the mind. In addition, the journal serves as a forum for discussion of social and political aspects of cognitive science.