Machine Bias. How Do Generative Language Models Answer Opinion Polls?

IF 6.5 · CAS Region 2 (Sociology) · Q1 SOCIAL SCIENCES, MATHEMATICAL METHODS
Julien Boelaert, Samuel Coavoux, Étienne Ollion, Ivaylo Petev, Patrick Präg
Journal: Sociological Methods & Research, 37(1)
DOI: 10.1177/00491241251330582
Published: 2025-04-21 (Journal Article)
Citations: 0

Abstract

Generative artificial intelligence (AI) is increasingly presented as a potential substitute for humans, including as research subjects. However, there is no scientific consensus on how closely these in silico clones can emulate survey respondents. While some defend the use of these “synthetic users,” others point toward social biases in the responses provided by large language models (LLMs). In this article, we demonstrate that these critics are right to be wary of using generative AI to emulate respondents, but probably not for the right reasons. Our results show (i) that to date, models cannot replace research subjects for opinion or attitudinal research; (ii) that they display a strong bias and a low variance on each topic; and (iii) that this bias randomly varies from one topic to the next. We label this pattern “machine bias,” a concept we define, and whose consequences for LLM-based research we further explore.
Source journal
CiteScore: 16.30
Self-citation rate: 3.20%
Articles per year: 40
About the journal: Sociological Methods & Research is a quarterly journal devoted to sociology as a cumulative empirical science. The objectives of SMR are multiple, but emphasis is placed on articles that advance the understanding of the field through systematic presentations that clarify methodological problems and assist in ordering the known facts in an area. Review articles will be published, particularly those that emphasize a critical analysis of the status of the arts, but original presentations that are broadly based and provide new research will also be published. Intrinsically, SMR is viewed as a substantive journal but one that is highly focused on the assessment of the scientific status of sociology. The scope is broad and flexible, and authors are invited to correspond with the editors about the appropriateness of their articles.