“Always check important information!” - The role of disclaimers in the perception of AI-generated content

Angelica Lermann Henestrosa, Joachim Kimmerle
{"title":"“一定要检查重要信息!-免责声明在人工智能生成内容感知中的作用","authors":"Angelica Lermann Henestrosa ,&nbsp;Joachim Kimmerle","doi":"10.1016/j.chbah.2025.100142","DOIUrl":null,"url":null,"abstract":"<div><div>Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and adequately, yet not everyone is aware of them. Therefore, we explored in three experimental studies how disclaimers affect people's perceptions of AI-authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and authorship attributions—whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. limitations did not. In addition, we found participants to believe in the machine heuristic, that is, to attribute more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers fail to univocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100142"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"“Always check important information!” - The role of disclaimers in the perception of AI-generated content\",\"authors\":\"Angelica Lermann Henestrosa ,&nbsp;Joachim Kimmerle\",\"doi\":\"10.1016/j.chbah.2025.100142\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and adequately, yet not everyone is aware of them. Therefore, we explored in three experimental studies how disclaimers affect people's perceptions of AI-authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and authorship attributions—whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. 
limitations did not. In addition, we found participants to believe in the machine heuristic, that is, to attribute more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers fail to univocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"4 \",\"pages\":\"Article 100142\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S294988212500026X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S294988212500026X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and adequately, yet not everyone is aware of them. Therefore, we explored in three experimental studies how disclaimers affect people's perceptions of AI-authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and authorship attributions—whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. limitations did not. In addition, we found participants to believe in the machine heuristic, that is, to attribute more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers fail to univocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.