{"title":"“一定要检查重要信息!-免责声明在人工智能生成内容感知中的作用","authors":"Angelica Lermann Henestrosa , Joachim Kimmerle","doi":"10.1016/j.chbah.2025.100142","DOIUrl":null,"url":null,"abstract":"<div><div>Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and adequately, yet not everyone is aware of them. Therefore, we explored in three experimental studies how disclaimers affect people's perceptions of AI-authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and authorship attributions—whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. limitations did not. In addition, we found participants to believe in the machine heuristic, that is, to attribute more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers fail to univocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100142"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"“Always check important information!” - The role of disclaimers in the perception of AI-generated content\",\"authors\":\"Angelica Lermann Henestrosa , Joachim Kimmerle\",\"doi\":\"10.1016/j.chbah.2025.100142\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and adequately, yet not everyone is aware of them. Therefore, we explored in three experimental studies how disclaimers affect people's perceptions of AI-authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and authorship attributions—whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. 
limitations did not. In addition, we found participants to believe in the machine heuristic, that is, to attribute more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers fail to univocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"4 \",\"pages\":\"Article 100142\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S294988212500026X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S294988212500026X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
“Always check important information!” - The role of disclaimers in the perception of AI-generated content
Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and appropriately, yet not everyone is aware of them. Therefore, we explored in three experimental studies how disclaimers affect people's perceptions of AI authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and authorship attribution, that is, whether content is authored solely by AI or co-authored with humans. Across the experiments, we found no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. limitations did not. In addition, participants endorsed the machine heuristic, that is, they attributed more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers do not unequivocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.