Fabricating CSR authenticity: The Illusory Truth Effect of CSR communication on social media in the AI era

Impact Factor 4.1 · CAS Tier 3 (Management) · JCR Q2 (Business)
Laura Illia, Rafael Ballester-Ripoll, Anika K. Clausen
{"title":"编造企业社会责任真实性:人工智能时代社会化媒体企业社会责任传播的虚幻真实效应","authors":"Laura Illia ,&nbsp;Rafael Ballester-Ripoll ,&nbsp;Anika K. Clausen","doi":"10.1016/j.pubrev.2025.102588","DOIUrl":null,"url":null,"abstract":"<div><div>Corporate Social Responsibility (CSR) communication via social media offers significant opportunities for organizations. Posts by third-party stakeholders allow for critical evaluation of CSR efforts, fostering authenticity through the anonymous, collective sharing of personal experiences. The advent of Large Language Models (LLMs), which facilitate the rapid and cost-effective creation of bot-driven posts, raises concerns about whether an increasing number of fabricated CSR messages could linearly influence an audience’s perception of a company’s CSR authenticity. We base our hypotheses on the Illusory Truth Effect, suggesting that perceived authenticity can increase with exposure to more messages. However, this effect only continues up to a certain tipping point, after which it plateaus. We tested our hypotheses in a study with 480 participants, presenting AI-generated CSR testimonials about Shell to three groups: zero, low, and high exposure. We found a significant increase in perceived CSR authenticity in the low exposure group compared to the zero group, with the effect tapering off in the high exposure group. We conclude that LLMs can effectively replace human-written CSR messages for a fraction of a cent, yet the main strength of LLMs—sheer volume, leading to repeated exposure—is unlikely to become a concern.</div></div>","PeriodicalId":48263,"journal":{"name":"Public Relations Review","volume":"51 3","pages":"Article 102588"},"PeriodicalIF":4.1000,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fabricating CSR authenticity: The Illusory Truth Effect of CSR communication on social media in the AI era\",\"authors\":\"Laura Illia ,&nbsp;Rafael Ballester-Ripoll ,&nbsp;Anika K. Clausen\",\"doi\":\"10.1016/j.pubrev.2025.102588\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Corporate Social Responsibility (CSR) communication via social media offers significant opportunities for organizations. Posts by third-party stakeholders allow for critical evaluation of CSR efforts, fostering authenticity through the anonymous, collective sharing of personal experiences. The advent of Large Language Models (LLMs), which facilitate the rapid and cost-effective creation of bot-driven posts, raises concerns about whether an increasing number of fabricated CSR messages could linearly influence an audience’s perception of a company’s CSR authenticity. We base our hypotheses on the Illusory Truth Effect, suggesting that perceived authenticity can increase with exposure to more messages. However, this effect only continues up to a certain tipping point, after which it plateaus. We tested our hypotheses in a study with 480 participants, presenting AI-generated CSR testimonials about Shell to three groups: zero, low, and high exposure. We found a significant increase in perceived CSR authenticity in the low exposure group compared to the zero group, with the effect tapering off in the high exposure group. 
We conclude that LLMs can effectively replace human-written CSR messages for a fraction of a cent, yet the main strength of LLMs—sheer volume, leading to repeated exposure—is unlikely to become a concern.</div></div>\",\"PeriodicalId\":48263,\"journal\":{\"name\":\"Public Relations Review\",\"volume\":\"51 3\",\"pages\":\"Article 102588\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2025-05-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Public Relations Review\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0363811125000505\",\"RegionNum\":3,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"BUSINESS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Public Relations Review","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0363811125000505","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0

Abstract

Corporate Social Responsibility (CSR) communication via social media offers significant opportunities for organizations. Posts by third-party stakeholders allow for critical evaluation of CSR efforts, fostering authenticity through the anonymous, collective sharing of personal experiences. The advent of Large Language Models (LLMs), which facilitate the rapid and cost-effective creation of bot-driven posts, raises concerns about whether an increasing number of fabricated CSR messages could linearly influence an audience’s perception of a company’s CSR authenticity. We base our hypotheses on the Illusory Truth Effect, suggesting that perceived authenticity can increase with exposure to more messages. However, this effect only continues up to a certain tipping point, after which it plateaus. We tested our hypotheses in a study with 480 participants, presenting AI-generated CSR testimonials about Shell to three groups: zero, low, and high exposure. We found a significant increase in perceived CSR authenticity in the low exposure group compared to the zero group, with the effect tapering off in the high exposure group. We conclude that LLMs can effectively replace human-written CSR messages for a fraction of a cent, yet the main strength of LLMs—sheer volume, leading to repeated exposure—is unlikely to become a concern.
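The abstract's central hypothesis is that perceived CSR authenticity rises with repeated exposure to fabricated messages but levels off past a tipping point rather than growing linearly. A minimal sketch of that hypothesized exposure-response shape, using a generic saturating curve with made-up parameter values (the function form, rating scale, and exposure counts below are illustrative assumptions, not the authors' statistical model or study design), might look like this:

```python
def perceived_authenticity(n_messages: float, baseline: float = 3.0,
                           gain: float = 1.5, halfway: float = 4.0) -> float:
    """Hypothetical saturating response on a rating scale:
    rises with exposure, then flattens past a tipping point.
    All parameter values are made up for illustration."""
    return baseline + gain * n_messages / (halfway + n_messages)


if __name__ == "__main__":
    # Exposure counts are illustrative stand-ins for the study's
    # zero / low / high exposure conditions, not the actual design.
    for n in (0, 3, 10, 30):
        print(f"{n:>2} fabricated posts -> perceived authenticity = {perceived_authenticity(n):.2f}")
```

A linear model would keep rising with every additional message; a saturating form like the one above is what produces the reported pattern of a significant jump at low exposure followed by a plateau at high exposure.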
Source journal: Public Relations Review
CiteScore: 8.00
Self-citation rate: 19.00%
Articles published per year: 90
Journal description: The Public Relations Review is the oldest journal devoted to articles that examine public relations in depth, and commentaries by specialists in the field. Most of the articles are based on empirical research undertaken by professionals and academics in the field. In addition to research articles and commentaries, The Review publishes invited research in brief, and book reviews in the fields of public relations, mass communications, organizational communications, public opinion formation, social science research and evaluation, marketing, management and public policy formation.