Monica T. Whitty, Christopher Ruddy
Computers in Human Behavior Reports, vol. 20, Article 100797
DOI: 10.1016/j.chbr.2025.100797 · Published 2025-09-06
COVID-19 lies and truths: Employing the Elaboration Likelihood Model (ELM) and Linguistic Inquiry and Word Count (LIWC) to gain insights into the persuasive techniques evident in disinformation (fake news)
The spread of disinformation and the harm this causes continue to be a cybersecurity concern. Technical methods, such as Artificial Intelligence (AI), employed to detect disinformation automatically are often inadequate because they fail to consider psychological theory that may help to inform the models. This research aimed to overcome this shortcoming by examining the persuasive language evident in disinformation compared with genuine news. It applied the Elaboration Likelihood Model (ELM), a Dual Process Theory, to examine distinguishable cues in COVID-19 news stories: 70 fake and 70 genuine news stories. As predicted, fake news stories were more likely to contain the following cues: emotional appeals, repetition, celebrity figures, visual cues, and loudness cues. In contrast, as predicted, genuine news stories were more likely to contain the following cues: rational appeals and statistics. Additionally, we conducted a Linguistic Inquiry and Word Count (LIWC) analysis, which revealed that positive emotions and tones were more prevalent in genuine news stories. However, fake news stories did not contain more negative emotions and tones compared with genuine stories. Loudness cues (e.g., exclamation marks, bold text, overuse of capital letters) stood out as one of the most significant differences in the use of persuasive techniques across news types. This study demonstrates the importance of investigating how fake and genuine news compare by applying a psychological lens to interrogate the data, and the utility of drawing from the ELM to inform the development of Large Language Models (LLMs) for the automatic detection of fake news.
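The "loudness cues" the abstract singles out (exclamation marks, overuse of capital letters) are the kind of surface feature that is straightforward to operationalise for plain text. The sketch below is illustrative only and is not the authors' coding scheme: the function name and thresholds are assumptions, and bold text is omitted since it is not recoverable from plain text.

```python
def loudness_cues(text: str) -> dict:
    """Count simple 'loudness' markers in a plain-text news story.

    Illustrative sketch only: approximates two of the cues the paper
    describes (exclamation marks, overuse of capitals). The len > 2
    filter for all-caps words, which skips acronym-like tokens such
    as 'US' or 'UK', is an assumed heuristic, not the paper's rule.
    """
    words = text.split()
    all_caps = [w for w in words if len(w) > 2 and w.isupper()]
    return {
        "exclamation_marks": text.count("!"),
        "all_caps_words": len(all_caps),
    }

story = "MIRACLE cure FOUND!!! Doctors HATE this simple trick!"
print(loudness_cues(story))
```

Feature counts like these could feed a classifier alongside LIWC category scores, which is one way cue-based theory such as the ELM might inform an automatic detection model.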