{"title":"人工智能作为一个非政治裁判:使用替代来源来减少事实核查信息处理中的党派偏见","authors":"Myojung Chung, Won-Ki Moon, S. Mo Jones-Jang","doi":"10.1080/21670811.2023.2254820","DOIUrl":null,"url":null,"abstract":"AbstractWhile fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans’ biased information processing. The efficacy of fact-checking often decreases, if not backfires, when the fact-checking messages contradict individual audiences’ political stance. To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels of fact-checking messages (human experts vs. AI vs. crowdsourcing vs. human experts-AI hybrid) influence partisans’ processing of fact-checking messages. Results showed that AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages whereas the partisan bias remained evident for the human experts and human experts-AI hybrid source labels.Keywords: AIartificial intelligencefact-checkingmisinformationmessage credibilityfake newsmotivated reasoningsocial media Disclosure StatementNo potential conflict of interest was reported by the author(s).Notes1 A series of analysis of variance (ANOVA) and Chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology, p = .493 for frequency of social media use). Thus, randomization was deemed successful.2 To further explore differences in message credibility across the four fact-checking source labels, one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed that there are significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen’s d = 0.23. Those in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI condition (M = 3.84, SD = 0.81). The crowdsourcing condition showed the lowest message credibility (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042). However, no significant differences were found among other source labels.","PeriodicalId":11166,"journal":{"name":"Digital Journalism","volume":null,"pages":null},"PeriodicalIF":5.2000,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages\",\"authors\":\"Myojung Chung, Won-Ki Moon, S. Mo Jones-Jang\",\"doi\":\"10.1080/21670811.2023.2254820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"AbstractWhile fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans’ biased information processing. The efficacy of fact-checking often decreases, if not backfires, when the fact-checking messages contradict individual audiences’ political stance. 
To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels of fact-checking messages (human experts vs. AI vs. crowdsourcing vs. human experts-AI hybrid) influence partisans’ processing of fact-checking messages. Results showed that AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages whereas the partisan bias remained evident for the human experts and human experts-AI hybrid source labels.Keywords: AIartificial intelligencefact-checkingmisinformationmessage credibilityfake newsmotivated reasoningsocial media Disclosure StatementNo potential conflict of interest was reported by the author(s).Notes1 A series of analysis of variance (ANOVA) and Chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology, p = .493 for frequency of social media use). Thus, randomization was deemed successful.2 To further explore differences in message credibility across the four fact-checking source labels, one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed that there are significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen’s d = 0.23. Those in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI condition (M = 3.84, SD = 0.81). The crowdsourcing condition showed the lowest message credibility (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042). However, no significant differences were found among other source labels.\",\"PeriodicalId\":11166,\"journal\":{\"name\":\"Digital Journalism\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.2000,\"publicationDate\":\"2023-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Journalism\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/21670811.2023.2254820\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Journalism","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/21670811.2023.2254820","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages
Abstract
While fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans’ biased information processing. The efficacy of fact-checking often decreases, if it does not backfire outright, when fact-checking messages contradict individual audiences’ political stance. To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels of fact-checking messages (human experts vs. AI vs. crowdsourcing vs. a human experts-AI hybrid) influence partisans’ processing of those messages. Results showed that the AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages, whereas partisan bias remained evident for the human experts and human experts-AI hybrid source labels.

Keywords: AI; artificial intelligence; fact-checking; misinformation; message credibility; fake news; motivated reasoning; social media

Disclosure Statement
No potential conflict of interest was reported by the author(s).

Notes
1. A series of analyses of variance (ANOVA) and chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology; p = .493 for frequency of social media use). Thus, randomization was deemed successful.
2. To further explore differences in message credibility across the four fact-checking source labels, a one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen’s d = 0.23. Those in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI condition (M = 3.84, SD = 0.81). The crowdsourcing condition showed the lowest message credibility (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042); no significant differences were found among the other source labels.
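To make the analysis described in note 2 concrete, the minimal sketch below shows how a one-way ANOVA across the four source-label conditions, followed by Bonferroni-corrected pairwise comparisons, could be run in Python with SciPy. The ratings are simulated around the reported means and standard deviations purely for illustration; the group sizes, variable names, and data are assumptions (chosen so the groups sum to N = 645, consistent with F(3, 641)), not the authors’ materials or analysis code.

```python
# Sketch of a one-way ANOVA across four fact-checking source-label conditions
# with Bonferroni-corrected pairwise comparisons. The data are simulated
# placeholders, not the study's actual ratings; only the analysis steps
# mirror the procedure described in note 2.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated message-credibility ratings (1-5 scale) per condition, drawn
# around the means/SDs reported in note 2; group sizes are assumed.
conditions = {
    "AI":            rng.normal(3.89, 0.79, 160),
    "human_experts": rng.normal(3.86, 0.89, 160),
    "experts_AI":    rng.normal(3.84, 0.81, 160),
    "crowdsourcing": rng.normal(3.66, 0.81, 165),
}

# One-way ANOVA across the four source labels.
f_stat, p_value = stats.f_oneway(*conditions.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Bonferroni post hoc: pairwise t-tests with alpha divided by the
# number of comparisons (6 pairs among 4 groups).
pairs = list(combinations(conditions, 2))
alpha_adjusted = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(conditions[a], conditions[b])
    flag = "significant" if p < alpha_adjusted else "n.s."
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} ({flag})")
```

Dividing alpha by the number of pairwise comparisons is equivalent to the more common software convention of multiplying each pairwise p-value by that number and comparing the result to .05.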
Journal Introduction:
Digital Journalism provides a critical forum for scholarly discussion, analysis and responses to the wide-ranging implications of digital technologies, along with economic, political and cultural developments, for the practice and study of journalism. Radical shifts in journalism are changing every aspect of the production, content and reception of news, and at a dramatic pace that has transformed ‘new media’ into ‘legacy media’ in barely a decade. These crucial changes challenge traditional assumptions in journalism practice, scholarship and education, make definitional boundaries fluid and require reassessment of even the most fundamental questions such as "What is journalism?" and "Who is a journalist?" Digital Journalism pursues a significant and exciting editorial agenda including:
- Digital media and the future of journalism;
- Social media as sources and drivers of news;
- The changing ‘places’ and ‘spaces’ of news production and consumption in the context of digital media;
- News on the move and mobile telephony;
- The personalisation of news;
- Business models for funding digital journalism in the digital economy;
- Developments in data journalism and data visualisation;
- New research methods to analyse and explore digital journalism;
- Hyperlocalism and new understandings of community journalism;
- Changing relationships between journalists, sources and audiences;
- Citizen and participatory journalism;
- Machine-written news and the automation of journalism;
- The history and evolution of online journalism;
- Changing journalism ethics in a digital setting;
- New challenges and directions for journalism education and training;
- Digital journalism, protest and democracy;
- Journalists’ changing role perceptions;
- Wikileaks and novel forms of investigative journalism.