The Rise of Artificial Intelligence Phobia! Unveiling News-Driven Spread of AI Fear Sentiment Using ML, NLP, and LLMs

IF 3.6 | CAS Tier 3 (Computer Science) | JCR Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS
Jim Samuel;Tanya Khanna;Julia Esguerra;Srinivasaraghavan Sundar;Alexander Pelaez;Soumitra S. Bhuyan
{"title":"人工智能恐惧症的兴起!利用ML、NLP和llm揭示AI恐惧情绪的新闻驱动传播","authors":"Jim Samuel;Tanya Khanna;Julia Esguerra;Srinivasaraghavan Sundar;Alexander Pelaez;Soumitra S. Bhuyan","doi":"10.1109/ACCESS.2025.3588179","DOIUrl":null,"url":null,"abstract":"Contemporary public discourse surrounding artificial intelligence (AI) often displays disproportionate fear and confusion relative to AI’s actual potential. This study examines how the use of alarmist and fear-inducing language by news media contributes to negative public perceptions of AI. Nearly 70,000 AI-related news headlines were analyzed using natural language processing (NLP), machine learning (ML), and large language models (LLMs) to identify dominant themes and sentiment patterns. The theoretical framework draws on existing literature that posits the power of fear-inducing headlines to influence public perception and behavior, even when such headlines represent a relatively small proportion of total coverage. This research applies topic modeling and fear sentiment classification using BERT, LLaMA, and Mistral, alongside supervised ML techniques. The findings show a persistent presence of emotionally negative and fear-laden language in AI news coverage. This portrayal of AI as dangerous to humans or as an existential threat profoundly shapes public perception, fueling AI phobia that leads to behavioral resistance toward AI, which is ultimately detrimental to the science of AI. Furthermore, this can have an adverse impact on AI policies and regulations, leading to a stunted growth environment for AI. The study concludes with implications and recommendations to counter fear-driven narratives and suggests ways to improve public understanding of AI through responsible news media coverage, broad AI education, democratization of AI resources, and the drawing of clear distinctions between AI as a science versus commercial AI applications, to promote enhanced fact-based mass engagement with AI while preserving human dignity and agency.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"13 ","pages":"125944-125969"},"PeriodicalIF":3.6000,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11079577","citationCount":"0","resultStr":"{\"title\":\"The Rise of Artificial Intelligence Phobia! Unveiling News-Driven Spread of AI Fear Sentiment Using ML, NLP, and LLMs\",\"authors\":\"Jim Samuel;Tanya Khanna;Julia Esguerra;Srinivasaraghavan Sundar;Alexander Pelaez;Soumitra S. Bhuyan\",\"doi\":\"10.1109/ACCESS.2025.3588179\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Contemporary public discourse surrounding artificial intelligence (AI) often displays disproportionate fear and confusion relative to AI’s actual potential. This study examines how the use of alarmist and fear-inducing language by news media contributes to negative public perceptions of AI. Nearly 70,000 AI-related news headlines were analyzed using natural language processing (NLP), machine learning (ML), and large language models (LLMs) to identify dominant themes and sentiment patterns. The theoretical framework draws on existing literature that posits the power of fear-inducing headlines to influence public perception and behavior, even when such headlines represent a relatively small proportion of total coverage. This research applies topic modeling and fear sentiment classification using BERT, LLaMA, and Mistral, alongside supervised ML techniques. 
The findings show a persistent presence of emotionally negative and fear-laden language in AI news coverage. This portrayal of AI as dangerous to humans or as an existential threat profoundly shapes public perception, fueling AI phobia that leads to behavioral resistance toward AI, which is ultimately detrimental to the science of AI. Furthermore, this can have an adverse impact on AI policies and regulations, leading to a stunted growth environment for AI. The study concludes with implications and recommendations to counter fear-driven narratives and suggests ways to improve public understanding of AI through responsible news media coverage, broad AI education, democratization of AI resources, and the drawing of clear distinctions between AI as a science versus commercial AI applications, to promote enhanced fact-based mass engagement with AI while preserving human dignity and agency.\",\"PeriodicalId\":13079,\"journal\":{\"name\":\"IEEE Access\",\"volume\":\"13 \",\"pages\":\"125944-125969\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2025-07-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11079577\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Access\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11079577/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Access","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11079577/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Contemporary public discourse surrounding artificial intelligence (AI) often displays disproportionate fear and confusion relative to AI’s actual potential. This study examines how the use of alarmist and fear-inducing language by news media contributes to negative public perceptions of AI. Nearly 70,000 AI-related news headlines were analyzed using natural language processing (NLP), machine learning (ML), and large language models (LLMs) to identify dominant themes and sentiment patterns. The theoretical framework draws on existing literature that posits the power of fear-inducing headlines to influence public perception and behavior, even when such headlines represent a relatively small proportion of total coverage. This research applies topic modeling and fear sentiment classification using BERT, LLaMA, and Mistral, alongside supervised ML techniques. The findings show a persistent presence of emotionally negative and fear-laden language in AI news coverage. This portrayal of AI as dangerous to humans or as an existential threat profoundly shapes public perception, fueling AI phobia that leads to behavioral resistance toward AI, which is ultimately detrimental to the science of AI. Furthermore, this can have an adverse impact on AI policies and regulations, leading to a stunted growth environment for AI. The study concludes with implications and recommendations to counter fear-driven narratives and suggests ways to improve public understanding of AI through responsible news media coverage, broad AI education, democratization of AI resources, and the drawing of clear distinctions between AI as a science versus commercial AI applications, to promote enhanced fact-based mass engagement with AI while preserving human dignity and agency.
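The abstract describes topic modeling and fear-sentiment classification over roughly 70,000 AI-related headlines using fine-tuned BERT, LLaMA, and Mistral, but does not include the pipeline itself. As a rough illustration only, the Python sketch below scores a few made-up headlines for fear sentiment with an off-the-shelf zero-shot classifier from the Hugging Face transformers library; the model choice (facebook/bart-large-mnli), the candidate labels, and the sample headlines are assumptions for demonstration, not the authors' implementation.

    # Illustrative sketch: scoring headlines for fear sentiment with a
    # zero-shot transformer classifier (a stand-in for the paper's fine-tuned
    # BERT and prompted LLaMA/Mistral classifiers).
    from transformers import pipeline

    # Hypothetical sample headlines; the study analyzed ~70,000 real ones.
    headlines = [
        "AI could wipe out humanity, experts warn",
        "New AI tool helps doctors detect cancer earlier",
        "Regulators scramble as AI outpaces the law",
    ]

    # Zero-shot classification scores arbitrary labels without task-specific training.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")
    candidate_labels = ["fear", "neutral", "optimism"]

    for headline in headlines:
        result = classifier(headline, candidate_labels)
        # Labels are returned sorted by descending score; print the top one.
        print(f'{result["labels"][0]:>9} ({result["scores"][0]:.2f})  {headline}')

In a fuller pipeline along the lines the abstract sketches, a BERT-style classifier fine-tuned on labeled headlines and prompted LLMs would replace the zero-shot model, and a topic model (for example, LDA or BERTopic) would surface the dominant themes.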
Source journal
IEEE Access
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; ENGINEERING, ELECTRICAL & ELECTRONIC
CiteScore: 9.80
Self-citation rate: 7.70%
Articles published per year: 6673
Review turnaround: 6 weeks
Journal description: IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest. IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary", in that reviewers will either Accept or Reject an article in the form it is submitted in order to achieve rapid turnaround. Especially encouraged are submissions on: multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals; practical articles discussing new experiments or measurement techniques, or interesting solutions to engineering problems; development of new or improved fabrication or manufacturing techniques; and reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.