Identifying Disinformation on the Extended Impacts of COVID-19: Methodological Investigation Using a Fuzzy Ranking Ensemble of Natural Language Processing Models.

IF 5.8 | CAS Zone 2 (Medicine) | JCR Q1 HEALTH CARE SCIENCES & SERVICES
Jian-An Chen, Wu-Chun Chung, Che-Lun Hung, Chun-Ying Wu
{"title":"Identifying Disinformation on the Extended Impacts of COVID-19: Methodological Investigation Using a Fuzzy Ranking Ensemble of Natural Language Processing Models.","authors":"Jian-An Chen, Wu-Chun Chung, Che-Lun Hung, Chun-Ying Wu","doi":"10.2196/73601","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>During the COVID-19 pandemic, the continuous spread of misinformation on the internet posed an ongoing threat to public trust and understanding of epidemic prevention policies. Although the pandemic is now under control, information regarding the risks of long-term COVID-19 effects and reinfection still needs to be integrated into COVID-19 policies.</p><p><strong>Objective: </strong>This study aims to develop a robust and generalizable deep learning framework for detecting misinformation related to the prolonged impacts of COVID-19 by integrating pretrained language models (PLMs) with an innovative fuzzy rank-based ensemble approach.</p><p><strong>Methods: </strong>A comprehensive dataset comprising 566 genuine and 2361 fake samples was curated from reliable open sources and processed using advanced techniques. The dataset was randomly split using the scikit-learn package to facilitate both training and evaluation. Deep learning models were trained for 20 epochs on a Tesla T4 for hierarchical attention networks (HANs) and an RTX A5000 (for the other models). To enhance performance, we implemented an ensemble learning strategy that incorporated a reparameterized Gompertz function, which assigned fuzzy ranks based on each model's prediction confidence for each test case. This method effectively fused outputs from state-of-the-art PLMs such as robustly optimized bidirectional encoder representations from transformers pretraining approach (RoBERTa), decoding-enhanced bidirectional encoder representations from transformers with disentangled attention (DeBERTa), and XLNet.</p><p><strong>Results: </strong>After training on the dataset, various classification methods were evaluated on the test set, including the fuzzy rank-based method and state-of-the-art large language models. Experimental results reveal that language models, particularly XLNet, outperform traditional approaches that combine term frequency-inverse document frequency features with support vector machine or utilize deep models like HAN. The evaluation metrics-including accuracy, precision, recall, F<sub>1</sub>-score, and area under the curve (AUC)-indicated a clear performance advantage for models that had a larger number of parameters. However, this study also highlights that model architecture, training procedures, and optimization techniques are critical determinants of classification effectiveness. XLNet's permutation language modeling approach enhances bidirectional context understanding, allowing it to surpass even larger models in the bidirectional encoder representations from transformers (BERT) series despite having relatively fewer parameters. Notably, the fuzzy rank-based ensemble method, which combines multiple language models, achieved impressive results on the test set, with an accuracy of 93.52%, a precision of 94.65%, an F<sub>1</sub>-score of 96.03%, and an AUC of 97.15%.</p><p><strong>Conclusions: </strong>The fusion of ensemble learning with PLMs and the Gompertz function, employing fuzzy rank-based methodology, introduces a novel prediction approach with prospects for enhancing accuracy and reliability. 
Additionally, the experimental results imply that training solely on textual content can yield high prediction accuracy, thereby providing valuable insights into the optimization of fake news detection systems. These findings not only aid in detecting misinformation but also have broader implications for the application of advanced deep learning techniques in public health policy and communication.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"27 ","pages":"e73601"},"PeriodicalIF":5.8000,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Internet Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/73601","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Background: During the COVID-19 pandemic, the continuous spread of misinformation on the internet posed an ongoing threat to public trust and understanding of epidemic prevention policies. Although the pandemic is now under control, information regarding the risks of long-term COVID-19 effects and reinfection still needs to be integrated into COVID-19 policies.

Objective: This study aims to develop a robust and generalizable deep learning framework for detecting misinformation related to the prolonged impacts of COVID-19 by integrating pretrained language models (PLMs) with an innovative fuzzy rank-based ensemble approach.

Methods: A comprehensive dataset comprising 566 genuine and 2361 fake samples was curated from reliable open sources and processed using advanced techniques. The dataset was randomly split into training and test sets using the scikit-learn package. Deep learning models were trained for 20 epochs, on a Tesla T4 for hierarchical attention networks (HANs) and on an RTX A5000 for the other models. To enhance performance, we implemented an ensemble learning strategy that incorporated a reparameterized Gompertz function, which assigned fuzzy ranks based on each model's prediction confidence for each test case. This method fused outputs from state-of-the-art PLMs such as the robustly optimized bidirectional encoder representations from transformers pretraining approach (RoBERTa), decoding-enhanced bidirectional encoder representations from transformers with disentangled attention (DeBERTa), and XLNet.
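To make the fusion step concrete, the sketch below illustrates one common form of fuzzy rank-based ensembling: each model's softmax confidence is mapped to a fuzzy rank through a reparameterized Gompertz function, the ranks are summed across models, and the class with the smallest fused rank is predicted. The rank function r = 1 - exp(-exp(-2p)) and the example probabilities are illustrative assumptions, not the paper's exact reparameterization or penalty scheme.

```python
import numpy as np

def gompertz_fuzzy_rank(probs: np.ndarray) -> np.ndarray:
    """Map class confidences in [0, 1] to fuzzy ranks with a reparameterized
    Gompertz function; higher confidence yields a lower (better) rank."""
    return 1.0 - np.exp(-np.exp(-2.0 * probs))

def fuzzy_rank_ensemble(model_probs: list) -> np.ndarray:
    """Fuse per-model softmax outputs (each of shape [n_samples, n_classes])
    by summing their fuzzy ranks and predicting, for every sample, the class
    with the smallest fused rank."""
    fused = sum(gompertz_fuzzy_rank(p) for p in model_probs)
    return fused.argmin(axis=1)

# Hypothetical softmax outputs from three fine-tuned PLMs on two test samples
# (class 0 = genuine, class 1 = fake); real values would come from RoBERTa,
# DeBERTa, and XLNet classifiers.
roberta_p = np.array([[0.10, 0.90], [0.80, 0.20]])
deberta_p = np.array([[0.30, 0.70], [0.55, 0.45]])
xlnet_p = np.array([[0.20, 0.80], [0.60, 0.40]])

print(fuzzy_rank_ensemble([roberta_p, deberta_p, xlnet_p]))  # -> [1 0]
```

Because the Gompertz mapping decays quickly as confidence grows, a highly confident model contributes a much lower (better) rank than a lukewarm one, so the fused decision rewards agreement among confident models rather than simply averaging raw probabilities.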

Results: After training on the dataset, various classification methods were evaluated on the test set, including the fuzzy rank-based method and state-of-the-art large language models. Experimental results reveal that language models, particularly XLNet, outperform traditional approaches that combine term frequency-inverse document frequency (TF-IDF) features with a support vector machine (SVM) or use deep models such as HANs. The evaluation metrics, including accuracy, precision, recall, F1-score, and area under the curve (AUC), indicated a clear performance advantage for models with a larger number of parameters. However, this study also highlights that model architecture, training procedures, and optimization techniques are critical determinants of classification effectiveness. XLNet's permutation language modeling enhances bidirectional context understanding, allowing it to surpass even larger models in the bidirectional encoder representations from transformers (BERT) family despite having relatively fewer parameters. Notably, the fuzzy rank-based ensemble, which combines multiple language models, achieved strong results on the test set, with an accuracy of 93.52%, a precision of 94.65%, an F1-score of 96.03%, and an AUC of 97.15%.
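For context, the reported metrics can be computed from test-set predictions with scikit-learn as in the minimal sketch below; the variable names, decision threshold, and toy labels are hypothetical and only illustrate how accuracy, precision, recall, F1-score, and AUC are obtained, not the paper's actual evaluation pipeline.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, fake_scores, threshold=0.5):
    """Compute accuracy, precision, recall, F1-score, and AUC from ground-truth
    labels (1 = fake, 0 = genuine) and a per-sample score for the fake class."""
    y_pred = (np.asarray(fake_scores) >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, fake_scores),
    }

# Toy example with four test samples.
print(evaluate([1, 0, 1, 1], [0.92, 0.30, 0.75, 0.40]))
```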

Conclusions: The fusion of ensemble learning with PLMs and the Gompertz function, employing fuzzy rank-based methodology, introduces a novel prediction approach with prospects for enhancing accuracy and reliability. Additionally, the experimental results imply that training solely on textual content can yield high prediction accuracy, thereby providing valuable insights into the optimization of fake news detection systems. These findings not only aid in detecting misinformation but also have broader implications for the application of advanced deep learning techniques in public health policy and communication.

Source journal
CiteScore: 14.40
Self-citation rate: 5.40%
Articles published: 654
Review time: 1 month
Journal description: The Journal of Medical Internet Research (JMIR) is a highly respected publication in the field of health informatics and health services. Founded in 1999, JMIR has been a pioneer in the field for over two decades. The journal focuses on digital health, data science, health informatics, and emerging technologies for health, medicine, and biomedical research. It is recognized as a top publication in these disciplines, ranking in the first quartile (Q1) by Impact Factor, and is ranked #1 on Google Scholar within the "Medical Informatics" discipline.