Moussa Cheima, Altayyar Sarah, Vergonjeanne Marion, Gelle Thibaut, Preux Pierre-Marie
Validation of a generative artificial intelligence tool for the critical appraisal of articles on the epidemiology of mental health: Its application in the Middle East and North Africa.

Journal of epidemiology and population health, 73(2):202990. Published 2025-04-02. DOI: 10.1016/j.jeph.2025.202990

Mental health disorders account for a high burden of disability-adjusted life years in the Middle East and North Africa. This burden has led to a surge in related publications, prompting researchers to use AI tools such as ChatGPT to reduce time spent on routine tasks. Our study aimed to validate an AI-assisted critical appraisal (CA) tool by comparing it with human raters. We developed customized GPT models using ChatGPT-4: one model was tailored to evaluate studies with the Newcastle-Ottawa Scale (NOS) or the Jadad scale, while another evaluated them against the STROBE or CONSORT guidelines. Our results showed moderate to good agreement between human CA and our GPTs for the NOS in cohort, case-control and cross-sectional studies and for the Jadad scale, with ICCs of 0.68 [95% CI: 0.24-0.82], 0.69 [95% CI: 0.31-0.88], 0.76 [95% CI: 0.47-0.90] and 0.84 [95% CI: 0.57-0.94], respectively. There was also moderate to substantial agreement between the two methods for STROBE in cross-sectional, cohort and case-control studies, and for CONSORT in trial design, with kappa values of 0.63 [95% CI: 0.56-0.70], 0.57 [95% CI: 0.47-0.66], 0.48 [95% CI: 0.38-0.50] and 0.70 [95% CI: 0.63-0.77], respectively. Our custom GPT models produced hallucinations in 6.5% and 4.9% of cases, respectively. Human raters took an average of 19.6 ± 4.3 min per article, whereas our customized GPTs took only 1.4 min. ChatGPT could be a useful tool for handling repetitive tasks, yet its effective application relies on the critical expertise of researchers.
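The human-vs-GPT agreement on checklist guidelines (STROBE, CONSORT) is reported as Cohen's kappa. As a minimal sketch of how such item-level agreement between a human rater and a GPT rater could be computed (the ratings below are hypothetical, not the study's data):

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical item judgments (e.g. 1 = criterion met)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequency, summed over categories.
    cats = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgments on 10 reporting-checklist items.
human = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
gpt   = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohen_kappa(human, gpt), 2))  # → 0.52, moderate agreement
```

Libraries such as scikit-learn (`cohen_kappa_score`) implement the same statistic; the hand-rolled version is shown only to make the observed-versus-chance-agreement reasoning explicit.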
