A Systematic Examination of Generative Artificial Intelligence (GAI) Usage Guidelines for Scholarly Publishing in Medical Journals

Shuhui Yin, Peiyi Lu, Zhuoran Xu, Zi Lian, Chenfei Ye, Chihua Li
{"title":"A Systematic Examination of Generative Artificial Intelligence (GAI) Usage Guidelines for Scholarly Publishing in Medical Journals","authors":"Shuhui Yin, Peiyi Lu, Zhuoran Xu, Zi Lian, Chenfei Ye, CHIHUA LI","doi":"10.1101/2024.03.19.24304550","DOIUrl":null,"url":null,"abstract":"Background A thorough and in-depth examination of generative artificial intelligence (GAI) usage guidelines in medical journals will inform potential gaps and promote proper GAI usage in scholarly publishing. This study aims to examine the provision and specificity of GAI usage guidelines and their relationships with journal characteristics. Methods From the SCImago Journal Rank (SJR) list for medicine in 2022, we selected 98 journals as top journals to represent highly indexed journals and 144 as whole-spectrum sample journals to represent all medical journals. We examined their GAI usage guidelines for scholarly publishing between December 2023 and January 2024. Results Compared to whole-spectrum sample journals, the top journals were more likely to provide author guidelines (64.3% vs. 27.8%) and reviewer guidelines (11.2% vs. 0.0%) as well as refer to external guidelines (85.7% vs 74.3%). Probit models showed that SJR score or region was not associated with the provision of these guidelines among top journals. However, among whole-spectrum sample journals, SJR score was positively associated with the provision of author guidelines (0.85, 95% CI 0.49 to 1.25) and references to external guidelines (2.01, 95% CI 1.24 to 3.65). Liner models showed that SJR score was positively associated with the specificity level of author and reviewer guidelines among whole-spectrum sample journals (1.21, 95% CI 0.72 to 1.70), and no such pattern was observed among top journals. Conclusions The provision of GAI usage guidelines is limited across medical journals, especially for reviewer guidelines. The lack of specificity and consistency in existing guidelines highlights areas deserving improvement. These findings suggest that immediate attention is needed to guide GAI usage in scholarly publishing in medical journals.","PeriodicalId":501154,"journal":{"name":"medRxiv - Medical Ethics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Medical Ethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.03.19.24304550","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: A thorough and in-depth examination of generative artificial intelligence (GAI) usage guidelines in medical journals can reveal potential gaps and promote proper GAI usage in scholarly publishing. This study aims to examine the provision and specificity of GAI usage guidelines and their relationships with journal characteristics.

Methods: From the 2022 SCImago Journal Rank (SJR) list for medicine, we selected 98 journals as top journals, representing highly indexed journals, and 144 as whole-spectrum sample journals, representing all medical journals. We examined their GAI usage guidelines for scholarly publishing between December 2023 and January 2024.

Results: Compared to whole-spectrum sample journals, top journals were more likely to provide author guidelines (64.3% vs. 27.8%) and reviewer guidelines (11.2% vs. 0.0%) and to refer to external guidelines (85.7% vs. 74.3%). Probit models showed that neither SJR score nor region was associated with the provision of these guidelines among top journals. However, among whole-spectrum sample journals, SJR score was positively associated with the provision of author guidelines (0.85, 95% CI 0.49 to 1.25) and with references to external guidelines (2.01, 95% CI 1.24 to 3.65). Linear models showed that SJR score was positively associated with the specificity level of author and reviewer guidelines among whole-spectrum sample journals (1.21, 95% CI 0.72 to 1.70); no such pattern was observed among top journals.

Conclusions: The provision of GAI usage guidelines is limited across medical journals, especially reviewer guidelines. The lack of specificity and consistency in existing guidelines highlights areas deserving improvement. These findings suggest that immediate attention is needed to guide GAI usage in scholarly publishing in medical journals.
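The Results paragraph refers to probit and linear models relating SJR score to the provision and specificity of GAI guidelines. As a rough illustrative sketch only, not the authors' code, and using entirely hypothetical data and variable names, models of this kind could be specified in Python with statsmodels as follows:

```python
# Illustrative sketch with hypothetical data; variable names are assumptions,
# not the study's actual dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 144  # hypothetical sample size, mirroring the whole-spectrum sample

# Hypothetical journal-level variables.
sjr_score = rng.gamma(shape=2.0, scale=1.5, size=n)        # SJR score
has_author_guideline = (rng.random(n) < 0.28).astype(int)  # 1 = author guideline provided
specificity = rng.integers(0, 4, size=n).astype(float)     # guideline specificity level

X = sm.add_constant(sjr_score)  # intercept plus SJR score

# Probit model: provision of author guidelines ~ SJR score.
probit_fit = sm.Probit(has_author_guideline, X).fit(disp=False)
print(probit_fit.params)      # coefficient on SJR score
print(probit_fit.conf_int())  # 95% confidence intervals

# Linear model: specificity level of guidelines ~ SJR score.
ols_fit = sm.OLS(specificity, X).fit()
print(ols_fit.params)
print(ols_fit.conf_int())
```

In such a specification, the reported coefficients (e.g., 0.85 for author guidelines) would correspond to the slope on SJR score, with the 95% confidence intervals taken from the fitted model.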