Chat GPT vs an experienced ophthalmologist: evaluating chatbot writing performance in ophthalmology.

IF 2.8 | Medicine, Zone 3 | Q1 OPHTHALMOLOGY
Eye Pub Date : 2025-04-01 DOI:10.1038/s41433-025-03779-1
Gabriel Katz, Ofira Zloto, Avner Hostovsky, Ruth Huna-Baron, Iris Ben-Bassat Mizrachi, Zvia Burgansky, Alon Skaat, Vicktoria Vishnevskia-Dai, Ido Didi Fabian, Oded Sagiv, Ayelet Priel, Benjamin S Glicksberg, Eyal Klang
Citations: 0

Abstract

Chat GPT vs an experienced ophthalmologist: evaluating chatbot writing performance in ophthalmology.

Purpose: To examine the ability of ChatGPT to write scientific ophthalmology introductions and to compare it with that of experienced ophthalmologists.

Methods: The OpenAI web interface was used to interact with ChatGPT-4 and prompt it to generate introductions for the selected papers. Consequently, each paper had two introductions: one drafted by ChatGPT and the other by the original author. Ten ophthalmology specialists, each with more than 15 years of experience and each representing a distinct subspecialty (retina, neuro-ophthalmology, oculoplastics, glaucoma, and ocular oncology), were given the two sets of introductions without being told the origin (ChatGPT or human author) and were asked to evaluate them.

Results: For each type of introduction, out of 45 instances, specialists correctly identified the source 26 times (57.7%) and erred 19 times (42.2%). The misclassification rate was 25% for experts evaluating introductions from their own subspecialty and 44.4% for experts assessing introductions outside their subspecialty domain. In the comparative evaluation of introductions written by ChatGPT and human authors, no significant difference was identified across the assessed metrics (language, data arrangement, factual accuracy, originality, data currency). The misclassification rate (the frequency at which reviewers incorrectly identified the authorship) was highest in oculoplastics (66.7%) and lowest in retina (11.1%).

Conclusions: ChatGPT represents a significant advance in facilitating the creation of original scientific papers in ophthalmology. The introductions generated by ChatGPT showed no statistically significant difference from those written by experts in terms of language, data organization, factual accuracy, originality, and currency of information. In addition, nearly half of them were indistinguishable from the originals. Future research should explore ChatGPT-4's utility in composing other sections of research papers and delve into the associated ethical considerations.

Source journal: Eye (Medicine, Ophthalmology)
CiteScore: 6.40
Self-citation rate: 5.10%
Articles per year: 481
Review time: 3-6 weeks
Journal description: Eye seeks to provide the international practising ophthalmologist with high-quality articles, of academic rigour, on the latest global clinical and laboratory-based research. Its core aim is to advance the science and practice of ophthalmology with the latest clinical- and scientific-based research. Whilst principally aimed at the practising clinician, the journal contains material of interest to a wider readership, including optometrists, orthoptists, other health care professionals, and research workers in all aspects of the field of visual science worldwide. Eye is the official journal of The Royal College of Ophthalmologists. Eye encourages the submission of original articles covering all aspects of ophthalmology, including: external eye disease; oculo-plastic surgery; orbital and lacrimal disease; ocular surface and corneal disorders; paediatric ophthalmology and strabismus; glaucoma; medical and surgical retina; neuro-ophthalmology; cataract and refractive surgery; ocular oncology; ophthalmic pathology; ophthalmic genetics.