Using Artificial Intelligence ChatGPT to Access Medical Information about Chemical Eye Injuries: A Comparative Study.

Impact Factor: 2.0 | JCR Quartile: Q3 (Health Care Sciences & Services)
Layan Yousef Alharbi, Rema Rashed Alrashoud, Bader Shabib Alotaibi, Abdulaziz Meshal Al Dera, Raghad Saleh Alajlan, Reem Rashed AlHuthail, Dalal Ibrahim Alessa
{"title":"Using Artificial Intelligence ChatGPT to Access Medical Information about Chemical Eye Injuries: A Comparative Study.","authors":"Layan Yousef Alharbi, Rema Rashed Alrashoud, Bader Shabib Alotaibi, Abdulaziz Meshal Al Dera, Raghad Saleh Alajlan, Reem Rashed AlHuthail, Dalal Ibrahim Alessa","doi":"10.2196/73642","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Background: Chemical ocular injuries are a major public health issue. They cause eye damage from harmful chemicals and can lead to severe vision loss or blindness if not treated promptly and effectively. Although medical knowledge has advanced, accessing reliable and understandable information on these injuries remains a challenge. This is due to unverified online content and complex terminology. Artificial Intelligence (AI) tools like ChatGPT provide a promising solution by simplifying medical information and making it more accessible to the general public.</p><p><strong>Objective: </strong>Objective: This study aims to assess the use of ChatGPT in providing reliable, accurate, and accessible medical information on chemical ocular injuries. It evaluates the correctness, thematic accuracy, and coherence of ChatGPT's responses compared to established medical guidelines and explores its potential for patient education.</p><p><strong>Methods: </strong>Methods: Nine questions were entered to ChatGPT regarding various aspects of chemical ocular injuries. These included the definition, prevalence, etiology, prevention, symptoms, diagnosis, treatment, follow-up, and complications. The responses provided by ChatGPT were compared to the ICD-9 and ICD-10 guidelines for chemical (alkali and acid) injuries of the conjunctiva and cornea. The evaluation focused on criteria such as correctness, thematic accuracy, coherence to assess the accuracy of ChatGPT's responses. The inputs were categorized into three distinct groups, and statistical analyses, including Flesch-Kincaid readability tests, ANOVA, and trend analysis, were conducted to assess their readability, complexity and trends.</p><p><strong>Results: </strong>Results: The results showed that ChatGPT provided accurate and coherent responses for most questions about chemical ocular injuries, demonstrating thematic relevance. However, the responses sometimes overlooked critical clinical details or guideline-specific elements, such as emphasizing the urgency of care, using precise classification systems, and addressing detailed diagnostic or management protocols. While the answers were generally valid, they occasionally included less relevant or overly generalized information. This reduced their consistency with established medical guidelines. The average FRES was 33.84 ± 2.97, indicating a fairly challenging reading level, while the FKGL averaged 14.21 ± 0.97, suitable for readers with college-level proficiency. Passive voice was used in 7.22% ± 5.60% of sentences, indicating moderate reliance. Statistical analysis showed no significant differences in FRES (p = .385), FKGL (p = .555), or passive sentence usage (p = .601) across categories, as determined by one-way ANOVA. Readability remained relatively constant across the three categories, as determined by trend analysis.</p><p><strong>Conclusions: </strong>Conclusions: ChatGPT shows strong potential in providing accurate and relevant information about chemical ocular injuries. However, its language complexity may prevent accessibility for individuals with lower health literacy and sometimes miss critical aspects. 
Future improvements should focus on enhancing readability, increasing context-specific accuracy, and tailoring responses to person needs and literacy levels.</p><p><strong>Clinicaltrial: </strong>This is not RCT.</p>","PeriodicalId":14841,"journal":{"name":"JMIR Formative Research","volume":" ","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Formative Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/73642","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Chemical ocular injuries are a major public health issue. Caused by contact with harmful chemicals, they can lead to severe vision loss or blindness if not treated promptly and effectively. Despite advances in medical knowledge, accessing reliable and understandable information on these injuries remains a challenge, largely because of unverified online content and complex terminology. Artificial intelligence (AI) tools such as ChatGPT offer a promising solution by simplifying medical information and making it more accessible to the general public.

Objective: This study aims to assess the use of ChatGPT in providing reliable, accurate, and accessible medical information on chemical ocular injuries. It evaluates the correctness, thematic accuracy, and coherence of ChatGPT's responses against established medical guidelines and explores its potential for patient education.

Methods: Nine questions covering various aspects of chemical ocular injuries were entered into ChatGPT, addressing the definition, prevalence, etiology, prevention, symptoms, diagnosis, treatment, follow-up, and complications. ChatGPT's responses were compared with the ICD-9 and ICD-10 guidelines for chemical (alkali and acid) injuries of the conjunctiva and cornea, and were evaluated against criteria of correctness, thematic accuracy, and coherence. The inputs were categorized into three distinct groups, and statistical analyses, including Flesch-Kincaid readability tests, one-way ANOVA, and trend analysis, were conducted to assess readability, complexity, and trends across categories.
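For reference, the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) reported below follow the standard formulas (restated here for convenience; the abstract itself does not define them):

\[
\mathrm{FRES} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}}
\]
\[
\mathrm{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59
\]

Lower FRES and higher FKGL both indicate more difficult text; a FRES in the low 30s corresponds roughly to college-level reading material.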

Results: ChatGPT provided accurate and coherent responses for most questions about chemical ocular injuries, demonstrating thematic relevance. However, the responses sometimes overlooked critical clinical details or guideline-specific elements, such as emphasizing the urgency of care, using precise classification systems, and addressing detailed diagnostic or management protocols. While the answers were generally valid, they occasionally included less relevant or overly generalized information, which reduced their consistency with established medical guidelines. The average Flesch Reading Ease Score (FRES) was 33.84 ± 2.97, indicating a fairly difficult reading level, while the Flesch-Kincaid Grade Level (FKGL) averaged 14.21 ± 0.97, suitable for readers with college-level proficiency. Passive voice was used in 7.22% ± 5.60% of sentences, indicating moderate reliance. One-way ANOVA showed no significant differences in FRES (p=.385), FKGL (p=.555), or passive sentence usage (p=.601) across the three categories, and trend analysis showed that readability remained relatively constant across them.
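A minimal sketch of how per-response readability scores and the one-way ANOVA reported above could be reproduced is shown below, assuming the textstat and scipy Python packages; the sample responses and the three-category grouping are hypothetical placeholders, not the study's actual data or pipeline.

```python
# Sketch: compute FRES/FKGL per response and compare categories with one-way ANOVA.
# Assumes `textstat` and `scipy` are installed; responses below are hypothetical.
import textstat
from scipy.stats import f_oneway

# Hypothetical ChatGPT responses grouped into three categories.
categories = {
    "background": [
        "Chemical eye injuries occur when acids or alkalis contact the ocular surface.",
        "Alkali burns penetrate deeper than acid burns and often cause worse damage.",
    ],
    "clinical": [
        "Symptoms include severe pain, redness, tearing, and blurred vision.",
        "Diagnosis relies on history, pH testing, and slit-lamp examination.",
    ],
    "management": [
        "Immediate copious irrigation is the single most important first-aid step.",
        "Follow-up monitors intraocular pressure, epithelial healing, and scarring.",
    ],
}

# Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL)
# for every response, kept per category for the ANOVA.
fres_by_cat = {name: [textstat.flesch_reading_ease(r) for r in resps]
               for name, resps in categories.items()}
fkgl_by_cat = {name: [textstat.flesch_kincaid_grade(r) for r in resps]
               for name, resps in categories.items()}

# One-way ANOVA across the three categories, mirroring the reported analysis.
fres_stat, fres_p = f_oneway(*fres_by_cat.values())
fkgl_stat, fkgl_p = f_oneway(*fkgl_by_cat.values())

print(f"FRES ANOVA: F = {fres_stat:.2f}, p = {fres_p:.3f}")
print(f"FKGL ANOVA: F = {fkgl_stat:.2f}, p = {fkgl_p:.3f}")
```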

Conclusions: ChatGPT shows strong potential for providing accurate and relevant information about chemical ocular injuries. However, its language complexity may limit accessibility for individuals with lower health literacy, and its responses sometimes miss critical aspects. Future improvements should focus on enhancing readability, increasing context-specific accuracy, and tailoring responses to individual needs and literacy levels.

Clinical trial: Not applicable; this study is not a randomized controlled trial.

Source journal: JMIR Formative Research (Medicine, miscellaneous)
CiteScore: 2.70 | Self-citation rate: 9.10% | Articles per year: 579 | Review time: 12 weeks