ChatGPT 3.5 Better Improves Comprehensibility of English, Than Spanish, Generated Responses to Osteosarcoma Questions.

IF 2.0 · Medicine, Tier 3 · Q3 ONCOLOGY
Rosamaria Dias, Ashley Castan, Katie Gotoff, Yazan Kadkoy, Joseph Ippolito, Kathleen Beebe, Joseph Benevenia
Citations: 0

Abstract

ChatGPT 3.5 Better Improves Comprehensibility of English, Than Spanish, Generated Responses to Osteosarcoma Questions.

Background: Despite adequate discussion and counseling in the office, inadequate health literacy or language barriers may make it difficult for patients to follow a physician's instructions and access necessary resources, which may negatively impact survival outcomes. Most healthcare materials are written at a 10th-grade level, while many patients read at an 8th-grade level. Hispanic Americans comprise about 25% of the US patient population, while only 6% of physicians identify as bilingual.

Questions/purpose: (1) Does ChatGPT 3.5 provide appropriate responses to frequently asked patient questions that are sufficient for clinical practice and accurate in English and Spanish? (2) What is the comprehensibility of the responses provided by ChatGPT 3.5 and are these modifiable?

Methods: Twenty frequently asked osteosarcoma patient questions, curated by two fellowship-trained musculoskeletal oncologists, were input into ChatGPT 3.5. Responses were evaluated by two independent reviewers for appropriateness for clinical practice and for accuracy, and were graded using the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL) test. Each response was then input into ChatGPT 3.5 a second time with the command "Make text easier to understand." The same procedure was repeated in Spanish.
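The two readability metrics used in the study have standard published formulas based on words per sentence and syllables per word. A minimal sketch of how they could be computed is below; the vowel-group syllable counter is a naive heuristic assumed for illustration (real readability tools use dictionary-backed syllabification, and Spanish text requires language-specific variants of these formulas):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, subtracting a silent trailing 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for an English text using the standard formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)            # words per sentence
    spw = syllables / len(words)                 # syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch Reading Ease Score
    fkgl = 0.39 * wps + 11.8 * spw - 15.59       # Flesch-Kincaid Grade Level
    return round(fres, 1), round(fkgl, 1)
```

Short, monosyllabic sentences score high on FRES (easier) and low on FKGL (earlier grade), while dense clinical prose does the opposite, which is what the study's grading captures.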

Results: All responses generated were appropriate for a patient-facing informational platform. Before modification, there was no difference between English and Spanish responses in the Flesch Reading Ease Score (p = 0.307) or in the Flesch-Kincaid Grade Level (p = 0.294). After modification, the difference in comprehensibility between English and Spanish responses was statistically significant (p = 0.003 and p = 0.011).
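The abstract does not state which statistical test produced these p-values. Purely as an illustration of how paired pre- versus post-modification readability scores can be compared, a sign-flip permutation test (a hypothetical stand-in, not necessarily the authors' method) could be sketched as:

```python
import random
from statistics import mean

def paired_permutation_p(before, after, n_perm=10_000, seed=0):
    """Two-sided sign-flip permutation test on paired differences.

    Returns the fraction of random sign assignments whose mean absolute
    difference is at least as extreme as the observed one.
    """
    diffs = [a - b for a, b in zip(after, before)]
    observed = abs(mean(diffs))
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        # Under the null, each paired difference is equally likely to flip sign.
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(mean(flipped)) >= observed:
            hits += 1
    return hits / n_perm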

Conclusion: In both English and Spanish, none of the ChatGPT-generated responses were found to be factually inaccurate. ChatGPT was able to modify responses when followed up with a simplification command; however, it was better at improving English responses than equivalent Spanish responses.

Source journal: Journal of Surgical Oncology
CiteScore: 4.70 · Self-citation rate: 4.00% · Articles per year: 367 · Time to review: 2 months
Journal description: The Journal of Surgical Oncology offers peer-reviewed, original papers in the field of surgical oncology and broadly related surgical sciences, including reports on experimental and laboratory studies. As an international journal, the editors encourage participation from leading surgeons around the world. The JSO is the representative journal for the World Federation of Surgical Oncology Societies. Publishing 16 issues in 2 volumes each year, the journal accepts Research Articles, in-depth Reviews of timely interest, Letters to the Editor, and invited Editorials. Guest Editors from the JSO Editorial Board oversee multiple special Seminars issues each year. These Seminars include multifaceted Reviews on a particular topic or current issue in surgical oncology, which are invited from experts in the field.