Hoyoung Jung, Jean Oh, Kirk A J Stephenson, Aaron W Joe, Zaid N Mammo
{"title":"使用 ChatGPT3.5 和 GPT4 即时工程,改善视网膜疾病的患者教育。","authors":"Hoyoung Jung, Jean Oh, Kirk A J Stephenson, Aaron W Joe, Zaid N Mammo","doi":"10.1016/j.jcjo.2024.08.010","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>To assess the effect of prompt engineering on the accuracy, comprehensiveness, readability, and empathy of large language model (LLM)-generated responses to patient questions regarding retinal disease.</p><p><strong>Design: </strong>Prospective qualitative study.</p><p><strong>Participants: </strong>Retina specialists, ChatGPT3.5, and GPT4.</p><p><strong>Methods: </strong>Twenty common patient questions regarding 5 retinal conditions were inputted to ChatGPT3.5 and GPT4 as a stand-alone question or preceded by an optimized prompt (prompt A) or preceded by prompt A with specified limits to length and grade reading level (prompt B). Accuracy and comprehensiveness were graded by 3 retina specialists on a Likert scale from 1 to 5 (1: very poor to 5: very good). Readability of responses was assessed using Readable.com, an online readability tool.</p><p><strong>Results: </strong>There were no significant differences between ChatGPT3.5 and GPT4 across any of the metrics tested. Median accuracy of responses to a stand-alone question, prompt A, and prompt B questions were 5.0, 5.0, and 4.0, respectively. Median comprehensiveness of responses to a stand-alone question, prompt A, and prompt B questions were 5.0, 5.0, and 4.0, respectively. The use of prompt B was associated with a lower accuracy and comprehensiveness than responses to stand-alone question or prompt A questions (p < 0.001). Average-grade reading level of responses across both LLMs were 13.45, 11.5, and 10.3 for a stand-alone question, prompt A, and prompt B questions, respectively (p < 0.001).</p><p><strong>Conclusions: </strong>Prompt engineering can significantly improve readability of LLM-generated responses, although at the cost of reducing accuracy and comprehensiveness. Further study is needed to understand the utility and bioethical implications of LLMs as a patient educational resource.</p>","PeriodicalId":9606,"journal":{"name":"Canadian journal of ophthalmology. Journal canadien d'ophtalmologie","volume":" ","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Prompt engineering with ChatGPT3.5 and GPT4 to improve patient education on retinal diseases.\",\"authors\":\"Hoyoung Jung, Jean Oh, Kirk A J Stephenson, Aaron W Joe, Zaid N Mammo\",\"doi\":\"10.1016/j.jcjo.2024.08.010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>To assess the effect of prompt engineering on the accuracy, comprehensiveness, readability, and empathy of large language model (LLM)-generated responses to patient questions regarding retinal disease.</p><p><strong>Design: </strong>Prospective qualitative study.</p><p><strong>Participants: </strong>Retina specialists, ChatGPT3.5, and GPT4.</p><p><strong>Methods: </strong>Twenty common patient questions regarding 5 retinal conditions were inputted to ChatGPT3.5 and GPT4 as a stand-alone question or preceded by an optimized prompt (prompt A) or preceded by prompt A with specified limits to length and grade reading level (prompt B). Accuracy and comprehensiveness were graded by 3 retina specialists on a Likert scale from 1 to 5 (1: very poor to 5: very good). 
Readability of responses was assessed using Readable.com, an online readability tool.</p><p><strong>Results: </strong>There were no significant differences between ChatGPT3.5 and GPT4 across any of the metrics tested. Median accuracy of responses to a stand-alone question, prompt A, and prompt B questions were 5.0, 5.0, and 4.0, respectively. Median comprehensiveness of responses to a stand-alone question, prompt A, and prompt B questions were 5.0, 5.0, and 4.0, respectively. The use of prompt B was associated with a lower accuracy and comprehensiveness than responses to stand-alone question or prompt A questions (p < 0.001). Average-grade reading level of responses across both LLMs were 13.45, 11.5, and 10.3 for a stand-alone question, prompt A, and prompt B questions, respectively (p < 0.001).</p><p><strong>Conclusions: </strong>Prompt engineering can significantly improve readability of LLM-generated responses, although at the cost of reducing accuracy and comprehensiveness. Further study is needed to understand the utility and bioethical implications of LLMs as a patient educational resource.</p>\",\"PeriodicalId\":9606,\"journal\":{\"name\":\"Canadian journal of ophthalmology. Journal canadien d'ophtalmologie\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Canadian journal of ophthalmology. Journal canadien d'ophtalmologie\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1016/j.jcjo.2024.08.010\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian journal of ophthalmology. Journal canadien d'ophtalmologie","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.jcjo.2024.08.010","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Prompt engineering with ChatGPT3.5 and GPT4 to improve patient education on retinal diseases.
Objective: To assess the effect of prompt engineering on the accuracy, comprehensiveness, readability, and empathy of large language model (LLM)-generated responses to patient questions regarding retinal disease.
Design: Prospective qualitative study.
Participants: Retina specialists, ChatGPT3.5, and GPT4.
Methods: Twenty common patient questions regarding 5 retinal conditions were input into ChatGPT3.5 and GPT4 in three forms: as a stand-alone question, preceded by an optimized prompt (prompt A), or preceded by prompt A with specified limits on response length and grade reading level (prompt B). Accuracy and comprehensiveness were graded by 3 retina specialists on a Likert scale from 1 to 5 (1: very poor; 5: very good). Readability of responses was assessed using Readable.com, an online readability tool.
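The abstract does not publish the wording of prompt A or prompt B, nor how responses were collected. As a rough illustration of the three-condition design only, the sketch below submits a question under each condition through the OpenAI chat completions API; the prompt texts, the sample question, and the use of the API (rather than the ChatGPT web interface) are assumptions, not the authors' materials.

```python
# Illustrative sketch of the three prompting conditions; the prompt strings
# below are hypothetical placeholders, not the study's actual prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-ins for the study's optimized prompts.
PROMPT_A = (
    "You are an ophthalmologist counselling a patient. Answer the following "
    "question about retinal disease accurately, comprehensively, and empathetically."
)
PROMPT_B = (
    PROMPT_A
    + " Limit your answer to roughly 150 words and write at a grade 8 reading level."
)

def ask(model: str, question: str, prefix: str | None = None) -> str:
    """Submit a patient question alone or preceded by a prompt prefix."""
    content = question if prefix is None else f"{prefix}\n\n{question}"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

question = "What is age-related macular degeneration and how is it treated?"
for model in ("gpt-3.5-turbo", "gpt-4"):
    standalone = ask(model, question)               # stand-alone question
    with_prompt_a = ask(model, question, PROMPT_A)  # prompt A condition
    with_prompt_b = ask(model, question, PROMPT_B)  # prompt B condition
```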
Results: There were no significant differences between ChatGPT3.5 and GPT4 on any of the metrics tested. Median accuracy of responses to stand-alone, prompt A, and prompt B questions was 5.0, 5.0, and 4.0, respectively; median comprehensiveness followed the same pattern (5.0, 5.0, and 4.0). Use of prompt B was associated with lower accuracy and comprehensiveness than responses to stand-alone or prompt A questions (p < 0.001). The average grade reading level of responses across both LLMs was 13.45, 11.5, and 10.3 for stand-alone, prompt A, and prompt B questions, respectively (p < 0.001).
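The study measured readability with Readable.com, which reports several readability indices; the exact scoring behind the reported grade levels is not specified in the abstract. For intuition only, the sketch below computes the Flesch-Kincaid grade level, one widely used grade-level formula, using a crude syllable-counting heuristic.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (
        0.39 * (len(words) / max(1, len(sentences)))
        + 11.8 * (syllables / max(1, len(words)))
        - 15.59
    )

# A grade level near 13 corresponds to college-level text, while a value
# near 10 is readable by a typical high-school sophomore.
print(round(flesch_kincaid_grade(
    "Age-related macular degeneration is a progressive disease of the retina."
), 1))
```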
Conclusions: Prompt engineering can significantly improve the readability of LLM-generated responses, albeit at the cost of reduced accuracy and comprehensiveness. Further study is needed to understand the utility and bioethical implications of LLMs as a patient education resource.
About the journal:
Official journal of the Canadian Ophthalmological Society.
The Canadian Journal of Ophthalmology (CJO) is the official journal of the Canadian Ophthalmological Society and is committed to timely publication of original, peer-reviewed ophthalmology and vision science articles.