Enhancing patient education in breast surgery: artificial intelligence-powered guidance for mastopexy, augmentation, reduction, and reconstruction.

Gianluca Marcaccini, Pietro Susini, Yi Xie, Roberto Cuomo, Mirco Pozzi, Luca Grimaldi, Warren M Rozen, Ishith Seth
{"title":"加强对乳房手术患者的教育:人工智能为乳房切除术、隆胸、缩小和重建提供指导。","authors":"Gianluca Marcaccini, Pietro Susini, Yi Xie, Roberto Cuomo, Mirco Pozzi, Luca Grimaldi, Warren M Rozen, Ishith Seth","doi":"10.21037/tbcr-24-67","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Large language models (LLMs), such as ChatGPT have revolutionised patient education by offering accessible, reasonable, and empathetic guidance. This study evaluates ChatGPT's role in supporting patient inquiries regarding four key plastic surgery procedures: mastopexy, breast augmentation, breast reduction, and breast reconstruction. The study highlights its potential as a supplemental tool in patient education by assessing its performance across relevance, accuracy, clarity, and empathy criteria.</p><p><strong>Methods: </strong>The study collected frequently asked questions from patients about the selected procedures during pre- and post-operative consultations. Responses were generated by ChatGPT and evaluated by a panel of Plastic Surgery experts. Scores from 1 to 5 were assigned to four criteria: relevance, accuracy, clarity, and empathy. Statistical analyses, including means, standard deviations, and Kruskal-Wallis tests, were conducted to evaluate differences in the scores assigned to responses across criteria and procedures.</p><p><strong>Results: </strong>ChatGPT demonstrated high performance across all evaluation criteria, with clarity emerging as the strongest attribute, reflecting the model's ability to simplify complex medical concepts effectively. Accuracy, while slightly lower, remained reliable, aligning well with medical standards. Among the procedures, breast reconstruction appeared to perform particularly well, followed closely by mastopexy and breast augmentation. The analysis revealed no significant differences across the criteria, indicating consistent performance.</p><p><strong>Conclusions: </strong>ChatGPT demonstrated remarkable capability in addressing patient concerns and offering clear, empathetic, and relevant responses. However, limitations include the lack of personalised advice and potential patient misinterpretations, emphasising the need for professional oversight. ChatGPT is a valuable adjunct to professional medical consultations, enhancing patient education and engagement. Future research should focus on improving personalisation and evaluating its real-world application in clinical settings.</p>","PeriodicalId":101427,"journal":{"name":"Translational breast cancer research : a journal focusing on translational research in breast cancer","volume":"6 ","pages":"12"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12104957/pdf/","citationCount":"0","resultStr":"{\"title\":\"Enhancing patient education in breast surgery: artificial intelligence-powered guidance for mastopexy, augmentation, reduction, and reconstruction.\",\"authors\":\"Gianluca Marcaccini, Pietro Susini, Yi Xie, Roberto Cuomo, Mirco Pozzi, Luca Grimaldi, Warren M Rozen, Ishith Seth\",\"doi\":\"10.21037/tbcr-24-67\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Large language models (LLMs), such as ChatGPT have revolutionised patient education by offering accessible, reasonable, and empathetic guidance. 
This study evaluates ChatGPT's role in supporting patient inquiries regarding four key plastic surgery procedures: mastopexy, breast augmentation, breast reduction, and breast reconstruction. The study highlights its potential as a supplemental tool in patient education by assessing its performance across relevance, accuracy, clarity, and empathy criteria.</p><p><strong>Methods: </strong>The study collected frequently asked questions from patients about the selected procedures during pre- and post-operative consultations. Responses were generated by ChatGPT and evaluated by a panel of Plastic Surgery experts. Scores from 1 to 5 were assigned to four criteria: relevance, accuracy, clarity, and empathy. Statistical analyses, including means, standard deviations, and Kruskal-Wallis tests, were conducted to evaluate differences in the scores assigned to responses across criteria and procedures.</p><p><strong>Results: </strong>ChatGPT demonstrated high performance across all evaluation criteria, with clarity emerging as the strongest attribute, reflecting the model's ability to simplify complex medical concepts effectively. Accuracy, while slightly lower, remained reliable, aligning well with medical standards. Among the procedures, breast reconstruction appeared to perform particularly well, followed closely by mastopexy and breast augmentation. The analysis revealed no significant differences across the criteria, indicating consistent performance.</p><p><strong>Conclusions: </strong>ChatGPT demonstrated remarkable capability in addressing patient concerns and offering clear, empathetic, and relevant responses. However, limitations include the lack of personalised advice and potential patient misinterpretations, emphasising the need for professional oversight. ChatGPT is a valuable adjunct to professional medical consultations, enhancing patient education and engagement. Future research should focus on improving personalisation and evaluating its real-world application in clinical settings.</p>\",\"PeriodicalId\":101427,\"journal\":{\"name\":\"Translational breast cancer research : a journal focusing on translational research in breast cancer\",\"volume\":\"6 \",\"pages\":\"12\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12104957/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Translational breast cancer research : a journal focusing on translational research in breast cancer\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21037/tbcr-24-67\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Translational breast cancer research : a journal focusing on translational research in breast cancer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21037/tbcr-24-67","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract


Background: Large language models (LLMs), such as ChatGPT, have revolutionised patient education by offering accessible, reasonable, and empathetic guidance. This study evaluates ChatGPT's role in supporting patient inquiries regarding four key plastic surgery procedures: mastopexy, breast augmentation, breast reduction, and breast reconstruction. By assessing its performance against relevance, accuracy, clarity, and empathy criteria, the study highlights its potential as a supplemental tool in patient education.

Methods: The study collected questions frequently asked by patients about the selected procedures during pre- and post-operative consultations. Responses were generated by ChatGPT and evaluated by a panel of plastic surgery experts, who assigned scores from 1 to 5 for each of four criteria: relevance, accuracy, clarity, and empathy. Statistical analyses, including means, standard deviations, and Kruskal-Wallis tests, were conducted to evaluate differences in the scores assigned to responses across criteria and procedures.
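As a rough illustration of the rating-and-testing workflow described above, the sketch below groups hypothetical 1-to-5 expert scores by criterion, computes means and standard deviations, and runs a Kruskal-Wallis test with SciPy. The scores and variable names are placeholders for illustration only and are not taken from the study; only the analysis steps mirror the Methods text.

```python
# Illustrative sketch of the scoring analysis described in Methods.
# The ratings below are placeholder values, NOT the study's data; only the
# workflow (means, standard deviations, Kruskal-Wallis test) follows the text.
import numpy as np
from scipy.stats import kruskal

# Hypothetical 1-5 expert ratings, grouped by evaluation criterion.
scores_by_criterion = {
    "relevance": [5, 4, 5, 4, 5],
    "accuracy":  [4, 4, 5, 4, 4],
    "clarity":   [5, 5, 5, 4, 5],
    "empathy":   [4, 5, 4, 5, 4],
}

# Descriptive statistics per criterion (mean and sample standard deviation).
for criterion, values in scores_by_criterion.items():
    arr = np.asarray(values, dtype=float)
    print(f"{criterion}: mean={arr.mean():.2f}, sd={arr.std(ddof=1):.2f}")

# Kruskal-Wallis H-test: do the score distributions differ across criteria?
statistic, p_value = kruskal(*scores_by_criterion.values())
print(f"Kruskal-Wallis H={statistic:.2f}, p={p_value:.3f}")
```

The same pattern could be repeated with scores grouped by procedure rather than by criterion to reproduce the second comparison mentioned in the Methods.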

Results: ChatGPT demonstrated high performance across all evaluation criteria, with clarity emerging as the strongest attribute, reflecting the model's ability to simplify complex medical concepts effectively. Accuracy, while slightly lower, remained reliable, aligning well with medical standards. Among the procedures, breast reconstruction appeared to perform particularly well, followed closely by mastopexy and breast augmentation. The analysis revealed no significant differences across the criteria, indicating consistent performance.

Conclusions: ChatGPT demonstrated remarkable capability in addressing patient concerns and offering clear, empathetic, and relevant responses. However, limitations include the lack of personalised advice and potential patient misinterpretations, emphasising the need for professional oversight. ChatGPT is a valuable adjunct to professional medical consultations, enhancing patient education and engagement. Future research should focus on improving personalisation and evaluating its real-world application in clinical settings.
