Gianluca Marcaccini, Pietro Susini, Yi Xie, Roberto Cuomo, Mirco Pozzi, Luca Grimaldi, Warren M Rozen, Ishith Seth
Translational Breast Cancer Research 2025;6:12. Published 2025-04-25 (eCollection 2025). doi: 10.21037/tbcr-24-67. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12104957/pdf/
Enhancing patient education in breast surgery: artificial intelligence-powered guidance for mastopexy, augmentation, reduction, and reconstruction.
Background: Large language models (LLMs), such as ChatGPT, have revolutionised patient education by offering accessible, reasonable, and empathetic guidance. This study evaluates ChatGPT's role in supporting patient inquiries regarding four key plastic surgery procedures: mastopexy, breast augmentation, breast reduction, and breast reconstruction. By assessing its performance against relevance, accuracy, clarity, and empathy criteria, the study highlights its potential as a supplemental tool in patient education.
Methods: The study collected frequently asked questions from patients about the selected procedures during pre- and post-operative consultations. Responses were generated by ChatGPT and evaluated by a panel of plastic surgery experts, who assigned scores from 1 to 5 on each of four criteria: relevance, accuracy, clarity, and empathy. Statistical analyses, including means, standard deviations, and Kruskal-Wallis tests, were conducted to evaluate differences in the scores assigned to responses across criteria and procedures.
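The scoring analysis described above can be sketched as follows. This is a minimal illustration, not the authors' code: the panel ratings below are hypothetical 1-5 Likert scores invented for the example, and the Kruskal-Wallis H statistic is implemented directly (with tie correction) so the sketch needs only the standard library.

```python
# Sketch of the Methods analysis: per-criterion means and standard
# deviations, plus a Kruskal-Wallis H-test across the four criteria.
# NOTE: the ratings below are hypothetical, not the study's data.
import statistics

def kruskal_wallis(groups):
    """Kruskal-Wallis H statistic with correction for tied ranks."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # Assign average ranks; tied values share the mean of their ranks.
    rank, i = {}, 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2
        i = j
    h = 12 / (n * (n + 1)) * sum(
        len(g) * (sum(rank[x] for x in g) / len(g)) ** 2 for g in groups
    ) - 3 * (n + 1)
    # Tie correction: divide H by 1 - sum(t^3 - t) / (n^3 - n).
    ties = sum(t ** 3 - t for t in (pooled.count(v) for v in set(pooled)))
    return h / (1 - ties / (n ** 3 - n)) if ties < n ** 3 - n else 0.0

scores = {  # hypothetical expert panel ratings (1-5) per criterion
    "relevance": [5, 4, 5, 4, 5, 4],
    "accuracy":  [4, 4, 5, 4, 4, 3],
    "clarity":   [5, 5, 5, 4, 5, 5],
    "empathy":   [4, 5, 4, 4, 5, 4],
}
for criterion, vals in scores.items():
    print(f"{criterion}: mean={statistics.mean(vals):.2f} "
          f"sd={statistics.stdev(vals):.2f}")
print(f"H = {kruskal_wallis(list(scores.values())):.3f}")
```

As in the study, a small H (large p-value) would indicate no significant difference in scores across the criteria.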
Results: ChatGPT demonstrated high performance across all evaluation criteria, with clarity emerging as the strongest attribute, reflecting the model's ability to simplify complex medical concepts effectively. Accuracy, while slightly lower, remained reliable, aligning well with medical standards. Among the procedures, breast reconstruction appeared to perform particularly well, followed closely by mastopexy and breast augmentation. The analysis revealed no significant differences across the criteria, indicating consistent performance.
Conclusions: ChatGPT demonstrated remarkable capability in addressing patient concerns and offering clear, empathetic, and relevant responses. However, limitations include the lack of personalised advice and potential patient misinterpretations, emphasising the need for professional oversight. ChatGPT is a valuable adjunct to professional medical consultations, enhancing patient education and engagement. Future research should focus on improving personalisation and evaluating its real-world application in clinical settings.