{"title":"Future Perspective of Risk Prediction in Aesthetic Surgery: Is Artificial Intelligence Reliable?","authors":"Alpay Duran, Oguz Cortuk, Bora Ok","doi":"10.1093/asj/sjae140","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) techniques are showing significant potential in the medical field. The rapid advancement in artificial intelligence methods suggests their soon-to-be essential role in physicians' practices.</p><p><strong>Objectives: </strong>In this study, we sought to assess and compare the readability, clarity, and precision of medical knowledge responses provided by 3 large language models (LLMs) and informed consent forms for 14 common aesthetic surgical procedures, as prepared by the American Society of Plastic Surgeons (ASPS).</p><p><strong>Methods: </strong>The efficacy, readability, and accuracy of 3 leading LLMs, ChatGPT-4 (OpenAI, San Francisco, CA), Gemini (Google, Mountain View, CA), and Copilot (Microsoft, Redmond, WA), was systematically evaluated with 14 different prompts related to the risks of 14 common aesthetic procedures. Alongside these LLM responses, risk sections from the informed consent forms for these procedures, provided by the ASPS, were also reviewed.</p><p><strong>Results: </strong>The risk factor segments of the combined general and specific operation consent forms were rated highest for medical knowledge accuracy (P < .05). Regarding readability and clarity, the procedure-specific informed consent forms, including LLMs, scored highest scores (P < .05). However, these same forms received the lowest score for medical knowledge accuracy (P < .05). Interestingly, surgeons preferred patient-facing materials created by ChatGPT-4, citing superior accuracy and medical information compared to other AI tools.</p><p><strong>Conclusions: </strong>Physicians prefer patient-facing materials created by ChatGPT-4 over other AI tools due to their precise and comprehensive medical knowledge. Importantly, adherence to the strong recommendation of ASPS for signing both the procedure-specific and the general informed consent forms can avoid potential future complications and ethical concerns, thereby ensuring patients receive adequate information.</p>","PeriodicalId":7728,"journal":{"name":"Aesthetic Surgery Journal","volume":" ","pages":"NP839-NP849"},"PeriodicalIF":3.0000,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Aesthetic Surgery Journal","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/asj/sjae140","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SURGERY","Score":null,"Total":0}
引用次数: 0
Abstract
Background: Artificial intelligence (AI) techniques show significant potential in medicine, and their rapid advancement suggests they will soon play an essential role in physicians' practice.
Objectives: In this study, we sought to assess and compare the readability, clarity, and precision of medical knowledge in responses provided by 3 large language models (LLMs) and in the informed consent forms prepared by the American Society of Plastic Surgeons (ASPS) for 14 common aesthetic surgical procedures.
Methods: The efficacy, readability, and accuracy of 3 leading LLMs, ChatGPT-4 (OpenAI, San Francisco, CA), Gemini (Google, Mountain View, CA), and Copilot (Microsoft, Redmond, WA), were systematically evaluated with 14 different prompts addressing the risks of 14 common aesthetic procedures. Alongside these LLM responses, the risk sections of the informed consent forms for these procedures, provided by the ASPS, were also reviewed.
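The abstract does not state which readability index was applied to the LLM responses and consent-form text. As an illustration only, the sketch below computes the Flesch Reading Ease score for a block of text using a simple heuristic syllable counter; the choice of index, the function names, and the sample text are assumptions for demonstration, not details taken from the study.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels, with a silent-'e' adjustment."""
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

if __name__ == "__main__":
    # Hypothetical risk-disclosure passage, standing in for an LLM response or consent-form excerpt.
    sample = ("Rhinoplasty carries risks such as bleeding, infection, and the need for "
              "revision surgery. Swelling and bruising around the eyes are common and "
              "usually resolve within two weeks.")
    print(f"Flesch Reading Ease: {flesch_reading_ease(sample):.1f}")
```

Higher scores indicate easier reading; grade-level indices such as Flesch-Kincaid use the same word, sentence, and syllable counts with different coefficients, so the same pipeline could be adapted if the study relied on a different metric.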
Results: The risk factor segments of the combined general and procedure-specific operation consent forms were rated highest for medical knowledge accuracy (P < .05). Regarding readability and clarity, the procedure-specific informed consent forms, along with the LLM responses, received the highest scores (P < .05). However, these same forms received the lowest scores for medical knowledge accuracy (P < .05). Interestingly, surgeons preferred patient-facing materials created by ChatGPT-4, citing superior accuracy and medical information compared with the other AI tools.
Conclusions: Physicians prefer patient-facing materials created by ChatGPT-4 over those from other AI tools because of their precise and comprehensive medical knowledge. Importantly, adherence to the ASPS's strong recommendation to sign both the procedure-specific and the general informed consent forms can avoid potential future complications and ethical concerns, thereby ensuring patients receive adequate information.
About the journal:
Aesthetic Surgery Journal is a peer-reviewed international journal focusing on scientific developments and clinical techniques in aesthetic surgery. The official publication of The Aesthetic Society, ASJ is also the official English-language journal of many major international societies of plastic, aesthetic and reconstructive surgery representing South America, Central America, Europe, Asia, and the Middle East. It is also the official journal of the British Association of Aesthetic Plastic Surgeons, the Canadian Society for Aesthetic Plastic Surgery and The Rhinoplasty Society.