John Rong Hao Tay, Dian Yi Chow, Yi Rong Ivan Lim, Ethan Ng
{"title":"通过提示工程增强以患者为中心的种植牙科信息:四种大型语言模型的比较。","authors":"John Rong Hao Tay, Dian Yi Chow, Yi Rong Ivan Lim, Ethan Ng","doi":"10.3389/froh.2025.1566221","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Patients frequently seek dental information online, and generative pre-trained transformers (GPTs) may be a valuable resource. However, the quality of responses based on varying prompt designs has not been evaluated. As dental implant treatment is widely performed, this study aimed to investigate the influence of prompt design on GPT performance in answering commonly asked questions related to dental implants.</p><p><strong>Materials and methods: </strong>Thirty commonly asked questions about implant dentistry - covering patient selection, associated risks, peri-implant disease symptoms, treatment for missing teeth, prevention, and prognosis - were posed to four different GPT models with different prompt designs. Responses were recorded and independently appraised by two periodontists across six quality domains.</p><p><strong>Results: </strong>All models performed well, with responses classified as good quality. The contextualized model performed worse on treatment-related questions (21.5 ± 3.4, <i>p</i> < 0.05), but outperformed the input-output, zero-shot chain of thought, and instruction-tuned models in citing appropriate sources in its responses (4.1 ± 1.0, <i>p</i> < 0.001). However, responses had less clarity and relevance compared to the other models.</p><p><strong>Conclusion: </strong>GPTs can provide accurate, complete, and useful information for questions related to dental implants. While prompt designs can enhance response quality, further refinement is necessary to optimize its performance.</p>","PeriodicalId":94016,"journal":{"name":"Frontiers in oral health","volume":"6 ","pages":"1566221"},"PeriodicalIF":3.0000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12009804/pdf/","citationCount":"0","resultStr":"{\"title\":\"Enhancing patient-centered information on implant dentistry through prompt engineering: a comparison of four large language models.\",\"authors\":\"John Rong Hao Tay, Dian Yi Chow, Yi Rong Ivan Lim, Ethan Ng\",\"doi\":\"10.3389/froh.2025.1566221\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Patients frequently seek dental information online, and generative pre-trained transformers (GPTs) may be a valuable resource. However, the quality of responses based on varying prompt designs has not been evaluated. As dental implant treatment is widely performed, this study aimed to investigate the influence of prompt design on GPT performance in answering commonly asked questions related to dental implants.</p><p><strong>Materials and methods: </strong>Thirty commonly asked questions about implant dentistry - covering patient selection, associated risks, peri-implant disease symptoms, treatment for missing teeth, prevention, and prognosis - were posed to four different GPT models with different prompt designs. Responses were recorded and independently appraised by two periodontists across six quality domains.</p><p><strong>Results: </strong>All models performed well, with responses classified as good quality. 
The contextualized model performed worse on treatment-related questions (21.5 ± 3.4, <i>p</i> < 0.05), but outperformed the input-output, zero-shot chain of thought, and instruction-tuned models in citing appropriate sources in its responses (4.1 ± 1.0, <i>p</i> < 0.001). However, responses had less clarity and relevance compared to the other models.</p><p><strong>Conclusion: </strong>GPTs can provide accurate, complete, and useful information for questions related to dental implants. While prompt designs can enhance response quality, further refinement is necessary to optimize its performance.</p>\",\"PeriodicalId\":94016,\"journal\":{\"name\":\"Frontiers in oral health\",\"volume\":\"6 \",\"pages\":\"1566221\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12009804/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in oral health\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/froh.2025.1566221\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in oral health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/froh.2025.1566221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Enhancing patient-centered information on implant dentistry through prompt engineering: a comparison of four large language models.
Background: Patients frequently seek dental information online, and generative pre-trained transformers (GPTs) may be a valuable resource. However, the quality of responses based on varying prompt designs has not been evaluated. As dental implant treatment is widely performed, this study aimed to investigate the influence of prompt design on GPT performance in answering commonly asked questions related to dental implants.
Materials and methods: Thirty commonly asked questions about implant dentistry (covering patient selection, associated risks, peri-implant disease symptoms, treatment for missing teeth, prevention, and prognosis) were posed to four GPT models, each using a different prompt design. Responses were recorded and independently appraised by two periodontists across six quality domains.
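To make the four prompt designs named in the results concrete (input-output, zero-shot chain of thought, contextualized, and instruction-tuned), the sketch below shows how such prompts might be assembled against a chat-completion API, here the OpenAI Python SDK. This is a minimal illustration, not the study's protocol: the model name, system messages, and prompt wording are assumptions, since the abstract does not reproduce the actual prompts used by the authors.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What are the risks of dental implant treatment?"

# Four prompt designs loosely corresponding to those named in the study.
# The exact wording used by the authors is not given in the abstract, so
# these strings are illustrative only.
prompt_designs = {
    # Input-output: the question is sent as-is, with no added instructions.
    "input_output": [
        {"role": "user", "content": question},
    ],
    # Zero-shot chain of thought: a generic "think step by step" cue is appended.
    "zero_shot_cot": [
        {"role": "user", "content": f"{question}\nLet's think step by step."},
    ],
    # Contextualized: background on the reader plus a request to cite sources.
    "contextualized": [
        {"role": "system", "content": (
            "You are answering a question from a patient considering dental "
            "implants. Base your answer on reputable clinical sources and "
            "cite them where possible."
        )},
        {"role": "user", "content": question},
    ],
    # Instruction-tuned: explicit instructions on role, tone, and format.
    "instruction_tuned": [
        {"role": "system", "content": (
            "You are a periodontist. Answer in plain language a layperson can "
            "understand, be accurate and complete, and keep the answer under "
            "300 words."
        )},
        {"role": "user", "content": question},
    ],
}

for name, messages in prompt_designs.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the abstract does not specify the underlying model
        messages=messages,
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

In a setup like this, each of the thirty questions would be run through all four designs and the recorded outputs scored independently across the six quality domains.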
Results: All models performed well, with responses classified as good quality. The contextualized model performed worse on treatment-related questions (21.5 ± 3.4, p < 0.05), but outperformed the input-output, zero-shot chain of thought, and instruction-tuned models in citing appropriate sources in its responses (4.1 ± 1.0, p < 0.001). However, its responses had less clarity and relevance than those of the other models.
Conclusion: GPTs can provide accurate, complete, and useful information for questions related to dental implants. While prompt design can enhance response quality, further refinement is necessary to optimize performance.