Evaluating Large Language Models for Automated CPT Code Prediction in Endovascular Neurosurgery

Joanna M Roy, D Mitchell Self, Emily Isch, Basel Musmar, Matthews Lan, Kavantissa Keppetipola, Sravanthi Koduri, Mary-Katharine Pontarelli, Stavropoula I Tjoumakaris, M Reid Gooch, Robert H Rosenwasser, Pascal M Jabbour

Journal of Medical Systems 49(1):15, published 2025-01-24. DOI: 10.1007/s10916-025-02149-4
Large language models (LLMs) have been used to automate tasks such as writing discharge summaries and operative reports in neurosurgery. The present study evaluates their ability to identify Current Procedural Terminology (CPT) codes from operative reports. Three LLMs (ChatGPT 4.0, AtlasGPT, and Gemini) were evaluated on their ability to provide CPT codes for diagnostic or interventional procedures in endovascular neurosurgery at a single institution. Responses were classified as correct, partially correct, or incorrect, and the percentage of correctly identified CPT codes was calculated. The chi-square test and Kruskal-Wallis test were used to compare responses across LLMs. A total of 30 operative notes were used in the present study. AtlasGPT provided partially correct CPT code responses for 98.3% of procedures, while ChatGPT and Gemini provided partially correct responses for 86.7% and 30% of procedures, respectively (P < 0.001). AtlasGPT identified CPT codes correctly in an average of 35.3% of procedures, followed by ChatGPT (35.1%) and Gemini (8.9%) (P < 0.001). A pairwise comparison among the three LLMs revealed that AtlasGPT and ChatGPT outperformed Gemini. Untrained LLMs are able to identify partially correct CPT codes in endovascular neurosurgery. Training these models could further enhance their ability to identify CPT codes and minimize healthcare expenditure.
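To make the statistical comparison described in the abstract concrete, the sketch below shows how response categories for the three LLMs could be compared with a chi-square test of independence, and how per-procedure correctness scores could be compared with a Kruskal-Wallis test. This is an illustrative reconstruction under stated assumptions, not the authors' analysis code: the response counts and the 1 / 0.5 / 0 scoring scheme are hypothetical placeholders, and scipy is assumed as the statistics library.

```python
# Illustrative sketch (not the authors' code): chi-square and Kruskal-Wallis
# comparisons of LLM CPT-coding performance, as described in the abstract.
import numpy as np
from scipy.stats import chi2_contingency, kruskal

# Hypothetical counts per response category for 30 operative notes
# (columns: correct, partially correct, incorrect).
counts = {
    "AtlasGPT": [11, 18, 1],
    "ChatGPT":  [10, 16, 4],
    "Gemini":   [3, 6, 21],
}

# Chi-square test of independence: does response category depend on the model?
table = np.array(list(counts.values()))
chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi2:.4f}")

# Kruskal-Wallis test on per-procedure correctness scores
# (hypothetical scoring: 1 = correct, 0.5 = partially correct, 0 = incorrect).
scores = {
    model: [1.0] * c + [0.5] * pc + [0.0] * inc
    for model, (c, pc, inc) in counts.items()
}
h_stat, p_kw = kruskal(*scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")
```

A significant chi-square result under this setup would indicate that the distribution of correct, partially correct, and incorrect responses differs across models; pairwise follow-up tests (as reported in the abstract) would then localize which models differ.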
Journal description:
Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.