{"title":"聊天机器人在手外科门诊就诊计费中显示出适度的交互可靠性。","authors":"Luke D Latario, John R Fowler","doi":"10.1177/15589447241295328","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence offers opportunities to improve the burden of health care administrative tasks. Application of machine learning to coding and billing for clinic encounters may represent time- and cost-saving benefits with low risk to patient outcomes.</p><p><strong>Methods: </strong>Gemini, a publicly available large language model chatbot, was queried with 139 de-identified patient encounters from a single surgeon and asked to provide the Current Procedural Terminology code based on the criteria for different encounter types. Percent agreement and Cohen's kappa coefficient were calculated.</p><p><strong>Results: </strong>Gemini demonstrated 68% agreement for all encounter types, with a kappa coefficient of 0.586 corresponding to moderate interrater reliability. Agreement was highest for postoperative encounters (n = 43) with 98% agreement and lowest for new encounters (n = 27) with 48% agreement. Gemini recommended billing levels greater than the surgeon's billing level 31 times and lower billing levels 10 times, with 4 wrong encounter type codes.</p><p><strong>Conclusions: </strong>A publicly available chatbot without specific programming for health care billing demonstrated moderate interrater reliability with a hand surgeon in billing clinic encounters. Future integration of artificial intelligence tools in physician workflow may improve the accuracy and speed of billing encounters and lower administrative costs.</p>","PeriodicalId":12902,"journal":{"name":"HAND","volume":" ","pages":"15589447241295328"},"PeriodicalIF":1.8000,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11571175/pdf/","citationCount":"0","resultStr":"{\"title\":\"Chatbot Demonstrates Moderate Interrater Reliability in Billing for Hand Surgery Clinic Encounters.\",\"authors\":\"Luke D Latario, John R Fowler\",\"doi\":\"10.1177/15589447241295328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Artificial intelligence offers opportunities to improve the burden of health care administrative tasks. Application of machine learning to coding and billing for clinic encounters may represent time- and cost-saving benefits with low risk to patient outcomes.</p><p><strong>Methods: </strong>Gemini, a publicly available large language model chatbot, was queried with 139 de-identified patient encounters from a single surgeon and asked to provide the Current Procedural Terminology code based on the criteria for different encounter types. Percent agreement and Cohen's kappa coefficient were calculated.</p><p><strong>Results: </strong>Gemini demonstrated 68% agreement for all encounter types, with a kappa coefficient of 0.586 corresponding to moderate interrater reliability. Agreement was highest for postoperative encounters (n = 43) with 98% agreement and lowest for new encounters (n = 27) with 48% agreement. Gemini recommended billing levels greater than the surgeon's billing level 31 times and lower billing levels 10 times, with 4 wrong encounter type codes.</p><p><strong>Conclusions: </strong>A publicly available chatbot without specific programming for health care billing demonstrated moderate interrater reliability with a hand surgeon in billing clinic encounters. 
Future integration of artificial intelligence tools in physician workflow may improve the accuracy and speed of billing encounters and lower administrative costs.</p>\",\"PeriodicalId\":12902,\"journal\":{\"name\":\"HAND\",\"volume\":\" \",\"pages\":\"15589447241295328\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-11-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11571175/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"HAND\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/15589447241295328\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"HAND","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/15589447241295328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Chatbot Demonstrates Moderate Interrater Reliability in Billing for Hand Surgery Clinic Encounters.
Background: Artificial intelligence offers opportunities to reduce the burden of health care administrative tasks. Applying machine learning to coding and billing for clinic encounters may save time and cost while posing low risk to patient outcomes.
Methods: Gemini, a publicly available large language model chatbot, was queried with 139 de-identified patient encounters from a single surgeon and asked to assign the Current Procedural Terminology (CPT) code based on the criteria for each encounter type. Percent agreement and Cohen's kappa coefficient were calculated.
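As an illustration of the two statistics named above, the following is a minimal Python sketch of percent agreement and Cohen's kappa for paired coder outputs; the CPT codes and encounter data below are hypothetical examples, not the study's data.

    from collections import Counter

    def percent_agreement(rater_a, rater_b):
        # Fraction of encounters for which both raters assigned the same code.
        return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

    def cohens_kappa(rater_a, rater_b):
        # Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e), where
        # p_e is the agreement expected from each rater's marginal code
        # frequencies alone.
        n = len(rater_a)
        p_o = percent_agreement(rater_a, rater_b)
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[code] * freq_b.get(code, 0) for code in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical surgeon vs. chatbot CPT codes for six encounters.
    surgeon = ["99213", "99214", "99024", "99203", "99024", "99214"]
    chatbot = ["99213", "99213", "99024", "99204", "99024", "99214"]
    print(percent_agreement(surgeon, chatbot))  # 0.667 (4 of 6 codes match)
    print(cohens_kappa(surgeon, chatbot))       # ~0.57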
Results: Gemini demonstrated 68% agreement across all encounter types, with a kappa coefficient of 0.586, corresponding to moderate interrater reliability. Agreement was highest for postoperative encounters (n = 43) at 98% and lowest for new encounters (n = 27) at 48%. Gemini recommended a billing level higher than the surgeon's 31 times and a lower billing level 10 times, and assigned the wrong encounter type code 4 times.
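To relate the two reported statistics: Cohen's kappa corrects observed agreement p_o for the agreement p_e expected by chance, kappa = (p_o - p_e) / (1 - p_e). Back-solving from the reported p_o = 0.68 and kappa = 0.586 gives p_e = (p_o - kappa) / (1 - kappa) = (0.68 - 0.586) / (1 - 0.586) ≈ 0.23, that is, roughly 23% agreement would be expected by chance alone (an implied value, not one reported in the study). Kappa values of 0.41 to 0.60 are conventionally interpreted as moderate agreement.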
Conclusions: A publicly available chatbot with no specific programming for health care billing demonstrated moderate interrater reliability with a hand surgeon when billing clinic encounters. Future integration of artificial intelligence tools into physician workflows may improve the accuracy and speed of billing and lower administrative costs.
Journal introduction:
HAND is the official journal of the American Association for Hand Surgery. It is a peer-reviewed journal featuring articles by clinicians worldwide on current research and clinical work in the field of hand surgery, covering all aspects of hand and upper extremity surgery as well as postoperative care and rehabilitation of the hand.