Chatbot Demonstrates Moderate Interrater Reliability in Billing for Hand Surgery Clinic Encounters.

HAND (IF 1.8, Q2 Orthopedics)
Pub Date: 2024-11-16 · DOI: 10.1177/15589447241295328
Luke D Latario, John R Fowler
{"title":"聊天机器人在手外科门诊就诊计费中显示出适度的交互可靠性。","authors":"Luke D Latario, John R Fowler","doi":"10.1177/15589447241295328","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence offers opportunities to improve the burden of health care administrative tasks. Application of machine learning to coding and billing for clinic encounters may represent time- and cost-saving benefits with low risk to patient outcomes.</p><p><strong>Methods: </strong>Gemini, a publicly available large language model chatbot, was queried with 139 de-identified patient encounters from a single surgeon and asked to provide the Current Procedural Terminology code based on the criteria for different encounter types. Percent agreement and Cohen's kappa coefficient were calculated.</p><p><strong>Results: </strong>Gemini demonstrated 68% agreement for all encounter types, with a kappa coefficient of 0.586 corresponding to moderate interrater reliability. Agreement was highest for postoperative encounters (n = 43) with 98% agreement and lowest for new encounters (n = 27) with 48% agreement. Gemini recommended billing levels greater than the surgeon's billing level 31 times and lower billing levels 10 times, with 4 wrong encounter type codes.</p><p><strong>Conclusions: </strong>A publicly available chatbot without specific programming for health care billing demonstrated moderate interrater reliability with a hand surgeon in billing clinic encounters. Future integration of artificial intelligence tools in physician workflow may improve the accuracy and speed of billing encounters and lower administrative costs.</p>","PeriodicalId":12902,"journal":{"name":"HAND","volume":" ","pages":"15589447241295328"},"PeriodicalIF":1.8000,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11571175/pdf/","citationCount":"0","resultStr":"{\"title\":\"Chatbot Demonstrates Moderate Interrater Reliability in Billing for Hand Surgery Clinic Encounters.\",\"authors\":\"Luke D Latario, John R Fowler\",\"doi\":\"10.1177/15589447241295328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Artificial intelligence offers opportunities to improve the burden of health care administrative tasks. Application of machine learning to coding and billing for clinic encounters may represent time- and cost-saving benefits with low risk to patient outcomes.</p><p><strong>Methods: </strong>Gemini, a publicly available large language model chatbot, was queried with 139 de-identified patient encounters from a single surgeon and asked to provide the Current Procedural Terminology code based on the criteria for different encounter types. Percent agreement and Cohen's kappa coefficient were calculated.</p><p><strong>Results: </strong>Gemini demonstrated 68% agreement for all encounter types, with a kappa coefficient of 0.586 corresponding to moderate interrater reliability. Agreement was highest for postoperative encounters (n = 43) with 98% agreement and lowest for new encounters (n = 27) with 48% agreement. Gemini recommended billing levels greater than the surgeon's billing level 31 times and lower billing levels 10 times, with 4 wrong encounter type codes.</p><p><strong>Conclusions: </strong>A publicly available chatbot without specific programming for health care billing demonstrated moderate interrater reliability with a hand surgeon in billing clinic encounters. 
Future integration of artificial intelligence tools in physician workflow may improve the accuracy and speed of billing encounters and lower administrative costs.</p>\",\"PeriodicalId\":12902,\"journal\":{\"name\":\"HAND\",\"volume\":\" \",\"pages\":\"15589447241295328\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-11-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11571175/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"HAND\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/15589447241295328\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"HAND","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/15589447241295328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Citations: 0

Abstract


Background: Artificial intelligence offers opportunities to reduce the burden of health care administrative tasks. Application of machine learning to coding and billing for clinic encounters may represent time- and cost-saving benefits with low risk to patient outcomes.

Methods: Gemini, a publicly available large language model chatbot, was queried with 139 de-identified patient encounters from a single surgeon and asked to provide the Current Procedural Terminology (CPT) code based on the criteria for different encounter types. Percent agreement and Cohen's kappa coefficient were calculated.
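For readers who want to reproduce the two agreement statistics named above, the sketch below shows one way to compute percent agreement and Cohen's kappa for paired surgeon/chatbot codes. It is a minimal illustration under assumed inputs, not the authors' analysis code, and the CPT codes in the example are illustrative placeholders only.

```python
# A minimal sketch (not the authors' code) of the two agreement statistics
# named in the Methods, assuming each encounter yields one surgeon-assigned
# and one chatbot-assigned CPT code. The codes below are illustrative only.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of encounters where both raters chose the same code."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement corrected for chance, (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: sum over codes of the product of each rater's
    # marginal probability of assigning that code.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical paired codes (new, established, and postoperative visits).
surgeon = ["99203", "99213", "99024", "99213", "99204"]
chatbot = ["99204", "99213", "99024", "99214", "99204"]
print(percent_agreement(surgeon, chatbot))  # 0.6
print(cohens_kappa(surgeon, chatbot))       # 0.5
```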

Results: Gemini demonstrated 68% agreement for all encounter types, with a kappa coefficient of 0.586 corresponding to moderate interrater reliability. Agreement was highest for postoperative encounters (n = 43) with 98% agreement and lowest for new encounters (n = 27) with 48% agreement. Gemini recommended billing levels greater than the surgeon's billing level 31 times and lower billing levels 10 times, with 4 wrong encounter type codes.
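As a quick consistency check on the reported figures (a back-of-the-envelope calculation, not from the paper), the definition kappa = (p_o - p_e) / (1 - p_e) can be rearranged to recover the level of chance agreement implied by the reported observed agreement and kappa:

```python
# Back-solving the chance-agreement level implied by the reported results.
# From kappa = (p_o - p_e) / (1 - p_e)  =>  p_e = (p_o - kappa) / (1 - kappa)
p_o, kappa = 0.68, 0.586
p_e = (p_o - kappa) / (1 - kappa)
print(round(p_e, 3))  # ~0.227, i.e. roughly 23% agreement expected by chance
```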

Conclusions: A publicly available chatbot without specific programming for health care billing demonstrated moderate interrater reliability with a hand surgeon in billing clinic encounters. Future integration of artificial intelligence tools in physician workflow may improve the accuracy and speed of billing encounters and lower administrative costs.

Source journal: HAND (Medicine - Surgery)
CiteScore: 3.30 · Self-citation rate: 0.00% · Annual articles: 209

About the journal: HAND is the official journal of the American Association for Hand Surgery and is a peer-reviewed journal featuring articles written by clinicians worldwide presenting current research and clinical work in the field of hand surgery. It features articles related to all aspects of hand and upper extremity surgery and the postoperative care and rehabilitation of the hand.