A comparison of techniques for predicting telehealth visit failure

Alexander J. Idarraga, David F. Schneider
{"title":"A comparison of techniques for predicting telehealth visit failure","authors":"Alexander J. Idarraga ,&nbsp;David F. Schneider","doi":"10.1016/j.ibmed.2025.100235","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>Telehealth is an increasingly important method for delivering care. Health systems lack the ability to accurately predict which telehealth visits will fail due to poor connection, poor technical literacy, or other reasons. This results in wasted resources and disrupted patient care. The purpose of this study is to characterize and compare various methods for predicting telehealth visit failure, and to determine the prediction method most suited for implementation in a real-time operational setting.</div></div><div><h3>Methods</h3><div>A single-center, retrospective cohort study was conducted using data sourced from our data warehouse. Patient demographic information and data characterizing prior visit success and engagement with electronic health tools were included. Three main model types were evaluated: an existing scoring model developed by Hughes et al., a regression-based scoring model, and Machine Learning classifiers. Variables were selected for their importance and anticipated availability; Number Needed to Treat was used to demonstrate the number of interventions (e.g. pre-visit phone calls) required to improve success rates in the context of weekly patient volumes.</div></div><div><h3>Results</h3><div>217, 229 visits spanning 480 days were evaluated, of which 22,443 (10.33 %) met criteria for failure. Hughes et al.’s model applied to our data yielded an Area Under the Receiver Operating Characteristics Curve (AUC ROC) of 0.678 when predicting failure. A score-based model achieved an AUC ROC of 0.698. Logistic Regression, Random Forest, and Gradient Boosting models demonstrated AUC ROCs ranging from 0.7877 to 0.7969. A NNT of 32 was achieved if the 263 highest-risk patients were selected in a low-volume week using the RF classifier, compared to an expected NNT of 90 if the same number of patients were randomly selected.</div></div><div><h3>Conclusions</h3><div>Machine Learning classifiers demonstrated superiority over score-based methods for predicting telehealth visit failure. Prospective evaluation is required; evaluation using NNT as a metric can help to operationalize these models.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100235"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligence-based medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666521225000390","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Objective

Telehealth is an increasingly important method for delivering care, yet health systems lack the ability to accurately predict which telehealth visits will fail because of poor connectivity, low technical literacy, or other reasons. This results in wasted resources and disrupted patient care. The purpose of this study is to characterize and compare various methods for predicting telehealth visit failure, and to determine the prediction method best suited for implementation in a real-time operational setting.

Methods

A single-center, retrospective cohort study was conducted using data sourced from our data warehouse. Patient demographic information and data characterizing prior visit success and engagement with electronic health tools were included. Three main model types were evaluated: an existing scoring model developed by Hughes et al., a regression-based scoring model, and Machine Learning classifiers. Variables were selected for their importance and anticipated availability. The Number Needed to Treat (NNT) was used to estimate the number of interventions (e.g., pre-visit phone calls) required to improve success rates in the context of weekly patient volumes.
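As a rough illustration of how such a head-to-head comparison can be run (this is not the authors' pipeline; the dataset, features, and hyperparameters below are assumptions, with synthetic data standing in for the demographic, prior-visit, and portal-engagement variables described above), the following Python sketch fits the three classifier families and reports their AUC ROC on a held-out split:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: roughly 10% positive (failed-visit) class, mirroring the reported imbalance.
X, y = make_classification(n_samples=20_000, n_features=12, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Fit each candidate model and score it on the held-out split.
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC ROC = {auc:.3f}")

Stratifying the split keeps the roughly 10% failure prevalence consistent between the training and test sets, so the held-out AUC ROC estimates are comparable across models.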

Results

217,229 visits spanning 480 days were evaluated, of which 22,443 (10.33%) met criteria for failure. Hughes et al.'s model applied to our data yielded an Area Under the Receiver Operating Characteristic Curve (AUC ROC) of 0.678 when predicting failure. A score-based model achieved an AUC ROC of 0.698. Logistic Regression, Random Forest (RF), and Gradient Boosting models demonstrated AUC ROCs ranging from 0.7877 to 0.7969. An NNT of 32 was achieved when the 263 highest-risk patients were selected in a low-volume week using the RF classifier, compared with an expected NNT of 90 if the same number of patients were randomly selected.
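For context, NNT here follows the standard definition: the reciprocal of the absolute reduction in failure rate achieved by intervening on the targeted group. The short Python sketch below shows the arithmetic with made-up rates (not figures from the study), chosen only so the result lands at the reported NNT of 32:

def number_needed_to_treat(failure_rate_without: float, failure_rate_with: float) -> float:
    """Interventions required to prevent one additional visit failure (NNT = 1 / absolute risk reduction)."""
    arr = failure_rate_without - failure_rate_with
    if arr <= 0:
        raise ValueError("The intervention must reduce the failure rate for NNT to be defined.")
    return 1.0 / arr

# Hypothetical example: if intervening on the model's highest-risk patients lowers their
# failure rate from 35.0% to 31.875%, then NNT = 1 / 0.03125 = 32.
print(round(number_needed_to_treat(0.35, 0.31875)))   # prints 32

On that definition, an NNT of 32 versus 90 implies that targeting the highest-risk patients with the RF classifier requires roughly a third as many interventions per prevented failure as random selection.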

Conclusions

Machine Learning classifiers demonstrated superiority over score-based methods for predicting telehealth visit failure. Prospective evaluation is required; using NNT as a metric can help operationalize these models.