Identifying Electronic Health Record Tasks and Activity Using Computer Vision.

IF 2.2 · CAS Region 2 (Medicine) · JCR Q4, Medical Informatics
Liem Manh Nguyen, Amrita Sinha, Adam Dziorny, Daniel Tawfik
Applied Clinical Informatics · DOI: 10.1055/a-2698-0841 · Published 2025-09-10

Identifying Electronic Health Record Tasks and Activity Using Computer Vision.

Background: Time spent in the electronic health record (EHR) is an important measure of clinical activity. Vendor-derived EHR use metrics may not correspond to actual EHR experience. Raw EHR audit logs enable customized EHR use metrics, but translating discrete timestamps to time intervals is challenging. There are insufficient data available to quantify inactivity between audit log timestamps.

Methods: We propose a computer vision-based model that can (1) classify the EHR task being performed and identify when task changes occur, and (2) quantify active-use time from session screen recordings of EHR use. We generated 111 minutes of simulated workflow in an Epic sandbox environment for development and training, and collected 86 minutes of real-world clinician session recordings for validation. The model used YOLOv8, Tesseract OCR, and a predefined dictionary to perform task classification and task change detection. We developed a frame comparison algorithm to distinguish activity from inactivity and thus quantify active time. We compared the model's task classification, task change identification, and active time quantification against clinician annotations. We then performed a post-hoc sensitivity analysis to assess the model's accuracy when using optimal parameters.
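The abstract does not specify the frame comparison algorithm, so the following is only a minimal sketch of one plausible approach: compare consecutive screen-recording frames by mean absolute pixel difference, treat frames above a change threshold as active, and count elapsed time between nearby active frames while bridging pauses shorter than an inactivity threshold. The function name and all threshold values are illustrative assumptions, not the paper's.

```python
import numpy as np

def active_time(frames, fps, diff_thresh=2.0, inactivity_thresh=5.0):
    """Estimate active-use seconds from a sequence of grayscale frames.

    Each frame is a 2-D numpy array sampled at `fps` frames per second.
    Consecutive frames whose mean absolute pixel difference exceeds
    diff_thresh mark activity; gaps shorter than inactivity_thresh
    seconds between active frames are bridged and counted as active.
    Thresholds here are illustrative, not the study's values.
    """
    if len(frames) < 2:
        return 0.0
    dt = 1.0 / fps
    last_active = None  # timestamp of the most recent active frame
    total = 0.0
    for i in range(1, len(frames)):
        t = i * dt
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if diff > diff_thresh:
            if last_active is not None:
                gap = t - last_active
                # Bridge short pauses; longer gaps count as inactive time.
                if gap <= inactivity_thresh:
                    total += gap
            last_active = t
    return total

# Example: 4 frames at 1 fps with screen changes at t=1 and t=3 and a
# 2-second pause between them, which is bridged as active time.
frames = [np.zeros((4, 4)), np.full((4, 4), 10.0),
          np.full((4, 4), 10.0), np.zeros((4, 4))]
print(active_time(frames, fps=1.0))  # → 2.0
```

Lowering `inactivity_thresh` shrinks how long a pause may be before it stops counting as active, which is the parameter the post-hoc sensitivity analysis varies.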

Results: Our model classified time spent in various high-level tasks with 94% accuracy. It detected task changes with 90.6% sensitivity. Active-use quantification varied by task, with lower mean absolute percentage error (MAPE) for tasks with clear visual changes (e.g., Results Review) and higher MAPE for tasks with subtle interactions (e.g., Note Entry). A post-hoc sensitivity analysis revealed improvement in active-use quantification with a lower inactivity threshold than the one initially used.
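MAPE here is the mean absolute percentage error between the model's active-time estimates and clinician-annotated times for a task. As a minimal illustration (the per-session values below are hypothetical, not from the study; annotations are assumed nonzero):

```python
def mape(annotated, estimated):
    """Mean absolute percentage error (%) of estimates vs. annotations."""
    pairs = list(zip(annotated, estimated))
    return 100.0 * sum(abs(a - e) / a for a, e in pairs) / len(pairs)

# Hypothetical per-session active minutes: clinician annotation vs. model output.
annotated = [10.0, 8.0, 12.0]
estimated = [9.0, 8.8, 12.6]
print(round(mape(annotated, estimated), 2))  # → 8.33
```

A task like Note Entry, where typing produces only subtle pixel changes, would yield larger per-session errors and hence a higher MAPE than a visually dynamic task like Results Review.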

Conclusion: A computer vision approach to identifying tasks performed and measuring time spent in the EHR is feasible. Future work should refine task-specific thresholds and validate across diverse settings. This approach enables defining optimal context-sensitive thresholds for quantifying clinically relevant active EHR time using raw audit log data.
