Video-based multi-target multi-camera tracking for postoperative phase recognition.

IF 2.3 · Medicine (Zone 3) · Q3 ENGINEERING, BIOMEDICAL
Franziska Jurosch, Janik Zeller, Lars Wagner, Ege Özsoy, Alissa Jell, Sven Kolb, Dirk Wilhelm
{"title":"基于视频的多目标多摄像机术后相位识别。","authors":"Franziska Jurosch, Janik Zeller, Lars Wagner, Ege Özsoy, Alissa Jell, Sven Kolb, Dirk Wilhelm","doi":"10.1007/s11548-025-03344-x","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Deep learning methods are commonly used to generate context understanding to support surgeons and medical professionals. By expanding the current focus beyond the operating room (OR) to postoperative workflows, new forms of assistance are possible. In this article, we propose a novel multi-target multi-camera tracking (MTMCT) architecture for postoperative phase recognition, location tracking, and automatic timestamp generation.</p><p><strong>Methods: </strong>Three RGB cameras were used to create a multi-camera data set containing 19 reenacted postoperative patient flows. Patients and beds were annotated and used to train the custom MTMCT architecture. It includes bed and patient tracking for each camera and a postoperative patient state module to provide the postoperative phase, current location of the patient, and automatically generated timestamps.</p><p><strong>Results: </strong>The architecture demonstrates robust performance for single- and multi-patient scenarios by embedding medical domain-specific knowledge. In multi-patient scenarios, the state machine representing the postoperative phases has a traversal accuracy of <math><mrow><mn>84.9</mn> <mo>±</mo> <mn>6.0</mn> <mo>%</mo></mrow> </math> , <math><mrow><mn>91.4</mn> <mo>±</mo> <mn>1.5</mn> <mo>%</mo></mrow> </math> of timestamps are generated correctly, and the patient tracking IDF1 reaches <math><mrow><mn>92.0</mn> <mo>±</mo> <mn>3.6</mn> <mo>%</mo></mrow> </math> . Comparative experiments show the effectiveness of using AFLink for matching partial trajectories in postoperative settings.</p><p><strong>Conclusion: </strong>As our approach shows promising results, it lays the foundation for real-time surgeon support, enhancing clinical documentation and ultimately improving patient care.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Video-based multi-target multi-camera tracking for postoperative phase recognition.\",\"authors\":\"Franziska Jurosch, Janik Zeller, Lars Wagner, Ege Özsoy, Alissa Jell, Sven Kolb, Dirk Wilhelm\",\"doi\":\"10.1007/s11548-025-03344-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Deep learning methods are commonly used to generate context understanding to support surgeons and medical professionals. By expanding the current focus beyond the operating room (OR) to postoperative workflows, new forms of assistance are possible. In this article, we propose a novel multi-target multi-camera tracking (MTMCT) architecture for postoperative phase recognition, location tracking, and automatic timestamp generation.</p><p><strong>Methods: </strong>Three RGB cameras were used to create a multi-camera data set containing 19 reenacted postoperative patient flows. Patients and beds were annotated and used to train the custom MTMCT architecture. 
It includes bed and patient tracking for each camera and a postoperative patient state module to provide the postoperative phase, current location of the patient, and automatically generated timestamps.</p><p><strong>Results: </strong>The architecture demonstrates robust performance for single- and multi-patient scenarios by embedding medical domain-specific knowledge. In multi-patient scenarios, the state machine representing the postoperative phases has a traversal accuracy of <math><mrow><mn>84.9</mn> <mo>±</mo> <mn>6.0</mn> <mo>%</mo></mrow> </math> , <math><mrow><mn>91.4</mn> <mo>±</mo> <mn>1.5</mn> <mo>%</mo></mrow> </math> of timestamps are generated correctly, and the patient tracking IDF1 reaches <math><mrow><mn>92.0</mn> <mo>±</mo> <mn>3.6</mn> <mo>%</mo></mrow> </math> . Comparative experiments show the effectiveness of using AFLink for matching partial trajectories in postoperative settings.</p><p><strong>Conclusion: </strong>As our approach shows promising results, it lays the foundation for real-time surgeon support, enhancing clinical documentation and ultimately improving patient care.</p>\",\"PeriodicalId\":51251,\"journal\":{\"name\":\"International Journal of Computer Assisted Radiology and Surgery\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-04-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Computer Assisted Radiology and Surgery\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s11548-025-03344-x\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Assisted Radiology and Surgery","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11548-025-03344-x","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: Deep learning methods are commonly used to generate context understanding to support surgeons and medical professionals. By expanding the current focus beyond the operating room (OR) to postoperative workflows, new forms of assistance are possible. In this article, we propose a novel multi-target multi-camera tracking (MTMCT) architecture for postoperative phase recognition, location tracking, and automatic timestamp generation.

Methods: Three RGB cameras were used to create a multi-camera data set containing 19 reenacted postoperative patient flows. Patients and beds were annotated and used to train the custom MTMCT architecture. It includes bed and patient tracking for each camera and a postoperative patient state module to provide the postoperative phase, current location of the patient, and automatically generated timestamps.
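
The abstract does not spell out the individual postoperative phases or the interface of the patient state module, so the sketch below only illustrates the general idea of a phase state machine with automatic timestamp generation. The phase names (LEAVING_OR, TRANSFER, RECOVERY_ROOM, WARD), the allowed transitions, and the PatientState class are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of a postoperative phase state machine with automatic
# timestamping. Phase names and transitions are assumptions for illustration;
# the paper's actual patient state module may differ.
from datetime import datetime
from enum import Enum, auto


class Phase(Enum):  # hypothetical postoperative phases
    LEAVING_OR = auto()
    TRANSFER = auto()
    RECOVERY_ROOM = auto()
    WARD = auto()


# hypothetical allowed transitions between phases
TRANSITIONS = {
    Phase.LEAVING_OR: {Phase.TRANSFER},
    Phase.TRANSFER: {Phase.RECOVERY_ROOM, Phase.WARD},
    Phase.RECOVERY_ROOM: {Phase.WARD},
    Phase.WARD: set(),
}


class PatientState:
    """Tracks one patient's postoperative phase, location, and timestamps."""

    def __init__(self, patient_id: str, location: str):
        self.patient_id = patient_id
        self.phase = Phase.LEAVING_OR
        self.location = location
        # automatically generated timestamps, one per entered phase
        self.timestamps = {self.phase: datetime.now()}

    def update(self, new_phase: Phase, location: str) -> None:
        """Advance the state machine only if the transition is allowed."""
        self.location = location
        if new_phase != self.phase and new_phase in TRANSITIONS[self.phase]:
            self.phase = new_phase
            self.timestamps[new_phase] = datetime.now()


# usage: per-camera tracking results would drive update() whenever the fused
# detections indicate a phase or location change for a tracked patient
state = PatientState("patient_01", location="corridor")
state.update(Phase.TRANSFER, location="corridor")
print(state.phase, state.timestamps)
```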

Results: The architecture demonstrates robust performance for single- and multi-patient scenarios by embedding medical domain-specific knowledge. In multi-patient scenarios, the state machine representing the postoperative phases has a traversal accuracy of 84.9 ± 6.0%, 91.4 ± 1.5% of timestamps are generated correctly, and the patient tracking IDF1 reaches 92.0 ± 3.6%. Comparative experiments show the effectiveness of using AFLink for matching partial trajectories in postoperative settings.
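
IDF1 is a standard identity-preservation metric for multi-object tracking: the F1 score over detections whose predicted identity matches the ground-truth identity under an optimal global ID assignment. A minimal sketch of the computation is shown below; the counts are illustrative only (chosen to yield roughly the reported 92% IDF1) and do not come from the paper.

```python
# IDF1 = 2*IDTP / (2*IDTP + IDFP + IDFN): the F1 score over detections whose
# predicted identity matches the ground-truth identity under the optimal
# global ID assignment (Ristani et al., 2016).
def idf1(idtp: int, idfp: int, idfn: int) -> float:
    """Identity F1 score commonly reported for multi-target tracking."""
    return 2 * idtp / (2 * idtp + idfp + idfn)


# Illustrative counts only (not from the paper): 920 ID true positives,
# 80 ID false positives, 80 ID false negatives -> IDF1 = 0.92.
print(f"IDF1 = {idf1(920, 80, 80):.3f}")
```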

Conclusion: As our approach shows promising results, it lays the foundation for real-time surgeon support, enhancing clinical documentation and ultimately improving patient care.

Source journal: International Journal of Computer Assisted Radiology and Surgery (ENGINEERING, BIOMEDICAL; RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 5.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 6-12 weeks
Journal description: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines and encourages interdisciplinary research and development activities in an international environment.