A spatiotemporal dynamic fusion network for surgical action recognition

IF 6.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Gui-Bin Bian, Yaqin Peng, Zhen Li, Qiang Ye, Ruichen Ma
Journal: Neurocomputing, Volume 649, Article 130808
DOI: 10.1016/j.neucom.2025.130808
Published: 2025-07-01
Full text: https://www.sciencedirect.com/science/article/pii/S0925231225014808
Citations: 0

Abstract


Surgical action recognition is a crucial yet challenging task for intelligent surgical robots, enabling these systems to accurately identify and interpret the ongoing actions within a surgical procedure. By recognizing the action state in real time, the robot can provide immediate feedback and make adjustments that ensure the precision and safety of the surgery. The task nonetheless faces several challenges, including the temporal complexity of surgical actions, the fine-grained operation steps, and the subtle changes in surgical movements. To address these, a dynamic attention mechanism is proposed to capture the temporal correlation between the current frame and the previous frames of a video sequence. Building on it, a spatiotemporal dynamic fusion network comprising two specialized modules is proposed: the first, Double Bi-level Routing Attention (DBRA), extracts the most pertinent spatial and temporal features, while the second, a CNN-LSTM, delivers comprehensive spatiotemporal information. Experiments conducted on a neurosurgical dataset, Neuro67, and a public dataset, Suturing, demonstrate the performance of the proposed method. The results indicate that it achieves superior performance on this hard task, reaching 76.9% AP on Neuro67 and 85.5% accuracy on Suturing, leading the next-best method by 6.7% and 1.2% respectively, with the modules effectively focusing on dependency relationships both within regions of a frame and across frames in a video sequence.
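The abstract gives no implementation details, but the dynamic attention it describes — weighting previous frames by their correlation with the current frame — can be sketched in a minimal form. Everything below (function name, feature shapes, scaled dot-product scoring) is an illustrative assumption, not the authors' code.

```python
import numpy as np

def dynamic_temporal_attention(frames, current):
    """Pool previous-frame features, weighted by similarity to the current frame.

    frames:  (T, D) array of features for the T previous frames (assumed shape).
    current: (D,) feature vector for the current frame.
    Returns the attention-pooled context vector and the attention weights.
    """
    d = frames.shape[1]
    # Scaled dot-product score between the current frame and each previous frame.
    scores = frames @ current / np.sqrt(d)
    # Numerically stable softmax turns scores into a distribution over frames.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The context is the weighted sum of previous-frame features.
    context = weights @ frames
    return context, weights

rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 16))   # 8 previous frames, 16-dim features each
current = rng.normal(size=16)       # feature vector of the current frame
context, w = dynamic_temporal_attention(frames, current)
```

In the paper this temporal weighting is combined with the DBRA and CNN-LSTM modules; the sketch only shows the frame-to-frame correlation step the abstract names.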
Source journal: Neurocomputing (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering neurocomputing theory, practice, and applications.