Knowledge-Driven Framework for Anatomical Landmark Annotation in Laparoscopic Surgery

Jie Zhang;Song Zhou;Yiwei Wang;Huan Zhao;Han Ding
{"title":"Knowledge-Driven Framework for Anatomical Landmark Annotation in Laparoscopic Surgery","authors":"Jie Zhang;Song Zhou;Yiwei Wang;Huan Zhao;Han Ding","doi":"10.1109/TMI.2025.3529294","DOIUrl":null,"url":null,"abstract":"Accurate and reliable annotation of anatomical landmarks in laparoscopic surgery remains a challenge due to varying degrees of landmark visibility and changing shapes of human tissues during a surgical procedure in videos. In this paper, we propose a knowledge-driven framework that integrates prior surgical expertise with visual data to address this problem. Inspired by visual reasoning knowledge of tool-anatomy interactions, our framework models a spatio-temporal graph to represent the static topology of tool and tissue and dynamic transitions of landmarks’ temporal behavior. By assigning explainable features of the surgical scene as node attributes in the graph, the surgical context is incorporated into the knowledge space. An attention-guided message passing mechanism across the graph dynamically adjusts the focus in different scenarios, enabling robust tracking of landmark states throughout the surgical process. Evaluations on the clinical dataset demonstrate the framework’s ability to effectively use the inductive bias of explainable features to label landmarks, showing its potential in tackling intricate surgical tasks with improved stability and reliability.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2218-2229"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on medical imaging","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10841458/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Accurate and reliable annotation of anatomical landmarks in laparoscopic surgery remains challenging due to varying degrees of landmark visibility and the changing shapes of human tissues over the course of a surgical video. In this paper, we propose a knowledge-driven framework that integrates prior surgical expertise with visual data to address this problem. Inspired by visual reasoning knowledge of tool-anatomy interactions, our framework models a spatio-temporal graph that represents the static topology of tool and tissue and the dynamic transitions of landmarks' temporal behavior. By assigning explainable features of the surgical scene as node attributes in the graph, the surgical context is incorporated into the knowledge space. An attention-guided message-passing mechanism across the graph dynamically adjusts the focus in different scenarios, enabling robust tracking of landmark states throughout the surgical process. Evaluations on a clinical dataset demonstrate the framework's ability to effectively use the inductive bias of explainable features to label landmarks, showing its potential in tackling intricate surgical tasks with improved stability and reliability.
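
To make the attention-guided message-passing idea concrete, the following is a minimal sketch of one attention-weighted aggregation step over a small tool-anatomy graph. It is not the authors' implementation: the node features, adjacency structure, feature dimension, and class name are hypothetical placeholders chosen for illustration only.

```python
# Minimal sketch (assumed, not the paper's code) of one attention-guided
# message-passing step over a graph whose nodes carry explainable scene features.
import torch
import torch.nn as nn


class AttentionMessagePassing(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Learned projections for query/key/value, as in standard graph attention.
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, dim) node attributes (e.g. explainable features of the scene)
        # adj: (num_nodes, num_nodes) binary adjacency of the static tool-tissue topology
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = (q @ k.T) / (x.shape[-1] ** 0.5)              # pairwise attention logits
        scores = scores.masked_fill(adj == 0, float("-inf"))   # keep attention on graph edges only
        attn = torch.softmax(scores, dim=-1)                    # per-node focus over its neighbors
        return attn @ v                                         # aggregated messages per node


# Toy usage: 3 hypothetical nodes (tool, tissue, landmark) with 8-dim attributes.
mp = AttentionMessagePassing(dim=8)
x = torch.randn(3, 8)
adj = torch.tensor([[1, 1, 0],
                    [1, 1, 1],
                    [0, 1, 1]])
updated = mp(x, adj)   # (3, 8) updated node states after one message-passing step
```

In this sketch the adjacency mask restricts attention to connected nodes, so the "focus" each node places on its neighbors can shift with the input features; extending it over time (one graph per frame with transitions between frames) would be one way to realize the spatio-temporal behavior the abstract describes.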