A Bayesian approach to temporal surgical segmentation model fusion.

IF 2.3 | Zone 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL
Max Berniker, Sreeram Kamabattula, Kiran Bhattacharyya
DOI: 10.1007/s11548-026-03686-0
Journal: International Journal of Computer Assisted Radiology and Surgery
Published: 2026-05-07 · Journal Article · JCR Q3, ENGINEERING, BIOMEDICAL
Citations: 0

Abstract

Purpose: Robotic-assisted surgery (RAS) generates vast amounts of video and robotic data, presenting opportunities for machine learning. In particular, video-based models are needed that can temporally segment frames by ontological categories such as procedure type, phase, step, and action. Training separate models for each category neglects statistical dependencies between categories and can yield incompatible predictions. Training large multi-category models may help, but increases complexity while reducing model modularity and interpretability.

Methods: We present a model fusion alternative: an effectively zero-free-parameter Bayesian model fusion technique. Incorporating the empirical conditional dependencies across categories and time, we combine predictions from multiple segmentation models into one joint Bayesian inference. The result is a Bayes-optimal distribution over all categories that evolves over time as evidence accumulates.
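The fusion idea described above can be illustrated with a small sketch (this is not the authors' implementation; the category sizes, prior, and transition matrix are toy assumptions). Two per-category classifiers — say one over phases and one over steps — emit frame-wise softmax outputs, which are treated as likelihoods and multiplied into a joint belief that is propagated through an empirical transition model, HMM-style forward filtering:

```python
import numpy as np

# Hypothetical category sizes for illustration.
n_phase, n_step = 3, 4
rng = np.random.default_rng(0)

# Empirical joint prior over (phase, step), encoding their dependency.
joint_prior = rng.dirichlet(np.ones(n_phase * n_step)).reshape(n_phase, n_step)

# Empirical row-stochastic transition matrix over joint states.
T = rng.dirichlet(np.ones(n_phase * n_step), size=n_phase * n_step)

def fuse_step(belief, p_phase, p_step):
    """One forward-filtering update.

    belief: (n_phase, n_step) posterior from the previous frame.
    p_phase, p_step: per-model softmax outputs for the current frame,
    treated as (proportional) likelihoods.
    """
    # Predict: propagate the joint belief through the transition model.
    predicted = (belief.ravel() @ T).reshape(n_phase, n_step)
    # Update: multiply in the evidence from each model. If a model emits
    # nothing for a frame, its factor can simply be skipped and the
    # belief evolves on the prior dynamics alone.
    posterior = predicted * p_phase[:, None] * p_step[None, :]
    return posterior / posterior.sum()

belief = joint_prior.copy()
for _ in range(5):  # five simulated frames of model outputs
    p_phase = rng.dirichlet(np.ones(n_phase))
    p_step = rng.dirichlet(np.ones(n_step))
    belief = fuse_step(belief, p_phase, p_step)
```

Because the prior and transition matrices come from empirical label statistics and the per-frame likelihoods come from the already-trained models, the update itself introduces no free parameters — consistent with the "effectively zero-free-parameter" claim in the abstract.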

Results: On a large test set of hundreds of surgical cases comprising nearly eight million frames of annotated data, we found that fused predictions from the joint Bayesian model provide clear benefits over the individual models, correcting inconsistent and inaccurate predictions, and even forming accurate beliefs when evidence was absent.

Conclusion: The model we present is a lightweight, principled alternative to machine learning-based model fusion. A sufficiently complex model could be trained to produce the same results, but would effectively trade explainable, low-overhead predictions for computational complexity and reduced transparency. We end by discussing how the same approach can be used to encompass larger, more sophisticated models within the same conceptual framework.

Source journal
International Journal of Computer Assisted Radiology and Surgery — ENGINEERING, BIOMEDICAL; RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
CiteScore: 5.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 6-12 weeks
Journal description: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.