Controllable illumination invariant GAN for diverse temporally-consistent surgical video synthesis

IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Long Chen, Mobarak I. Hoque, Zhe Min, Matt Clarkson, Thomas Dowrick
DOI: 10.1016/j.media.2025.103731
Journal: Medical Image Analysis, Volume 105, Article 103731
Publication date: 2025-07-25
URL: https://www.sciencedirect.com/science/article/pii/S1361841525002786
Citations: 0

Abstract

Surgical video synthesis offers a cost-effective way to expand training data and enhance the performance of machine learning models in computer-assisted surgery. However, existing video translation methods often produce video sequences with large illumination changes across different views, disrupting the temporal consistency of the videos. Additionally, these methods typically synthesize videos with a monotonous style, whereas diverse synthetic data is desired to improve the generalization ability of downstream machine learning models. To address these challenges, we propose a novel Controllable Illumination Invariant Generative Adversarial Network (CIIGAN) for generating diverse, illumination-consistent video sequences. CIIGAN fuses multi-scale illumination-invariant features from a novel controllable illumination-invariant (CII) image space with multi-scale texture-invariant features from self-constructed 3D scenes. The CII image space, along with the 3D scenes, allows CIIGAN to produce diverse and temporally-consistent video or image translations. Extensive experiments demonstrate that CIIGAN achieves more realistic and illumination-consistent translations compared to previous state-of-the-art baselines. Furthermore, segmentation networks trained on our diverse synthetic data outperform those trained on monotonous synthetic data. Our source code, trained models, and 3D simulation scenes are publicly available at https://github.com/LongChenCV/CIIGAN.
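The abstract does not specify how the CII image space is constructed; the details are in the full paper. As a rough illustration of the general idea of an illumination-invariant image representation, a classic log-chromaticity transform can be sketched as follows (this is a minimal toy stand-in, not the authors' CII space: dividing each channel by a reference channel cancels a uniform illumination scale factor):

```python
import numpy as np

def log_chromaticity(img):
    """Map an RGB image to 2-channel log-chromaticity coordinates.

    Dividing red and blue by the green channel cancels any global
    illumination scale, so the output is invariant to uniform
    brightness changes across frames.
    """
    img = img.astype(np.float64) + 1e-6          # guard against log(0)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)

rng = np.random.default_rng(0)
frame = rng.uniform(0.1, 1.0, size=(4, 4, 3))    # a synthetic RGB frame
brighter = 1.7 * frame                           # same scene, stronger light
# Both frames map to (nearly) identical invariant coordinates.
print(np.allclose(log_chromaticity(frame), log_chromaticity(brighter), atol=1e-4))
```

A representation with this property helps keep translations temporally consistent, since the generator no longer reacts to per-view brightness changes.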
Source journal
Medical Image Analysis (Engineering & Technology - Biomedical Engineering)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months
Journal description: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.