Zihang Chen, Weijie Zhao, Jingyang Liu, Puguang Xie, Siyu Hou, Yongjian Nian, Xiaochao Yang, Ruiyan Ma, Haiyan Ding, Jingjing Xiao
Title: Active learning for cross-modal cardiac segmentation with sparse annotation
DOI: 10.1016/j.patcog.2025.111403
Journal: Pattern Recognition, Volume 162, Article 111403 (Q1, Computer Science, Artificial Intelligence; Impact Factor 7.5)
Publication date: 2025-02-04
URL: https://www.sciencedirect.com/science/article/pii/S0031320325000639
Citations: 0
Abstract
This work presents a new dual-domain active learning method for cross-modal cardiac image segmentation with sparse annotations. Our network uses tilted Variational Auto-Encoders (tVAE) to extract and align invariant features from different domains. An innovative Category Diversity Maximization approach computes category statistics within a region to reflect its category diversity. An Uncertainty Region Selection Strategy is devised to measure the uncertainty of each predicted pixel. By jointly applying these two strategies, we identify high-risk regions for future annotation in active learning. The method was benchmarked against leading algorithms on two public cardiac datasets. In the MS-CMRSeg bSSFP to LGE segmentation task, our method achieved a DSC of 87.2% with annotations on just six pixels, surpassing the best results from the MS-CMRSeg Challenge 2019. On the MM-WHS dataset, using only 0.1% of annotations, our method achieved a DSC of 91.8% for CT to MR and 88.9% for MR to CT, surpassing fully supervised models.
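The abstract describes scoring candidate regions by two signals: the diversity of predicted categories inside a region, and per-pixel prediction uncertainty. The paper's exact formulation is not given here, so the sketch below is an assumption: it uses the entropy of the predicted class histogram as a category-diversity score, predictive entropy as pixel uncertainty, and a hypothetical weight `alpha` to combine them.

```python
import numpy as np

def pixel_uncertainty(probs):
    """Predictive entropy per pixel. probs: (C, H, W) softmax output."""
    return -np.sum(probs * np.log(probs + 1e-8), axis=0)

def category_diversity(pred, region, num_classes):
    """Entropy of the predicted-class histogram inside a boolean region mask."""
    hist = np.bincount(pred[region], minlength=num_classes).astype(float)
    p = hist / hist.sum()          # assumes the region mask is non-empty
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def score_regions(probs, regions, num_classes, alpha=0.5):
    """Rank regions by a weighted sum of category diversity and mean uncertainty."""
    pred = probs.argmax(axis=0)
    unc = pixel_uncertainty(probs)
    return np.array([
        alpha * category_diversity(pred, r, num_classes)
        + (1.0 - alpha) * unc[r].mean()
        for r in regions
    ])
```

Under this sketch, the highest-scoring regions would be the ones selected for annotation at each active-learning round; `alpha` trades off the two criteria.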
Journal description:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.