Title: Image by co-reasoning: A collaborative reasoning-based implicit data augmentation method for dual-view CEUS classification
Authors: Peng Wan, Haiyan Xue, Shukang Zhang, Wentao Kong, Wei Shao, Baojie Wen, Daoqiang Zhang
Journal: Medical Image Analysis, Volume 102, Article 103557
DOI: 10.1016/j.media.2025.103557
Publication date: 2025-03-27
URL: https://www.sciencedirect.com/science/article/pii/S1361841525001045
Code: https://github.com/wanpeng16/CRIDA
Citations: 0
Abstract
Dual-view contrast-enhanced ultrasound (CEUS) data are often insufficient to train reliable machine learning models in typical clinical scenarios. A key issue is that limited clinical CEUS data fail to cover the underlying texture variations of specific diseases. Implicit data augmentation offers a flexible way to enrich sample diversity; however, inter-view semantic consistency has not been considered in previous studies. To address this issue, we propose a novel implicit data augmentation method for dual-view CEUS classification, which performs sample-adaptive data augmentation with collaborative semantic reasoning across views. Specifically, the method constructs a feature augmentation distribution for each ultrasound view of an individual sample, accounting for intra-class variance. To maintain semantic consistency between the augmented views, plausible semantic changes in one view are transferred from similar instances in the other view. In this retrospective study, we validate the proposed method on dual-view CEUS datasets of breast cancer and liver cancer, obtaining superior mean diagnostic accuracies of 89.25% and 95.57%, respectively. Experimental results demonstrate its effectiveness in improving model performance with limited clinical CEUS data. Code: https://github.com/wanpeng16/CRIDA.
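The abstract's core idea, augmenting each view while transferring "plausible semantic changes" from similar instances so the two views stay consistent, can be illustrated with a minimal sketch. This is not the paper's CRIDA algorithm; it is a simplified assumption-laden illustration in which an intra-class change in view A (a step toward a same-class nearest neighbour) is mirrored by the corresponding displacement of the same neighbour pair in view B. The function name `augment_dual_view` and the interpolation scheme are hypothetical.

```python
import numpy as np

def augment_dual_view(feat_a, feat_b, labels, rng, alpha=0.5):
    """Sketch of semantically consistent dual-view feature augmentation.

    For each sample i:
      1. find its nearest same-class neighbour j in view A;
      2. perturb the view-A feature toward feat_a[j] (a plausible
         intra-class semantic change);
      3. apply the *corresponding* displacement feat_b[j] - feat_b[i]
         in view B, so both augmented views reflect the same shift.
    """
    n = feat_a.shape[0]
    aug_a, aug_b = feat_a.copy(), feat_b.copy()
    for i in range(n):
        # candidate neighbours: same class, excluding the sample itself
        same = np.where((labels == labels[i]) & (np.arange(n) != i))[0]
        if same.size == 0:
            continue
        dists = np.linalg.norm(feat_a[same] - feat_a[i], axis=1)
        j = same[np.argmin(dists)]
        t = alpha * rng.random()  # random step size in [0, alpha)
        aug_a[i] = feat_a[i] + t * (feat_a[j] - feat_a[i])
        aug_b[i] = feat_b[i] + t * (feat_b[j] - feat_b[i])
    return aug_a, aug_b

# Tiny demo: two classes, 2-D features per view.
rng = np.random.default_rng(0)
fa = rng.normal(size=(6, 2))
fb = rng.normal(size=(6, 2))
y = np.array([0, 0, 0, 1, 1, 1])
aa, ab = augment_dual_view(fa, fb, y, rng)
```

Because the same interpolation coefficient `t` and the same neighbour pair drive both views, the augmented pair (aa[i], ab[i]) describes one coherent semantic change rather than two independent perturbations. The paper's actual method instead samples from a per-sample feature augmentation distribution; see the linked repository for the real implementation.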
Journal Introduction
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.