Adaptive batch-fusion self-supervised learning for ultrasound image pretraining
Jiansong Zhang, Xiuming Wu, Shunlan Liu, Yuling Fan, Yongjian Chen, Guorong Lyu, Peizhong Liu, Zhonghua Liu, Shaozheng He
Computerized Medical Imaging and Graphics, Volume 124, Article 102599, published 2025-07-08
DOI: 10.1016/j.compmedimag.2025.102599
Citations: 0
Abstract
Medical self-supervised learning eliminates the reliance on labels, making feature extraction simple and efficient. However, the intricate design of pretext tasks in single-modal self-supervised analysis, compounded by an excessive dependence on data augmentation, has become a bottleneck in medical self-supervised learning research. This paper therefore reanalyzes the feature learnability introduced by data augmentation strategies in medical image self-supervised learning. We introduce an adaptive data augmentation method for self-supervised learning from the perspective of batch fusion, and we propose a convolutional embedding block that learns the incremental representation between fused batches. Evaluated on five fused-data tasks proposed by previous researchers, our method achieves a linear classification protocol accuracy of 94.25% with only 150 epochs of self-supervised feature training on a Vision Transformer (ViT), the best result among comparable methods. A detailed ablation study of previous augmentation strategies indicates that the proposed medical data augmentation strategy effectively represents ultrasound data features during self-supervised learning. The code and weights can be found here.
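To make the batch-fusion idea concrete, the sketch below shows one way a batch-level fusion augmentation and a convolutional embedding block could be wired together in PyTorch. The mixup-style fusion rule, patch size, and embedding dimension are illustrative assumptions rather than the authors' implementation; their released code and weights define the actual method described in the paper.

```python
# Illustrative sketch only: the fusion rule and module below are assumptions,
# not the authors' implementation (see their released code for the real method).
import torch
import torch.nn as nn


def batch_fusion(images: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Fuse each image with another image drawn from the same batch.

    A mixup-style convex combination is used here purely for illustration;
    the paper's adaptive fusion strategy may differ.
    """
    perm = torch.randperm(images.size(0))            # random pairing within the batch
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * images + (1.0 - lam) * images[perm]


class ConvEmbeddingBlock(nn.Module):
    """Convolutional stem mapping fused images to a token sequence for a ViT.

    The patch size and embedding dimension are hypothetical defaults.
    """

    def __init__(self, in_ch: int = 1, embed_dim: int = 768, patch: int = 16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)                              # (B, D, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)              # (B, N, D) token sequence
        return self.norm(x)


# Usage: fuse a batch of single-channel ultrasound frames, then embed them.
batch = torch.randn(8, 1, 224, 224)
tokens = ConvEmbeddingBlock()(batch_fusion(batch))    # -> (8, 196, 768)
```

The token sequence produced by such a stem would then feed a ViT encoder for self-supervised pretraining, with a linear classifier trained on the frozen features for the linear classification protocol reported above.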
Journal Introduction:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.