Accurate delineation of the cerebral blood vessels from time-of-flight magnetic resonance angiography (TOF-MRA) data is essential to the analysis, diagnosis, and treatment of pathologies related to the cerebral blood supply. The limitations of supervised deep learning approaches in terms of annotation cost and applicability necessitate the exploration of alternative approaches that can effectively address these challenges and facilitate the real-world clinical deployment of automatic 3D cerebrovascular segmentation.
We aim to address the challenge of limited labeled data by exploiting the intricate structure of vessels and developing a method to assess the reliability of generated pseudo-labels, with the ultimate goal of improving the efficiency of unlabeled data utilization and enhancing segmentation accuracy.
We introduce a cross-consistency dual uncertainty quantification mean teacher method for semi-supervised, fine-grained 3D cerebrovascular segmentation from TOF-MRA images. To effectively incorporate knowledge from unlabeled samples, we present a dual-consistency learning scheme that jointly enforces pixel-level equivariance to image transformations and invariance to feature perturbations. To increase the reliability of unsupervised learning, we estimate segmentation uncertainty from the predictions of both the student and teacher models and use the two estimates collaboratively to guide consistency regularization. Additionally, we boost pixel-level prediction performance by applying a region-specific supervised loss to the annotated input samples only.
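As a rough illustration of the dual uncertainty-guided consistency term, the following PyTorch sketch masks the student-teacher consistency loss to voxels where both models are confident. The tensor names, the fixed threshold, and the use of predictive entropy as the uncertainty measure are assumptions for illustration; the abstract does not specify the exact formulation.

```python
import torch
import torch.nn.functional as F

def dual_uncertainty_consistency_loss(student_logits, teacher_logits, threshold=0.5):
    """Consistency loss between student and teacher predictions, masked by
    the combined per-voxel uncertainty of both models (illustrative sketch).

    student_logits, teacher_logits: raw outputs of shape (B, C, D, H, W).
    threshold: hypothetical cut-off; voxels whose combined uncertainty
    exceeds it are excluded from the consistency term.
    """
    student_prob = F.softmax(student_logits, dim=1)
    teacher_prob = F.softmax(teacher_logits.detach(), dim=1)

    # Predictive entropy as a per-voxel uncertainty estimate for each model.
    eps = 1e-8
    u_student = -(student_prob * torch.log(student_prob + eps)).sum(dim=1)
    u_teacher = -(teacher_prob * torch.log(teacher_prob + eps)).sum(dim=1)

    # Collaborative (dual) uncertainty: keep only voxels that both the
    # student and the teacher predict with reasonable confidence.
    u_combined = 0.5 * (u_student + u_teacher)
    mask = (u_combined < threshold).float()  # shape (B, D, H, W)

    # Mean-squared consistency computed on the confident voxels only.
    sq_diff = ((student_prob - teacher_prob) ** 2).sum(dim=1)
    return (mask * sq_diff).sum() / (mask.sum() + eps)
```

In a mean teacher setup, this unsupervised term would typically be added to the region-specific supervised loss on labeled samples, with the teacher weights updated as an exponential moving average of the student's.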
Quantitative and qualitative results on two publicly available datasets show that the proposed method outperforms state-of-the-art semi-supervised learning methods for cerebrovascular segmentation. Specifically, our method achieved a Dice similarity coefficient of 83.3% and an intersection-over-union of 71.5% on the IXI dataset, surpassing the baseline uncertainty-aware mean teacher method by 1.7% and 2.8%, respectively.
The framework's competitive performance across various metrics demonstrates its potential to reduce human annotation effort in accurate cerebrovascular extraction tasks, where its effective use of unlabeled data offers significant advantages.