Medical Image Analysis: Latest Articles

A robust image segmentation and synthesis pipeline for histopathology
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-09-11 · DOI: 10.1016/j.media.2024.103344
{"title":"A robust image segmentation and synthesis pipeline for histopathology","authors":"","doi":"10.1016/j.media.2024.103344","DOIUrl":"10.1016/j.media.2024.103344","url":null,"abstract":"<div><p>Significant diagnostic variability between and within observers persists in pathology, despite the fact that digital slide images provide the ability to measure and quantify features much more precisely compared to conventional methods. Automated and accurate segmentation of cancerous cell and tissue regions can streamline the diagnostic process, providing insights into the cancer progression, and helping experts decide on the most effective treatment. Here, we evaluate the performance of the proposed PathoSeg model, with an architecture comprising of a modified HRNet encoder and a UNet++ decoder integrated with a CBAM block to utilize attention mechanism for an improved segmentation capability. We demonstrate that PathoSeg outperforms the current state-of-the-art (SOTA) networks in both quantitative and qualitative assessment of instance and semantic segmentation. Notably, we leverage the use of synthetic data generated by PathopixGAN, which effectively addresses the data imbalance problem commonly encountered in histopathology datasets, further improving the performance of PathoSeg. It utilizes spatially adaptive normalization within a generative and discriminative mechanism to synthesize diverse histopathological environments dictated through semantic information passed through pixel-level annotated Ground Truth semantic masks.Besides, we contribute to the research community by providing an in-house dataset that includes semantically segmented masks for breast carcinoma tubules (BCT), micro/macrovesicular steatosis of the liver (MSL), and prostate carcinoma glands (PCG). In the first part of the dataset, we have a total of 14 whole slide images from 13 patients’ liver, with fat cell segmented masks, totaling 951 masks of size 512 × 512 pixels. In the second part, it includes 17 whole slide images from 13 patients with prostate carcinoma gland segmentation masks, amounting to 30,000 masks of size 512 × 512 pixels. In the third part, the dataset contains 51 whole slides from 36 patients, with breast carcinoma tubule masks totaling 30,000 masks of size 512 × 512 pixels. To ensure transparency and encourage further research, we will make this dataset publicly available for non-commercial and academic purposes. To facilitate reproducibility and encourage further research, we will also make our code and pre-trained models publicly available at <span><span>https://github.com/DeepMIALab/PathoSeg</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142169385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
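As a rough illustration of the attention mechanism mentioned above, here is a minimal PyTorch sketch of a generic CBAM-style block (channel attention followed by spatial attention); this is a hedged sketch under standard CBAM assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

class CBAMBlock(nn.Module):
    """Generic CBAM-style block: channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP over global avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over stacked channel-wise avg/max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * w.view(b, c, 1, 1)                      # re-weight channels
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))       # re-weight locations
```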
Low-dose computed tomography perceptual image quality assessment
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-09-06 · DOI: 10.1016/j.media.2024.103343
{"title":"Low-dose computed tomography perceptual image quality assessment","authors":"","doi":"10.1016/j.media.2024.103343","DOIUrl":"10.1016/j.media.2024.103343","url":null,"abstract":"<div><p>In computed tomography (CT) imaging, optimizing the balance between radiation dose and image quality is crucial due to the potentially harmful effects of radiation on patients. Although subjective assessments by radiologists are considered the gold standard in medical imaging, these evaluations can be time-consuming and costly. Thus, objective methods, such as the peak signal-to-noise ratio and structural similarity index measure, are often employed as alternatives. However, these metrics, initially developed for natural images, may not fully encapsulate the radiologists’ assessment process. Consequently, interest in developing deep learning-based image quality assessment (IQA) methods that more closely align with radiologists’ perceptions is growing. A significant barrier to this development has been the absence of open-source datasets and benchmark models specific to CT IQA. Addressing these challenges, we organized the Low-dose Computed Tomography Perceptual Image Quality Assessment Challenge in conjunction with the Medical Image Computing and Computer Assisted Intervention 2023. This event introduced the first open-source CT IQA dataset, consisting of 1,000 CT images of various quality, annotated with radiologists’ assessment scores. As a benchmark, this challenge offers a comprehensive analysis of six submitted methods, providing valuable insight into their performance. This paper presents a summary of these methods and insights. This challenge underscores the potential for developing no-reference IQA methods that could exceed the capabilities of full-reference IQA methods, making a significant contribution to the research community with this novel dataset. The dataset is accessible at <span><span>https://zenodo.org/records/7833096</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1361841524002688/pdfft?md5=4b571dbdaaece38e1cd24203b6bc5445&pid=1-s2.0-S1361841524002688-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142169386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
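For context, the full-reference metrics the abstract refers to are easy to compute; a small sketch with NumPy and scikit-image (`ref` and `test` are placeholder arrays, not challenge data):

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.random.rand(512, 512)                 # stand-in reference slice
test = ref + 0.05 * np.random.randn(512, 512)  # stand-in low-dose slice
print(psnr(ref, test, data_range=1.0))
print(structural_similarity(ref, test, data_range=1.0))
```

The challenge's premise is that such metrics, computed against a reference, correlate imperfectly with radiologists' scores, motivating learned no-reference alternatives.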
Labeled-to-unlabeled distribution alignment for partially-supervised multi-organ medical image segmentation
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-09-05 · DOI: 10.1016/j.media.2024.103333
{"title":"Labeled-to-unlabeled distribution alignment for partially-supervised multi-organ medical image segmentation","authors":"","doi":"10.1016/j.media.2024.103333","DOIUrl":"10.1016/j.media.2024.103333","url":null,"abstract":"<div><p>Partially-supervised multi-organ medical image segmentation aims to develop a unified semantic segmentation model by utilizing multiple partially-labeled datasets, with each dataset providing labels for a single class of organs. However, the limited availability of labeled foreground organs and the absence of supervision to distinguish unlabeled foreground organs from the background pose a significant challenge, which leads to a distribution mismatch between labeled and unlabeled pixels. Although existing pseudo-labeling methods can be employed to learn from both labeled and unlabeled pixels, they are prone to performance degradation in this task, as they rely on the assumption that labeled and unlabeled pixels have the same distribution. In this paper, to address the problem of distribution mismatch, we propose a labeled-to-unlabeled distribution alignment (LTUDA) framework that aligns feature distributions and enhances discriminative capability. Specifically, we introduce a cross-set data augmentation strategy, which performs region-level mixing between labeled and unlabeled organs to reduce distribution discrepancy and enrich the training set. Besides, we propose a prototype-based distribution alignment method that implicitly reduces intra-class variation and increases the separation between the unlabeled foreground and background. This can be achieved by encouraging consistency between the outputs of two prototype classifiers and a linear classifier. Extensive experimental results on the AbdomenCT-1K dataset and a union of four benchmark datasets (including LiTS, MSD-Spleen, KiTS, and NIH82) demonstrate that our method outperforms the state-of-the-art partially-supervised methods by a considerable margin, and even surpasses the fully-supervised methods. The source code is publicly available at <span><span>LTUDA</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142148438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
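The cross-set augmentation described above performs CutMix-like region swaps across the labeled/unlabeled split; a hedged sketch of one way to implement region-level mixing (box sizes and sampling are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def cross_set_mix(labeled_img: np.ndarray, unlabeled_img: np.ndarray,
                  rng: np.random.Generator):
    """Swap a random rectangular region between a labeled and an unlabeled
    image so that both mixed samples contain pixels from both sets."""
    h, w = labeled_img.shape[-2:]
    ch = rng.integers(h // 4, h // 2)   # box height
    cw = rng.integers(w // 4, w // 2)   # box width
    y = rng.integers(0, h - ch)
    x = rng.integers(0, w - cw)
    mixed_l, mixed_u = labeled_img.copy(), unlabeled_img.copy()
    mixed_l[..., y:y + ch, x:x + cw] = unlabeled_img[..., y:y + ch, x:x + cw]
    mixed_u[..., y:y + ch, x:x + cw] = labeled_img[..., y:y + ch, x:x + cw]
    return mixed_l, mixed_u
```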
ATEC23 Challenge: Automated prediction of treatment effectiveness in ovarian cancer using histopathological images
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-09-05 · DOI: 10.1016/j.media.2024.103342
{"title":"ATEC23 Challenge: Automated prediction of treatment effectiveness in ovarian cancer using histopathological images","authors":"","doi":"10.1016/j.media.2024.103342","DOIUrl":"10.1016/j.media.2024.103342","url":null,"abstract":"<div><p>Ovarian cancer, predominantly epithelial ovarian cancer (EOC), is a global health concern due to its high mortality rate. Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of advanced patients are with recurrent cancer and disease. Bevacizumab is a humanized monoclonal antibody, which blocks <em><em>VEGF</em></em> signaling in cancer, inhibits angiogenesis and causes tumor shrinkage, and has been recently approved by the FDA as a monotherapy for advanced ovarian cancer in combination with chemotherapy. Unfortunately, Bevacizumab may also induce harmful adverse effects, such as hypertension, bleeding, arterial thromboembolism, poor wound healing and gastrointestinal perforation. Given the expensive cost and unwanted toxicities, there is an urgent need for predictive methods to identify who could benefit from bevacizumab. Of the 18 (approved) requests from 5 countries, 6 teams using 284 whole section WSIs for training to develop fully automated systems submitted their predictions on a test set of 180 tissue core images, with the corresponding ground truth labels kept private. This paper summarizes the 5 qualified methods successfully submitted to the international challenge of automated prediction of treatment effectiveness in ovarian cancer using the histopathologic images (ATEC23) held at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023 and evaluates the methods in comparison with 5 state of the art deep learning approaches. This study further assesses the effectiveness of the presented prediction models as indicators for patient selection utilizing both Cox proportional hazards analysis and Kaplan–Meier survival analysis. A robust and cost-effective deep learning pipeline for digital histopathology tasks has become a necessity within the context of the medical community. This challenge highlights the limitations of current MIL methods, particularly within the context of prognosis-based classification tasks, and the importance of DCNNs like inception that has nonlinear convolutional modules at various resolutions to facilitate processing the data in multiple resolutions, which is a key feature required for pathology related prediction tasks. This further suggests the use of feature reuse at various scales to improve models for future research directions. In particular, this paper releases the labels of the testing set and provides applications for future research directions in precision oncology to predict ovarian cancer treatment effectiveness and facilitate patient selection via histopathological images.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
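The survival analyses used for patient-selection assessment can be reproduced with standard tooling; a minimal sketch with the `lifelines` package (the toy table and column names are hypothetical, not challenge data):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical per-patient table: follow-up time, event flag, predicted group.
df = pd.DataFrame({
    "time_months": [12, 30, 18, 45, 9, 33],
    "event":       [1, 0, 1, 1, 0, 1],   # 1 = progression/death observed
    "pred_group":  [0, 1, 0, 1, 0, 1],   # model-predicted responder group
})

# Kaplan-Meier curve per predicted group.
kmf = KaplanMeierFitter()
for grp, sub in df.groupby("pred_group"):
    kmf.fit(sub["time_months"], event_observed=sub["event"], label=f"group {grp}")
    print(grp, kmf.median_survival_time_)

# Cox proportional hazards: hazard ratio associated with the predicted group.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()
```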
Enhancing global sensitivity and uncertainty quantification in medical image reconstruction with Monte Carlo arbitrary-masked Mamba
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-09-03 · DOI: 10.1016/j.media.2024.103334
{"title":"Enhancing global sensitivity and uncertainty quantification in medical image reconstruction with Monte Carlo arbitrary-masked mamba","authors":"","doi":"10.1016/j.media.2024.103334","DOIUrl":"10.1016/j.media.2024.103334","url":null,"abstract":"<div><p>Deep learning has been extensively applied in medical image reconstruction, where Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) represent the predominant paradigms, each possessing distinct advantages and inherent limitations: CNNs exhibit linear complexity with local sensitivity, whereas ViTs demonstrate quadratic complexity with global sensitivity. The emerging Mamba has shown superiority in learning visual representation, which combines the advantages of linear scalability and global sensitivity. In this study, we introduce MambaMIR, an Arbitrary-Masked Mamba-based model with wavelet decomposition for joint medical image reconstruction and uncertainty estimation. A novel Arbitrary Scan Masking (ASM) mechanism “masks out” redundant information to introduce randomness for further uncertainty estimation. Compared to the commonly used Monte Carlo (MC) dropout, our proposed MC-ASM provides an uncertainty map without the need for hyperparameter tuning and mitigates the performance drop typically observed when applying dropout to low-level tasks. For further texture preservation and better perceptual quality, we employ the wavelet transformation into MambaMIR and explore its variant based on the Generative Adversarial Network, namely MambaMIR-GAN. Comprehensive experiments have been conducted for multiple representative medical image reconstruction tasks, demonstrating that the proposed MambaMIR and MambaMIR-GAN outperform other baseline and state-of-the-art methods in different reconstruction tasks, where MambaMIR achieves the best reconstruction fidelity and MambaMIR-GAN has the best perceptual quality. In addition, our MC-ASM provides uncertainty maps as an additional tool for clinicians, while mitigating the typical performance drop caused by the commonly used dropout.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1361841524002597/pdfft?md5=a57fc8f3e3e9d07f16079fe5caf02411&pid=1-s2.0-S1361841524002597-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142158015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
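The MC-ASM idea parallels the familiar Monte Carlo dropout recipe: run several stochastic forward passes and read the spread as uncertainty. A generic sketch (the `model` and its source of randomness stand in for the paper's masking mechanism):

```python
import torch

@torch.no_grad()
def mc_uncertainty(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 8):
    """Repeat stochastic forward passes (dropout or random masking kept
    active) and return the mean reconstruction plus a per-pixel standard
    deviation map as the uncertainty estimate."""
    model.train()  # keep stochastic layers active at inference time
    samples = torch.stack([model(x) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0), samples.std(dim=0)
```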
Mammography classification with multi-view deep learning techniques: Investigating graph and transformer-based architectures
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-09-02 · DOI: 10.1016/j.media.2024.103320
{"title":"Mammography classification with multi-view deep learning techniques: Investigating graph and transformer-based architectures","authors":"","doi":"10.1016/j.media.2024.103320","DOIUrl":"10.1016/j.media.2024.103320","url":null,"abstract":"<div><p>The potential and promise of deep learning systems to provide an independent assessment and relieve radiologists’ burden in screening mammography have been recognized in several studies. However, the low cancer prevalence, the need to process high-resolution images, and the need to combine information from multiple views and scales still pose technical challenges. Multi-view architectures that combine information from the four mammographic views to produce an exam-level classification score are a promising approach to the automated processing of screening mammography. However, training such architectures from exam-level labels, without relying on pixel-level supervision, requires very large datasets and may result in suboptimal accuracy. Emerging architectures such as Visual Transformers (ViT) and graph-based architectures can potentially integrate ipsi-lateral and contra-lateral breast views better than traditional convolutional neural networks, thanks to their stronger ability of modeling long-range dependencies. In this paper, we extensively evaluate novel transformer-based and graph-based architectures against state-of-the-art multi-view convolutional neural networks, trained in a weakly-supervised setting on a middle-scale dataset, both in terms of performance and interpretability. Extensive experiments on the CSAW dataset suggest that, while transformer-based architecture outperform other architectures, different inductive biases lead to complementary strengths and weaknesses, as each architecture is sensitive to different signs and mammographic features. Hence, an ensemble of different architectures should be preferred over a winner-takes-all approach to achieve more accurate and robust results. Overall, the findings highlight the potential of a wide range of multi-view architectures for breast cancer classification, even in datasets of relatively modest size, although the detection of small lesions remains challenging without pixel-wise supervision or ad-hoc networks.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1361841524002457/pdfft?md5=54882ce8ea86df8174b91d3e6c870da0&pid=1-s2.0-S1361841524002457-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142144055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
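The ensemble recommendation is straightforward to act on; a sketch that averages exam-level probabilities across heterogeneous models (the model list and sigmoid head are assumptions for illustration):

```python
import torch

@torch.no_grad()
def ensemble_exam_score(models: list, views: torch.Tensor) -> torch.Tensor:
    """Average exam-level malignancy probabilities from architectures with
    different inductive biases (e.g., CNN, ViT, graph-based)."""
    probs = [torch.sigmoid(m(views)) for m in models]
    return torch.stack(probs, dim=0).mean(dim=0)
```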
Image-based simulation of mitral valve dynamic closure including anisotropy
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-08-31 · DOI: 10.1016/j.media.2024.103323
{"title":"Image-based simulation of mitral valve dynamic closure including anisotropy","authors":"","doi":"10.1016/j.media.2024.103323","DOIUrl":"10.1016/j.media.2024.103323","url":null,"abstract":"<div><p>Simulation of the dynamic behavior of mitral valve closure could improve clinical treatment by predicting surgical procedures outcome. We propose here a method to achieve this goal by using the immersed boundary method. In order to go towards patient-based simulation, we tailor our method to be adapted to a valve extracted from medical image data. It includes investigating segmentation process, smoothness of geometry, case setup and the shape of the left ventricle. We also study the influence of leaflet tissue anisotropy on the quality of the valve closure by comparing with an isotropic model. As part of the anisotropy analysis, we study the influence of the principal material direction by comparing methods to obtain them without dissection.</p><p>Results show that our method can be scaled to various image-based data. We evaluate the mitral valve closure quality based on measuring bulging area, contact map, and flow rate. The results show also that the anisotropic material model more precisely represents the physiological characteristics of the valve tissue. Furthermore, results indicate that the orientation of the principal material direction plays a role in the effectiveness of the valve seal.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142145954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
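The anisotropy studied here enters constitutive models through a principal (fiber) material direction; as a toy illustration of the underlying kinematics (not the paper's solver), the fiber stretch such models depend on is computed from the deformation gradient as lambda = ||F a||:

```python
import numpy as np

def fiber_stretch(F: np.ndarray, a: np.ndarray) -> float:
    """Stretch of a material fiber with reference direction `a`
    under deformation gradient `F` (3x3): lambda = ||F @ a||."""
    a = a / np.linalg.norm(a)            # normalize the fiber direction
    return float(np.linalg.norm(F @ a))

# Uniaxial 20% stretch along x: an x-aligned fiber feels the full stretch.
F = np.diag([1.2, 1.0, 1.0])
print(fiber_stretch(F, np.array([1.0, 0.0, 0.0])))  # -> 1.2
```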
Deep unfolding network with spatial alignment for multi-modal MRI reconstruction
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-08-31 · DOI: 10.1016/j.media.2024.103331
{"title":"Deep unfolding network with spatial alignment for multi-modal MRI reconstruction","authors":"","doi":"10.1016/j.media.2024.103331","DOIUrl":"10.1016/j.media.2024.103331","url":null,"abstract":"<div><p>Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by the long scanning time. To accelerate the whole acquisition process, MRI reconstruction of one modality from highly under-sampled k-space data with another fully-sampled reference modality is an efficient solution. However, the misalignment between modalities, which is common in clinic practice, can negatively affect reconstruction quality. Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two main common limitations: (1) The spatial alignment task is not adaptively integrated with the reconstruction process, resulting in insufficient complementarity between the two tasks; (2) the entire framework has weak interpretability. In this paper, we construct a novel Deep Unfolding Network with Spatial Alignment, termed DUN-SA, to appropriately embed the spatial alignment task into the reconstruction process. Concretely, we derive a novel joint alignment-reconstruction model with a specially designed aligned cross-modal prior term. By relaxing the model into cross-modal spatial alignment and multi-modal reconstruction tasks, we propose an effective algorithm to solve this model alternatively. Then, we unfold the iterative stages of the proposed algorithm and design corresponding network modules to build DUN-SA with interpretability. Through end-to-end training, we effectively compensate for spatial misalignment using only reconstruction loss, and utilize the progressively aligned reference modality to provide inter-modality prior to improve the reconstruction of the target modality. Comprehensive experiments on four real datasets demonstrate that our method exhibits superior reconstruction performance compared to state-of-the-art methods.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142145952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
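Deep unfolding replaces the fixed iterations of an optimization algorithm with a small number of learned stages; a schematic PyTorch sketch of the pattern (the stage internals stand in for the paper's alignment and reconstruction modules):

```python
import torch
import torch.nn as nn

class UnfoldedReconstructor(nn.Module):
    """Unroll K iterations of an alternating scheme into K learned stages;
    each stage refines the target-modality estimate using the reference."""
    def __init__(self, stage_factory, num_stages: int = 8):
        super().__init__()
        self.stages = nn.ModuleList(stage_factory() for _ in range(num_stages))

    def forward(self, x_init: torch.Tensor, reference: torch.Tensor):
        x = x_init                   # e.g., zero-filled initial estimate
        for stage in self.stages:
            x = stage(x, reference)  # one align/data-consistency/denoise step
        return x
```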
Real-time placental vessel segmentation in fetoscopic laser surgery for Twin-to-Twin Transfusion Syndrome
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-08-30 · DOI: 10.1016/j.media.2024.103330
{"title":"Real-time placental vessel segmentation in fetoscopic laser surgery for Twin-to-Twin Transfusion Syndrome","authors":"","doi":"10.1016/j.media.2024.103330","DOIUrl":"10.1016/j.media.2024.103330","url":null,"abstract":"<div><p>Twin-to-Twin Transfusion Syndrome (TTTS) is a rare condition that affects about 15% of monochorionic pregnancies, in which identical twins share a single placenta. Fetoscopic laser photocoagulation (FLP) is the standard treatment for TTTS, which significantly improves the survival of fetuses. The aim of FLP is to identify abnormal connections between blood vessels and to laser ablate them in order to equalize blood supply to both fetuses. However, performing fetoscopic surgery is challenging due to limited visibility, a narrow field of view, and significant variability among patients and domains. In order to enhance the visualization of placental vessels during surgery, we propose TTTSNet, a network architecture designed for real-time and accurate placental vessel segmentation. Our network architecture incorporates a novel channel attention module and multi-scale feature fusion module to precisely segment tiny placental vessels. To address the challenges posed by FLP-specific fiberscope and amniotic sac-based artifacts, we employed novel data augmentation techniques. These techniques simulate various artifacts, including laser pointer, amniotic sac particles, and structural and optical fiber artifacts. By incorporating these simulated artifacts during training, our network architecture demonstrated robust generalizability. We trained TTTSNet on a publicly available dataset of 2060 video frames from 18 independent fetoscopic procedures and evaluated it on a multi-center external dataset of 24 in-vivo procedures with a total of 2348 video frames. Our method achieved significant performance improvements compared to state-of-the-art methods, with a mean Intersection over Union of 78.26% for all placental vessels and 73.35% for a subset of tiny placental vessels. Moreover, our method achieved 172 and 152 frames per second on an A100 GPU, and Clara AGX, respectively. This potentially opens the door to real-time application during surgical procedures. The code is publicly available at <span><span>https://github.com/SanoScience/TTTSNet</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S136184152400255X/pdfft?md5=56d7b823e23898a9a165cb5e7f9e87bd&pid=1-s2.0-S136184152400255X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142163537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
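One of the artifact simulations mentioned (the laser-pointer glare) can be sketched as a simple augmentation; the blob shape and intensities below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def add_laser_pointer(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Overlay a bright Gaussian blob on an RGB frame in [0, 1] to mimic
    the surgical laser-pointer glare."""
    h, w = frame.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)  # random blob center
    sigma = rng.uniform(5.0, 15.0)                   # random blob size
    yy, xx = np.mgrid[0:h, 0:w]
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    out = frame.copy()
    out[..., 1] = np.clip(out[..., 1] + 0.8 * blob, 0.0, 1.0)  # greenish glare
    return out
```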
Cross-view discrepancy-dependency network for volumetric medical image segmentation
IF 10.7 · Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2024-08-30 · DOI: 10.1016/j.media.2024.103329
{"title":"Cross-view discrepancy-dependency network for volumetric medical image segmentation","authors":"","doi":"10.1016/j.media.2024.103329","DOIUrl":"10.1016/j.media.2024.103329","url":null,"abstract":"<div><p>The limited data poses a crucial challenge for deep learning-based volumetric medical image segmentation, and many methods have tried to represent the volume by its subvolumes (<em>i.e.</em>, multi-view slices) for alleviating this issue. However, such methods generally sacrifice inter-slice spatial continuity. Currently, a promising avenue involves incorporating multi-view information into the network to enhance volume representation learning, but most existing studies tend to overlook the discrepancy and dependency across different views, ultimately limiting the potential of multi-view representations. To this end, we propose a cross-view discrepancy-dependency network (CvDd-Net) to task with volumetric medical image segmentation, which exploits multi-view slice prior to assist volume representation learning and explore view discrepancy and view dependency for performance improvement. Specifically, we develop a discrepancy-aware morphology reinforcement (DaMR) module to effectively learn view-specific representation by mining morphological information (<em>i.e.</em>, boundary and position of object). Besides, we design a dependency-aware information aggregation (DaIA) module to adequately harness the multi-view slice prior, enhancing individual view representations of the volume and integrating them based on cross-view dependency. Extensive experiments on four medical image datasets (<em>i.e.</em>, Thyroid, Cervix, Pancreas, and Glioma) demonstrate the efficacy of the proposed method on both fully-supervised and semi-supervised tasks.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":null,"pages":null},"PeriodicalIF":10.7,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
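The multi-view slice prior boils down to reading the same voxel volume along different axes; a minimal sketch of the extraction step (view-specific learning and fusion happen downstream):

```python
import numpy as np

def multi_view_slices(volume: np.ndarray, index: int):
    """Return axial, coronal, and sagittal slices of a (D, H, W) volume."""
    axial    = volume[index, :, :]    # slice across depth
    coronal  = volume[:, index, :]    # slice across height
    sagittal = volume[:, :, index]    # slice across width
    return axial, coronal, sagittal
```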