International Journal of Imaging Systems and Technology: Latest Articles

VMC-UNet: A Vision Mamba-CNN U-Net for Tumor Segmentation in Breast Ultrasound Image
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-14 DOI: 10.1002/ima.23222
Dongyue Wang, Weiyu Zhao, Kaixuan Cui, Yi Zhu
Abstract: Breast cancer remains one of the most significant health threats to women, making precise segmentation of target tumors critical for early clinical intervention and postoperative monitoring. While numerous convolutional neural networks (CNNs) and vision transformers have been developed to segment breast tumors from ultrasound images, both architectures have difficulty modeling the long-range dependencies that are essential for accurate segmentation. Drawing inspiration from the Mamba architecture, we introduce the Vision Mamba-CNN U-Net (VMC-UNet) for breast tumor segmentation. This hybrid framework merges the long-range dependency modeling of Mamba with the detailed local representation power of CNNs. A key feature of the approach is a residual connection method within the U-Net architecture that uses the visual state space (VSS) module to extract long-range dependency features from convolutional feature maps. In addition, to better integrate texture and structural features, a bilinear multi-scale attention module (BMSA) is designed, which significantly enhances the network's ability to capture and exploit intricate feature details across multiple scales. Extensive experiments on three public datasets demonstrate that VMC-UNet surpasses other state-of-the-art methods in breast tumor segmentation, achieving Dice coefficients of 81.52% on BUSI, 88.00% on BUS, and 88.96% on STU. The source code is available at https://github.com/windywindyw/VMC-UNet. (Volume 34, Issue 6)
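The Dice coefficients reported above compare predicted masks against ground truth. For reference, the metric itself is straightforward; a minimal NumPy sketch of the standard definition (not the paper's code — the linked repository holds the actual implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two toy masks agreeing on 2 of 3 positive pixels each
pred = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 0], [0, 1, 1]])
score = dice_coefficient(pred, target)  # 2*2 / (3+3) ≈ 0.667
```

The epsilon term keeps the score defined when both masks are empty, a common convenience in segmentation pipelines.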
Citations: 0
Suppression of the Tissue Component With the Total Least-Squares Algorithm to Improve Second Harmonic Imaging of Ultrasound Contrast Agents
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-14 DOI: 10.1002/ima.23218
Jingying Zhu, Yufeng Zhang, Bingbing He, Zhiyao Li, Li Xiong, Xun Lang
Abstract: The second harmonic (SH) of ultrasound contrast agents (UCAs) is widely used in contrast-enhanced ultrasound imaging; however, it is affected by the nonlinearity of the surrounding tissue. Suppression of the tissue component based on the total least-squares (STLS) algorithm is proposed to enhance SH imaging of UCAs. Image blocks of pulse-inversion-based SH images acquired before and after UCA injection serve as the reference and input of the total least-squares model, respectively. The optimal model coefficients are obtained by minimizing the Frobenius norm of the perturbations in the input and output signals. After all image blocks are processed, the complete SH image of the UCAs is obtained by subtracting the model's optimal output (i.e., the estimated tissue SH image) from the SH image acquired after UCA injection. Simulation and in vivo experiments confirm that the STLS approach yields clearer capillaries. In the in vivo experiments, the STLS-based contrast-to-tissue ratios and contrasts increase by 26.90% and 56.27% compared with the bubble-echo deconvolution method, and by 26.99% and 56.43% compared with pulse-inversion bubble-wavelet imaging. The STLS approach enhances SH imaging of UCAs by suppressing more of the tissue SH component, and has the potential to provide more accurate diagnostic information for clinical applications. (Volume 34, Issue 6)
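The numerical core of STLS is the classical total least-squares solve, which, unlike ordinary least squares, minimizes perturbations in both the input and the reference — exactly the Frobenius-norm criterion stated above. A minimal SVD-based sketch of that generic solve (an illustration of the algorithm, not the authors' implementation; block extraction and image handling are omitted):

```python
import numpy as np

def tls_solve(A, b):
    """Total least-squares solution of A @ x ≈ b.
    Uses the SVD of the augmented matrix [A | b]; the solution comes from
    the right singular vectors of its smallest singular values."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    V = Vt.T
    # Partition V: top-right block over (negated) bottom-right block gives x
    return -V[:n, n:] @ np.linalg.inv(V[n:, n:])

# Noise-free toy system: TLS recovers the exact coefficients
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
x_true = np.array([[1.0], [-2.0], [0.5]])
x_est = tls_solve(A, A @ x_true)
```

With noisy data the same formula yields the fit that perturbs A and b jointly by the least Frobenius norm.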
Citations: 0
Segmentation and Classification of Breast Masses From the Whole Mammography Images Using Transfer Learning and BI-RADS Characteristics
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-13 DOI: 10.1002/ima.23216
Hayette Oudjer, Assia Cherfa, Yazid Cherfa, Noureddine Belkhamsa
Abstract: Breast cancer is the most prevalent cancer among women worldwide, highlighting the critical need for accurate detection and early diagnosis. In this context, segmentation of breast masses (the most common symptom of breast cancer) plays a crucial role in analyzing mammographic images; although such analysis is well established in image processing, certain combinations of mathematical tools have never been exploited. We propose a computer-aided diagnosis (CAD) system built from new algorithm combinations for the segmentation and classification of breast masses based on the Breast Imaging-Reporting and Data System (BI-RADS) lexicon. The image is first divided into superpixels using the simple linear iterative clustering (SLIC) algorithm. Fine-tuned ResNet50, EfficientNetB2, MobileNetV2, and InceptionV3 models extract features from the superpixels. Each superpixel is classified as background or breast mass by feeding the extracted features into a support vector machine (SVM) classifier, producing an accurate primary segmentation that is then refined by the GrabCut algorithm with automated initialization. Finally, contour, texture, and shape parameters are extracted from the segmented regions to classify masses into BI-RADS categories 2, 3, 4, and 5 using a gradient boosting (GB) classifier, while also examining the impact of the surrounding tissue. Evaluated on the INbreast database, the method achieves a Dice score of 87.65% and a sensitivity of 87.96% for segmentation. For classification, it obtains a sensitivity of 88.66%, a precision of 90.51%, and an area under the curve (AUC) of 97.8%. The CAD system demonstrates high accuracy in both segmentation and classification of breast masses, providing a reliable aid for breast cancer diagnosis using the BI-RADS lexicon. The study also shows that the surrounding tissue affects classification, underscoring the importance of choosing an appropriate ROI size. (Volume 34, Issue 6)
Citations: 0
Dual-Low Technology in Coronary and Abdominal CT Angiography: A Comparative Study of Deep Learning Image Reconstruction and Adaptive Statistic Iterative Reconstruction-Veo
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-13 DOI: 10.1002/ima.23217
Zhanao Meng, Qing Xiang, Jian Cao, Yahao Guo, Sisi Deng, Tao Luo, Yue Zhang, Ke Zhang, Xuan Zhu, Kun Ma, Xiaohong Wang, Jie Qin
Abstract: To investigate the advantages of dual-low technology (low radiation dose and low contrast agent dose) with deep learning image reconstruction (DLIR) compared to the adaptive statistical iterative reconstruction-Veo (ASIR-V) standard protocol when combining coronary computed tomography angiography (CCTA) and abdominal computed tomography angiography (ACTA), sixty patients who underwent CCTA and ACTA were recruited. Thirty patients with low body mass index (BMI < 24 kg/m², Group A, standard protocol) were reconstructed using 60% ASIR-V, and thirty patients with high BMI (> 24 kg/m², Group B, dual-low protocol) were reconstructed using DLIR at high strength (DLIR-H). The effective dose and contrast agent dose were recorded, and CT values, standard deviations, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were measured. Subjective criteria were scored by two radiologists using a blinded 5-point Likert scale. General data, objective measurements, and subjective scores were compared between the groups with the corresponding statistical tests, and inter-reader consistency was analyzed using Kappa tests. Group B showed a 44.6% reduction in mean effective dose (p < 0.01) and a 20.3% decrease in contrast agent dose compared to Group A (p < 0.01). DLIR-H produced the smallest standard deviations and the highest SNR and CNR values (p < 0.01), and received the highest subjective score (p < 0.01), with good inter-reader consistency in the subjective scoring of CCTA and ACTA image quality (κ = 0.751 to 0.919, p < 0.01). In combined CCTA and ACTA protocols, DLIR can significantly reduce the effective dose and contrast agent dose compared to ASIR-V while maintaining good image quality. (Volume 34, Issue 6)
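The SNR and CNR figures above are ROI statistics. A small sketch under common definitions (mean over standard deviation for SNR; absolute mean difference over background noise for CNR — conventions vary between studies, so treat these as one plausible reading rather than the authors' exact formulas):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean ROI signal over its standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    return abs(roi.mean() - bg.mean()) / bg.std()

roi = np.array([3.0, 5.0])   # mean 4, std 1
bg = np.array([1.0, 3.0])    # mean 2, std 1
print(snr(roi), cnr(roi, bg))  # 4.0 2.0
```

In CT evaluation the ROI would typically be a vessel lumen and the background adjacent muscle or fat, with the standard deviation serving as the noise estimate.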
Citations: 0
Multi-Deep Learning Approach With Transfer Learning for 7-Stages Diabetic Retinopathy Classification
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-11 DOI: 10.1002/ima.23213
Abdul Qadir Khan, Guangmin Sun, Majdi Khalid, Majed Farrash, Anas Bilal
Abstract: This study leverages a novel diabetic retinopathy (DR) dataset comprising seven severity stages, an approach not previously examined. Capitalizing on this unique resource, the findings set a new benchmark for DR classification and highlight the potential of incorporating advanced data into AI models. A VGG16 transfer learning model was developed and gauged against established architectures including VGG19, AlexNet, and SqueezeNet, achieving accuracy rates of 96.95%, 96.75%, 96.09%, and 92.96%, respectively. Comprehensive severity rating was emphasized, yielding F1-scores of 1.00 for "mild NPDR" and 0.97 for "no DR signs." The VGG16 transfer learning model consistently outperformed the other models across all severity levels. The training process, with a carefully selected learning rate of 1e-05, allowed continuous refinement of training and validation accuracy. Beyond the metrics, the investigation underscores the vital clinical importance of precise DR classification for preventing vision loss, and establishes deep learning as a powerful tool for developing effective DR algorithms with the potential to improve patient outcomes and advance ophthalmology standards. (Volume 34, Issue 6)
Citations: 0
YOLOv7-XAI: Multi-Class Skin Lesion Diagnosis Using Explainable AI With Fair Decision Making
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-11 DOI: 10.1002/ima.23214
Nirmala Veeramani, Premaladha Jayaraman
Abstract: Skin cancer, a prevalent and potentially life-threatening condition, demands accurate and timely detection for effective intervention. It is an uncontrolled growth of abnormal cells in the body. While determining whether a skin lesion is benign (non-cancerous) or malignant (cancerous) is under active study, the greater challenge for a clinician is determining the type of skin cancer, which is crucial for choosing the right course of treatment. This study introduces an approach to multi-class skin cancer detection that harnesses Explainable Artificial Intelligence (XAI) in conjunction with a customised You Only Look Once (YOLOv7) architecture. The YOLOv7 framework, serving as a robust backbone enriched with features tailored for precise multi-class classification, is enhanced to accurately discern eight skin cancer classes, including melanoma, basal cell carcinoma, and squamous cell carcinoma. Concurrently, integrating the XAI methods Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) ensures transparent decision-making, enabling healthcare professionals to interpret and trust the model's predictions. This synergy between YOLOv7 and XAI yields interpretable, high-performance skin cancer diagnostics: the model achieves 96.8% accuracy, 94.2% precision, 95.6% recall, and a 95.8% F1 score. (Volume 34, Issue 6)
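The four reported metrics derive from the confusion counts in the usual way; a small generic sketch for reference (per-class counts would be macro-averaged for the multi-class setting, which is an assumption here, not a detail stated in the abstract):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# 8 true positives, 2 false positives, 2 false negatives, 8 true negatives
acc, prec, rec, f1 = classification_metrics(8, 2, 2, 8)  # all 0.8
```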
Citations: 0
A Risk Stratification Study of Ultrasound Images of Thyroid Nodules Based on Improved DETR
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-11 DOI: 10.1002/ima.23219
Zhang Le, Yue Liang, Xiaokang Hu, Taorong Qiu, Pan Xu
Abstract: The Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) standard is based on the current Chinese medical context. However, there is at present no C-TIRADS-based automatic computer-aided diagnosis system for thyroid nodule ultrasound images, and existing algorithms for detecting and recognizing thyroid nodules mostly perform binary benign/malignant classification. We used the DETR (detection transformer) model as a baseline and enhanced it to address its unsatisfactory classification accuracy and difficulty in detecting small thyroid nodules. First, to acquire multi-scale features of thyroid nodule ultrasound images, we choose TResNet-L as the feature extraction network and introduce multi-scale feature information and group convolution, enhancing the model's multi-label classification accuracy. Second, a parallel decoder architecture for multi-label classification is designed to strengthen the learning of correlations between pathological feature class labels, further improving multi-label classification accuracy. Third, the loss function is improved: a linear combination of Smooth L1 loss and CIoU loss serves as the bounding-box loss, and asymmetric loss serves as the multi-label classification loss, improving detection accuracy for small thyroid nodules. Experiments show that the improved DETR model achieves AP of 92.4% and 81.6% at IoU thresholds of 0.5 and 0.75, respectively. (Volume 34, Issue 6)
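The AP figures at IoU 0.5 and 0.75 rest on the box intersection-over-union computation that also underlies the CIoU loss term (CIoU adds center-distance and aspect-ratio penalties on top of plain IoU). A minimal sketch of plain IoU:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two unit-offset 2x2 squares: intersection 1, union 4 + 4 - 1 = 7
iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1/7 ≈ 0.143
```

A detection counts as a true positive at a given threshold only when its IoU with a ground-truth box meets it, which is why AP@0.75 is the stricter of the two numbers.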
Citations: 0
Deep Learning and Handcrafted Features for Thyroid Nodule Classification
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-08 DOI: 10.1002/ima.23215
Ayoub Abderrazak Maarouf, Hacini Meriem, Fella Hachouf
Abstract: We present a refined image-based computer-aided diagnosis (CAD) system for thyroid cancer detection in ultrasound imagery. The system integrates a convolutional neural network (CNN) architecture designed for the particular characteristics of thyroid image datasets, together with a statistical model that uses a two-dimensional random-coefficient autoregressive (2D-RCA) method to precisely analyze the textural characteristics of thyroid images, capturing essential texture-related information. Classification relies on a composite feature vector that combines the deep features from the CNN with the handcrafted features from the 2D-RCA model, processed by a support vector machine (SVM). Evaluation proceeded in three phases: assessment of the 2D-RCA features alone, of the CNN-derived features alone, and of their combined effect on classification performance. Comparisons with well-known networks such as VGG16, VGG19, ResNet50, and AlexNet highlight the superior performance of this approach, which achieves a classification accuracy of 97.2%, a sensitivity of 84.42%, and a specificity of 95.23%. These results represent a notable advance in thyroid nodule classification and set a new standard for the efficiency of CAD systems. (Volume 34, Issue 6)
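To give a concrete flavor of autoregressive texture modeling, the sketch below fits a causal two-neighbor 2-D AR model by ordinary least squares. This is a deliberate simplification: the paper's 2D-RCA model has random (spatially varying) coefficients and its own estimator, so the function here is only an illustrative stand-in.

```python
import numpy as np

def fit_ar2d(img):
    """Least-squares fit of x[i, j] ≈ a * x[i, j-1] + b * x[i-1, j].
    Returns the estimated (a, b) — a crude stand-in for richer AR texture models."""
    left = img[1:, :-1].ravel()    # west neighbor of each interior pixel
    up = img[:-1, 1:].ravel()      # north neighbor of each interior pixel
    target = img[1:, 1:].ravel()
    A = np.column_stack([left, up])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs

# Synthesize an image that follows the model exactly, then recover (0.5, 0.3)
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[0, :] = rng.normal(size=32)
img[:, 0] = rng.normal(size=32)
for i in range(1, 32):
    for j in range(1, 32):
        img[i, j] = 0.5 * img[i, j - 1] + 0.3 * img[i - 1, j]
a, b = fit_ar2d(img)
```

Fitted coefficients like these, computed per region, are exactly the kind of compact handcrafted texture descriptor that can be concatenated with CNN features before an SVM.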
Citations: 0
SDR2Tr-GAN: A Novel Medical Image Fusion Pipeline Based on GAN With SDR2 Module and Transformer Optimization Strategy
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-08 DOI: 10.1002/ima.23208
Ying Cheng, Xianjin Fang, Zhiri Tang, Zekuan Yu, Linlin Sun, Li Zhu
Abstract: In clinical practice, radiologists diagnose brain tumors with the help of multiple magnetic resonance imaging (MRI) sequences, judging tumor type and grade; a brain tumor computer-aided diagnosis system is hard to realize from a single MRI sequence, yet existing multi-sequence fusion methods have limited ability to enhance tumor details. To improve the fusion detail of multi-modality MRI images, this paper proposes SDR2Tr-GAN, a conditional generative adversarial fusion network based on three discriminators and a Staggered Dense Residual2 (SDR2) module. In the SDR2Tr-GAN pipeline, the generator consists of an encoder, a decoder, and a fusion strategy that enhances feature representation. The SDR2 module, built on Res2Net, is integrated into the encoder to extract multi-scale features. A Multi-Head Spatial/Channel Attention Transformer serves as the fusion strategy to strengthen long-range dependencies of global context information. A mask-based constraint is designed as a novel fusion optimization mechanism focused on enhancing salient feature details: it uses the segmentation mask produced by a pre-trained U-Net together with the ground truth to optimize training, while MI and SSIM losses jointly improve the visual perception of the images. Extensive experiments on the public BraTS2021 dataset show that the proposed method simultaneously enhances global image quality and local texture details in multi-modality MRI images, and that SDR2Tr-GAN outperforms other state-of-the-art fusion methods in both subjective and objective evaluation. (Volume 34, Issue 6)
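Of the two fusion losses named above, MI (mutual information) is commonly computed from a joint intensity histogram. A minimal sketch under that standard construction (the paper's exact, differentiable loss formulation may differ):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# An image shares maximal information with itself, little with shuffled pixels
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
shuffled = rng.permutation(img.ravel()).reshape(64, 64)
```

Maximizing MI between the fused image and each source modality encourages the fusion to retain the intensity structure of both inputs.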
Citations: 0
Hybrid Wavelet-Deep Learning Framework for Fluorescence Microscopy Images Enhancement
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-11-07 DOI: 10.1002/ima.23212
Francesco Branciforti, Maura Maggiore, Kristen M. Meiburger, Tania Pannellini, Massimo Salvi
Abstract: Fluorescence microscopy is a powerful tool for visualizing cellular structures, but noise, low contrast, and autofluorescence can hinder accurate image analysis. To address these limitations, we propose a hybrid image enhancement method that combines wavelet-based denoising, linear contrast enhancement, and convolutional neural network (CNN)-based autofluorescence correction. The automated method employs the Haar wavelet transform for noise reduction and a series of adaptive linear transformations for pixel value adjustment, enhancing image quality while preserving crucial details. In addition, a semantic segmentation approach using CNNs identifies and corrects autofluorescence in cellular aggregates, enabling targeted mitigation of unwanted background signals. Validated with quantitative metrics such as signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR), the method achieves an average SNR improvement of 8.5 dB and a PSNR increase of 4.2 dB over the original images, outperforming state-of-the-art methods such as BM3D and CLAHE. Extensive testing on diverse datasets, including publicly available human-derived cardiosphere images and fluorescence microscopy images of bovine endothelial cells stained for mitochondria and actin filaments, shows the flexibility and robustness of the approach across acquisition conditions and artifacts. The method significantly improves fluorescence microscopy image quality, facilitating more accurate and reliable analysis of cellular structures and processes, with potential applications in biomedical research and clinical diagnostics. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23212 (Volume 34, Issue 6)
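The wavelet-denoising stage can be illustrated with a single-level 2-D Haar decomposition and soft thresholding of the detail bands. This is a generic sketch of the technique (using a Haar averaging/differencing pair that reconstructs exactly), not the authors' pipeline, which additionally applies contrast enhancement and CNN-based autofluorescence correction:

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar transform (both dimensions must be even)."""
    a = (x[0::2] + x[1::2]) / 2          # row averages
    d = (x[0::2] - x[1::2]) / 2          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[0::2], x[1::2] = a + d, a - d
    return x

def haar_denoise(img, thresh):
    """Soft-threshold the detail bands, keep the approximation band."""
    ll, lh, hl, hh = haar2d(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return ihaar2d(ll, soft(lh), soft(hl), soft(hh))

img = np.arange(16.0).reshape(4, 4)
restored = haar_denoise(img, 0.0)  # thresh=0 gives an exact round trip
```

In practice the threshold is set from a noise estimate, for example the median absolute deviation of the `hh` band, rather than hand-picked.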
Citations: 0