{"title":"A Structure-Preserving Denoising Diffusion Model for AV45 PET Quantification Without MRI in Alzheimer's Disease Diagnosis","authors":"Xutao Guo, Chenfei Ye, Mingkai Zhang, Xingyu Hao, Yanwu Yang, Yue Yu, Ting Ma, Ying Han","doi":"10.1002/ima.70074","DOIUrl":"https://doi.org/10.1002/ima.70074","url":null,"abstract":"<div>\u0000 \u0000 <p>Lack of early diagnosis often results in patients with Alzheimer progressing to irreversible mild to moderate cognitive impairment without timely treatment. The deposition of amyloid-<i>β</i> (A<i>β</i>) in the cerebral cortex, a definitive biomarker for Alzheimer's disease, is detectable through AV45 PET scans, facilitating early diagnosis of the condition. Clinically, accurate quantification of AV45 PET scans necessitate T1 images. However, the prevalent use of PET-CT over PET-MRI equipment entails additional MRI scans, leading to increased costs and patient burden. To address this clinical challenge, this paper proposes the structure-preserving denoising diffusion probabilistic model (SP-DDPM), capable of synthesizing the T1 images from AV45 PET scans. In the SP-DDPM, structural details from T1 images are incorporated into the diffusion model to emphasize anatomical accuracy. We also enhance the model's learning for the targeted brain areas using segmentation-based priors. Moreover, an exponential cosine noise strategy is proposed to improve the model's suitability for generating T1 images. In this study, we incorporated a large-scale cohort of 667 subjects from the ADNI and SILCODE databases to train and validate our models. The MR images generated from AV45 PET demonstrated similar signal patterns to real MR images. The average absolute error of the cortical composite region SUVR, estimated using our method, was 0.018 for the ADNI dataset and 0.041 for the SILCODE dataset, outperforming current techniques. The MR images generated by the SP-DDPM serve as an accurate template for amyloid quantification, facilitating precise AV45 PET scan quantification in the absence of real MR images. The application of this method is poised to streamline the diagnostic workflow for Alzheimer's disease, increase clinical work efficiency, and alleviate patient burden.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143861956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Joint Lightweight U-Shaped Network for Efficient Medical Image Segmentation of Melanoma and Breast Cancer","authors":"Ting Ma, Jilong Liao, Feng Hu, Maode Ma, Ke Wang","doi":"10.1002/ima.70087","DOIUrl":"https://doi.org/10.1002/ima.70087","url":null,"abstract":"<div>\u0000 \u0000 <p>With the continuous development of deep learning, U-Net networks, as an encoder-decoder U-shaped network architecture based on skip connections, have become a popular structure for various medical image segmentation applications in recent years. However, traditional medical segmentation networks face severe challenges when dealing with complex scenarios such as dermoscopy images of melanoma and breast ultrasound images. These challenges primarily stem from limitations in semantic understanding and the complexity of lesion morphology, leading to difficulties in accurately identifying and segmenting lesion structures with irregular shapes and blurred boundaries with surrounding tissues. Additionally, the prevalent issues of parameter redundancy and computational inefficiency in network structures further constrain their potential applications in clinical practice. To address these issues, this paper proposes an image segmentation network based on dynamic skip connections and convolutional multilayer perceptrons—the Joint Lightweight U-shaped Network. JLU-Net, founded on the concept of “joint,” incorporates a joint non-uniform downsampling module that combines linear pooling with nonlinear convolutional downsampling to achieve lightweight modeling. Furthermore, to resolve the semantic gap problem, JLU-Net adopts an enhanced kernel convolution module, which strengthens target region features through feature recalibration operations while integrating detailed and global information. It also includes a joint squeeze attention module, which processes wide and narrow, global and local features simultaneously through squeeze axial operations, thereby enhancing global information exchange. Extensive experiments demonstrate that our JLU-Net achieves state-of-the-art performance across various environments while requiring only 0.29M parameters and 0.52 GFLOPs.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Generative Adversarial Network Based on Conditional VAE for EIT Reconstruction of Lung Diseases","authors":"Yueyue Xiao, Jie Yu, Chunxiao Chen, Liang Wang, Songpei Hu, Bokai Chen, Hao Yu","doi":"10.1002/ima.70089","DOIUrl":"https://doi.org/10.1002/ima.70089","url":null,"abstract":"<div>\u0000 \u0000 <p>Electrical impedance tomography (EIT), a non-invasive, real-time, and cost-effective imaging technique, is widely studied in medical diagnostics for lung diseases. However, the severely ill-posed nonlinear inverse problem in EIT leads to reconstructed images being susceptible to noise-induced artifacts. This study aims to advance a deep learning technique to reconstruct high-resolution conductivity distributions using voltages measured by EIT sensors. We proposed a novel reconstruction algorithm called generative adversarial network based on conditional variational autoencoder (CVAE-GAN). We incorporated the true conductivity as a conditional variable into the latent representation of the variational autoencoder (VAE) decoder and encoder to form a conditional variational autoencoder (CVAE). A residual module was introduced into the CVAE decoder and encoder to facilitate the network in learning deeper feature representations, which improves the performance of the model. The adversarial learning strategy leverages the improved CVAE as the generator in a GAN framework, substantially enhancing the accuracy and robustness of the reconstructed images. Experimental results demonstrate that CVAE-GAN outperforms five state-of-the-art deep learning methods. Compared to the best alternative model, it achieves an 8.9% improvement in peak signal-to-noise ratio (PSNR) and a 3.2% improvement in structural similarity index (SSIM), while reducing mean squared error (MSE) by 33.33% and relative error (RE) by 24.57%. These results highlight the significant performance gains in terms of both accuracy and robustness for EIT image reconstruction. The proposed CVAE-GAN framework represents a significant advancement in EIT image reconstruction. By addressing key challenges such as noise-induced artifacts and achieving robust reconstructions, it provides a generalizable approach with transformative potential for real-world applications in medical imaging, particularly in the diagnostics and monitoring of lung diseases.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MGMFormer: Multi-Scale Attentional Medical Image Segmentation Network for Semantic Feature Enhancement","authors":"Yuanbin Wang, Yunbo Shi, Rui Zhao, Yunan Chen, Xingqiao Ren, Binghong Xing","doi":"10.1002/ima.70086","DOIUrl":"https://doi.org/10.1002/ima.70086","url":null,"abstract":"<div>\u0000 \u0000 <p>Multi-scale feature extraction is important for the accurate segmentation of different lesion areas. In order to solve the problem of false cut and missing cut in practical applications due to the difficulty of extracting semantic feature information from existing technologies, we proposed a multi-scale attention network framework based on semantic feature enhancement, MGMFormer. Taking advantage of multi-scale feature extraction and attention mechanism to enhance semantic features, the encoder and decoder are composed of joint learning, multi-scale arbitrary sampling, and global adaptive calibration modules. It makes the encoder more focused on the fine structure, so as to effectively deal with the problem of reduced accuracy caused by modal heterogeneity. At the same time, it solves the problem of lack of feature expression ability when the decoder deals with complex texture information. We evaluated the segmentation performance of MGMFormer on eight different datasets, BraTS, Sypanse, ACDC, ISIC, Kvasir-SEG, CAMUS, CHNCXR, and Glas, and in particular, it outperformed most existing algorithms.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble Architecture of Vision Transformer and CNNs for Breast Cancer Tumor Detection From Mammograms","authors":"Saif Ur Rehman Khan, Sohaib Asif, Omair Bilal","doi":"10.1002/ima.70090","DOIUrl":"https://doi.org/10.1002/ima.70090","url":null,"abstract":"<div>\u0000 \u0000 <p>Addressing the complexities of classifying distinct object classes in computer vision presents several challenges, including effectively capturing features such as color, form, and tissue size for each class, correlating class vulnerabilities, singly capturing features, and predicting class labels accurately. To tackle these issues, we introduce a novel hybrid deep dense learning technique that combines deep transfer learning with a transformer architecture. Our approach utilizes ResNet50, EfficientNetB1, and our proposed ProDense block as the backbone models. By integrating the Vit-L16 transformer, we can focus on relevant features in mammography and extract high-value pair features, offering two alternative methods for feature extraction. This allows our model to adaptively shift the region of interest towards the class type in slides. The transformer architecture, particularly Vit-L16, enhances feature extraction by efficiently capturing long-range dependencies in the data, enabling the model to better understand the context and relationships between features. This aids in more accurate classification, especially when fine-tuning pretrained models, as it helps the model adapt to specific characteristics of the target dataset while retaining valuable information learned from the pretraining phase. Furthermore, we employ a stack ensemble technique to leverage both the deep transfer learning model and the ProDense block extension for training extensive features for breast cancer classification. The fine-tuning process employed by our hybrid model helps refine the dense layers, enhancing classification accuracy. Evaluating our method on the INbreast dataset, we observe a significant improvement in predicting the binary cancer category, outperforming the current state-of-the-art classifier by 98.08% in terms of accuracy.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Federated Transfer Learning for Lung Disease Detection","authors":"Shrey Sumariya, Shreyas Rami, Shubham Revadekar, Chetashri Bhadane","doi":"10.1002/ima.70080","DOIUrl":"https://doi.org/10.1002/ima.70080","url":null,"abstract":"<div>\u0000 \u0000 <p>Detecting lung disease traditionally relied on the expertise of doctors and medical practitioners. However, advancements in Artificial Intelligence have revolutionized this process by utilizing machine learning and deep learning algorithms to analyze X-ray and CT scan data. Despite the potential of these technologies, the use of private patient data for training models poses significant privacy concerns, as hospitals are reluctant to share such sensitive information. To address this issue, this paper presents a decentralized approach using Federated Learning, which secures patient data while overcoming the limitations of centralized data collection and storage. We propose a Federated Transfer Learning system that allows for effective model training without centralizing sensitive data. This approach leverages the decentralized nature of federated learning and the efficiency of transfer learning, enabling us to train models with limited data from each hospital while minimizing computing costs. We evaluated four methodologies—centralized, federated, transfer learning, and federated transfer learning—to determine their effectiveness in classifying lung diseases. Our findings demonstrate that Federated Transfer Learning is the most effective method, as it preserves user privacy by training models directly on client devices and achieves high accuracy. Specifically, the ResNet-50 model yielded the highest performance, with accuracies of 87.95%, 88.04%, 87.55%, and 89.96% for the centralized, transfer, federated, and federated transfer learning approaches, respectively. This study underscores the potential of Federated Transfer Learning to enhance both the accuracy of disease classification and the protection of patient privacy in medical applications.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143840810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diff-CFFBNet: Diffusion-Embedded Cross-Layer Feature Fusion Bridge Network for Brain Tumor Segmentation","authors":"Xiaosheng Wu, Qingyi Hou, Chaosheng Tang, Shuihua Wang, Junding Sun, Yudong Zhang","doi":"10.1002/ima.70088","DOIUrl":"https://doi.org/10.1002/ima.70088","url":null,"abstract":"<div>\u0000 \u0000 <p>This study introduces the Diff-CFFBNet, a novel network for brain tumor segmentation designed to address the challenges of misdetection in broken tumor regions within MRI scans, which is crucial for early diagnosis, treatment planning, and disease monitoring. The proposed method incorporates a cross-layer feature fusion bridge (CFFB) to enhance feature interaction and a cross-layer feature fusion U-Net (CFFU-Net) to reduce the semantic gap in diffusion models. Additionally, a sampling-quantity-based fusion (SQ-Fusion) is utilized to leverage the uncertainty of diffusion models for improved segmentation outcomes. Experimental validation on BraTS 2019, BraTS 2020, TCGA-GBM, TCGA-LGG, and MSD datasets demonstrates that Diff-CFFBNet outperforms existing methods, achieving superior performance in terms of Dice score, HD95, and mIoU metrics. These results indicate the model's robustness and precision, even under challenging conditions with complex tumor structures. Diff-CFFBNet provides a reliable solution for accurate and efficient brain tumor segmentation in medical imaging, with the potential for clinical application in treatment planning and disease monitoring. Future work aims to extend this approach to multiple tumor types and refine diffusion model applications in medical image segmentation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143830995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature Subspace Projection Knowledge Distillation for Medical Image Segmentation","authors":"Xiangchun Yu, Qiaoyi Chen, Miaomiao Liang, Lingjuan Yu, Jian Zheng","doi":"10.1002/ima.70085","DOIUrl":"https://doi.org/10.1002/ima.70085","url":null,"abstract":"<div>\u0000 \u0000 <p>Feature-based knowledge distillation facilitates feature knowledge transfer by aligning intermediate features of students and high-performance teachers such as TranUnet and MISSFormer in medical image segmentation. However, the bias-variance coupling resulting from redundancy or noise within high-dimensional features presents a significant challenge for effective knowledge transfer. To tackle this issue, we propose a feature subspace projection knowledge distillation (FSP-KD) method to decouple bias and variance in the high-dimensional feature space. This method decomposes the feature space into two components: the variance-dependent distribution and the bias-dependent distribution. The bias-dependent distribution is modeled as a weighted post-projection feature distribution using the feature subspace projection (FSP) module. Likewise, the variance-dependent distribution is represented by a weighted pre-projection feature distribution. Additionally, a conditional adversarial mechanism (CADV) module is integrated at the logits layer to prompt the student to identify higher-order discrepancies from the teacher. This approach leverages conditional generative adversarial learning to improve the holistic alignment between student and teacher distributions. Extensive experiments are carried out on three benchmark datasets for medical image segmentation: Synapse, Flare2022, and m2caiSeg. The experimental results show that our proposed FSP-KD method has achieved state-of-the-art performance. Notably, FSP-KD has outperformed the teacher MISSFormer when used in a teacher-student setup with ResNet18. Ablation experiments and visualization results provide additional confirmation of the effectiveness of each module.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143836167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Explainable AI for Blood Image Classification With Dynamic CNN Model Selection Framework","authors":"Datenji Sherpa, Dibakar Raj Pant","doi":"10.1002/ima.70084","DOIUrl":"https://doi.org/10.1002/ima.70084","url":null,"abstract":"<div>\u0000 \u0000 <p>Explainable AI (XAI) frameworks are becoming essential in many areas, including the medical field, as they help us to understand AI decisions, increasing clinical trust and improving patient care. This research presents a robust and comprehensive Explainable AI framework. To classify images from the BloodMNIST and Raabin-WBC datasets, various pre-trained convolutional neural network (CNN) architectures: the VGG, the ResNet, the DenseNet, the EfficientNet, the MobileNet variants, the SqueezeNet, and the Xception are implemented both individually and in combination with SpinalNet. For parameter analysis, four models, VGG16, VGG19, ResNet50, and ResNet101, were combined with SpinalNet. Notably, these SpinalNet hybrid models significantly reduced the model parameters while maintaining or even improving the model accuracy. For example, the VGG 16 + SpinalNet shows a 40.74% parameter reduction and accuracy of 98.92% (BloodMnist) and 98.32% (Raabin-WBC). Similarly, the combinations of VGG19, ResNet50, and ResNet101 with SpinalNet resulted in weight parameter reductions by 36.36%, 65.33%, and 52.13%, respectively, with improved accuracy for both datasets. These hybrid SpinalNet models are highly efficient and well-suited for resource-limited environments. The authors have developed a dynamic model selection framework. This framework optimally selects the best models based on prediction scores, prioritizing lightweight models in cases of ties. This method guarantees that for every input, the most effective model is used, which results in higher accuracy as well as better outcomes. Explainable AI (XAI) techniques: Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive ExPlanations (SHAP), and Gradient-weighted Class Activation Mapping (Grad-CAM) are implemented. These help us to understand the key features that influence the model predictions. By combining these XAI methods with dynamic model selection, this research not only achieves excellent accuracy but also provides useful insights into the elements that influence model predictions.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EDenseNetViT: Leveraging Ensemble Vision Transform Integrated Transfer Learning for Advanced Differentiation and Severity Scoring of Tuberculosis","authors":"Mamta Patankar, Vijayshri Chaurasia, Madhu Shandilya","doi":"10.1002/ima.70082","DOIUrl":"https://doi.org/10.1002/ima.70082","url":null,"abstract":"<div>\u0000 \u0000 <p>Lung infections such as tuberculosis (TB), COVID-19, and pneumonia share similar symptoms, making early differentiation challenging with x-ray imaging. This can delay correct treatment and increase disease transmission. The study focuses on extracting hybrid features using multiple techniques to effectively distinguish between TB and other lung infections, proposing several methods for early detection and differentiation. To better diagnose TB, the paper presented an ensemble DenseNet with a Vision Transformer (ViT) network (EDenseNetViT). The proposed EDenseNetViT is an ensemble model of Densenet201 and a ViT network that will enhance the detection performance of TB with other lung infections such as pneumonia and COVID-19. Additionally, the EDenseNetViT is extended to predict the severity level of TB. This severity score approach is based on combined weighted low-level features and high-level features to show the severity level of TB as mild, moderate, severe, and fatal. The result evaluation was conducted using chest image datasets, that is Montgomery Dataset, Shenzhen Dataset, Chest x-ray Dataset, and COVID-19 Radiography Database. All data are merged and approx. Seven thousand images were selected for experimental design. The study tested seven baseline models for lung infection differentiation. Initially, DenseNet transfer learning models, including DenseNet121, DenseNet169, and DenseNet201, were assessed, with DenseNet201 performing the best. Subsequently, DenseNet201 was combined with Principal component analysis (PCA) and various classifiers, with the combination of PCA and random forest classifier proving the most effective. However, the EDenseNetViT model surpassed all and achieved approximately 99% accuracy in detecting TB and distinguishing it from other lung infections like pneumonia and COVID-19. The proposed EdenseNetViT model was used for classifying TB, Pneumonia, and COVID-19 and achieved an average accuracy of 99%, 98%, and 96% respectively. Compared to other existing models, EDenseNetViT outperformed the best.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}