MLFE-UNet: Multi-Level Feature Extraction Transformer-Based UNet for Gastrointestinal Disease Segmentation
Anass Garbaz, Yassine Oukdach, Said Charfi, Mohamed El Ansari, Lahcen Koutti, Mouna Salihoun, Samira Lafraxo
International Journal of Imaging Systems and Technology, 35(1). DOI: 10.1002/ima.70030. Published 2025-01-27.

Abstract: Accurately segmenting gastrointestinal (GI) disease regions from Wireless Capsule Endoscopy images is essential for clinical diagnosis and survival prediction. However, challenges arise due to similar intensity distributions, variable lesion shapes, and fuzzy boundaries. In this paper, we propose MLFE-UNet, an advanced fusion of CNN-based transformers with UNet. Both the encoder and decoder utilize a multi-level feature extraction (MLFA) CNN-Transformer-based module. This module extracts features from the input data, considering both global dependencies and local information. Furthermore, we introduce a multi-level spatial attention (MLSA) block that functions as the bottleneck. It enhances the network's ability to handle complex structures and overlapping regions in feature maps. The MLSA block captures multiscale dependencies of tokens from the channel perspective and transmits them to the decoding path. A contextual feature stabilization block follows each transition to emulate lesion zones and facilitate segmentation guidelines at each phase. To address high-level semantic information, we incorporate a computationally efficient spatial channel attention block. This is followed by a stabilization block in the skip connections, ensuring global interaction and highlighting important semantic features from the encoder to the decoder. To evaluate the performance of our proposed MLFE-UNet, we selected common GI diseases, specifically bleeding and polyps. The Dice coefficient scores obtained by MLFE-UNet on the MICCAI 2017 (Red lesion) and CVC-ClinicalDB datasets are 92.34% and 88.37%, respectively.
{"title":"Optimizing Breast Cancer Detection: Integrating Few-Shot and Transfer Learning for Enhanced Accuracy and Efficiency","authors":"Nadeem Sarwar, Shaha Al-Otaibi, Asma Irshad","doi":"10.1002/ima.70033","DOIUrl":"https://doi.org/10.1002/ima.70033","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer (BC) detection based on mammogram images is still an open issue, particularly when there is little annotated data. Combining few-shot learning (FSL) with transfer learning (TL) has been identified as a potential solution to overcome this problem due to its ability to learn from a few examples while producing robust features for classification. The objective of this study is to use and analyze FSL integrated with TL to enhance the classification accuracy and generalization ability in a limited dataset. The proposed approach integrates the FSL models (prototypical networks, matching networks, and relation networks) with the TL procedures. The models are trained using a small set of samples with annotation and can be assessed using various performance metrics. The models were trained and compared to the TL and the state-of-the-art methods regarding accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). The models proved to be effective when integrated, and the relation networks model was the most accurate, with an accuracy of 95.6% and an AUC of 0.970. The models provided higher accuracy, recall, and F1-scores, especially in the case of discerning between normal, benign, and malignant cases, as compared to TL traditional techniques and the various recent state-of-the-art techniques. This integrated approach gives high efficiency, accuracy, and scalability to the whole BC detection process, and it has potential for further medical imaging domains. Future research will explore hyperparameter tuning and incorporating electronic health record systems to enhance diagnostic precision and individualized care.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmarking YOLO Variants for Enhanced Blood Cell Detection","authors":"Pooja Mehta, Rohan Vaghela, Nensi Pansuriya, Jigar Sarda, Nirav Bhatt, Akash Kumar Bhoi, Parvathaneni Naga Srinivasu","doi":"10.1002/ima.70037","DOIUrl":"https://doi.org/10.1002/ima.70037","url":null,"abstract":"<div>\u0000 \u0000 <p>Blood cell detection provides a significant amount of information about a person's health, aiding in the diagnosis and monitoring of various medical conditions. Red blood cells (RBCs) carry oxygen, white blood cells (WBCs) play a role in immune defence, and platelets contribute to blood clotting. Changes in the composition of these cells can signal various physiological and pathological conditions, which makes accurate blood cell detection essential for effective medical diagnosis. In this study, we apply convolutional neural networks (CNNs), a subset of deep learning (DL) techniques, to automate blood cell detection. Specifically, we compare the performance of multiple variants of the You Only Look Once (YOLO) model, including YOLO v5, YOLO v7, YOLO v8 (in medium, small and nano configurations), YOLO v9c and YOLO v10 (in medium, small and nano configurations), for the task of detecting RBCs, WBCs and platelets. The results show that YOLO v5 achieved the highest mean average precision (mAP50) of 93.5%, with YOLO v10 variants also performing competitively. YOLO v10m achieved the highest precision for RBC detection at 85.1%, while YOLO v10n achieved 98.6% precision for WBC detection. YOLO v5 demonstrated the highest precision for platelets at 88.8%. Overall, YOLO models provided high accuracy and precision in detecting blood cells, making them suitable for medical image analysis. In conclusion, the study demonstrates that the YOLO model family, especially YOLO v5, holds significant potential for advancing automated blood cell detection. These findings can help improve diagnostic accuracy and contribute to more efficient clinical workflows.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An AG-RetinaNet for Embryonic Blastomeres Detection and Counting","authors":"Wenju Zhou, Ouafa Talha, Xiaofei Han, Qiang Liu, Yuan Xu, Zhenbo Zhang, Naitong Yuan","doi":"10.1002/ima.70034","DOIUrl":"https://doi.org/10.1002/ima.70034","url":null,"abstract":"<div>\u0000 \u0000 <p>Embryo morphology assessment is crucial for determining embryo viability in assisted reproductive technology. Traditional manual evaluation, while currently the primary method, is time-consuming, resource-intensive, and prone to inconsistencies due to the complex analysis of morphological parameters such as cell shape, size, and blastomere count. For rapid and accurate recognition and quantification of blastomeres in embryo images, Attention Gated-RetinaNet (AG-RetinaNet) model is proposed in this article. AG-RetinaNet combines an attention block between the backbone network and the Feature Pyramid Network to overcome the difficulties posed by overlapping blastomeres and morphological changes in embryo shape. The proposed model, trained on a dataset of human embryo images at different cell stages, uses ResNet50 and ResNet101 as backbones for performance comparison. Experimental results demonstrate its competitive performance against state-of-the-art detection models, achieving 95.8% average precision while balancing detection accuracy and computational efficiency. Specifically, the AG-RetinaNet achieves 83.08% precision, 91.13% sensitivity, 90.91% specificity, and an F1-score of 86.92% under optimized Intersection Over Union and confidence thresholds, effectively detecting and counting blastomeres across various grades. The comparison between these results and the manual annotations of embryologists confirms that our model has the potential to improve and streamline the workflow of embryologists in clinical practice.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chronic Wound Assessment System Using an Improved UPerNet Model
Zaifang Zhang, Shenjun Sheng, Shihui Zhu, Jian Jin
International Journal of Imaging Systems and Technology, 35(1). DOI: 10.1002/ima.70032. Published 2025-01-22.

Abstract: Wound assessment plays a crucial role in the healing process. Traditional methods for wound assessment, relying on manual judgment and recording, often yield inaccurate outcomes and require specialized medical equipment. This results in more frequent hospital visits and poses a series of challenges, such as delayed examinations, heightened infection risks, and increased costs. Thus, a real-time, portable, and convenient wound assessment system is essential for healing chronic wounds and reducing complications. This paper proposes an improved UPerNet network for wound tissue segmentation, which consists of three modules: a Feature-aligned Pyramid Network (FaPN), a kernel update head, and a Convolutional Block Attention Module (CBAM). The FaPN is employed to address feature misalignment. The kernel update head is based on K-Net and dynamically updates convolutional kernel weights. The CBAM module is adopted to attend to crucial features. Ablation and comparative studies show that this method achieves superior performance in wound tissue segmentation compared to other common segmentation models. Based on this method, we also develop a mobile application. Using this system, patients can easily upload wound images for assessment, facilitating convenient tracking of wound healing progress at home and thereby reducing medical expenses and the number of hospital visits.
CBAM Attention Gate-Based Lightweight Deep Neural Network Model for Improved Retinal Vessel Segmentation
Kashif Fareed, Anas Khan, Musaed Alhussein, Khursheed Aurangzeb, Aamir Shahzad, Mazhar Islam
International Journal of Imaging Systems and Technology, 35(1). DOI: 10.1002/ima.70031. Published 2025-01-22.

Abstract: Over the years, researchers have been using deep learning in different fields of science, including disease diagnosis. Retinal vessel segmentation has seen significant advancements through deep learning techniques, resulting in high accuracy. Despite this progress, challenges remain in automating the segmentation process. One of the most pressing and often overlooked issues is computational complexity, which is critical for developing portable diagnostic systems. To address this, this study introduces a CBAM-Attention Gate-based U-Net model aimed at reducing computational complexity without sacrificing performance on evaluation metrics. The performance of the model was analyzed using four publicly available fundus image datasets, achieving sensitivity, specificity, accuracy, AUC, and MCC of (0.7909, 0.9975, 0.9723, 0.9867, 0.8011) on CHASE_DB1, (0.8217, 0.9816, 0.9674, 0.9849, 0.9778) on DRIVE, (0.8346, 0.9790, 0.9680, 0.9855, 0.7810) on STARE, and (0.8082, 0.9769, 0.9638, 0.9723, 0.7575) on HRF, respectively. Moreover, the model comprises only 0.8 million parameters, making it one of the lightest available models for retinal vessel segmentation. This lightweight yet efficient model is well suited for low-end hardware devices. Its significantly lower computational complexity, along with improved evaluation metrics, advocates for its deployment in portable embedded devices for population-level screening programs.
{"title":"NIR-II Fluorescence Image Translation via Latent Space Disentanglement","authors":"Xiaoming Yu, Jie Tian, Zhenhua Hu","doi":"10.1002/ima.70028","DOIUrl":"https://doi.org/10.1002/ima.70028","url":null,"abstract":"<div>\u0000 \u0000 <p>The second near-infrared window (NIR-II) fluorescence imaging is an excellent optical in vivo imaging method. Compared with NIR-IIa window (1000–1300 nm), NIR-IIb window (1500–1700 nm) imaging can significantly improve the imaging effect. However, due to the limitation that there are no molecular probes approved for NIR-IIb imaging in humans, we expect to achieve the translation of NIR-IIa images to NIR-IIb images through artificial intelligence. NIR-II fluorescence imaging is divided into macroscopic imaging of animal bodies and microscopic imaging of tissue and nerves. The two imaging scenarios are different. To realize the translation of two scene images at the same time, this paper designs a generative adversarial network model. The core idea is to disentangle the information in the encoded latent space into the information shared by the macroscopic and microscopic images and information specific to both to extract the high-quality feature maps for decoding. In addition, we improve the contrastive loss and use the attention-aware sampling strategy to select patches, which further maintains the source image content structure. The experiment results demonstrate the superiority and effectiveness of the proposed method.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143117887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DLKUNet: A Lightweight and Efficient Network With Depthwise Large Kernel for Medical Image Segmentation
Junan Zhu, Zhizhe Tang, Ping Ma, Zheng Liang, Chuanjian Wang
International Journal of Imaging Systems and Technology, 35(1). DOI: 10.1002/ima.70035. Published 2025-01-19.

Abstract: Accurate multi-organ segmentation is crucial in computer-aided diagnosis, surgical navigation, and radiotherapy. Deep learning-based methods for automated multi-organ segmentation have made significant progress recently. However, these improvements often increase model complexity, leading to higher computational costs. To address this problem, we propose a lightweight and efficient network with depthwise large kernels, called DLKUNet. First, we utilize a hierarchical architecture with large-kernel convolution to effectively capture multi-scale features. Second, we construct three segmentation models with different numbers of layers to meet different speed and accuracy requirements. Additionally, we employ a novel training strategy that works seamlessly with this module to enhance performance. Finally, we conducted extensive experiments on the multi-organ abdominal segmentation (Synapse) and Automated Cardiac Diagnosis Challenge (ACDC) datasets. DLKUNet-L significantly improves the 95% Hausdorff Distance to 13.89 mm with 65% of the parameters of Swin-Unet on Synapse. Furthermore, DLKUNet-S and DLKUNet-M use only 4.5% and 16.52% of Swin-Unet's parameters, achieving Dice Similarity Coefficients of 91.71% and 91.74% on ACDC. These results underscore the proposed model's superior performance in terms of accuracy, efficiency, and practical applicability.
{"title":"Intuitionistic Fuzzy Position Embedding Transformer for Motion Artefact Correction in Chemical Exchange Saturation Transfer MRI Series","authors":"Bowei Chen, Umara Khalid, Enhui Chai, Li Chen","doi":"10.1002/ima.70024","DOIUrl":"https://doi.org/10.1002/ima.70024","url":null,"abstract":"<div>\u0000 \u0000 <p>Chemical Exchange Saturation Transfer (CEST) Magnetic Resonance Imaging (MRI) is a cutting-edge molecular imaging technique that enables non-invasive in vivo visualization of biomolecules, such as proteins and glycans, with exchangeable protons. However, CEST MRI is prone to motion artefacts, which can significantly reduce its accuracy and reliability. To address this issue, this study proposes an image registration method specifically designed to correct motion artefacts in CEST MRI data, with the objective of improving the precision of CEST analysis. Traditional registration techniques often suffer from premature convergence to local optima, especially in the presence of rigid motion within the ventricular region. The proposed approach leverages an Intuitionistic Fuzzy Set (IFS) position encoding integrated with a multi-head attention mechanism to achieve accurate global registration. A custom loss function is designed based on the properties of IFS position encoding to further enhance the model's motion correction capabilities. Experimental results demonstrate that this method provides a more robust and accurate solution for motion artefact correction in CEST MRI, offering new potential for improving the precision of CEST imaging in clinical and research settings.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143116277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Measurement of Lower Limb Angles From Pre- and Post-Operative X-Ray Images Using a Variant SegNet With Compound Loss Function","authors":"Iyyakutty Dheivya, Gurunathan Saravana Kumar","doi":"10.1002/ima.70027","DOIUrl":"https://doi.org/10.1002/ima.70027","url":null,"abstract":"<div>\u0000 \u0000 <p>This work envisages developing an automated computer workflow to locate the landmarks like knee center, tibial plateau, tibial and femoral axis to measure Femur-Tibia Angle (FTA), Medial Proximal Tibial Angle (MPTA), and Hip Knee Ankle Angle (HKAA) from the pre- and post-operative x-rays. In this work, we propose a variant of semantic segmentation model (vSegNet) for the segmentation of the knee and tibia gap for extracting important features used in the automated workflow. Since femur tibia gap is a small region as compared to the complete x-ray image, it poses severe class imbalance issue. Using a combination of the Dice coefficient and Hausdorff distance as a compound loss function, the proposed neural network model shows better segmentation performance as compared to state-of-the-art segmentation models like U-Net, SegNet (with and without VGG16 pre-trained weights), VGG16, MobileNetV2, Pretrained DeepLabv3+ (Resnet18 weights), and Pretrained FCN (VGG16 weights) and different loss functions. We subsequently propose computer methods for feature recognition and prediction of landmarks at femur, tibial and knee center, the side of the fibula and, subsequently, the various knee joint angles. An analysis of sensitivity of segmentation accuracy on the accuracy of predicted angles further substantiate the efficacy of the proposed methods. Dice score of U-Net, Pretrained SegNet, SegNet, VGG16, MobileNetV2, Pretrained DeepLabv3+, Pretrained FCN, vSegNet with cross-entropy loss function and vSegNet with compound loss function are observed as <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.083</mn>\u0000 <mo>±</mo>\u0000 <mn>0.04</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.083pm 0.04 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.51</mn>\u0000 <mo>±</mo>\u0000 <mn>0.16</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.51pm 0.16 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.66</mn>\u0000 <mo>±</mo>\u0000 <mn>0.20</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.66pm 0.20 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.61</mn>\u0000 <mo>±</mo>\u0000 <mn>0.15</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.61pm 0.15 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.17</mn>\u0000 <mo>±</mo>\u0000 <mn>0.16</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.17pm 0.16 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143115783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}