{"title":"Benchmarking YOLO Variants for Enhanced Blood Cell Detection","authors":"Pooja Mehta, Rohan Vaghela, Nensi Pansuriya, Jigar Sarda, Nirav Bhatt, Akash Kumar Bhoi, Parvathaneni Naga Srinivasu","doi":"10.1002/ima.70037","DOIUrl":"https://doi.org/10.1002/ima.70037","url":null,"abstract":"<div>\u0000 \u0000 <p>Blood cell detection provides a significant amount of information about a person's health, aiding in the diagnosis and monitoring of various medical conditions. Red blood cells (RBCs) carry oxygen, white blood cells (WBCs) play a role in immune defence, and platelets contribute to blood clotting. Changes in the composition of these cells can signal various physiological and pathological conditions, which makes accurate blood cell detection essential for effective medical diagnosis. In this study, we apply convolutional neural networks (CNNs), a subset of deep learning (DL) techniques, to automate blood cell detection. Specifically, we compare the performance of multiple variants of the You Only Look Once (YOLO) model, including YOLO v5, YOLO v7, YOLO v8 (in medium, small and nano configurations), YOLO v9c and YOLO v10 (in medium, small and nano configurations), for the task of detecting RBCs, WBCs and platelets. The results show that YOLO v5 achieved the highest mean average precision (mAP50) of 93.5%, with YOLO v10 variants also performing competitively. YOLO v10m achieved the highest precision for RBC detection at 85.1%, while YOLO v10n achieved 98.6% precision for WBC detection. YOLO v5 demonstrated the highest precision for platelets at 88.8%. Overall, YOLO models provided high accuracy and precision in detecting blood cells, making them suitable for medical image analysis. In conclusion, the study demonstrates that the YOLO model family, especially YOLO v5, holds significant potential for advancing automated blood cell detection. These findings can help improve diagnostic accuracy and contribute to more efficient clinical workflows.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An AG-RetinaNet for Embryonic Blastomeres Detection and Counting","authors":"Wenju Zhou, Ouafa Talha, Xiaofei Han, Qiang Liu, Yuan Xu, Zhenbo Zhang, Naitong Yuan","doi":"10.1002/ima.70034","DOIUrl":"https://doi.org/10.1002/ima.70034","url":null,"abstract":"<div>\u0000 \u0000 <p>Embryo morphology assessment is crucial for determining embryo viability in assisted reproductive technology. Traditional manual evaluation, while currently the primary method, is time-consuming, resource-intensive, and prone to inconsistencies due to the complex analysis of morphological parameters such as cell shape, size, and blastomere count. For rapid and accurate recognition and quantification of blastomeres in embryo images, Attention Gated-RetinaNet (AG-RetinaNet) model is proposed in this article. AG-RetinaNet combines an attention block between the backbone network and the Feature Pyramid Network to overcome the difficulties posed by overlapping blastomeres and morphological changes in embryo shape. The proposed model, trained on a dataset of human embryo images at different cell stages, uses ResNet50 and ResNet101 as backbones for performance comparison. Experimental results demonstrate its competitive performance against state-of-the-art detection models, achieving 95.8% average precision while balancing detection accuracy and computational efficiency. Specifically, the AG-RetinaNet achieves 83.08% precision, 91.13% sensitivity, 90.91% specificity, and an F1-score of 86.92% under optimized Intersection Over Union and confidence thresholds, effectively detecting and counting blastomeres across various grades. The comparison between these results and the manual annotations of embryologists confirms that our model has the potential to improve and streamline the workflow of embryologists in clinical practice.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Chronic Wound Assessment System Using an Improved UPerNet Model","authors":"Zaifang Zhang, Shenjun Sheng, Shihui Zhu, Jian Jin","doi":"10.1002/ima.70032","DOIUrl":"https://doi.org/10.1002/ima.70032","url":null,"abstract":"<div>\u0000 \u0000 <p>Wound assessment plays a crucial role in the healing process. Traditional methods for wound assessment, relying on manual judgment and recording, often yield inaccurate outcomes and require specialized medical equipment. This will result in increasing regular hospital visits and pose a series of challenges, such as delayed examinations, heightened infection risks, and increased costs. Thus, a real-time, portable, and convenient wound assessment system is essential for healing chronic wounds and reducing complications. This paper proposes an improved UPerNet network for wound tissue segmentation, which consists of three modules: Feature-aligned Pyramid Network (FaPN), Kernel update head, and Convolutional Block Attention Module (CBAM). The FaPN is employed to address feature misalignment. The Kernel update head is based on K-Net and dynamically updates convolutional kernel weights. The CBAM module is adopted to attend to crucial features. Ablation studies and comparative studies show that this method can achieve superior performance in wound tissue segmentation compared to other common segmentation models. Meanwhile, based on this method, we develop a mobile application. By using this system, patients can easily upload wound images for assessment, facilitating the convenient tracking of wound healing progress at home, thereby reducing medical expenses and hospital visitation times.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CBAM Attention Gate-Based Lightweight Deep Neural Network Model for Improved Retinal Vessel Segmentation","authors":"Kashif Fareed, Anas Khan, Musaed Alhussein, Khursheed Aurangzeb, Aamir Shahzad, Mazhar Islam","doi":"10.1002/ima.70031","DOIUrl":"https://doi.org/10.1002/ima.70031","url":null,"abstract":"<div>\u0000 \u0000 <p>Over the years, researchers have been using deep learning in different fields of science including disease diagnosis. Retinal vessel segmentation has seen significant advancements through deep learning techniques, resulting in high accuracy. Despite this progress, challenges remain in automating the segmentation process. One of the most pressing and often overlooked issues is computational complexity, which is critical for developing portable diagnostic systems. To address this, this study introduces a CBAM-Attention Gate-based U-Netmodel aimed at reducing computational complexity without sacrificing performance on evaluation metrics. The performance of the model was analyzed using four publicly available fundus image datasets: CHASE_DB1, DRIVE, STARE, and HRF, and it achieved sensitivity, specificity, accuracy, AUC, and MCC performances (0.7909, 0.9975, 0.9723, 0.9867, and 0.8011), (0.8217, 0.9816, 0.9674, 0.9849, and 0.9778), (0.8346, 0.9790, 0.9680, 0.9855, and 0.7810), and (0.8082, 0.9769, 0.9638, 0.9723, and 0.7575), respectively. Moreover, this model comprises of only 0.8 million parameters, which makes it one of the lightest available models used for retinal vessel segmentation. This lightweight yet efficient model is most suitable for use in low-end hardware devices. The attributes of significantly lower computational complexity along with improved evaluation metrics advocates for its deployment in portable embedded devices to be used for population-level screening programs.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NIR-II Fluorescence Image Translation via Latent Space Disentanglement","authors":"Xiaoming Yu, Jie Tian, Zhenhua Hu","doi":"10.1002/ima.70028","DOIUrl":"https://doi.org/10.1002/ima.70028","url":null,"abstract":"<div>\u0000 \u0000 <p>The second near-infrared window (NIR-II) fluorescence imaging is an excellent optical in vivo imaging method. Compared with NIR-IIa window (1000–1300 nm), NIR-IIb window (1500–1700 nm) imaging can significantly improve the imaging effect. However, due to the limitation that there are no molecular probes approved for NIR-IIb imaging in humans, we expect to achieve the translation of NIR-IIa images to NIR-IIb images through artificial intelligence. NIR-II fluorescence imaging is divided into macroscopic imaging of animal bodies and microscopic imaging of tissue and nerves. The two imaging scenarios are different. To realize the translation of two scene images at the same time, this paper designs a generative adversarial network model. The core idea is to disentangle the information in the encoded latent space into the information shared by the macroscopic and microscopic images and information specific to both to extract the high-quality feature maps for decoding. In addition, we improve the contrastive loss and use the attention-aware sampling strategy to select patches, which further maintains the source image content structure. The experiment results demonstrate the superiority and effectiveness of the proposed method.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143117887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DLKUNet: A Lightweight and Efficient Network With Depthwise Large Kernel for Medical Image Segmentation","authors":"Junan Zhu, Zhizhe Tang, Ping Ma, Zheng Liang, Chuanjian Wang","doi":"10.1002/ima.70035","DOIUrl":"https://doi.org/10.1002/ima.70035","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurate multi-organ segmentation is crucial in computer-aided diagnosis, surgical navigation, and radiotherapy. Deep learning-based methods for automated multi-organ segmentation have made significant progress recently. However, these improvements often increase model complexity, leading to higher computational costs. To address this problem, we propose a lightweight and efficient network with depthwise large kernel, called DLKUNet. Firstly, we utilize a hierarchical architecture with large kernel convolution to effectively capture multi-scale features. Secondly, we constructed three segmentation models with different layers to meet different speed and accuracy requirements. Additionally, we employ a novel training strategy that works seamlessly with this module to enhance performance. Finally, we conducted extensive experiments on the multi-organ abdominal segmentation (Synapse) and the Automated Cardiac Diagnosis Challenge (ACDC) dataset. DLKUNet-L significantly improves the 95% Hausdorff Distance to 13.89 mm with 65% parameters of Swin-Unet on the Synapse. Furthermore, DLKUNet-S and DLKUNet-M use only 4.5% and 16.52% parameters of Swin-Unet, achieving Dice Similarity Coefficient 91.71% and 91.74% on the ACDC. These results underscore the proposed model's superior performance in terms of accuracy, efficiency, and practical applicability.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143116276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intuitionistic Fuzzy Position Embedding Transformer for Motion Artefact Correction in Chemical Exchange Saturation Transfer MRI Series","authors":"Bowei Chen, Umara Khalid, Enhui Chai, Li Chen","doi":"10.1002/ima.70024","DOIUrl":"https://doi.org/10.1002/ima.70024","url":null,"abstract":"<div>\u0000 \u0000 <p>Chemical Exchange Saturation Transfer (CEST) Magnetic Resonance Imaging (MRI) is a cutting-edge molecular imaging technique that enables non-invasive in vivo visualization of biomolecules, such as proteins and glycans, with exchangeable protons. However, CEST MRI is prone to motion artefacts, which can significantly reduce its accuracy and reliability. To address this issue, this study proposes an image registration method specifically designed to correct motion artefacts in CEST MRI data, with the objective of improving the precision of CEST analysis. Traditional registration techniques often suffer from premature convergence to local optima, especially in the presence of rigid motion within the ventricular region. The proposed approach leverages an Intuitionistic Fuzzy Set (IFS) position encoding integrated with a multi-head attention mechanism to achieve accurate global registration. A custom loss function is designed based on the properties of IFS position encoding to further enhance the model's motion correction capabilities. Experimental results demonstrate that this method provides a more robust and accurate solution for motion artefact correction in CEST MRI, offering new potential for improving the precision of CEST imaging in clinical and research settings.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143116277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Measurement of Lower Limb Angles From Pre- and Post-Operative X-Ray Images Using a Variant SegNet With Compound Loss Function","authors":"Iyyakutty Dheivya, Gurunathan Saravana Kumar","doi":"10.1002/ima.70027","DOIUrl":"https://doi.org/10.1002/ima.70027","url":null,"abstract":"<div>\u0000 \u0000 <p>This work envisages developing an automated computer workflow to locate the landmarks like knee center, tibial plateau, tibial and femoral axis to measure Femur-Tibia Angle (FTA), Medial Proximal Tibial Angle (MPTA), and Hip Knee Ankle Angle (HKAA) from the pre- and post-operative x-rays. In this work, we propose a variant of semantic segmentation model (vSegNet) for the segmentation of the knee and tibia gap for extracting important features used in the automated workflow. Since femur tibia gap is a small region as compared to the complete x-ray image, it poses severe class imbalance issue. Using a combination of the Dice coefficient and Hausdorff distance as a compound loss function, the proposed neural network model shows better segmentation performance as compared to state-of-the-art segmentation models like U-Net, SegNet (with and without VGG16 pre-trained weights), VGG16, MobileNetV2, Pretrained DeepLabv3+ (Resnet18 weights), and Pretrained FCN (VGG16 weights) and different loss functions. We subsequently propose computer methods for feature recognition and prediction of landmarks at femur, tibial and knee center, the side of the fibula and, subsequently, the various knee joint angles. An analysis of sensitivity of segmentation accuracy on the accuracy of predicted angles further substantiate the efficacy of the proposed methods. Dice score of U-Net, Pretrained SegNet, SegNet, VGG16, MobileNetV2, Pretrained DeepLabv3+, Pretrained FCN, vSegNet with cross-entropy loss function and vSegNet with compound loss function are observed as <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.083</mn>\u0000 <mo>±</mo>\u0000 <mn>0.04</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.083pm 0.04 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.51</mn>\u0000 <mo>±</mo>\u0000 <mn>0.16</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.51pm 0.16 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.66</mn>\u0000 <mo>±</mo>\u0000 <mn>0.20</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.66pm 0.20 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.61</mn>\u0000 <mo>±</mo>\u0000 <mn>0.15</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.61pm 0.15 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mn>0.17</mn>\u0000 <mo>±</mo>\u0000 <mn>0.16</mn>\u0000 </mrow>\u0000 <annotation>$$ 0.17pm 0.16 $$</annotation>\u0000 </semantics></math>, <span></span><math>\u0000 <semantics>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143115783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MT-BAAN: Multi-View Topological Bilinear Aggregation Attention Network Model for Alzheimer's Disease Diagnosis","authors":"Jie Liu, Weiming Zeng, Wei Zhang, Ru Zhang, Sizhe Luo","doi":"10.1002/ima.70029","DOIUrl":"https://doi.org/10.1002/ima.70029","url":null,"abstract":"<div>\u0000 \u0000 <p>Alzheimer's disease (AD) and mild cognitive impairment (MCI) are common cognitive disorders. Research has shown that cognitive decline is closely related to abnormal connections between different functional areas of the brain. However, research on brain functional network (BFN) has mainly focused on individual topological structures, seldom considering the sparsity of the BFNs and the complexity of multi-level interactions among brain regions. To tackle this problem, in this article, we propose a multi-view topological bilinear aggregation attention network model (MT-BAAN) for disease diagnosis and brain network analysis. Based on rs-fMRI data, the model mainly includes a multi-view graph construction module (MVGC), a feature enhancement module (FEM), a dual-level attention module (DLAM), and a graph relation convolution network module (GRCN). MVGC module uses two sparse methods to construct high-view and low-view graphs and retains fully connected BFN topology as the full-view, aiming at capturing multi-scale topological features. FEM and DLAM utilize bilinear aggregation and attention mechanisms, respectively, to learn topological features and obtain weight coefficients that reflect the importance of different network views. The GRCN module employs two convolutional operators to learn the BFN topology information at the node and network levels and completes the classification. The experimental results indicate that the complementary learning of multi-view topologies can effectively improve model performance. Across binary classification tasks and ternary classification tasks, MT-BAAN shows superior performance compared to other experimental methods, which is valuable for research and clinical diagnosis of attention deficit disorder AD and MCI.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143115850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ESFCU-Net: A Lightweight Hybrid Architecture Incorporating Self-Attention and Edge Enhancement Mechanisms for Enhanced Polyp Image Segmentation","authors":"Wenbin Yang, Xin Chang, Xinyue Guo","doi":"10.1002/ima.70026","DOIUrl":"https://doi.org/10.1002/ima.70026","url":null,"abstract":"<div>\u0000 \u0000 <p>Early detection of polyps during endoscopy reduces the risk of malignancy and facilitates timely intervention. Precise polyp segmentation during endoscopy aids clinicians in identifying polyps, playing a vital role in the clinical prevention of malignancy. However, due to considerable differences in the size, color, and morphology of polyps, the resemblance between polyp lesions and their background, and the impact of factors like lighting changes, low-contrast areas, and gastrointestinal contents during image acquisition, accurate polyp segmentation remains a challenging issue. Additionally, most existing methods require high computational power, which restricts their practical application. Our objective is to develop and test a new lightweight polyp segmentation architecture. This paper presents a hybrid lightweight architecture called ESFCU-Net that combines self-attention and edge enhancement to address these challenges. The model comprises an encoder-decoder and an improved fire module (ESF module), which can learn both local and global information, reduce information loss, maintain computational efficiency, enhance the extraction of critical features in images, and includes a coordinate attention mechanism in each skip connection to suppress background interference and minimize spatial information loss. Extensive validation on two public datasets (Kvasir-SEG and CVC-ClinicDB) and one internal dataset reveals that this network exhibits strong learning performance and generalization capabilities, significantly enhances segmentation accuracy, surpasses existing segmentation methods, and shows potential for clinical application. The code for our work and more technical details can be found at https://github.com/aaafoxy/ESFCU-Net.git.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}