{"title":"MEDCnet: A Memory Efficient Approach for Processing High-Resolution Fundus Images for Diabetic Retinopathy Classification Using CNN","authors":"Mohsin Butt, D. N. F. NurFatimah, Majid Ali Khan, Ghazanfar Latif, Abul Bashar","doi":"10.1002/ima.70063","DOIUrl":"https://doi.org/10.1002/ima.70063","url":null,"abstract":"<p>Modern medical imaging equipment can capture very high-resolution images with detailed features, and such images are used in several domains. Diabetic retinopathy (DR) is a medical condition in which the elevated blood sugar levels of diabetic patients damage the retinal vessels of the eye. The use of high-resolution fundus images in DR classification is, however, quite limited due to graphics processing unit (GPU) memory constraints, and the problem worsens with the increasing complexity of current state-of-the-art deep learning models. In this paper, we propose a memory-efficient, divide-and-conquer approach for training deep learning models that can identify both high-level and detailed low-level features from high-resolution images within given GPU memory constraints. The proposed approach first uses traditional transfer learning to train the deep learning model on reduced-size images. The trained model is then used to extract detailed low-level features from fixed-size patches of higher-resolution fundus images, and these features are classified with standard machine learning algorithms. We evaluated the proposed approach on the DDR and APTOS datasets and compared it with several alternatives; our model achieves maximum classification accuracies of 95.92% and 97.39% on DDR and APTOS, respectively. In general, the proposed approach can achieve better accuracy by exploiting detailed features from high-resolution images within GPU memory constraints.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70063","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143689191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
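The divide-and-conquer step of the MEDCnet abstract — tiling a high-resolution fundus image into fixed-size patches so each fits a CNN backbone's input within GPU memory — can be sketched in a few lines. This is an illustrative numpy sketch, not the paper's code; the function name and the 224-pixel patch size are assumptions.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 224) -> np.ndarray:
    """Tile a high-resolution image into fixed-size, non-overlapping patches.

    Edges that do not fill a whole patch are zero-padded so every patch
    has the same shape and fits the fixed input size of a CNN backbone.
    """
    h, w, c = image.shape
    ph = -(-h // patch) * patch   # ceil h up to a multiple of the patch size
    pw = -(-w // patch) * patch   # ceil w up to a multiple of the patch size
    padded = np.zeros((ph, pw, c), dtype=image.dtype)
    padded[:h, :w] = image
    # Reshape into a grid of tiles, then flatten the grid dimensions.
    patches = (padded
               .reshape(ph // patch, patch, pw // patch, patch, c)
               .swapaxes(1, 2)
               .reshape(-1, patch, patch, c))
    return patches

# A 1000x1500 fundus image tiled into 224x224 patches: 5 x 7 = 35 patches.
img = np.zeros((1000, 1500, 3), dtype=np.uint8)
print(extract_patches(img).shape)  # (35, 224, 224, 3)
```

Each patch would then be passed through the pretrained backbone to collect the low-level feature vectors that the abstract describes feeding to a standard classifier.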
{"title":"A Novel Transfer Learning Approach for Skin Cancer Classification on ISIC 2024 3D Total Body Photographs","authors":"Javed Rashid, Salah Mahmoud Boulaaras, Muhammad Shoaib Saleem, Muhammad Faheem, Muhammad Umair Shahzad","doi":"10.1002/ima.70065","DOIUrl":"https://doi.org/10.1002/ima.70065","url":null,"abstract":"<div><p>Skin cancer, and melanoma in particular, is a significant public health issue because of its rapidly rising death rate. Previous research has relied on 2D data to detect skin cancer, and current clinical methods, such as biopsies, are arduous; new, more effective models and tools are therefore needed. The main objective of this work is to improve the 3D ResNet50 model for skin cancer classification through transfer learning. Trained on the ISIC 2024 3D Total Body Photographs (3D-TBP), a Kaggle competition dataset, the model detects five significant types of skin cancer: melanoma (Mel), melanocytic nevus (Nev), basal cell carcinoma (BCC), actinic keratosis (AK), and benign keratosis (BK). Fine-tuning achieves peak performance, while data augmentation addresses overfitting. The proposed model outperforms state-of-the-art methods with an overall accuracy of 93.88%; accuracy drops to 85.67% when 2D data are used, which makes the contribution of the 3D data apparent. The model also achieves excellent recall and precision. According to the findings, the 3D ResNet50 model improves the diagnostic process and may be regarded as a noninvasive, accurate, and efficient alternative to conventional approaches. The model is valuable because it can support a significant clinical application: the early diagnosis of melanoma.</p></div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143689197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantitative Susceptibility Mapping MRI With Computer Vision Metrics to Reduce Scan Time for Brain Hemorrhage Assessment","authors":"Huiyu Huang, Shreyas Balaji, Bulent Aslan, Yan Wen, Magdy Selim, Ajith J. Thomas, Aristotelis Filippidis, Pascal Spincemaille, Yi Wang, Salil Soman","doi":"10.1002/ima.70070","DOIUrl":"https://doi.org/10.1002/ima.70070","url":null,"abstract":"<div><p>Optimizing clinical imaging parameters requires balancing scan time against image quality. Quantitative susceptibility mapping (QSM) MRI, particularly for detecting intracranial hemorrhage (ICH), involves multiple echo times (TEs), leading to longer scan durations that can affect patient comfort and imaging efficiency. This study evaluates the necessity of specific TEs for QSM MRI in ICH patients and identifies shorter scan protocols that use computer vision metrics (CVMs) to maintain diagnostic accuracy. Fifty-four patients with suspected ICH were retrospectively recruited. Multiecho gradient recalled echo (mGRE) sequences with 11 TEs were used for the reference QSM MRI. Subsets of TEs compatible with producing QSM images were generated, yielding 71 subgroups per patient. QSM images from each subgroup were compared to the reference images using 14 CVMs. Linear regression and Wilcoxon signed-rank tests identified optimal subgroups that minimize scan time while preserving image quality, as part of the computer vision optimized rapid imaging (CORI) method described here. CVM-based analysis demonstrated Subgroup 1 (TE1-3) to be optimal under several CVMs, supporting a reduction in scan time from 4.5 to 1.23 min (a 73% reduction). Other CVMs suggested subgroups with longer maximum TEs as optimal, achieving scan time reductions of 9%–37%. Visual assessments by a neuroradiologist and a trained research assistant confirmed no significant difference in ICH area measurements between the reference QSM and QSM derived from the CORI-identified optimal subgroups, while QSM derived from the CORI-identified worst subgroups differed significantly (<i>p</i> < 0.05). The findings support shorter QSM MRI protocols for ICH evaluation and suggest that CVMs may aid optimization efforts for other clinical imaging protocols.</p></div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143688926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
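The CORI workflow above scores each reduced-TE reconstruction against the full 11-echo reference with computer vision metrics. As a minimal illustration of one such metric, here is peak signal-to-noise ratio (PSNR) in plain numpy; the paper uses 14 CVMs, and this sketch (function name, synthetic data, and `data_range` default included) is an assumption, not the study's code.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio: a simple computer vision metric (CVM)
    for scoring a reduced-TE reconstruction against the full-echo
    reference image. Higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Synthetic stand-ins: a mildly and a strongly degraded copy of a reference.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
close = np.clip(ref + rng.normal(0, 2, ref.shape), 0, 255)   # mild degradation
far = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255)    # strong degradation
print(psnr(ref, close) > psnr(ref, far))  # True
```

A TE subgroup whose reconstruction keeps PSNR (and the other CVMs) close to the reference is a candidate for the shortened protocol.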
{"title":"TS-Net: Trans-Scale Network for Medical Image Segmentation","authors":"HuiFang Wang, YaTong Liu, Jiongyao Ye, Dawei Yang, Yu Zhu","doi":"10.1002/ima.70064","DOIUrl":"https://doi.org/10.1002/ima.70064","url":null,"abstract":"<div><p>Accurate medical image segmentation is crucial for clinical diagnosis and disease treatment, yet most existing methods still struggle to extract accurate features from medical images because of blurred boundaries and varied appearances. To overcome these limitations, we propose a novel medical image segmentation network named TS-Net that effectively combines the advantages of CNNs and Transformers to enhance feature extraction. Specifically, we design a Multi-scale Convolution Modulation (MCM) module that simplifies the self-attention mechanism through a convolution modulation strategy, incorporating multi-scale large-kernel convolution into depthwise-separable convolution to effectively extract multi-scale global and local features. We also adopt the concept of feature complementarity, facilitating interaction between high-level semantic features and low-level spatial features through the designed Scale Inter-active Attention (SIA) module. The proposed method is evaluated on four different types of medical image segmentation datasets, and the experimental results show that it is competitive with state-of-the-art methods. The method achieves an average Dice Similarity Coefficient (DSC) of 90.79% ± 1.01% on the public NIH dataset for pancreas segmentation, 76.62% ± 4.34% on the public MSD dataset for pancreatic cancer segmentation, 80.70% ± 6.40% on the private PROMM (Prostate Multi-parametric MRI) dataset for prostate cancer segmentation, and 91.42% ± 0.55% on the public Kvasir-SEG dataset for polyp segmentation. The results across these four segmentation tasks demonstrate the effectiveness of the Trans-Scale network.</p></div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143639173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Time-Adaptive Diffusion-Based CT Image Denoising Method by Processing Directional and Non-Local Information","authors":"Farzan Niknejad Mazandarani, Paul Babyn, Javad Alirezaie","doi":"10.1002/ima.70067","DOIUrl":"https://doi.org/10.1002/ima.70067","url":null,"abstract":"<p>Low-dose computed tomography (CT) images are prone to noise and artifacts caused by photon starvation and electronic noise. Recently, researchers have explored transformer-based neural networks combined with generative diffusion models, showing promising results in denoising CT images. Despite their high performance, these approaches often fail to exploit crucial information in the input data, resulting in suboptimal image quality. To address this limitation, we propose Starformer, a novel transformer-based operation designed to extract the non-local directional features essential for diagnostic accuracy while maintaining acceptable computational overhead. Starformer is seamlessly integrated into the time-adaptive schedules of a diffusion model, dynamically balancing global structure extraction and fine texture refinement throughout the diffusion process. This enables the generation of high-quality, realistic textures in the final denoised images. Extensive experimental results demonstrate the effectiveness of the proposed approach in enhancing CT image quality, with improvements of up to 15% in PSNR and 36% in SSIM, highlighting its superiority over state-of-the-art methods.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70067","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143639136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating Medical Reports With a Novel Deep Learning Architecture","authors":"Murat Ucan, Buket Kaya, Mehmet Kaya","doi":"10.1002/ima.70062","DOIUrl":"https://doi.org/10.1002/ima.70062","url":null,"abstract":"<p>The writing of medical reports by doctors in hospitals is a critical and sensitive process that is time-consuming, prone to human error, and requires medical experts on site. Existing work on autonomous medical report generation from medical images has not achieved sufficiently high success. The goal of this paper is to present a new, fast, and high-performance method for the autonomous generation of paragraph-level medical reports. A deep learning-based hybrid encoder–decoder architecture called G-CNX is developed to generate meaningful reports, with ConvNeXtBase on the encoder side and a GRU-based RNN on the decoder side. Images and reports from the Indiana University Chest X-ray and ROCOv2 datasets were used for training, validation, and testing. The experiments showed that the autonomously generated medical reports achieved the highest performance among comparable studies in the literature. On the Indiana University Chest X-ray dataset, scores of 0.6544, 0.5035, 0.3682, 0.2766, and 0.4277 were obtained on the Bleu-1, Bleu-2, Bleu-3, Bleu-4, and Rouge evaluation metrics, respectively. On the ROCOv2 dataset, scores of 0.5593 and 0.3990 were obtained on the Bleu-1 and Rouge evaluation metrics, respectively. Beyond the quantitative analysis, the results were also analyzed observationally and with density plots, and statistical significance tests were conducted to confirm their reliability. The results show that the generated reports are semantically similar to reports written by real doctors, and that the autonomous reports are consistent and reliable. The proposed method can improve the efficiency of medical reporting, reduce the workload of specialized doctors, and improve the quality of diagnosis and treatment processes.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70062","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143602468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
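The Bleu-1 scores reported above measure unigram overlap between a generated report and a reference report. For orientation, here is a minimal single-reference Bleu-1 with brevity penalty in pure Python; the function name and the simplified single-reference form are assumptions for illustration, not the paper's evaluation code (which would follow the standard multi-reference BLEU definition).

```python
from collections import Counter
import math

def bleu1(candidate: str, reference: str) -> float:
    """Unigram BLEU with brevity penalty, simplified to one reference.

    Clipped unigram precision: each candidate word counts only as often
    as it appears in the reference. Short candidates are penalized.
    """
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    # Brevity penalty: 1 if the candidate is at least as long as the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# 4 of 5 candidate words appear in the reference -> precision 0.8, no penalty.
print(round(bleu1("no acute cardiopulmonary abnormality seen",
                  "no acute cardiopulmonary abnormality"), 4))  # 0.8
```

Bleu-2 through Bleu-4 extend the same idea to bigram through 4-gram overlap, which is why those scores fall off as n grows.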
{"title":"3D Microscopic Images Segmenter Modeling by Applying Two-Stage Optimization to an Ensemble of Segmentation Methods Using a Genetic Algorithm","authors":"Muamer Kafadar, Zikrija Avdagic, Ingmar Besic, Samir Omanovic","doi":"10.1002/ima.70058","DOIUrl":"https://doi.org/10.1002/ima.70058","url":null,"abstract":"<div><p>This paper presents a segmentation approach based on multi-level supervisory control of the optimization of segmentation-method parameters and the adjustment of 3D microscopic images, with the aim of more efficient segmentation. The challenge is to improve the segmentation of 3D microscopic images using known segmentation methods without losing processing speed. In the first phase of this research, a model was developed based on an ensemble of 11 segmentation methods whose parameters were optimized using genetic algorithms (GA). GA optimization of the ensemble produces a set of segmenters that are further evaluated using a two-stage voting system to find the best segmenter configuration according to multiple criteria. In the second phase, the final segmenter model is developed as the result of two-level optimization. The best segmenter obtained does not slow image processing in deployment, as its operating speed is practically equal to that of the basic segmentation method. Objective selection and fine-tuning of the segmenter was done using multiple segmentation methods, each subjected to an intensive process of many two-stage optimization cycles. A metric was created specifically for objective analysis of segmenter performance and was used as the fitness function during GA optimization and result validation. Compared to expert manual segmentation, the segmenter scores 99.73% under the best-mean-segmenter principle (average segmentation score for each 3D slice image over the entire sample set) and 99.49% under the most-stable-segmenter principle (the same average, additionally considering the reference image classes MGTI median, MGTI voter, and GGTI).</p></div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143595620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
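The GA-based parameter optimization described above — evolve candidate parameter settings, score them with a segmentation-quality fitness function, keep the best — can be illustrated with a toy single-parameter example. This is a deliberately simplified sketch (one threshold in [0, 1], a synthetic fitness function, illustrative names), not the paper's two-stage, multi-method implementation.

```python
import random

def optimize_threshold(fitness, generations=30, pop_size=20, seed=42):
    """Toy genetic algorithm for one segmentation parameter (a threshold
    in [0, 1]): elitist selection, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                  # selection: keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2                       # crossover: midpoint
            child += rng.gauss(0, 0.05)               # mutation: small jitter
            children.append(min(1.0, max(0.0, child)))
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in fitness peaking at threshold 0.6 (in place of a real
# segmentation score against ground truth).
best = optimize_threshold(lambda t: -(t - 0.6) ** 2)
print(round(best, 2))
```

In the paper's setting the individual would be a full parameter vector across the 11 ensemble methods and the fitness would be the purpose-built segmenter-performance metric.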
{"title":"HSCFNet: Lightweight Convolutional Neural Network for the Classification of Infectious and Non-Infectious Skin Diseases","authors":"Xiangyu Deng, Yapeng Zheng","doi":"10.1002/ima.70052","DOIUrl":"https://doi.org/10.1002/ima.70052","url":null,"abstract":"<div><p>Accurate differentiation between infectious and non-infectious skin diseases is crucial in dermatological diagnosis. Although deep learning techniques have achieved remarkable results in classifying a wide range of skin diseases, an effective unified framework for this task is still lacking. To this end, this paper proposes a lightweight convolutional neural network, HSCFNet, for classifying 9 mainstream infectious skin diseases and 10 non-infectious skin diseases. HSCFNet consists of two core modules: the multi-gate hybrid convolution (MGHC) module and the triple residual fusion (TRF) module. MGHC integrates standard convolution and improved deformable convolution into two branches, selecting between them for feature extraction through parameter control, while a gating mechanism performs feature selection on the extracted features to strengthen the extraction of important features. The TRF module facilitates information interaction by fusing features at three different resolutions, further improving classification performance. The experimental results show that the accuracy, precision, recall, specificity, and F1 score of HSCFNet reach 97.87%, 97.76%, 97.26%, 99.88%, and 97.43%, respectively, while the model is only 26.1 MB, remaining lightweight at high performance. Compared with 10 existing mainstream classification models, HSCFNet demonstrates the best classification performance. This study provides an efficient and lightweight solution for clinical skin disease diagnosis, which is important for accurately distinguishing mainstream infectious and non-infectious skin diseases.</p></div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143595622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CLA-UNet: Convolution and Focused Linear Attention Fusion for Tumor Cell Nucleus Segmentation","authors":"Wei Guo, Zhanxu Liu, Yu Ou","doi":"10.1002/ima.70041","DOIUrl":"https://doi.org/10.1002/ima.70041","url":null,"abstract":"<div><p>Accurate diagnosis of tumors is crucial for improving treatment outcomes. To precisely delineate the nucleus regions of tumor cells in hematoxylin and eosin (H&E) stained tissue images while reducing computational overhead, we propose a novel encoder–decoder architecture named Convolution and Focused Linear Attention fusion UNet (CLA-UNet), which integrates depthwise-separable convolution and convolution-focused linear attention into the U-Net network. The innovation of this study lies in three aspects: first, at the skip connections, the Global–Local Feature Fusion and Split-Input Transformer (GLFS Transformer) block extracts global feature information and feeds it to the corresponding decoder layers; second, depthwise-separable convolution blocks are used to construct and deepen the backbone network; finally, a channel attention module at the decoder focuses on important channel information. Experimental results on the public MoNuSeg tumor cell database show that the algorithm achieves an IoU, Dice score, precision, and recall of 66.18%, 79.57%, 83.23%, and 76.91%, respectively. Compared with other segmentation methods, this algorithm demonstrates superior segmentation performance, significantly outperforming the comparison models while maintaining an extremely low parameter count and computational cost. The lightweight design of the model facilitates the practical deployment of this research.</p></div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143595621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D U-Net-Based Brain Tumor Semantic Segmentation Using a Modified Data Generator","authors":"Dinesh Kumar, Dimple Sethi, Wagaye Tadele Kussa, Yeabsira Mengistu Dana, Hitesh Kag","doi":"10.1002/ima.70056","DOIUrl":"https://doi.org/10.1002/ima.70056","url":null,"abstract":"<div><p>Brain tumors, particularly gliomas, pose a significant global health challenge, causing numerous fatalities annually. Among gliomas, glioblastoma stands out as a highly aggressive type, often resulting in severe symptoms. Accurate segmentation of brain tumors from multimodal magnetic resonance imaging (MRI) data is crucial for effective diagnosis and treatment planning. This study introduces a novel 3D U-Net semantic segmentation model with a modified data generator, tailored to the brain tumor segmentation (BraTS) 2020 dataset. The modified data generator performs on-the-fly augmentation, producing diverse, distinct samples during training; this reduces overfitting and enhances generalization, which is critical for handling the variability of brain tumor presentations. The model was trained end-to-end without weight transfer, with the Dice score as the primary evaluation metric. It achieved Dice scores of 82.2%, 90.3%, and 77.8% for the tumor core, whole tumor, and enhancing tumor regions, respectively, on the BraTS 2020 validation dataset. The minimal deviation from the training results underscores the model's robustness and reliability in segmenting different tumor subtypes. The modified data generator approach presents a promising advancement for brain tumor segmentation, with the potential for significant improvements in treatment planning and patient outcomes. By effectively addressing data variability, this model could support more accurate and robust segmentation in clinical applications.</p></div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143594915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
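Several of the abstracts above (BraTS brain tumor, TS-Net, CLA-UNet) report results as Dice scores. The metric itself is compact enough to state exactly; this is a generic numpy sketch of the Dice similarity coefficient for binary masks (function name and epsilon smoothing are illustrative conventions, not taken from any of the papers).

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with a small epsilon to avoid 0/0."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Two overlapping 2D masks: |A| = 4, |B| = 4, |A ∩ B| = 2 -> 2*2/(4+4) = 0.5.
a = np.zeros((4, 4), dtype=bool); a[0:2, 0:2] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 0:2] = True
print(round(dice_score(a, b), 3))  # 0.5
```

For 3D segmentation the same formula applies voxel-wise per tumor region, which is how per-region scores such as 82.2% (tumor core) arise.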