{"title":"SADiff: A Sinogram-Aware Diffusion Model for Low-Dose CT Image Denoising.","authors":"Farzan Niknejad Mazandarani, Paul Babyn, Javad Alirezaie","doi":"10.1007/s10278-025-01469-8","DOIUrl":"https://doi.org/10.1007/s10278-025-01469-8","url":null,"abstract":"<p><p>CT image denoising is a crucial task in medical imaging systems, aimed at enhancing the quality of acquired visual signals. The emergence of diffusion models in machine learning has revolutionized the generation of high-quality CT images. However, diffusion-based CT image denoising methods suffer from two key shortcomings. First, they do not incorporate image formation priors from CT imaging, which limits their adaptability to the CT image denoising task. Second, they are trained in a single phase on CT images with varying structures and textures, which hinders the model's generalization capability. To address the first limitation, we propose a novel conditioning module for our diffusion model that leverages image formation priors from the sinogram domain to generate rich features. To tackle the second issue, we introduce a two-phase training mechanism in which the network gradually learns different anatomical textures and structures.
Extensive experimental results demonstrate the effectiveness of both approaches in enhancing CT image quality, with improvements of up to 17% in PSNR and 38% in SSIM, highlighting their superiority over state-of-the-art methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
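The SADiff record above reports image-quality gains of up to 17% in PSNR. For reference, here is a minimal numpy sketch of how PSNR is conventionally computed between a reference and a denoised image; this is illustrative only (the function name and the `data_range` convention are not from the paper):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image whose intensities span `data_range`."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

SSIM, the other reported metric, additionally accounts for local structure and is usually computed with a windowed implementation rather than a single global statistic.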
{"title":"Robust Automatic Grading of Blunt Liver Trauma in Contrast-Enhanced Ultrasound Using Label-Noise-Resistant Models.","authors":"Tianci Zhang, Rui Li, Zhaoming Zhong, Xuan Zhang, Tuo Liu, Guang-Quan Zhou, Faqin Lv","doi":"10.1007/s10278-025-01466-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01466-x","url":null,"abstract":"<p><p>Recently, contrast-enhanced ultrasound (CEUS) has shown potential value in the diagnosis of liver trauma, the leading cause of death in blunt abdominal trauma. However, the inherent speckle noise and the complicated visual characteristics of blunt liver trauma in CEUS images make the diagnosis highly dependent on the expertise of radiologists, which is subjective and time-consuming. Moreover, the intra- and inter-observer variance inevitably influences the accuracy of diagnosis using CEUS. In this study, we propose a Label-Noise-Resistant CNN-Transformer Hybrid Architecture (LNRHA) for CEUS liver trauma classification. Firstly, a CNN-Transformer-based Self-Contextual Dual Transformer (SCDT) module, a shared feature encoder followed by dual-perspective Transformer-based modules, is developed to perceive the semantics of trauma lesions from neighbor-contextual and self-attention perspectives. Moreover, to mitigate the annotation noise due to intra- and inter-observer variance, we design a Confidence-Based Label Filter (CLF) module to distinguish potential label noise data based on the ensemble of the SCDT. The uncertainty of the detected noisy data is gradually penalized using a newly designed loss function, making full use of all the data while avoiding overfitting to misleading information, thus improving the classification performance. Extensive experimental results on an in-house liver trauma CEUS dataset show that our network architecture can achieve promising performance.
Significantly, the experimental results of our LNRHA method on label noise data also outperform most state-of-the-art classification methods, suggesting its effectiveness in diagnosing liver trauma.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
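The abstract above describes a Confidence-Based Label Filter (CLF) that flags probable label noise using an ensemble of the SCDT. The exact filtering rule is not given in the abstract; the following is a minimal sketch of one common variant, flagging samples whose annotated label receives low mean ensemble confidence (the function name and the threshold are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def flag_noisy_labels(prob_ensemble, labels, threshold=0.5):
    """Flag samples whose annotated label gets low mean ensemble confidence.

    prob_ensemble: (n_models, n_samples, n_classes) softmax outputs
    labels: (n_samples,) integer labels as annotated
    Returns a boolean mask marking likely label noise.
    """
    mean_probs = prob_ensemble.mean(axis=0)  # average over ensemble members
    conf_in_label = mean_probs[np.arange(len(labels)), labels]
    return conf_in_label < threshold
```

In the paper's scheme, flagged samples are not discarded; their loss contribution is gradually down-weighted, which is what "making full use of all the data" refers to.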
{"title":"Subtraction of Temporally Sequential Digital Mammograms: Prediction and Localization of Near-Term Breast Cancer Occurrence.","authors":"Kosmia Loizidou, Galateia Skouroumouni, Gabriella Savvidou, Anastasia Constantinidou, Eleni Orphanidou Vlachou, Anneza Yiallourou, Costas Pitris, Christos Nikolaou","doi":"10.1007/s10278-025-01456-z","DOIUrl":"https://doi.org/10.1007/s10278-025-01456-z","url":null,"abstract":"<p><p>The objective is to predict a possible near-term occurrence of a breast mass after two consecutive screening rounds with normal mammograms. For the purposes of this study, conducted between 2020 and 2024, three consecutive rounds of mammograms were collected from 75 women, 46 to 79 years old. Successive screenings had an average interval of ∼2 years. In each case, two mammographic views of each breast were collected, resulting in a dataset with a total of 450 images (3 × 2 × 75). The most recent mammogram was considered the \"future\" screening round and provided the location of a biopsy-confirmed malignant mass, serving as the ground truth for the training. The two normal previous mammograms (\"prior\" and \"current\") were processed and a new subtracted image was created for the prediction. Region segmentation and post-processing were then applied, along with image feature extraction and selection. The selected features were incorporated into several classifiers and, by applying leave-one-patient-out and k-fold cross-validation per patient, the regions of interest were characterized as benign or possible future malignancy. Study participants included 75 women (mean age, 62.5 ± 7.2 years; median age, 62 years). Feature selection from benign and possible future malignancy areas revealed that 14 features provided the best classification. The most accurate classification performance was achieved using ensemble voting, with 98.8% accuracy, 93.6% sensitivity, 98.8% specificity, and 0.96 AUC.
Given the success of this algorithm, its clinical application could enable earlier diagnosis and improve prognosis for patients identified as at risk.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143589194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
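The mammography study above builds its predictor on a subtracted image formed from the two normal prior screenings. The paper's exact pre-processing is not reproduced here; this is a minimal sketch of the core subtraction step, assuming the "prior" and "current" views have already been spatially registered (the function name and clipping choice are illustrative):

```python
import numpy as np

def temporal_subtraction(prior, current):
    """Subtract a prior mammogram from the current one after min-max
    intensity normalization; positive residuals indicate new density.
    Assumes the two views are already spatially registered."""
    def norm(x):
        x = x.astype(np.float64)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return np.clip(norm(current) - norm(prior), 0.0, 1.0)
```

Regions with positive residuals would then feed the segmentation, feature-extraction, and classification stages described in the abstract.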
{"title":"Application of TransUnet Deep Learning Model for Automatic Segmentation of Cervical Cancer in Small-Field T2WI Images.","authors":"Zengqiang Shi, Feifei Zhang, Xiong Zhang, Ru Pan, Yabao Cheng, Huang Song, Qiwei Kang, Jianbo Guo, Xin Peng, Yulin Li","doi":"10.1007/s10278-025-01464-z","DOIUrl":"https://doi.org/10.1007/s10278-025-01464-z","url":null,"abstract":"<p><p>Effective segmentation of cervical cancer tissue from magnetic resonance (MR) images is crucial for automatic detection, staging, and treatment planning of cervical cancer. This study develops an innovative deep learning model to enhance the automatic segmentation of cervical cancer lesions. We obtained 4063 T2WI small-field sagittal, coronal, and oblique axial images from 222 patients with pathologically confirmed cervical cancer. Using this dataset, we employed a convolutional neural network (CNN) along with TransUnet models for segmentation training and evaluation of cervical cancer tissues. In this approach, CNNs are leveraged to extract local information from MR images, whereas Transformers capture long-range dependencies related to shape and structural information, which are critical for precise segmentation. Furthermore, we developed three distinct segmentation models based on coronal, axial, and sagittal T2WI within a small field of view using multidirectional MRI techniques. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were used to assess the performance of the models in terms of segmentation accuracy. The average DSC and AHD values obtained using the TransUnet model were 0.7628 and 0.8687, respectively, surpassing those obtained using the U-Net model by margins of 0.0033 and 0.3479, respectively. The proposed TransUnet segmentation model significantly enhances the accuracy of cervical cancer tissue delineation compared to alternative models, demonstrating superior performance in overall segmentation efficacy.
This methodology can improve clinical diagnostic efficiency as an automated image analysis tool tailored for cervical cancer diagnosis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
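The cervical-cancer study reports its headline numbers as DSC values. For reference, a minimal sketch of the Dice similarity coefficient on binary masks (illustrative, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A|+|B|), with the empty-vs-empty case defined as 1."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0
```

AHD, the study's other metric, is a boundary-distance measure (lower is better), so it complements the overlap-based DSC.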
{"title":"A Novel Pipeline for Adrenal Gland Segmentation: Integration of a Hybrid Post-Processing Technique with Deep Learning.","authors":"Michael Fayemiwo, Bryan Gardiner, Jim Harkin, Liam McDaid, Punit Prakash, Michael Dennedy","doi":"10.1007/s10278-025-01449-y","DOIUrl":"https://doi.org/10.1007/s10278-025-01449-y","url":null,"abstract":"<p><p>Accurate segmentation of adrenal glands from CT images is essential for enhancing computer-aided diagnosis and surgical planning. However, the small size, irregular shape, and proximity to surrounding tissues make this task highly challenging. This study introduces a novel pipeline that significantly improves the segmentation of left and right adrenal glands by integrating advanced pre-processing techniques and a robust post-processing framework. Utilising a 2D UNet architecture with various backbones (VGG16, ResNet34, InceptionV3), the pipeline leverages test-time augmentation (TTA) and targeted removal of unconnected regions to enhance accuracy and robustness. Our results demonstrate a substantial improvement, with a 38% increase in the Dice similarity coefficient for the left adrenal gland and an 11% increase for the right adrenal gland on the AMOS dataset, achieved by the InceptionV3 model. Additionally, the pipeline significantly reduces false positives, underscoring its potential for clinical applications and its superiority over existing methods. 
These advancements make our approach a crucial contribution to the field of medical image segmentation.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
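The adrenal pipeline above couples test-time augmentation with "targeted removal of unconnected regions." One standard way to implement such removal is to keep only the largest connected component of the predicted mask; a dependency-free sketch follows (the authors' actual post-processing may differ, e.g. keeping one component per gland side):

```python
from collections import deque
import numpy as np

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a 2D
    binary mask, discarding small unconnected false-positive regions."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    best, best_size = np.zeros_like(mask), 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best_size:
                    best_size = len(comp)
                    best = np.zeros_like(mask)
                    for y, x in comp:
                        best[y, x] = True
    return best
```

In practice a labeling routine such as `scipy.ndimage.label` does the same job in a few lines; the explicit flood fill here just keeps the sketch self-contained.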
{"title":"Landscape of 2D Deep Learning Segmentation Networks Applied to CT Scan from Lung Cancer Patients: A Systematic Review.","authors":"Somayeh Sadat Mehrnia, Zhino Safahi, Amin Mousavi, Fatemeh Panahandeh, Arezoo Farmani, Ren Yuan, Arman Rahmim, Mohammad R Salmanpour","doi":"10.1007/s10278-025-01458-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01458-x","url":null,"abstract":"<p><strong>Background: </strong>The increasing rates of lung cancer emphasize the need for early detection through computed tomography (CT) scans, enhanced by deep learning (DL) to improve diagnosis, treatment, and patient survival. This review examines current and prospective applications of 2D DL networks in lung cancer CT segmentation, summarizing research and highlighting essential concepts and gaps.</p><p><strong>Methods: </strong>Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a systematic search of peer-reviewed studies from 01/2020 to 12/2024 on data-driven population segmentation using structured data was conducted across databases including Google Scholar, PubMed, Science Direct, IEEE (Institute of Electrical and Electronics Engineers) Xplore, and the ACM (Association for Computing Machinery) Digital Library. A total of 124 studies met the inclusion criteria and were analyzed.</p><p><strong>Results: </strong>The LIDC-IDRI dataset was the most frequently used; the reviewed studies relied predominantly on supervised learning with labeled data. The UNet model and its variants were the most frequently used models in medical image segmentation, achieving Dice Similarity Coefficients (DSC) of up to 0.9999. The reviewed studies primarily exhibit significant gaps in addressing class imbalances (67%), underuse of cross-validation (21%), and poor model stability evaluations (3%).
Additionally, 88% did not address missing data, and generalizability concerns were discussed in only 34% of cases.</p><p><strong>Conclusions: </strong>The review emphasizes the importance of Convolutional Neural Networks, particularly UNet, in lung CT analysis and advocates for a combined 2D/3D modeling approach. It also highlights the need for larger, diverse datasets and the exploration of semi-supervised and unsupervised learning to enhance automated lung cancer diagnosis and early detection.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial-Temporal Information Fusion for Thyroid Nodule Segmentation in Dynamic Contrast-Enhanced MRI: A Novel Approach.","authors":"Binze Han, Qian Yang, Xuetong Tao, Meini Wu, Long Yang, Wenming Deng, Wei Cui, Dehong Luo, Qian Wan, Zhou Liu, Na Zhang","doi":"10.1007/s10278-025-01463-0","DOIUrl":"https://doi.org/10.1007/s10278-025-01463-0","url":null,"abstract":"<p><p>This study aims to develop a novel segmentation method that utilizes spatio-temporal information for segmenting two-dimensional thyroid nodules on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Leveraging medical morphology knowledge of the thyroid gland, we designed a semi-supervised segmentation model that first segments the thyroid gland, guiding the model to focus exclusively on the thyroid region. This approach reduces the complexity of nodule segmentation by filtering out irrelevant regions and artifacts. Then, we introduced a method to explicitly extract temporal information from DCE-MRI data and integrated this with spatial information. The fusion of spatial and temporal features enhances the model's robustness and accuracy, particularly in complex imaging scenarios. Experimental results demonstrate that the proposed method significantly improves segmentation performance across multiple state-of-the-art models. The Dice similarity coefficient (DSC) increased by 8.41%, 7.05%, 9.39%, 11.53%, 20.94%, 17.94%, and 15.65% for U-Net, U-Net++, SegNet, TransUnet, Swin-Unet, SSTrans-Net, and VM-Unet, respectively, and significantly improved the segmentation accuracy of nodules of different sizes.
These results highlight the effectiveness of our spatial-temporal approach in achieving accurate and reliable thyroid nodule segmentation, offering a promising framework for clinical applications and future research in medical image analysis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
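The thyroid DCE-MRI work above "explicitly extracts temporal information" before fusing it with spatial features. The abstract does not specify the extraction, so the following is only a sketch of one conventional choice: per-pixel enhancement-curve descriptors that can be stacked as extra input channels (function name and the three descriptors are illustrative assumptions):

```python
import numpy as np

def temporal_features(series):
    """Collapse a DCE time series (T, H, W) into simple per-pixel temporal
    channels: peak enhancement, time-to-peak, and early wash-in slope."""
    series = series.astype(np.float64)
    baseline = series[0]
    enh = series - baseline                 # enhancement over baseline
    peak = enh.max(axis=0)                  # maximum enhancement
    ttp = enh.argmax(axis=0).astype(np.float64)  # frame index of the peak
    washin = enh[1] - enh[0]                # first-interval slope
    return np.stack([peak, ttp, washin])    # (3, H, W) feature map
```

Channels like these can be concatenated with the anatomical image before the encoder, which is one simple way to realize the spatial-temporal fusion the abstract describes.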
{"title":"Automated Tumor Segmentation in Breast-Conserving Surgery Using Deep Learning on Breast Tomosynthesis.","authors":"Wen-Pei Wu, Yu-Wen Chen, Hwa-Koon Wu, Dar-Ren Chen, Yu-Len Huang","doi":"10.1007/s10278-025-01457-y","DOIUrl":"https://doi.org/10.1007/s10278-025-01457-y","url":null,"abstract":"<p><p>Breast cancer is one of the leading causes of cancer-related deaths among women worldwide, with approximately 2.3 million diagnoses and 685,000 deaths in 2020. Early-stage breast cancer is often managed through breast-conserving surgery (BCS) combined with radiation therapy, which aims to preserve the breast's appearance while reducing recurrence risks. This study aimed to enhance intraoperative tumor segmentation using digital breast tomosynthesis (DBT) during BCS. A deep learning model, specifically an improved U-Net architecture incorporating a convolutional block attention module (CBAM), was utilized to delineate tumor margins with high precision. The system was evaluated on 51 patient cases by comparing automated segmentation with manually delineated contours and pathological assessments. Results showed that the proposed method achieved promising accuracy, with Intersection over Union (IoU) and Dice coefficients of 0.866 and 0.928, respectively, demonstrating its potential to improve intraoperative margin assessment and surgical outcomes.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
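The tomosynthesis study reports both IoU (0.866) and Dice (0.928). The two overlap measures are deterministically related by Dice = 2·IoU/(1+IoU), which the reported pair satisfies (2 × 0.866 / 1.866 ≈ 0.928). A minimal sketch of IoU on binary masks (illustrative, not the authors' code):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union for two binary masks, with the
    empty-vs-empty case defined as 1."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```

Because of the fixed relation above, reporting both metrics is redundant in the binary single-mask case, but doing so eases comparison across papers that report only one.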
{"title":"Utilization of an Electronic Health Record Embedded Enterprise Health Data Exchange: A Single Institute Experience.","authors":"Joshua Volin, Vidya Viswanathan, Peter Harri, Colin Segovis, Nabile Safdar, Elias Kikano","doi":"10.1007/s10278-025-01459-w","DOIUrl":"https://doi.org/10.1007/s10278-025-01459-w","url":null,"abstract":"<p><p>Evaluate the demand, volume, and institutional utilization of an EHR-based platform for health data exchange in a single US academic system. A retrospective review (3/2023-4/2024) spanned 11 hospitals and over 500 outpatient sites. Analytic reports from the Epic Care Everywhere Image Exchange Advanced Platform (Verona, WI) captured inbound (requested internally) and outbound (accessed externally) data volumes, including thumbnails and subsequent reference-quality key image retrievals. Data types were categorized as DICOM or non-DICOM (e.g., JPG, PDF). Counts, retrieval rates, and geographic patterns were analyzed. Inbound data involved 1.6 million patients from 370 external organizations, with key image retrieval rates of 5% for DICOM and 3% for non-DICOM; 77% of inbound thumbnails originated in-state. From 3/2023 to 4/2024, 1.9 million outbound thumbnails were accessed by 1478 external institutions, with a 78% reference-quality key image retrieval rate. Most outbound thumbnails were non-DICOM (66%); however, overall DICOM retrieval rates (5%) were higher than non-DICOM (3%). In-state institutions retrieved the most thumbnails (1,156,896/59%), whereas out-of-state sites requested more reference-quality key images (1,034,614/67%). Top users included various private and public health sectors. Findings show a growing reliance on EHR-embedded health data exchange. The abundance of inbound non-DICOM underscores interoperability challenges, while high DICOM retrieval rates emphasize clinical importance. Geographic disparities highlight the need for standardized solutions to improve continuity of care.
Moreover, the proposed HTI-2 rule mandates stronger data exchange measures, reinforcing the urgency for scalability. Health data exchange is in high demand as patients increasingly seek care across the US.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Appropriateness of Thyroid Nodule Cancer Risk Assessment and Management Recommendations Provided by Large Language Models.","authors":"Mohammad Alarifi","doi":"10.1007/s10278-025-01454-1","DOIUrl":"https://doi.org/10.1007/s10278-025-01454-1","url":null,"abstract":"<p><p>The study evaluates the appropriateness and reliability of thyroid nodule cancer risk assessment recommendations provided by large language models (LLMs) ChatGPT, Gemini, and Claude in alignment with clinical guidelines from the American Thyroid Association (ATA) and the National Comprehensive Cancer Network (NCCN). A team comprising a medical imaging informatics specialist and two radiologists developed 24 clinically relevant questions based on ATA and NCCN guidelines. The readability of AI-generated responses was evaluated using the Readability Scoring System. A total of 322 radiologists in training or practice from the United States, recruited via Amazon Mechanical Turk, assessed the AI responses. Quantitative analysis using SPSS measured the appropriateness of recommendations, while qualitative feedback was analyzed through Dedoose. The study compared the performance of the three AI models, ChatGPT, Gemini, and Claude, in providing appropriate recommendations. Paired samples t-tests showed no statistically significant differences in overall performance among the models. Claude achieved the highest mean score (21.84), followed closely by ChatGPT (21.83) and Gemini (21.47). Inappropriate response rates did not differ significantly, though Gemini showed a trend toward higher rates. However, ChatGPT achieved the highest accuracy (92.5%) in providing appropriate responses, followed by Claude (92.1%) and Gemini (90.4%). Qualitative feedback highlighted ChatGPT's clarity and structure, Gemini's accessibility but shallowness, and Claude's organization with occasional divergence from focus.
LLMs like ChatGPT, Gemini, and Claude show potential in supporting thyroid nodule cancer risk assessment but require clinical oversight to ensure alignment with guidelines. Claude and ChatGPT performed nearly identically overall, with Claude having the highest mean score, though the difference was marginal. Further development is necessary to enhance their reliability for clinical use.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
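The LLM comparison above rests on paired-samples t-tests across the models' scores. As a reference for the statistic involved, a dependency-free sketch of the paired t statistic (significance would then be read from a t distribution with n-1 degrees of freedom; the function name is illustrative, and in practice a library routine such as `scipy.stats.ttest_rel` would be used):

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic for two matched lists of scores:
    mean difference divided by the standard error of the differences."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

A paired test is the right choice here because each rater scored all three models on the same questions, so the scores are matched rather than independent samples.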