{"title":"Automated Web-based Software for CT Quality Control Testing of Low-contrast Detectability using Model Observers.","authors":"Zhongxing Zhou, Jarod Wellinghoff, Mingdong Fan, Scott Hsieh, David Holmes, Cynthia H McCollough, Lifeng Yu","doi":"10.1117/12.3008777","DOIUrl":"https://doi.org/10.1117/12.3008777","url":null,"abstract":"<p><p>The channelized Hotelling observer (CHO) correlates well with human observer performance in many CT detection/classification tasks, but it has not been widely adopted in routine CT quality control and performance evaluation, mainly because an easily available, efficient, and validated software tool has been lacking. We developed a highly automated solution: CT image quality evaluation and Protocol Optimization (CTPro), a web-based software platform that includes the CHO and traditional image quality assessment tools such as the modulation transfer function and noise power spectrum. This tool provides easy access to the CHO for both the research and clinical communities and enables efficient, accurate image quality evaluation without the need to install additional software. Its application was demonstrated by comparing the low-contrast detectability of a clinical photon-counting-detector (PCD) CT with that of a traditional energy-integrating-detector (EID) CT, which showed UHR-T3D had 6.2% higher d' than EID-CT with IR (p = 0.047) and 4.1% lower d' without IR (p = 0.122).</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008424/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140874176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tractography with T1-weighted MRI and associated anatomical constraints on clinical quality diffusion MRI.","authors":"Tian Yu, Yunhe Li, Michael E Kim, Chenyu Gao, Qi Yang, Leon Y Cai, Susane M Resnick, Lori L Beason-Held, Daniel C Moyer, Kurt G Schilling, Bennett A Landman","doi":"10.1117/12.3006286","DOIUrl":"10.1117/12.3006286","url":null,"abstract":"<p><p>Diffusion MRI (dMRI) streamline tractography, the gold standard for in vivo estimation of white matter (WM) pathways in the brain, has long been considered a product of WM microstructure. However, recent advances in tractography demonstrated that convolutional recurrent neural networks (CoRNN) trained with a teacher-student framework can learn to propagate streamlines directly from T1-weighted images and anatomical context. Training for this network has previously relied on high-resolution dMRI. In this paper, we generalize the training mechanism to traditional clinical-resolution data, which allows generalizability across sensitive and susceptible study populations. We train CoRNN on a small subset of the Baltimore Longitudinal Study of Aging (BLSA), which better resembles clinical scans. We define a metric, termed the epsilon ball seeding method, to compare T1 tractography and traditional diffusion tractography at the streamline level. We show that, under this metric, T1 tractography generated by CoRNN reproduces diffusion tractography with approximately three millimeters of error.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364406/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142115752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fourier Diffusion for Sparse CT Reconstruction.","authors":"Anqi Liu, Grace J Gang, J Webster Stayman","doi":"10.1117/12.3008622","DOIUrl":"10.1117/12.3008622","url":null,"abstract":"<p><p>Sparse CT reconstruction continues to be an area of interest in a number of novel imaging systems. Many different approaches have been tried, including model-based methods, compressed sensing approaches, and, most recently, deep-learning-based processing. Diffusion models, in particular, have become extremely popular due to their ability to effectively encode rich information about images and to allow for posterior sampling to generate many possible outputs. One drawback of diffusion models is that their recurrent structure tends to be computationally expensive. In this work, we apply a new Fourier diffusion approach that permits processing with many fewer time steps than the standard scalar diffusion model. We present an extension of the Fourier diffusion technique and evaluate it in a simulated breast cone-beam CT system with a sparse-view acquisition.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378968/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of data uncertainty for deep-learning-based CT noise reduction using ensemble patient data and a virtual imaging trial framework.","authors":"Zhongxing Zhou, Scott S Hsieh, Hao Gong, Cynthia H McCollough, Lifeng Yu","doi":"10.1117/12.3008581","DOIUrl":"https://doi.org/10.1117/12.3008581","url":null,"abstract":"<p><p>Deep learning-based image reconstruction and noise reduction (DLIR) methods have been increasingly deployed in clinical CT. Accurate assessment of their data uncertainty properties is essential to understand the stability of DLIR in response to noise. In this work, we aim to evaluate the data uncertainty of a DLIR method using real patient data and a virtual imaging trial framework, and to compare it with filtered backprojection (FBP) and iterative reconstruction (IR). The ensemble of noise realizations was generated by using a realistic projection-domain noise insertion technique. The impact of varying dose levels and denoising strengths was investigated for a ResNet-based deep convolutional neural network (DCNN) model trained using patient images. On the uncertainty maps, DCNN shows more detailed structures than IR, although its bias map has less structural dependency, which implies that DCNN is more sensitive to small changes in the input. Both visual examples and histogram analysis demonstrated that hotspots of uncertainty in DCNN may be associated with a higher chance of distortion from the truth than IR, but may also correspond to better detection performance for some of the small structures.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008675/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140871749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting human effort needed to correct auto-segmentations.","authors":"Da He, Jayaram K Udupa, Yubing Tong, Drew A Torigian","doi":"10.1117/12.3006471","DOIUrl":"10.1117/12.3006471","url":null,"abstract":"<p><p>Medical image auto-segmentation techniques are basic and critical for numerous image-based analysis applications that play an important role in developing advanced and personalized medicine. Compared with manual segmentation, auto-segmentation is expected to contribute to a more efficient clinical routine and workflow by requiring fewer human interventions or revisions. However, current auto-segmentation methods are usually developed with the help of popular segmentation metrics that do not directly consider human correction behavior. The Dice coefficient (DC) focuses on the truly segmented areas, while the Hausdorff distance (HD) only measures the maximal distance between the auto-segmentation boundary and the ground truth boundary. Boundary length-based metrics such as surface DC (surDC) and Added Path Length (APL) try to distinguish truly predicted boundary pixels from wrong ones. It is uncertain whether these metrics can reliably indicate the required manual mending effort for application in segmentation research. Therefore, in this paper, the potential use of the above four metrics, as well as a novel metric called Mendability Index (MI), to predict the human correction effort is studied with linear and support vector regression models. A total of 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions, with corresponding auto-segmentations and ground truth segmentations, are utilized to train and test the prediction models. The five-fold cross-validation experiments demonstrate that meaningful human effort prediction can be achieved using segmentation metrics, with varying prediction errors for different objects. The improved variant of MI, called MIhd, generally shows the best prediction performance, suggesting its potential to reliably indicate the clinical value of auto-segmentations.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12931 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11218903/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Hierarchical Transformers for Whole Brain Segmentation with Intracranial Measurements Integration.","authors":"Xin Yu, Yucheng Tang, Qi Yang, Ho Hin Lee, Shunxing Bao, Yuankai Huo, Bennett A Landman","doi":"10.1117/12.3009084","DOIUrl":"10.1117/12.3009084","url":null,"abstract":"<p><p>Whole brain segmentation with magnetic resonance imaging (MRI) enables the non-invasive measurement of brain regions, including total intracranial volume (TICV) and posterior fossa volume (PFV). Enhancing existing whole brain segmentation methodology to incorporate intracranial measurements offers a heightened level of comprehensiveness in the analysis of brain structures. Despite its potential, generalizing deep learning techniques to intracranial measurements faces data availability constraints due to the limited number of manually annotated atlases encompassing whole brain and TICV/PFV labels. In this paper, we enhance the hierarchical transformer UNesT for whole brain segmentation so that it segments the whole brain into 133 classes and estimates TICV/PFV simultaneously. To address the problem of data scarcity, the model is first pretrained on 4859 T1-weighted (T1w) 3D volumes sourced from 8 different sites. These volumes are processed through a multi-atlas segmentation pipeline for label generation, while TICV/PFV labels remain unavailable at this stage. Subsequently, the model is finetuned with 45 T1w 3D volumes from the Open Access Series of Imaging Studies (OASIS), where both the 133 whole brain classes and TICV/PFV labels are available. We evaluate our method with the Dice similarity coefficient (DSC). We show that our model is able to conduct precise TICV/PFV estimation while maintaining performance on the 132 brain regions at a comparable level. Code and trained model are available at: https://github.com/MASILab/UNesT/wholebrainSeg.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12930 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364374/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142117044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nucleus subtype classification using inter-modality learning.","authors":"Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman","doi":"10.1117/12.3006237","DOIUrl":"10.1117/12.3006237","url":null,"abstract":"<p><p>Understanding the way cells communicate, co-locate, and interrelate is essential to understanding human physiology. Hematoxylin and eosin (H&E) staining is ubiquitously available both for clinical studies and research. The Colon Nucleus Identification and Classification (CoNIC) Challenge has recently innovated on robust artificial intelligence labeling of six cell types on H&E stains of the colon. However, this is a very small fraction of the number of potential cell classification types. Specifically, the CoNIC Challenge is unable to classify epithelial subtypes (progenitor, endocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), or connective subtypes (fibroblasts, stromal). In this paper, we propose to use inter-modality learning to label previously un-labelable cell types on virtual H&E. We leveraged multiplexed immunofluorescence (MxIF) histology imaging to identify 14 subclasses of cell types. We performed style transfer to synthesize virtual H&E from MxIF and transferred the higher density labels from MxIF to these virtual H&E images. We then evaluated the efficacy of learning in this approach. We identified helper T and progenitor nuclei with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02) respectively on virtual H&E. This approach represents a promising step towards automating annotation in digital pathology.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12933 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11392413/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ASL MRI Denoising via Multi Channel Collaborative Low-Rank Regularization.","authors":"Hangfan Liu, Bo Li, Yiran Li, Rebecca Welsh, Ze Wang","doi":"10.1117/12.3005223","DOIUrl":"10.1117/12.3005223","url":null,"abstract":"<p><p>Arterial spin labeling (ASL) perfusion MRI is the only non-invasive imaging technique for quantifying regional cerebral blood flow (CBF), a fundamental physiological variable, but it has a relatively low signal-to-noise ratio (SNR). In this study, we proposed a novel ASL denoising method that simultaneously exploits the inter- and intra-channel data correlations. MRI data, including ASL MRI, have been routinely acquired with multi-channel coils, yet current denoising methods are designed for the coil-combined data. Indeed, the concurrently acquired multi-channel images differ only by coil sensitivity weighting and random noise, resulting in a strong low-rank structure of the stacked multi-channel data matrix. In our method, this matrix was formed by stacking the vectorized slices from different channels. Matrix rank was then approximately measured through the logarithm-determinant of the covariance matrix. Notably, our filtering technique is applied directly to complex data, avoiding the need to separate magnitude and phase or divide real and imaginary data, thereby ensuring minimal information loss. The degree of low-rank regularization is controlled based on the estimated noise level, striking a balance between noise removal and texture preservation. A noteworthy advantage of our framework is its freedom from parameter tuning, distinguishing it from most existing methods. Experimental results on real-world imaging data demonstrate the effectiveness of our proposed approach in significantly improving ASL perfusion quality. By effectively mitigating noise while preserving important textural information, our method showcases its potential for enhancing the utility and accuracy of ASL perfusion MRI, paving the way for improved neuroimaging studies and clinical diagnoses.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11190560/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141443895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatiospectral image processing workflow considerations for advanced MR spectroscopy of the brain.","authors":"Leon Y Cai, Stephanie N Del Tufo, Laura Barquero, Micah D'Archangel, Lanier Sachs, Laurie E Cutting, Nicole Glaser, Simona Ghetti, Sarah S Jaser, Adam W Anderson, Lori C Jordan, Bennett A Landman","doi":"10.1117/12.3005391","DOIUrl":"10.1117/12.3005391","url":null,"abstract":"<p><p>Magnetic resonance spectroscopy (MRS) is one of the few non-invasive imaging modalities capable of making neurochemical and metabolic measurements <i>in vivo</i>. Traditionally, the clinical utility of MRS has been narrow. The most common use has been the \"single-voxel spectroscopy\" variant to discern the presence of a lactate peak in the spectra in one location in the brain, typically to evaluate for ischemia in neonates. Thus, the reduction of rich spectral data to a binary variable has not classically necessitated much signal processing. However, scanners have become more powerful and MRS sequences more advanced, increasing data complexity and adding 2 to 3 spatial dimensions in addition to the spectral one. The result is a spatially- and spectrally-variant MRS image ripe for image processing innovation. Despite this potential, the logistics for robustly accessing and manipulating MRS data across different scanners, data formats, and software standards remain unclear. Thus, as research into MRS advances, there is a clear need to better characterize its image processing considerations to facilitate innovation from scientists and engineers. Building on established neuroimaging standards, we describe a framework for manipulating these images that generalizes to the voxel, spectral, and metabolite level across space and multiple imaging sites while integrating with LCModel, a widely used quantitative MRS peak-fitting platform. In doing so, we provide examples to demonstrate the advantages of such a workflow in relation to recent publications and with new data. Overall, we hope our characterizations will lower the barrier of entry to MRS processing for neuroimaging researchers.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364408/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142115751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fluoroscopic Procedure-Room Scatter-Dose Reduction Using a Region-of-Interest (ROI) Attenuator.","authors":"Martina P Orji, Kyle Williams, S V Setlur Nagesh, Stephen Rudin, Daniel R Bednarek","doi":"10.1117/12.3006856","DOIUrl":"10.1117/12.3006856","url":null,"abstract":"<p><p>During fluoroscopically-guided interventional (FGI) procedures, the dose to the patient, as well as the scatter dose to staff, can be high. However, a significant dose reduction can be achieved by using a region-of-interest (ROI) attenuator that reduces the x-ray intensity in the peripheral x-ray field while providing full field-of-view imaging. In this work, we investigated the magnitude of scatter-dose reduction to staff made possible by an ROI attenuator composed of 0.7 mm Cu with a central circular hole that projected a 5.4 cm ROI onto a Kyoto anthropomorphic phantom in the head, chest, and abdomen regions. A 150-cc ionization chamber was placed on a stand at a height of 150 cm (eye level) from the floor and at 25 cm and 50 cm lateral distance from the gantry isocenter, in a direction perpendicular to the table centerline, to measure scatter dose at different positions along the length of the table. Scatter dose per entrance air kerma (mGy/Gy) was measured with and without the ROI attenuator, and the percent scatter reduction for the ROI attenuator was determined as a function of staff position, beam energy, and gantry angulation. For head imaging, the measured percent dose reduction was 50%-65%; for chest and abdomen imaging, the scatter-dose reduction was 63%-72% at 50 cm lateral distance when using this ROI attenuator with about 20% beam transmission at 80 kVp. Overall, a considerable reduction of scattered radiation in the interventional room can be realized using an ROI attenuator.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11512733/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142514331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}