{"title":"Slice2Mesh: 3D Surface Reconstruction From Sparse Slices of Images for the Left Ventricle","authors":"Jia Xiao;Wen Zheng;Wenji Wang;Qing Xia;Zhennan Yan;Qian Guo;Xiao Wang;Shaoping Nie;Shaoting Zhang","doi":"10.1109/TMI.2024.3514869","DOIUrl":"10.1109/TMI.2024.3514869","url":null,"abstract":"Cine MRI is a widely used technique to evaluate left ventricular function and motion, as it captures temporal information. However, due to its limited spatial resolution, cine MRI provides only a few sparse scans at regular positions and orientations, which poses challenges for reconstructing the dense 3D cardiac structures essential for understanding cardiac structure and motion in a dynamic 3D manner. In this study, we propose a novel learning-based 3D cardiac surface reconstruction method, Slice2Mesh, which directly predicts accurate and high-fidelity 3D meshes from sparse slices of cine MRI images under partial supervision of sparse contour points. Slice2Mesh leverages a 2D UNet to extract image features and a graph convolutional network to predict deformations from an initial template to various 3D surfaces, which enables it to produce topology-consistent meshes that can better characterize and analyze cardiac movement. We also introduce an As-Rigid-As-Possible energy in the deformation loss to preserve the intrinsic structure of the predefined template and produce realistic left ventricular shapes. We evaluated our method on 150 clinical test samples and achieved an average chamfer distance of 3.621 mm, outperforming traditional methods by approximately 2.5 mm. 
We also applied our method to produce 4D surface meshes from cine MRI sequences and utilized a simple SVM model on these 4D heart meshes to identify subjects with myocardial infarction, and achieved a classification sensitivity of 91.8% on 99 test subjects, including 49 abnormal patients, which implies great potential of our method for clinical use.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 3","pages":"1541-1555"},"PeriodicalIF":0.0,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boundary Constraint-Free Biomechanical Model-Based Surface Matching for Intraoperative Liver Deformation Correction","authors":"Zixin Yang;Richard Simon;Kelly Merrell;Cristian A. Linte","doi":"10.1109/TMI.2024.3515632","DOIUrl":"10.1109/TMI.2024.3515632","url":null,"abstract":"In image-guided liver surgery, 3D-3D non-rigid registration methods play a crucial role in estimating the mapping between the preoperative model and the intraoperative surface represented as point clouds, addressing the challenge of tissue deformation. Typically, these methods incorporate a biomechanical model, represented as a finite element model (FEM), into the strain energy term to regularize a surface matching term. We propose a 3D-3D non-rigid registration method that incorporates a modified FEM into the surface matching term. The modified FEM alleviates the need to specify boundary conditions, which is achieved by modifying the stiffness matrix of a FEM and using diagonal loading for stabilization. As a result, the modified surface matching term does not require the specification of boundary conditions or an additional strain energy term to regularize the surface matching term. Optimization is achieved through an accelerated gradient algorithm, further enhanced by our proposed method for determining the optimal step size. We evaluated our method and compared it to several state-of-the-art methods across various datasets. Our straightforward and effective approach consistently outperformed or achieved comparable performance to the state-of-the-art methods. 
Our code and datasets are available at <uri>https://github.com/zixinyang9109/BCF-FEM</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1723-1734"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142809268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pan-Cancer Histopathology WSI Pre-Training With Position-Aware Masked Autoencoder","authors":"Kun Wu;Zhiguo Jiang;Kunming Tang;Jun Shi;Fengying Xie;Wei Wang;Haibo Wu;Yushan Zheng","doi":"10.1109/TMI.2024.3513358","DOIUrl":"10.1109/TMI.2024.3513358","url":null,"abstract":"Large-scale pre-training models have promoted the development of histopathology image analysis. However, existing self-supervised methods for histopathology images primarily focus on learning patch features, while there is a notable gap in the availability of pre-training models specifically designed for WSI-level feature learning. In this paper, we propose a novel self-supervised learning framework for pan-cancer WSI-level representation pre-training with the designed position-aware masked autoencoder (PAMA). Meanwhile, we propose the position-aware cross-attention (PACA) module with a kernel reorientation (KRO) strategy and an anchor dropout (AD) mechanism. The KRO strategy can capture the complete semantic structure and eliminate ambiguity in WSIs, and the AD contributes to enhancing the robustness and generalization of the model. We evaluated our method on 7 large-scale datasets from multiple organs for pan-cancer classification tasks. The results have demonstrated the effectiveness and generalization of PAMA in discriminative WSI representation learning and pan-cancer WSI pre-training. The proposed method was also compared with 8 WSI analysis methods. The experimental results have indicated that our proposed PAMA is superior to the state-of-the-art methods. 
The code and checkpoints are available at <uri>https://github.com/WkEEn/PAMA</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1610-1623"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142809267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"POUR-Net: A Population-Prior-Aided Over-Under-Representation Network for Low-Count PET Attenuation Map Generation","authors":"Bo Zhou;Jun Hou;Tianqi Chen;Yinchi Zhou;Xiongchao Chen;Huidong Xie;Qiong Liu;Xueqi Guo;Menghua Xia;Yu-Jung Tsai;Vladimir Y. Panin;Takuya Toyonaga;James S. Duncan;Chi Liu","doi":"10.1109/TMI.2024.3514925","DOIUrl":"10.1109/TMI.2024.3514925","url":null,"abstract":"Low-dose PET offers a valuable means of minimizing radiation exposure in PET imaging. However, the prevalent practice of employing additional CT scans for generating attenuation maps (<inline-formula> <tex-math>$\mu $ </tex-math></inline-formula>-map) for PET attenuation correction significantly elevates radiation doses. To address this concern and further mitigate radiation exposure in low-dose PET exams, we propose an innovative Population-prior-aided Over-Under-Representation Network (POUR-Net) that aims for high-quality attenuation map generation from low-dose PET. First, POUR-Net incorporates an Over-Under-Representation Network (OUR-Net) to facilitate efficient feature extraction, encompassing both low-resolution abstracted and fine-detail features, for assisting deep generation on the full-resolution level. Second, complementing OUR-Net, a population prior generation machine (PPGM) utilizing a comprehensive CT-derived <inline-formula> <tex-math>$\mu $ </tex-math></inline-formula>-map dataset provides additional prior information to aid OUR-Net generation. The integration of OUR-Net and PPGM within a cascade framework enables iterative refinement of <inline-formula> <tex-math>$\mu $ </tex-math></inline-formula>-map generation, resulting in the production of high-quality <inline-formula> <tex-math>$\mu $ </tex-math></inline-formula>-maps. 
Experimental results underscore the effectiveness of POUR-Net as a promising solution for accurate CT-free low-count PET attenuation correction, surpassing the performance of previous baseline methods.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1699-1710"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142804778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Minibatch Alternating Projections Algorithm for Robust and Efficient Magnitude Least-Squares RF Pulse Design in MRI","authors":"Jonathan B. Martin;Charlotte R. Sappo;Benjamin M. Hardy;William A. Grissom","doi":"10.1109/TMI.2024.3515035","DOIUrl":"10.1109/TMI.2024.3515035","url":null,"abstract":"A magnitude-least-squares radiofrequency pulse design algorithm is reported which uses interleaved exact and stochastically-generated inexact updates to escape local minima and find low-cost solutions. Inexact updates are performed using a small randomly selected minibatch of the available <inline-formula> <tex-math>${B}_{{1}}^{+}$ </tex-math></inline-formula> measurements to update RF pulse weights, which perturbs the sequence of alternating projections. Applications to RF shimming, parallel transmit spokes RF pulse design, and spectral-spatial RF pulse design are considered. Numerical and simulation studies characterized the optimal minibatch size, which was found to consistently produce lower power and lower RMSE solutions across subjects, coil geometries, <inline-formula> <tex-math>${B}_{{1}}^{+}$ </tex-math></inline-formula> resolutions and orientations. The method was validated in-vivo at 7 Tesla and produced improvements in image quality in a slice-by-slice RF-shimmed imaging sequence. 
Compared to conventional methods, the pulse design method can more robustly design RF pulses that correct for <inline-formula> <tex-math>${B}_{{1}}^{+}$ </tex-math></inline-formula> inhomogeneities at ultra-high field strengths, and enable pulse designs to be completed with increased computational efficiency.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 3","pages":"1556-1567"},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142809269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heterogeneous Graph Representation Learning Framework for Resting-State Functional Connectivity Analysis","authors":"Guangqi Wen;Peng Cao;Lingwen Liu;Maochun Hao;Siyu Liu;Junjie Zheng;Jinzhu Yang;Osmar R. Zaiane;Fei Wang","doi":"10.1109/TMI.2024.3512603","DOIUrl":"10.1109/TMI.2024.3512603","url":null,"abstract":"Brain functional connectivity analysis is important for understanding brain development and brain disorders. Recent studies have suggested that the variations of functional connectivity among multiple subnetworks are closely related to the development of diseases. However, existing works fail to sufficiently capture the complex correlation patterns among the subnetworks and ignore the learning of heterogeneous structural information across the subnetworks. To address these issues, we formulate a new paradigm for constructing and analyzing high-order heterogeneous functional brain networks via meta-paths and propose a Heterogeneous Graph representation Learning framework (BrainHGL). Our framework consists of three key aspects: 1) Meta-path encoding for capturing rich heterogeneous topological information, 2) Meta-path interaction for exploiting complex association patterns among subnetworks and 3) Meta-path aggregation for better meta-path fusion. To the best of our knowledge, we are the first to formulate heterogeneous brain networks for better exploiting the relationship between subnetwork interactions and mental disease. We evaluate BrainHGL on the private Nanjing Medical University dataset (center NMU) and the public Autism Brain Imaging Data Exchange (ABIDE) dataset. We demonstrate the effectiveness of the proposed model across various disease classification tasks, including major depressive disorder (MDD), bipolar disorder (BD) and autism spectrum disorder (ASD) diagnoses. 
In addition, our model provides deeper insights into disease interpretability, including the critical brain subnetwork connectivities, brain regions and functional pathways. Our model also identified disease subtypes consistent with previous neuroscientific studies, which further benefits disease identification performance. The code is available at <uri>https://github.com/IntelliDAL/Graph/BrainHGL</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 3","pages":"1581-1595"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142796994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Lymph Node Metastasis in Colorectal Cancer Using Intraoperative Fluorescence Multi-Modal Imaging","authors":"Xiaobo Zhu;He Sun;Yuhan Wang;Gang Hu;Lizhi Shao;Song Zhang;Fucheng Liu;Chongwei Chi;Kunshan He;Jianqiang Tang;Yu An;Jie Tian;Zhenyu Liu","doi":"10.1109/TMI.2024.3510836","DOIUrl":"10.1109/TMI.2024.3510836","url":null,"abstract":"The diagnosis of lymph node metastasis (LNM) is essential for colorectal cancer (CRC) treatment. The primary method of identifying LNM is to perform frozen sections and pathologic analysis, but this method is labor-intensive and time-consuming. Therefore, combining intraoperative fluorescence imaging with deep learning (DL) methods can improve efficiency. The majority of recent studies only analyze uni-modal fluorescence imaging, which provides less semantic information. In this work, we mainly established a multi-modal fluorescence imaging feature fusion prediction (MFI-FFP) model combining white light, fluorescence, and pseudo-color imaging of lymph nodes for LNM prediction. Firstly, based on the properties of various modal imaging, distinct feature extraction networks are chosen for feature extraction, which could significantly enhance the complementarity of various modal information. Secondly, the multi-modal feature fusion (MFF) module, which combines global and local information, is designed to fuse the extracted features. Furthermore, a novel loss function is formulated to tackle the issue of imbalanced samples, challenges in differentiating samples, and enhancing sample variety. Lastly, the experiments show that the model has a higher area under the receiver operating characteristic (ROC) curve (AUC), accuracy (ACC), and F1 score than the uni-modal and bi-modal models and has a better performance compared to other efficient image classification networks. 
Our study demonstrates that the MFI-FFP model has the potential to help doctors predict LNM and shows its promise in medical image analysis.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 3","pages":"1568-1580"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10787150","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142796995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ultrasound-Aided Large-Scale Optoacoustic Microscopy for Volumetric Angiography and Oximetry","authors":"Weiye Li;Zhenyue Chen;Yu-Hang Liu;Johannes Rebling;Urs A. T. Hofmann;Daniil Nozdriukhin;Xosé Luís Deán-Ben;Daniel Razansky","doi":"10.1109/TMI.2024.3512415","DOIUrl":"10.1109/TMI.2024.3512415","url":null,"abstract":"Given its direct relationship to tissue metabolism and various pathological processes, 3D mapping of blood oxygen saturation (sO2) is essential for advancing our knowledge on oxygen delivery to tissues and evaluating therapeutic efficacy. Optoacoustic microscopy has enabled label-free estimation of sO2 values by exploiting the spectrally distinctive absorption of hemoglobin in its oxygenated and deoxygenated forms. However, quantitative 3D mapping of sO2 distribution over large heterogenous tissue regions is commonly hindered due to the strong spatial and spectral variability of the excitation light fluence. Herein, we capitalize on hybridization between pulse-echo ultrasound and large-scale spectroscopic optoacoustic microscopy readings to accurately delineate the tissue surface, achieve depth-resolved tissue layer segmentation, and comprehensively evaluate the main causes behind inaccurate sO2 estimations with optoacoustic microscopy. Compensation for wavelength-dependent light fluence variations due to relative reflectance and attenuation through multiple tissue layers is further shown to remove spectral noise and restore physiologically relevant sO2 values in the images recorded from the mouse ear and the dorsal murine skin. 
The ultrasound-aided large-scale optoacoustic microscopy (uLSOM) approach is thus expected to enhance applicability of optoacoustic microscopy for quantitative label-free imaging of tissue oxygenation and metabolism.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1636-1647"},"PeriodicalIF":0.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142782682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Re-Isotropic Segmentation for Subcellular Ultrastructure in Anisotropic EM Images","authors":"Jinyue Guo;Zejin Wang;Hao Zhai;Yanchao Zhang;Jing Liu;Hua Han","doi":"10.1109/TMI.2024.3511599","DOIUrl":"10.1109/TMI.2024.3511599","url":null,"abstract":"Despite advances in ultrathin cutting, serial sections in electron microscopy (EM) still exhibit noticeable anisotropy, with much lower z-axis resolution compared with the other two axes. As a result, the imaged biovolume suffers from low connectivity smoothness in contextual structures, which makes the subcellular ultrastructural segmentation challenging. The recent 2.5D hybrid convolutions allow the direct learning of asymmetric semantics from anisotropic features. However, plain representations without the isotropic scale prior limit the performance of the upper bound. This paper presents a novel framework, referred to as ReIsoSeg, which aims at incorporating an isotropic scaling prior into anisotropic biovolumes. More precisely, ReIsoSeg consists of an anisotropic primary encoder, a pseudo-isotropic auxiliary module, and a weight-shared decoder. The auxiliary module implicitly deforms the anisotropic features from the primary encoder to align with the isotropic prior. The re-isotropic loss squeezes the pseudo-isotropic representations into the anisotropic space to reuse the anisotropic labels. The shared decoder ensures that the outputs of the anisotropic encoder converge towards the isotropic representations. During the inference process, the auxiliary module is excluded. Comprehensive experiments were conducted on the AC3/AC4, CREMI, and MitoEM subcellular ultrastructure datasets. 
The obtained results demonstrate the high performance of the proposed ReIsoSeg.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1624-1635"},"PeriodicalIF":0.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142782683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MT-CooL: Multi-Task Cooperative Learning via Flat Minima Searching","authors":"Fuping Wu;Le Zhang;Yang Sun;Yuanhan Mo;Thomas E. Nichols;Bartłomiej W. Papież","doi":"10.1109/TMI.2024.3512173","DOIUrl":"10.1109/TMI.2024.3512173","url":null,"abstract":"While multi-task learning (MTL) has been widely developed for natural image analysis, its potential for enhancing performance in medical imaging remains relatively unexplored. Most methods formulate MTL as a multi-objective problem, inherently forcing all tasks to compete with each other during optimization. In this work, we propose a novel approach by formulating MTL as a multi-level optimization problem, in which the features learned from one task are optimized by benefiting from the other tasks. Specifically, we advocate for a cooperative approach where each task considers the features of others, enabling individual performance enhancement without detriment to others. To achieve this objective, we introduce a novel optimization strategy aimed at seeking flat minima for each sub-problem, fostering the learning of robust sub-models resilient to changes in other sub-models. We demonstrate the advantages of our proposed method through comprehensive parameter and comparison studies on the OrganCMNIST dataset. Additionally, we evaluate its efficacy on three eye-related medical image datasets, comparing its performance against other state-of-the-art MTL approaches. 
The results highlight the superiority of our method over existing approaches, showcasing its potential for training multi-purpose models in medical image analysis.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1648-1658"},"PeriodicalIF":0.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142782455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}