{"title":"Coherence Based Sound Speed Aberration Correction - with clinical validation in fetal ultrasound.","authors":"Anders Emil Vralstad, Peter Fosodeder, Karin Ulrike Deibele, Siri Ann Nyrnes, Ole Marius Hoel Rindal, Vibeke Skoura-Torvik, Martin Mienkina, Svein-Erik Masoy","doi":"10.1109/TMI.2026.3691415","DOIUrl":"https://doi.org/10.1109/TMI.2026.3691415","url":null,"abstract":"<p><p>The purpose of this work is to demonstrate a robust and clinically validated method for correcting sound speed aberrations in medical ultrasound. We propose a correction method that calculates the focus delays directly from the observed two-way distributed average sound speed. The method beamforms multiple coherence images and selects the sound speed that maximizes the coherence for each image pixel. The main contribution of this work is the direct estimation of aberration, without the ill-conditioned inversion of a local sound speed map, and the proposed processing of coherence images, which adapts to in vivo situations where low coherent regions and off-axis scattering represent a challenge. The method is validated in vitro and in silico showing a high correlation with the ground truth speed of sound maps. Further, the method is clinically validated by being applied to channel data recorded from 172 fetal Bmode images, and 12 case examples are presented and discussed in detail. The data is recorded with a GE HealthCare Voluson Expert 22 system with an eM6c matrix array probe. The images are evaluated by three expert clinicians, and the results show that the corrected images are preferred or gave a quality equivalent to that without correction (1540 m/s) for 72.5% of the 172 images. In addition, a sharpness metric from digital photography is used to quantify image quality improvement. The increase in sharpness and the change in average sound speed are shown to be linearly correlated with a Pearson Correlation Coefficient of 0.67.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147857846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Patient-Adaptive Echocardiography using Cognitive Ultrasound.","authors":"Wessel L Van Nierop, Oisin Nolan, Tristan S W Stevens, Ruud J G Van Sloun","doi":"10.1109/TMI.2026.3691009","DOIUrl":"https://doi.org/10.1109/TMI.2026.3691009","url":null,"abstract":"<p><p>Focused transmits are the most commonly used transmit strategy for echocardiograms, but suffer from relatively low frame rates, and in 3D, even lower volume rates. Fast imaging based on unfocused transmits has disadvantages such as motion decorrelation and limited harmonic imaging capabilities. This work introduces a patient-adaptive focused transmit and receive scheme that has the ability to drastically reduce the number of transmits needed to produce a high-quality ultrasound image. The method relies on posterior sampling with a temporal diffusion model to perceive and reconstruct the anatomy based on partial observations, while subsequently acquiring the most informative transmits. This cognitive ultrasound modality outperforms random and equispaced subsampling in terms of distortion and perceptual metrics on the 2D EchoNet-Dynamic dataset and a 3D Philips dataset, where we actively select focused elevation planes. Furthermore, our method improves generalized contrast-to-noise ratio from 0.83 to 0.89 compared to the same number of diverging wave transmits on six in-house echocardiograms. Additionally, we can segment the left ventricle, with on average 0.91 Dice-Sørensen coefficient, through simulating using 2 out of 112 lines. Finally, our method can be run in real-time on GPU accelerators from 2023, increasing the maximum achievable frame-rate from 46 Hz to 58 Hz. The code is publicly available at https://tue-bmd.github.io/casl/.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147847799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CIM-VTP: Correlation-Guided Image Modeling with Visual-Textual Task Prompt for Universal Medical Image Registration.","authors":"Housheng Xie, Xiaoru Gao, Guoyan Zheng","doi":"10.1109/TMI.2026.3690772","DOIUrl":"https://doi.org/10.1109/TMI.2026.3690772","url":null,"abstract":"<p><p>Universal medical image registration through a single model handling various registration tasks has attracted increasing interest. However, existing deep learning-based methods face two major challenges in adapting to universal registration tasks: 1) they lack generalizable feature representation capabilities for cross-task registration; 2) they rely solely on model architectures with fixed parameters, which limits their flexibility to dynamically adapt to different registration tasks and inherently compromises their generalization capability for zero-shot performance on unseen tasks. To address these limitations, we propose CIM-VTP, a novel two-stage universal registration framework. In the first stage, our proposed Correlation-guided Image Modeling (CIM)-based pretraining strategy leverages cross-image correlation to guide the masked modeling process, which facilitates spatial correspondence capturing that is essential for registration and provides universal representation capabilities as a foundation for registration learning. In the second stage, we introduce a registration task classifier to identify the type of a given input task, which explicitly quantifies the similarity between current inputs and previously seen tasks. The obtained task similarity scores are then fed as prior information into our carefully designed multi-resolution Visual-Textual Task Prompt (VTP) modules, which integrate task-relevant knowledge through prompt learning to adaptively adjust decoder parameters for different input domains. Extensive experiments across six different registration tasks demonstrate that the proposed CIM-VTP exhibits superior universal image registration performance. The code will be released at https://github.com/xiehousheng/CIM-VTP.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147847819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Foundation Models: Distilling Geometric Priors for Lightweight Monocular Depth Estimation in Endoscopy.","authors":"Kejin Zhu, Shuwei Shao, Yongming Yang, Zhongyu Tian, Baochang Zhang, Zhe Min","doi":"10.1109/TMI.2026.3690379","DOIUrl":"https://doi.org/10.1109/TMI.2026.3690379","url":null,"abstract":"<p><p>In recent times, geometric foundation models have demonstrated remarkable performance in depth estimation tasks, benefiting from exposure to large-scale data that enables the learning of intricate geometric structures and spatial dependencies. However, their large parameter sizes and high computational complexity pose significant challenges in meeting the efficiency requirements of downstream surgical applications. Consequently, the design of a high-performance yet lightweight monocular depth estimator has become a focal point of research. To this end, we harness the rich geometric priors encoded in geometric foundation models and introduce a novel trinity distillation scheme that transfers geometric knowledge across three complementary dimensions, namely spatial, spectral and gradient, into a compact depth estimator. To further enhance prediction quality, we develop a semantic distribution alignment strategy to effectively suppress pseudo-texture artifacts arising from the limited semantic representation capability of the lightweight estimator. Extensive experiments on the SCARED, SERV-CT, Hamlyn, and C3VD datasets demonstrate that the proposed method either surpasses or achieves comparable performance to previous state-of-the-art competitors, with a smaller model size and reduced computational overhead. Code will be available at: https://github.com/ShuweiShao/LiteNet.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147847781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty-Aware Information Pursuit for Interpretable and Reliable Medical Image Analysis.","authors":"Md Nahiduzzaman, Steven Korevaar, Zongyuan Ge, Feng Xia, Alireza Bab-Hadiashar, Ruwan Tennakoon","doi":"10.1109/TMI.2026.3690077","DOIUrl":"https://doi.org/10.1109/TMI.2026.3690077","url":null,"abstract":"<p><p>To be adopted in safety-critical domains like medical image analysis, AI systems must provide human-interpretable decisions. Variational Information Pursuit (VIP) offers an interpretable-by-design framework by sequentially querying input images for human-understandable concepts, using their presence or absence to make predictions. However, existing V-IP methods overlook sample-specific uncertainty in concept predictions, which can arise from ambiguous features or model limitations, leading to suboptimal query selection and reduced robustness. In this paper, we propose an interpretable and uncertainty-aware framework for medical imaging that addresses these limitations by accounting for upstream uncertainties in concept-based, interpretable-by-design models. Specifically, we introduce two uncertainty-aware models, EUAV-IP and IUA-VIP, that integrate uncertainty estimates into the V-IP querying process to prioritize more reliable concepts per sample. EUAV-IP skips uncertain concepts via masking, while IUAV-IP incorporates uncertainty into query selection implicitly for more informed and clinically aligned decisions. Our approach allows models to make reliable decisions based on a subset of concepts tailored to each individual sample, without human intervention, while maintaining overall interpretability. We evaluate our methods on five medical imaging datasets across four modalities: dermoscopy, X-ray, ultrasound, and blood cell imaging. The proposed IUAV-IP model achieves state-of-the-art accuracy among interpretable-by-design approaches on four of the five datasets, and generates more concise explanations by selecting fewer yet more informative concepts. These advances enable more reliable and clinically meaningful outcomes, enhancing model trustworthiness and supporting safer AI deployment in healthcare. Our code and models are available at: https://github.com/Nahiduzzaman09/ UAV-IP.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147847797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging Image-text Pairs for Generalized Category Discovery in Medical Image Classification.","authors":"Wei Feng, Bingjie Wang, Zhonghua Wang, Sijin Zhou, Zongyuan Ge","doi":"10.1109/TMI.2026.3689859","DOIUrl":"https://doi.org/10.1109/TMI.2026.3689859","url":null,"abstract":"<p><p>Generalized category discovery aims to identify known medical categories and unknown new medical categories from unlabeled data by migrating knowledge from labeled datasets containing only known categories, which is crucial for disease understanding and precision medicine. Many methods have been proposed and significantly improved the performance of GCD in medical images. However, most of the existing methods discover new categories based on image modalities only, ignoring useful information in the large amount of textual data related to diseases. In this paper, we propose M<sup>3</sup>GCD (Medical Multi-Modal Generalized Category Discovery), which exploits image- text pairs to jointly recognize known classes and discover novel categories in medical images. To address the varying contribution of different modalities across samples, we develop a Dynamic Expert Fusion module to automatically learn sample-specific modality weights, and further design a Local Experts Balancing mechanism to preserve the discriminative power of individual modalities. By integrating global and local perspectives, our framework adaptively balances modality contributions and enhances multi-modal robustness. Subsequently, to enable the discovery of novel unknown categories during training, we propose a Category Diffusion module grounded in the Metropolis- Hastings framework. This module adaptively merges and splits categories, allowing the model to simultaneously recognize known classes and uncover previously unseen categories during training, without requiring any prior knowledge about the unknown categories. Extensive experiments on two public multi-modal datasets (MIMIC-CXR and PatchGastric), together with a private multi-modal fundus dataset, MM-Retina, demonstrate that our method consistently improves clustering performance on both known and unknown categories compared with existing approaches.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147847752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thyro-LMD: A Benchmark Dataset and Sample-Driven Data Loading, Attention, and Regularization for Long-Tailed Multi-Label Thyroid Ultrasound Diagnosis.","authors":"Jiansong Zhang, Shunlan Liu, Xiaoling Luo, Guorong Lyu, Linlin Shen","doi":"10.1109/TMI.2026.3690144","DOIUrl":"https://doi.org/10.1109/TMI.2026.3690144","url":null,"abstract":"<p><p>Developing robust and effective computer-aided diagnostic (CAD) methods for thyroid ultrasound (TUS) remains a key challenge in medical imaging. Prior work has largely focused on binary or multi-class lesion classification, whereas real-world diagnosis follows standardized guidelines based on combinations of lexicon-level descriptors. These combinations naturally exhibit long-tailed distributions due to epidemiological patterns, limiting the robustness and generalizability of existing methods. Motivated by this, we introduce Thyro-LMD, the first long-tailed multi-label dataset for TUS. Using histopathology as the reference, Thyro-LMD provides retrospective, fine-grained annotations aligned with ACR TI-RADS lexicons and reveals a highly imbalanced label distribution. We benchmark representative methods, including end-to-end models, general-purpose multimodal large models (e.g., GPT-4o), and pretrained foundation models. While some methods show reasonable head-class performance, they struggle with body and tail classes. We therefore propose SynTUS-Net, a purpose-built baseline comprising collaborative modules addressing long-tailed multi-label challenges across data loading, feature encoding, and prediction regularization. SynTUS-Net achieves leading performance on Thyro-LMD, outperforming conventional traditional SOTA models by 5.3 Micro-F1 and 11.83 Macro-F1, and exceeding GPT-4o by 42.76 on Tail-F1. Extensive ablation studies confirm the contribution of each module. We believe Thyro-LMD and SynTUS-Net establish a clinically grounded benchmark and a new paradigm for interpretable and generalizable AI in ultrasound. Code and data will be released here.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147847804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-Dimensional MRI Reconstruction With 3D Gaussian Representations: Tackling the Undersampling Problem.","authors":"Tengya Peng, Ruyi Zha, Zhen Li, Xiaofeng Liu, Qing Zou","doi":"10.1109/TMI.2025.3642134","DOIUrl":"10.1109/TMI.2025.3642134","url":null,"abstract":"<p><p>Three-Dimensional Gaussian representation (3DGS) has shown substantial promise in the field of computer vision, but remains unexplored in the field of magnetic resonance imaging (MRI). This study explores its potential for the reconstruction of isotropic resolution 3D MRI from undersampled k-space data. We introduce a novel framework termed 3D Gaussian MRI (3DGSMR), which employs 3D Gaussian distributions as an explicit representation for MR volumes. Experimental evaluations indicate that this method can effectively reconstruct voxelized MR images, achieving a quality on par with that of well-established 3D MRI reconstruction techniques found in the literature. Notably, the 3DGSMR scheme operates under a self-supervised framework, obviating the need for extensive training datasets or prior model training. This approach introduces significant innovations to the domain, notably the adaptation of 3DGS to MRI reconstruction and the novel application of the existing 3DGS methodology to decompose MR signals, which are presented in a complex-valued format.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":"1905-1917"},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145717086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regression Is All You Need for Medical Image Translation.","authors":"Sebastian Rassmann, David Kugler, Christian Ewert, Martin Reuter","doi":"10.1109/TMI.2025.3650412","DOIUrl":"10.1109/TMI.2025.3650412","url":null,"abstract":"<p><p>While Generative Adversarial Nets (GANs) and Diffusion Models (DMs) have achieved impressive results in natural image synthesis, their core strengths - creativity and realism - can be detrimental in medical applications, where accuracy and fidelity are paramount. These models instead risk introducing hallucinations and replication of unwanted acquisition noise. Here, we propose YODA (You Only Denoise once - or Average), a 2.5D diffusion-based framework for medical image translation (MIT). Consistent with DM theory, we find that conventional diffusion sampling stochastically replicates noise. To mitigate this, we draw and average multiple samples, akin to physical signal averaging. As this effectively approximates the DM's expected value, we term this Expectation-Approximation (ExpA) sampling. We additionally propose regression sampling YODA, which retains the initial DM prediction and omits iterative refinement to produce noise-free images in a single step. Across five diverse multi-modal datasets - including multi-contrast brain MRI and pelvic MRI-CT - we demonstrate that regression sampling is not only substantially more efficient but also matches or exceeds image quality of full diffusion sampling even with ExpA. Our results reveal that iterative refinement solely enhances perceptual realism without benefiting information translation, which we confirm in relevant downstream tasks. YODA outperforms eight state-of-the-art DMs and GANs and challenges the presumed superiority of DMs and GANs over computationally cheap regression models for high-quality MIT. Furthermore, we show that YODA-translated images are interchangeable with, or even superior to, physical acquisitions for several medical applications.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":"2156-2172"},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145890774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anatomy-Aware Sketch-Guided Latent Diffusion Model for Orbital Tumor Multi-Parametric MRI Missing Modalities Synthesis.","authors":"Langtao Zhou, Xiaoxia Qu, Tianyu Fu, Jiaoyang Wu, Hong Song, Jingfan Fan, Danni Ai, Deqiang Xiao, Junfang Xian, Jian Yang","doi":"10.1109/TMI.2025.3648852","DOIUrl":"10.1109/TMI.2025.3648852","url":null,"abstract":"<p><p>Synthesizing missing modalities in multi-parametric MRI (mpMRI) is vital for accurate tumor diagnosis, yet remains challenging due to incomplete acquisitions and modality heterogeneity. Diffusion models have shown strong generative capability, but conventional approaches typically operate in the image domain with high memory costs and often rely solely on noise-space supervision, which limits anatomical fidelity. Latent diffusion models (LDMs) improve efficiency by performing denoising in latent space, but standard LDMs lack explicit structural priors and struggle to integrate multiple modalities effectively. To address these limitations, we propose the anatomy-aware sketch-guided latent diffusion model (ASLDM), a novel LDM-based framework designed for flexible and structure-preserving MRI synthesis. ASLDM incorporates an anatomy-aware feature fusion module, which encodes tumor region masks and edge-based anatomical sketches via cross-attention to guide the denoising process with explicit structure priors. A modality synergistic reconstruction strategy enables the joint modeling of available and missing modalities, enhancing cross-modal consistency and supporting arbitrary missing scenarios. Additionally, we introduce image-level losses for pixel-space supervision using L1 and SSIM losses, overcoming the limitations of pure noise-based loss training and improving the anatomical accuracy of synthesized outputs. Extensive experiments on a five-modality orbital tumor mpMRI private dataset and a four-modality public BraTS2024 dataset demonstrate that ASLDM outperforms state-of-the-art methods in both synthesis quality and structural consistency, showing strong potential for clinically reliable multi-modal MRI completion. Our code is publicly available at: https://github.com/zltshadow/ASLDM.git.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":"2140-2155"},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145859643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}