{"title":"CATD: Unified Representation Learning for EEG-to-fMRI Cross-Modal Generation","authors":"Weiheng Yao;Zhihan Lyu;Mufti Mahmud;Ning Zhong;Baiying Lei;Shuqiang Wang","doi":"10.1109/TMI.2025.3550206","DOIUrl":"10.1109/TMI.2025.3550206","url":null,"abstract":"Multi-modal neuroimaging analysis is crucial for a comprehensive understanding of brain function and pathology, as it allows for the integration of different imaging techniques, thus overcoming the limitations of individual modalities. However, the high costs and limited availability of certain modalities pose significant challenges. To address these issues, this paper proposes the Condition-Aligned Temporal Diffusion (CATD) framework for end-to-end cross-modal synthesis of neuroimaging, enabling the generation of functional magnetic resonance imaging (fMRI)-detected Blood Oxygen Level Dependent (BOLD) signals from more accessible Electroencephalography (EEG) signals. By constructing Conditionally Aligned Block (CAB), heterogeneous neuroimages are aligned into a latent space, achieving a unified representation that provides the foundation for cross-modal transformation in neuroimaging. The combination with the constructed Dynamic Time-Frequency Segmentation (DTFS) module also enables the use of EEG signals to improve the temporal resolution of BOLD signals, thus augmenting the capture of the dynamic details of the brain. Experimental validation demonstrates that the framework improves the accuracy of brain activity state prediction by 9.13% (reaching 69.8%), enhances the diagnostic accuracy of brain disorders by 4.10% (reaching 99.55%), effectively identifies abnormal brain regions, enhancing the temporal resolution of BOLD signals. The proposed framework establishes a new paradigm for cross-modal synthesis of neuroimaging by unifying heterogeneous neuroimaging data into a latent representation space, showing promise in medical applications such as improving Parkinson’s disease prediction and identifying abnormal brain regions.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2757-2767"},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143599334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM With Sub-Class Exploration and Prompt Affinity Mining","authors":"Haoran Wang;Lian Huai;Wenbin Li;Lei Qi;Xingqun Jiang;Yinghuan Shi","doi":"10.1109/TMI.2025.3549433","DOIUrl":"10.1109/TMI.2025.3549433","url":null,"abstract":"We have witnessed remarkable progress in foundation models in vision tasks. Currently, several recent works have utilized the segmenting anything model (SAM) to boost the segmentation performance in medical images, where most of them focus on training an adaptor for fine-tuning a large amount of pixel-wise annotated medical images following a fully supervised manner. In this paper, to reduce the labeling cost, we investigate a novel weakly-supervised SAM-based segmentation model, namely WeakMedSAM. Specifically, our proposed WeakMedSAM contains two modules: 1) to mitigate severe co-occurrence in medical images, a sub-class exploration module is introduced to learn accurate feature representations. 2) to improve the quality of the class activation maps, our prompt affinity mining module utilizes the prompt capability of SAM to obtain an affinity map for random-walk refinement. Our method can be applied to any SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM. The experimental results on three popularly-used benchmark datasets, i.e., BraTS 2019, AbdomenCT-1K, and MSD Cardiac dataset, show the promising results of our proposed WeakMedSAM. Our code is available at <uri>https://github.com/wanghr64/WeakMedSAM</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2795-2807"},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MSDUNet: A Model Based on Feature Multi-Scale and Dual-Input Dynamic Enhancement for Skin Lesion Segmentation","authors":"Xiaosen Li;Linli Li;Xinlong Xing;Huixian Liao;Wenji Wang;Qiutong Dong;Xiao Qin;Chang’an Yuan","doi":"10.1109/TMI.2025.3549011","DOIUrl":"10.1109/TMI.2025.3549011","url":null,"abstract":"Melanoma is a malignant tumor originating from the lesions of skin cells. Medical image segmentation tasks for skin lesion play a crucial role in quantitative analysis. Achieving precise and efficient segmentation remains a significant challenge for medical practitioners. Hence, a skin lesion segmentation model named MSDUNet, which incorporates multi-scale deformable block (MSD Block) and dual-input dynamic enhancement module(D2M), is proposed. Firstly, the model employs a hybrid architecture encoder that better integrates global and local features. Secondly, to better utilize macroscopic and microscopic multiscale information, improvements are made to skip connection and decoder block, introducing D2M and MSD Block. The D2M leverages large kernel dilated convolution to draw out attention bias matrix on the decoder features, supplementing and enhancing the semantic features of the decoder’s lower layers transmitted through skip connection features, thereby compensating semantic gaps. The MSD Block uses channel-wise split and deformable convolutions with varying receptive fields to better extract and integrate multi-scale information while controlling the model’s size, enabling the decoder to focus more on task-relevant regions and edge details. MSDUNet attains outstanding performance with Dice scores of 93.08% and 91.68% on the ISIC-2016 and ISIC-2018 datasets, respectively. Furthermore, experiments on the HAM10000 dataset demonstrate its superior performance with a Dice score of 95.40%. External validation experiments based on the ISIC-2016, ISIC-2018, and HAM10000 experimental weights on the PH2 dataset yield Dice scores of 92.67%, 92.31%, and 93.46%, respectively, showcasing the exceptional generalization capability of MSDUNet. Our code implementation is publicly available at the Github.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2819-2830"},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143575259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralized Personalization for Federated Medical Image Segmentation via Gossip Contrastive Mutual Learning","authors":"Jingyun Chen;Yading Yuan","doi":"10.1109/TMI.2025.3549292","DOIUrl":"10.1109/TMI.2025.3549292","url":null,"abstract":"Federated Learning (FL) presents a promising avenue for collaborative model training among medical centers, facilitating knowledge exchange without compromising data privacy. However, vanilla FL is prone to server failures and rarely achieves optimal performance on all participating sites due to heterogeneous data distributions among them. To overcome these challenges, we propose Gossip Contrastive Mutual Learning (GCML), a unified framework to optimize personalized models in a decentralized environment, where Gossip Protocol is employed for flexible and robust peer-to-peer communication. To make efficient and reliable knowledge exchange in each communication without the global knowledge across all the sites, we introduce deep contrast mutual learning (DCML), a simple yet effective scheme to encourage knowledge transfer between the incoming and local models through collaborative training on local data. By integrating DCML with other efforts to optimize site-specific models by leveraging useful information from peers, we evaluated the performance and efficiency of the proposed method on three publicly available datasets with different segmentation tasks. Our extensive experimental results show that the proposed GCML framework outperformed both centralized and decentralized FL methods with significantly reduced communication overhead, indicating its potential for real-world deployment. Upon the acceptance of manuscript, the code will be available at: <uri>https://github.com/</uri> CUMC-Yuan-Lab/GCML","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2768-2783"},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143575258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boundary as the Bridge: Toward Heterogeneous Partially-Labeled Medical Image Segmentation and Landmark Detection","authors":"Haifan Gong;Boyao Wan;Luoyao Kang;Xiang Wan;Lingyan Zhang;Haofeng Li","doi":"10.1109/TMI.2025.3548919","DOIUrl":"10.1109/TMI.2025.3548919","url":null,"abstract":"Medical landmark detection and segmentation are crucial elements for computer-aided diagnosis and treatment. However, a common challenge arises because many datasets are exclusively annotated with either landmarks or segmentation masks: a situation we term the ‘heterogeneous partially-labeled’ problem. To address this, we propose a novel yet effective ‘Boundary-as-Bridge’ Loss (BaBLoss) that models the interplay between landmark detection and segmentation tasks. Specifically, our loss function is designed to maximize the correlation between the boundary distance map of the segmentation area and the heatmap deployed for landmark detection. Moreover, we introduce a prompt pipeline to use a segment anything model and landmarks to generate pseudo-segmentation labels for data with landmark annotation. To evaluate the effectiveness of our method, we collect and build two heterogeneous partially-labeled datasets on the brain and knee. Extensive experiments on these datasets using various backbone structures have shown the effectiveness of our method. Code is available at <uri>https://github.com/lhaof/HPL</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 7","pages":"2747-2756"},"PeriodicalIF":0.0,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual Interspersion and Flexible Deployment for Few-Shot Medical Image Segmentation","authors":"Ziming Cheng;Shidong Wang;Yang Long;Tao Zhou;Haofeng Zhang;Ling Shao","doi":"10.1109/TMI.2025.3546653","DOIUrl":"10.1109/TMI.2025.3546653","url":null,"abstract":"Acquiring a large volume of annotated medical data is impractical due to time, financial, and legal constraints. Consequently, few-shot medical image segmentation is increasingly emerging as a prominent research direction. Nowadays, Medical scenarios pose two major challenges: 1) intra-class variation caused by diversity among support and query sets; 2) inter-class extreme imbalance resulting from background heterogeneity. However, existing prototypical networks struggle to tackle these obstacles effectively. To this end, we propose a Dual Interspersion and Flexible Deployment (DIFD) model. Drawing inspiration from military interspersion tactics, we design the dual Interspersion module to generate representative basis prototypes from support features. These basis prototypes are then deeply interacted with query features. Furthermore, we introduce a fusion factor to fuse and refine the basis prototypes. Ultimately, we seamlessly integrate and flexibly deploy the basis prototypes to facilitate correct matching between the query features and basis prototypes, thus conducive to improving the segmentation accuracy of the model. Extensive experiments on three publicly available medical image datasets demonstrate that our model significantly outshines other SoTAs (2.78% higher dice score on average across all datasets), achieving a new level of performance. The code is available at: <uri>https://github.com/zmcheng9/DIFD</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2732-2744"},"PeriodicalIF":0.0,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143526038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TAD-Graph: Enhancing Whole Slide Image Analysis via Task-Aware Subgraph Disentanglement","authors":"Fuying Wang;Jiayi Xin;Weiqin Zhao;Yuming Jiang;Maximus Yeung;Liansheng Wang;Lequan Yu","doi":"10.1109/TMI.2025.3545680","DOIUrl":"10.1109/TMI.2025.3545680","url":null,"abstract":"Learning contextual features such as interactions among various biological entities is vital for whole slide images (WSI)-based cancer diagnosis and prognosis. Graph-based methods have surpassed traditional multi-instance learning in WSI analysis by robustly integrating local pathological and contextual interaction features. However, the high resolution of WSIs often leads to large, noisy graphs. This can result in shortcut learning and overfitting due to the disproportionate graph size relative to WSI datasets. To overcome these issues, we propose a novel Task-Aware Disentanglement Graph approach (TAD-Graph) for more efficient WSI analysis. TAD-Graph operates on WSI graph representations, effectively identifying and disentangling informative subgraphs to enhance contextual feature extraction. Specifically, we inject stochasticity into the edge connections of the WSI graph and separate the WSI graph into task-relevant and task-irrelevant subgraphs. The disentanglement procedure is optimized using a graph information bottleneck-based objective, with added constraints on the task-irrelevant subgraph to reduce spurious correlations from task-relevant subgraphs to labels. TAD-Graph outperforms existing methods in three WSI analysis tasks across six benchmark datasets. Furthermore, our analysis using pathological concept-based metrics demonstrates TAD-Graph’s ability to not only improve predictive accuracy but also provide interpretive insights and aid in potential biomarker identification. Our code is publicly available at <uri>https://github.com/fuying-wang/TAD-Graph</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2683-2695"},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143495370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal","authors":"Yikun Zhang;Guannan Liu;Yang Liu;Shipeng Xie;Jiabing Gu;Zujian Huang;Xu Ji;Tianling Lyu;Yan Xi;Shouping Zhu;Jian Yang;Yang Chen","doi":"10.1109/TMI.2025.3545121","DOIUrl":"10.1109/TMI.2025.3545121","url":null,"abstract":"In Computed Tomography (CT) imaging, the ring artifacts caused by the inconsistent detector response can significantly degrade the reconstructed images, having negative impacts on the subsequent applications. The new generation of CT systems based on photon-counting detectors are affected by ring artifacts more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model to characterize the ring artifacts. In this context, this study proposes the global dependency-enhanced dual-domain parallel neural network for Ring Artifact Removal (RAR). First, based on the fact that the features of ring artifacts are different in Cartesian and Polar coordinates, the parallel architecture is adopted to construct the deep neural network so that it can extract and exploit the latent features from different domains to improve the performance of ring artifact removal. Besides, the ring artifacts are globally relevant whether in Cartesian or Polar coordinate systems, but convolutional neural networks show inherent shortcomings in modeling long-range dependency. To tackle this problem, this study introduces the novel Mamba mechanism to achieve a global receptive field without incurring high computational complexity. It enables effective capture of the long-range dependency, thereby enhancing the model performance in image restoration and artifact reduction. The experiments on the simulated data validate the effectiveness of the dual-domain parallel neural network and the Mamba mechanism, and the results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2718-2731"},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143495375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VibNet: Vibration-Boosted Needle Detection in Ultrasound Images","authors":"Dianye Huang;Chenyang Li;Angelos Karlas;Xiangyu Chu;K. W. Samuel Au;Nassir Navab;Zhongliang Jiang","doi":"10.1109/TMI.2025.3545434","DOIUrl":"10.1109/TMI.2025.3545434","url":null,"abstract":"Precise percutaneous needle detection is crucial for ultrasound (US)-guided interventions. However, inherent limitations such as speckles, needle-like artifacts, and low resolution make it challenging to robustly detect needles, especially when their visibility is reduced or imperceptible. To address this challenge, we propose VibNet, a learning-based framework designed to enhance the robustness and accuracy of needle detection in US images by leveraging periodic vibration applied externally to the needle shafts. VibNet integrates neural Short-Time Fourier Transform and Hough Transform modules to achieve successive sub-goals, including motion feature extraction in the spatiotemporal space, frequency feature aggregation, and needle detection in the Hough space. Due to the periodic subtle vibration, the features are more robust in the frequency domain than in the image intensity domain, making VibNet more effective than traditional intensity-based methods. To demonstrate the effectiveness of VibNet, we conducted experiments on distinct ex vivo porcine and bovine tissue samples. The results obtained on porcine samples demonstrate that VibNet effectively detects needles even when their visibility is severely reduced, with a tip error of <inline-formula> <tex-math>${1}.{61}pm {1}.{56}~textit {mm}$ </tex-math></inline-formula> compared to <inline-formula> <tex-math>${8}.{15}pm {9}.{98}~textit {mm}$ </tex-math></inline-formula> for UNet and <inline-formula> <tex-math>${6}.{63}pm {7}.{58}~textit {mm}$ </tex-math></inline-formula> for WNet, and a needle direction error of <inline-formula> <tex-math>${1}.{64}pm {1}.{86}^{circ }$ </tex-math></inline-formula> compared to <inline-formula> <tex-math>${9}.{29}~pm ~{15}.{30}^{circ }$ </tex-math></inline-formula> for UNet and <inline-formula> <tex-math>${8}.{54}~pm ~{17}.{92}^{circ }$ </tex-math></inline-formula> for WNet. Code: <uri>https://github.com/marslicy/VibNet</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2696-2708"},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10902567","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143495369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foundation Model-Guided Gaussian Splatting for 4D Reconstruction of Deformable Tissues","authors":"Yifan Liu;Chenxin Li;Hengyu Liu;Chen Yang;Yixuan Yuan","doi":"10.1109/TMI.2025.3545183","DOIUrl":"10.1109/TMI.2025.3545183","url":null,"abstract":"Reconstructing deformable anatomical structures from endoscopic videos is a pivotal and promising research topic that can enable advanced surgical applications and improve patient outcomes. While existing surgical scene reconstruction methods have made notable progress, they often suffer from slow rendering speeds due to using neural radiance fields, limiting their practical viability in real-world applications. To overcome this bottleneck, we propose EndoGaussian, a framework that integrates the strengths of 3D Gaussian Splatting representations, allowing for high-fidelity tissue reconstruction, efficient training, and real-time rendering. Specifically, we dedicate a Foundation Model-driven Initialization (FMI) module, which distills 3D cues from multiple vision foundation models (VFMs) to swiftly construct the preliminary scene structure for Gaussian initialization. Then, a Spatio-temporal Gaussian Tracking (SGT) is designed, efficiently modeling scene dynamics using the multi-scale HexPlane with spatio-temporal priors. Furthermore, to improve the dynamics modeling ability for scenes with large deformation, EndoGaussian integrates Motion-aware Frame Synthesis (MFS) to adaptively synthesize new frames as extra training constraints. Experimental results on public datasets demonstrate EndoGaussian’s efficacy against prior state-of-the-art methods, including superior rendering speed (168 FPS, real-time), enhanced rendering quality (38.555 PSNR), and reduced training overhead (within 2 min/scene). These results underscore EndoGaussian’s potential to significantly advance intraoperative surgery applications, paving the way for more accurate and efficient real-time surgical guidance and decision-making in clinical scenarios. Code is available at: <uri>https://github.com/CUHK-AIM-Group/EndoGaussian</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2672-2682"},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143495368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}