Neural Networks | Pub Date: 2025-09-23 | DOI: 10.1016/j.neunet.2025.108151
Tongtong Li, Kai Li, Ziyang Zhao, Qi Sun, Xinyan Zhang, Zhijun Yao, Jiansong Zhou, Bin Hu
Title: "Deep adaptive fusion network with multimodal neuroimaging information for MDD diagnosis: an open data study"
Abstract: Neuroimaging offers powerful evidence for the automated diagnosis of major depressive disorder (MDD). However, discrepancies across imaging modalities hinder the exploration of cross-modal interactions and the effective integration of complementary features. To address this challenge, we propose a supervised Deep Adaptive Fusion Network (DAFN) that fully leverages the complementarity of multimodal neuroimaging information for MDD diagnosis. Specifically, high- and low-frequency features are extracted from the images by a customized convolutional neural network and multi-head self-attention encoders, respectively. A modality weight adaptation module dynamically adjusts the contribution of each modality during training, while a progressive information reinforcement training strategy strengthens the fused multimodal features. Finally, DAFN is evaluated on both an open-access dataset and a recruited dataset. The results demonstrate that DAFN achieves competitive performance in multimodal neuroimaging fusion for MDD diagnosis. The source code is available at: https://github.com/TTLi1996/DAFN.
(Neural Networks, vol. 194, Article 108151; journal IF 6.3)
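The abstract leaves the internals of the modality weight adaptation module unspecified. As a hedged illustration only, one common way to realize dynamic per-modality weighting is a set of learnable logits softmaxed into fusion weights; the function names and shapes below are assumptions, not DAFN's published code:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(features_by_modality, modality_logits):
    """Weighted sum of per-modality feature vectors.

    features_by_modality: equal-length feature vectors, one per
    imaging modality branch. modality_logits: learnable scalars;
    training would update them so the softmax weights track each
    modality's contribution, as the weight adaptation module intends.
    """
    weights = softmax(modality_logits)
    dim = len(features_by_modality[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, features_by_modality):
        for i, x in enumerate(feat):
            fused[i] += w * x
    return fused, weights
```

With equal logits the modalities contribute equally; gradient updates to the logits would shift that balance during training.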
Neural Networks | Pub Date: 2025-09-22 | DOI: 10.1016/j.neunet.2025.108135
Lu Shi, Gaoyun An, Yigang Cen, Yansen Huang, Fei Gan
Title: "DePoint: Improving rotation robustness of 3D point cloud analysis via decreasing entropy"
Abstract: In real-world scenarios, achieving rotation robustness in point cloud analysis is crucial because the orientations of 3D objects are unpredictable. While recent advances in rotation robustness typically rely on auxiliary modules to align rotated objects, precise orientation alignment remains challenging given the vast space of possible rotations. In this work, we investigate the impact of rotation on point clouds, revealing that random rotations significantly increase the joint entropy of point clouds and semantic labels, a key factor behind degraded model performance on rotated datasets. To address this issue, we introduce DePoint, a simple yet effective rotation enhancement method that decreases entropy by aligning the spatial distribution of rotated point cloud representations with semantic information. Specifically, a Siamese point cloud encoder processes differently oriented views of an object with a shared task head, ensuring semantic consistency in the learned representations, and a minimal auxiliary classifier enforces linear separability of these representations. Notably, DePoint can be seamlessly integrated into existing point cloud models without introducing additional parameters at inference. Experimental results demonstrate that DePoint significantly enhances the rotation robustness of various point cloud models in 3D object classification and segmentation.
(Neural Networks, vol. 194, Article 108135)
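The Siamese-encoder idea, mapping two differently oriented views of the same object to consistent representations, can be illustrated with a toy rotation-invariant encoder (sorted pairwise distances). This is a didactic stand-in under stated assumptions, not DePoint's actual architecture or loss:

```python
import math

def rotate_z(points, theta):
    # Rotate 3D points about the z-axis by angle theta (radians).
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def encode(points):
    # Toy shared encoder: sorted pairwise distances are rotation-invariant,
    # standing in for the representation a trained Siamese encoder would be
    # pushed toward by a view-consistency objective.
    d = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d.append(math.dist(points[i], points[j]))
    return sorted(d)

def consistency_loss(points, theta):
    # Mean squared discrepancy between the representations of two views;
    # DePoint-style training would drive such a discrepancy toward zero.
    a, b = encode(points), encode(rotate_z(points, theta))
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

For a perfectly rotation-invariant encoder the loss is (numerically) zero for any rotation angle; a real encoder only approaches this under training.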
Neural Networks | Pub Date: 2025-09-22 | DOI: 10.1016/j.neunet.2025.108139
Meixiu Long, Jiahai Wang, Junxiao Ma, Jianpeng Zhou, Siyuan Chen
Title: "LLM-augmented entity alignment: an unsupervised and training-free framework"
Abstract: Entity alignment (EA) is a fundamental task in knowledge graph (KG) integration, aiming to identify equivalent entities across different KGs for a unified and comprehensive representation. Recent advances have explored pre-trained language models (PLMs) to enhance the semantic understanding of entities, achieving notable improvements. However, existing methods face two major limitations. First, they rely heavily on human-annotated labels for training, leading to high computational costs and poor scalability. Second, some approaches use large language models (LLMs) to predict alignments in a multiple-choice question format, but LLM outputs may deviate from the expected format, and the predefined options may exclude the correct match, leading to suboptimal performance. To address these issues, we propose LEA, an LLM-augmented entity alignment framework that eliminates the need for labeled data and improves robustness by mitigating information heterogeneity at both the embedding and semantic levels. LEA first introduces an entity textualization module that transforms structural and textual information into a unified format, ensuring consistency and improving entity representations. It then leverages LLMs to enrich entity descriptions, enhancing semantic distinctiveness. Finally, these enriched descriptions are encoded into a shared embedding space, enabling efficient alignment through text retrieval techniques. To balance performance and computational cost, we further propose a selective augmentation strategy that prioritizes the most ambiguous entities for refinement. Experimental results on both homogeneous and heterogeneous KGs demonstrate that LEA outperforms existing models trained on 30% labeled data, achieving a 30% absolute improvement in Hit@1 score. As LLMs and text embedding models advance, LEA is expected to further improve EA performance, providing a scalable and robust paradigm for practical applications. The code and dataset can be found at https://github.com/Longmeix/LEA.
(Neural Networks, vol. 194, Article 108139)
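The final retrieval step, matching entities by similarity of their description embeddings in a shared space and scoring with Hit@1, can be sketched as below. The toy embeddings are illustrative; LEA's actual encoder and similarity search are not specified here:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def align(source_embs, target_embs):
    """For each source-KG entity embedding, retrieve the index of the most
    similar target-KG entity. The embeddings stand in for encoded
    LLM-enriched descriptions in a shared space."""
    matches = []
    for s in source_embs:
        sims = [cosine(s, t) for t in target_embs]
        matches.append(max(range(len(sims)), key=sims.__getitem__))
    return matches

def hit_at_1(matches, gold):
    # Fraction of source entities whose top-1 match is the gold target.
    return sum(m == g for m, g in zip(matches, gold)) / len(gold)
```

In practice the exhaustive similarity loop would be replaced by an approximate nearest-neighbor index over the target embeddings.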
Neural Networks | Pub Date: 2025-09-22 | DOI: 10.1016/j.neunet.2025.108128
Nana Jia, Zhiao Zhang, Tong Jia
Title: "MDSFD-Net: Alzheimer's disease diagnosis with missing modality via disentanglement learning and feature distillation"
Abstract: Multi-modal analysis can provide complementary information and significantly aid early diagnosis of and intervention in Alzheimer's Disease (AD). However, missing modalities present a major challenge, as most methods that rely on complete multi-modal data become infeasible. The most advanced approaches to missing modalities typically use generative models, but these often neglect modality-specific features, leading to biased predictions and poor performance. Motivated by this limitation, we propose a Modality Disentanglement and Specific Features Distillation Network (MDSFD-Net) for AD diagnosis with missing modalities, consisting of a disentanglement-based imputation (DI) module and a specific features distillation (SFD) module. In the DI module, we introduce a novel spatial-channel modality disentanglement learning scheme to disentangle modality-specific features, together with a shared-constraint objective that learns modality-shared features used to impute the features of missing modalities. To recover the specific features of a missing modality, the SFD module transfers specific features from the complete modalities in a teacher network to the incomplete modalities in a student network. A regularized knowledge distillation (R-KD) mechanism mitigates the impact of incorrect predictions from the teacher network. By combining modality-shared feature imputation with modality-specific feature distillation, our model can learn sufficient information for classification even when some modalities are missing. Extensive experiments on the ADNI dataset demonstrate the superiority of MDSFD-Net over state-of-the-art methods in missing-modality settings.
(Neural Networks, vol. 194, Article 108128)
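The R-KD mechanism is not detailed in the abstract. As a reference point only, here is a minimal sketch of the standard temperature-softened distillation loss that such teacher-student transfer builds on; the regularization against incorrect teacher predictions is omitted, and this is not MDSFD-Net's published formulation:

```python
import math

def softmax_t(logits, t):
    # Temperature-softened softmax; larger t flattens the distribution.
    m = max(logits)
    exps = [math.exp((x - m) / t) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, t=2.0):
    """KL(teacher || student) on temperature-softened class distributions,
    the classic distillation objective. An R-KD-style scheme would further
    down-weight samples the teacher misclassifies (not shown here)."""
    p = softmax_t(teacher_logits, t)
    q = softmax_t(student_logits, t)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher agree exactly and grows as their softened predictions diverge.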
Neural Networks | Pub Date: 2025-09-22 | DOI: 10.1016/j.neunet.2025.108136
Changjian Deng, Jian Cheng, Yanzhou Su, Zeyu An, Zhiguo Yang, Ziying Xia, Yijie Zhang, Shiguang Wang
Title: "WideTopo: Improving foresight neural network pruning through training dynamics preservation and wide topologies exploration"
Abstract: Foresight neural network pruning methods have garnered significant attention for their potential to save computational resources. Recent advances in this field fall predominantly into saliency score-based and graph theory-based methods. The former assess how sensitive specific metrics are to pruning individual parameter connections, while the latter seek sub-networks with sparse yet highly connected graph structures. However, recent research suggests that relying exclusively on saliency scores may yield deep but narrow sub-networks, while graph theory-based methods may be unsuitable for networks that require pre-trained parameters for initialization, particularly in transfer learning scenarios. We hypothesize that preserving the training dynamics of sub-networks during pruning, together with exploring network structures with wide topology, facilitates the identification of structurally stable sub-networks with improved post-training performance. Motivated by this, we propose WideTopo, which integrates Neural Tangent Kernel (NTK) theory with Implicit Target Alignment (ITA) to capture the training dynamics of sub-networks. Furthermore, it employs a density-aware saliency score decay strategy and a repeated mask restoration strategy to retain more effective nodes, thereby sustaining the width of each layer within the sub-networks. We conducted extensive validations with CNN-based and ViT-based models on representative image classification and semantic segmentation datasets under both random and pre-trained initialization settings. The effectiveness and applicability of our method are validated on diverse network architectures at various model density rates, showing competitive post-training performance compared with existing baselines. Our code is publicly available at https://github.com/Memoristor/WideTopo.
(Neural Networks, vol. 194, Article 108136)
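As background for the saliency-score family the abstract contrasts, here is a minimal sketch of SNIP-style foresight pruning: keep the top fraction of connections ranked by |weight x gradient|. This is the generic baseline score, not WideTopo's density-aware decayed variant:

```python
def saliency_mask(weights, grads, density):
    """Binary keep-mask over connections by SNIP-style saliency |w * g|.

    weights, grads: per-connection values taken at initialization
    (foresight pruning scores the untrained network).
    density: fraction of connections to keep; ties at the threshold
    may keep slightly more.
    """
    saliency = [abs(w * g) for w, g in zip(weights, grads)]
    k = max(1, int(round(density * len(weights))))
    threshold = sorted(saliency, reverse=True)[k - 1]
    return [1 if s >= threshold else 0 for s in saliency]
```

Applied per layer without correction, such a global ranking can hollow out whole layers, which is the "deep but narrow" failure mode WideTopo's width-sustaining strategies are designed to avoid.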
Neural Networks | Pub Date: 2025-09-21 | DOI: 10.1016/j.neunet.2025.108131
Rodolfo Valiente, Praveen K Pilly
Title: "Metacognition for Unknown Situations and Environments (MUSE)"
Abstract: Metacognition, the awareness and regulation of one's own cognitive processes, is central to human adaptability in unknown situations. In contrast, current autonomous agents often struggle in novel environments due to their limited capacity for adaptation. We hypothesize that metacognition is a critical missing ingredient for the cognitive flexibility autonomous agents need to tackle unfamiliar challenges. Given the broad scope of metacognitive abilities, we focus on competence awareness and strategy selection. To this end, we propose the Metacognition for Unknown Situations and Environments (MUSE) framework, which integrates the metacognitive processes of self-assessment and self-regulation into autonomous agents. We present two implementations of MUSE: one based on world modeling and another leveraging large language models (LLMs). Our system continually learns to assess its competence on a given task and uses this self-assessment to guide iterative cycles of strategy selection. MUSE agents show high competence awareness and significant improvements in self-regulation, solving novel, out-of-distribution tasks more effectively than model-based reinforcement learning and purely prompt-based LLM agent approaches. This work highlights the promise of approaches inspired by cognitive and neural systems for enabling autonomous agents to adapt to new environments, while mitigating the heavy reliance on extensive training data and large models that characterizes current systems.
(Neural Networks, vol. 194, 108131)
Neural Networks | Pub Date: 2025-09-20 | DOI: 10.1016/j.neunet.2025.108134
Fang Wan, Jianhang Zhang, Tianyu Li, Guangbo Lei, Li Xu, Zhiwei Ye
Title: "AAC-GS: Attention-aware adaptive codebook for Gaussian splatting compression"
Abstract: Neural Radiance Fields (NeRF) have demonstrated remarkable performance in novel view synthesis (NVS), but their high computational cost limits practical applicability. The 3D Gaussian Splatting (3DGS) method greatly improves rendering efficiency, enabling real-time rendering through its explicit representation. Nevertheless, its substantial storage requirements pose challenges for complex scenes and resource-constrained devices. Existing methods pursue storage compression through redundant-point pruning, spherical harmonics adjustment, and vector quantization. However, point pruning often compromises geometric detail in complex structures, while vector quantization fails to capture feature relationships effectively, causing texture degradation and blurred geometric boundaries. Although anchor-point representations partially address storage concerns, their sparse representation limits compression efficiency. These limitations are particularly evident in scenes with intricate textures and complex lighting. To achieve optimal compression ratios while maintaining high fidelity, this paper proposes an Attention-Aware Adaptive Codebook Gaussian Splatting (AAC-GS) method for efficient storage compression. The approach dynamically adjusts the codebook size to optimize storage efficiency and incorporates an attention mechanism to capture contextual feature relationships, thereby improving reconstruction quality. Additionally, a Generative Adversarial Network (GAN) mitigates quantization losses, balancing compression rate against visual fidelity. Experimental results demonstrate that AAC-GS achieves an average compression ratio of approximately 40× while maintaining high reconstruction quality, showing its potential for multi-scene applications.
(Neural Networks, vol. 194, Article 108134)
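The vector quantization step underlying codebook-based 3DGS compression can be sketched as nearest-codeword assignment: each per-Gaussian attribute vector is replaced by a small integer index into a shared codebook. This is the generic mechanism only, not AAC-GS's attention-aware adaptive codebook:

```python
def quantize(vectors, codebook):
    """Map each attribute vector (e.g., per-Gaussian color or SH features)
    to the index of its nearest codeword by squared Euclidean distance.
    Storage then drops from one float vector per Gaussian to one index
    per Gaussian plus the shared codebook."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return [min(range(len(codebook)), key=lambda k: d2(v, codebook[k]))
            for v in vectors]
```

The compression ratio grows with the number of Gaussians sharing the codebook, which is why adapting the codebook size per scene (as AAC-GS does) trades storage against reconstruction quality.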
Neural Networks | Pub Date: 2025-09-19 | DOI: 10.1016/j.neunet.2025.108126
Yan Wang, Ling Guo, Hao Wu, Tao Zhou
Title: "Energy-based diffusion generator for efficient sampling of Boltzmann distributions"
Abstract: Sampling from Boltzmann distributions, particularly those tied to high-dimensional and complex energy functions, poses a significant challenge in many fields. In this work, we present the Energy-Based Diffusion Generator (EDG), a novel approach that integrates ideas from variational autoencoders and diffusion models. EDG uses a decoder to generate Boltzmann-distributed samples from simple latent variables, and a diffusion-based encoder to estimate the Kullback-Leibler divergence to the target distribution. Notably, EDG is simulation-free: it does not need to solve ordinary or stochastic differential equations during training. Furthermore, by removing constraints such as bijectivity in the decoder, EDG allows flexible network design. Through empirical evaluation, we demonstrate the superior performance of EDG across a variety of sampling tasks with complex target distributions, outperforming existing methods.
(Neural Networks, vol. 194, Article 108126)
Neural Networks | Pub Date: 2025-09-19 | DOI: 10.1016/j.neunet.2025.108123
Qinghang Su, Dayan Wu, Chenming Wu, Bo Li, Weiping Wang
Title: "Planning forward: Deep incremental hashing by gradually defrosting bits"
Abstract: Deep incremental hashing can generate hash codes for new classes incrementally while keeping existing codes unchanged. Existing methods typically allocate a fixed code length to all classes, so the entire Hamming space is occupied by the existing classes and the model is left unprepared for future extensions, significantly limiting its ability to accommodate new classes. Moreover, using all bits to encode a few classes in the early sessions is inefficient in both computation and storage. This paper presents Bit Defrosting Deep Incremental Hashing (BDIH) to tackle these problems. Our key insight is to map classes into a small subspace by freezing most hash bits during the first session, reserving adequate space for future classes. Subsequent sessions can then map new classes into progressively expanding subspaces by defrosting a portion of the frozen bits. Specifically, we propose a bit-defrosting code learning framework comprising a bit-defrosting center generation part and a center-based bit-defrosting code learning part. The former generates hash centers as learning objectives in the expanding subspaces, while the latter learns globally discriminative hash codes under the guidance of those centers and preserves backward compatibility between the updated model and previously stored codes. As a result, our method achieves comparable performance on old classes using fewer bits while reserving more space for new ones. Extensive experiments demonstrate that BDIH outperforms existing methods in retrieval accuracy and storage efficiency in long-sequence incremental learning scenarios.
(Neural Networks, vol. 194, Article 108123)
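The freezing/defrosting idea can be sketched with session-wise bit masks: early sessions learn only a prefix of the code while frozen positions are pinned to a fixed value, so codes stored in earlier sessions remain valid as later sessions defrost more bits. The helper names and the +1 pinning convention are invented for illustration; this is not BDIH's center-generation procedure:

```python
def session_mask(session, base, step, num_bits):
    """Active (learnable) bit positions for a session: session 0 uses the
    first `base` bits; each later session defrosts `step` more. Frozen
    positions stay pinned, so the occupied subspace grows gradually."""
    active = min(num_bits, base + session * step)
    return [1] * active + [0] * (num_bits - active)

def encode_code(raw_bits, mask):
    # Sign bits (+1/-1) on active positions; frozen positions pinned to +1.
    return [(1 if b else -1) if m else 1 for b, m in zip(raw_bits, mask)]
```

Because defrosting only releases previously pinned positions, a code written in session 0 agrees with its session-1 re-encoding on all bits that were active in session 0, which is the backward compatibility the framework relies on.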
Neural Networks | Pub Date: 2025-09-18 | DOI: 10.1016/j.neunet.2025.108124
Haijia Bi, Lu Liu, Hai Cui, Shengyue Liu, Ridong Han, Jiayu Han, Tao Peng
Title: "Improving few-shot relation classification with multi-scale hierarchical prototype learning"
Abstract: Few-shot relation classification aims to distinguish relation classes from extremely limited annotated data. Most existing methods use prototype networks to construct a prototypical representation for each class and classify an instance by comparing its similarity to each prototype. Despite promising results, prototypes derived solely from limited support instances are often inaccurate due to constrained feature extraction capabilities. Moreover, these methods ignore the different hierarchical levels of relational information, which can provide more effective guidance for classification. In this paper, we propose a novel multi-scale hierarchical prototype (Mario) learning method that captures relational interaction information at three levels: inter-set, inter-class, and intra-class, enhancing the model's understanding of global semantic information and helping it distinguish subtle differences between classes. Additionally, we incorporate relational descriptive information to reduce the impact of textual expression diversity, enabling the model to emulate the human cognitive process of understanding variation. Extensive experiments conducted on the FewRel dataset demonstrate the effectiveness of our model: it achieves accuracies of 92.52%/95.33%/85.46%/91.33% under four common few-shot settings, and in the critical 5-way and 10-way 1-shot settings it outperforms the strongest baseline by 2.87% and 4.29%.
(Neural Networks, vol. 194, Article 108124)
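The base prototypical-network step that Mario builds on, class prototypes as support-set means followed by nearest-prototype classification, can be sketched as follows; the multi-scale interaction levels the paper adds on top are not modeled here:

```python
def prototype(support_vectors):
    # Class prototype = mean of that class's support embeddings.
    n = len(support_vectors)
    dim = len(support_vectors[0])
    return [sum(v[i] for v in support_vectors) / n for i in range(dim)]

def classify(query, prototypes):
    """Nearest-prototype classification by squared Euclidean distance:
    the query instance is assigned the relation class whose prototype
    lies closest in embedding space."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(prototypes)), key=lambda c: d2(query, prototypes[c]))
```

In an N-way K-shot episode, `prototype` is called once per class on its K support embeddings and `classify` once per query; the paper's point is that with small K these mean prototypes are noisy, motivating the hierarchical refinements.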