Neural Networks, Volume 190, Article 107749. Pub Date: 2025-06-21. DOI: 10.1016/j.neunet.2025.107749
Title: ATA: An Abstract-Train-Abstract approach for explanation-friendly deep reinforcement learning
Authors: Shi Peng, Si Liu, Dapeng Zhi, Peixin Wang, Chenyang Xu, Cheng Chen, Min Zhang
Abstract: Explaining decision-making neural network models in deep reinforcement learning (DRL) systems is crucial, albeit challenging. Abstract policy graphs (APGs) have emerged as an effective method for elucidating these models. However, constructing highly explainable APGs with high fidelity is challenging. Through empirical analysis, we glean the insight that a larger cluster size corresponds to an APG with higher fidelity. We present a novel approach called Abstract-Train-Abstract (ATA), built on the integration of two key ideas. Abstraction-based training facilitates the clustering of abstract states, expanding the scope of each cluster. Abstraction-oriented clustering ensures that states within the same cluster correspond to the same action. Identifying the cluster to which a state belongs enhances the accuracy of predicting its associated action. Our experiments show that ATA surpasses the state of the art, achieving up to 26.63% higher fidelity while still preserving competitive rewards. Additionally, our user study demonstrates that ATA substantially improves the accuracy of user prediction by 35.7% on average.

Neural Networks, Volume 191, Article 107754. Pub Date: 2025-06-21. DOI: 10.1016/j.neunet.2025.107754
Title: Shaping pre-trained language models for task-specific embedding generation via consistency calibration
Authors: Jianqi Gao, Hang Yu, Yiu-ming Cheung, Jian Cao, Raymond Chi-Wing Wong, Yonggang Zhang
Abstract: Pre-trained language models (PLMs) have shown significant success in various downstream tasks by providing initial parameters for task-specific fine-tuning. An inherent challenge of this approach is that adapting solely to downstream tasks may lead to the forgetting of pre-trained knowledge, resulting in limited fine-tuning performance on downstream tasks. To tackle this challenge, we propose a novel approach called EGO-PLM, in which PLMs serve as task-specific embedding generators. The underlying insight of EGO-PLM is to align the fine-tuning tasks for PLMs with those utilized during the pre-training phase. Within this framework, we design a task-agnostic pre-defined task that is similar to the pre-training phase and a task-specific embedding generator to adapt to specific tasks, enabling the specific task to be trained jointly with the pre-defined task. To alleviate conflicts between the pre-defined and task-specific tasks and to ensure that the generated embeddings are task-specific, we propose consistency calibration (CoCa), which aligns the pre-defined objectives with the task-specific ones. Specifically, CoCa identifies inconsistencies between the pre-defined and task-specific objectives in an adversarial manner, subsequently calibrating these disparities through adversarial training. We validate the effectiveness of EGO-PLM using 8 datasets across 6 task categories, demonstrating consistent and substantial improvements compared to state-of-the-art baselines.
{"title":"Balancing Learning Plasticity and Memory Stability: A parameter space strategy for class-incremental learning","authors":"Jianzhou Feng, Huaxiao Qiu, Lazhi Zhao, Chenghan Gu, Haonan Qin, Yiming Xu","doi":"10.1016/j.neunet.2025.107755","DOIUrl":"10.1016/j.neunet.2025.107755","url":null,"abstract":"<div><div>The objective of Continual Learning(CL) is to maintain both the learning plasticity and memory stability of a model, allowing it to continuously acquire new knowledge over time while ensuring robust retention of previously learned knowledge. However, existing CL methods primarily focus on memory stability, preventing <em>catastrophic forgetting</em> (CF) of knowledge from earlier tasks, while overlooking efficient learning of new tasks. In this paper,we propose a parameter-space decomposition method to Balanced Learning Plasticity and Memory Stability (BLPMS), dividing the model into sub-networks for each task by parameter isolation, further decomposing each sub-network into task-general and task-specific parameter spaces. During training, BLPMS balancing the update rate between these parameter spaces to promote class-incremental learning. Additionally, at the inference stage,we adopt Mixture of Experts (MoE) module based on Prototypical Network to dynamically select the appropriate parameter space. Experimental results demonstrate that BLPMS outperforms existing methods across multiple benchmark datasets, achieving state-of-the-art performance.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107755"},"PeriodicalIF":6.0,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Neural Networks, Volume 190, Article 107730. Pub Date: 2025-06-19. DOI: 10.1016/j.neunet.2025.107730
Title: ST-FlowNet: An efficient Spiking Neural Network for event-based optical flow estimation
Authors: Hongze Sun, Jun Wang, Wuque Cai, Duo Chen, Qianqian Liao, Jiayi He, Yan Cui, Dezhong Yao, Daqing Guo
Abstract: Spiking Neural Networks (SNNs) have emerged as a promising tool for event-based optical flow estimation tasks due to their capability for spatio-temporal information processing and low-power computation. However, the performance of SNN models is often constrained, limiting their applications in real-world scenarios. To address this challenge, we propose ST-FlowNet, a novel neural network architecture specifically designed for optical flow estimation from event-based data. The ST-FlowNet architecture integrates ConvGRU modules to facilitate cross-modal feature augmentation and temporal alignment of the predicted optical flow, thereby improving the network's ability to capture complex motion patterns. Additionally, we introduce two strategies for deriving SNN models from pre-trained artificial neural networks (ANNs): a standard ANN-to-SNN conversion pipeline and our proposed BISNN method. Notably, the BISNN method alleviates the complexities involved in selecting biologically inspired parameters, further enhancing the robustness of SNNs for optical flow estimation tasks. Extensive evaluations on three benchmark event-based datasets demonstrate that the SNN-based ST-FlowNet model outperforms state-of-the-art methods, achieving superior accuracy in optical flow estimation across a diverse range of dynamic visual scenes. Furthermore, the energy efficiency of the models underscores the potential of SNNs for practical deployment in energy-constrained environments. Overall, our work presents a novel framework for optical flow estimation using SNNs and event-based data, contributing to the advancement of neuromorphic vision applications.

Neural Networks, Volume 190, Article 107634. Pub Date: 2025-06-19. DOI: 10.1016/j.neunet.2025.107634
Title: Prototypes as Anchors: Tackling Unseen Noise for online continual learning
Authors: Shao-Yuan Li, Yu-Xiang Zheng, Sheng-Jun Huang, Songcan Chen, Kangkan Wang
Abstract: In the context of online class-incremental continual learning (CIL), adapting to label noise becomes paramount for model success in evolving domains. While some continual learning (CL) methods have begun to address noisy data streams, most assume that the noise is strictly closed-set, i.e., that noisy labels in the current task originate from classes within the same task. This assumption is clearly unrealistic in real-world scenarios. In this paper, we first formulate and analyze the concepts of closed-set and open-set noise, showing that both types can introduce unseen classes for the current training classifier. Then, to effectively handle noisy labels and unknown classes, we present an innovative replay-based method, Prototypes as Anchors (PAA), which learns representative and discriminative prototypes for each class and conducts a similarity-based denoising schema in the representation space to distinguish and eliminate the negative impact of unseen classes. By implementing a dual-classifier architecture, PAA conducts consistency checks between the classifiers to ensure robustness. Extensive experimental results on diverse datasets demonstrate a significant improvement in model performance and robustness compared to existing approaches, offering a promising avenue for continual learning in dynamic, real-world environments.

Neural Networks, Volume 190, Article 107726. Pub Date: 2025-06-18. DOI: 10.1016/j.neunet.2025.107726
Title: DTGBA: A stronger graph backdoor attack with dual triggers
Authors: Ding Li, Hui Xia, Xin Li, Rui Zhang, Mingda Ma
Abstract: Graph backdoor attacks can significantly degrade the performance of graph neural networks (GNNs). Specifically, during the training phase, graph backdoor attacks inject triggers and target class labels into poisoned nodes to create a backdoored GNN. During the testing phase, triggers are added to target nodes, causing them to be misclassified as the target class. However, existing graph backdoor attacks lack sufficient imperceptibility and can be easily resisted by defenses based on random edge dropping, limiting their effectiveness. To address these issues, we propose the Dual Triggers Graph Backdoor Attack (DTGBA). Initially, we deploy an imperceptible injected trigger generator and multiple discriminators, driving the imperceptibility of the injected triggers through an adversarial game between them. Additionally, we introduce a feature mask learner to extract the high-impact and low-impact feature dimensions of the target class's nodes, and then create feature-based triggers by modifying the key feature dimensions of poisoned/target nodes, ensuring that the backdoor implantation can still be effective even if the injected triggers are removed by random edge dropping. Finally, we conduct extensive experiments to demonstrate that DTGBA achieves superior performance. Our code is available at https://github.com/SnowStone-DingLi/DTGBA-main.
{"title":"TD-HCN: A trend-driven hypergraph convolutional network for stock return prediction","authors":"Lexin Fang , Tianlong Zhao , Junlei Yu , Qiang Guo , Xuemei Li , Caiming Zhang","doi":"10.1016/j.neunet.2025.107729","DOIUrl":"10.1016/j.neunet.2025.107729","url":null,"abstract":"<div><div>Stock data analysis has become one of the most challenging tasks in time series data analysis due to its dynamism, complexity, and nonlinearity. Recently, relational graphs have become popular for describing certain important relationships in data, particularly by mapping indirect and direct relationships between stocks into non-Euclidean spaces. Existing graph-based methods mainly capture simple pairwise and static relationships between stocks, so they cannot effectively identify higher-order relationships and characterize the dynamic trends of stock relationships. This limitation restricts the performance of stock return prediction models. A variety of stock data types reveal complex relationships among stocks, such as stock prices, industry links, and wiki relationships. This paper proposes a novel <strong>T</strong>rend-<strong>D</strong>riven <strong>H</strong>ypergraph <strong>C</strong>onvolutional <strong>N</strong>etwork (<strong>TD-HCN</strong>) that integrates these data types in order to predict stock rankings through a cooperative learning method of local dynamic and global static relationships across temporal dimensions. To be concrete, we employ a Prior-constrained Relational Learning (PCRL) model that leverages explicit prior knowledge to guide the discovery of latent high-order relationships among stocks. In order to comprehensively capture and utilize dynamic trends in relationships among stocks, a Disentanglement Representation Learning (DRL) mechanism is developed to enhance the key trend features through the disentanglement operation and dual attention module. Extensive experiments on NASDAQ and NYSE datasets show that TD-HCN consistently outperforms the state-of-the-art methods by a considerable margin in terms of returns. It is also effective and robust in learning the dynamic relationships among stocks and capturing key changes in trends within those relationships.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107729"},"PeriodicalIF":6.0,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Neural Networks, Volume 190, Article 107692. Pub Date: 2025-06-16. DOI: 10.1016/j.neunet.2025.107692
Title: Counterfactual value decomposition for cooperative multi-agent reinforcement learning
Authors: Kai Liu, Tianxian Zhang, Xiangliang Xu, Yuyang Zhao
Abstract: Value decomposition has become a central focus in Multi-Agent Reinforcement Learning (MARL) in recent years. The key challenge lies in the construction and updating of the factored value function (FVF). Traditional methods rely on FVFs with restricted representational capacity, rendering them inadequate for tasks with non-monotonic payoffs. Recent approaches address this limitation by designing FVF update mechanisms that enable applicability to non-monotonic scenarios. However, these methods typically depend on the true optimal joint action value to guide FVF updates. Since the true optimal joint action is computationally infeasible in practice, these methods approximate it using the greedy joint action and update the FVF with the corresponding greedy joint action value. We observe that although the greedy joint action may be close to the true optimal joint action, its associated greedy joint action value can be substantially biased relative to the true optimal joint action value. This makes the approximation unreliable and can lead to incorrect update directions for the FVF, hindering the learning process. To overcome this limitation, we propose Comix, a novel off-policy MARL method based on a Sandwich Value Decomposition Framework. Comix constrains and guides FVF updates using both upper and lower bounds. Specifically, it leverages orthogonal best responses to construct the upper bound, thus overcoming the drawbacks introduced by the optimal approximation. Furthermore, an attention mechanism is incorporated to ensure that the upper bound can be computed with linear time complexity and high accuracy. Theoretical analyses show that Comix satisfies the IGM (Individual-Global-Max) condition. Experiments on the asymmetric One-Step Matrix Game, discrete Predator-Prey, and the StarCraft Multi-Agent Challenge show that Comix achieves higher learning efficiency and outperforms several state-of-the-art methods.
{"title":"BRSR-OpGAN: Blind radar signal restoration using operational generative adversarial network","authors":"Muhammad Uzair Zahid , Serkan Kiranyaz , Alper Yildirim , Moncef Gabbouj","doi":"10.1016/j.neunet.2025.107709","DOIUrl":"10.1016/j.neunet.2025.107709","url":null,"abstract":"<div><div>Many studies on radar signal restoration in the literature focus on isolated restoration problems, such as denoising over a certain type of noise, while ignoring other types of artifacts. Additionally, these approaches usually assume a noisy environment with a limited set of fixed signal-to-noise ratio (SNR) levels. However, real-world radar signals are often corrupted by a blend of artifacts, including but not limited to unwanted echo, sensor noise, intentional jamming, and interference, each of which can vary in type, severity, and duration. This study introduces Blind Radar Signal Restoration using an Operational Generative Adversarial Network (BRSR-OpGAN), which uses a dual domain loss in the temporal and spectral domains. This approach is designed to improve the quality of radar signals, regardless of the diversity and intensity of the corruption. The BRSR-OpGAN utilizes 1D Operational GANs, which use a generative neuron model specifically optimized for blind restoration of corrupted radar signals. This approach leverages GANs’ flexibility to adapt dynamically to a wide range of artifact characteristics. The proposed approach has been extensively evaluated using a well-established baseline and a newly curated extended dataset called the Blind Radar Signal Restoration (BRSR) dataset. This dataset was designed to simulate real-world conditions and includes a variety of artifacts, each varying in severity. The evaluation shows an average SNR improvement over 15.1 dB and 14.3 dB for the baseline and BRSR datasets, respectively. Finally, the proposed approach can be applied in real-time, even on resource-constrained platforms. This pilot study demonstrates the effectiveness of blind radar restoration in time-domain for real-world radar signals, achieving exceptional performance across various SNR values and artifact types. The BRSR-OpGAN method exhibits robust and computationally efficient restoration of real-world radar signals, significantly outperforming existing methods.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107709"},"PeriodicalIF":6.0,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144320858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Neural Networks, Volume 190, Article 107697. Pub Date: 2025-06-16. DOI: 10.1016/j.neunet.2025.107697
Title: VFF-Net: Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights
Authors: Gilha Lee, Jin Shin, Hyun Kim
Abstract: In recent years, significant efforts have been made to overcome the limitations inherent in the traditional back-propagation (BP) algorithm. These limitations include overfitting, vanishing/exploding gradients, slow convergence, and its black-box nature. To address them, alternatives to BP have been explored, the most well-known of which is the forward–forward network (FFN). We propose a visual forward–forward network (VFF-Net) that significantly improves FFNs for deeper networks, focusing on enhancing performance in convolutional neural network (CNN) training. VFF-Net utilizes a label-wise noise labeling method and a cosine-similarity-based contrastive loss, which directly uses intermediate features to solve both the input information loss problem and the performance drop caused by the goodness function when applied to CNNs. Furthermore, VFF-Net is accompanied by layer grouping, which groups layers with the same output channel count for application to well-known existing CNN-based models; this reduces the number of minima that need to be optimized and facilitates transfer to CNN-based models through ensemble-like training effects. VFF-Net reduces the test error by up to 8.31% and 3.80% on a model consisting of four convolutional layers compared with the FFN model targeting a conventional CNN on CIFAR-10 and CIFAR-100, respectively. Furthermore, the fully connected layer-based VFF-Net achieves a test error of 1.70% on the MNIST dataset, which is better than that of existing BP. In conclusion, the proposed VFF-Net significantly reduces the performance gap with BP by improving the FFN and shows the flexibility to be ported to existing CNN-based models.