Neurocomputing, Pub Date: 2026-04-28, Epub Date: 2026-02-05, DOI: 10.1016/j.neucom.2026.132948
Zhiqiang Zhang, Tianpeng Cheng, Bing Li, Yuankang Sun, Chengxu Wang
{"title":"Memory recall-driven multi-view semantic inference for offensive language detection","authors":"Zhiqiang Zhang , Tianpeng Cheng , Bing Li , Yuankang Sun , Chengxu Wang","doi":"10.1016/j.neucom.2026.132948","DOIUrl":"10.1016/j.neucom.2026.132948","url":null,"abstract":"<div><div>The detection of offensive language plays a critical role in maintaining the health of online communities, preventing cyberbullying, and fostering inclusive communication. Current approaches prompt LLMs to classify offensiveness directly, but their weaknesses in complex contextual reasoning and in detecting subtle cues in conversational settings substantially reduce detection performance. To address these challenges, we propose the <strong>M</strong>emory <strong>R</strong>ecall-Driven <strong>M</strong>ulti-<strong>V</strong>iew <strong>S</strong>emantic <strong>I</strong>nference (MR-MVSI) model. Specifically, we first build a multi-view semantic inference module that enables the model to effectively capture subtle contextual cues and underlying emotional features from situational backgrounds, communicative targets, and emotions. Meanwhile, we employ a self-check mechanism to verify the generated information and regenerate it when necessary, thereby ensuring the rigor and reliability of the inference process. In addition, we introduce a training memory recall module, which embeds the input samples into a semantically rich embedding space and retrieves the most relevant memory segments to interpret complex linguistic patterns, thus significantly improving the detection accuracy. 
The experimental results demonstrate that our proposed MR-MVSI model achieves superior performance across all three benchmark datasets (OLID, HateXplain, and HatEval), with performance improvements of <span><math><mn>6.6</mn><mi>%</mi></math></span>, <span><math><mn>0.2</mn><mi>%</mi></math></span>, and <span><math><mn>7.6</mn><mi>%</mi></math></span> respectively.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132948"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146147512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
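The training memory recall module described above — embedding an input and retrieving its most relevant memory segments — can be sketched as a nearest-neighbour lookup; the use of cosine similarity and all names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def recall_top_k(query_vec, memory_bank, k=3):
    """Return indices of the k memory embeddings most similar to the query.

    Rows of `memory_bank` are embeddings of stored training segments;
    cosine similarity is an illustrative choice of retrieval metric.
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_bank / np.linalg.norm(memory_bank, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarity to every memory slot
    return np.argsort(-sims)[:k]      # indices of the k closest segments

# Toy memory bank of four 3-d "segment" embeddings.
bank = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 0.0, 1.0]])
idx = recall_top_k(np.array([1.0, 0.05, 0.0]), bank, k=2)
```

The retrieved segments would then condition the model's inference over the new input.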
Neurocomputing, Pub Date: 2026-04-28, Epub Date: 2026-01-12, DOI: 10.1016/j.neucom.2026.132653
Chunlin Xu, Erbing Li, Huihui Li, Xiaoyong Liu, Weiqi Chen, Jianhua Guo
{"title":"Enhancing multimodal sentiment analysis via pairwise emotional correlation distillation and information bottleneck","authors":"Chunlin Xu , Erbing Li , Huihui Li , Xiaoyong Liu , Weiqi Chen , Jianhua Guo","doi":"10.1016/j.neucom.2026.132653","DOIUrl":"10.1016/j.neucom.2026.132653","url":null,"abstract":"<div><div>Multimodal Sentiment Analysis (MSA) aims to recognize human emotions by integrating text, audio, and visual modalities. While recent feature-decoupling methods have successfully separated modality-common and modality-specific features, they often overlook the impact of noise within individual modalities, leading to the degradation of shared representations and the retention of redundant information. To address these limitations, we propose the Refined Emotion Distillation Framework (REDF), a novel architecture designed to enhance robustness against noise and misalignment. REDF introduces two key innovations. First, the Pairwise Emotional Correlation Distillation (PECD) module captures fine-grained cross-modal interactions via a co-attention mechanism and distills this dynamic knowledge into the static common representation, ensuring alignment robustness even when single modalities are corrupted. Second, the Modality-Specific Information Refinement (MSIR) module strategically applies the Information Bottleneck principle to post-decoupling features, filtering task-irrelevant noise while preserving discriminative emotional cues. 
Experimental results on the CMU-MOSI and CMU-MOSEI datasets demonstrate that REDF significantly outperforms state-of-the-art baselines, validating its effectiveness in noisy and complex scenarios.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132653"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146172895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
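The Information Bottleneck principle that MSIR applies is commonly operationalised (e.g. in variational IB methods) as a KL penalty pulling the feature posterior toward a standard normal prior. The sketch below shows that generic penalty, not the authors' exact loss:

```python
import numpy as np

def vib_kl_penalty(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), averaged over the batch.

    A generic variational-information-bottleneck compression term:
    minimising it squeezes out information the task head does not need.
    """
    kl_per_dim = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)
    return kl_per_dim.sum(axis=1).mean()

# A feature whose posterior already matches the prior incurs zero penalty.
mu = np.zeros((4, 8))
log_var = np.zeros((4, 8))
penalty = vib_kl_penalty(mu, log_var)
```

In a full model this term would be weighted against the sentiment prediction loss.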
{"title":"Edge-centric community hiding based on permanence in attributed networks","authors":"Zhichao Feng , Bohan Zhang , Junchang Jing , Dong Liu","doi":"10.1016/j.neucom.2026.132924","DOIUrl":"10.1016/j.neucom.2026.132924","url":null,"abstract":"<div><div>Attributed networks contain both structural connections and rich node attributes, which are crucial for the formation and identification of community structures. Although integrating attribute data enhances the accuracy of community detection algorithms, it also raises the risk of privacy leakage. To address this issue, community hiding has emerged as a promising solution. However, most existing research has centered on topological networks, leaving attributed networks largely unexplored. In response to these issues, we propose Attribute Permanence (APERM)—a novel community hiding method specifically designed for attributed networks, which quantifies permanence loss to identify structurally influential edges for perturbation. The objective of our perturbation strategy is to disrupt the global community structure, which typically involves considering all existing and potential edges in the network, and this introduces considerable computational complexity. To tackle this problem, we introduce a strategy that identifies Closely Homogeneous Nodes (CHN) by integrating both structural similarity and attribute information, thereby significantly reducing the edge perturbation search space. 
The experimental results from eight community detection algorithms (four for attributed networks and four for non-attributed networks) across six real-world datasets demonstrate that our proposed APERM algorithm not only achieves effective community hiding but also retains robust performance.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132924"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146147231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
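APERM's perturbation objective builds on the permanence measure. As background, the standard permanence of a node (Chakraborty et al.) can be computed as below; this sketch reflects the common structural definition, not APERM's attribute-aware extension:

```python
import numpy as np

def permanence(adj, comm, v):
    """Permanence of node v: high values mean v is firmly held inside its
    community; community-hiding methods perturb edges to lower it.

    perm(v) = I(v) / (E_max(v) * deg(v)) - (1 - c_in(v)),
    where I(v) is the internal degree, E_max(v) the max connections to any
    single external community, c_in(v) the clustering among internal neighbours.
    """
    nbrs = np.flatnonzero(adj[v])
    internal = [u for u in nbrs if comm[u] == comm[v]]
    external = [u for u in nbrs if comm[u] != comm[v]]
    deg = len(nbrs)
    counts = {}
    for u in external:
        counts[comm[u]] = counts.get(comm[u], 0) + 1
    e_max = max(counts.values()) if counts else 1  # convention: 1 if no external edges
    k = len(internal)
    if k < 2:
        c_in = 0.0
    else:
        links = sum(adj[a, b] for a in internal for b in internal if a < b)
        c_in = 2.0 * links / (k * (k - 1))
    return len(internal) / (e_max * deg) - (1.0 - c_in)

# Triangle community {0, 1, 2} plus one external edge from node 0 to node 3.
adj = np.zeros((4, 4), dtype=int)
for a, b in [(0, 1), (0, 2), (1, 2), (0, 3)]:
    adj[a, b] = adj[b, a] = 1
p = permanence(adj, comm=[0, 0, 0, 1], v=0)
```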
{"title":"Tensor-to-tensor models with fast iterated sum features","authors":"Joscha Diehl , Rasheed Ibraheem , Leonard Schmitz , Yue Wu","doi":"10.1016/j.neucom.2026.132884","DOIUrl":"10.1016/j.neucom.2026.132884","url":null,"abstract":"<div><div>Designing expressive yet computationally efficient layers for high-dimensional tensor data (e.g., images) remains a significant challenge. While sequence modeling has seen a shift toward linear-time architectures, extending these benefits to higher-order tensors is non-trivial.</div><div>In this work, we introduce the <strong>Fast Iterated Sums (FIS)</strong> layer, a novel tensor-to-tensor primitive with <strong>linear time and space complexity</strong> relative to the input size.</div><div>Theoretically, our framework bridges deep learning and algorithmic combinatorics: it leverages “corner tree” structures from permutation pattern counting to efficiently compute 2D iterated sums. This formulation admits dual interpretations as both a higher-order state-space model (SSM) and a multiparameter extension of the Signature Transform.</div><div>Practically, the FIS layer serves as a drop-in replacement for standard layers in vision backbones. We evaluate its performance on image classification and anomaly detection. When replacing layers in a smaller ResNet, the FIS-based model matches the accuracy of a larger ResNet baseline while reducing both trainable parameters and multiply-add operations. When replacing layers in ConvNeXt tiny, the FIS-based model saves around 2% of parameters, shortens time per epoch by around 8%, and improves accuracy by around 0.6% on CIFAR-10 and around 2% on CIFAR-100. Furthermore, on the texture subset of MVTec AD, it attains an average AUROC of 97.3%. 
The code is available at <span><span>https://github.com/diehlj/fast-iterated-sums</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132884"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146147507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
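The FIS layer's core object is a 2D iterated sum computed in linear time. As a heavily simplified illustration (the lowest-order case only, far simpler than the paper's corner-tree features), such a sum reduces to a two-axis prefix sum:

```python
import numpy as np

def prefix_sum_2d(x):
    """Two-axis cumulative sum: S[i, j] = sum of x[:i+1, :j+1].

    One pass per axis, so time and space are linear in the number of
    entries — the same budget the FIS layer targets for its features.
    """
    return np.cumsum(np.cumsum(x, axis=0), axis=1)

x = np.arange(6, dtype=float).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
s = prefix_sum_2d(x)
```

Higher-order iterated sums compose such passes with pointwise products, which is where the corner-tree machinery comes in.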
{"title":"HierLoRA: A hierarchical multi-concept learning approach with enhanced LoRA for personalized image diffusion models","authors":"Yongjie Niu , Pengbo Zhou , Rui Zhou , Mingquan Zhou","doi":"10.1016/j.neucom.2026.132927","DOIUrl":"10.1016/j.neucom.2026.132927","url":null,"abstract":"<div><div>Personalized image generation, a key application of diffusion models, holds significant importance for the advancement of computer vision, artistic creation, and content generation technologies. However, existing diffusion models fine-tuned with Low-Rank Adaptation (LoRA) face multiple challenges when learning novel concepts: language drift undermines the generation quality of new concepts in novel contexts; the entanglement of object features with other elements in reference images leads to misalignment between the learning target and its unique identifier; and traditional LoRA approaches are limited to learning only one concept at a time. To address these issues, this study proposes a novel hierarchical learning strategy and an enhanced LoRA module. Specifically, we incorporate the GeLU activation function into the LoRA architecture as a nonlinear transformation to effectively mitigate language drift. Furthermore, a gated hierarchical learning mechanism is designed to achieve inter-concept disentanglement, enabling a single LoRA module to learn multiple concepts concurrently. Experimental results across multiple random seeds demonstrate that our approach achieves a 4%–6% improvement in memory retention metrics and outperforms state-of-the-art methods in object fidelity and style similarity by approximately 12.5% and 10%, respectively. In addition to superior generation quality, our method demonstrates high computational efficiency, requiring significantly fewer trainable parameters (<span><math><mo>∼</mo></math></span>45M) compared to existing baselines. 
While preserving critical features of target objects and maintaining the model’s original capabilities, our method enables the generation of images across diverse scenes in new styles. In scenarios requiring the simultaneous learning of multiple concepts, this study not only presents a novel solution to the multi-concept learning problem in personalized diffusion model training but also lays a technical foundation for high-quality customized AI image generation and diverse visual content creation. <strong>The source code is publicly available at</strong> <span><span><strong>https://github.com/ydniuyongjie/HierLoRA/tree/main</strong></span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132927"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146147510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
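The abstract's central architectural change — a GeLU nonlinearity inside the LoRA update path — can be sketched as follows; the rank, scaling, and exact placement of the activation between the down- and up-projections are assumptions for illustration:

```python
import numpy as np

def gelu(x):
    """tanh approximation of GeLU."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def lora_gelu_forward(x, W, A, B, alpha=1.0):
    """Frozen weight W plus a low-rank update, with a GeLU between the
    down-projection A and up-projection B (placement is an assumption)."""
    return x @ W.T + alpha * (gelu(x @ A.T) @ B.T)

rng = np.random.default_rng(0)
d, r = 8, 2
x = rng.normal(size=(4, d))
W = rng.normal(size=(d, d))
A = rng.normal(size=(r, d)) * 0.01   # down-projection, rank r
B = np.zeros((d, r))                 # up-projection starts at zero, as in LoRA
y = lora_gelu_forward(x, W, A, B)
```

With B initialised to zero the update path contributes nothing at the start of fine-tuning, preserving the base model's behaviour.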
Neurocomputing, Pub Date: 2026-04-28, Epub Date: 2026-02-04, DOI: 10.1016/j.neucom.2026.132971
Jingyi He, Yongjun Li, Yifei Liang, Mengyan Lu, Haorui Liu, Jixing Zhou, Yi Wei, Hongyan Liu
{"title":"Depth aware image compression with multi-reference dynamic entropy model","authors":"Jingyi He, Yongjun Li, Yifei Liang, Mengyan Lu, Haorui Liu, Jixing Zhou, Yi Wei, Hongyan Liu","doi":"10.1016/j.neucom.2026.132971","DOIUrl":"10.1016/j.neucom.2026.132971","url":null,"abstract":"<div><div>To overcome the limitations of static feature extraction and inefficient context modeling in existing learned image compression, this paper proposes an image compression algorithm that integrates a Depth-aware Adaptive Transformation (DAT) framework with a Multi-reference Dynamic Entropy Model (MDEM). The proposed Multi-scale Capacity-aware Feature Enhancer (MCFE) module is adaptively embedded into the network to strengthen feature extraction. The DAT architecture integrates a variational autoencoder framework with MCFE to increase the density of latent representations. Furthermore, an improved soft-threshold sparse attention mechanism is combined with a multi-context model, incorporating adaptive weights to eliminate spatial redundancy in the latent representations across local, non-local, and global dimensions, while channel context is introduced to capture channel dependencies. Building upon this, the MDEM integrates the side information provided by DAT with spatial and channel context information and employs a channel-wise autoregressive model to achieve precise entropy probability estimation, which improves compression performance. Evaluated on the Kodak, Tecnick, and CLIC (Challenge on Learned Image Compression) Professional Validation datasets, the proposed method achieves BD-rate (Bjøntegaard Delta rate) gains of <span><math><mn>7.75</mn><mi>%</mi></math></span>, <span><math><mn>9.33</mn><mi>%</mi></math></span>, and <span><math><mn>5.73</mn><mi>%</mi></math></span>, respectively, compared to the VTM-17.0 (Versatile Video Coding Test Model) benchmark. 
Therefore, the proposed algorithm overcomes the limitations of fixed-context and static feature extraction strategies, enabling precise probability estimation and superior compression performance through dynamic resource allocation and multi-dimensional contextual modeling.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132971"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146147513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing, Pub Date: 2026-04-28, Epub Date: 2026-02-07, DOI: 10.1016/j.neucom.2026.132991
Seyed Amir Malekpour, Hamid Pezeshk
{"title":"Explainable artificial intelligence with Boolean rule-aware predictions in ridge regression models","authors":"Seyed Amir Malekpour , Hamid Pezeshk","doi":"10.1016/j.neucom.2026.132991","DOIUrl":"10.1016/j.neucom.2026.132991","url":null,"abstract":"<div><div>Recent artificial intelligence (AI) systems, including deep neural networks (DNNs), have become increasingly complex and less interpretable. We propose a model named Regression-Based Boolean Rule Inference, RBBR, that is understandable to humans. By transforming input features into multiple conjunctions, RBBR fits a ridge regression model to the conjunctions and target variable data and derives the Boolean rule set from conjunctions with a positive weight sign in the model. Moreover, for high-dimensional datasets, a strategy is presented to derive Boolean sub-rules from regression sub-models fitted to specific feature subsets. The Bayesian Information Criterion (BIC) is employed to rank the fitted models and associated Boolean rules, striking a balance between interpretability and accuracy. Additionally, a Bayesian framework is proposed for predicting the target class of new datapoints based on top-ranked Boolean rules selected by BIC. By considering the combinatorial interactions among input features, RBBR offers a robust feature selection strategy, surpassing decision trees. Experiments conducted on datasets with low sample sizes reveal that RBBR exhibits data efficiency. 
Our approach for Boolean rule inference from regression models is compatible with the learning structure of black-box models like DNNs, enabling the interpretation of parameter sets or neurons using Boolean rules.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132991"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146172891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
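The core RBBR loop described above — regress the target on feature conjunctions with ridge, then keep positive-weight conjunctions as rules — can be sketched as follows. Restricting to pairwise conjunctions, centering the target, and the closed-form solver are illustrative simplifications, not the paper's procedure:

```python
import numpy as np
from itertools import combinations

def positive_weight_rules(X_bool, y, lam=1.0):
    """Fit ridge regression on all pairwise AND-conjunctions of Boolean
    features and return the conjunctions with a positive fitted weight."""
    pairs = list(combinations(range(X_bool.shape[1]), 2))
    C = np.column_stack([X_bool[:, i] & X_bool[:, j] for i, j in pairs]).astype(float)
    yc = y - y.mean()  # center so only truly predictive conjunctions stay positive
    # Closed-form ridge solution: w = (C^T C + lam * I)^{-1} C^T y
    w = np.linalg.solve(C.T @ C + lam * np.eye(len(pairs)), C.T @ yc)
    return [pairs[k] for k in np.flatnonzero(w > 0)]

# Toy data whose target is exactly the rule "x0 AND x1".
X = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1],
              [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=int)
y = (X[:, 0] & X[:, 1]).astype(float)
rules = positive_weight_rules(X, y, lam=0.1)
```

On this toy data the spurious conjunctions (x0 AND x2, x1 AND x2) receive negative weights and are discarded.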
Neurocomputing, Pub Date: 2026-04-28, Epub Date: 2026-02-03, DOI: 10.1016/j.neucom.2026.132952
Andrea Ceni, Valerio De Caro, Davide Bacciu, Claudio Gallicchio
{"title":"Sparse assemblies of recurrent neural networks with stability guarantees","authors":"Andrea Ceni, Valerio De Caro, Davide Bacciu, Claudio Gallicchio","doi":"10.1016/j.neucom.2026.132952","DOIUrl":"10.1016/j.neucom.2026.132952","url":null,"abstract":"<div><div>We introduce AdaDiag, a framework for constructing sparse assemblies of recurrent neural networks (RNNs) with formal stability guarantees. Our approach builds upon contraction theory by designing RNN modules that are inherently contractive through adaptive diagonal parametrization and learnable characteristic time scales. This formulation enables each module to remain fully trainable while preserving global stability under skew-symmetric coupling. We provide rigorous theoretical analysis of contractivity, along with a complexity discussion showing that stability is achieved without additional computational burden. Experiments on ten heterogeneous time series benchmarks demonstrate that AdaDiag consistently surpasses SCN, LSTM, and Vanilla RNN baselines, and achieves competitive performance with state-of-the-art models, all while requiring substantially fewer trainable parameters. These results highlight the effectiveness of sparse and stable assemblies for efficient, adaptive, and generalizable sequence modeling.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132952"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146172947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
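The stability construction described above — trainable diagonal self-dynamics plus skew-symmetric coupling — can be illustrated on a linear toy system: the skew-symmetric part drops out of the symmetric part of the Jacobian, so a negative diagonal alone certifies contraction. This is a simplified reading of the construction, with all parameter choices illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
D = -np.diag(rng.uniform(0.5, 1.5, size=n))  # strictly negative diagonal
S = rng.normal(size=(n, n))
K = S - S.T                                   # skew-symmetric coupling

W = D + K
# Contraction certificate: the symmetric part of W is exactly D, so its
# largest eigenvalue is the largest (still negative) diagonal entry.
sym_eigs = np.linalg.eigvalsh(0.5 * (W + W.T))

# Two trajectories of the Euler-discretised system x <- x + dt * W x
# converge toward each other regardless of initial conditions.
dt = 0.01
x1, x2 = rng.normal(size=n), rng.normal(size=n)
gap0 = np.linalg.norm(x1 - x2)
for _ in range(2000):
    x1 = x1 + dt * (W @ x1)
    x2 = x2 + dt * (W @ x2)
gap = np.linalg.norm(x1 - x2)
```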
{"title":"CGEM: A cognitive-guided network for human-aligned entity matching","authors":"Xin Liu, Xiaojun Li, Junping Yao, Yanfei Liu, Qinggang Fan, Haifeng Sun, Chengrong Dong","doi":"10.1016/j.neucom.2026.132950","DOIUrl":"10.1016/j.neucom.2026.132950","url":null,"abstract":"<div><div>Deep learning (DL) has advanced entity matching (EM), yet limited interpretability is particularly problematic for real-world deployment in decision-support settings, highlighting the need for models aligned with human reasoning as well as strong performance. Existing approaches improve interpretability but rarely reflect how humans make decisions. We propose Cognitive-Guided Entity Matching (CGEM), a human-aligned framework that reconceptualizes EM as a cognitive process rather than a purely technical task. CGEM is grounded in established theories: it introduces complexity-guided gating inspired by Cognitive Load Theory; builds holistic semantic representation grounded in Frame Semantics; and employs core-attribute reasoning following Cue Validity Theory to ensure diagnostic features govern final decisions. CGEM thus explicitly models complexity, contextuality, and diagnosticity, which remain underexplored in EM research. Experiments on DeepMatcher benchmarks show that CGEM delivers its strongest improvements on the Amazon–Google, Abt–Buy, iTunes–Amazon, and Walmart–Amazon datasets, yielding gains of up to 9.34% over DITTO (2023) and 5.51% over AttendEM (2024), and further exceeds large language model (LLM)–based EM methods on multiple benchmarks. 
To the best of our knowledge, CGEM is the first EM framework grounded in cognitive decision-making theories, advancing entity matching with human-aligned reasoning, strong predictive performance, and improved interpretability.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 132950"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146172892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing, Pub Date: 2026-04-28, Epub Date: 2026-02-07, DOI: 10.1016/j.neucom.2026.133019
Lieqiang Yang, Li Yu, Wang Zhang, Chengyan Deng, Jianqin Liu
{"title":"MPFNet: Mamba-driven progressive fusion network for RAW-RGB collaborative demoiréing","authors":"Lieqiang Yang, Li Yu, Wang Zhang, Chengyan Deng, Jianqin Liu","doi":"10.1016/j.neucom.2026.133019","DOIUrl":"10.1016/j.neucom.2026.133019","url":null,"abstract":"<div><div>With the development of smartphones and display technologies, screen-captured images have become an indispensable means of recording information. However, moiré patterns, generated due to the aliasing effect between the Color Filter Array (CFA) and screen display pixels, severely degrade image quality. Existing demoiré methods suffer from issues such as significant loss of original information in RGB images, limited receptive field range, and high computational complexity, leading to incomplete removal of moiré patterns. To address these limitations, we propose a Mamba-Driven Progressive Fusion Network (MPFNet) for RAW-RGB Collaborative Demoiréing. The MPFNet fully leverages RAW data (which retains richer original information) and RGB data (which provides guidance during RAW-to-RGB conversion), while harnessing the global receptive field attention enabled by Mamba’s linear computational complexity, thereby achieving low-color-difference moiré removal. The MPFNet adopts a two-stage architecture: In the first stage, a Simple Demoiré Block (SDB) performs shallow demoiréing on RAW data while extracting multi-scale RAW features. In the second stage, the dual-path adaptive feature fusion (DAFF) module is used to progressively fuse multi-scale RAW and RGB features, and then the DemoiréMamba Block (DMB) is used to achieve deep moiré removal and accurate color restoration. Extensive experiments on TMM22, RAWVDemoiré and FHDMI datasets demonstrate that MPFNet achieves state-of-the-art performance in both quantitative metrics and qualitative visual comparisons, while maintaining relatively low FLOPs. 
For instance, MPFNet achieves a PSNR of 28.86 dB on the TMM22 dataset, 0.51 dB higher than prior methods, while also requiring fewer GFLOPs.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"675 ","pages":"Article 133019"},"PeriodicalIF":6.5,"publicationDate":"2026-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146172894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}