{"title":"Daydreaming Hopfield Networks and their surprising effectiveness on correlated data","authors":"Ludovica Serricchio , Dario Bocchi , Claudio Chilin , Raffaele Marino , Matteo Negri , Chiara Cammarota , Federico Ricci-Tersenghi","doi":"10.1016/j.neunet.2025.107216","DOIUrl":"10.1016/j.neunet.2025.107216","url":null,"abstract":"<div><div>To improve the storage capacity of the Hopfield model, we develop a version of the dreaming algorithm that <em>perpetually</em> reinforces the patterns to be stored (as in the Hebb rule) and erases the spurious memories (as in dreaming algorithms). For this reason, we call it <em>Daydreaming</em>. Daydreaming is not destructive and converges asymptotically to stationary retrieval maps. When trained on random uncorrelated examples, the model shows optimal performance in terms of the size of the basins of attraction of stored examples and the quality of reconstruction. We also train the Daydreaming algorithm on correlated data obtained via the random-features model and argue that it spontaneously exploits the correlations, further increasing the storage capacity and the size of the basins of attraction. Moreover, the Daydreaming algorithm is able to stabilize the features hidden in the data. Finally, we test Daydreaming on the MNIST dataset and show that it still works surprisingly well, producing attractors that are close to unseen examples and class prototypes.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"186 ","pages":"Article 107216"},"PeriodicalIF":6.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143453692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
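The abstract above combines Hebbian reinforcement of stored patterns with dream-style unlearning of spurious attractors. A minimal pure-Python sketch of that combination follows; the specific update rule, learning rate, and pattern sizes here are illustrative assumptions, not the paper's exact Daydreaming rule.

```python
import random

def hebb_weights(patterns):
    """Hebbian storage: W[i][j] = (1/N) * sum_mu xi_mu[i]*xi_mu[j], zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, sweeps=10):
    """Asynchronous sign updates until a fixed point (an attractor) is reached."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

def daydream_step(w, patterns, lr=0.01):
    """One illustrative update: reinforce a stored pattern (Hebb term) and
    unlearn the attractor reached from a random configuration (dreaming term)."""
    n = len(w)
    p = random.choice(patterns)                                 # pattern to reinforce
    q = recall(w, [random.choice((-1, 1)) for _ in range(n)])   # possibly spurious attractor
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i][j] += lr * (p[i] * p[j] - q[i] * q[j]) / n
```

Because the dreaming term is balanced by perpetual reinforcement, repeated `daydream_step` calls perturb but do not destroy retrieval of the stored patterns, consistent with the "not destructive" claim.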
{"title":"Span-aware pre-trained network with deep information bottleneck for scientific entity relation extraction","authors":"Youwei Wang , Peisong Cao , Haichuan Fang , Yangdong Ye","doi":"10.1016/j.neunet.2025.107250","DOIUrl":"10.1016/j.neunet.2025.107250","url":null,"abstract":"<div><div>Scientific entity relation extraction aims to improve the performance of each subtask by exploring contextual representations with rich scientific semantics. However, most existing models suffer from scientific semantic dilution, where task-irrelevant information entangles with task-relevant information, making it challenging to learn science-friendly representations. In addition, existing models isolate task-relevant information among subtasks, undermining the coherence of scientific semantics and consequently impairing the performance of each subtask. To deal with these challenges, a novel and effective <strong>S</strong>pan-aware <strong>P</strong>re-trained network with deep <strong>I</strong>nformation <strong>B</strong>ottleneck (SpIB) is proposed, which conducts scientific entity and relation extraction by minimizing task-irrelevant information while maximizing the relatedness of task-relevant information. Specifically, the SpIB model includes a minimum span-based representation learning (SRL) module and a relatedness-oriented task-relevant representation learning (TRL) module to disentangle the task-irrelevant information and discover the relatedness hidden in task-relevant information across subtasks. Then, an information minimum–maximum strategy is designed to minimize the mutual information of span-based representations and maximize the multivariate information of task-relevant representations. Finally, we design a unified loss function to simultaneously optimize the learned span-based and task-relevant representations. Experimental results on several scientific datasets (SciERC, ADE, and BioRelEx) show the superiority of the proposed SpIB model over various state-of-the-art models. The source code is publicly available at <span><span>https://github.com/SWT-AITeam/SpIB</span></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"186 ","pages":"Article 107250"},"PeriodicalIF":6.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
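The SpIB abstract's minimum–maximum strategy operates on mutual information. For intuition, the quantity being minimized or maximized can be computed exactly for discrete variables; this standard-textbook sketch is not SpIB's estimator, which works on learned continuous representations.

```python
from math import log2
from collections import defaultdict

def mutual_information(joint):
    """I(X;Z) in bits from a joint distribution given as {(x, z): probability}.
    I(X;Z) = sum_{x,z} p(x,z) * log2( p(x,z) / (p(x) * p(z)) )."""
    px, pz = defaultdict(float), defaultdict(float)
    for (x, z), p in joint.items():
        px[x] += p
        pz[z] += p
    return sum(p * log2(p / (px[x] * pz[z]))
               for (x, z), p in joint.items() if p > 0)
```

Minimizing this quantity between spans and task-irrelevant factors, while maximizing it among task-relevant representations, is the essence of an information-bottleneck objective: independent variables give 0 bits, perfectly coupled binary variables give 1 bit.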
{"title":"Hierarchical task network-enhanced multi-agent reinforcement learning: Toward efficient cooperative strategies","authors":"Xuechen Mu , Hankz Hankui Zhuo , Chen Chen , Kai Zhang , Chao Yu , Jianye Hao","doi":"10.1016/j.neunet.2025.107254","DOIUrl":"10.1016/j.neunet.2025.107254","url":null,"abstract":"<div><div>Navigating multi-agent reinforcement learning (MARL) environments with sparse rewards is notoriously difficult, particularly in suboptimal settings where exploration can be prematurely halted. To tackle these challenges, we introduce Hierarchical Symbolic Multi-Agent Reinforcement Learning (HS-MARL), a novel approach that incorporates hierarchical knowledge into MARL to effectively reduce the exploration space. We design intermediate states to decompose the state space into a hierarchical structure, represented using the Hierarchical Domain Definition Language (HDDL) and the option framework, forming domain knowledge and a symbolic option set. We leverage pyHIPOP+, an enhanced hierarchical task network (HTN) planner, to generate action sequences. A high-level meta-controller then assigns these symbolic options as policy functions, guiding low-level agents in their exploration of the environment. During this process, the meta-controller computes intrinsic rewards from the environmental rewards collected, which are used to train the symbolic option policies and refine pyHIPOP+’s heuristic function, thereby optimizing future action sequences. We evaluate HS-MARL against 15 state-of-the-art algorithms across two types of environments: four with sparse rewards and suboptimal conditions, and a real-world scenario involving a football match. Additionally, we perform an ablation study on HS-MARL’s intrinsic reward mechanism and pyHIPOP+, along with a sensitivity analysis of intrinsic reward hyperparameters. Our results show that HS-MARL significantly outperforms other methods in environments with sparse rewards and suboptimal conditions, underscoring the critical role of its intrinsic reward design and the pyHIPOP+ component. The code is available at: <span><span>https://github.com/Mxc666/HS-MARL.git</span></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"186 ","pages":"Article 107254"},"PeriodicalIF":6.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
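The core HTN idea the abstract builds on is that compound tasks decompose via methods into primitive action sequences. The toy planner below illustrates that mechanism only; the task names are hypothetical and this is not pyHIPOP+, which additionally handles preconditions, partial order, and heuristics.

```python
# Toy HTN decomposition: compound tasks expand via methods into subtasks
# until only primitive actions remain. All task names are made up.
METHODS = {
    "score_goal": [["get_ball", "advance", "shoot"]],
    "advance": [["dribble", "pass"]],
}

def decompose(task):
    """Depth-first expansion using each compound task's first method.
    A task absent from METHODS is treated as a primitive action."""
    if task not in METHODS:
        return [task]
    plan = []
    for subtask in METHODS[task][0]:
        plan.extend(decompose(subtask))
    return plan
```

In HS-MARL's setting, the resulting primitive sequences play the role of symbolic options that the meta-controller hands to low-level agents.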
{"title":"EMBANet: A flexible efficient multi-branch attention network","authors":"Keke Zu , Hu Zhang , Lei Zhang , Jian Lu , Chen Xu , Hongyang Chen , Yu Zheng","doi":"10.1016/j.neunet.2025.107248","DOIUrl":"10.1016/j.neunet.2025.107248","url":null,"abstract":"<div><div>Recent advances in the design of convolutional neural networks have shown that performance can be enhanced by improving the ability to represent multi-scale features. However, most existing methods either focus on designing more sophisticated attention modules, which leads to higher computational costs, or fail to effectively establish long-range channel dependencies, or neglect the extraction and utilization of structural information. This work introduces a novel module, the Multi-Branch Concatenation (MBC), designed to process input tensors and extract multi-scale feature maps. The MBC module introduces new degrees of freedom (DoF) in the design of attention networks by allowing for flexible adjustments to the types of transformation operators and the number of branches. This study considers two key transformation operators: multiplexing and splitting, both of which facilitate a more granular representation of multi-scale features and enhance the receptive field range. By integrating the MBC with an attention module, a Multi-Branch Attention (MBA) module is developed to capture channel-wise interactions within feature maps, thereby establishing long-range channel dependencies. Replacing the 3×3 convolutions in the bottleneck blocks of ResNet with the proposed MBA yields a new block, the Efficient Multi-Branch Attention (EMBA), which can be seamlessly integrated into state-of-the-art backbone CNN models. Furthermore, a new backbone network, named EMBANet, is constructed by stacking EMBA blocks. The proposed EMBANet has been thoroughly evaluated across various computer vision tasks, including classification, detection, and segmentation, consistently demonstrating superior performance compared to popular backbones.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107248"},"PeriodicalIF":6.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143395074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
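The abstract names two transformation operators, splitting and multiplexing, which route input channels to branches before per-branch transforms and concatenation. The sketch below shows only that routing logic on plain channel lists; the real MBC operates on tensors with convolutional transforms of different kernel sizes, so everything here is a structural illustration.

```python
def split_branches(channels, n_branches):
    """'Splitting': partition the input channels across the branches."""
    k, r = divmod(len(channels), n_branches)
    out, start = [], 0
    for b in range(n_branches):
        size = k + (1 if b < r else 0)
        out.append(channels[start:start + size])
        start += size
    return out

def multiplex_branches(channels, n_branches):
    """'Multiplexing': every branch sees a copy of the full input."""
    return [list(channels) for _ in range(n_branches)]

def mbc(channels, n_branches, operator, transforms):
    """Multi-Branch Concatenation: route input to branches, apply a per-branch
    transform (different kernel sizes in the real module), then concatenate."""
    branches = operator(channels, n_branches)
    out = []
    for branch, f in zip(branches, transforms):
        out.extend(f(branch))
    return out
```

Swapping `operator` between `split_branches` and `multiplex_branches`, or changing `n_branches`, exercises the degrees of freedom the abstract describes.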
{"title":"Neural-network-based accelerated safe Q-learning for optimal control of discrete-time nonlinear systems with state constraints","authors":"Mingming Zhao, Ding Wang, Junfei Qiao","doi":"10.1016/j.neunet.2025.107249","DOIUrl":"10.1016/j.neunet.2025.107249","url":null,"abstract":"<div><div>For unknown nonlinear systems with state constraints, it is difficult to achieve safe optimal control by using Q-learning methods based on traditional quadratic utility functions. To solve this problem, this article proposes an accelerated safe Q-learning (SQL) technique that addresses the concurrent requirements of safety and optimality for discrete-time nonlinear systems within an integrated framework. First, an adjustable control barrier function is designed and integrated into the cost function, transforming the constrained optimal control problem into an unconstrained one. The augmented cost function is closely linked to the next state, enabling the state to move away from constraint boundaries more quickly. Second, leveraging offline data that adheres to safety constraints, we introduce an off-policy value iteration SQL approach to search for a safe optimal policy, thus mitigating the risk of unsafe interactions that may result from suboptimal iterative policies. Third, the vast amounts of offline data and the complex augmented cost function can hinder the learning speed of the algorithm. To address this issue, we integrate historical iteration information into the current iteration step to accelerate policy evaluation, and introduce the Nesterov momentum technique to expedite policy improvement. Additionally, the theoretical analysis demonstrates the convergence, optimality, and safety of the SQL algorithm. Finally, under the influence of different parameters, simulation outcomes of two nonlinear systems with state constraints reveal the efficacy and advantages of the accelerated SQL approach. The proposed method requires fewer iterations while enabling the system state to converge to the equilibrium point more rapidly.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"186 ","pages":"Article 107249"},"PeriodicalIF":6.0,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143422381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
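The abstract's first step augments a quadratic utility with a control barrier term so the constrained problem becomes unconstrained. The sketch below uses a log-type barrier for a scalar box constraint |x| < bound; the barrier form, gain, and weights are illustrative assumptions, not the paper's adjustable barrier.

```python
from math import log

def barrier(x, bound, gain=1.0):
    """Log-type barrier for |x| < bound: zero at the origin, unbounded as the
    state approaches the constraint boundary. The form is illustrative."""
    assert abs(x) < bound, "state outside the safe set"
    return -gain * log(1.0 - (x / bound) ** 2)

def augmented_cost(x, u, bound, q=1.0, r=1.0, gain=1.0):
    """Quadratic utility q*x^2 + r*u^2 plus the barrier term that embeds
    the state constraint into the cost."""
    return q * x * x + r * u * u + barrier(x, bound, gain)
```

Because the barrier grows steeply near the boundary, a policy minimizing this cost is pushed away from constraint violations, which is the mechanism the abstract relies on.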
{"title":"Intervening on few-shot object detection based on the front-door criterion","authors":"Yanan Zhang , Jiangmeng Li , Qirui Ji , Kai Li , Lixiang Liu , Changwen Zheng , Wenwen Qiang","doi":"10.1016/j.neunet.2025.107251","DOIUrl":"10.1016/j.neunet.2025.107251","url":null,"abstract":"<div><div>Most few-shot object detection methods aim to utilize the generalizable knowledge learned from base categories to identify instances of novel categories. The fundamental assumption of these approaches is that the model can acquire sufficient transferable knowledge through the learning of base categories. However, our motivating experiments reveal that the model overfits the data of the base categories. To discuss the impact of this phenomenon on detection from a causal perspective, we develop a Structural Causal Model involving two key variables, causal generative factors and spurious generative factors. Both variables are derived from the base categories. Generative factors are latent variables or features that control image generation. Causal generative factors are general generative factors that directly influence the generation process, while spurious generative factors are specific to certain categories, namely the base categories in the problem we are analyzing. We recognize that the essence of few-shot object detection methods lies in modeling the statistical dependence between novel object instances and their corresponding categories determined by the causal generative factors, while the set of spurious generative factors serves as a confounder in the modeling process. To mitigate the misleading impact of the spurious generative factors, we propose the <em><strong>F</strong>ront-door <strong>R</strong>egulator</em> guided by the front-door criterion. The <em><strong>F</strong>ront-door <strong>R</strong>egulator</em> consists of two plug-and-play regularization terms, namely Semantic Grouping and Semantic Decoupling. We substantiate the effectiveness of our proposed method through experiments conducted on multiple benchmark datasets.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107251"},"PeriodicalIF":6.0,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143395073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
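The front-door criterion the regulator is built on has a standard adjustment formula for deconfounding: P(y | do(x)) = Σ_m P(m|x) Σ_x' P(x') P(y|x',m), where m is a mediator. The toy discrete computation below illustrates that formula only; the probability tables are made-up numbers, and the paper applies the criterion to learned representations rather than explicit tables.

```python
def front_door(p_x, p_m_given_x, p_y_given_xm, x, y):
    """Front-door adjustment:
    P(y | do(x)) = sum_m P(m|x) * sum_x' P(x') * P(y | x', m)."""
    total = 0.0
    for m, pm in p_m_given_x[x].items():
        inner = sum(p_x[xp] * p_y_given_xm[(xp, m)][y] for xp in p_x)
        total += pm * inner
    return total

# Toy binary model (all numbers are illustrative).
p_x = {0: 0.6, 1: 0.4}
p_m_given_x = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
p_y_given_xm = {
    (0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.4, 1: 0.6},
    (1, 0): {0: 0.7, 1: 0.3}, (1, 1): {0: 0.2, 1: 0.8},
}
```

The adjustment marginalizes the confounder out through the mediator, which is exactly how the regulator is meant to neutralize the spurious generative factors.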
{"title":"Enhancing spatial perception and contextual understanding for 3D dense captioning","authors":"Jie Yan , Yuxiang Xie , Shiwei Zou , Yingmei Wei , Xidao Luan","doi":"10.1016/j.neunet.2025.107252","DOIUrl":"10.1016/j.neunet.2025.107252","url":null,"abstract":"<div><div>3D dense captioning (3D-DC) transcends traditional 2D image captioning by requiring detailed spatial understanding and object localization, aiming to generate high-quality descriptions for objects within 3D environments. Current approaches struggle to accurately describe spatial relationships among objects and suffer from discrepancies between object detection and caption generation. To address these limitations, we introduce a novel one-stage 3D-DC model that integrates a Query-Guided Detector and a Task-Specific Context-Aware Captioner to enhance the performance of 3D-DC. The Query-Guided Detector employs an adaptive query mechanism and leverages the Transformer architecture to dynamically adjust attention focus across layers, improving the model’s comprehension of spatial relationships within point clouds. Additionally, the Task-Specific Context-Aware Captioner incorporates task-specific context-aware prompts and a Squeeze-and-Excitation (SE) module to improve contextual understanding and ensure consistency and accuracy between detected objects and their descriptions. A two-stage learning rate update strategy is proposed to optimize the training of the Query-Guided Detector. Extensive experiments on the ScanRefer and Nr3D datasets demonstrate the superiority of our approach, outperforming previous two-stage ‘detect-then-describe’ methods and existing one-stage methods, particularly on the challenging Nr3D dataset.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107252"},"PeriodicalIF":6.0,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
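The captioner above incorporates a Squeeze-and-Excitation (SE) module; the standard SE mechanism (squeeze each channel to a scalar, excite through a small bottleneck, then rescale channels with sigmoid gates) is sketched below on plain lists. The toy weights are fixed for illustration; in the real module they are learned and the features are tensors.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def squeeze_excite(feature_maps, w1, w2):
    """Squeeze: global-average-pool each channel to a scalar descriptor.
    Excite: a two-layer bottleneck (ReLU then sigmoid) produces per-channel
    gates in (0, 1), which rescale the original channels."""
    # squeeze: one scalar per channel
    desc = [sum(ch) / len(ch) for ch in feature_maps]
    # excite: reduce then expand (toy fully connected layers as weight rows)
    hidden = [max(0.0, sum(wi * d for wi, d in zip(row, desc))) for row in w1]
    gates = [sigmoid(sum(wi * h for wi, h in zip(row, hidden))) for row in w2]
    # scale: reweight each channel by its gate
    return [[g * v for v in ch] for g, ch in zip(gates, feature_maps)]
```

Because each gate lies in (0, 1), SE can only attenuate channels relative to one another, which is how it emphasizes context-relevant channels in the captioner.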
{"title":"Augmenting interaction effects in convolutional networks with taylor polynomial gated units","authors":"Ligeng Zou , Qi Liu , Jianhua Dai","doi":"10.1016/j.neunet.2025.107262","DOIUrl":"10.1016/j.neunet.2025.107262","url":null,"abstract":"<div><div>Transformer-based vision models are often assumed to have an advantage over traditional convolutional neural networks (CNNs) due to their ability to model long-range dependencies and interactions between inputs. However, the remarkable success of pure convolutional models such as ConvNeXt, which incorporates architectural elements from Vision Transformers (ViTs), challenges the prevailing assumption about the intrinsic superiority of Transformers. In this work, we aim to explore an alternative path to efficiently express interactions between inputs without an attention module by delving into the interaction effects in ConvNeXt. This exploration leads to the proposal of a new activation function, i.e., the Taylor Polynomial Gated Unit (TPGU). The TPGU substitutes the cumulative distribution function in the Gaussian Error Linear Unit (GELU) with a learnable Taylor polynomial, so that it not only can flexibly adjust the strength of each order of interactions but also does not require additional normalization or regularization of the input and output. Comprehensive experiments demonstrate that swapping out GELUs for TPGUs notably boosts model performance under identical training settings. Moreover, empirical evidence highlights the particularly favorable impact of the TPGU on pure convolutional networks, improving the performance of ConvNeXt-T by 0.7% on ImageNet-1K. Our findings encourage revisiting the potential utility of polynomials within contemporary neural network architectures. The code for our implementation has been made publicly available at <span><span>https://github.com/LQandlq/tpgu</span></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107262"},"PeriodicalIF":6.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
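Per the abstract, TPGU replaces the Gaussian CDF Φ(x) in GELU(x) = x·Φ(x) with a learnable Taylor polynomial P(x) = Σ_k c_k x^k. The sketch below uses fixed coefficients to show the functional form; in the paper the coefficients are learnable parameters, and any particular coefficient values here are illustrative.

```python
def tpgu(x, coeffs):
    """Taylor Polynomial Gated Unit (sketch): TPGU(x) = x * P(x), where
    P(x) = sum_k coeffs[k] * x**k replaces the Gaussian CDF in GELU.
    Each coefficient controls the strength of one order of interaction."""
    p = 0.0
    for k, c in enumerate(coeffs):
        p += c * x ** k
    return x * p
```

With `coeffs = [0.5]` the gate is constant and TPGU reduces to a linear scaling; higher-order coefficients add the input-dependent gating that creates interaction effects.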
{"title":"Incremental model-based reinforcement learning with model constraint","authors":"Zhiyou Yang , Mingsheng Fu , Hong Qu , Fan Li , Shuqing Shi , Wang Hu","doi":"10.1016/j.neunet.2025.107245","DOIUrl":"10.1016/j.neunet.2025.107245","url":null,"abstract":"<div><div>In model-based reinforcement learning (RL) approaches, the estimated model of a real environment is learned with limited data and then utilized for policy optimization. As a result, the policy optimization process in model-based RL is influenced by both policy and estimated model updates. In practice, previous model-based RL methods only constrain the policy updates to be incremental, which cannot ensure fully incremental updates and thereby limits performance. To address this issue, we propose an incremental model-based RL update scheme by analyzing the policy optimization procedure of model-based RL. This scheme includes both an incremental model constraint that guarantees incremental updates to the estimated model, and an incremental policy constraint that ensures incremental updates to the policy. Further, we establish a performance bound incorporating the incremental model-based RL update scheme between the real environment and the estimated model, which guarantees non-decreasing policy performance in the real environment. To implement the incremental model-based RL update scheme, we develop a simple and efficient model-based RL algorithm known as <strong>IMPO</strong> (<strong>I</strong>ncremental <strong>M</strong>odel-based <strong>P</strong>olicy <strong>O</strong>ptimization), which leverages previous knowledge to enhance stability during the learning process. Experimental results across various control benchmarks demonstrate that IMPO significantly outperforms previous state-of-the-art model-based RL methods in terms of overall performance and sample efficiency.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107245"},"PeriodicalIF":6.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
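The scheme above constrains both the estimated model and the policy so that each update stays incremental. A generic way to express such a constraint is a trust-region-style projection that caps the size of any single parameter step; the sketch below shows only that generic idea and is not IMPO's actual constraint, which is formulated on the model and policy distributions.

```python
def incremental_update(params, proposed, max_step):
    """Project a proposed parameter update so a single step never moves the
    parameters by more than max_step in Euclidean norm (a generic
    trust-region-style cap, not IMPO's exact rule)."""
    delta = [p_new - p for p, p_new in zip(params, proposed)]
    norm = sum(d * d for d in delta) ** 0.5
    if norm <= max_step or norm == 0.0:
        return proposed
    scale = max_step / norm
    return [p + scale * d for p, d in zip(params, delta)]
```

Applying such a cap to both model and policy parameters is one way to read the "incremental updates to the estimated model and the policy" requirement: large proposed jumps are shortened, small ones pass through unchanged.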
{"title":"S3H: Long-tailed classification via spatial constraint sampling, scalable network, and hybrid task","authors":"Wenyi Zhao , Wei Li , Yongqin Tian , Enwen Hu , Wentao Liu , Bin Zhang , Weidong Zhang , Huihua Yang","doi":"10.1016/j.neunet.2025.107247","DOIUrl":"10.1016/j.neunet.2025.107247","url":null,"abstract":"<div><div>Long-tailed classification is a significant yet challenging vision task that aims to form clear decision boundaries by integrating semantic consistency and texture characteristics. Unlike prior methods, we design a spatial constraint sampling strategy and a scalable network to bolster the extraction of well-balanced features during training. Simultaneously, we propose a hybrid task to optimize the model, integrating single-model classification with cross-model contrastive learning to capture comprehensive features. Concretely, the sampling strategy furnishes the model with spatially constrained samples, encouraging it to integrate high-level semantic and low-level texture features. The scalable network and hybrid task enable the features learned by the model to be dynamically adjusted and consistent with the true data distribution. This design removes the constraints of multi-stage optimization, enabling end-to-end training for long-tailed classification tasks. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the CIFAR10-LT, CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 datasets. The code and model weights will be available at <span><span>https://github.com/WilyZhao8/S3H</span></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107247"},"PeriodicalIF":6.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}