Computer Networks, Pub Date: 2025-10-04, DOI: 10.1016/j.comnet.2025.111747
Pingchao Zhou, Yong Xie, Cong Peng, Yazhe Kang, Debiao He, Tianhong Mu
{"title":"RLVP-FL: Robust and lightweight verifiable privacy-preserving federated learning scheme","authors":"Pingchao Zhou , Yong Xie , Cong Peng , Yazhe Kang , Debiao He , Tianhong Mu","doi":"10.1016/j.comnet.2025.111747","DOIUrl":"10.1016/j.comnet.2025.111747","url":null,"abstract":"<div><div>Federated Learning (FL) is widely regarded as an effective approach to solving data privacy issues in machine learning, because it makes data usable for training without exposing it. However, related research indicates persistent privacy leakage risks during FL’s local gradient upload and aggregation processes. Existing privacy-preserving schemes incur substantial overhead when handling dropped users and do not support rejoining. Furthermore, verifying the integrity of the aggregated result remains a critical challenge in privacy-preservation scenarios involving user-server collusion. To address these issues, we propose RLVP-FL, a robust, lightweight, verifiable, privacy-preserving FL scheme. The scheme designs a lightweight, non-pairwise masking aggregation approach in a dual-server framework and combines it with a cross-verification method based on linear homomorphic hashing and commitment schemes. This protects the privacy of honest users’ local gradients and preserves verifiability even when users collude with the servers. Moreover, it guarantees that the aggregated gradients remain secret from the servers. Additionally, our scheme incurs no extra overhead for handling dropped users, and such users can rejoin subsequent training rounds at any time. Security analysis and experimental results show that our scheme has low computational overhead while resisting collusion and channel eavesdropping attacks.
In terms of communication overhead, our scheme reduces it by 50 % to 75 % compared to the latest similar works.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111747"},"PeriodicalIF":4.6,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145326501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
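The abstract names a non-pairwise masking aggregation in a dual-server framework but does not specify it. A minimal additive-masking sketch in that spirit follows; the modulus, the integer quantization of gradients, and the share layout (masked vector to server A, mask to server B) are illustrative assumptions, not the paper's construction:

```python
import random

P = 2**61 - 1  # illustrative modulus; a real scheme derives it from a security parameter

def mask_gradient(grad):
    """Split a quantized gradient vector into two additive shares mod P.

    The masked share goes to server A, the mask itself to server B. Because
    there are no pairwise masks between users, a dropped user simply
    contributes nothing and imposes no recovery cost on the others.
    """
    mask = [random.randrange(P) for _ in grad]
    masked = [(g + m) % P for g, m in zip(grad, mask)]
    return masked, mask

def aggregate(shares_a, shares_b):
    """Each server sums its shares; subtracting the sums unmasks only the aggregate."""
    dim = len(shares_a[0])
    sum_a = [sum(s[d] for s in shares_a) % P for d in range(dim)]
    sum_b = [sum(s[d] for s in shares_b) % P for d in range(dim)]
    return [(a - b) % P for a, b in zip(sum_a, sum_b)]
```

Neither server alone learns anything about an individual gradient; the verification layer (linear homomorphic hashing and commitments) described in the abstract is not reproduced here.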
Computer Networks, Pub Date: 2025-10-04, DOI: 10.1016/j.comnet.2025.111752
Weicong Huang, Pengfei Yu, Qigui Yao
{"title":"Efficient privacy-preserving federated learning with encrypted-domain knowledge distillation","authors":"Weicong Huang , Pengfei Yu , Qigui Yao","doi":"10.1016/j.comnet.2025.111752","DOIUrl":"10.1016/j.comnet.2025.111752","url":null,"abstract":"<div><div>Decentralized Federated Learning (DFL) has demonstrated significant advantages in protecting data privacy and enhancing model generalization. However, directly exchanging unprocessed model parameters in DFL not only increases communication overhead but also significantly elevates the risk of privacy leakage among participants. In this context, knowledge distillation has emerged as an effective, lightweight model sharing method to reduce transmission burdens and was once regarded as a promising solution for decentralized federated learning. Nevertheless, studies have shown that logits, the key to its knowledge transfer for classification prediction, pose a privacy risk: logits obtained from training on private datasets can be used to reconstruct the training data. To address these concerns, this paper proposes the DKDFL framework, which integrates knowledge distillation with Fully Homomorphic Encryption (FHE) in the DFL scenario. This framework achieves efficient knowledge distillation collaboration through grouping strategies and an innovative distillation loss function tailored for the encrypted domain, ensuring both computational efficiency and logit confidentiality. Additionally, the introduction of a coordinator node further optimizes the computation process. Experimental results indicate that DKDFL performs well in terms of model accuracy, exhibiting a relatively stable training process. While ensuring privacy protection, it maintains high model accuracy. In data heterogeneity scenarios, collaborative learning based on DKDFL significantly outperforms independent training by participants.
Compared to another federated learning algorithm also utilizing knowledge distillation and fully homomorphic encryption, DKDFL achieves notable improvements in reducing communication costs and time overhead.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111752"},"PeriodicalIF":4.6,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145326513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
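DKDFL's contribution is a distillation loss reworked for the encrypted (FHE) domain, which the abstract does not spell out. For reference, the plaintext knowledge-distillation loss it adapts is the standard temperature-scaled KL divergence between teacher and student soft targets; the temperature value below is an arbitrary example:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; max is subtracted for numerical stability."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Standard KD loss: T^2 * KL(teacher_soft || student_soft)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Evaluating a loss of this shape under FHE is non-trivial (division and log are expensive in the encrypted domain), which motivates the tailored loss the paper proposes.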
Computer Networks, Pub Date: 2025-10-03, DOI: 10.1016/j.comnet.2025.111743
Damiano Carra, Giovanni Neglia, Xufeng Zhang
{"title":"Low-Complexity online learning for caching","authors":"Damiano Carra , Giovanni Neglia , Xufeng Zhang","doi":"10.1016/j.comnet.2025.111743","DOIUrl":"10.1016/j.comnet.2025.111743","url":null,"abstract":"<div><div>Commonly used caching policies, such as LRU (Least Recently Used) or LFU (Least Frequently Used), exhibit optimal performance only under specific traffic patterns. Even advanced machine learning-based methods, which detect patterns in historical request data, struggle when future requests deviate from past trends. Recently, a new class of policies has emerged that are robust to varying traffic patterns. These algorithms address an online optimization problem, enabling continuous adaptation to the context. They offer theoretical guarantees on the <em>regret</em> metric, which measures the performance gap between the online policy and the optimal static cache allocation in hindsight. However, the high computational complexity of these solutions hinders their practical adoption.</div><div>In this study, we introduce a new variant of the gradient-based online caching policy that achieves groundbreaking logarithmic computational complexity relative to catalog size, while also providing regret guarantees. This advancement allows us to test the policy on large-scale, real-world traces featuring millions of requests and items, a significant achievement, as such scales have been beyond the reach of existing policies with regret guarantees. The regret guarantees and the low complexity are also maintained in cases where items have non-uniform sizes.
To the best of our knowledge, the proposed solution is the only low-complexity no-regret policy for such a case, and our experimental results demonstrate for the first time that the regret guarantees of gradient-based caching policies offer substantial benefits in practical scenarios.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111743"},"PeriodicalIF":4.6,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145271061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
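The gradient-based online caching policy the abstract builds on can be sketched as projected online gradient ascent on a fractional cache state. The sketch below is the naive O(N) version with a bisection projection; the paper's contribution, performing the whole step in logarithmic time, is not reproduced here, and the step size is an arbitrary example:

```python
def project_capped_simplex(y, k, iters=60):
    """Euclidean projection of y onto {x : 0 <= x_i <= 1, sum(x) = k} via bisection on the shift tau."""
    lo, hi = min(y) - 1.0, max(y)
    for _ in range(iters):
        tau = (lo + hi) / 2.0
        total = sum(min(1.0, max(0.0, v - tau)) for v in y)
        if total > k:
            lo = tau
        else:
            hi = tau
    tau = (lo + hi) / 2.0
    return [min(1.0, max(0.0, v - tau)) for v in y]

def oga_step(y, request, eta, k):
    """One online-gradient-ascent step on the fractional cache state.

    For a request to item r, the gradient of the hit reward is the unit
    vector e_r, so the update bumps one coordinate and re-projects onto
    the capacity-k constraint set.
    """
    y = list(y)
    y[request] += eta
    return project_capped_simplex(y, k)
```

The fractional state can then be rounded (e.g. by randomized rounding) into an integral cache of k items, which is where the regret analysis applies.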
Computer Networks, Pub Date: 2025-10-03, DOI: 10.1016/j.comnet.2025.111740
Beibei Li, Wei Hu, Lemei Da, Dan Zhu
{"title":"An MPC-based nonlinear data-driven model for cascading failure prediction in large-scale infrastructure networks","authors":"Beibei Li , Wei Hu , Lemei Da , Dan Zhu","doi":"10.1016/j.comnet.2025.111740","DOIUrl":"10.1016/j.comnet.2025.111740","url":null,"abstract":"<div><div>With the continuous increase in network scale and complexity, cascading failures have become the main cause of large-scale infrastructure network paralysis. Network modeling is an important technique for simulating and understanding the cascading failure process. However, linear network modeling methods cannot accurately account for the dynamic characteristics of network systems, while nonlinear network modeling approaches tend to incur high computational costs. To tackle these challenges, we propose a nonlinear data-driven cascading failure prediction model based on Model Predictive Control (MPC) for large-scale infrastructure networks. We consider the influence of noise resulting from dynamic network characteristics on the input dataset and leverage Gaussian Process Regression (GPR) to filter it out. Then, we create a linearized model for the network system using the Koopman operator. We solve the resulting convex quadratic optimization problems by employing the MPC algorithm in closed-loop verification under a controlled cost model. Finally, we validate the superiority of our proposal by rigorously testing the optimized input datasets across four commonly used cascading failure propagation methods. Experimental results demonstrate that the proposed approach reduces the input datasets by 80 % while accurately predicting cascading failures.
To our best knowledge, we are the first to apply MPC to a nonlinear data-driven model for cascading failure prediction in large-scale infrastructure networks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111740"},"PeriodicalIF":4.6,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145271057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
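The Koopman-based linearization step amounts to fitting a linear operator to trajectory data by least squares. A deliberately minimal one-dimensional version with the identity observable (scalar dynamic mode decomposition) shows the regression at the core of that step; the paper works with lifted, high-dimensional observables, which this sketch does not attempt:

```python
def fit_linear_operator(xs):
    """Least-squares fit of x[t+1] ≈ a * x[t] from a single trajectory.

    This is the scalar special case of the data-driven operator fit used
    in Koopman/DMD-style modeling: minimize sum_t (x[t+1] - a*x[t])^2,
    whose closed-form solution is the ratio of correlations below.
    """
    num = sum(xs[t] * xs[t + 1] for t in range(len(xs) - 1))
    den = sum(xs[t] ** 2 for t in range(len(xs) - 1))
    return num / den
```

In the full method the same regression is performed on vectors of observables, and the fitted operator then drives the MPC prediction horizon.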
Computer Networks, Pub Date: 2025-10-03, DOI: 10.1016/j.comnet.2025.111748
Yixiao Peng, Hao Hu, Feiyang Li, Yingchang Jiang, Jipeng Tang, Yuling Liu
{"title":"LLM4Game: Multi-agent reinforcement learning with knowledge injection for dynamic defense resource allocation in cloud storage","authors":"Yixiao Peng , Hao Hu , Feiyang Li , Yingchang Jiang , Jipeng Tang , Yuling Liu","doi":"10.1016/j.comnet.2025.111748","DOIUrl":"10.1016/j.comnet.2025.111748","url":null,"abstract":"<div><div>The non-cooperative and interdependent nature of network attack-defense links it closely to game theory. Current game-theoretic decision-making methods construct game models for attack-defense scenarios and use reinforcement learning (RL) to compute optimal strategies. However, RL relies on trial-and-error exploration and is likely to fall into local optima in cloud storage environments without a game equilibrium. First, in cloud storage systems, the resource investment of attack and defense players has a “winner-takes-all” characteristic. Thus, we employ the Colonel Blotto game to model the attack-defense scenario in cloud storage systems, extending it to a multi-player, heterogeneous battlefield model with asymmetric resources. Second, RL’s reliance on trial-and-error exploration leads to suboptimal convergence in sparse-reward, non-equilibrium conditions. We leverage Large Language Models (LLMs) to inject attack-defense context knowledge, addressing the cold-start problem of RL. Finally, we propose the RL-LLM-KI algorithm featuring a precomputation-retrieval mechanism that mitigates the inference speed discrepancy between LLMs and RL agents, enabling real-time defense decisions. Experiments show that our work increases utility by 140 % and 136.36 % compared to MADRL and DRS-DQN respectively in typical experimental scenarios.
To our best knowledge, this study is the first to reveal the significant effect of knowledge injection in enhancing decision-making efficacy in highly adversarial cloud storage attack-defense scenarios.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111748"},"PeriodicalIF":4.6,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145271064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
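The "winner-takes-all" structure the abstract invokes is the defining feature of the Colonel Blotto game: each battlefield goes entirely to whichever player invests more in it. A minimal payoff function makes this concrete; the heterogeneous per-field values are from the abstract, while the tie-splitting rule is a common convention assumed here:

```python
def blotto_payoff(alloc_a, alloc_b, values):
    """Winner-takes-all payoff for player A over heterogeneous battlefields.

    Each battlefield's full value goes to the larger allocation; ties
    split the value. The hard part, choosing the allocations under
    asymmetric budgets, is what the paper addresses with RL plus LLM
    knowledge injection.
    """
    payoff = 0.0
    for a, b, v in zip(alloc_a, alloc_b, values):
        if a > b:
            payoff += v
        elif a == b:
            payoff += v / 2.0
    return payoff
```

Because the payoff is discontinuous in the allocations, reward signals are sparse, which is exactly the cold-start setting where the paper's knowledge injection helps.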
{"title":"RAPID: Robust APT detection and investigation using context-aware deep learning","authors":"Yonatan Amaru , Prasanna N. Wudali , Yuval Elovici, Asaf Shabtai","doi":"10.1016/j.comnet.2025.111744","DOIUrl":"10.1016/j.comnet.2025.111744","url":null,"abstract":"<div><div>Advanced persistent threats (APTs) pose a critical cybersecurity challenge, enabling attackers to maintain long-term unauthorized access while evading detection. Current APT detection approaches struggle with three key limitations: high false positive rates that lead to alert fatigue, poor adaptability to evolving system behaviors, and the inability to provide actionable investigation context. We present RAPID, a novel deep learning framework that addresses these challenges through context-aware anomaly detection and intelligent alert tracing. RAPID’s key innovation lies in its dual-phase architecture: first, it employs self-supervised sequence learning with iteratively updated embeddings to capture dynamic system behavior patterns; second, it leverages these embeddings to reconstruct precise attack narratives through provenance graph analysis. Our comprehensive evaluation across five diverse real-world datasets demonstrates RAPID’s effectiveness, achieving up to 74% precision with near-perfect recall while using only 30% of the data for training, substantially outperforming state-of-the-art methods that require 80% training data to achieve similar performance levels.
The framework automatically generates detailed attack narratives that enable efficient incident response, significantly outperforming existing approaches in both detection accuracy and alert investigation precision.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111744"},"PeriodicalIF":4.6,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145326489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer Networks, Pub Date: 2025-10-01, DOI: 10.1016/j.comnet.2025.111746
Zhixin Liu, Lei Gao, Ziyang Ma, Jiawei Su, Fenglei Li, Yazhou Yuan, Xinping Guan
{"title":"Joint task offloading and resource allocation scheme with UAV assistance in vehicle edge computing networks","authors":"Zhixin Liu , Lei Gao , Ziyang Ma , Jiawei Su , Fenglei Li , Yazhou Yuan , Xinping Guan","doi":"10.1016/j.comnet.2025.111746","DOIUrl":"10.1016/j.comnet.2025.111746","url":null,"abstract":"<div><div>As an emerging and promising technology paradigm, Vehicle Edge Computing (VEC) aims to enhance the performance and user experience of in-vehicle applications through efficient computation offloading strategies. However, with the increasing demand for high-complexity, computationally intensive applications within the automotive industry, VEC systems face the challenge of limited resources, and effectively managing and utilizing the limited computational resources has become an urgent problem. In this paper, we propose a novel framework for UAV-assisted task offloading and resource allocation in VEC networks. The framework integrates Software Defined Networking (SDN) and Unmanned Aerial Vehicles (UAVs) to improve computation efficiency and resource utilization. The utility functions of the requesting vehicles and VEC servers are defined, and an incentive mechanism is proposed to encourage multiple UAVs to form an effective resource pool that can be used for VEC tasks. A Stackelberg game is formulated to optimize task offloading and resource allocation and to ensure effective collaboration among VEC servers, UAVs, and vehicles, and the existence of a Nash equilibrium is proved by theoretical derivation. Subsequently, we adopt an efficient evolutionary-strategy genetic algorithm to search for the optimal pricing strategy for VEC servers. Also, a task allocation algorithm is designed and implemented, which aims to maximize the revenue of UAVs by minimizing the cost of the UAV coalition.
Finally the simulation comparison experiments are conducted, and the results strongly validate the effectiveness and feasibility of the proposed scheme.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111746"},"PeriodicalIF":4.6,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145271062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
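The Stackelberg structure, with the VEC server as price-setting leader and the requesting vehicle as follower, can be sketched with toy utility functions. Everything below is an illustrative assumption: the concave follower utility, the leader's unit cost, and the grid search (the paper uses its own utility functions and a genetic algorithm instead):

```python
def follower_best_response(price, v=4.0):
    """Follower buys q maximizing v*sqrt(q) - price*q, giving q* = (v / (2*price))**2."""
    return (v / (2.0 * price)) ** 2

def leader_best_price(cost=1.0, v=4.0):
    """Leader picks the price maximizing (price - cost) * q*(price),
    anticipating the follower's best response (the Stackelberg step)."""
    grid = [0.1 * i for i in range(11, 60)]  # candidate prices 1.1 .. 5.9
    return max(grid, key=lambda p: (p - cost) * follower_best_response(p, v))
```

For these toy utilities the leader's optimum is analytically p* = 2*cost, so the grid search lands on 2.0 when cost is 1.0; in the paper the same anticipate-then-optimize loop is carried out by the genetic algorithm over realistic utilities.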
Computer Networks, Pub Date: 2025-09-29, DOI: 10.1016/j.comnet.2025.111742
Zhiwei Zhang, Yuhong Zhao, Jingyu Wang
{"title":"Multi-scale temporal feature-enhanced federated learning framework for network traffic prediction","authors":"Zhiwei Zhang, Yuhong Zhao, Jingyu Wang","doi":"10.1016/j.comnet.2025.111742","DOIUrl":"10.1016/j.comnet.2025.111742","url":null,"abstract":"<div><div>With the rapid development of the mobile internet, network traffic has shown exponential growth. Accurate traffic prediction has become a key technology for ensuring stable network performance and optimized resource allocation. However, existing methods fail to fully integrate periodic features and neglect the persistent impact of non-periodic temporal features such as holidays on network traffic. This oversight makes it challenging for models to effectively capture both periodic patterns and sudden fluctuations in traffic. To address this issue, this paper introduces a Multi-Scale Temporal Feature Enhanced Federated Learning Framework for Network Traffic Prediction (MTFE-FL). The framework proposes a Holiday Impact Factor to comprehensively measure the persistent impact of holiday characteristics on network traffic data. High-quality predictive models are trained collaboratively across multiple edge clients, each using an iTransformer model to process time series data. By encapsulating the entire time series into variable tokens, the iTransformer provides a global perspective, enabling the effective identification of complex patterns and dependencies evolving over time. In addition, multivariate attention mechanisms are utilized to deeply explore the relationships between network traffic data and temporal information. To further enhance the generalization ability of the global model and mitigate the “client drift” caused by client heterogeneity, Stochastic Controlled Averaging is introduced to correct the gradients of the local models at each edge client. The aggregated corrected models then generate the global model.
Experimental results demonstrate that the proposed framework achieves superior performance on two real-world network traffic datasets, significantly improving the accuracy of network traffic predictions.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111742"},"PeriodicalIF":4.6,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145271058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
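The Stochastic Controlled Averaging correction the framework adopts (SCAFFOLD-style) replaces each client's raw gradient with a drift-corrected one using per-client and global control variates. A sketch of the local update rule, assuming the standard SCAFFOLD form since the abstract does not restate it:

```python
def scaffold_local_step(w, grad, c_local, c_global, lr):
    """One drift-corrected local update: w <- w - lr * (g - c_i + c).

    c_i is the client's control variate and c the server's. With
    homogeneous clients (c_i == c) this reduces to plain SGD; under
    heterogeneity the variates steer local updates back toward the
    global descent direction, mitigating client drift.
    """
    return [wi - lr * (gi - ci + cg)
            for wi, gi, ci, cg in zip(w, grad, c_local, c_global)]
```

After the local epochs, clients also update their control variates and the server aggregates the corrected models into the global model, as the abstract describes.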
Computer Networks, Pub Date: 2025-09-29, DOI: 10.1016/j.comnet.2025.111724
Joannes Sam Mertens, Laura Galluccio, Giacomo Morabito
{"title":"SPARE: Selective parameter exchange for efficient cooperative learning in vehicular networks","authors":"Joannes Sam Mertens, Laura Galluccio, Giacomo Morabito","doi":"10.1016/j.comnet.2025.111724","DOIUrl":"10.1016/j.comnet.2025.111724","url":null,"abstract":"<div><div>In vehicular networks, decentralized cooperative learning strategies have gained significant attention due to the lower communication overhead they involve when compared to centralized cooperative learning approaches like Federated Learning. Decentralized solutions enable vehicles to collaboratively train Machine Learning (ML) models by exchanging parameters without relying on a central server. However, conventional model-sharing methods still suffer from high communication overhead and increased vulnerability to poisoning attacks.</div><div>This paper presents <em>SPARE</em>, a gossip-based cooperative learning protocol that leverages Vehicle-to-Vehicle (V2V) communication to enhance communication efficiency by exchanging selected model parameters. SPARE selects vehicle nodes for model updates and transmits only the most significantly updated layers, reducing redundancy and improving efficiency. This selective exchange minimizes communication resource consumption and enhances privacy, as the complete model is never shared across the network. We assess the proposed approach using a real-world driving dataset, featuring data from multiple drivers along the same route. 
Experimental results prove that our method achieves efficient learning with significantly lower communication overhead, demonstrating its suitability for deployment in resource-constrained vehicular networks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"273 ","pages":"Article 111724"},"PeriodicalIF":4.6,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145271054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
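SPARE's core mechanism, transmitting only the most significantly updated layers, can be sketched as ranking layers by how much their parameters moved since the last exchange. The L2-norm-of-delta criterion below is one plausible reading of "most significantly updated"; the abstract does not pin down the exact score, so treat it as an assumption:

```python
import math

def select_layers(prev_model, curr_model, k=1):
    """Return the k layers whose parameters changed the most since the last exchange.

    Models are dicts mapping layer name -> flat parameter list. Only the
    selected layers would be gossiped to neighboring vehicles, so the
    complete model never crosses the network.
    """
    delta = {
        name: math.sqrt(sum((c - p) ** 2
                            for c, p in zip(curr_model[name], prev_model[name])))
        for name in curr_model
    }
    ranked = sorted(delta, key=delta.get, reverse=True)
    return {name: curr_model[name] for name in ranked[:k]}
```

Sending a small, changing subset of layers both cuts V2V bandwidth and limits what any single eavesdropper or poisoner can reconstruct, matching the privacy argument in the abstract.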
Computer Networks, Pub Date: 2025-09-28, DOI: 10.1016/j.comnet.2025.111735
Erfan Parhizi, Rasool Esmaeilyfard, Reza Javidan
{"title":"LDD-Track: An energy-efficient deep reinforcement learning framework for multi-subject tracking in mobile crowdsensing","authors":"Erfan Parhizi, Rasool Esmaeilyfard, Reza Javidan","doi":"10.1016/j.comnet.2025.111735","DOIUrl":"10.1016/j.comnet.2025.111735","url":null,"abstract":"<div><div>Multi-subject tracking in Mobile Crowdsensing Systems (MCS) is a challenging task due to dynamic mobility, limited energy resources, and the need for real-time decisions. Traditional models like Kalman Filters and Hidden Markov Models struggle in such conditions, while Transformer-based deep learning methods offer high accuracy but are too computationally demanding for mobile use. Unlike previous studies that focus on one-to-one or collaborative group tracking, which often lack scalability and adaptability to real-world complexities, we propose LDD-Track, a novel multi-subject tracking framework that integrates Long Short-Term Memory (LSTM) networks with an adaptive attention mechanism, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Deep Q-Network (DQN)-based user allocation. The LSTM model, enhanced with attention mechanisms, dynamically assigns weights <span><math><msub><mi>α</mi><mi>t</mi></msub></math></span> to past trajectory points, filtering noise and improving prediction accuracy. The DBSCAN clustering technique effectively groups subjects based on predicted movement, optimizing resource allocation and reducing computational overhead. The DQN-based user assignment strategy models resource optimization as a Markov Decision Process (MDP), leveraging the Q-value function <span><math><mrow><mi>Q</mi><mo>(</mo><msub><mi>s</mi><mi>t</mi></msub><mo>,</mo><msub><mi>a</mi><mi>t</mi></msub><mo>)</mo></mrow></math></span> to ensure adaptive and energy-efficient user allocation. Extensive experiments on the Taxi Mobility in Rome dataset demonstrate the superiority of LDD-Track. 
The framework achieves a 51 % reduction in energy consumption, a 39 % increase in Coverage Completion Rate (CCR), and a 9.7 % improvement in resource allocation efficiency compared to state-of-the-art methods. These findings validate the effectiveness of integrating attention-based prediction and deep reinforcement learning in large-scale, real-time MCS environments.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"272 ","pages":"Article 111735"},"PeriodicalIF":4.6,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145220813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
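LDD-Track groups subjects by predicted position with DBSCAN before allocating users. A minimal self-contained DBSCAN over 2-D points shows the grouping step; the eps and min_pts defaults are arbitrary examples, and a production system would use a spatial index rather than this O(N^2) neighbor scan:

```python
def _neighbors(points, i, eps):
    xi, yi = points[i]
    return [j for j, (x, y) in enumerate(points)
            if (x - xi) ** 2 + (y - yi) ** 2 <= eps ** 2]

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: returns a cluster id per point, -1 for noise."""
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = _neighbors(points, i, eps)
        if len(seed) < min_pts:
            labels[i] = -1            # noise (may later become a border point)
            continue
        labels[i] = cid
        stack = list(seed)
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid       # noise reclassified as a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = _neighbors(points, j, eps)
            if len(jn) >= min_pts:    # only core points expand the cluster
                stack.extend(jn)
        cid += 1
    return labels
```

In the framework, the clustered predicted positions feed the DQN-based allocator, so one sensing user can be assigned to a whole group of co-moving subjects instead of to each subject individually.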