{"title":"Resource-Efficient Collaborative Edge Transformer Inference With Hybrid Model Parallelism","authors":"Shengyuan Ye;Bei Ouyang;Jiangsu Du;Liekang Zeng;Tianyi Qian;Wenzhong Ou;Xiaowen Chu;Deke Guo;Yutong Lu;Xu Chen","doi":"10.1109/TMC.2025.3574695","DOIUrl":"https://doi.org/10.1109/TMC.2025.3574695","url":null,"abstract":"Transformer-based models have unlocked a plethora of powerful intelligent applications at the edge, such as voice assistant in smart home. Traditional deployment approaches offload the inference workloads to the remote cloud server, which would induce substantial pressure on the backbone network as well as raise users’ privacy concerns. To address that, in-situ inference has been recently recognized for edge intelligence, but it still confronts significant challenges stemming from the conflict between intensive workloads and limited on-device computing resources. In this paper, we leverage our observation that many edge environments usually comprise a rich set of accompanying trusted edge devices with idle resources and propose <monospace>Galaxy+</monospace>, a collaborative edge AI system that breaks the resource walls across heterogeneous edge devices for efficient Transformer inference acceleration. <monospace>Galaxy+</monospace> introduces a novel hybrid model parallelism to orchestrate collaborative inference, along with a heterogeneity and memory-aware parallelism planning for fully exploiting the resource potential. To mitigate the impact of tensor synchronizations on inference latency under bandwidth-constrained edge environments, <monospace>Galaxy+</monospace> devises a tile-based fine-grained overlapping of communication and computation. Furthermore, a fault-tolerant re-scheduling mechanism is developed to address device-level resource dynamics, ensuring stable and low-latency inference. Extensive evaluation based on prototype implementation demonstrates that <monospace>Galaxy+</monospace> remarkably outperforms state-of-the-art approaches under various edge environment setups, achieving a <inline-formula><tex-math>$1.2times$</tex-math></inline-formula> to <inline-formula><tex-math>$4.24times$</tex-math></inline-formula> end-to-end latency reduction. Besides, <monospace>Galaxy+</monospace> can adapt to device-level resource dynamics, swiftly rescheduling and restoring inference in the presence of unexpected straggler devices.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"10945-10962"},"PeriodicalIF":9.2,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145021222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Secure Enhanced IoT-WLAN Authentication Protocol With Efficient Fast Reconnection","authors":"Weizheng Wang;Qipeng Xie;Zhaoyang Han;Chunhua Su;Joel J. P. C. Rodrigues;Kaishun Wu","doi":"10.1109/TMC.2025.3569593","DOIUrl":"https://doi.org/10.1109/TMC.2025.3569593","url":null,"abstract":"The increasing integration of Internet of Things (IoT) devices in Wireless Local Area Networks (WLANs) necessitates robust and efficient authentication mechanisms. While existing IoT authentication protocols address certain security concerns, they often fail to provide comprehensive protection against threats such as perfect forward secrecy violations, insider attacks, and key compromise impersonation, or impose significant computational and communication overhead on resource- constrained IoT systems. This paper presents a novel Extensible Authentication Protocol (EAP) based scheme for IoT-WLAN environments that addresses these security challenges while maintaining cost-effectiveness. Our approach utilizes elliptic curve cryptography and incorporates advanced features including perfect forward secrecy, strong identity protection, and explicit key confirmation. We provide a thorough security analysis using informal heuristics, formal methods (Random Oracle Model and BAN Logic), and automated verification with ProVerif. Performance evaluations demonstrate that our protocol achieves lower communication, storage, and computational costs compared to state-of-the-art solutions, with an average 79.6% reduction in computation time. A detailed comparison with existing schemes highlights the efficiency and enhanced security features of our proposed authentication mechanism for IoT-WLAN deployments.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"10085-10098"},"PeriodicalIF":9.2,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145021291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COLoRIS: Localization-Agnostic Smart Surfaces Enabling Opportunistic ISAC in 6G Networks","authors":"Guillermo Encinas-Lago;Francesco Devoti;Marco Rossanese;Vincenzo Sciancalepore;Marco Di Renzo;Xavier Costa-Pérez","doi":"10.1109/TMC.2025.3556326","DOIUrl":"https://doi.org/10.1109/TMC.2025.3556326","url":null,"abstract":"The integration of Smart Surfaces in 6G communication networks, also dubbed as Reconfigurable Intelligent Surfaces (RISs), is a promising paradigm change gaining significant attention given its disruptive features. RISs are a key enabler in the realm of 6G Integrated Sensing and Communication (ISAC) systems where novel services can be offered together with the future mobile networks communication capabilities. This paper addresses the critical challenge of precisely localizing users within a communication network by leveraging the controlled-reflective properties of RIS elements without relying on more power-hungry traditional methods, e.g., GPS, adverting the need of deploying additional infrastructure and even avoiding interfering with communication efforts. Moreover, we go one step beyond: we build COLoRIS, an <i>Opportunistic ISAC</i> approach that leverages localization-agnostic RIS configurations to accurately position mobile users via trained learning models. Extensive experimental validation and simulations in large-scale synthetic scenarios show <inline-formula><tex-math>$mathbf{5%}$</tex-math></inline-formula> positioning errors (with respect to field size) under different conditions. Further, we show that a low-complexity version running in a limited off-the-shelf (embedded, low-power) system achieves positioning errors in the <inline-formula><tex-math>$mathbf{11%}$</tex-math></inline-formula> range at a negligible <inline-formula><tex-math>$mathbf{+2.7%}$</tex-math></inline-formula> energy expense with respect to the classical RIS.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 8","pages":"6812-6826"},"PeriodicalIF":7.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144550610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Deterministic Satellite-Terrestrial Integrated Networks via Resource Adaptation and Differentiated Scheduling","authors":"Weiting Zhang;Peixi Liao;Dong Yang;Qiang Ye;Shiwen Mao;Hongke Zhang","doi":"10.1109/TMC.2025.3574740","DOIUrl":"https://doi.org/10.1109/TMC.2025.3574740","url":null,"abstract":"Satellite-terrestrial integrated network (STIN) is a full-scale communication paradigm, which can support joint information processing and seamless service provision by leveraging satellites’ wide coverage and terrestrial networks’ high capacity. The existing STIN operates with insufficient synergy in transmission scheduling, impacting resource allocation efficiency and transmission delay optimization, particularly in complex transmission scenarios. In this paper, we design <underline>Det</u>erministic STIN (DetSTIN), a novel architecture for STIN, along with two algorithms tailored for transmission scheduling to collaboratively optimize resource adaptation and service flow scheduling. Specifically, the DetSTIN enables the smooth interconnection and integration of heterogeneous networks by providing layered deterministic services. Besides, a genetic-based resource adaptation algorithm is designed for fixed-mobile-satellite heterogeneous networks to reduce resource allocation overhead while maintaining the network performance. Furthermore, we propose a deep reinforcement learning-based differentiated scheduling algorithm to solve the routing-queue two-dimensional decision problem to differentially optimize transmission delay of service flows, thus obtaining higher transmission scheduling benefit. By addressing resource adaptation and differentiated scheduling synergistically, the proposed solution achieves reduced resource allocation overhead and increased transmission scheduling benefit, ultimately leading to increased network operation revenue of the DetSTIN. Simulation results demonstrate that the proposed solution delivers effective performance across various flow proportions, and as the number of flows increases, the network operation revenue exhibits a noticeable improvement, compared with benchmark algorithms.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"11092-11109"},"PeriodicalIF":9.2,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145021290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Scene-Aware Model Adaptation Scheme for Cross-Scene Online Inference on Mobile Devices","authors":"Yunzhe Li;Hongzi Zhu;Zhuohong Deng;Yunlong Cheng;Zimu Zheng;Liang Zhang;Shan Chang;Minyi Guo","doi":"10.1109/TMC.2025.3574766","DOIUrl":"https://doi.org/10.1109/TMC.2025.3574766","url":null,"abstract":"Emerging Artificial Intelligence of Things (AIoT) applications desire online prediction using deep neural network (DNN) models on mobile devices. However, due to the movement of devices, <italic>unfamiliar</i> test samples constantly appear, significantly affecting the prediction accuracy of a pre-trained DNN. In addition, unstable network connection calls for local model inference. In this paper, we propose a light-weight scheme, called <italic>Anole</i>, to cope with the local DNN model inference on mobile devices. The core idea of Anole is to first establish an army of compact DNN models, and then adaptively select the model fitting the current test sample best for online inference. The key is to automatically identify <italic>model-friendly</i> scenes for training scene-specific DNN models. To this end, we design a weakly-supervised scene representation learning algorithm by combining both human heuristics and feature similarity in separating scenes. Moreover, we further train a model classifier to predict the best-fit scene-specific DNN model for each test sample. We implement Anole on different types of mobile devices and conduct extensive trace-driven and real-world experiments based on unmannedaerial vehicles (UAVs). The results demonstrate that Anole outwits the method of using a versatile large DNN in terms of prediction accuracy (4.5% higher), response time (33.1% faster) and power consumption (45.1% lower).","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"11061-11075"},"PeriodicalIF":9.2,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145021390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CrowdHMTware: A Cross-Level Co-Adaptation Middleware for Context-Aware Mobile DL Deployment","authors":"Sicong Liu;Bin Guo;Shiyan Luo;Yuzhan Wang;Hao Luo;Cheng Fang;Yuan Xu;Ke Ma;Yao Li;Zhiwen Yu","doi":"10.1109/TMC.2025.3549399","DOIUrl":"https://doi.org/10.1109/TMC.2025.3549399","url":null,"abstract":"There are many deep learning (DL) powered mobile and wearable applications today continuously and unobtrusively sensing the ambient surroundings to enhance all aspects of human lives. To enable robust and private mobile sensing, DL models are often deployed locally on resource-constrained mobile devices using techniques such as model compression or offloading. However, existing methods, either front-end algorithm level (i.e. DL model compression/partitioning) or back-end scheduling level (i.e. operator/resource scheduling), cannot be locally online because they require offline retraining to ensure accuracy or rely on manually pre-defined strategies, struggle with <i>dynamic adaptability</i>. The primary challenge lies in feeding back runtime performance from the <i>back-end</i> level to the <i>front-end</i> level optimization decision. Moreover, the adaptive mobile DL model porting middleware with <i>cross-level co-adaptation</i> is less explored, particularly in mobile environments with <i>diversity</i> and <i>dynamics</i>. In response, we introduce CrowdHMTware, a dynamic context-adaptive DL model deployment middleware for heterogeneous mobile devices. It establishes an <i>automated adaptation loop</i> between cross-level functional components, i.e. elastic inference, scalable offloading, and model-adaptive engine, enhancing scalability and adaptability. Experiments with four typical tasks across 15 platforms and a real-world case study demonstrate that <inline-formula><tex-math>${sf CrowdHMTware}$</tex-math></inline-formula> can effectively scale DL model, offloading, and engine actions across diverse platforms and tasks. It hides run-time system issues from developers, reducing the required developer expertise.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 8","pages":"7615-7631"},"PeriodicalIF":7.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144550670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overcoming Catastrophic Forgetting in Federated Continual Graph Learning for Resource-Limited Mobile Devices","authors":"Jiyuan Feng;Xu Yang;Dongyi Zheng;Weihong Han;Binxing Fang;Qing Liao","doi":"10.1109/TMC.2025.3573964","DOIUrl":"https://doi.org/10.1109/TMC.2025.3573964","url":null,"abstract":"Federated Graph Learning (FGL) enables multiple clients to collaboratively learn node representations from private subgraph data, such as user transactions or social networks. Local models are trained on clients and then aggregated by a central server, supporting large-scale graph learning without sharing raw data. However, most existing FGL methods assume that the number of nodes in the graph remains constant, while real-world scenarios often evolve, with new nodes and edges continually added and older ones removed due to limited device memory. We define this setting as Federated Continual Graph Learning (FCGL). In FCGL, global model aggregation may cause interference occur inter-task and inter-client, therefore, FCGL suffers from the global catastrophic forgetting, as the global model adapts to newly added nodes, it loses knowledge acquired from earlier graph data of clients. To address this, we propose GRE-FL, a generative replay framework, which can mitigate global catastrophic forgetting by generating a global summary graph at the server to preserve critical information from historical nodes. It also improves performance by equipping local models with a gating graph attention network for better feature extraction. Experiments show that GRE-FL achieves strong performance across multiple datasets.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"11151-11163"},"PeriodicalIF":9.2,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145021162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterizing and Scheduling of Diffusion Process for Text-to-Image Generation in Edge Networks","authors":"Shuangwei Gao;Peng Yang;Yuxin Kong;Feng Lyu;Ning Zhang","doi":"10.1109/TMC.2025.3574065","DOIUrl":"https://doi.org/10.1109/TMC.2025.3574065","url":null,"abstract":"Artificial Intelligence-Generated Content (AIGC) technology is transforming content creation by enabling diverse customized and quality services. However, the limited computing resources on mobile devices hinder the provisioning of AIGC services at scale, pose challenges in guaranteeing user-satisfied content quality requirement. To address these challenges, we first investigate the characteristics of prompt category and inference models in Text-to-Image (T2I) diffusion process. It is observed that, model size, denoising steps, and computing resource, are three deciding factors to image generation utility. Based on this insight, we first design an edge-assisted AIGC service system to efficiently process multi-user T2I generative requests, employing a multi-flow queuing model to capture multi-user dynamics and characterize the impact of diffusion scheduling on service latency. The system schedules the diffusion process of T2I generation across edge-deployed models, balancing service quality and computing resource. To maximize generation utility under resource constraints, we propose a Monte Carlo Tree Search-based diffusion scheduling algorithm embedded with adaptive computing resource allocation subroutine. This algorithm ensures that, resource allocation dynamically adapts to scheduling decisions in real time, enabling an effective trade-off between service quality and latency. Extensive experimental comparison against baseline approaches demonstrates that, the proposed system can enhance the generation utility by up to 7.3<inline-formula><tex-math>$%$</tex-math></inline-formula>, achieving a 2.9<inline-formula><tex-math>$%$</tex-math></inline-formula> improvement in quality score and a 33.3<inline-formula><tex-math>$%$</tex-math></inline-formula> reduction in service latency.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"11137-11150"},"PeriodicalIF":9.2,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145021372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security-Aware Designs of Multi-UAV Deployment, Task Offloading and Service Placement in Edge Computing Networks","authors":"Mengru Wu;Haonan Wu;Weidang Lu;Lei Guo;Inkyu Lee;Abbas Jamalipour","doi":"10.1109/TMC.2025.3574061","DOIUrl":"https://doi.org/10.1109/TMC.2025.3574061","url":null,"abstract":"Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has emerged as a promising solution to support wireless devices’ computation-intensive services in the absence of terrestrial infrastructures. Nevertheless, the heterogeneous nature of MEC services and the security vulnerability of wireless channels present significant challenges to achieving efficient and secure computation offloading. In this paper, we investigate a multi-UAV-assisted MEC network in which wireless devices need to process diverse computation tasks. The devices can perform local computing or offload their computation tasks to UAV servers that have pre-cached relevant service programs in the presence of eavesdroppers. To facilitate secure service provisioning, we propose a cooperative jamming-based scheme in which a UAV jammer transmits jamming signals to interfere with eavesdroppers during devices’ computation offloading processes. Taking into account UAV servers’ constrained caching spaces and secure offloading requirements, we minimize the total task completion delay of devices by jointly optimizing multi-UAV deployment, task offloading decisions, service placement, UAV jammer’s transmit power, and devices’ transmit power. To tackle the formulated mixed-integer nonlinear programming problem, we design an optimization-embedding multi-agent twin delayed deep deterministic policy gradient (OE-MATD3) algorithm. Specifically, the MATD3 approach is leveraged to deal with optimization variables concerning UAVs, while a closed-form solution for devices’ transmit power is derived and guides MATD3-based decision-making. Simulation results demonstrate that the proposed scheme outperforms baselines in terms of devices’ task completion delay.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"11046-11060"},"PeriodicalIF":9.2,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145021303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AdaShift: Anti-Collapse and Real-Time Deep Model Evolution for Mobile Vision Applications","authors":"Ke Ma;Bin Guo;Sicong Liu;Cheng Fang;Siqi Luo;Zimu Zheng;Zhiwen Yu","doi":"10.1109/TMC.2025.3572215","DOIUrl":"https://doi.org/10.1109/TMC.2025.3572215","url":null,"abstract":"As computational hardware advance, integrating deep learning (DL) models into mobile devices has become ubiquitous for visual tasks. However, “data distribution shift” in live sensory data can lead to a degradation in the accuracy of mobile DL models. Conventional domain adaptation methods, constrained by their dependence on pre-compiled static datasets for offline adaptation, exhibit fundamental limitations in real-time practicality. While modern online adaptation methodologies enable incremental model evolution, they remain plagued by two critical shortcomings: computational latency from excessive resource demands on mobile devices that compromise temporal responsiveness, and accuracy collapse stemming from error accumulation through unreliable pseudo-labeling processes. To address these challenges, we introduce AdaShift, an innovative cloud-assisted framework enabling real-time online model adaptation for vision-based mobile systems operating under non-stationary data distributions. Specifically, to ensure real-time performance, the adaptation trigger and plug-and-play adaptation mechanisms are proposed to minimize redundant adaptation requests and reduce per-request costs. To prevent accuracy collapse, AdaShift introduces a novel anti-collapse parameter restoration mechanism that explicitly recovers knowledge, ensuring stable accuracy improvements during model evolution. Through extensive experiments across various vision tasks and model architectures, AdaShift demonstrates superior accuracy and 100ms-level adaptation latency, achieving an optimal balance between accuracy and real-time performance compared to baselines.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"10573-10589"},"PeriodicalIF":9.2,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}