{"title":"Application type awareness pod-level and system-level container scheduling","authors":"Zheqi Zhang, Yaling Xun, Haifeng Yang, Jianghui Cai","doi":"10.1016/j.future.2025.107898","DOIUrl":"10.1016/j.future.2025.107898","url":null,"abstract":"<div><div>Kubernetes, as a powerful tool for managing containerized applications, is considered a promising tool for supporting cloud computing platforms. The default scheduling scoring strategy only considers seeking an optimal node for the current pod and ignores the availability of subsequent nodes. Additionally, the node with the highest overall score may not necessarily be the most suitable node for the current task when the scoring process is performed. Therefore, a new Container scheduling strategies based on application type awareness at the pod and system levels (ATASL) is proposed. Firstly, in order to address the computational waste caused by the need to sequentially traverse and score all nodes in traditional node filtering methods, ATASL binds labels for Pods and nodes based on the required resources of Pods and the remaining resources of nodes, corresponding to “Compute” and “Memory”. So the subsequent scheduling of Pods is restricted to the corresponding groups of nodes only, avoiding the traversal of all nodes for scoring. Moreover, before scheduling each new task, ATASL adjusts the node roles to accommodate dynamic load changes based on the real-time resource status of the nodes. Secondly, when calculating the node score, not only the Pod-level score that matches the resource demand of the Pod is considered, but also the “system penalty score” mechanism is introduced to avoid the performance bottleneck caused by the over-utilization of a certain resource. This mechanism imposes a penalty on nodes where the utilization of a particular resource significantly exceeds the overall average utilization of the cluster, preventing resource imbalance and performance degradation (i.e., preventing overburdened nodes from being selected). Finally, a Kubernetes cluster was built using VMware to evaluate system performance. The experimental results show that ATASL can significantly improve the overall throughput and system resource utilization of the cluster, and also lead to a substantial improvement in node balance.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"173 ","pages":"Article 107898"},"PeriodicalIF":6.2,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144089409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AdaGap: An adaptive gap-aware resource allocation strategy for GPU sharing in heterogeneous clusters","authors":"Sheng Wang , Shiping Chen , Yumei Shi , Guangshun Yao , Meng Liu","doi":"10.1016/j.future.2025.107883","DOIUrl":"10.1016/j.future.2025.107883","url":null,"abstract":"<div><div>Heterogeneous GPU clusters are crucial for high-performance computing and deep learning tasks, offering a flexible and cost-effective platform. GPU sharing allows multiple containers to concurrently access the same physical GPU, improving overall GPU usage. However, underutilization of GPU resources remains a significant challenge, primarily due to inefficient resource allocation and fragmentation within GPU sharing environments. Existing GPU sharing solutions often overlook the importance of effective resource allocation strategies, leading to resource gaps. In this paper, we propose AdaGap, an adaptive gap-aware, Deep Q-Network-based resource allocation strategy designed to optimize GPU usage by minimizing underutilized gaps in heterogeneous clusters. We develop a dynamic, gap-aware resource allocation mechanism that adapts to changing task requirements and diverse GPU and CPU resources, formulating the allocation problem as a Markov Decision Process. We conduct experiments using real-world data from Alibaba cloud, and the results demonstrate AdaGap’s robust adaptability across various heterogeneous scenarios. The method improves allocation strategies by minimizing resource gaps and reducing job completion times compared to baseline methods.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"173 ","pages":"Article 107883"},"PeriodicalIF":6.2,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144089410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive heuristics for scheduling DNN inferencing on edge and cloud for personalized UAV fleets","authors":"Suman Raj, Radhika Mittal, Harshil Gupta, Yogesh Simmhan","doi":"10.1016/j.future.2025.107874","DOIUrl":"10.1016/j.future.2025.107874","url":null,"abstract":"<div><div>Drone fleets with onboard cameras coupled with computer vision and DNN inferencing models can support diverse applications, from package deliveries to disaster monitoring. One such novel domain is for one or more “buddy” drones to assist Visually Impaired People (VIPs) lead an active lifestyle. Video inferencing tasks from such drones can help both navigate the drone and provide situation awareness to the VIP, and hence have strict execution deadlines. These tasks can execute either on an accelerated edge like Nvidia Jetson linked to the drone, or on a cloud INFerencing-as-a-Service (INFaaS). However, making this decision is a challenge given the latency and cost trade-offs across a stream of deadline-sensitive tasks, in the presence of network and/or compute variability. We propose a deadline-driven heuristic, DEMS-A, to schedule diverse DNN tasks generated continuously to perform inferencing over video segments generated by multiple drones linked to an edge, with the option to execute on the cloud. We use strategies like task dropping, work stealing and migration, and dynamic adaptation to cloud variability, to fully utilize the captive edge with intelligent offloading to the cloud, and guarantee a Quality of Service (QoS), <em>i.e.</em> maximize the utility and the number of tasks completed. We also introduce an additional Quality of Experience (QoE) metric useful to the assistive drone domain, which values the frequency of success for task types to ensure the responsiveness and reliability of the VIP application. We extend our DEMS solution to GEMS to solve this. We evaluate these strategies, using (i) an emulated setup of a fleet of over 80 drones supporting over 25 VIPs, with real DNN models executing on pre-recorded drone video streams, using Jetson Nano edges and AWS Lambda cloud functions, and (ii) a real-world setup of a Tello drone and a Jetson Orin Nano edge accelerator executing a subset of the DNN models on live video feeds and generating drone commands to follow a VIP in real-time. The detailed comparative emulation study shows that our strategies have a task completion rate of up to 88%, up to <span><math><mrow><mn>2</mn><mo>.</mo><mn>7</mn><mo>×</mo></mrow></math></span> higher QoS utility compared to the baselines, a further 16% higher QoS utility while adapting to network variability, and up to 75% higher QoE utility. Our practical validation using real drones exhibits task completion of up to 87% for GEMS and 33% higher total utility of GEMS compared to edge-only and achieves the smoothest trajectory with minimum jerk and lowest yaw error.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"173 ","pages":"Article 107874"},"PeriodicalIF":6.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144098508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Federated learning across the compute continuum: A hierarchical approach with splitNNs and personalized layers","authors":"Harshit Gupta , Arya Krishnan , O.P. Vyas , Giovanni Merlino , Francesco Longo , Antonio Puliafito","doi":"10.1016/j.future.2025.107878","DOIUrl":"10.1016/j.future.2025.107878","url":null,"abstract":"<div><div>Federated Learning (FL) allows a Machine Learning (ML) model to be trained collaboratively among distributed devices while preserving the privacy of the data being used for the training. On the other hand, Hierarchical Federated Learning (HFL) is the extended architecture of FL, which consists of additional edge servers for partial aggregation. FL is very useful in privacy-preserving machine learning. However, it has some setbacks, such as statistical heterogeneity, multiple expensive global iterations, performance degradation due to insufficient data, and slow convergence. To deal with such setbacks, the work proposes three approaches with HFL. The first approach utilizes Transfer Learning with HFL, the second approach uses personalized layers in HFL by presenting a 2-tier & 3-tier architecture, and the third approach uses Split Learning (SL) with HFL by proposing an extended 3-tier architecture. The proposed work performed well with the computation at multilevel, i.e., on client, edge, and cloud, exploiting the hybrid infrastructure of IoT-Edge-cloud, i.e., compute continuum. The obtained results showed that the proposed work outperforms by increasing the accuracy of complex models from 18.10% to 76.91% with faster convergence. The work also showed better performance than the state-of-the-art models. Significant performance improvement was achieved in the presence of personalized layers in an HFL-SplitNN architecture. The proposed 3-tier architecture especially shines in the case of less homogeneous data per client. SL played a vital role with HFL in enhancing performance by providing a maximum accuracy of 82.38% with Independent & Identically Distributed Data (IID) and 52.16% with Non-IID data distribution.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"173 ","pages":"Article 107878"},"PeriodicalIF":6.2,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144089407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can LLM-generated misinformation be detected: A study on Cyber Threat Intelligence","authors":"He Huang , Nan Sun , Massimiliano Tani , Yu Zhang , Jiaojiao Jiang , Sanjay Jha","doi":"10.1016/j.future.2025.107877","DOIUrl":"10.1016/j.future.2025.107877","url":null,"abstract":"<div><div>Given the increasing number and severity of cyber attacks, there has been a surge in cybersecurity information across various mediums such as posts, news articles, reports, and other resources. Cyber Threat Intelligence (CTI) involves processing data from these cybersecurity sources, enabling professionals and organizations to gain valuable insights. However, with the rapid dissemination of cybersecurity information, the inclusion of fake CTI can lead to severe consequences, including data poisoning attacks. To address this challenge, we have implemented a three-step strategy: generating synthetic CTI, evaluating the quality of the generated CTI, and detecting fake CTI. Unlike other subdomains, such as fake COVID news detection, there is currently no publicly available dataset specifically tailored for fake CTI detection research. To address this gap, we first establish a reliable groundtruth dataset by utilizing domain-specific cybersecurity data to fine-tune a Large Language Model (LLM) for synthetic CTI generation. We then employ crowdsourcing techniques and advanced synthetic data verification methods to evaluate the quality of the generated dataset, introducing a novel evaluation methodology that combines quantitative and qualitative approaches. Our comprehensive evaluation reveals that the generated CTI cannot be distinguished from genuine CTI by human annotators, regardless of their computer science background, demonstrating the effectiveness of our generation approach. We benchmark various misinformation detection techniques against our groundtruth dataset to establish baseline performance metrics for identifying fake CTI. By leveraging existing techniques and adapting them to the context of fake CTI detection, we provide a foundation for future research in this critical field. To facilitate further research, we make our code, dataset, and experimental results publicly available on <span><span>GitHub</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"173 ","pages":"Article 107877"},"PeriodicalIF":6.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143941291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"X-DINC: Toward Cross-Layer ApproXimation for the Distributed and In-Network ACceleration of Multi-Kernel Applications","authors":"Zahra Ebrahimi , Maryam Eslami , Xun Xiao , Akash Kumar","doi":"10.1016/j.future.2025.107864","DOIUrl":"10.1016/j.future.2025.107864","url":null,"abstract":"<div><div>With the rapid evolution of programmable network devices and the urge for energy-efficient and sustainable computing, network infrastructures are mutating toward a computing pipeline, providing In-Network Computing (INC) capability. Despite the initial success in offloading single/small kernels to the network devices, deploying multi-kernel applications remains challenging due to limited memory, computing resources, and lack of support for Floating Point (FP) and complex operations. To tackle these challenges, we present a cross-layer approximation and distribution methodology (X-DINC) that exploits the error resilience of applications. X-DINC utilizes a chain of techniques to facilitate kernel deployment and distribution across heterogeneous devices in INC environments. First, we identify approximation and optimization opportunities in data acquisition and computation phases of multi-kernel applications. Second, we simplify complex arithmetic operations to cope with the <em>computation</em> limitations of the programmable network switches. Third, we perform application-level sensitivity analysis to measure the trade-off between performance gain and Quality of Results (QoR) loss when approximating individual kernels via various techniques. Finally, a greedy heuristic swiftly generates Pareto/near-Pareto mixed-precision configurations that maximize the performance gain while maintaining the user-defined QoR. X-DINC is prototyped on a Virtex-7 Field Programmable Gate Array (FPGA) and evaluated using the Blind Source Separation (BSS) application on industrial audio dataset. Results show that X-DINC performs separation up to 35% faster with up to 88% lower Area-Delay Product (ADP) compared to an <em>Accurate-Centralized</em> approach, when distributed across 2 to 7 network nodes, while maintaining audio quality within an acceptable range of 15–20 dB.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"172 ","pages":"Article 107864"},"PeriodicalIF":6.2,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143928705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A deep reinforcement learning based algorithm for time and cost optimized scaling of serverless applications","authors":"Anupama Mampage, Shanika Karunasekera, Rajkumar Buyya","doi":"10.1016/j.future.2025.107873","DOIUrl":"10.1016/j.future.2025.107873","url":null,"abstract":"<div><div>Serverless computing has gained a strong traction in the cloud computing community in recent years. Among the many benefits of this novel computing model, the rapid auto-scaling capability of user applications takes prominence. However, the offer of adhoc scaling of user deployments at function level introduces many complications to serverless systems. The added delay and failures in function request executions caused by the time consumed for dynamically creating new resources to suit function workloads, known as the cold-start delay, is one such very prevalent shortcoming. Maintaining idle resource pools to alleviate this issue often results in wasted resources from the cloud provider perspective. Existing solutions to address this limitation mostly focus on predicting and understanding function load levels in order to proactively create required resources. Although these solutions improve function performance, the lack of understanding on the overall system characteristics in making these scaling decisions often leads to the sub-optimal usage of system resources. Further, the multi-tenant nature of serverless systems requires a scalable solution adaptable for multiple co-existing applications, a limitation seen in most current solutions. In this paper, we introduce a novel multi-agent Deep Reinforcement Learning based intelligent solution for both horizontal and vertical scaling of function resources, based on a comprehensive understanding on both function and system requirements. Our solution elevates function performance reducing cold starts, while also offering the flexibility for optimizing resource maintenance cost to the service providers. Experiments conducted considering varying workload scenarios show improvements of up to 23% and 34% in terms of application latency and request failures, or alternatively saving up to 45% in infrastructure cost for the service providers.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"173 ","pages":"Article 107873"},"PeriodicalIF":6.2,"publicationDate":"2025-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multi-agent architecture for context sources integration in smart cities","authors":"Leonardo Vianna do Nascimento , José Palazzo Moreira de Oliveira","doi":"10.1016/j.future.2025.107862","DOIUrl":"10.1016/j.future.2025.107862","url":null,"abstract":"<div><div>Contextual data in smart cities are present in large quantities and distributed sources. Many applications can benefit from these data to provide better services to their users. The scale and dynamic nature of urban environments pose significant challenges in making context sources available to applications. These challenges involve transparent access to context, resilience, decentralization, extensibility, scalability, and redundancy of data. This study introduces a new architecture designed to address these issues. This architecture aims to facilitate the acquisition of context by integrating distributed data sources. The developed architecture not only overcomes the challenges posed by the scale and dynamicity of urban environments but also prepares for more innovative and effective solutions for smart cities. The architecture is distributed, decentralized, and fault-tolerant, providing data fusion mechanisms and dynamic context source composition. Compared to existing works, our architecture contributes to the state-of-the-art addressing all these five challenges in one design. The architecture uses the multi-agent paradigm, which is inherently distributed and facilitates decentralization. A scenario was used to execute several experiments demonstrating that the architecture can obtain context data transparently by any application.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"172 ","pages":"Article 107862"},"PeriodicalIF":6.2,"publicationDate":"2025-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143924376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient edge-based data integrity auditing in cloud storage","authors":"Hao Yan , Yan Wang , Guoxiu Liu , Juan Zhao","doi":"10.1016/j.future.2025.107899","DOIUrl":"10.1016/j.future.2025.107899","url":null,"abstract":"<div><div>Edge computing increasingly collaborates with cloud computing to support numerous applications that involve large data volumes and frequent data interactions. In cloud-edge collaboration environments, applications especially with high requirements for low data transmission delay often deploy frequently accessed client data replicas on edge servers to improve data access efficiency. Consequently, client data is often distributed across both cloud and edge servers in practice. Therefore, efficiently verifying the integrity of all client data poses a complex and urgent challenge. To address this issue, the paper introduces a novel data integrity auditing scheme capable of efficiently performing asynchronous integrity checks on client data across both edge and cloud servers. In our scheme, clients only generate partial block tags and upload them along with the data to the edge server. Edge server computes complete tags based on the partial tags, caches a small portion of frequently accessed data, and transfers the remaining data to the cloud server. For data verification, edge servers provide partial integrity proofs for cached data, supporting the cloud server to generate complete proofs for all challenged data. Thus, the auditors can verify all client data, regardless of its storage location. In our scheme, edge clients bear only about half of the computational workload of existing schemes. Additionally, the cloud server also offloads a portion of computational and storage tasks to edge servers, significantly improving the overall efficiency of data checking. We theoretically prove the security of our scheme, and experimental results demonstrate its efficiency and feasibility.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"172 ","pages":"Article 107899"},"PeriodicalIF":6.2,"publicationDate":"2025-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143936896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving self-supervised vertical federated learning with contrastive instance-wise similarity and dynamical balance pool","authors":"Shuai Chen , Wenyu Zhang , Xiaoling Huang , Cheng Zhang , Qingjun Mao","doi":"10.1016/j.future.2025.107884","DOIUrl":"10.1016/j.future.2025.107884","url":null,"abstract":"<div><div>Vertical Federated Learning (VFL) enables multiple parties with distinct feature spaces to train a joint VFL model collaboratively without exposing their original private data. In realistic scenarios, the scarcity of aligned and labeled samples among collaborating participants limits the effectiveness of traditional VFL approaches for model training. Current VFL frameworks attempt to leverage abundant unlabeled data using Contrastive Self-Supervised Learning (CSSL). However, the simplistic incorporation of CSSL methods cannot address severe domain shift in VFL. In addition, CSSL methods typically conflict with general regularization approaches designed to alleviate domain shift, thereby significantly limiting the potential of the self-supervised learning framework in VFL. To address these challenges, this study proposes an Improved Self-Supervised Vertical Federated Learning (ISSVFL) framework for VFL in label-scarce scenarios under the semi-honest and no-collusion assumption. ISSVFL merges CSSL with instance-wise similarity to resolve regularization conflicts and captures more significant inter-domain knowledge in the representations from different participants, effectively alleviating domain shift. In addition, a new dynamical balance pool is proposed to fine-tune the pre-trained models for downstream supervised tasks by dynamically balancing inter-domain and intra-domain knowledge. Extensive empirical experiments on image and tabular datasets demonstrate that ISSVFL achieves an average performance improvement of 3.3 % compared with state-of-the-art baselines.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"172 ","pages":"Article 107884"},"PeriodicalIF":6.2,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143931576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}