IEEE Transactions on Computers — Latest Articles

2024 Reviewers List
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-14 DOI: 10.1109/TC.2025.3527650
Volume 74, Issue 1, pp. 334-340. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10840336
Citations: 0
Shared Recurrence Floating-Point Divide/Sqrt and Integer Divide/Remainder With Early Termination
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-11-18 DOI: 10.1109/TC.2024.3500380
Kevin Kim; Katherine Parry; David Harris; Cedar Turek; Alessandro Maiuolo; Rose Thompson; James Stine
Abstract: Division, square root, and remainder are fundamental operations required by most computer systems. Floating-point and integer operations are commonly performed on separate datapaths. This paper presents the first detailed implementation of a shared recurrence unit that supports floating-point division/square root and integer division/remainder. It supports early termination and shares the normalization shifter needed for integer and subnormal inputs. Synthesis results show that shared double-precision dividers producing at least 4 bits per cycle are 9-18% smaller and 3-16% faster than separate integer and floating-point units.
Volume 74, Issue 2, pp. 740-748.
Citations: 0
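The early-termination idea from the abstract can be illustrated with a toy radix-2 restoring divider. This is only a sketch of the general technique: the paper's unit is a higher-radix shared hardware recurrence, and the function name, bit width, and exit condition below are illustrative assumptions, not the authors' design.

```python
def divide_early_term(dividend: int, divisor: int, width: int = 32):
    """Radix-2 restoring division of unsigned integers.

    Early termination: once the partial remainder is zero and all
    remaining dividend bits are zero, every remaining quotient bit
    must be 0, so the recurrence can stop and shift the quotient
    into place instead of iterating over all `width` bits.
    """
    assert divisor > 0 and dividend >= 0
    r, q = 0, 0
    for i in range(width - 1, -1, -1):
        r = (r << 1) | ((dividend >> i) & 1)   # shift in next dividend bit
        if r >= divisor:
            r -= divisor
            q = (q << 1) | 1                   # quotient bit 1
        else:
            q = q << 1                         # quotient bit 0
        if r == 0 and (dividend & ((1 << i) - 1)) == 0:
            q <<= i                            # all further bits are 0
            break                              # -> terminate early
    return q, r
```

For inputs whose low-order bits are already consumed (e.g. dividing one power of two by another), the loop exits well before `width` iterations, which is the latency win the hardware unit exploits.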
A System-Level Test Methodology for Communication Peripherals in System-on-Chips
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-11-18 DOI: 10.1109/TC.2024.3500375
Francesco Angione; Paolo Bernardi; Nicola di Gruttola Giardino; Gabriele Filipponi; Claudia Bertani; Vincenzo Tancorre
Abstract: This paper deals with functional System-Level Test (SLT) for the communication peripherals of System-on-Chips (SoCs). The proposed methodology is based on analyzing the potential weaknesses of applied structural tests, such as scan-based tests. The paper then illustrates how to develop a suite of functional SLT programs to address such issues. When the communication peripheral provides detection/correction features, the methodology proposes the design of a hardware companion module, added to the Automatic Test Equipment (ATE), that interacts with the SoC communication module by purposely corrupting data frames. Experimental results are obtained on an industrial automotive SoC produced by STMicroelectronics, focusing on the Controller Area Network (CAN) communication peripheral, and show the effectiveness of the SLT suite in complementing structural tests.
Volume 74, Issue 2, pp. 731-739. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10755212
Citations: 0
Stream: Design Space Exploration of Layer-Fused DNNs on Heterogeneous Dataflow Accelerators
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-10-10 DOI: 10.1109/TC.2024.3477938
Arne Symons; Linyan Mei; Steven Colleman; Pouya Houshmand; Sebastian Karl; Marian Verhelst
Abstract: As the landscape of deep neural networks evolves, heterogeneous dataflow accelerators, in the form of multi-core architectures or chiplet-based designs, promise more flexibility and higher inference performance through scalability. So far, these systems exploit the increased parallelism by coarsely mapping a single layer at a time across cores, which incurs frequent, costly off-chip memory accesses, or by pipelining batches of inputs, which falls short of meeting the demands of latency-critical applications. To alleviate these bottlenecks, this work explores a new fine-grain mapping paradigm, referred to as layer fusion, on heterogeneous dataflow accelerators through a novel design space exploration framework called Stream. Stream captures a wide variety of heterogeneous dataflow architectures and mapping granularities, and implements a memory- and communication-aware latency and energy analysis validated with three distinct state-of-the-art hardware implementations. As such, it facilitates a holistic exploration of architecture and mapping by strategically allocating the workload through constraint optimization. The findings demonstrate that the integration of layer fusion with heterogeneous dataflow accelerators yields up to 2.2× lower energy-delay product in inference efficiency, addressing both energy consumption and latency concerns. The framework is available open source at github.com/kuleuven-micas/stream.
Volume 74, Issue 1, pp. 237-249.
Citations: 0
FedQClip: Accelerating Federated Learning via Quantized Clipped SGD
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-10-10 DOI: 10.1109/TC.2024.3477972
Zhihao Qu; Ninghui Jia; Baoliu Ye; Shihong Hu; Song Guo
Abstract: Federated Learning (FL) has emerged as a promising technique for collaboratively training machine learning models among multiple participants while preserving privacy-sensitive data. However, the conventional parameter server architecture presents challenges in terms of communication overhead when employing iterative optimization methods such as Stochastic Gradient Descent (SGD). Although communication compression techniques can reduce the traffic cost of FL during each training round, they often lead to degraded convergence rates, mainly due to compression errors and data heterogeneity. To address these issues, this paper presents FedQClip, an innovative approach that combines quantization and Clipped SGD. FedQClip leverages an adaptive step size inversely proportional to the ℓ2 norm of the gradient, effectively mitigating the negative impacts of quantization errors. Additionally, clipped operations can be applied locally and globally to further expedite training. Theoretical analyses provide evidence that, even under Non-IID (non-independent and identically distributed) data settings, FedQClip achieves a convergence rate of O(1/√T), effectively addressing the convergence degradation caused by compression errors. Furthermore, our theoretical analysis highlights the importance of selecting an appropriate number of local updates to enhance the convergence of FL training. Through extensive experiments, we demonstrate that FedQClip outperforms state-of-the-art methods in terms of communication efficiency and convergence rate.
Volume 74, Issue 2, pp. 717-730.
Citations: 0
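The two ingredients named in the abstract, gradient clipping (an effective step size inversely proportional to the ℓ2 norm once it exceeds a threshold) and stochastic quantization of the communicated update, can be sketched as follows. This is a generic stand-in, not the paper's exact FedQClip update rule; the threshold `c`, `levels`, and the `local_round` structure are illustrative assumptions.

```python
import numpy as np

def clip_grad(g: np.ndarray, c: float) -> np.ndarray:
    """Clipped gradient: unchanged while ||g|| <= c, otherwise
    rescaled to norm c, i.e. an effective step size ~ 1/||g||."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def quantize(v: np.ndarray, levels: int = 16) -> np.ndarray:
    """Unbiased stochastic uniform quantizer (a generic stand-in
    for the paper's compression operator)."""
    scale = float(np.max(np.abs(v)))
    if scale == 0.0:
        return v.copy()
    normalized = np.abs(v) / scale * levels     # in [0, levels]
    lower = np.floor(normalized)
    prob = normalized - lower                   # round up w.p. fractional part
    q = lower + (np.random.rand(*v.shape) < prob)
    return np.sign(v) * q * (scale / levels)

def local_round(x, grad_fn, lr=0.1, c=1.0, steps=5):
    """One client round: clipped SGD steps, then a quantized update
    (the compressed difference is what gets sent to the server)."""
    x0 = x.copy()
    for _ in range(steps):
        x = x - lr * clip_grad(grad_fn(x), c)
    return quantize(x - x0)
```

The interplay the paper analyzes is visible even here: clipping bounds the per-step movement, which in turn bounds the dynamic range `scale` that the quantizer must cover, limiting compression error.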
A Deep Learning-Assisted Template Attack Against Dynamic Frequency Scaling Countermeasures
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-10-10 DOI: 10.1109/TC.2024.3477997
Davide Galli; Francesco Lattari; Matteo Matteucci; Davide Zoni
Abstract: In recent decades, machine learning techniques have been extensively used in place of classical template attacks to implement profiled side-channel analysis. This manuscript focuses on the application of machine learning to counteract Dynamic Frequency Scaling defenses. While state-of-the-art attacks have shown promising results against desynchronization countermeasures, a robust attack strategy has yet to be realized. Motivated by the simplicity and effectiveness of template attacks on devices lacking desynchronization countermeasures, this work presents a Deep Learning-assisted Template Attack (DLaTA) methodology specifically designed to target traces highly desynchronized by Dynamic Frequency Scaling. A deep learning-based pre-processing step recovers information obscured by desynchronization, followed by a template attack for key extraction. Specifically, we developed a three-stage deep learning pipeline to resynchronize traces to a uniform reference clock frequency. Experimental results on an AES cryptosystem executed on a RISC-V System-on-Chip report a Guessing Entropy equal to 1 and a Guessing Distance greater than 0.25, demonstrating the method's ability to successfully retrieve secret keys even in the presence of high desynchronization. As an additional contribution, we publicly release our DFS_DESYNCH database (https://github.com/hardware-fab/DLaTA), containing the first set of real-world highly desynchronized power traces from the execution of a software AES cryptosystem.
Volume 74, Issue 1, pp. 293-306. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713265
Citations: 0
Balancing Privacy and Accuracy Using Significant Gradient Protection in Federated Learning
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-10-10 DOI: 10.1109/TC.2024.3477971
Benteng Zhang; Yingchi Mao; Xiaoming He; Huawei Huang; Jie Wu
Abstract: Previous state-of-the-art studies have demonstrated that adversaries can access sensitive user data through membership inference attacks (MIAs) in Federated Learning (FL). Introducing differential privacy (DP) into the FL framework is an effective way to enhance the privacy of FL. Nevertheless, in differentially private federated learning (DP-FL), local gradients become excessively sparse in certain training rounds. Especially when training with low privacy budgets, there is a risk of introducing excessive noise into clients' gradients, which can lead to a significant degradation in the accuracy of the global model. Thus, balancing user privacy and global model accuracy is a key challenge in DP-FL. To this end, we propose differential privacy federated aggregation based on significant gradient protection (DP-FedASGP). DP-FedASGP mitigates excessive noise by protecting significant gradients and accelerates the convergence of the global model by computing dynamic aggregation weights for gradients. Experimental results show that DP-FedASGP achieves privacy protection comparable to DP-FedAvg and cpSGD (communication-private SGD based on gradient quantization) and outperforms DP-FedSNLC (sparse noise based on clipping losses and privacy budget costs) and FedSMP (sparsified model perturbation). Furthermore, the average global test accuracy of DP-FedASGP across four datasets and three models is about 2.62%, 4.71%, 0.45%, and 0.19% higher than the above methods, respectively. These improvements indicate that DP-FedASGP is a promising approach for balancing the privacy and accuracy of DP-FL.
Volume 74, Issue 1, pp. 278-292.
Citations: 0
Collaborative Neural Architecture Search for Personalized Federated Learning
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-10-10 DOI: 10.1109/TC.2024.3477945
Yi Liu; Song Guo; Jie Zhang; Zicong Hong; Yufeng Zhan; Qihua Zhou
Abstract: Personalized federated learning (pFL) is a promising approach for training customized models for multiple clients over heterogeneous data distributions. However, existing works on pFL often rely on the optimization of model parameters and ignore the demand for personalizing the neural network architecture itself, which can greatly affect model performance in practice. Therefore, generating personalized models with different neural architectures for different clients is a key issue in implementing pFL in a heterogeneous environment. Motivated by Neural Architecture Search (NAS), a model architecture search methodology, this paper aims to automate model design in a collaborative manner while achieving good training performance for each client. Specifically, we reconstruct the centralized search of NAS into a distributed scheme called Personalized Architecture Search (PAS), where differentiable architecture fine-tuning is achieved via gradient-descent optimization, allowing each client to obtain the most appropriate model. Furthermore, to aggregate knowledge from heterogeneous neural architectures, a knowledge distillation-based training framework is proposed to achieve a good trade-off between generalization and personalization in federated learning. Extensive experiments demonstrate that our architecture-level personalization method achieves higher accuracy under non-IID settings without increasing model complexity relative to state-of-the-art benchmarks.
Volume 74, Issue 1, pp. 250-262.
Citations: 0
A Heterogeneous and Adaptive Architecture for Decision-Tree-Based ACL Engine on FPGA
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-10-10 DOI: 10.1109/TC.2024.3477955
Yao Xin; Chengjun Jia; Wenjun Li; Ori Rottenstreich; Yang Xu; Gaogang Xie; Zhihong Tian; Jun Li
Abstract: Access Control Lists (ACLs) are crucial for ensuring the security and integrity of modern cloud and carrier networks by regulating access to sensitive information and resources. However, previous software and hardware implementations no longer meet the requirements of modern datacenters. The emergence of FPGA-based SmartNICs presents an opportunity to offload ACL functions from the host CPU, leading to improved network performance in datacenter applications. Yet previous FPGA-based ACL designs lacked the flexibility to support different rulesets without hardware reconfiguration while maintaining high performance. In this paper, we propose HACL, a heterogeneous and adaptive architecture for a decision-tree-based ACL engine on FPGA. By employing techniques such as tree decomposition and recirculated pipeline scheduling, HACL can accommodate various rulesets without reconfiguring the underlying architecture. To efficiently map different decision trees to memory and optimize the throughput of a ruleset, we also introduce a heterogeneous framework with a CPU-side compiler for HACL. We implement HACL on a typical SmartNIC and evaluate its performance. The results demonstrate that HACL achieves a throughput exceeding 260 Mpps when processing 100K-scale ACL rulesets, with low hardware resource utilization. By integrating more engines, HACL can achieve even higher throughput and support larger rulesets.
Volume 74, Issue 1, pp. 263-277.
Citations: 0
Dependability of the K Minimum Values Sketch: Protection and Comparative Analysis
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-10-09 DOI: 10.1109/TC.2024.3475588
Jinhua Zhu; Zhen Gao; Pedro Reviriego; Shanshan Liu; Fabrizio Lombardi
Abstract: A basic operation in big data analysis is cardinality estimation; to estimate cardinality at high speed and with a low memory requirement, data sketches that provide approximate estimates are usually used. The K Minimum Values (KMV) sketch is one of the most popular options; however, soft errors in the memories used by KMV may substantially degrade its performance. This paper is the first to consider the impact of soft errors on the KMV sketch and to compare it with HyperLogLog (HLL), another widely used sketch for cardinality estimation. Initially, the operation of KMV in the presence of soft errors in memory (i.e., its dependability) is studied through theoretical analysis and error-injection simulation. The evaluation results show that errors during the construction phase of KMV may cause large deviations in the estimates. Subsequently, based on the algorithmic features of the KMV sketch, two protection schemes are proposed. The first uses a single parity check (SPC) to detect errors and reduce their impact on the cardinality estimate; the second exploits the incremental property of the memory list in KMV. The presented evaluation shows that both schemes can dramatically improve the performance of KMV, and that the SPC scheme performs better even though it requires a larger memory footprint and extra overhead in the checking operation. Finally, it is shown that soft errors on the unprotected KMV produce larger worst-case errors than in HLL, but the average impact of errors is lower; moreover, the protected KMV using the proposed schemes is more dependable than HLL with existing protection techniques.
Volume 74, Issue 1, pp. 210-221.
Citations: 0
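For readers unfamiliar with the sketch being protected, here is a minimal (unprotected) KMV implementation. The hash construction and the textbook estimator (k-1)/h_k are standard; the default `k` and the SHA-1-based hash are illustrative choices, not details from the paper.

```python
import bisect
import hashlib

class KMVSketch:
    """K Minimum Values sketch: keep the k smallest hash values seen.
    With hashes uniform in [0, 1), the k-th smallest value h_k yields
    the cardinality estimate (k - 1) / h_k."""

    def __init__(self, k: int = 256):
        self.k = k
        self.mins: list[float] = []   # sorted, at most k distinct hashes

    def _hash(self, item) -> float:
        digest = hashlib.sha1(str(item).encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def add(self, item) -> None:
        h = self._hash(item)
        if h in self.mins:            # duplicates do not change the sketch
            return
        if len(self.mins) < self.k:
            bisect.insort(self.mins, h)
        elif h < self.mins[-1]:
            bisect.insort(self.mins, h)
            self.mins.pop()           # evict the largest of the k minima

    def estimate(self) -> float:
        if len(self.mins) < self.k:
            return float(len(self.mins))   # fewer than k distinct: exact
        return (self.k - 1) / self.mins[-1]
```

Because the estimate depends directly on the stored minimum hashes, a soft error flipping a bit of any entry in `mins` (especially the k-th minimum) perturbs the output, which is the vulnerability the paper's construction-phase analysis and protection schemes address.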