IEEE Transactions on Computers: Latest Articles

Optimizing Structured-Sparse Matrix Multiplication in RISC-V Vector Processors
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-24 DOI: 10.1109/TC.2025.3533083
Vasileios Titopoulos;Kosmas Alexandridis;Christodoulos Peltekis;Chrysostomos Nicopoulos;Giorgos Dimitrakopoulos
{"title":"Optimizing Structured-Sparse Matrix Multiplication in RISC-V Vector Processors","authors":"Vasileios Titopoulos;Kosmas Alexandridis;Christodoulos Peltekis;Chrysostomos Nicopoulos;Giorgos Dimitrakopoulos","doi":"10.1109/TC.2025.3533083","DOIUrl":"https://doi.org/10.1109/TC.2025.3533083","url":null,"abstract":"Structured sparsity has been proposed as an efficient way to prune the complexity of Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. Accelerating ML models, whether for training, or inference, heavily relies on matrix multiplications that can be efficiently executed on vector processors, or custom matrix engines. This work aims to integrate the simplicity of structured sparsity into vector execution to speed up the corresponding matrix multiplications. Initially, the implementation of structured-sparse matrix multiplication using the current RISC-V instruction set vector extension is comprehensively explored. Critical parameters that affect performance, such as the impact of data distribution across the scalar and vector register files, data locality, and the effectiveness of loop unrolling are analyzed both qualitatively and quantitatively. Furthermore, it is demonstrated that the addition of a single new instruction would reap even higher performance. The newly proposed instruction is called <monospace>vindexmac</monospace>, i.e., vector index-multiply-accumulate. It allows for indirect reads from the vector register file and it reduces the number of instructions executed per matrix multiplication iteration, without introducing additional dependencies that would limit loop unrolling. The proposed new instruction was integrated in a decoupled RISC-V vector processor with negligible hardware cost. Experimental results demonstrate the runtime efficiency and the scalability offered by the introduced optimizations and the new instruction for the execution of state-of-the-art Convolutional Neural Networks. More particularly, the addition of a custom instruction improves runtime by 25% and 33%, when compared with highly-optimized vectorized kernels that use only the currently defined RISC-V instructions.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1446-1460"},"PeriodicalIF":3.6,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
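The access pattern that vindexmac fuses, an indexed read from the register file followed by a multiply-accumulate, can be sketched in plain NumPy. The toy below assumes a 2:4 structured-sparsity layout (at most two nonzeros per group of four along a row) and illustrative function names; the paper's kernels are not tied to this exact packing:

```python
import numpy as np

def compress_2to4(a: np.ndarray):
    """Pack a 2:4 structured-sparse matrix: per row, keep the (at most two)
    nonzero values of every group of four, plus their column indices."""
    rows, cols = a.shape
    vals = np.zeros((rows, cols // 2), dtype=a.dtype)
    idx = np.zeros((rows, cols // 2), dtype=np.int64)
    for r in range(rows):
        k = 0
        for g in range(0, cols, 4):
            nz = np.flatnonzero(a[r, g:g + 4])[:2]
            for j in nz:
                vals[r, k], idx[r, k] = a[r, g + j], g + j
                k += 1
            k += 2 - len(nz)            # unused slots stay zero-padded
    return vals, idx

def sparse_matmul(vals, idx, b):
    """C = A @ B from the packed operand: every stored value multiplies the
    row of B selected by its column index, i.e. an indexed read followed by
    a multiply-accumulate, the pair a vindexmac-style instruction fuses."""
    c = np.zeros((vals.shape[0], b.shape[1]), dtype=np.result_type(vals, b))
    for r in range(vals.shape[0]):
        for v, j in zip(vals[r], idx[r]):
            c[r] += v * b[j]            # gather by index, then MAC
    return c

rng = np.random.default_rng(0)
a = rng.integers(-4, 5, (4, 8))
a[:, ::2] = 0                           # force a 2:4-compatible pattern
b = rng.integers(-4, 5, (8, 3))
vals, idx = compress_2to4(a)
assert np.array_equal(sparse_matmul(vals, idx, b), a @ b)
```

The inner zip loop is the step the paper vectorizes: without the fused instruction, each iteration needs separate index-load, gather, and multiply-accumulate instructions.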
2024 Reviewers List
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-14 DOI: 10.1109/TC.2025.3527650
{"title":"2024 Reviewers List","authors":"","doi":"10.1109/TC.2025.3527650","DOIUrl":"https://doi.org/10.1109/TC.2025.3527650","url":null,"abstract":"","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 1","pages":"334-340"},"PeriodicalIF":3.6,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10840336","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SLOpt: Serving Real-Time Inference Pipeline With Strict Latency Constraint
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-10 DOI: 10.1109/TC.2025.3528125
Zhixin Zhao;Yitao Hu;Guotao Yang;Ziqi Gong;Chen Shen;Laiping Zhao;Wenxin Li;Xiulong Liu;Wenyu Qu
{"title":"SLOpt: Serving Real-Time Inference Pipeline With Strict Latency Constraint","authors":"Zhixin Zhao;Yitao Hu;Guotao Yang;Ziqi Gong;Chen Shen;Laiping Zhao;Wenxin Li;Xiulong Liu;Wenyu Qu","doi":"10.1109/TC.2025.3528125","DOIUrl":"https://doi.org/10.1109/TC.2025.3528125","url":null,"abstract":"The rise of machine learning as a service (MLaaS) has driven the demand for complex and customized real-time inference tasks, often requiring cascading multiple deep neural network (DNN) models into inference pipelines. However, these pipelines pose significant challenges due to scheduling complexity, particularly in maintaining strict latency service level objectives (SLOs). Existing systems serve pipelines with model-independent scheduling policies, which ignore the unique workload characteristics introduced by model cascading in the inference pipeline, leading to SLO violations and resource inefficiencies. In this paper, we propose that the serving system should exploit the model-cascading nature and intermodel workload dependency of the inference pipeline to ensure strict latency SLO cost-effectively. Based on this, we design and implement <monospace>SLOpt</monospace>, a serving system optimized for real-time inference pipelines with a three-stage codesign of workload estimation, resource provisioning, and request execution. <monospace>SLOpt</monospace> proposes cascade workload estimation and ahead-of-time tuning, which together address the challenge of cascade blocking and head-of-line blocking in workload estimation and resource provisioning. <monospace>SLOpt</monospace> further implements an adaptive batch drop policy to mitigate latency amplification issues within the pipeline. These innovations enable <monospace>SLOpt</monospace> to reduce the 99th percentile latency (P99 latency) by <inline-formula><tex-math>$1.4$</tex-math></inline-formula> to <inline-formula><tex-math>$2.5$</tex-math></inline-formula> times compared to the state of the arts while lowering serving costs by up to <inline-formula><tex-math>$29%$</tex-math></inline-formula>. Moreover, to achieve comparable P99 latency, <monospace>SLOpt</monospace> requires up to <inline-formula><tex-math>$70%$</tex-math></inline-formula> less cost than existing systems. Extensive evaluations on a 64-GPU cluster demonstrate <monospace>SLOpt</monospace>'s effectiveness in meeting strict P99 latency SLOs under diverse real-world workloads.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1431-1445"},"PeriodicalIF":3.6,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
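The adaptive batch drop idea can be illustrated with a simple deadline check. This is not SLOpt's actual policy, only the general shape of deadline-aware dropping in a cascaded pipeline; the request fields and the fixed remaining-cost estimate are assumptions:

```python
import time

def drop_expired(batch, now, remaining_cost):
    """Toy deadline-aware batch drop: before the next model in the cascade
    runs, discard requests that cannot meet their SLO even if served
    immediately. `remaining_cost` is the estimated time the rest of the
    pipeline still needs (a profiled constant here; SLOpt derives it from
    cascade-aware workload estimation)."""
    kept, dropped = [], []
    for req in batch:
        deadline = req["arrival"] + req["slo"]
        (kept if now + remaining_cost <= deadline else dropped).append(req)
    # Dropping early frees GPU time for requests that can still make it.
    return kept, dropped

now = time.monotonic()
batch = [
    {"id": 1, "arrival": now - 0.010, "slo": 0.100},   # plenty of slack
    {"id": 2, "arrival": now - 0.095, "slo": 0.100},   # cannot make it
]
kept, dropped = drop_expired(batch, now, remaining_cost=0.030)
print([r["id"] for r in kept], [r["id"] for r in dropped])   # [1] [2]
```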
NetCRC-NR: In-Network 5G NR CRC Accelerator
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-07 DOI: 10.1109/TC.2025.3526326
Abdulbary Naji;Xingfu Wang;Ping Liu;Ammar Hawbani;Liang Zhao;Xiaohua Xu;Fuyou Miao
{"title":"NetCRC-NR: In-Network 5G NR CRC Accelerator","authors":"Abdulbary Naji;Xingfu Wang;Ping Liu;Ammar Hawbani;Liang Zhao;Xiaohua Xu;Fuyou Miao","doi":"10.1109/TC.2025.3526326","DOIUrl":"https://doi.org/10.1109/TC.2025.3526326","url":null,"abstract":"In 5G Radio Access Networks (RAN), Cyclic Redundancy Check (CRC) algorithms play a vital role in detecting accidental changes to digital data during transmission. However, due to the massive bandwidth demands in 5G networks, CRC computation is a resource-intensive process. To address this challenge, we propose performing CRC computation and verification directly in the network path. Specifically, we introduce NetCRC-NR, a 5G New Radio (NR) standard-compliant in-network CRC accelerator. NetCRC-NR implements the 5G NR CRC algorithms specified in 3GPP TS 38.212, including CRC24A, CRC24B, CRC24C, CRC16, CRC11, and CRC6. It leverages programmable switches to perform in-network CRC generation and validation for the Transport Blocks (TBs) and Code Blocks (CBs), aiming at providing high CRC computation throughput and alleviating the computational burden on General-Purpose Processors (GPPs). We design and implement NetCRC-NR on Intel Tofino programmable switch and commodity servers running the Data Plane Development Kit (DPDK). Extensive experiments demonstrate that NetCRC-NR performs CRC generation and verification at the switch line rate of up to 4+Tbps CRC throughput, showcasing its efficiency and potential in accelerating the 5G RAN error detection process.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1418-1430"},"PeriodicalIF":3.6,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
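As a reference point, the CRC24A code that NetCRC-NR offloads is fully determined by its generator polynomial in TS 38.212. A minimal bitwise implementation is sketched below; a programmable switch computes the same remainder with table-driven or parallel logic rather than this loop:

```python
# gCRC24A(D) = D^24 + D^23 + D^18 + D^17 + D^14 + D^11 + D^10 + D^7 + D^6
# + D^5 + D^4 + D^3 + D + 1, per 3GPP TS 38.212.
CRC24A_POLY = 0x1864CFB          # 25-bit value including the D^24 term

def crc24a(data: bytes) -> int:
    """Bitwise (MSB-first) CRC24A with an all-zero initial register and no
    final XOR, following the 3GPP convention."""
    crc = 0
    for byte in data:
        crc ^= byte << 16        # feed the next byte at the register top
        for _ in range(8):
            crc <<= 1
            if crc & (1 << 24):  # the D^24 term popped out: reduce
                crc ^= CRC24A_POLY
    return crc & 0xFFFFFF

# Generation and verification: the CRC of (block || parity) is zero for an
# error-free transport block, which is exactly the check the switch applies.
tb = bytes(range(16))
parity = crc24a(tb).to_bytes(3, "big")
assert crc24a(tb + parity) == 0
```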
High-Precision Error Bit Prediction for 3D QLC NAND Flash Memory: Observations, Analysis, and Modeling
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-06 DOI: 10.1109/TC.2025.3525610
Guangkuo Yang;Meng Zhang;Peng Guo;Xuepeng Zhan;Shaoqi Yang;Xiaohuan Zhao;Xinyi Guo;Pengpeng Sang;Jixuan Wu;Fei Wu;Jiezhi Chen
{"title":"High-Precision Error Bit Prediction for 3D QLC NAND Flash Memory: Observations, Analysis, and Modeling","authors":"Guangkuo Yang;Meng Zhang;Peng Guo;Xuepeng Zhan;Shaoqi Yang;Xiaohuan Zhao;Xinyi Guo;Pengpeng Sang;Jixuan Wu;Fei Wu;Jiezhi Chen","doi":"10.1109/TC.2025.3525610","DOIUrl":"https://doi.org/10.1109/TC.2025.3525610","url":null,"abstract":"In the age of artificial intelligence, large language models (LLM) require rapid development along with massive volumes of training data and parameter storage. Over the past decade, 3D NAND flash memory has emerged as the dominant non-volatile memory technology due to its high bit density and large capacity. However, because of its 3D vertical stacking technique and array designs, 3D NAND flash memory has more complicated data loss mechanisms compared to 2D NAND flash memory. As bit densities rise to Quad-level-cells (QLC), the small read margins will further complicate and make the situation more unpredictable. In this work, we propose an error-bit prediction model in this paper for 3D QLC NAND flash memory with the charge-trap (CT) cell structure based on a thorough analysis of multiple parameters that affect the error-bit distributions, including read disturb (RD) and degradation from program/erase (PE) cycles. Specifically, we develop the whole-block prediction (WBP) and the dynamic-worst-page prediction (DWPM) models. It is shown that the proposed models can be used for high-precision error-bit prediction to guarantee data reliability in commonly used NAND-based storage systems based on the characterization results of raw NAND chips.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1392-1404"},"PeriodicalIF":3.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
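The flavor of such a prediction model can be sketched as a curve fit over characterization data. The numbers and functional form below are invented for illustration and are not the paper's WBP or DWPM models; a power law in PE cycles plus a linear read-disturb term is simply one plausible shape for this kind of error growth:

```python
import numpy as np

# Hypothetical characterization points: mean error bits per page at several
# program/erase (PE) cycle counts (read disturb held fixed).
pe = np.array([100.0, 500.0, 1000.0, 2000.0, 3000.0])
err = np.array([12.0, 35.0, 60.0, 110.0, 170.0])

# Fit err = a * PE^b, which is linear in log-log space.
b, log_a = np.polyfit(np.log(pe), np.log(err), 1)
a = np.exp(log_a)

def predict_error_bits(pe_cycles, rd_reads, k_rd=0.002):
    """Wear term (power law in PE cycles) plus a linear read-disturb term;
    k_rd is an assumed, chip-specific coefficient."""
    return a * pe_cycles ** b + k_rd * rd_reads

print(predict_error_bits(1500, 50_000))   # query an uncharacterized point
```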
Asynchronous Control Based Aggregation Transport Protocol for Distributed Deep Learning
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-06 DOI: 10.1109/TC.2025.3525604
Jin Ye;Yajun Peng;Yijun Li;Zhaoyi Li;Jiawei Huang
{"title":"Asynchronous Control Based Aggregation Transport Protocol for Distributed Deep Learning","authors":"Jin Ye;Yajun Peng;Yijun Li;Zhaoyi Li;Jiawei Huang","doi":"10.1109/TC.2025.3525604","DOIUrl":"https://doi.org/10.1109/TC.2025.3525604","url":null,"abstract":"With the rapid growth scale of dataset and model, the training of deep neural networks (DNN) tends to be deployed in a distributed manner. In the large-scale distributed training, the bottlenecks have gradually moved from computational resources to communication process. Recent researches adopt in-network aggregation (INA) that offloads the gradient aggregation process to programmable switches, thereby reducing network traffic amount and transmission latency. Unfortunately, due to the bandwidth competition in shared training clusters, the straggler will slow down the training efficiency of INA. To address this issue, we propose an Asynchronous Control based Aggregation Transport Protocol (AC-ATP), which makes full use uncongested links to transmit gradients and the switch memory to cache gradients from the fast workers to accelerate the gradient aggregation. Meanwhile, AC-ATP performs congestion control according to the transmission progress of worker and the remaining completion time of the job. The evaluation results of real testbed and large-scale simulations show that AC-ATP reduces the aggregate time by up to 68% and speeds up training in real-world benchmark models.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1362-1376"},"PeriodicalIF":3.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
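The switch-side caching idea, letting fast workers deposit their gradients so the aggregate is released the moment the last worker arrives, can be modeled in a few lines. This is a toy under assumed semantics (per-chunk slots, one contribution per worker) and omits AC-ATP's congestion control and path selection:

```python
from collections import defaultdict

class SwitchAggregator:
    """Toy in-network aggregation with caching: partial sums live in switch
    memory keyed by gradient chunk, so fast workers never wait in line."""

    def __init__(self, num_workers):
        self.num_workers = num_workers
        self.partial = defaultdict(lambda: [0.0, set()])   # sum, seen set

    def on_gradient(self, chunk_id, worker_id, value):
        total, seen = self.partial[chunk_id]
        if worker_id in seen:
            return None                    # duplicate / retransmission
        self.partial[chunk_id][0] = total + value
        seen.add(worker_id)
        if len(seen) == self.num_workers:  # last straggler arrived
            return self.partial.pop(chunk_id)[0]  # multicast to workers
        return None

sw = SwitchAggregator(num_workers=3)
assert sw.on_gradient(7, worker_id=0, value=0.50) is None   # cached
assert sw.on_gradient(7, worker_id=1, value=0.25) is None   # cached
print(sw.on_gradient(7, worker_id=2, value=0.25))           # 1.0
```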
Secure and Efficient Cross-Modal Retrieval Over Encrypted Multimodal Data
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-06 DOI: 10.1109/TC.2025.3525614
Li Yang;Wei Zhang;Yinbin Miao;Yanrong Liang;Xinghua Li;Kim-Kwang Raymond Choo;Robert H. Deng
{"title":"Secure and Efficient Cross-Modal Retrieval Over Encrypted Multimodal Data","authors":"Li Yang;Wei Zhang;Yinbin Miao;Yanrong Liang;Xinghua Li;Kim-Kwang Raymond Choo;Robert H. Deng","doi":"10.1109/TC.2025.3525614","DOIUrl":"https://doi.org/10.1109/TC.2025.3525614","url":null,"abstract":"With the popularity of social media, mobile devices and the Internet, a large amount of multimodal data (e.g, text, image, audio, video, etc.) is increasingly being outsourced to cloud to save local computing and storage costs. To search through encrypted multimodal data in the cloud, privacy-preserving cross-modal retrieval (PPCMR) techniques have attracted extensive attention. However, most of the existing PPCMR schemes lack the ability to resist quantum attacks and have low search efficiency on large-scale datasets. To solve above problems, we first propose a basic PPCMR scheme FECMR using the enhanced Single-key Function-hiding Inner Product Functional Encryption for Binary strings (SFB-IPFE) and cross-modal hashing technology, which achieves the measurement of similarity over encrypted multimodal data while resisting quantum attacks. Then, we design an efficient index KM-tree utilizing the K-modes clustering algorithm. On this basis, we propose an improved scheme FECMR+, which achieves sub-linear search complexity. Finally, formal security analysis proves that our schemes are secure against quantum attacks, and extensive experiments prove that our schemes are efficient and feasible for practical application.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1405-1417"},"PeriodicalIF":3.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
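Underneath such schemes, cross-modal similarity reduces to an inner product over hash codes, which is what an inner-product functional encryption can evaluate without revealing the codes. The plaintext sketch below shows the equivalence between this inner product and Hamming distance for {-1,+1} codes; the SFB-IPFE encryption layer is omitted and all shapes are illustrative:

```python
import numpy as np

# For codes in {-1,+1}^n: <a, b> = n - 2 * hamming(a, b), so ranking by the
# inner product equals ranking by (negated) Hamming distance.
rng = np.random.default_rng(1)
image_code = rng.choice([-1, 1], size=64)        # hash of a query image
text_codes = rng.choice([-1, 1], size=(5, 64))   # hashes of candidate texts

scores = text_codes @ image_code                 # one inner product per doc
hamming = (64 - scores) // 2                     # recover Hamming distances
assert np.array_equal(np.argsort(-scores, kind="stable"),
                      np.argsort(hamming, kind="stable"))
print(np.argsort(-scores))                       # candidates, best first
```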
LUNA-CiM: A Programmable Compute-in-Memory Fabric for Neural Network Acceleration
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-06 DOI: 10.1109/TC.2025.3525601
Peyman Dehghanzadeh;Ovishake Sen;Baibhab Chatterjee;Swarup Bhunia
{"title":"LUNA-CiM: A Programmable Compute-in-Memory Fabric for Neural Network Acceleration","authors":"Peyman Dehghanzadeh;Ovishake Sen;Baibhab Chatterjee;Swarup Bhunia","doi":"10.1109/TC.2025.3525601","DOIUrl":"https://doi.org/10.1109/TC.2025.3525601","url":null,"abstract":"Compute-in-memory (CiM) has emerged as a promising approach for improving energy efficiency for diverse data-intensive applications. In this paper, we present LUNA-CiM, a lookup table (LUT)-based programmable fabric for flexible and efficient mapping of artificial neural network (ANN) in memory. Its objective is to tackle scalability challenges in LUT-based computation by minimizing hardware, storage elements, and energy consumption. The proposed method utilizes the divide and conquer (D&C) strategy to enhance the scalability of LUT-based computation. For example, in a 4b <inline-formula><tex-math>$boldsymbol{times}$</tex-math></inline-formula> 4b lookup table-based multiplier, as one of the main components in ANN, decomposing high-precision operations into lower-precision counterparts leads to a substantial reduction in area overheads, approximately 73% less compared to conventional LUT-based approaches. Importantly, this efficiency gain is achieved without compromising accuracy. Extensive simulations were conducted to validate the performance of the proposed method. The analysis presented in this paper reveals a noteworthy advancement in energy efficiency, indicating a 58% reduction in energy consumption per computation compared to the conventional lookup table approach. Additionally, the introduced approach demonstrates a 36% improvement in speed over the traditional lookup table approach. These findings highlight notable advancements in performance, showcasing the potential of this inventive method to achieve low power, low-area overhead, and fast computations through the utilization of LUTs within an SRAM array.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1348-1361"},"PeriodicalIF":3.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
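The divide-and-conquer decomposition behind the reported area saving is easy to verify in software: a 4b × 4b product is recomposed from four reads of a 16-entry 2b × 2b table, instead of one read of a 256-entry table, at the price of some shifts and adds. A self-checking sketch (the actual mapping onto LUNA-CiM's SRAM fabric is not modeled here):

```python
# One 16-entry table replaces a 256-entry 4b x 4b table; the recomposition
# below costs only shifts and additions.
LUT_2B = {(x, y): x * y for x in range(4) for y in range(4)}

def mul4b_via_2b_luts(a: int, b: int) -> int:
    """4b x 4b multiply from four 2b x 2b lookups:
    (4*ah + al) * (4*bh + bl) = 16*ah*bh + 4*(ah*bl + al*bh) + al*bl."""
    ah, al = a >> 2, a & 0b11
    bh, bl = b >> 2, b & 0b11
    return ((LUT_2B[ah, bh] << 4)
            + ((LUT_2B[ah, bl] + LUT_2B[al, bh]) << 2)
            + LUT_2B[al, bl])

# Exhaustive check over all 4-bit operand pairs.
assert all(mul4b_via_2b_luts(a, b) == a * b
           for a in range(16) for b in range(16))
```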
Karatsuba Matrix Multiplication and Its Efficient Custom Hardware Implementations
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2025-01-06 DOI: 10.1109/TC.2025.3525606
Trevor E. Pogue;Nicola Nicolici
{"title":"Karatsuba Matrix Multiplication and Its Efficient Custom Hardware Implementations","authors":"Trevor E. Pogue;Nicola Nicolici","doi":"10.1109/TC.2025.3525606","DOIUrl":"https://doi.org/10.1109/TC.2025.3525606","url":null,"abstract":"While the Karatsuba algorithm reduces the complexity of large integer multiplication, the extra additions required minimize its benefits for smaller integers of more commonly-used bitwidths. In this work, we propose the extension of the scalar Karatsuba multiplication algorithm to matrix multiplication, showing how this maintains the reduction in multiplication complexity of the original Karatsuba algorithm while reducing the complexity of the extra additions. Furthermore, we propose new matrix multiplication hardware architectures for efficiently exploiting this extension of the Karatsuba algorithm in custom hardware. We show that the proposed algorithm and hardware architectures can provide real area or execution time improvements for integer matrix multiplication compared to scalar Karatsuba or conventional matrix multiplication algorithms, while also supporting implementation through proven systolic array and conventional multiplier architectures at the core. We provide a complexity analysis of the algorithm and architectures and evaluate the proposed designs both in isolation and in an end-to-end deep learning accelerator system compared to baseline designs and prior state-of-the-art works implemented on the same type of compute platform, demonstrating their ability to increase the performance-per-area of matrix multiplication hardware.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1377-1391"},"PeriodicalIF":3.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143611756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
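The algebraic core, Karatsuba lifted from scalars to matrices, can be checked in a few lines of NumPy: split each unsigned 2k-bit element into k-bit halves and build the product from three half-width matrix multiplications instead of four. Only the arithmetic is validated here; the paper's systolic-array hardware around it is not modeled, and signed splitting and deeper recursion are left out:

```python
import numpy as np

def karatsuba_matmul(a, b, k=8):
    """C = A @ B for unsigned 2k-bit integer matrices via three half-width
    matmuls: with A = A1*2^k + A0 and B = B1*2^k + B0 (elementwise),
    A@B = (A1@B1)*2^(2k) + ((A1+A0)@(B1+B0) - A1@B1 - A0@B0)*2^k + A0@B0."""
    mask = (1 << k) - 1
    a1, a0 = a >> k, a & mask
    b1, b0 = b >> k, b & mask
    hi = a1 @ b1                              # half-width matmul 1
    lo = a0 @ b0                              # half-width matmul 2
    mid = (a1 + a0) @ (b1 + b0) - hi - lo     # matmul 3, Karatsuba's trick
    return (hi << (2 * k)) + (mid << k) + lo

rng = np.random.default_rng(2)
a = rng.integers(0, 1 << 16, (8, 8), dtype=np.int64)
b = rng.integers(0, 1 << 16, (8, 8), dtype=np.int64)
assert np.array_equal(karatsuba_matmul(a, b), a @ b)
```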
Enabling Printed Multilayer Perceptrons Realization via Area-Aware Neural Minimization
IF 3.6 · CAS Q2 · Computer Science
IEEE Transactions on Computers Pub Date: 2024-12-31 DOI: 10.1109/TC.2024.3524076
Argyris Kokkinis;Georgios Zervakis;Kostas Siozios;Mehdi Baradaran Tahoori;Jörg Henkel
{"title":"Enabling Printed Multilayer Perceptrons Realization via Area-Aware Neural Minimization","authors":"Argyris Kokkinis;Georgios Zervakis;Kostas Siozios;Mehdi Baradaran Tahoori;Jörg Henkel","doi":"10.1109/TC.2024.3524076","DOIUrl":"https://doi.org/10.1109/TC.2024.3524076","url":null,"abstract":"Printed Electronics (PE) set up a new path for the realization of ultra low-cost circuits that can be deployed in every-day consumer goods and disposables. In addition, PE satisfy requirements such as porosity, flexibility, and conformity. However, the large feature sizes in PE and limited device counts incur high restrictions and increased area and power overheads, prohibiting the realization of complex circuits. As a result, although printed Machine Learning (ML) circuits could open new horizons and bring “intelligence” in such domains, the implementation of complex classifiers, as required in target applications, is hardly feasible. In this paper, we aim to address this and focus on the design of battery-powered printed Multilayer Perceptrons (MLPs). To that end, we exploit fully-customized circuit (bespoke) implementations, enabled in PE, and propose a hardware-aware neural minimization framework dedicated for such customized MLP circuits. Our evaluation demonstrates that, for up to 3% accuracy loss, our co-design methodology enables, for the first time, battery-powered operation of complex printed MLPs.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 4","pages":"1461-1469"},"PeriodicalIF":3.6,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
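One way to picture area-aware neural minimization for bespoke circuits, where every nonzero weight is physically hardwired and wider weights cost more printed area, is a greedy prune under an area budget. The cost model and policy below are invented stand-ins for illustration, not the paper's framework:

```python
import numpy as np

def bit_cost(w):
    """Crude per-weight area proxy: bits needed for the magnitude plus a
    sign bit; zero weights are simply absent from a bespoke circuit."""
    return np.where(w == 0, 0, np.ceil(np.log2(np.abs(w) + 1)) + 1)

def area_aware_prune(weights, area_budget):
    """Greedily zero the weights with the worst saliency-per-area ratio
    until the estimated bespoke area fits the budget."""
    w = weights.astype(np.int64)
    while bit_cost(w).sum() > area_budget and np.any(w != 0):
        nz = np.flatnonzero(w)
        scores = np.abs(w.flat[nz]) / bit_cost(w.flat[nz])
        w.flat[nz[np.argmin(scores)]] = 0     # cheapest accuracy loss first
    return w

rng = np.random.default_rng(3)
w = rng.integers(-31, 32, size=(4, 8))
pruned = area_aware_prune(w, area_budget=0.6 * bit_cost(w).sum())
print((pruned != 0).mean())   # surviving weight fraction under the budget
```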