IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems: Latest Articles

Dual Multimodal Fusions With Convolution and Transformer Layers for VLSI Congestion Prediction
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-23 DOI: 10.1109/TCAD.2024.3522199
Hao Gu;Youwen Wang;Xinglin Zheng;Keyu Peng;Ziran Zhu;Jianli Chen;Jun Yang
Abstract: In very large scale integration (VLSI) circuit physical design, precise congestion prediction during placement is crucial for enhancing routability and accelerating design processes. Existing congestion prediction models often encounter challenges in handling multimodal information and lack effective fusion of placement and netlist features, limiting their prediction accuracy. In this article, we present a novel congestion prediction model that leverages dual multimodal fusions with convolution and transformer layers to effectively capture multiscale placement information and enhance congestion prediction accuracy. We first adopt convolutional neural networks (CNNs) to extract grid-based placement features and heterogeneous graph convolutional networks (HGCNs) to extract netlist information. To help the model understand the correlation between different modalities, we then propose an early feature fusion (EFF) method to integrate netlist knowledge into multiscale placement features in a multimodal interaction subspace. In addition, a deep feature fusion (DFF) method is proposed to further fuse multimodal features using multiple vision transformer layers based on adaptive attention enhancement: self-attention (SA) boosts intramodal features, and cross-attention (CA) fuses netlist and grid-based placement features across modalities. Finally, the output features of DFF are sent to a cascaded decoder that recovers the congestion map through several upsampling layers and merging with EFF features. Compared with existing state-of-the-art congestion prediction models, experimental results demonstrate that our model not only outperforms them in prediction accuracy but also excels at reducing routing congestion when integrated into the placer DREAMPlace.
Vol. 44, No. 6, pp. 2378-2391
Citations: 0
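The cross-attention (CA) fusion the abstract describes can be illustrated with a minimal single-head sketch: queries come from one modality, keys and values from the other. The token counts, feature dimension, and function names below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention: queries from one modality
    (e.g. grid-based placement tokens), keys/values from the other
    (e.g. netlist embeddings)."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)   # (Nq, Nk) affinities
    weights = softmax(scores, axis=-1)           # each query row sums to 1
    return weights @ kv_feats, weights           # fused features, attention map

rng = np.random.default_rng(0)
placement = rng.normal(size=(6, 8))   # 6 placement-grid tokens, 8-dim
netlist   = rng.normal(size=(4, 8))   # 4 netlist tokens, 8-dim
fused, attn = cross_attention(placement, netlist)
```

Each placement token ends up as a convex combination of netlist features, which is the sense in which CA injects cross-modal information.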
Improving DNN Accuracy on MLC PIM via Non-Ideal PIM Device Fine-Tuning
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-23 DOI: 10.1109/TCAD.2024.3521195
Hao Lv;Lei Zhang;Ying Wang
Abstract: Resistive random access memory (RRAM) emerges as a promising technology for energy-efficient deep neural network (DNN) accelerators, owing to its analog computing paradigm for matrix-vector multiplication. However, the inherent nonideal device features of RRAM cells, such as device variation, read disturbances, and limited on/off ratio, present challenges for model deployment. To ensure accurate storage and computing precision in RRAM-based accelerators, a widely used practice is encoding each DNN weight with multiple cells, which incurs significant memory overhead and underutilization; the rapid growth in model size in recent years exacerbates this further. While a one-to-one weight-cell mapping strategy improves memory utilization, RRAM variation inevitably makes the mapped weights deviate from their desired values, degrading model accuracy. In response, we abstract model optimization on RRAM chips as a non-ideal PIM device optimization problem, aimed at optimizing model accuracy without requiring precise weight programming. We systematically analyze model optimization behavior on multilevel RRAM devices by investigating how various fine-tuning strategies recover model performance under the non-ideal PIM device setting. Based on this analysis, we propose a non-ideal PIM device fine-tuning scheme that recovers model performance for multilevel RRAM. Our scheme leverages knowledge distillation and exploits the input/output information of the model on RRAM to guide the fine-tuning process, finally restoring its accuracy. Experimental results demonstrate the efficacy of the scheme, achieving nearly complete recovery of model performance and over a 3% accuracy improvement compared to variation-aware training approaches.
Vol. 44, No. 6, pp. 2277-2286
Citations: 0
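The distillation idea is that the ideal software model acts as teacher for the perturbed on-chip model. A minimal sketch of the loss, with an illustrative multiplicative-noise stand-in for device variation (the paper's actual RRAM noise model, network, and temperature are not specified here and are assumptions):

```python
import numpy as np

def softmax(z, t=1.0):
    e = np.exp((z - z.max()) / t)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, t=2.0):
    """KL(teacher || student) with temperature t: the distillation signal
    that guides fine-tuning of the noisy on-chip model."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
w_ideal = rng.normal(size=(4, 3))                        # ideal trained weights
w_chip = w_ideal * rng.normal(1.0, 0.1, w_ideal.shape)   # toy device variation
x = rng.normal(size=4)
loss = kd_loss(w_ideal.T @ x, w_chip.T @ x)              # teacher vs. on-chip logits
```

The loss is zero only when the on-chip outputs match the ideal ones, so minimizing it pulls the perturbed model back toward the teacher's behavior without programming exact weights.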
HTs-GCN: Identifying Hardware Trojan Nodes in Integrated Circuits Using a Graph Convolutional Network
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-19 DOI: 10.1109/TCAD.2024.3520522
Jie Xiao;Shuiliang Chai;Yanjiao Gao;Yuhao Huang;Fan Zhang;Tieming Chen
Abstract: Hardware Trojans (HTs) present significant security threats to integrated circuits, and detecting and locating them is crucial for mitigating those threats. This article proposes HTs-GCN, a method that uses a graph convolutional network (GCN) to identify HT nodes. First, it extracts two novel features of gate nodes using a depth-first search strategy and topological logic analysis to enrich the feature information of circuit nodes. Second, through a message-passing mechanism, it designs a local feature aggregation method based on the GCN and a global feature fusion method based on an attention mechanism to improve the representation capability of circuit node features. Then, leveraging stochastic gradient descent together with mini-batch oversampling and undersampling, it employs a dataset-imbalance handling method to address the scarcity of HT nodes in circuits. These approaches significantly enhance the distinguishability between HT gate nodes and other gate nodes while reducing computational complexity. Experimental results indicate that HTs-GCN outperforms the recently proposed NHTD-GL method in recall, achieving approximately 7.8 percentage points higher recall at similar accuracy. HTs-GCN also generalizes well, with an average recall of 93.0% and accuracy of 100% on infrequently used circuits in the Trust-Hub benchmark. On the TRIT-TC benchmark, it achieves excellent average true positive and true negative rates (TPR/TNR) of 95.1% and 94.4%, respectively, and it remains robust under gate modification attacks, with average TPR and TNR of 82.1% and 92.5%.
Vol. 44, No. 6, pp. 2353-2366
Citations: 0
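The local aggregation step of a GCN can be sketched in a few lines: neighbor features are averaged through a symmetrically normalized adjacency, then linearly transformed. The toy 4-gate netlist, random features, and dimensions below are illustrative; the paper's heterogeneous GCN and hand-crafted node features are richer.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN message-passing step: aggregate neighbour features with the
    symmetrically normalized adjacency, then transform and apply ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ feats @ weight, 0)

# toy netlist graph: 4 gate nodes in a chain, 2 features each, 3 output channels
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(2)
h = gcn_layer(adj, rng.normal(size=(4, 2)), rng.normal(size=(2, 3)))
```

Stacking such layers lets a suspected Trojan gate's representation absorb information from its logical neighborhood, which is what makes the nodes separable for classification.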
Multiobjective Optimization for Common-Centroid Placement of Analog Transistors
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-19 DOI: 10.1109/TCAD.2024.3520521
Supriyo Maji;Hyungjoo Park;Gi-Moon Hong;Souradip Poddar;David Z. Pan
Abstract: In analog circuits, process variation can cause unpredictability in circuit performance. Common-centroid (CC) layouts have been shown to mitigate process-induced variations and are widely used to match circuit elements. Nevertheless, selecting the most suitable CC topology necessitates careful consideration of important layout constraints, and handling these constraints manually becomes challenging, especially for large problems. State-of-the-art CC placement methods lack an optimization framework that handles the important layout constraints collectively; they also require manual effort, so their solutions can be suboptimal. To address this, we propose a unified framework based on multiobjective optimization for CC placement of analog transistors. Our method handles various constraints, including degree of dispersion, routing complexity, diffusion sharing, and layout-dependent effects. Multiobjective optimization handles these objectives better than single-objective optimization, and compared to existing methods, our method explores more CC topologies. Post-layout simulation results show better performance than state-of-the-art techniques for generating CC layouts.
Vol. 44, No. 6, pp. 2029-2039
Citations: 0
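The advantage of multiobjective optimization over a single weighted score is that it keeps all non-dominated trade-offs. A minimal Pareto-front filter, with hypothetical two-objective (dispersion, routing-complexity) scores for four candidate CC topologies; both objectives are minimized and the numbers are invented for illustration:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated candidate placements."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# hypothetical (dispersion, routing_complexity) scores for four CC topologies
cands = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
front = pareto_front(cands)
```

Here (3.0, 3.0) is dropped because (2.0, 2.0) beats it on both objectives, while the other three survive as genuinely different trade-offs a designer can choose among.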
SubMap: A Partial Mapping Strategy for CGRA Based on sub-CGRA Exploration
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-18 DOI: 10.1109/TCAD.2024.3520024
Ning Li;Dejian Li;Zhipeng Wu;Peiguang Jing;Sio Hang Pun;Yu Liu
Abstract: Coarse-grained reconfigurable arrays (CGRAs) are quality hardware for compute-intensive loop kernels, offering an excellent balance of performance, energy efficiency, and reconfigurability. However, the efficiency of a CGRA depends heavily on how the compiler maps the data flow graph (DFG) extracted from application kernels onto the target architecture, and most existing CGRA compilers suffer long compilation times due to excessive exploration space. To reduce the exploration space and compilation time, we propose SubMap, which adaptively explores a suitable sub-CGRA for each DFG within the target CGRA and efficiently performs the mapping. Experimental results show that SubMap greatly reduces compilation time compared to the latest methods while maintaining mapping quality. On HyCube 4×4, SubMap achieves average performance improvements of 9.47× and 11.67× over Morpher (Pathfinder) and Morpher (SA), respectively, and the improvement becomes more pronounced as the target CGRA scales up.
Vol. 44, No. 7, pp. 2827-2831
Citations: 0
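The core idea, restricting the mapper to a sub-array just large enough for the DFG, can be caricatured as a size search. The real SubMap explores topology, routing, and schedule, not just PE count; everything below (function name, 4×4 bound, selection rule) is an illustrative assumption.

```python
def smallest_sub_cgra(num_dfg_nodes, max_rows=4, max_cols=4):
    """Pick the smallest r x c sub-array whose PE count can host the DFG
    nodes: a crude stand-in for sub-CGRA exploration that shows why the
    search space shrinks."""
    best = None
    for r in range(1, max_rows + 1):
        for c in range(1, max_cols + 1):
            if r * c >= num_dfg_nodes and (best is None or r * c < best[0] * best[1]):
                best = (r, c)
    return best
```

Mapping a 5-node DFG onto a 2×3 sub-array instead of the full 4×4 array cuts the placement candidates per node from 16 to 6, which is the flavor of savings SubMap exploits.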
2024 Index IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Vol. 43
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-17 DOI: 10.1109/TCAD.2024.3518672
Vol. 43, No. 12, pp. 4865-4939. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10804686
Citations: 0
Optimized Modular Adder Architecture for Cryptographic Applications on FPGAs
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-16 DOI: 10.1109/TCAD.2024.3518412
Madani Bachir;Azzaz Mohamed Salah;Sadoudi Said;Kaibou Redouane;Bruno da Silva
Abstract: Modular addition is a fundamental operation in public-key cryptographic algorithms operating over finite fields, such as elliptic curve cryptography (ECC), Chebyshev polynomials, and post-quantum cryptography (PQC). The performance of these algorithms is limited by the conventional modular adder, which cascades two adders in series; this doubles the critical-path delay and lowers the operating frequency even when a high-performance adder is used. This work presents a high-performance, low-area modular adder architecture based on a novel approach. Designed for the various prime fields recommended in public-key cryptography, the architecture optimally utilizes the carry chain and exploits the structural advantages of 7-series (and later) field-programmable gate arrays. Implementation results demonstrate superior performance, achieving operating frequencies of 290.0 MHz for 192 bits and 205.5 MHz for 1024 bits. Notably, the proposed design performs modular addition in a single clock cycle, yielding an approximately 57% frequency improvement over the conventional approach and making the architecture an optimal solution for systems demanding high-speed operation.
Vol. 44, No. 6, pp. 2168-2180
Citations: 0
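The single-cycle trick is to compute both candidate sums in parallel and select one, rather than chaining a second subtractor after the first adder. A behavioral Python sketch of that selection (the paper's contribution, the FPGA carry-chain mapping, is not modeled here); it assumes both inputs are already reduced mod p:

```python
def mod_add(a, b, p):
    """Modular addition as selection between two speculative sums:
    a+b and a+b-p are formed 'in parallel' (in hardware), and the
    sign (borrow) of the second picks the result."""
    s = a + b          # speculative sum
    t = a + b - p      # speculative reduced sum, parallel in hardware
    return t if t >= 0 else s   # mux on the borrow of t

# NIST P-192 prime, one of the recommended public-key fields
p192 = 2**192 - 2**64 - 1
```

Since a, b < p implies a + b < 2p, exactly one of the two candidates lies in [0, p), so the mux always returns the correct residue without a sequential second addition.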
Less Traces Are All It Takes: Efficient Side-Channel Analysis on AES
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-16 DOI: 10.1109/TCAD.2024.3518414
Zhiyuan Xiao;Chen Wang;Jian Shen;Q. M. Jonathan Wu;Debiao He
Abstract: In cryptography, side-channel analysis (SCA) recovers cryptographic keys by examining the physical leakage that occurs during the operation of cryptographic devices. Recent advances in deep learning (DL) have greatly enhanced the extraction of key information from intricate leakage patterns. A considerable amount of research targets the SubByte (SB) operations of the advanced encryption standard (AES), because the SB process generates numerous 0/1 transitions during encryption and therefore leaks significant energy. However, traditional analysis models focus primarily on the first round of SB operations, which is less effective on mobile terminals where it is difficult to collect enough signals; they often neglect additional operations and subsequent rounds, extracting limited insight from small datasets. This directly impacts the accuracy and efficiency of key recovery. Our ρ-test analysis shows that significant leakage occurs not only during the S-box operation but also during the AddRoundKey (AR) phase of AES. To address these challenges, we propose a new SCA method optimized for small sample sizes. It includes a comprehensive round-trace labeling algorithm that simultaneously analyzes the SB and AR stages of each AES round, and a peak precise localization algorithm that accurately identifies the energy-leakage points in each encryption round. Our experiments, conducted with power and electromagnetic (EM) datasets from the STM32F303 microcontroller, demonstrate that our method can reliably recover keys with as few as 20 traces, highlighting its capability on small sample datasets in cryptographic analysis.
Vol. 44, No. 6, pp. 2080-2092
Citations: 0
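The principle behind trace-efficient key recovery can be seen in a toy correlation attack: predict the Hamming-weight leakage under each key-byte guess and keep the guess that best matches the measurements. This sketch uses a simplified XOR leakage model and simulated noisy traces; a real attack would target the S-box output and use measured power/EM data, so all names and parameters here are illustrative.

```python
import numpy as np

HW = [bin(v).count("1") for v in range(256)]   # Hamming-weight lookup table

def recover_key_byte(plaintexts, traces):
    """Return the key-byte guess whose predicted Hamming-weight leakage
    has the highest Pearson correlation with the traces."""
    corrs = [np.corrcoef([HW[p ^ g] for p in plaintexts], traces)[0, 1]
             for g in range(256)]
    return int(np.argmax(corrs))

rng = np.random.default_rng(3)
key = 0x2B
pts = [int(p) for p in rng.integers(0, 256, size=200)]
leak = np.array([HW[p ^ key] for p in pts], dtype=float)
traces = leak + rng.normal(0.0, 0.3, leak.size)   # simulated noisy 'power' samples
found = recover_key_byte(pts, traces)
```

With only a couple hundred simulated traces the correct byte already stands out; the paper's contribution is squeezing a comparable decision out of far fewer real traces by labeling and aligning leakage across rounds.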
Realization of Early-Exit Dynamic Neural Networks on Reconfigurable Hardware
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-16 DOI: 10.1109/TCAD.2024.3519055
Anastasios Dimitriou;Lei Xun;Jonathon Hare;Geoff V. Merrett
Abstract: Early exiting is a strategy that is becoming popular in deep neural networks (DNNs), as it can lead to faster execution and reduced computational intensity of inference. Intermediate classifiers abstract information from the input samples to strategically stop forward propagation and generate an output at an earlier stage, with confidence criteria identifying easier-to-recognize samples that need no further filtering. However, such dynamic DNNs have so far been realized only on conventional computing systems (CPU+GPU) using libraries designed for static networks. In this article, we explore the feasibility and benefits of realizing early-exit dynamic DNNs on field-programmable gate arrays (FPGAs), a platform already proven highly effective for neural network applications. We consider two approaches for implementing and executing the intermediate classifiers: 1) pipeline, which uses existing hardware, and 2) parallel, which uses additional dedicated modules. We model their energy needs and execution time, and evaluate them using the BranchyNet early-exit approach on LeNet-5, AlexNet, VGG19, and ResNet32 on a Xilinx ZCU106 Evaluation Board. We found the dynamic approaches to be at least 24% faster than a static network executed on an FPGA while consuming at least 1.32× less energy. We further observe that FPGAs can enhance the performance of early-exit dynamic DNNs by executing the decision intermediate classifiers in parallel, minimizing the complexity they introduce. Finally, we compare the two approaches and identify which is best for different network types and confidence levels.
Vol. 44, No. 6, pp. 2195-2203
Citations: 0
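The exit rule itself is simple: after each intermediate classifier, compare the prediction confidence to a threshold and stop if it clears. A minimal sketch with stub stages; the stage functions, logits, and threshold are illustrative, and note that BranchyNet's actual criterion thresholds the softmax entropy rather than the top-class probability used here for brevity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_infer(x, stages, threshold=0.9):
    """Run the backbone stage by stage; after each intermediate classifier,
    exit as soon as the top-class confidence clears the threshold."""
    for depth, (backbone, classifier) in enumerate(stages, start=1):
        x = backbone(x)
        probs = softmax(classifier(x))
        if probs.max() >= threshold:
            return int(probs.argmax()), depth   # confident: exit early
    return int(probs.argmax()), depth           # fell through to final exit

# stub stages: identity backbones with hand-picked classifier logits
stages = [
    (lambda x: x, lambda x: np.array([0.1, 0.2])),   # low confidence -> continue
    (lambda x: x, lambda x: np.array([8.0, 0.0])),   # confident -> exit here
    (lambda x: x, lambda x: np.array([0.0, 8.0])),   # never reached for this input
]
label, exits_at = early_exit_infer(np.zeros(2), stages, threshold=0.9)
```

The hardware question the paper studies is where this per-stage decision runs: interleaved in the existing pipeline, or on dedicated parallel classifier modules.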
Enabling Efficient Sparse Multiplications on GPUs With Heuristic Adaptability
IF 2.7, CAS Tier 3, Computer Science
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Pub Date : 2024-12-16 DOI: 10.1109/TCAD.2024.3518413
Jiaming Xu;Shan Huang;Jinhao Li;Guyue Huang;Yuan Xie;Yu Wang;Guohao Dai
Abstract: Sparse matrix-vector/matrix multiplication (SpMMul) has become a fundamental operation during model inference in various domains, and previous studies have explored numerous optimizations to accelerate it. However, efficient end-to-end inference still faces three unsolved challenges: 1) incomplete design space and time-consuming preprocessing: previous methods optimize SpMMul within limited loops and neglect potential space exploration, wasting >30% of computing power; moreover, the preprocessing overhead in SparseTIR and DTC-SpMM is 1000× larger than the sparse computation itself; 2) incompatibility between static dataflow and dynamic input: a static dataflow cannot be efficient for all inputs, leading to >80% performance loss; and 3) simplistic algorithm performance analysis: previous studies analyze performance primarily from algorithmic advantages, without considering aspects such as hardware and data features. To tackle these challenges, we present DA-SpMMul, a data-aware heuristic GPU implementation of SpMMul for multiple platforms. DA-SpMMul contributes: 1) a complete design space based on theoretical computations, with nontrivial implementations on standard formats that require no preprocessing, built on three orthogonal design principles; 2) a feature-enabled adaptive algorithm selection mechanism, realized as a heuristic model that selects an algorithm according to various input features; and 3) a comprehensive algorithm performance analysis that extracts features from multiple perspectives and covers all algorithms. DA-SpMMul supports PyTorch on both NVIDIA and AMD GPUs and achieves average speedups of 3.33× and 3.02× over NVIDIA cuSPARSE and 12.05× and 8.32× over AMD rocSPARSE for sparse matrix-vector and sparse matrix-matrix multiplication, respectively, and up to 1.48× over the state-of-the-art open-source algorithm. Integrated with the graph neural network framework PyG, DA-SpMMul achieves up to 1.22× speedup on GCN inference.
Vol. 44, No. 6, pp. 2226-2239
Citations: 0
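The baseline computation all of these kernels optimize is easy to state: a sparse matrix-vector product over the standard CSR format. A pure-Python reference (the simplest GPU parallelization assigns one thread or warp to each iteration of the outer row loop below); the toy matrix is illustrative:

```python
def csr_spmv(indptr, indices, data, x):
    """y = A @ x with A in CSR: row i's nonzero values sit in
    data[indptr[i]:indptr[i+1]], with column indices in the matching
    slice of indices."""
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):                      # one GPU thread/warp per row
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
indptr, indices, data = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
y = csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0])
```

The design-space questions in the paper (loop order, row vs. nonzero parallelism, load balancing across irregular row lengths) are all choices about how to schedule these two loops on hardware.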