ACM Transactions on Design Automation of Electronic Systems: Latest Publications

Applying reinforcement learning to learn best net to rip and re-route in global routing
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-05-16 · DOI: 10.1145/3664286
Upma Gandhi, Erfan Aghaeekiasaraee, Sahir, Payam Mousavi, Ismail S. K. Bustany, Mathew E. Taylor, Laleh Behjat
{"title":"Applying reinforcement learning to learn best net to rip and re-route in global routing","authors":"Upma Gandhi, Erfan Aghaeekiasaraee, Sahir, Payam Mousavi, Ismail S. K. Bustany, Mathew E. Taylor, Laleh Behjat","doi":"10.1145/3664286","DOIUrl":"https://doi.org/10.1145/3664286","url":null,"abstract":"Physical designers typically employ heuristics to solve challenging problems in global routing. However, these heuristic solutions are not adaptable to the ever-changing fabrication demands, and the experience and creativity of designers can limit their effectiveness. Reinforcement learning (RL) is an effective method to tackle sequential optimization problems due to its ability to adapt and learn through trial and error. Hence, RL can create policies that can handle complex tasks. This work presents an RL framework for global routing that incorporates a self-learning model called RL-Ripper. The primary function of RL-Ripper is to identify the best nets that need to be ripped and rerouted in order to decrease the number of total short violations. In this work, we show that the proposed RL-Ripper framework’s approach can reduce the number of short violations for ISPD 2018 Benchmarks when compared to the state-of-the-art global router CUGR. Moreover, RL-Ripper reduced the total number of short violations after the first iteration of detailed routing over the baseline while being on par with the wirelength, VIA, and runtime. The proposed framework’s major impact is providing a novel learning-based approach to global routing that can be replicated for newer technologies.","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140970146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
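
To make the rip-and-reroute selection loop concrete, below is a minimal sketch of the general idea rather than the paper's RL-Ripper implementation: an agent repeatedly picks one net to rip up and reroute and is rewarded by the resulting drop in short violations. The ToyRouter class, its count_shorts and rip_and_reroute methods, and the coarse feature bucketing are hypothetical stand-ins for a real global router such as CUGR.

    # Minimal sketch of learning which net to rip up and reroute (not RL-Ripper itself).
    import random
    from collections import defaultdict

    class ToyRouter:
        """Hypothetical stand-in for a global router with rip-up-and-reroute support."""
        def __init__(self, num_nets=20, seed=0):
            self.rng = random.Random(seed)
            self.shorts = {n: self.rng.randint(0, 5) for n in range(num_nets)}

        def count_shorts(self):
            return sum(self.shorts.values())

        def rip_and_reroute(self, net):
            # Re-routing a net may reduce (or occasionally increase) its shorts.
            self.shorts[net] = max(0, self.shorts[net] + self.rng.choice([-3, -2, -1, 0, 1]))

    def features(router, net):
        # Coarse state abstraction: which net, and how congested it currently is.
        return (net, min(router.shorts[net], 3))

    q_table = defaultdict(float)
    alpha, epsilon, episodes, steps = 0.2, 0.1, 50, 30

    for _ in range(episodes):
        router = ToyRouter()
        for _ in range(steps):
            nets = list(router.shorts)
            if random.random() < epsilon:
                net = random.choice(nets)                                        # explore
            else:
                net = max(nets, key=lambda n: q_table[features(router, n)])      # exploit
            before = router.count_shorts()
            state = features(router, net)
            router.rip_and_reroute(net)
            reward = before - router.count_shorts()          # fewer shorts => positive reward
            q_table[state] += alpha * (reward - q_table[state])   # bandit-style update toward reward

    print("learned preferences for the most congested bucket:",
          sorted(((s, round(v, 2)) for s, v in q_table.items() if s[1] == 3),
                 key=lambda x: -x[1])[:5])

A production agent would use richer state (congestion maps, per-net features) and a learned policy rather than this tabular, bandit-style update, but rewarding the reduction in short violations is the essential ingredient.
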
An Open-Source ML-Based Full-Stack Optimization Framework for Machine Learning Accelerators
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-05-11 · DOI: 10.1145/3664652
Hadi Esmaeilzadeh, Soroush Ghodrati, Andrew Kahng, Joon Kyung Kim, Sean Kinzer, Sayak Kundu, Rohan Mahapatra, Susmita Dey Manasi, Sachin Sapatnekar, Zhiang Wang, Ziqing Zeng
{"title":"An Open-Source ML-Based Full-Stack Optimization Framework for Machine Learning Accelerators","authors":"Hadi Esmaeilzadeh, Soroush Ghodrati, Andrew Kahng, Joon Kyung Kim, Sean Kinzer, Sayak Kundu, Rohan Mahapatra, Susmita Dey Manasi, Sachin Sapatnekar, Zhiang Wang, Ziqing Zeng","doi":"10.1145/3664652","DOIUrl":"https://doi.org/10.1145/3664652","url":null,"abstract":"<p>Parameterizable machine learning (ML) accelerators are the product of recent breakthroughs in ML. To fully enable their design space exploration (DSE), we propose a physical-design-driven, learning-based prediction framework for hardware-accelerated deep neural network (DNN) and non-DNN ML algorithms. It adopts a unified approach that combines power, performance, and area (PPA) analysis with frontend performance simulation, thereby achieving a realistic estimation of both backend PPA and system metrics such as runtime and energy. In addition, our framework includes a fully automated DSE technique, which optimizes backend and system metrics through an automated search of architectural and backend parameters. Experimental studies show that our approach consistently predicts backend PPA and system metrics with an average 7% or less prediction error for the ASIC implementation of two deep learning accelerator platforms, VTA and VeriGOOD-ML, in both a commercial 12 nm process and a research-oriented 45 nm process.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140929019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
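
As an illustration of the two pieces the abstract describes, a learned predictor from design parameters to a backend PPA metric and an automated search on top of it, here is a self-contained sketch on synthetic data. The parameter names, the fake_power stand-in for real post-layout results, and the use of a random-forest regressor are assumptions for illustration, not the paper's models or flow.

    # Sketch of a learned PPA predictor plus a predictor-guided design-space search.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Illustrative design knobs: [PE-array size, buffer KB, clock target (GHz), placement density]
    def sample_configs(n):
        return np.column_stack([
            rng.integers(8, 65, n),        # PE-array dimension
            rng.integers(32, 513, n),      # on-chip buffer (KB)
            rng.uniform(0.5, 2.0, n),      # target clock (GHz)
            rng.uniform(0.5, 0.9, n),      # placement density
        ])

    def fake_power(cfg):
        # Stand-in for post-layout power numbers that would come from real PPA runs.
        pe, buf, clk, dens = cfg.T
        return 0.002 * pe**2 * clk + 0.0005 * buf + 5 * dens + rng.normal(0, 0.2, len(pe))

    X_train = sample_configs(500)
    y_train = fake_power(X_train)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Automated DSE: random search guided by the trained predictor.
    candidates = sample_configs(10_000)
    pred = model.predict(candidates)
    best = candidates[np.argmin(pred)]
    print("predicted-lowest-power config [PEs, KB, GHz, density]:", np.round(best, 2))
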
Data Pruning-enabled High Performance and Reliable Graph Neural Network Training on ReRAM-based Processing-in-Memory Accelerators
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-05-03 · DOI: 10.1145/3656171
Chukwufumnanya Ogbogu, Biresh K. Joardar, Krishnendu Chakrabarty, Jana Doppa, Partha Pratim Pande
{"title":"Data Pruning-enabled High Performance and Reliable Graph Neural Network Training on ReRAM-based Processing-in-Memory Accelerators","authors":"Chukwufumnanya Ogbogu, Biresh K. Joardar, Krishnendu Chakrabarty, Jana Doppa, Partha Pratim Pande","doi":"10.1145/3656171","DOIUrl":"https://doi.org/10.1145/3656171","url":null,"abstract":"<p>Graph Neural Networks (GNNs) have achieved remarkable accuracy in cognitive tasks such as predictive analytics on graph-structured data. Hence, they have become very popular in diverse real-world applications. However, GNN training with large real-world graph datasets in edge-computing scenarios is both memory- and compute-intensive. Traditional computing platforms such as CPUs and GPUs do not provide the energy efficiency and low latency required in edge intelligence applications due to their limited memory bandwidth. Resistive random-access memory (ReRAM)-based processing-in-memory (PIM) architectures have been proposed as suitable candidates for accelerating AI applications at the edge, including GNN training. However, ReRAM-based PIM architectures suffer from low reliability due to their limited endurance, and low performance when they are used for GNN training in real-world scenarios with large graphs. In this work, we propose a learning-for-data-pruning framework, which leverages a trained Binary Graph Classifier (BGC) to reduce the size of the input data graph by pruning subgraphs early in the training process to accelerate the GNN training process on ReRAM-based architectures. The proposed light-weight BGC model reduces the amount of redundant information in input graph(s) to speed up the overall training process, improves the reliability of the ReRAM-based PIM accelerator, and reduces the overall training cost. This enables fast, energy-efficient, and reliable GNN training on ReRAM-based architectures. Our experimental results demonstrate that using this learning for data pruning framework, we can accelerate GNN training and improve the reliability of ReRAM-based PIM architectures by up to 1.6 ×, and reduce the overall training cost by 100 × compared to state-of-the-art data pruning techniques.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140827622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
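
The following is a conceptual sketch of data pruning driven by a lightweight binary classifier: subgraphs scored as unlikely to help training are dropped early so the accelerator only processes the rest. The handcrafted subgraph features, the toy labelling rule, and the logistic-regression classifier are illustrative stand-ins; the paper's Binary Graph Classifier and its training signal are not reproduced here.

    # Sketch of classifier-driven subgraph pruning ahead of GNN training.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def subgraph_features(adj):
        """Cheap summary features of one subgraph (adjacency matrix)."""
        deg = adj.sum(axis=1)
        return [adj.shape[0], adj.sum() / 2, deg.mean(), deg.std()]

    # Synthetic pool of subgraphs with a made-up "useful for training" label.
    subgraphs, labels = [], []
    for _ in range(400):
        n = rng.integers(8, 40)
        adj = (rng.random((n, n)) < 0.15).astype(float)
        adj = np.triu(adj, 1)
        adj = adj + adj.T
        subgraphs.append(adj)
        labels.append(int(adj.sum() / 2 > 0.12 * n * (n - 1) / 2))  # toy labelling rule

    X = np.array([subgraph_features(a) for a in subgraphs])
    y = np.array(labels)
    bgc = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])

    # Pruning step: keep only subgraphs the classifier scores as useful.
    scores = bgc.predict_proba(X[300:])[:, 1]
    keep = [g for g, s in zip(subgraphs[300:], scores) if s > 0.5]
    print(f"kept {len(keep)} of {len(scores)} candidate subgraphs for GNN training")
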
HLS-IRT: Hardware Trojan Insertion through Modification of Intermediate Representation During High-Level Synthesis
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-05-03 · DOI: 10.1145/3663477
Rijoy Mukherjee, Archisman Ghosh, Rajat Subhra Chakraborty
{"title":"HLS-IRT: Hardware Trojan Insertion through Modification of Intermediate Representation During High-Level Synthesis","authors":"Rijoy Mukherjee, Archisman Ghosh, Rajat Subhra Chakraborty","doi":"10.1145/3663477","DOIUrl":"https://doi.org/10.1145/3663477","url":null,"abstract":"<p>Modern integrated circuit (IC) design incorporates the usage of proprietary computer-aided design (CAD) software and integration of third-party hardware intellectual property (IP) cores. Subsequently, the fabrication process for the design takes place in untrustworthy offshore foundries that raises concerns regarding security and reliability. Hardware Trojans (HTs) are difficult to detect malicious modifications to IC that constitute a major threat, which if undetected prior to deployment, can lead to catastrophic functional failures or the unauthorized leakage of confidential information. Apart from the risks posed by rogue human agents, recent studies have shown that high-level synthesis (HLS) CAD software can serve as a potent attack vector for inserting Hardware Trojans (HTs). In this paper, we introduce a novel automated attack vector, which we term “HLS-IRT”, by inserting HT in the register transfer logic (RTL) description of circuits generated during a HLS based IC design flow, by directly modifying the compiler-generated intermediate representation (IR) corresponding to the design. We demonstrate the attack using a design and implementation flow based on the open-source <i>Bambu</i> HLS software and <i>Xilinx</i> FPGA, on several hardware accelerators spanning different application domains. Our results show that the resulting HTs are surreptitious and effective, while incurring minimal design overhead. We also propose a novel detection scheme for HLS-IRT, since existing techniques are found to be inadequate to detect the proposed HTs.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140827826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
DeepOTF: Learning Equations-constrained Prediction for Electromagnetic Behavior
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-05-01 · DOI: 10.1145/3663476
Peng Xu, Siyuan Xu, Tinghuan Chen, Guojin Chen, Tsung-Yi Ho, Bei Yu
{"title":"DeepOTF: Learning Equations-constrained Prediction for Electromagnetic Behavior","authors":"Peng Xu, Siyuan XU, Tinghuan Chen, Guojin Chen, Tsungyi Ho, Bei Yu","doi":"10.1145/3663476","DOIUrl":"https://doi.org/10.1145/3663476","url":null,"abstract":"<p>High-quality passive devices are becoming increasingly important for the development of mobile devices and telecommunications, but obtaining such devices through simulation and analysis of electromagnetic (EM) behavior is time-consuming. To address this challenge, artificial neural network (ANN) models have emerged as an effective tool for modeling EM behavior, with NeuroTF being a representative example. However, these models are limited by the specific form of the transfer function, leading to discontinuity issues and high sensitivities. Moreover, previous methods have overlooked the physical relationship between distributed parameters, resulting in unacceptable numeric errors in the conversion results. To overcome these limitations, we propose two different neural network architectures: DeepOTF and ComplexTF. DeepOTF is a data-driven deep operator network for automatically learning feasible transfer functions for different geometric parameters. ComplexTF utilizes complex-valued neural networks to fit feasible transfer functions for different geometric parameters in the complex domain while maintaining causality and passivity. Our approach also employs an Equations-constraint Learning scheme to ensure the strict consistency of predictions and a dynamic weighting strategy to balance optimization objectives. The experimental results demonstrate that our framework shows superior performance than baseline methods, achieving up to 1700 × higher accuracy. </p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140827670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
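
To illustrate the equations-constrained learning idea, the sketch below trains a small network on synthetic data while adding a penalty that enforces physical relations on its outputs, here two-port reciprocity (S12 = S21) and a soft passivity bound (|Sij| <= 1). The network, data, constraint choice, and fixed penalty weight are assumptions for illustration; the actual DeepOTF/ComplexTF architectures and their dynamic weighting are not reproduced.

    # Sketch of an equations-constrained training loop for EM-behavior prediction.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Inputs: geometric parameters + frequency; outputs: Re/Im of S11, S21, S12, S22.
    model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 8))

    x = torch.rand(256, 4)                 # synthetic geometry/frequency samples
    y = torch.rand(256, 8) * 0.8 - 0.4     # synthetic EM-solver targets (stand-in)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mse = nn.MSELoss()
    lam = 1.0                              # weight of the physics penalty (fixed here)

    for _ in range(500):
        pred = model(x)
        s = pred.view(-1, 4, 2)                        # (batch, port-pair, re/im)
        data_loss = mse(pred, y)
        recip = ((s[:, 1] - s[:, 2]) ** 2).mean()      # reciprocity: S21 vs S12
        mag = (s ** 2).sum(-1).sqrt()
        passivity = torch.relu(mag - 1.0).mean()       # penalize |S| > 1
        loss = data_loss + lam * (recip + passivity)
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final data loss {data_loss.item():.4f}, constraint penalty {(recip + passivity).item():.4f}")
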
Semi-Permanent Stuck-At Fault injection attacks on Elephant and GIFT lightweight ciphers
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-04-29 · DOI: 10.1145/3662734
Priyanka Joshi, Bodhisatwa Mazumdar
{"title":"Semi-Permanent Stuck-At Fault injection attacks on Elephant and GIFT lightweight ciphers","authors":"Priyanka Joshi, Bodhisatwa Mazumdar","doi":"10.1145/3662734","DOIUrl":"https://doi.org/10.1145/3662734","url":null,"abstract":"<p>Fault attacks pose a potent threat to modern cryptographic implementations, particularly those used in physically approachable embedded devices in IoT environments. Information security in such resource-constrained devices is ensured using lightweight ciphers, where combinational circuit implementations of SBox are preferable over look-up tables (LUT) as they are more efficient regarding area, power, and memory requirements. Most existing fault analysis techniques focus on fault injection in memory cells and registers. Recently, a novel fault model and analysis technique, namely <i>Semi-Permanent Stuck-At</i> (SPSA) fault analysis, has been proposed to evaluate the security of ciphers with combinational circuit implementation of <i>Substitution layer</i> elements, SBox. In this work, we propose optimized techniques to recover the key in a minimum number of ciphertexts in such implementations of lightweight ciphers. Based on the proposed techniques, a key recovery attack on the NIST lightweight cryptography (NIST-LWC) standardization process finalist, <monospace>Elephant</monospace> AEAD, has been proposed. The proposed key recovery attack is validated on two versions of <monospace>Elephant</monospace> cipher. The proposed fault analysis approach recovered the secret key within 85 − 240 ciphertexts, calculated over 1000 attack instances. To the best of our knowledge, this is the first work on fault analysis attacks on the <monospace>Elephant</monospace> scheme. Furthermore, an optimized combinational circuit implementation of <i>Spongent</i> SBox (SBox used in <monospace>Elephant</monospace> cipher) is proposed, having a smaller gate count than the optimized implementation reported in the literature. The proposed fault analysis techniques are validated on primary and optimized versions of <i>Spongent</i> SBox through Verilog simulations. Further, we pinpoint SPSA hotspots in the lightweight <monospace>GIFT</monospace> cipher SBox architecture. We observe that <monospace>GIFT</monospace> SBox exhibits resilience towards the proposed SPSA fault analysis technique under the single fault adversarial model. However, <i>eight</i> SPSA fault patterns reduce the nonlinearity of the SBox to zero, rendering it vulnerable to linear cryptanalysis. Conclusively, SPSA faults may adversely affect the cryptographic properties of an SBox, thereby leading to trivial key recovery. The <monospace>GIFT</monospace> cipher is used as an example to focus on two aspects: i) its SBox construction is resilient to the proposed SPSA analysis and therefore characterizing such constructions for SPSA resilience and, ii) an SBox even though resilient to the proposed SPSA analysis, may exhibit vulnerabilities towards other classical analysis techniques when subjected to SPSA faults. 
Our work reports new vulnerabilities in fault analysis in the combinational circuit implementations of cryptographic protocols.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140810874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
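
The observation that SPSA faults can degrade an SBox's cryptographic properties can be checked with a few lines of code: apply a stuck-at fault to one output bit of a 4-bit SBox and recompute its nonlinearity from the Walsh spectrum of its component functions. The SBox table below is an arbitrary placeholder, not the GIFT or Spongent SBox; substituting the real table reproduces the kind of characterization the abstract describes.

    # Sketch: nonlinearity of a 4-bit SBox with and without an output-bit stuck-at fault.
    def nonlinearity(sbox, n=4):
        """Minimum distance of all non-trivial component functions to affine functions."""
        best = 1 << n  # upper bound
        for b in range(1, 1 << n):                 # output mask
            for a in range(1 << n):                # input mask
                corr = 0
                for x in range(1 << n):
                    in_par = bin(x & a).count("1") & 1
                    out_par = bin(sbox[x] & b).count("1") & 1
                    corr += 1 if in_par == out_par else -1
                best = min(best, (1 << (n - 1)) - abs(corr) // 2)
        return best

    def stuck_at(sbox, bit, value):
        """Model a semi-permanent stuck-at fault on one output bit of the SBox."""
        mask = 1 << bit
        return [(y | mask) if value else (y & ~mask) for y in sbox]

    sbox = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # placeholder 4-bit SBox

    print("nonlinearity (fault-free):", nonlinearity(sbox))
    for bit in range(4):
        print(f"nonlinearity with output bit {bit} stuck at 0:",
              nonlinearity(stuck_at(sbox, bit, 0)))
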
gem5-NVDLA: A Simulation Framework for Compiling, Scheduling and Architecture Evaluation on AI System-on-Chips
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-04-29 · DOI: 10.1145/3661997
Chengtao Lai, Wei Zhang
{"title":"gem5-NVDLA: A Simulation Framework for Compiling, Scheduling and Architecture Evaluation on AI System-on-Chips","authors":"Chengtao Lai, Wei Zhang","doi":"10.1145/3661997","DOIUrl":"https://doi.org/10.1145/3661997","url":null,"abstract":"<p>Recent years have seen an increasing trend in designing AI accelerators together with the rest of the system, including CPUs and memory hierarchy. This trend calls for high-quality simulators or analytical models that enable such kind of co-exploration. Currently, the majority of such exploration is supported by AI accelerator analytical models. But such models usually overlook the non-trivial impact of congestion of shared resources, non-ideal hardware utilization and non-zero CPU scheduler overhead, which could only be modeled by cycle-level simulators. However, most simulators with full-stack toolchains are proprietary to corporations, and the few open-source simulators are suffering from either weak compilers or limited space of modeling. This framework resolves these issues by proposing a compilation and simulation flow to run arbitrary Caffe neural network models on the NVIDIA Deep Learning Accelerator (NVDLA) with gem5, a cycle-level simulator, and by adding more building blocks including scratchpad allocation, multi-accelerator scheduling, tensor-level prefetching mechanisms and a DMA-aided embedded buffer to map workload to multiple NVDLAs. The proposed framework has been tested and verified on a set of convolution neural networks, showcasing the capability of modeling complex buffer management strategies, scheduling policies and hardware architectures. As a case study of this framework, we demonstrate the importance of adopting different buffering strategies for activation and weight tensors in AI accelerators to acquire remarkable speedup.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140810870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
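
The case-study point about buffering strategies can be illustrated with a back-of-the-envelope timing model, not the gem5-NVDLA simulator itself: with single buffering each tile's DMA load and compute are serialized, while with double buffering the next tile's load overlaps the current tile's compute. The tile counts and cycle figures below are made up for illustration.

    # Sketch: why the activation/weight buffering strategy matters for layer latency.
    def single_buffer_cycles(n_tiles, load, compute):
        # Every tile waits for its own DMA load before computing.
        return n_tiles * (load + compute)

    def double_buffer_cycles(n_tiles, load, compute):
        # The first load cannot be hidden; afterwards the slower of (load, compute) dominates.
        return load + n_tiles * max(load, compute)

    for load, compute in [(1000, 4000), (3000, 3000), (5000, 2000)]:
        sb = single_buffer_cycles(16, load, compute)
        db = double_buffer_cycles(16, load, compute)
        print(f"load={load:5d} compute={compute:5d}  single={sb:6d}  double={db:6d}  speedup={sb/db:.2f}x")

The model makes the trade-off visible: double buffering pays off most when load and compute times are comparable, and the benefit shrinks as one side dominates, which is exactly the kind of question a cycle-level simulator answers with real DMA and scheduler overheads included.
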
Enhanced Watermarking for Paper-Based Digital Microfluidic Biochips
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-04-29 · DOI: 10.1145/3661309
Jian-De Li, Sying-Jyan Wang, Katherine Shu-Min Li, Tsung-Yi Ho
{"title":"Enhanced Watermarking for Paper-Based Digital Microfluidic Biochips","authors":"Jian-De Li, Sying-Jyan Wang, Katherine Shu-Min Li, Tsung-Yi Ho","doi":"10.1145/3661309","DOIUrl":"https://doi.org/10.1145/3661309","url":null,"abstract":"<p>Paper-based digital microfluidic biochip (PB-DMFB) technology provides a promising solution to many biochemical applications. However, the PB-DMFB manufacturing process may suffer from potential security threats. For example, Trojan insertion attack may affect the functionality of PB-DMFBs. To ensure the correct functionality of PB-DMFBs, we propose a watermarking scheme to hide information in the PB-DMFB layout, which allows users to check design integrity and authenticate the source of the PB-DMFB design. As a result, the proposed method serves as a countermeasure against Trojan insertion attacks in addition to proof of authorship.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140811286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
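
As a conceptual sketch of layout watermarking in general, not the paper's PB-DMFB encoding, the snippet below steers functionally equivalent binary layout choices with bits of a keyed digest of the design, so the finished layout carries a signature that the author can later verify. The design_id, key, and the notion of "free choices" are hypothetical.

    # Sketch: embedding and verifying a keyed watermark in don't-care layout decisions.
    import hmac, hashlib

    def watermark_bits(design_id: str, key: bytes, n_bits: int):
        digest = hmac.new(key, design_id.encode(), hashlib.sha256).digest()
        bits = "".join(f"{byte:08b}" for byte in digest)
        return [int(b) for b in bits[:n_bits]]

    def embed(free_choices, design_id, key):
        """free_choices: number of don't-care binary decisions available in the layout."""
        return watermark_bits(design_id, key, free_choices)   # bit i drives decision i

    def verify(extracted_bits, design_id, key):
        expected = watermark_bits(design_id, key, len(extracted_bits))
        return extracted_bits == expected

    key = b"author-secret-key"
    layout_decisions = embed(free_choices=32, design_id="pbdmfb-assay-v1", key=key)
    print("embedded decisions:", layout_decisions)
    print("authentic:", verify(layout_decisions, "pbdmfb-assay-v1", key))
    print("tampered: ", verify(layout_decisions[:-1] + [1 - layout_decisions[-1]], "pbdmfb-assay-v1", key))
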
Translating Test Responses to Images for Test-termination Prediction via Multiple Machine Learning Strategies
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-04-25 · DOI: 10.1145/3661310
Hongfei Wang, Jingyao Li, Jiayi Wang, Zijun Ping, Hongcan Xiong, Wei Liu, Dongmian Zou
{"title":"Translating Test Responses to Images for Test-termination Prediction via Multiple Machine Learning Strategies","authors":"Hongfei Wang, Jingyao Li, Jiayi Wang, Zijun Ping, Hongcan Xiong, Wei Liu, Dongmian Zou","doi":"10.1145/3661310","DOIUrl":"https://doi.org/10.1145/3661310","url":null,"abstract":"Failure diagnosis is a software-based data-driven procedure. Collecting an excessive amount of fail data not only increases the overall test cost but can also potentially reduce diagnostic resolution. Test-termination prediction is thus proposed to dynamically determine which failing test pattern to terminate testing, producing an amount of test data that is sufficient for an accurate diagnosis analysis. In this work, we describe a set of novel methods utilizing advanced machine learning techniques for efficient test-termination prediction. To implement this approach, we first generate images representing failing test responses from failure-log files. These images are then used to train a multi-layer convolutional neural network (CNN) incorporating a residual block. The trained CNN model leverages the images and known diagnostic results to determine the optimal test-termination strategy within the testing process, ensuring efficient and high-quality diagnosis. In addition to the integration of test response-to-image translation, our approach harnesses two cutting-edge learning strategies to enhance fail data and boost performance in subsequent tasks. The first strategy is transfer learning, which utilizes sample-label information from one circuit to guide the decision of whether to continue or stop testing for another circuit lacking labels. The second strategy involves the use of a generative deep model to generate fail data in the form of synthetic images. This technique increases the modeling effectiveness by expanding the volume of training samples. Experimental results conducted on actual failing chips and standard benchmarks validate that our proposed method surpasses existing approaches. Our method creates opportunities to harness the power of recent advances in machine learning for improving test and diagnosis efficiency.","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140656949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
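
Below is a minimal sketch of the two front-end steps the abstract describes: reshaping a failing-pattern response (a vector of pass/fail bits) into a 2-D image, and feeding it to a small CNN with one residual block that emits a continue/stop decision. The image size, network depth, and the toy input batch are illustrative assumptions rather than the paper's configuration, and no training loop is shown.

    # Sketch: failure-log bits -> image -> residual CNN -> stop/continue logits.
    import torch
    import torch.nn as nn

    def responses_to_image(fail_bits, side=32):
        """Pad/reshape a 1-D vector of pass(0)/fail(1) bits into a side x side image."""
        v = torch.zeros(side * side)
        bits = torch.tensor(fail_bits[: side * side], dtype=torch.float32)
        v[: len(bits)] = bits
        return v.view(1, side, side)            # (channels, H, W)

    class ResidualBlock(nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        def forward(self, x):
            return torch.relu(x + self.body(x))  # skip connection

    class StopNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                ResidualBlock(16),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, 2))                # logits: [continue, stop]
        def forward(self, x):
            return self.net(x)

    # Toy batch: 8 failure logs, each an arbitrary fail-bit vector.
    batch = torch.stack([responses_to_image([1, 0, 1, 1] * 50) for _ in range(8)])
    logits = StopNet()(batch)
    print("stop-testing decisions:", logits.argmax(dim=1).tolist())
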
Survey of Machine Learning for Software-assisted Hardware Design Verification: Past, Present, and Prospect
IF 1.4 · CAS Q4 · Computer Science
ACM Transactions on Design Automation of Electronic Systems · Pub Date: 2024-04-24 · DOI: 10.1145/3661308
Nan Wu, Yingjie Li, Hang Yang, Hanqiu Chen, Steve Dai, Cong Hao, Cunxi Yu, Yuan Xie
{"title":"Survey of Machine Learning for Software-assisted Hardware Design Verification: Past, Present, and Prospect","authors":"Nan Wu, Yingjie Li, Hang Yang, Hanqiu Chen, Steve Dai, Cong Hao, Cunxi Yu, Yuan Xie","doi":"10.1145/3661308","DOIUrl":"https://doi.org/10.1145/3661308","url":null,"abstract":"With the ever-increasing hardware design complexity comes the realization that efforts required for hardware verification increase at an even faster rate. Driven by the push from the desired verification productivity boost and the pull from leap-ahead capabilities of machine learning (ML), recent years have witnessed the emergence of exploiting ML-based techniques to improve the efficiency of hardware verification. In this paper, we present a panoramic view of how ML-based techniques are embraced in hardware design verification, from formal verification to simulation-based verification, from academia to industry, and from current progress to future prospects. We envision that the adoption of ML-based techniques will pave the road for more scalable, more intelligent, and more productive hardware verification.","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140660389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0