Title: A Novel Security Threat Model for Automated AI Accelerator Generation Platforms
Authors: Chao Guo; Youhua Shi
Journal: IEEE Access, vol. 13, pp. 61237-61249 (JCR Q2, Computer Science, Information Systems; Impact Factor 3.4; CAS region 3, Computer Science)
DOI: 10.1109/ACCESS.2025.3558072
Publication date: 2025-04-04 (Journal Article)
Article page: https://ieeexplore.ieee.org/document/10949221/
Full-text PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10949221
Citations: 0
Abstract
In recent years, the design of Artificial Intelligence (AI) accelerators has gradually shifted from focusing solely on standalone accelerator hardware to considering the entire system, giving rise to a new AI accelerator design paradigm that emphasizes full-stack integration. Systems designed under this paradigm offer a user-friendly, end-to-end solution for deploying pre-trained models. While previous studies have identified vulnerabilities in individual hardware components or models, the security of this paradigm has not yet been thoroughly evaluated. This work, from an attacker's perspective, proposes a threat model for this paradigm and reveals potential security vulnerabilities of such systems by embedding malicious code in the design flow, highlighting the need for protection to close this security gap. During exploration and generation, the attack leverages the exploration unit to identify sensitive parameters in the model's intermediate layers and inserts a hardware Trojan (HT) into the accelerator. During execution, malicious information concealed within the control instructions triggers the HT. Experimental results demonstrate that the proposed method, which manipulates sensitive parameters in a few selected kernels across the middle convolutional layers, misclassifies input images into attacker-specified categories with high misclassification rates across various models: 97.3% in YOLOv8 by modifying only three parameters per layer in three layers, 99.2% in ResNet-18 by altering four parameters per layer in three layers, and 98.1% in VGG-16 by changing seven parameters per layer in four layers. Additionally, the area overhead introduced by the proposed HT is no more than 0.39% of the total design while maintaining near-original performance relative to uncompromised designs, which clearly illustrates the stealth of the proposed security threat.
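The core idea in the abstract, flipping a model's prediction to an attacker-chosen class by perturbing only a handful of "sensitive" parameters, can be illustrated with a toy sketch. This is not the paper's implementation: a small linear classifier stands in for a convolutional layer, and sensitivity is approximated by the gradient of the target-class score with respect to each weight (all names, sizes, and the perturbation magnitude are illustrative assumptions).

```python
import numpy as np

# Toy stand-in for the attack described in the abstract (illustrative only):
# pick the few weights whose perturbation most increases the target-class
# score, modify just those, and observe the prediction flip.

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 16))      # toy classifier: 5 classes, 16 features
x = rng.normal(size=16)           # one input sample

clean_pred = int(np.argmax(W @ x))
target = (clean_pred + 1) % 5     # attacker-chosen wrong class

# For a linear layer, d(score_target)/dW[target, j] = x[j], so the largest
# |x[j]| identify the most "sensitive" weights of the target row.
k = 3                              # only k parameters are tampered with
sensitive = np.argsort(-np.abs(x))[:k]

W_trojan = W.copy()
# Push each chosen weight in the direction that raises the target logit.
W_trojan[target, sensitive] += 10.0 * np.sign(x[sensitive])

trojan_pred = int(np.argmax(W_trojan @ x))
print("clean:", clean_pred, "target:", target, "after tampering:", trojan_pred)
```

The sketch mirrors the abstract's observation that very few parameters suffice: only `k` weights out of 80 change, yet the output class is redirected, which is why such tampering is hard to spot by inspecting the bulk of the model.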
IEEE Access · Computer Science, Information Systems · Engineering, Electrical & Electronic
CiteScore: 9.80
Self-citation rate: 7.70%
Articles published: 6673
Review time: 6 weeks
About the journal:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary": reviewers either Accept or Reject an article in the form in which it is submitted, in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.