Security Issues of GPUs and FPGAs for AI-powered near & far Edge Services

Stylianos Koumoutzelis, I. Giannoulakis, Titos Georgoulakis, G. Avdikos, E. Kafetzakis
{"title":"Security Issues of GPUs and FPGAs for AI-powered near & far Edge Services","authors":"Stylianos Koumoutzelis, I. Giannoulakis, Titos Georgoulakis, G. Avdikos, E. Kafetzakis","doi":"10.34190/eccws.22.1.1160","DOIUrl":null,"url":null,"abstract":"Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) are widely applied to cloud and embedded applications in which such devices are applied to near and far edge computing operations. This pool of available devices has a wide range of power/size specifications to support servers ranging from big data centres to small cloudlets, or even down to embedded systems and IoT boards. Overall, the most prominent devices and vendors in the market today are the following Xilinx for FPGA-based accelerators, Nvidia and AMD for GPUs, Intel for FPGA- /GPU-based accelerators. Decreasing the latency and increasing the throughput of Artificial Intelligence Functions (AIF), either for network automation or user applications, requires some sort of parallelization inside such purpose-built hardware acceleration. The AI@EDGE project is developing a Connect-Compute Platform (CCP) in which hardware accelerators (1 Nvidia GPU Tesla V100 (near edge device) and 1 Jetson AGX and 1 Jetson Nano (far edge devices), as well as 2 Xilinx FPGAs Alveo U280+U200 (near edge devices) and 1 Versal VCK190 and 2 Zynq ZCU104) are placed inside a server node and execute edge computing scenarios involving multiple nodes of diverse compute capabilities each, to test various integration approaches, to study orchestration techniques measure AIF deployment efficiency, all while developing certain FPGA/GPU code to accelerate representative AIFs of AI@EDGE. In this paper we compare the power/size/performance specifications of all accelerators and highlight the security issues associated with the cloud and embedded accelerators. 
This study presents the security issues announced by the vendors with the results of our tests and proposes tests and security functions (policies and objectives) which will be applied to the CCP to increase the security level of CCP. It also considers security issues related with the hardware set-up (accelerators inside server nodes) from the network point of view.","PeriodicalId":258360,"journal":{"name":"European Conference on Cyber Warfare and Security","volume":"325 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Conference on Cyber Warfare and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34190/eccws.22.1.1160","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) are widely used in cloud and embedded applications, where they carry out near- and far-edge computing operations. This pool of available devices spans a wide range of power/size specifications, supporting servers from big data centres down to small cloudlets, and even embedded systems and IoT boards. The most prominent devices and vendors in the market today are the following: Xilinx for FPGA-based accelerators, Nvidia and AMD for GPUs, and Intel for both FPGA- and GPU-based accelerators. Decreasing the latency and increasing the throughput of Artificial Intelligence Functions (AIFs), whether for network automation or user applications, requires some form of parallelization inside such purpose-built hardware accelerators. The AI@EDGE project is developing a Connect-Compute Platform (CCP) in which hardware accelerators (one Nvidia Tesla V100 GPU as a near-edge device; one Jetson AGX and one Jetson Nano as far-edge devices; two Xilinx Alveo FPGAs, a U280 and a U200, as near-edge devices; and one Versal VCK190 and two Zynq ZCU104 boards) are placed inside a server node and execute edge computing scenarios involving multiple nodes of diverse compute capabilities, in order to test various integration approaches, study orchestration techniques, and measure AIF deployment efficiency, all while developing FPGA/GPU code to accelerate representative AIFs of AI@EDGE. In this paper we compare the power/size/performance specifications of all the accelerators and highlight the security issues associated with cloud and embedded accelerators. The study presents the security issues disclosed by the vendors together with the results of our tests, and proposes tests and security functions (policies and objectives) to be applied to the CCP to raise its security level. It also considers security issues related to the hardware set-up (accelerators inside server nodes) from the network point of view.
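The throughput argument above — that AIFs benefit from parallelization inside purpose-built accelerators — can be illustrated with a minimal sketch. This example is not from the paper: NumPy stands in for an actual GPU/FPGA kernel, and the "AI function" is a toy dense layer. The point it shows is that batching many inference inputs into one large operation (the form of parallelism a V100 or Alveo card exploits in hardware) computes the same results as processing requests one by one, while issuing far fewer operations.

```python
import numpy as np

# Toy "AI function": a single dense layer with fixed random weights.
# (Hypothetical stand-in for a real AIF; 64 input features -> 10 outputs.)
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 10))

def infer_one(x):
    """Process one input vector at a time (no batch parallelism)."""
    return x @ W

def infer_batched(X):
    """Process a whole batch in one matrix multiply, the way an
    accelerator kernel schedules the same work in parallel."""
    return X @ W

# 256 queued inference requests, each a 64-feature vector.
X = rng.standard_normal((256, 64))

# Both paths compute identical results; the batched path issues one
# large operation instead of 256 small ones.
sequential = np.stack([infer_one(x) for x in X])
batched = infer_batched(X)
assert np.allclose(sequential, batched)
assert batched.shape == (256, 10)
```

On real hardware the batched path is where the latency/throughput gain comes from: per-call launch and transfer overhead is amortized across the batch, and the accelerator's parallel execution units are kept busy.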