Mark D. Barnell, Courtney Raymond, Steven Smiley, Darrek Isereau, Daniel Brown
2022 IEEE High Performance Extreme Computing Conference (HPEC)
DOI: 10.1109/HPEC55821.2022.9926369
Published: 2022-09-19
Citations: 1
Ultra Low-Power Deep Learning Applications at the Edge with Jetson Orin AGX Hardware
Abstract

The latest NVIDIA Jetson Orin AGX hardware provides new capabilities for processing "at the edge," where sensor information is collected. The computing architecture delivers substantial computation in a high-performance, small-form-factor, low-power package. The recently released (2022) Orin and the novel research completed on this effort were combined to accelerate the development and demonstration of a new concept of operation for machine learning at the edge. This research developed a concept that pairs the YOLOv4-tiny deep learning object detector with the Jetson Orin AGX, which obtains data through a video feed from a drone to emulate autonomous capabilities for onboard embedded computing. Further, this research included the development of model-based solutions on both the public VisDrone dataset and newly collected optical datasets. Extending this further, the technical approach applied these concepts through experiments and demonstrations. Specifically, a data collection and processing plan was developed and implemented. Importantly, our technical approach allowed us to move rapidly from non-real-time processing to a successful demonstration of real-time, in-flight capability. In summary, this research combined new compute hardware, novel processing algorithms, and a unique concept of operation. This technical approach resulted in the real-time detection of targets (vehicles) from various flight altitudes (nominally 400 ft) using newly collected electro-optical (EO) data obtained in real time through the drone's High-Definition Multimedia Interface (HDMI).
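The paper does not include source code, but the detection pipeline it describes (YOLOv4-tiny inference on frames from the drone's HDMI feed) implies a standard post-processing step: filtering the detector's raw output rows by confidence and suppressing overlapping boxes. The sketch below is a hypothetical, minimal NumPy version of that step — the row layout `[cx, cy, w, h, objectness, class scores...]`, the thresholds, and the function names are illustrative assumptions, not taken from the paper; in the real system the rows would come from a TensorRT- or OpenCV-DNN-deployed YOLOv4-tiny model on the Orin.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_detections(rows, conf_thresh=0.5, iou_thresh=0.4):
    """Hypothetical YOLO-style post-processing: confidence filter + greedy NMS.

    Each row is assumed to be [cx, cy, w, h, objectness, class_0, class_1, ...]
    in pixel units. Returns a list of ([x1, y1, x2, y2], confidence, class_id).
    """
    candidates = []
    for row in rows:
        class_scores = row[5:]
        cls = int(np.argmax(class_scores))
        conf = float(row[4] * class_scores[cls])  # objectness * best class score
        if conf < conf_thresh:
            continue  # drop weak detections before NMS
        cx, cy, w, h = row[:4]
        box = [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]
        candidates.append((box, conf, cls))
    # Greedy per-class non-maximum suppression, highest confidence first.
    candidates.sort(key=lambda c: c[1], reverse=True)
    kept = []
    for box, conf, cls in candidates:
        if all(iou(box, k[0]) < iou_thresh for k in kept if k[2] == cls):
            kept.append((box, conf, cls))
    return kept
```

For a two-class detector (e.g. "vehicle" vs. background clutter), two heavily overlapping high-confidence boxes of the same class collapse to one detection, while sub-threshold rows are discarded before suppression — the same filtering any real-time loop on the Orin would run per frame.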