Target Classification in Synthetic Aperture Radar and Optical Imagery Using Loihi Neuromorphic Hardware

Mark D. Barnell, Courtney Raymond, Matthew Wilson, Darrek Isereau, Chris Cicotta

2020 IEEE High Performance Extreme Computing Conference (HPEC), published 2020-09-22
DOI: 10.1109/HPEC43674.2020.9286246
Citations: 3

Abstract
Intel's novel Loihi processing chip has been used to explore new information exploitation techniques. Specifically, we analyzed two types of data (optical and radar). These data modalities and their associated machine learning algorithms were used to showcase the ability of the system to address real-world problems, such as object detection and classification. Intel's fully digital Loihi design is inspired by biological processes and brain functions. Neuromorphic architectures such as Loihi promise to improve computational efficiency for various machine learning tasks, with a realizable path toward implementation in many systems, e.g., airborne computing for intelligence, surveillance, and reconnaissance systems, and/or future autonomous vehicles and household appliances. With the current software development kit, it is possible to train an artificial neural network model in a common deep learning framework such as Keras and quantize the model weights for a simple, direct translation onto the Loihi hardware. The radar imagery analyzed included a seven-vehicle-class target set, which was processed at a rate of 9.5 images per second with an overall accuracy of 90.1%. The optical data included a binary (two-class) data set and a nine-class data set. The binary classifier processed the optical data at a rate of 12.8 images per second with 94.0% accuracy. The nine-class optical data set was processed at a rate of 12.9 images per second with 79.7% accuracy. Lastly, the system used ~6 watts of total power, with ~0.6 watts utilized by the neuromorphic cores. The inferencing energy used to classify each image varied between 14.9 and 63.2 millijoules/image.
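The abstract's train-then-quantize workflow (train in Keras, then quantize the float weights for direct translation onto Loihi) centers on mapping trained floating-point weights to the low-precision integers the hardware accepts. The sketch below illustrates only the quantization step, in plain NumPy, using symmetric uniform 8-bit quantization; the function name, bit width, and the toy weight matrix are illustrative assumptions, not the paper's actual NxSDK translation code.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Symmetric uniform quantization of a float weight array to signed integers.

    Returns the integer weights and the scale needed to recover
    approximate float values (w ~= q * scale). Bit width is an
    assumption for illustration; Loihi's actual weight precision
    depends on the core configuration.
    """
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8-bit signed
    scale = np.max(np.abs(w)) / qmax        # largest magnitude maps to qmax
    q = np.round(w / scale).astype(np.int32)
    return q, scale

# Toy weight matrix standing in for a trained Keras layer kernel
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

q, scale = quantize_weights(w)
w_hat = q * scale                           # dequantized approximation
max_err = float(np.max(np.abs(w - w_hat)))  # bounded by scale / 2 from rounding
```

Each quantized weight differs from its float original by at most half a quantization step (`scale / 2`), which is why such a direct translation can preserve most of the trained network's accuracy.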
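The reported power and throughput figures can be sanity-checked against the per-image energy range. Assuming (my assumption, not stated in the abstract) that per-image energy is simply neuromorphic-core power divided by throughput, the slowest workload (9.5 SAR images/second at ~0.6 W) reproduces the 63.2 mJ/image upper bound; the 14.9 mJ lower bound presumably reflects a different accounting and is not derived here.

```python
# Figures taken from the abstract
core_power_w = 0.6    # ~0.6 W drawn by the neuromorphic cores
sar_rate = 9.5        # SAR images/second (seven-class target set)

# Back-of-envelope energy per image: power / throughput, in millijoules
energy_sar_mj = core_power_w / sar_rate * 1000.0
# ~63.2 mJ/image, matching the abstract's upper bound
```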