N-DriverMotion: Driver Motion Learning and Prediction Using an Event-Based Camera and Directly Trained Spiking Neural Networks on Loihi 2

Impact Factor 5.3 · JCR Q1 (Engineering, Electrical & Electronic)
Hyo Jong Chung;Byungkon Kang;Yoon Seok Yang
{"title":"N-DriverMotion: Driver Motion Learning and Prediction Using an Event-Based Camera and Directly Trained Spiking Neural Networks on Loihi 2","authors":"Hyo Jong Chung;Byungkon Kang;Yoon Seok Yang","doi":"10.1109/OJVT.2024.3504481","DOIUrl":null,"url":null,"abstract":"Driver motion recognition is a key factor in ensuring the safety of driving systems. This paper presents a novel system for learning and predicting driver motions, along with an event-based (720 × 720) dataset, N-DriverMotion, newly collected to train a neuromorphic vision system. The system includes an event-based camera that generates a driver motion dataset representing spike inputs and efficient spiking neural networks (SNNs) that are effective in training and predicting the driver's gestures. The event dataset consists of 13 driver motion categories classified by direction (front, side), illumination (bright, moderate, dark), and participant. A novel optimized four-layer convolutional spiking neural network (CSNN) was trained directly without any time-consuming preprocessing. This enables efficient adaptation to energy- and resource-constrained on-device SNNs for real-time inference on high-resolution event-based streams. Compared to recent gesture recognition systems adopting neural networks for vision processing, the proposed neuromorphic vision system achieves competitive accuracy of 94.04% in a 13-class classification task, and 97.24% in an unexpected abnormal driver motion classification task with the CSNN architecture. Additionally, when deployed to Intel Loihi 2 neuromorphic chips, the energy-delay product (EDP) of the model achieved 20,721 times more efficient than that of a non-edge GPU, and 541 times more efficient than edge-purpose GPU. Our proposed CSNN and the dataset can be used to develop safer and more efficient driver-monitoring systems for autonomous vehicles or edge devices requiring an efficient neural network architecture.","PeriodicalId":34270,"journal":{"name":"IEEE Open Journal of Vehicular Technology","volume":"6 ","pages":"68-80"},"PeriodicalIF":5.3000,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10763457","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Vehicular Technology","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10763457/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Driver motion recognition is a key factor in ensuring the safety of driving systems. This paper presents a novel system for learning and predicting driver motions, along with an event-based (720 × 720) dataset, N-DriverMotion, newly collected to train a neuromorphic vision system. The system includes an event-based camera that generates a driver motion dataset representing spike inputs, and efficient spiking neural networks (SNNs) that are effective at learning and predicting the driver's gestures. The event dataset consists of 13 driver motion categories classified by direction (front, side), illumination (bright, moderate, dark), and participant. A novel optimized four-layer convolutional spiking neural network (CSNN) was trained directly, without any time-consuming preprocessing. This enables efficient adaptation to energy- and resource-constrained on-device SNNs for real-time inference on high-resolution event-based streams. Compared with recent gesture recognition systems that adopt neural networks for vision processing, the proposed neuromorphic vision system achieves a competitive accuracy of 94.04% on a 13-class classification task and 97.24% on an unexpected-abnormal-driver-motion classification task with the CSNN architecture. Additionally, when deployed to Intel Loihi 2 neuromorphic chips, the model's energy-delay product (EDP) was 20,721 times lower than that of a non-edge GPU and 541 times lower than that of an edge GPU. The proposed CSNN and dataset can be used to develop safer and more efficient driver-monitoring systems for autonomous vehicles or edge devices that require an efficient neural network architecture.
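The abstract names a directly trained four-layer CSNN but not its exact configuration. The sketch below is only a minimal PyTorch illustration of what such a network might look like, assuming surrogate-gradient training of leaky integrate-and-fire (LIF) neurons, ON/OFF polarity event frames downsampled from 720 × 720, and placeholder channel widths and kernel sizes; the names (FourLayerCSNN, LIFConv) and all layer dimensions are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, mem_minus_thresh):
        ctx.save_for_backward(mem_minus_thresh)
        return (mem_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output / (1.0 + x.abs()) ** 2  # d(spike)/dx ~ 1 / (1 + |x|)^2


spike = SurrogateSpike.apply


class LIFConv(nn.Module):
    """3x3 convolution followed by leaky integrate-and-fire (LIF) dynamics with soft reset."""

    def __init__(self, in_ch, out_ch, stride=2, beta=0.9, threshold=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1)
        self.beta = beta            # membrane leak factor per time step
        self.threshold = threshold  # firing threshold

    def forward(self, x, mem=None):
        cur = self.conv(x)
        if mem is None:
            mem = torch.zeros_like(cur)
        mem = self.beta * mem + cur           # leaky integration of input current
        spk = spike(mem - self.threshold)     # binary spike where the membrane crosses threshold
        mem = mem - spk * self.threshold      # soft reset of neurons that fired
        return spk, mem


class FourLayerCSNN(nn.Module):
    """Illustrative four-layer convolutional SNN for 13-class event-stream classification."""

    def __init__(self, num_classes=13, in_ch=2):
        super().__init__()
        widths = [in_ch, 16, 32, 64, 128]  # channel widths are placeholders, not from the paper
        self.layers = nn.ModuleList(LIFConv(widths[i], widths[i + 1]) for i in range(4))
        self.head = nn.Linear(widths[-1], num_classes)

    def forward(self, events):
        # events: (T, B, 2, H, W) binary frames with ON/OFF polarity channels
        num_steps = events.shape[0]
        mems = [None] * len(self.layers)
        logits = 0.0
        for t in range(num_steps):
            x = events[t]
            for i, layer in enumerate(self.layers):
                x, mems[i] = layer(x, mems[i])
            logits = logits + self.head(x.mean(dim=(2, 3)))  # spatial average, rate-coded readout
        return logits / num_steps


# Example: 16 time bins of sparse 128x128 event frames (downsampled from 720x720)
model = FourLayerCSNN()
frames = (torch.rand(16, 4, 2, 128, 128) > 0.95).float()
print(model(frames).shape)  # torch.Size([4, 13])
```

Accumulating the readout over time steps and averaging is one common rate-coded way to turn spiking activity into class logits; the paper may use a different readout.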
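For readers unfamiliar with the metric, the energy-delay product used in the hardware comparison is simply the energy of an inference multiplied by its latency, and the quoted factors are ratios of this product. The expressions below only restate the abstract's numbers; the underlying energy and latency values are not given here.

```latex
\mathrm{EDP} = E_{\text{inference}} \times t_{\text{inference}},
\qquad
\frac{\mathrm{EDP}_{\text{non-edge GPU}}}{\mathrm{EDP}_{\text{Loihi 2}}} \approx 20{,}721,
\qquad
\frac{\mathrm{EDP}_{\text{edge GPU}}}{\mathrm{EDP}_{\text{Loihi 2}}} \approx 541
```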
Source journal metrics: CiteScore 9.60 · Self-citation rate 0.00% · Articles published 25 · Review time 10 weeks