A fine-tuned feature descriptor for pedestrian action recognition in autonomous vehicles

Q4 Engineering
P. Ushapreethi, G. Priya
{"title":"A fine-tuned feature descriptor for pedestrian action recognition in autonomous vehicles","authors":"P. Ushapreethi, G. Priya","doi":"10.1504/IJVICS.2021.10035880","DOIUrl":null,"url":null,"abstract":"The autonomous vehicle is the dream project of most of the majestic companies; however, providing a full-fledged autonomous vehicle is very complicated. In this paper, the pedestrian actions are captured using cameras and fine-tuned within a limited amount of time. Certain features of the captured video and their efficient feature descriptors achieve improved accuracy in pedestrian action recognition. The Skeleton based Spatio-Temporal Interest Points (S-STIP) feature is combined with the new interclass discriminative dictionaries. The sparse descriptor is constructed using sparse coding based on orthogonal matching pursuit algorithm and dictionary learning based on Efficient Block Coordinate Descent (EBCD) algorithm. Finally, the sparse descriptor is given as input to the SVM classifier for recognising pedestrian actions. 
The human action data sets KTH, Weizmann and JAAD are used for experimentation, and the combination of the S-STIP feature and the enhanced sparse descriptor achieves better performance compared to other existing action recognition methods.","PeriodicalId":39333,"journal":{"name":"International Journal of Vehicle Information and Communication Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Vehicle Information and Communication Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1504/IJVICS.2021.10035880","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Engineering","Score":null,"Total":0}
Citations: 1

Abstract

Autonomous vehicles are a flagship project for many major companies; however, delivering a fully autonomous vehicle is very complicated. In this paper, pedestrian actions are captured by cameras and processed within a limited amount of time. Well-chosen features of the captured video, together with efficient feature descriptors, improve the accuracy of pedestrian action recognition. The Skeleton-based Spatio-Temporal Interest Points (S-STIP) feature is combined with new interclass discriminative dictionaries. The sparse descriptor is constructed using sparse coding based on the Orthogonal Matching Pursuit (OMP) algorithm and dictionary learning based on the Efficient Block Coordinate Descent (EBCD) algorithm. Finally, the sparse descriptor is given as input to an SVM classifier for recognising pedestrian actions. The human action data sets KTH, Weizmann and JAAD are used for experimentation, and the combination of the S-STIP feature and the enhanced sparse descriptor outperforms existing action recognition methods.
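The sparse-coding and dictionary-learning steps named in the abstract can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the function names are hypothetical, the dictionary update is a plain single-pass block coordinate descent rather than the paper's EBCD variant, and S-STIP feature extraction is omitted entirely.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: approximate x with a k-sparse code
    over the dictionary D (atoms in columns, assumed unit-norm)."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(k):
        # greedily pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms by least squares (the "orthogonal" step)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

def bcd_dictionary_update(D, X, A):
    """One block-coordinate-descent pass over the atoms: each column of D
    is re-fit by least squares with the codes A and all other atoms held
    fixed, so the reconstruction error never increases. (Atom
    renormalisation, used in full dictionary-learning loops, is omitted
    here for clarity.)"""
    R = X - D @ A                        # current residual matrix
    for j in range(D.shape[1]):
        aj = A[j, :]
        energy = aj @ aj
        if energy < 1e-12:               # atom unused by any sample
            continue
        Rj = R + np.outer(D[:, j], aj)   # residual with atom j removed
        d = Rj @ aj / energy             # closed-form minimiser for atom j
        R =J = Rj - np.outer(d, aj)
        D[:, j] = d
    return D
```

In a full pipeline, coding (OMP over each training descriptor) and dictionary updates would alternate until convergence, and the resulting sparse codes would form the descriptor handed to the SVM classifier.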
Source journal: International Journal of Vehicle Information and Communication Systems (Computer Science: Computer Science Applications)
CiteScore: 1.20
Self-citation rate: 0.00%
Articles per year: 15