Augmented and virtual reality based segmentation algorithm for human pose detection in wearable cameras

Q4 Engineering
Shraddha R. Modi, Hetalben Kanubhai Gevariya, Reshma Dayma, Adesh V. Panchal, Harshad L. Chaudhary
{"title":"Augmented and virtual reality based segmentation algorithm for human pose detection in wearable cameras","authors":"Shraddha R. Modi ,&nbsp;Hetalben Kanubhai Gevariya ,&nbsp;Reshma Dayma ,&nbsp;Adesh V. Panchal ,&nbsp;Harshad L. Chaudhary","doi":"10.1016/j.measen.2024.101402","DOIUrl":null,"url":null,"abstract":"<div><div>Pose graph optimization is a crucial method that helps reduce cumulative errors while estimating visual trajectories for wearable cameras. However, when the posture graph's size increases with each additional camera movement, the optimization's efficiency diminishes. In terms of ongoing sensitive applications, such as extended reality and computer-generated reality, direction assessment is a major test. This research proposes an incremental pose graph segmentation technique that accounts for camera orientation variations as a solution to this challenge. The computation only improves the cameras that have seen large direction changes by breaking the posture chart during these instances. As a result, pose graph optimization is essentially slowed down and optimized more quickly. For every camera that hasn't been optimized using a pose graph, the algorithm employs the wearable cameras at the start and end of each camera's trajectory segment. The final camera in attendance is then determined by weighted average the various postures evaluated with these wearable cameras; this eliminates the need for lengthy nonlinear enhancement computations, reduces disturbance, and achieves excellent accuracy. Experiments on the EuRoC, TUM, and KITTI datasets demonstrate that pose graph optimization scope is reduced while maintaining camera trajectories accuracy.</div></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"36 ","pages":"Article 101402"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Measurement Sensors","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2665917424003787","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0

Abstract

Pose graph optimization is a key technique for reducing cumulative error when estimating the visual trajectory of a wearable camera. However, because the pose graph grows with every additional camera movement, the efficiency of the optimization degrades over time. For latency-sensitive applications such as augmented reality and virtual reality, orientation estimation under these conditions is a major challenge. This research proposes an incremental pose graph segmentation technique that accounts for camera orientation variations as a solution to this challenge. The algorithm splits the pose graph at the instants where the camera orientation changes sharply and optimizes only the cameras involved in those changes. As a result, the scale of the pose graph optimization is substantially reduced and the optimization converges more quickly. For every camera pose that is excluded from the pose graph optimization, the algorithm uses the optimized poses of the cameras at the start and end of that camera's trajectory segment; the final pose is then obtained as a weighted average of the poses estimated from these two anchors. This avoids lengthy nonlinear optimization, suppresses noise, and achieves high accuracy. Experiments on the EuRoC, TUM, and KITTI datasets demonstrate that the scope of the pose graph optimization is reduced while the accuracy of the camera trajectories is maintained.
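The abstract describes two concrete mechanisms: segmenting the trajectory wherever the camera orientation changes sharply, and recovering the poses of cameras that are excluded from optimization by a weighted average between the optimized poses at the start and end of their segment. The sketch below (Python, using SciPy rotations) illustrates those two steps under assumed details; the 15° threshold, the pose representation, and the helper names `segment_by_orientation` and `interpolate_segment` are illustrative and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of the two ideas in the
# abstract: (1) split a camera trajectory into segments wherever the relative
# rotation between consecutive frames is large, and (2) fill in the poses of
# un-optimized cameras by weighted averaging between the optimized poses at
# the start and end of their segment (SLERP for rotation, linear
# interpolation for translation). Threshold and representations are assumed.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp


def segment_by_orientation(rotations, angle_thresh_deg=15.0):
    """Return frame indices where a new trajectory segment starts because the
    orientation change between consecutive camera frames exceeds the threshold."""
    breaks = [0]
    for i in range(1, len(rotations)):
        rel = rotations[i - 1].inv() * rotations[i]
        if np.degrees(rel.magnitude()) > angle_thresh_deg:
            breaks.append(i)
    breaks.append(len(rotations) - 1)
    return sorted(set(breaks))


def interpolate_segment(t_start, t_end, R_start, R_end, timestamps):
    """Weighted average of the two optimized endpoint poses for every
    un-optimized camera inside one trajectory segment."""
    timestamps = np.asarray(timestamps, dtype=float)
    slerp = Slerp([timestamps[0], timestamps[-1]],
                  Rotation.concatenate([R_start, R_end]))
    w = (timestamps - timestamps[0]) / (timestamps[-1] - timestamps[0])
    translations = (1.0 - w)[:, None] * t_start + w[:, None] * t_end
    return slerp(timestamps), translations


if __name__ == "__main__":
    # Toy trajectory: 6 frames with one sharp turn at frame 3.
    rots = [Rotation.from_euler("z", a, degrees=True) for a in (0, 2, 4, 40, 42, 44)]
    print(segment_by_orientation(rots))  # e.g. [0, 3, 5]

    # Interpolate the un-optimized middle pose of the first segment.
    R_interp, t_interp = interpolate_segment(
        np.zeros(3), np.array([1.0, 0.0, 0.0]), rots[0], rots[2], [0.0, 1.0, 2.0])
    print(np.round(t_interp, 2))  # translations blended along the segment
```

This replaces a full nonlinear re-optimization of the intermediate poses with a closed-form blend between two already-optimized anchors, which is the cost saving the abstract claims.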
Source journal: Measurement Sensors (Engineering - Industrial and Manufacturing Engineering)
CiteScore: 3.10
Self-citation rate: 0.00%
Articles per year: 184
Review time: 56 days