Silhouette labeling and tracking in calibrated omnidirectional video sequences

K. Delibasis, Ilias Maglogiannis, V. Plagianakos
{"title":"Silhouette labeling and tracking in calibrated omnidirectional video sequences","authors":"K. Delibasis, Ilias Maglogiannis, V. Plagianakos","doi":"10.1145/2910674.2910726","DOIUrl":null,"url":null,"abstract":"In this paper, we present a methodology for labeling and tracking human silhouettes in indoor videos acquired by omnidirectional (fish-eye) cameras. The proposed methodology is based on a fisheye camera model that employs a spherical optical element and central projection, which has been calibrated to allow extraction of 3D geometry clues as described in [11]. The proposed algorithm requires input from a video segmentation algorithm, generating segmented human silhouettes. The history of a person's real position, as well as his appearance in the form of R, G, B color values are utilized in the described methodology. According to initial experimentation, the proposed algorithm is able to track efficiently multiple silhouettes with prolonged partial or full occlusions and it can calculate the trajectory of each silhouette. The algorithm can operate in the presence of imperfect segmentation, with the persons moving in any direction with respect to the camera, thus producing radically different shapes and color appearances.","PeriodicalId":359504,"journal":{"name":"Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2910674.2910726","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In this paper, we present a methodology for labeling and tracking human silhouettes in indoor videos acquired by omnidirectional (fish-eye) cameras. The proposed methodology is based on a fish-eye camera model that employs a spherical optical element and central projection, and that has been calibrated to allow the extraction of 3D geometry cues, as described in [11]. The proposed algorithm requires input from a video segmentation algorithm that generates segmented human silhouettes. The methodology utilizes the history of each person's real-world position, as well as their appearance in the form of R, G, B color values. According to initial experiments, the proposed algorithm is able to efficiently track multiple silhouettes through prolonged partial or full occlusions and can compute the trajectory of each silhouette. The algorithm operates in the presence of imperfect segmentation, with persons moving in any direction with respect to the camera and thus producing radically different shapes and color appearances.
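To make the tracking idea concrete, the sketch below illustrates one way a silhouette-to-person assignment could combine the two cues the abstract mentions: the history of real-world position (obtained from the calibrated fish-eye model) and the mean R, G, B appearance of each silhouette. This is not the authors' implementation; the class `Track`, the function `match_silhouettes`, and the weights `w_pos`, `w_app`, and `max_cost` are hypothetical illustration choices.

```python
# Minimal sketch of position-plus-appearance matching for silhouette tracking.
# Assumption: each detection is a dict with 'position' (estimated floor
# coordinates in meters) and 'appearance' (mean R, G, B of the silhouette).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Track:
    label: int
    position: np.ndarray                           # last estimated (x, y) floor position
    appearance: np.ndarray                         # running mean of (R, G, B) values
    history: list = field(default_factory=list)    # trajectory of past positions

def match_silhouettes(tracks, detections, w_pos=1.0, w_app=0.05, max_cost=5.0):
    """Greedily assign segmented silhouettes (detections) to existing tracks."""
    costs = []
    for ti, t in enumerate(tracks):
        for di, d in enumerate(detections):
            pos_cost = np.linalg.norm(t.position - d["position"])      # spatial distance
            app_cost = np.linalg.norm(t.appearance - d["appearance"])  # color difference
            costs.append((w_pos * pos_cost + w_app * app_cost, ti, di))

    assigned_t, assigned_d, matches = set(), set(), []
    for cost, ti, di in sorted(costs):              # cheapest pairs first
        if cost > max_cost or ti in assigned_t or di in assigned_d:
            continue
        matches.append((ti, di))
        assigned_t.add(ti)
        assigned_d.add(di)

    for ti, di in matches:                          # update matched tracks
        d = detections[di]
        tracks[ti].history.append(tracks[ti].position)
        tracks[ti].position = d["position"]
        tracks[ti].appearance = 0.9 * tracks[ti].appearance + 0.1 * d["appearance"]
    return matches
```

Unmatched tracks would be carried forward unchanged during occlusions, which is one simple way a tracker of this kind can survive prolonged partial or full occlusions before re-associating a silhouette by position and color once it reappears.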