Efficient Semantic Enrichment Process for Human Trajectories in Surveillance Videos
Fang Bao, Xiaoyu Sun, Weilan Luo, Xintao Liu, G. Ji, Bin Zhao
2019 6th International Conference on Behavioral, Economic and Socio-Cultural Computing (BESC), October 2019
DOI: 10.1109/BESC48373.2019.8963491
Abstract
The increasing availability of surveillance cameras makes it convenient to collect large-scale videos that record the trajectories of human mobility behavior in various urban settings, so surveillance videos have become a new data source of spatiotemporal trajectories. However, a typical trajectory semantic enrichment process takes spatiotemporal trajectories as input, and such methods cannot be applied to video data directly. In this paper, we propose a semantic enrichment process framework for human trajectories in surveillance videos. It consists of four phases: trajectory identification in videos, trajectory transformation, sub-trajectory segmentation, and segment annotation, through which semantic trajectories can be derived from surveillance videos. Having observed that similarities between individual trajectories occur frequently, we propose a grid index-based method that searches for similar pre-annotated sub-trajectory segments in pixel space when retrieving semantic trajectories, in order to improve the performance of the approach. Finally, we demonstrate the effectiveness and efficiency of the proposed approach on a real-world data set.
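To illustrate the general idea of a grid index over pixel space for reusing pre-annotated sub-trajectory segments, here is a minimal sketch. The abstract does not specify the paper's exact index layout or similarity measure, so the class, cell size, and closest-point distance used below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the paper's): bucket pre-annotated
# sub-trajectory segments into a uniform grid over pixel coordinates, then
# retrieve candidates for a query segment via the cells its points fall in.
from collections import defaultdict
import math


class GridIndex:
    def __init__(self, cell_size=50):
        # cell_size is in pixels; each cell maps to the ids of segments passing through it
        self.cell_size = cell_size
        self.cells = defaultdict(set)
        self.segments = {}  # segment id -> (list of (x, y) points, annotation label)

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def add_segment(self, seg_id, points, label):
        """Register a pre-annotated sub-trajectory segment given in pixel coordinates."""
        self.segments[seg_id] = (points, label)
        for x, y in points:
            self.cells[self._cell(x, y)].add(seg_id)

    def candidates(self, query_points):
        """Return ids of indexed segments that share at least one grid cell with the query."""
        hits = set()
        for x, y in query_points:
            hits |= self.cells.get(self._cell(x, y), set())
        return hits

    def best_match(self, query_points):
        """Pick the candidate with the smallest average closest-point distance to the query."""
        best_id, best_dist = None, math.inf
        for seg_id in self.candidates(query_points):
            points, _ = self.segments[seg_id]
            dist = sum(
                min(math.dist(q, p) for p in points) for q in query_points
            ) / len(query_points)
            if dist < best_dist:
                best_id, best_dist = seg_id, dist
        return best_id, best_dist


# Usage: annotate a new sub-trajectory by reusing the label of its closest indexed segment.
index = GridIndex(cell_size=50)
index.add_segment("s1", [(10, 10), (60, 60), (110, 110)], label="crossing plaza")
index.add_segment("s2", [(400, 20), (400, 80), (400, 140)], label="walking along fence")

match_id, _ = index.best_match([(15, 12), (58, 63)])
print(index.segments[match_id][1])  # -> "crossing plaza"
```

The grid restricts the similarity search to segments that overlap the query's cells, so only a small candidate set is scored instead of every pre-annotated segment, which is the performance benefit the abstract attributes to the grid index.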