{"title":"基于LDA的运动信息视频监控序列分类","authors":"A. Diop, S. Meza, M. Gordan, A. Vlaicu","doi":"10.23919/ICACT.2018.8323807","DOIUrl":null,"url":null,"abstract":"Video surveillance is one of the key components in todays' public security. The possibility to identify abnormal events in such sequences is a difficult problem in computer vision with the aim of providing automatic means of analysis. The use of Latent Dirichlet Allocation (LDA) provided encouraging results for topic classification in text documents and extensions to the video range have already been presented in the literature. The paper approaches video sequence classification considering the extension of the LDA model by building a vocabulary based on motion information “words” that are used to isolate events/topics present in the video. The implementation is tested on the PETS datasets and results are compared with state of the art.","PeriodicalId":228625,"journal":{"name":"2018 20th International Conference on Advanced Communication Technology (ICACT)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"LDA based classification of video surveillance sequences using motion information\",\"authors\":\"A. Diop, S. Meza, M. Gordan, A. Vlaicu\",\"doi\":\"10.23919/ICACT.2018.8323807\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Video surveillance is one of the key components in todays' public security. The possibility to identify abnormal events in such sequences is a difficult problem in computer vision with the aim of providing automatic means of analysis. The use of Latent Dirichlet Allocation (LDA) provided encouraging results for topic classification in text documents and extensions to the video range have already been presented in the literature. The paper approaches video sequence classification considering the extension of the LDA model by building a vocabulary based on motion information “words” that are used to isolate events/topics present in the video. The implementation is tested on the PETS datasets and results are compared with state of the art.\",\"PeriodicalId\":228625,\"journal\":{\"name\":\"2018 20th International Conference on Advanced Communication Technology (ICACT)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 20th International Conference on Advanced Communication Technology (ICACT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ICACT.2018.8323807\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 20th International Conference on Advanced Communication Technology (ICACT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ICACT.2018.8323807","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LDA based classification of video surveillance sequences using motion information
Video surveillance is one of the key components of today's public security. Identifying abnormal events in such sequences is a difficult problem in computer vision, with the aim of providing automatic means of analysis. Latent Dirichlet Allocation (LDA) has produced encouraging results for topic classification in text documents, and extensions to the video domain have already been presented in the literature. This paper approaches video sequence classification by extending the LDA model: a vocabulary of motion-information “words” is built and used to isolate the events/topics present in the video. The implementation is tested on the PETS datasets and the results are compared with the state of the art.
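The abstract does not detail how motion "words" are formed, so the following is only an illustrative sketch of the general bag-of-motion-words + LDA idea, not the authors' implementation. The synthetic motion vectors, the k-means codebook size, and the use of scikit-learn's LatentDirichletAllocation are all assumptions made for the example.

```python
# Illustrative sketch (assumed pipeline, not the paper's exact method):
# quantise per-clip motion vectors into a vocabulary of motion "words",
# build clip-level word histograms, then fit LDA to recover topics/events.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Stand-in for per-clip motion features (e.g., optical-flow vectors sampled
# on a grid); in practice these would come from the surveillance footage.
clips = [rng.normal(loc=c, scale=0.5, size=(200, 2))
         for c in ([0.0, 0.0], [3.0, 0.0], [0.0, 3.0])]

# 1) Build a motion-word vocabulary by quantising motion vectors with k-means.
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(clips))

# 2) Represent each clip as a bag of motion words (histogram over codebook entries).
def bag_of_motion_words(motion_vectors, codebook):
    words = codebook.predict(motion_vectors)
    return np.bincount(words, minlength=codebook.n_clusters)

doc_term = np.array([bag_of_motion_words(c, codebook) for c in clips])

# 3) Fit LDA on the clip/word counts; each topic is a distribution over motion
#    words and can be read as a recurring event/activity pattern.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
clip_topics = lda.fit_transform(doc_term)
print(clip_topics)  # per-clip topic mixtures, usable for sequence classification
```

Under these assumptions, classifying or flagging a sequence would then amount to comparing its topic mixture against those of known normal/abnormal clips.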