Aravind Namandi Vembu, P. Natarajan, Shuang Wu, R. Prasad, P. Natarajan
{"title":"Graph based multimodal word clustering for video event detection","authors":"Aravind Namandi Vembu, P. Natarajan, Shuang Wu, R. Prasad, P. Natarajan","doi":"10.1109/ICASSP.2013.6638342","DOIUrl":null,"url":null,"abstract":"Combining diverse low-level features from multiple modalities has consistently improved performance over a range of video processing tasks, including event detection. In our work, we study graph based clustering techniques for integrating information from multiple modalities by identifying word clusters spread across the different modalities. We present different methods to identify word clusters including word similarity graph partitioning, word-video co-clustering and Latent Semantic Indexing and the impact of different metrics to quantify the co-occurrence of words. We present experimental results on a ≈45000 video dataset used in the TRECVID MED 11 evaluations. Our experiments show that multimodal features have consistent performance gains over the use of individual features. Further, word similarity graph construction using a complete graph representation consistently improves over partite graphs and early fusion based multimodal systems. Finally, we see additional performance gains by fusing multimodal features with individual features.","PeriodicalId":183968,"journal":{"name":"2013 IEEE International Conference on Acoustics, Speech and Signal Processing","volume":"150 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Conference on Acoustics, Speech and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSP.2013.6638342","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Combining diverse low-level features from multiple modalities has consistently improved performance across a range of video processing tasks, including event detection. In our work, we study graph-based clustering techniques that integrate information from multiple modalities by identifying word clusters spread across the different modalities. We present several methods for identifying word clusters, including word-similarity graph partitioning, word-video co-clustering, and Latent Semantic Indexing, and we study the impact of different metrics for quantifying word co-occurrence. We report experimental results on a dataset of ≈45,000 videos used in the TRECVID MED 11 evaluations. Our experiments show that multimodal features yield consistent performance gains over individual features. Further, constructing the word-similarity graph with a complete graph representation consistently improves over partite graphs and over early-fusion multimodal systems. Finally, we observe additional performance gains by fusing multimodal features with individual features.
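The abstract does not spell out the graph construction or the clustering algorithm, so the following is only a minimal sketch under stated assumptions: words from all modalities (e.g., ASR transcripts and visual concept labels) are placed in one shared graph, edges are weighted by a positive-PMI co-occurrence score over per-video "documents", and the graph is partitioned with spectral clustering. The toy vocabularies, the PMI weighting, and the two-cluster setting are illustrative choices, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy per-video word sets mixing modalities (hypothetical data):
# ASR words and visual concept labels share one vocabulary, so the
# resulting graph is "complete" across modalities rather than partite.
videos = [
    {"ball", "kick", "grass", "crowd"},     # soccer clip
    {"ball", "goal", "cheer", "crowd"},     # soccer clip
    {"cake", "candle", "sing", "clap"},     # birthday clip
    {"cake", "candle", "balloon", "clap"},  # birthday clip
]
vocab = sorted(set().union(*videos))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Word-by-video occurrence matrix.
X = np.zeros((V, len(videos)))
for j, doc in enumerate(videos):
    for w in doc:
        X[idx[w], j] = 1.0

# Co-occurrence counts and a PMI-style edge weight; this is one of
# several possible co-occurrence metrics the abstract alludes to.
co = X @ X.T                        # co[i, k] = #videos containing both words
p_w = X.sum(axis=1) / len(videos)   # marginal word probabilities
p_co = co / len(videos)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_co / np.outer(p_w, p_w))
pmi = np.maximum(np.nan_to_num(pmi, neginf=0.0), 0.0)  # positive PMI
np.fill_diagonal(pmi, 0.0)

# Partition the word-similarity graph; each cluster groups words
# across modalities that tend to co-occur in the same videos.
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(pmi + 1e-6)           # small offset keeps the graph connected

for c in range(2):
    print(f"cluster {c}:", [w for w in vocab if labels[idx[w]] == c])
```

In this complete-graph setup, edges are permitted between any pair of words regardless of modality, which matches the representation the abstract reports as strongest; a partite variant would restrict edges to pairs of words from different modalities.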