{"title":"基于事件排序的大型新闻视频档案汇总","authors":"Duy-Dinh Le, S. Satoh","doi":"10.1109/ICSC.2011.91","DOIUrl":null,"url":null,"abstract":"We present an approach to extract and rank important events in large news video archives. Our approach relies on the assumption that frequent patterns occurring in the large video datasets might correspond to important events. We propose a method to automatically find, analyze, and associate frequent patterns to events in the video datasets. This problem is challenging because: firstly, the event boundary is unknown and large variations in illumination, camera motion, occlusions, and text overlays make it difficult to select appropriate features for event representation. Secondly, the number of frequent patterns is usually large, a method to rank them is required for applications such as recommendation and summarization. Thirdly, large datasets require scalable methods to handle. The novelty of the proposed method is that temporal information is used to rank frequent patterns and that scalable methods from video processing and data mining are integrated seamlessly to handle large datasets. Experimental results on 2,768 news video programs (approx. 1,400 hours of video) broadcast by NHK from 2001 to 2008 show that the method can find important events for summarization and is scalable on large datasets.","PeriodicalId":408382,"journal":{"name":"2011 IEEE Fifth International Conference on Semantic Computing","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Summarizing Large News Video Archives by Event Ranking\",\"authors\":\"Duy-Dinh Le, S. Satoh\",\"doi\":\"10.1109/ICSC.2011.91\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present an approach to extract and rank important events in large news video archives. Our approach relies on the assumption that frequent patterns occurring in the large video datasets might correspond to important events. We propose a method to automatically find, analyze, and associate frequent patterns to events in the video datasets. This problem is challenging because: firstly, the event boundary is unknown and large variations in illumination, camera motion, occlusions, and text overlays make it difficult to select appropriate features for event representation. Secondly, the number of frequent patterns is usually large, a method to rank them is required for applications such as recommendation and summarization. Thirdly, large datasets require scalable methods to handle. The novelty of the proposed method is that temporal information is used to rank frequent patterns and that scalable methods from video processing and data mining are integrated seamlessly to handle large datasets. Experimental results on 2,768 news video programs (approx. 
1,400 hours of video) broadcast by NHK from 2001 to 2008 show that the method can find important events for summarization and is scalable on large datasets.\",\"PeriodicalId\":408382,\"journal\":{\"name\":\"2011 IEEE Fifth International Conference on Semantic Computing\",\"volume\":\"72 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 IEEE Fifth International Conference on Semantic Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSC.2011.91\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE Fifth International Conference on Semantic Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSC.2011.91","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Summarizing Large News Video Archives by Event Ranking
Abstract: We present an approach to extract and rank important events in large news video archives. Our approach relies on the assumption that frequent patterns occurring in large video datasets are likely to correspond to important events. We propose a method to automatically find, analyze, and associate frequent patterns with events in video datasets. The problem is challenging for three reasons. First, event boundaries are unknown, and large variations in illumination, camera motion, occlusion, and text overlays make it difficult to select appropriate features for event representation. Second, the number of frequent patterns is usually large, so a method to rank them is needed for applications such as recommendation and summarization. Third, large datasets call for scalable methods. The novelty of the proposed method is that temporal information is used to rank frequent patterns and that scalable techniques from video processing and data mining are integrated seamlessly to handle large datasets. Experimental results on 2,768 news video programs (approximately 1,400 hours of video) broadcast by NHK from 2001 to 2008 show that the method finds important events for summarization and scales to large datasets.
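
The abstract says that temporal information is used to rank frequent patterns, but it does not give the ranking formula. As a hedged illustration only, not the authors' method, the Python sketch below ranks hypothetical frequent patterns, modeled as groups of near-duplicate shots with their broadcast dates, by how densely their occurrences cluster in time; the `FrequentPattern` class, the `burstiness_score` function, and the toy data are all assumptions introduced for this sketch.

```python
# Illustrative sketch only: ranks hypothetical frequent patterns by temporal burstiness.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List


@dataclass
class FrequentPattern:
    """A group of near-duplicate shots assumed to show the same recurring footage."""
    pattern_id: str
    air_dates: List[date]  # broadcast date of each occurrence in the archive


def burstiness_score(pattern: FrequentPattern) -> float:
    """Hypothetical score: occurrences per day of the pattern's active span.

    Event footage tends to recur densely within a short news cycle, while
    program boilerplate (openings, weather graphics) recurs steadily for years,
    so a bursty pattern gets a higher score than an evenly spread one.
    """
    n = len(pattern.air_dates)
    if n < 2:
        return 0.0
    span_days = (max(pattern.air_dates) - min(pattern.air_dates)).days + 1
    return n / span_days


def rank_patterns(patterns: List[FrequentPattern]) -> List[FrequentPattern]:
    """Sort patterns from most to least event-like under the score above."""
    return sorted(patterns, key=burstiness_score, reverse=True)


if __name__ == "__main__":
    # Toy data: a weekly program opening spread over years vs. a four-day news burst.
    opening = FrequentPattern(
        "studio_opening",
        [date(2001, 1, 1) + timedelta(days=7 * k) for k in range(300)],
    )
    burst = FrequentPattern(
        "earthquake_footage",
        [date(2004, 10, 23) + timedelta(days=k // 3) for k in range(12)],
    )
    for p in rank_patterns([opening, burst]):
        print(f"{p.pattern_id}: {burstiness_score(p):.3f}")
```

In this toy example the bursty pattern outranks the long-running opening, which matches the abstract's intuition that important events recur heavily within a short period rather than uniformly across the archive.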