Motch: an automatic motion type characterization system for sensor-rich videos
Authors: Guanfeng Wang, Beomjoo Seo, Roger Zimmermann
Published in: Proceedings of the 20th ACM international conference on Multimedia, 2012-10-29
DOI: 10.1145/2393347.2396462
Citations: 7
Abstract
Camera motion information facilitates the inference of higher-level semantic descriptions in many video applications, e.g., video retrieval. However, an efficient and accurate methodology for annotating videos with camera motion information remains an elusive goal. In our recent work we have investigated the fusion of captured video with a continuous stream of sensor metadata. For these so-called sensor-rich videos we present a system, called Motch, which precisely partitions a video document into subshots, automatically characterizes the camera motions, and provides video subshot browsing through an interactive, map-based interface. Moreover, the system computes and presents motion type statistics for each video in real time and renders the different subshots distinctively on the map, synchronized with the video playback.