{"title":"Consensus-based matching and tracking of keypoints for object tracking","authors":"G. Nebehay, R. Pflugfelder","doi":"10.1109/WACV.2014.6836013","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836013","url":null,"abstract":"We propose a novel keypoint-based method for long-term model-free object tracking in a combined matching-and-tracking framework. In order to localise the object in every frame, each keypoint casts votes for the object center. As erroneous keypoints are hard to avoid, we employ a novel consensus-based scheme for outlier detection in the voting behaviour. To make this approach computationally feasible, we propose not to employ an accumulator space for votes, but rather to cluster votes directly in the image space. By transforming votes based on the current keypoint constellation, we account for changes of the object in scale and rotation. In contrast to competing approaches, we refrain from updating the appearance information, thus avoiding the danger of making errors. The use of fast keypoint detectors and binary descriptors allows for our implementation to run in real-time. We demonstrate experimentally on a diverse dataset that is as large as 60 sequences that our method outperforms the state-of-the-art when high accuracy is required and visualise these results by employing a variant of success plots.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2 1","pages":"862-869"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88963142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"System for semi-automated surveying of street-lighting poles from street-level panoramic images","authors":"L. Hazelhoff, Ivo M. Creusen, P. D. With","doi":"10.1109/WACV.2014.6836109","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836109","url":null,"abstract":"Accurate and up-to-date inventories of lighting poles are of interest to energy companies, beneficial for the transition to energy-efficient lighting and may contribute to a more adequate lighting of streets. This potentially improves social security and reduces crime and vandalism during nighttime. This paper describes a system for automated surveying of lighting poles from street-level panoramic images. The system consists of two independent detectors, focusing at the detection of the pole itself and at the detection of a specific lighting fixture type. Both follow the same approach, and start with detection of the feature of interest (pole or fixture) within the individual images, followed by a multi-view analysis to retrieve the real-world coordinates of the poles. Afterwards, the detection output of both algorithms is merged. Large-scale validations, covering about 135 km of road, show that over 91% of the lighting poles is found, while the precision remains above 50%. When applying this system in a semi-automated fashion, high-quality inventories can be created up to 5 times more efficiently compared to manually surveying all poles from the images.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"120 1","pages":"129-136"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86167045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A joint perspective towards image super-resolution: Unifying external- and self-examples","authors":"Zhangyang Wang, Zhaowen Wang, Shiyu Chang, Jianchao Yang, Thomas S. Huang","doi":"10.1109/WACV.2014.6836048","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836048","url":null,"abstract":"Existing example-based super resolution (SR) methods are built upon either external-examples or self-examples. Although effective in certain cases, both methods suffer from their inherent limitation. This paper goes beyond these two classes of most common example-based SR approaches, and proposes a novel joint SR perspective. The joint SR exploits and maximizes the complementary advantages of external- and self-example based methods. We elaborate on exploitable priors for image components of different nature, and formulate their corresponding loss functions mathematically. Equipped with that, we construct a unified SR formulation, and propose an iterative joint super resolution (IJSR) algorithm to solve the optimization. Such a joint perspective approach leads to an impressive improvement of SR results both quantitatively and qualitatively.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"55 1","pages":"596-603"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77366545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data association based ant tracking with interactive error correction","authors":"Hoan Nguyen, Thomas Fasciano, D. Charbonneau, A. Dornhaus, M. Shin","doi":"10.1109/WACV.2014.6836003","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836003","url":null,"abstract":"The tracking of ants in video is important for the analysis of their complex group behavior. However, the manual analysis of these videos is tedious and time consuming. Automated tracking methods tend to drift due to frequent occlusions during their interactions and similarity in appearance. Semi-automated tracking methods enable corrections of tracking errors by incorporating user interaction. Although it is much lower than manual analysis, the required user time of the existing method is still typically 23 times the actual video length. In this paper, we propose a new semi-automated method that achieves similar accuracy while reducing the user interaction time by (1) mitigating user wait time by incorporating a data association tracking method to separate the tracking from user correction, and (2) minimizing the number of candidates visualized for user during correction. This proposed method is able to reduce the user interaction time by 67% while maintaining the accuracy within 3% of the previous semi-automated method [11].","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"33 1","pages":"941-946"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90699019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spatial-color layout feature for representing galaxy images","authors":"Yin Cui, Yongzhou Xiang, Kun Rong, R. Feris, Liangliang Cao","doi":"10.1109/WACV.2014.6836098","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836098","url":null,"abstract":"We propose a spatial-color layout feature specially designed for galaxy images. Inspired by findings on galaxy formation and evolution from Astronomy, the proposed feature captures both global and local morphological information of galaxies. In addition, our feature is scale and rotation invariant. By developing a hashing-based approach with the proposed feature, we implemented an efficient galaxy image retrieval system on a dataset with more than 280 thousand galaxy images from the Sloan Digital Sky Survey project. Given a query image, the proposed system can rank-order all galaxies from the dataset according to relevance in only 35 milliseconds on a single PC. To the best of our knowledge, this is one of the first works on galaxy-specific feature design and large-scale galaxy image retrieval. We evaluated the performance of the proposed feature and the galaxy image retrieval system using web user annotations, showing that the proposed feature outperforms other classic features, including HOG, Gist, LBP, and Color-histograms. The success of our retrieval system demonstrates the advantages of leveraging computer vision techniques in Astronomy problems.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"66 1","pages":"213-219"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91133205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active Clustering with Ensembles for Social structure extraction","authors":"Jeremiah R. Barr, Leonardo A. Cament, K. Bowyer, P. Flynn","doi":"10.1109/WACV.2014.6835999","DOIUrl":"https://doi.org/10.1109/WACV.2014.6835999","url":null,"abstract":"We introduce a method for extracting the social network structure for the persons appearing in a set of video clips. Individuals are unknown, and are not matched against known enrollments. An identity cluster representing an individual is formed by grouping similar-appearing faces from different videos. Each identity cluster is represented by a node in the social network. Two nodes are linked if the faces from their clusters appeared together in one or more video frames. Our approach incorporates a novel active clustering technique to create more accurate identity clusters based on feedback from the user about ambiguously matched faces. The final output consists of one or more network structures that represent the social group(s), and a list of persons who potentially connect multiple social groups. Our results demonstrate the efficacy of the proposed clustering algorithm and network analysis techniques.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"9 1","pages":"969-976"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85994711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive video segmentation using occlusion boundaries and temporally coherent superpixels","authors":"Radu Dondera, Vlad I. Morariu, Yulu Wang, L. Davis","doi":"10.1109/WACV.2014.6836023","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836023","url":null,"abstract":"We propose an interactive video segmentation system built on the basis of occlusion and long term spatio-temporal structure cues. User supervision is incorporated in a superpixel graph clustering framework that differs crucially from prior art in that it modifies the graph according to the output of an occlusion boundary detector. Working with long temporal intervals (up to 100 frames) enables our system to significantly reduce annotation effort with respect to state of the art systems. Even though the segmentation results are less than perfect, they are obtained efficiently and can be used in weakly supervised learning from video or for video content description. We do not rely on a discriminative object appearance model and allow extracting multiple foreground objects together, saving user time if more than one object is present. Additional experiments with unsupervised clustering based on occlusion boundaries demonstrate the importance of this cue for video segmentation and thus validate our system design.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"180 1","pages":"784-791"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88468919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simultaneous recognition of facial expression and identity via sparse representation","authors":"M. Mohammadi, E. Fatemizadeh, M. Mahoor","doi":"10.1109/WACV.2014.6835986","DOIUrl":"https://doi.org/10.1109/WACV.2014.6835986","url":null,"abstract":"Automatic recognition of facial expression and facial identity from visual data are two challenging problems that are tied together. In the past decade, researchers have mostly tried to solve these two problems separately to come up with face identification systems that are expression-independent and facial expressions recognition systems that are person-independent. This paper presents a new framework using sparse representation for simultaneous recognition of facial expression and identity. Our framework is based on the assumption that any facial appearance is a sparse combination of identities and expressions (i.e., one identity and one expression). Our experimental results using the CK+ and MMI face datasets show that the proposed approach outperforms methods that conduct face identification and face recognition individually.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"8 1","pages":"1066-1073"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82892709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time video decolorization using bilateral filtering","authors":"Yibing Song, Linchao Bao, Qingxiong Yang","doi":"10.1109/WACV.2014.6836106","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836106","url":null,"abstract":"This paper presents a real-time decolorization method. Given the human visual systems preference for luminance information, the luminance should be preserved as much as possible during decolorization. As a result, the proposed decolorization method measures the amount of color contrast/detail lost when converting color to luminance. The detail loss is estimated by computing the difference between two intermediate images: one obtained by applying bilateral filter to the original color image, and the other obtained by applying joint bilateral filter to the original color image with its luminance as the guidance image. The estimated detail loss is then mapped to a grayscale image named residual image by minimizing the difference between the image gradients of the input color image and the objective grayscale image that is the sum of the residual image and the luminance. Apparently, the residual image will contain pixels with all zero values (that is the two intermediate images will be the same) only when no visual detail is missing in the luminance. Unlike most previous methods, the proposed decolorization method preserves both contrast in the color image and the luminance. Quantitative evaluation shows that it is the top performer on the standard test suite. Meanwhile it is very robust and can be directly used to convert videos while maintaining the temporal coherence. Specifically it can convert a high-resolution video (1280 × 720) in real time (about 28 Hz) on a 3.4 GHz i7 CPU.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"55 1","pages":"159-166"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90052446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video alignment to a common reference","authors":"Rahul Dutta, B. Draper, J. Beveridge","doi":"10.1109/WACV.2014.6836020","DOIUrl":"https://doi.org/10.1109/WACV.2014.6836020","url":null,"abstract":"Handheld videos include unintentional motion (jitter) and often intentional motion (pan and/or zoom). Human viewers prefer to see jitter removed, creating a smoothly moving camera. For video analysis, in contrast, aligning to a fixed stable background is sometimes preferable. This paper presents an algorithm that removes both forms of motion using a novel and efficient way of tracking background points while ignoring moving foreground points. The approach is related to image mosaicing, but the result is a video rather than an enlarged still image. It is also related to multiple object tracking approaches, but simpler since moving objects need not be explicitly tracked. The algorithm presented takes as input a video and returns one or several stabilized videos. Videos are broken into parts when the algorithm detects the background changing and it becomes necessary to fix upon a new background. Our approach assumes the person holding the camera is standing in one place and that objects in motion do not dominate the image. Our algorithm performs better than several previously published approaches when compared on 1,401 handheld videos from the recently released Point-and-Shoot Face Recognition Challenge (PASC). The source code for this algorithm is being made available.","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"6 1","pages":"808-815"},"PeriodicalIF":0.0,"publicationDate":"2014-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78767555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}