{"title":"Data handling displays","authors":"Maxim Lazarov, H. Pirsiavash, B. Sajadi, Uddipan Mukherjee, A. Majumder","doi":"10.1109/CVPRW.2009.5204320","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204320","url":null,"abstract":"Imagine a world in which people can use their hand-held mobile devices to project and transfer content. Everyone can join the collaboration by simply bringing their mobile devices close to each other. People can grab data from each others' devices with simple hand gestures. Now imagine a large display created by tiling multiple displays where multiple users can interact with a large dynamically changing data set in a collocated, collaborative setting and the displays will take care of the data transfer and handling functions in a way that is transparent to the users. In this paper we present a novel data-handling display which works as not only a display device but also as an interaction and data transfer module. We propose simple gesture based solutions to transfer information between these data-handling modules. We achieve high scalability by presenting a fully distributed architecture in which each device is responsible for its own data and also communicates and collaborates with other devices. We also show the usefulness of our work in visualizing large datasets and at the same time allowing multiple users to interact with the data.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"460 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128076854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mining discriminative adjectives and prepositions for natural scene recognition","authors":"Bangpeng Yao, Juan Carlos Niebles, Li Fei-Fei","doi":"10.1109/CVPRW.2009.5204222","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204222","url":null,"abstract":"This paper presents a method that considers not only patch appearances, but also patch relationships in the form of adjectives and prepositions for natural scene recognition. Most of the existing scene categorization approaches only use patch appearances or co-occurrence of patch appearances to determine the scene categories, but the relationships among patches remain ignored. Those relationships are, however, critical for recognition and understanding. For example, a `beach' scene can be characterized by a `sky' region above `sand', and a `water' region between `sky' and `sand'. We believe that exploiting such relations between image regions can improve scene recognition. In our approach, each image is represented as a spatial pyramid, from which we obtain a collection of patch appearances with spatial layout information. We apply a feature mining approach to get discriminative patch combinations. The mined patch combinations can be interpreted as adjectives or prepositions, which are used for scene understanding and recognition. Experimental results on a fifteen class scene dataset show that our approach achieves competitive state-of-the-art recognition accuracy, while providing a rich description of the scene classes in terms of the mined adjectives and prepositions.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134517324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatically detecting action units from faces of pain: Comparing shape and appearance features","authors":"P. Lucey, J. Cohn, S. Lucey, S. Sridharan, K. Prkachin","doi":"10.1109/CVPRW.2009.5204279","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204279","url":null,"abstract":"Recent psychological research suggests that facial movements are a reliable measure of pain. Automatic detection of facial movements associated with pain would contribute to patient care but is technically challenging. Facial movements may be subtle and accompanied by abrupt changes in head orientation. Active appearance models (AAM) have proven robust to naturally occurring facial behavior, yet AAM-based efforts to automatically detect action units (AUs) are few. Using image data from patients with rotator-cuff injuries, we describe an AAM-based automatic system that decouples shape and appearance to detect AUs on a frame-by-frame basis. Most current approaches to AU detection use only appearance features. We explored the relative efficacy of shape and appearance for AU detection. Consistent with the experience of human observers, we found specific relationships between action units and types of facial features. Several AU (e.g. AU4, 12, and 43) were more discriminable by shape than by appearance, whilst the opposite pattern was found for others (e.g. AU6, 7 and 10). AU-specific feature sets may yield optimal results.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130690799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion of a camera and a laser range sensor for vehicle recognition","authors":"S. Mohottala, Shintaro Ono, M. Kagesawa, K. Ikeuchi","doi":"10.1109/CVPRW.2009.5204099","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204099","url":null,"abstract":"This paper presents a system that fuses data from a vision sensor and a laser sensor for detection and classification. Fusion of a vision sensor and a laser range sensor enables us to obtain 3D information of an object together with its textures, offering high reliability and robustness to outdoor conditions. To evaluate the performance of the system, it is applied to recognition of on-street parked vehicles scanned from a moving probe vehicle. The evaluation experiments show obviously successful results, with a detection rate of 100% and an accuracy over 95% in recognizing four vehicle classes.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115870690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Alignment of 3D point clouds to overhead images","authors":"R. S. Kaminsky, Noah Snavely, S. Seitz, R. Szeliski","doi":"10.1109/CVPRW.2009.5204180","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204180","url":null,"abstract":"We address the problem of automatically aligning structure-from-motion reconstructions to overhead images, such as satellite images, maps and floor plans, generated from an orthographic camera. We compute the optimal alignment using an objective function that matches 3D points to image edges and imposes free space constraints based on the visibility of points in each camera. We demonstrate the accuracy of our alignment algorithm on several outdoor and indoor scenes using both satellite and floor plan images. We also present an application of our technique, which uses a labeled overhead image to automatically tag the input photo collection with textual information.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116337905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-rigid registration between histological and MR images of the prostate: A joint segmentation and registration framework","authors":"Yangming Ou, D. Shen, M. Feldman, J. Tomaszeweski, C. Davatzikos","doi":"10.1109/CVPRW.2009.5204347","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204347","url":null,"abstract":"This paper presents a 3D non-rigid registration algorithm between histological and MR images of the prostate with cancer. To compensate for the loss of 3D integrity in the histology sectioning process, series of 2D histological slices are first reconstructed into a 3D histological volume. After that, the 3D histology-MRI registration is obtained by maximizing a) landmark similarity and b) cancer region overlap between the two images. The former aims to capture distortions at prostate boundary and internal blob-like structures; and the latter aims to capture distortions specifically at cancer regions. In particular, landmark similarities, the former, is maximized by an annealing process, where correspondences between the automatically-detected boundary and internal landmarks are iteratively established in a fuzzy-to-deterministic fashion. Cancer region overlap, the latter, is maximized in a joint cancer segmentation and registration framework, where the two interleaved problems - segmentation and registration - inform each other in an iterative fashion. Registration accuracy is established by comparing against human-rater-defined landmarks and by comparing with other methods. The ultimate goal of this registration is to warp the histologically-defined cancer ground truth into MRI, for more thoroughly understanding MRI signal characteristics of the prostate cancerous tissue, which will promote the MRI-based prostate cancer diagnosis in the future studies.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123922609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HALF-SIFT: High-Accurate Localized Features for SIFT","authors":"Kai Cordes, Oliver Müller, B. Rosenhahn, J. Ostermann","doi":"10.1109/CVPRW.2009.5204283","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204283","url":null,"abstract":"In this paper, the accuracy of feature points in images detected by the scale invariant feature transform (SIFT) is analyzed. It is shown that there is a systematic error in the feature point localization. The systematic error is caused by the improper subpel and subscale estimation, an interpolation with a parabolic function. To avoid the systematic error, the detection of high-accurate localized features (HALF) is proposed. We present two models for the localization of a feature point in the scale-space, a Gaussian and a Difference of Gaussians based model function. For evaluation, ground truth image data is synthesized to experimentally prove the systematic error of SIFT and to show that the error is eliminated using HALF. Experiments with natural image data show that the proposed methods increase the accuracy of the feature point positions by 13.9% using the Gaussian and by 15.6% using the Difference of Gaussians model.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124022950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic recognition of fingerspelled words in British Sign Language","authors":"Stephan Liwicki, M. Everingham","doi":"10.1109/CVPRW.2009.5204291","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204291","url":null,"abstract":"We investigate the problem of recognizing words from video, fingerspelled using the British Sign Language (BSL) fingerspelling alphabet. This is a challenging task since the BSL alphabet involves both hands occluding each other, and contains signs which are ambiguous from the observer's viewpoint. The main contributions of our work include: (i) recognition based on hand shape alone, not requiring motion cues; (ii) robust visual features for hand shape recognition; (iii) scalability to large lexicon recognition with no re-training. We report results on a dataset of 1,000 low quality webcam videos of 100 words. The proposed method achieves a word recognition accuracy of 98.9%.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128666072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of involuntary subject movement on 3D face scans","authors":"Chris Boehnen, P. Flynn","doi":"10.1109/CVPRW.2009.5204324","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204324","url":null,"abstract":"The impact of natural movement/sway while standing still during the capture of a 3D face model for biometric applications has previously been believed to have a negligible impact on biometric performance. Utilizing a newly captured dataset this paper demonstrates a significant negative impact of standing. A 0.5 improvement in d' (test of correct/incorrect match distribution separation) per 3D face region and noticeable improvement to match distributions are shown to result from eliminating movement during the scanning process. By comparing these match distributions to those in the FRGC dataset this paper presents an argument for improving the accuracy of 3D face models by eliminating motion during the capture process.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125480572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scenes vs. objects: A comparative study of two approaches to context based recognition","authors":"Andrew Rabinovich, Serge J. Belongie","doi":"10.1109/CVPRW.2009.5204220","DOIUrl":"https://doi.org/10.1109/CVPRW.2009.5204220","url":null,"abstract":"Contextual models play a very important role in the task of object recognition. Over the years, two kinds of contextual models have emerged: models with contextual inference based on the statistical summary of the scene (we will refer to these as scene based context models, or SBC), and models representing the context in terms of relationships among objects in the image (object based context, or OBC). In designing object recognition systems, it is necessary to understand the theoretical and practical properties of such approaches. This work provides an analysis of these models and evaluates two of their representatives using the LabelMe dataset. We demonstrate a considerable margin of improvement using the OBC style approach.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125565794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}