Fourth IEEE International Conference on Computer Vision Systems (ICVS'06): Latest Publications

A Real-Time Large Disparity Range Stereo-System using FPGAs
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1007/11612704_5
Divyang K. Masrani, W. James MacLean
Citations: 81

Tracking Shopping Carts Using Mobile Cameras Viewing Ceiling-Mounted Retro-Reflective Bar Codes
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.60
Tom G. Zimmerman
Abstract: "Shopping Buddy" is a wireless multimedia terminal that clips onto a conventional shopping cart, allowing customers to scan items as they shop and providing self-checkout at the cart. Cart-tracking technology enables delivery of real-time location-based content, including a "you are here" navigation map, coupons, and recipes. A new cart-tracking technology is developed using a cart-mounted camera viewing ceiling-mounted retro-reflective bar codes. Tag acquisition and decoding are designed to run on an inexpensive microprocessor to meet stringent system cost, size, and power constraints. The image is quantized and compressed on the fly to minimize memory and processing requirements. Field testing in a retail store demonstrates system robustness under a variety of lighting conditions, delivering 5 cm spatial resolution.
Citations: 7

A Simple, Effective System for Automated Capture of High Dynamic Range Images
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.8
S. M. O'Malley
Abstract: In recent years, high dynamic range imaging (HDRI) has become a topic of intense research interest in the fields of computer vision, computer graphics, and commercial visualization. Yet, despite the inherent limitations of traditional low dynamic range imaging (LDRI) and the emerging need for HDRI in many applications, existing systems for HDR image capture remain proprietary, expensive, or manually guided. All of these factors limit the availability of effective HDRI tools, restricting studies that could otherwise benefit from this technology. To help alleviate this problem, we introduce a system and method for automated capture of high-quality, high-range images using commercially available digital cameras. We report results comparing acquisition time, image resolution, and dynamic range, and show these factors compare very favorably both to traditional manual capture methods and to specialized commercial HDRI systems.
Citations: 16

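The exposure-bracketing idea behind automated HDR capture can be illustrated with the standard weighted-merge step: each bracketed frame is divided by its exposure time to estimate scene radiance, and the estimates are blended with a hat weighting that discounts under- and over-exposed pixels. The sketch below assumes a linear camera response and is a generic illustration, not the paper's implementation; `merge_exposures` and its weighting scheme are hypothetical names.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed exposure stack into one HDR radiance map.

    images: list of float arrays in [0, 1], all the same shape.
    exposure_times: matching list of exposure times in seconds.
    Assumes a linear camera response; a real pipeline would first
    estimate and invert the camera response curve.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        # Hat weighting: trust mid-range pixels, discount near-black
        # and near-saturated ones.
        w = 1.0 - np.abs(2.0 * im - 1.0)
        num += w * (im / t)   # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

With a linear response, a pixel that is well exposed in at least one frame recovers its radiance exactly, even if it saturates in the longer exposures.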
Strategies for Object Manipulation using Foveal and Peripheral Vision
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.57
D. Kragic, Mårten Björkman
Abstract: Visual feedback is used extensively in robotics, with application areas ranging from human-robot interaction to object grasping and manipulation. While there have been many examples of how to develop the individual components required by these applications, there are very few general vision systems capable of performing a variety of tasks. In this paper, we concentrate on vision strategies for robotic manipulation tasks in a domestic environment. In particular, for fetch-and-carry tasks, we consider the issues related to the whole detect-approach-grasp loop. We address flexibility and robustness by using monocular and binocular visual cues and their integration. We demonstrate real-time disparity estimation, object recognition, and pose estimation. We also show how foveal and peripheral vision can be combined to provide both a wide, low-resolution and a narrow, high-resolution field of view.
Citations: 16

BC&GC-Based Dense Stereo By Belief Propagation
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.62
Hongsheng Zhang, S. Negahdaripour
Abstract: Belief propagation (BP) has emerged as a powerful tool in the realm of dense stereo computation. However, the underlying brightness constancy (BC) assumption of existing methods severely limits the range of their applications. Augmenting BC with a gradient constancy (GC) assumption has led to more accurate algorithms for optical flow computation. In this paper, these constraints are utilized in the BP framework to broaden the application of stereo vision for 3D reconstruction. Results from experiments with semi-synthetic and real data illustrate that an algorithm incorporating these models generally yields better estimates where the BC assumption is violated.
Citations: 1

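The combined data term this abstract describes can be sketched generically: build a cost volume that mixes a brightness-constancy penalty with a gradient-constancy penalty, which a BP solver would then regularize into a smooth disparity map. The function name, the `alpha` mixing weight, and the absolute-difference penalty below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def bcgc_cost(left, right, max_disp, alpha=0.5):
    """Per-pixel data cost combining brightness constancy (BC) with
    gradient constancy (GC) for 1-D horizontal disparities.

    Returns an (H, W, max_disp + 1) cost volume; a BP solver would
    pass messages over this volume to pick a smooth labeling.
    alpha balances the two terms (hypothetical parameter).
    """
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    gl = np.gradient(left, axis=1)   # horizontal image gradients
    gr = np.gradient(right, axis=1)
    H, W = left.shape
    cost = np.zeros((H, W, max_disp + 1))
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)   # right pixel at x - d
        gshift = np.roll(gr, d, axis=1)
        bc = np.abs(left - shifted)           # brightness constancy
        gc = np.abs(gl - gshift)              # gradient constancy
        cost[:, :, d] = (1 - alpha) * bc + alpha * gc
    return cost
```

The GC term makes the cost robust to an additive brightness offset between the two views, which is exactly the regime where pure BC matching breaks down.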
Evaluation of Visual Attention Models for Robots
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.24
M. Z. Aziz, B. Mertsching, Mahmoud Shafik, R. Stemmer
Abstract: This paper presents a new approach for providing visual attention in robot vision systems. Compared to other approaches, our method is very fast, as it processes regions rather than individual pixels. The proposed method first builds a list of regions by applying a shade- and shadow-tolerant segmentation step. The features of these regions are computed using their convex hulls in order to simplify and accelerate processing. Feature values are stored within the records of the respective regions instead of constructing a master map of attention. An algorithmic method is then applied for finding the focus of attention, in contrast to the mathematical approaches used by existing models. Experiments conducted on simulated and real image data have not only demonstrated the validity of the proposed approach but have also led to the establishment of a comprehensive robotic vision system.
Citations: 32

Face Tracking by Maximizing Classification Score of Face Detector Based on Rectangle Features
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.27
Akinori Hidaka, Kenji Nishida, Takio Kurita
Abstract: Face tracking continues to be an important topic in computer vision. We describe a tracking algorithm based on a static face detector. Our face detector is a rectangle-feature-based boosted classifier, which outputs a confidence that an input image is a face. The function that outputs this confidence, called a score function, contains important information about the location of a moving target. A target that has moved will be located in the gradient direction of the score function from its location before moving. Therefore, our tracker moves to the region where the score is maximal using the gradient information of this function. We show that this algorithm works by combining jumps in the gradient direction with a precise search in the local region.
Citations: 10

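The jump-then-refine loop this abstract describes can be sketched in a few lines: estimate the score gradient by finite differences, jump a fixed step along it, then do an exhaustive search in a small window around the landing point. The `score(x, y)` interface, the `step` size, and the search `radius` below are hypothetical choices, not the paper's parameters.

```python
import numpy as np

def track_step(score, pos, step=2, radius=1):
    """One tracking step: jump along the finite-difference gradient of
    the detector score, then refine with a local exhaustive search.

    score(x, y) -> float is assumed to return the boosted classifier's
    confidence for a candidate window position (hypothetical interface).
    """
    x, y = pos
    # Finite-difference estimate of the score gradient.
    gx = score(x + 1, y) - score(x - 1, y)
    gy = score(x, y + 1) - score(x, y - 1)
    norm = np.hypot(gx, gy)
    if norm > 1e-12:
        # Jump a fixed distance in the gradient direction.
        x = int(round(x + step * gx / norm))
        y = int(round(y + step * gy / norm))
    # Precise search in a small neighbourhood of the jump target.
    best = max(
        ((score(x + dx, y + dy), (x + dx, y + dy))
         for dx in range(-radius, radius + 1)
         for dy in range(-radius, radius + 1)),
        key=lambda t: t[0],
    )
    return best[1]
```

Iterating `track_step` climbs the score surface toward its maximum: the jump covers large inter-frame motion cheaply, while the local search gives pixel-accurate placement.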
Parallel Pipeline Volume Intersection for Real-Time 3D Shape Reconstruction on a PC Cluster
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.49
Xiaojun Wu, O. Takizawa, T. Matsuyama
Abstract: Human activity monitoring is one of the major tasks in the field of computer vision. Recently, not only 2D images but also 3D shapes of a moving person are desired in many cases, such as motion analysis, security monitoring, and 3D video creation. In this paper, we propose a parallel pipeline system on a PC cluster for reconstructing the 3D shape of a moving person in real time. For the 3D shape reconstruction, we have extended the volume intersection method to a 3-base-plane volume intersection. With this extension, the computation is greatly accelerated for arbitrary camera layouts. We also parallelized the 3-base-plane method and implemented it on a PC cluster, adopting pipeline processing on each node to improve throughput. To decrease the CPU idle time caused by I/O processing, image capturing, and communication between nodes, we implement the pipeline using multiple threads so that all stages can be executed concurrently. However, resource conflicts exist between stages in a real system. To avoid these conflicts while keeping the percentage of CPU running time high, we propose a tree-structured thread control model. As a result, the system obtains full 3D volumes of a moving person at about 12 frames per second with a voxel size of 5×5×5 mm^3. The effectiveness of the thread tree model in such real-time computation is also demonstrated by the experimental results.
Citations: 25

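Classical volume intersection (shape from silhouette), which this paper parallelizes, keeps a voxel only if it projects inside every camera's silhouette. The sketch below shows that core intersection test; the `project` interface is a hypothetical camera model, and the sketch omits the 3-base-plane acceleration and the pipeline parallelism that are the paper's actual contributions.

```python
import numpy as np

def visual_hull(silhouettes, project, grid):
    """Shape-from-silhouette by volume intersection: keep a voxel only
    if it projects inside every camera's silhouette.

    silhouettes: list of boolean (H, W) masks, one per camera.
    project: project(cam_index, pts) -> (N, 2) integer pixel (u, v)
             coordinates (hypothetical camera-model interface).
    grid: (N, 3) integer array of candidate voxel centres.
    """
    keep = np.ones(len(grid), dtype=bool)
    for i, sil in enumerate(silhouettes):
        uv = project(i, grid)
        u, v = uv[:, 0], uv[:, 1]
        # Voxels projecting outside the image cannot be inside the mask.
        inside = (
            (u >= 0) & (u < sil.shape[1])
            & (v >= 0) & (v < sil.shape[0])
        )
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit            # intersection across all views
    return grid[keep]
```

Each added camera can only carve voxels away, so the result is always a conservative superset of the true shape; real-time systems like the paper's spend their effort on making this carving fast per frame.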
Integration and Coordination in a Cognitive Vision System
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.36
S. Wrede, Marc Hanheide, S. Wachsmuth, G. Sagerer
Abstract: In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistive technology provides an ideal test bed for complex computer vision systems, including real-time components, human-computer interaction, dynamic 3D environments, and information retrieval aspects. In our scenario, the user wears an augmented reality device that supports him or her in everyday tasks by presenting information triggered by perceptual and contextual cues. The system integrates a wide variety of visual functions such as localization, object tracking and recognition, action recognition, and interactive object learning. We show how different kinds of system behavior are realized using the Active Memory Infrastructure, which provides the technical basis for distributed computation and a data- and event-driven integration approach.
Citations: 30

A Real-Time Scene Understanding System for Airport Apron Monitoring
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06) Pub Date : 2006-01-04 DOI: 10.1109/ICVS.2006.7
D. Thirde, M. Borg, J. Ferryman, F. Fusier, V. Valentin, F. Brémond, M. Thonnat
Abstract: This paper presents a distributed multi-camera visual surveillance system for automatic scene interpretation of airport aprons. The system comprises two main modules: Scene Tracking and Scene Understanding. The Scene Tracking module is responsible for detecting, tracking, and classifying the objects on the apron. The Scene Understanding module performs high-level interpretation of apron activities by applying cognitive spatio-temporal reasoning. The performance of the complete system is demonstrated for a range of representative test scenarios.
Citations: 47