Latest articles: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Multi-fiber reconstruction from DW-MRI using a continuous mixture of von Mises-Fisher distributions
Ritwik K. Kumar, Angelos Barmpoutis, B. Vemuri, P. Carney, T. Mareci
DOI: 10.1109/CVPRW.2008.4562991 · Published: 2008-07-15
In this paper we propose a method for reconstructing the Diffusion Weighted Magnetic Resonance (DW-MR) signal at each lattice point using a novel continuous mixture of von Mises-Fisher distribution functions. Unlike most existing methods, neither does this model assume a fixed functional form for the MR signal attenuation (e.g. 2nd or 4th order tensor) nor does it arbitrarily fix important mixture parameters like the number of components. We show that this continuous mixture has a closed-form expression and leads to a linear system which can be easily solved. Through extensive experimentation with synthetic data we show that this technique outperforms various other state-of-the-art techniques in resolving fiber crossings. Finally, we demonstrate the effectiveness of this method using real DW-MRI data from rat brain and optic chiasm.
Citations: 25
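For reference, the density family named in the title (a hedged sketch; the paper's particular mixing distribution and the closed-form signal model it yields are not reproduced here):

```latex
% von Mises-Fisher density on the unit sphere S^2
f(\mathbf{x}\mid\boldsymbol{\mu},\kappa)
  = \frac{\kappa}{4\pi\sinh\kappa}\,
    \exp\!\bigl(\kappa\,\boldsymbol{\mu}^{\top}\mathbf{x}\bigr),
\qquad \mathbf{x},\boldsymbol{\mu}\in\mathbb{S}^2,\ \kappa\ge 0 .

% A continuous mixture integrates over a mixing distribution G on the
% sphere instead of summing a fixed number of components:
S(\mathbf{x}) = \int_{\mathbb{S}^2}
    f(\mathbf{x}\mid\boldsymbol{\mu},\kappa)\,\mathrm{d}G(\boldsymbol{\mu}).
```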
Effective image database search via dimensionality reduction
A. Dahl, H. Aanæs
DOI: 10.1109/CVPRW.2008.4562957 · Published: 2008-06-23
Image search using the bag-of-words image representation is investigated further in this paper. This approach has shown promising results for large-scale image collections, making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image. It is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that we substantially reduce the dimensionality of the features by the use of PCA and the addition of color. Building of the visual vocabulary is typically done using k-means. We investigate a clustering algorithm based on the leader-follower principle (LF-clustering), in which the number of clusters is not fixed. The adaptive nature of LF-clustering is shown to improve the quality of the visual vocabulary. In the query step, features from the query image are assigned to the visual vocabulary. The dimensionality reduction enables us to do exact feature labeling using a kD-tree, instead of the approximate approaches normally used.
Despite the dimensionality reduction to between 6 and 15 dimensions, we obtain improved results compared to the traditional bag-of-words approach based on 128-dimensional SIFT features and k-means clustering.
Citations: 13
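The vocabulary-building and query steps described in the abstract can be sketched roughly as follows. This is a hedged illustration only: the radius threshold, learning rate, 10-dimensional projection, and random stand-in descriptors are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def leader_follower(features, radius, lr=0.05):
    # Leader-follower clustering: a feature within `radius` of its nearest
    # cluster center nudges that center toward it; otherwise it founds a
    # new cluster, so the vocabulary size adapts to the data.
    centers = [features[0].copy()]
    for f in features[1:]:
        d = np.linalg.norm(np.asarray(centers) - f, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius:
            centers[j] += lr * (f - centers[j])
        else:
            centers.append(f.copy())
    return np.asarray(centers)

rng = np.random.default_rng(0)
descs = rng.normal(size=(500, 128))          # stand-in for SIFT descriptors
descs -= descs.mean(axis=0)                  # center before PCA
_, _, Vt = np.linalg.svd(descs, full_matrices=False)
low = descs @ Vt[:10].T                      # PCA projection to 10 dims

vocab = leader_follower(low, radius=4.0)     # adaptive visual vocabulary
words = cKDTree(vocab).query(low)[1]         # exact (not approximate) labeling
```

The low dimensionality is what makes the exact kD-tree assignment affordable; at 128 dimensions a kD-tree degrades toward brute-force search.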
Interleaved pixel lookup for embedded computer vision
Kota Yamaguchi, Yoshihiro Watanabe, T. Komuro, M. Ishikawa
DOI: 10.1109/CVPRW.2008.4563152 · Published: 2008-06-23
This paper describes an in-depth investigation and implementation of interleaved memory for pixel lookup operations in computer vision. Pixel lookup, mapping between coordinates and pixels, is a common operation in computer vision, but is also a potential bottleneck due to formidable bandwidth requirements for real-time operation. We focus on the acceleration of pixel lookup operations through parallelizing memory banks by interleaving. The key to applying interleaving for pixel lookup is 2D block data partitioning and support for unaligned access. With this optimization of interleaving, pixel lookup operations can output a block of pixels at once without major overhead for unaligned access. An example implementation of our optimized interleaved memory for affine motion tracking shows that the pixel lookup operations can achieve 12.8 Gbps for random lookup of a 4x4 size block of 8-bit pixels under 100 MHz operation. Interleaving can be a cost-effective solution for fast pixel lookup in embedded computer vision.
Citations: 1
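As a toy illustration of why 2D block partitioning supports unaligned block fetches: with 16 banks and the mapping below, the 16 pixels of any 4x4 window, aligned or not, land in 16 distinct banks and can be read in one parallel cycle. The bank count, block size, and mapping are illustrative assumptions, not the paper's hardware design.

```python
def bank(x, y):
    # Bank index from the low 2 bits of each coordinate. Over any 4x4
    # window, (x mod 4, y mod 4) takes all 16 combinations exactly once,
    # regardless of the window's offset -- hence conflict-free access.
    return (x & 3) | ((y & 3) << 2)

def bank_address(x, y, width):
    # Address within a bank: row-major index of the 4x4 tile the pixel
    # belongs to (width assumed to be a multiple of 4).
    return (y >> 2) * (width >> 2) + (x >> 2)

def block_banks(x0, y0):
    # Banks touched by the 4x4 block whose top-left pixel is (x0, y0).
    return {bank(x0 + i, y0 + j) for j in range(4) for i in range(4)}

# Every unaligned 4x4 window hits all 16 banks:
assert all(len(block_banks(x0, y0)) == 16
           for x0 in range(8) for y0 in range(8))
```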
Real-time estimation of human attention field in LWIR and color surveillance videos
A. Leykin, R. Hammoud
DOI: 10.1109/CVPRW.2008.4563059 · Published: 2008-06-23
Knowing the visual attention field of a monitored subject is of great value for many applications including surveillance and marketing. This paper proposes first to track people's bodies, and then estimates the visual attention field for each human using head pose information. The proposed head pose technique aims at estimating the yaw angle only. The method is shown to operate on monocular color camera sequences and is further refined with the data from a thermal sensor. In typical monocular tracking sequences the resolution of the head is very low and parts of the head are occluded, with the face often invisible to the camera. We propose a method of combining a skin color detector with the direction of motion in a probabilistic way. We show how the head profile obtained from the thermal sequence can be used to further improve the result.
Citations: 14
ToF-sensors: New dimensions for realism and interactivity
A. Kolb, E. Barth, R. Koch
DOI: 10.1109/CVPRW.2008.4563159 · Published: 2008-06-23
A growing number of applications depend on accurate and fast 3D scene analysis. Examples are object recognition, collision prevention, 3D modeling, mixed reality, and gesture recognition. The estimation of a range map by image analysis or laser scan techniques is still a time-consuming and expensive part of such systems. A lower-priced, fast and robust alternative for distance measurements is the time-of-flight (ToF) camera. Recently, significant improvements have been made in order to achieve low-cost and compact ToF-devices that have the potential to revolutionize many fields of research, including computer vision, computer graphics and human-computer interaction (HCI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become "ubiquitous geometry devices" for gaming, web-conferencing, and numerous other applications.
This paper will give an account of some recent developments in ToF-technology and will discuss applications of this technology for vision, graphics, and HCI.
Citations: 67
3D model search and pose estimation from single images using VIP features
Changchang Wu, F. Fraundorfer, Jan-Michael Frahm, M. Pollefeys
DOI: 10.1109/CVPRW.2008.4563037 · Published: 2008-06-23
This paper describes a method to efficiently search for 3D models in a city-scale database and to compute the camera poses from single query images. The proposed method matches SIFT features (from a single image) to viewpoint invariant patches (VIP) from a 3D model by warping the SIFT features approximately into the orthographic frame of the VIP features. This significantly increases the number of feature correspondences, which results in a reliable and robust pose estimation. We also present a 3D model search tool that uses a visual-word-based search scheme to efficiently retrieve 3D models from large databases using individual query images. Together the 3D model search and the pose estimation represent a highly scalable and efficient city-scale localization system. The performance of the 3D model search and pose estimation is demonstrated on urban image data.
Citations: 32
3-D gesture-based scene navigation in medical imaging applications using Time-of-Flight cameras
S. Soutschek, J. Penne, J. Hornegger, J. Kornhuber
DOI: 10.1109/CVPRW.2008.4563162 · Published: 2008-06-23
For a lot of applications, and particularly for medical intra-operative applications, the exploration of and navigation through 3-D image data provided by sensors like ToF (time-of-flight) cameras, MUSTOF (multisensor-time-of-flight) endoscopes or CT (computed tomography) [8] requires a user-interface which avoids physical interaction with an input device. Thus, we propose a touchless user-interface based on gestures classified using the data provided by a ToF camera. Reasonable and necessary user interactions are described. For those interactions a suitable set of gestures is introduced. A user-interface is then proposed which interprets the current gesture and performs the assigned functionality. For evaluating the quality of the developed user-interface we considered the aspects of classification rate, real-time applicability, usability, intuitiveness and training time.
The results of our evaluation show that our system, which provides a classification rate of 94.3% at a framerate of 11 frames per second, satisfactorily addresses all these quality requirements.
Citations: 103
A novel quality measure for information hiding in images
K. A. Navas, M. Aravind, M. Sasikumar
DOI: 10.1109/CVPRW.2008.4562985 · Published: 2008-06-23
Objective quality assessment has been widely used in image processing for decades, and many researchers have been studying objective quality assessment methods based on the human visual system (HVS). This paper presents a new measure which denotes the perceptual degradation produced in an image using certain subjectively evaluated weighting functions. Experimental analysis carried out on different sets of images, for different levels of data hiding and under different attacks, shows that this new measure achieves a high degree of agreement with the subjective analysis measure.
Citations: 12
Can similar scenes help surface layout estimation?
S. Divvala, Alexei A. Efros, M. Hebert
DOI: 10.1109/CVPRW.2008.4562951 · Published: 2008-06-23
We describe a preliminary investigation of utilising large amounts of unlabelled image data to help in the estimation of rough scene layout. We take the single-view geometry estimation system of Hoiem et al. (2007) as the baseline and see if it is possible to improve its performance by considering a set of similar scenes gathered from the Web. The two complementary approaches being considered are 1) improving surface classification by using average geometry estimated from the matches, and 2) improving surface segmentation by injecting segments generated from the average of the matched images. The system is evaluated using the labelled 300-image dataset of Hoiem et al. and shows promising results.
Citations: 18
A statistical framework for the registration of 3D knee implant components to single-plane X-ray images
Jeroen Hermans, J. Bellemans, F. Maes, D. Vandermeulen, P. Suetens
DOI: 10.1109/CVPRW.2008.4563004 · Published: 2008-06-23
Registration of 3D knee implant components to single-plane X-ray image sequences provides insight into implanted knee kinematics. In this paper a maximum likelihood approach is proposed to align the pose-related occluding contour of an object with edge segments extracted from a single-plane X-ray image. This leads to an expectation maximization algorithm which simultaneously determines the object's pose, estimates point correspondences and rejects outlier points from the registration process. Considering (nearly) planar-symmetrical objects, the method is extended in order to simultaneously estimate two symmetrical object poses which both align the corresponding occluding contours with 2D edge information. The algorithm's capacity to generate accurate pose estimates and the necessity of determining both symmetrical poses when aligning (nearly) planar-symmetrical objects are demonstrated in the context of automated registration of knee implant components to simulated and real single-plane X-ray images.
Citations: 3
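The expectation-maximization scheme the abstract describes, soft point correspondences with an explicit outlier component alternated with a pose update, can be illustrated in 2D with a weighted-Procrustes M-step. This is a generic hedged sketch of that idea, not the authors' occluding-contour, single-plane X-ray formulation; `sigma` and `outlier` are illustrative parameters.

```python
import numpy as np

def em_rigid_register(model, data, iters=50, sigma=1.0, outlier=0.1):
    # Align 2D model points to observed points by EM: the E-step computes
    # soft correspondences (with a uniform outlier bin that downweights
    # spurious edge points); the M-step solves a weighted Procrustes
    # problem for the rigid pose (rotation theta, translation t).
    theta, t = 0.0, np.zeros(2)
    area = np.prod(data.max(0) - data.min(0)) + 1e-9  # outlier-bin support
    for _ in range(iters):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        proj = model @ R.T + t
        # E-step: Gaussian likelihood of each data point per model point,
        # normalized against the uniform outlier component.
        d2 = ((data[:, None, :] - proj[None, :, :]) ** 2).sum(-1)
        lik = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
        lik *= (1 - outlier) / len(model)
        r = lik / (lik.sum(1, keepdims=True) + outlier / area)
        # M-step: weighted Procrustes update of (R, t).
        w = r.sum()
        mu_d = (r.sum(1) @ data) / w
        mu_m = (r.sum(0) @ model) / w
        S = (data - mu_d).T @ r @ (model - mu_m)
        U, _, Vt = np.linalg.svd(S)
        Rn = U @ np.diag([1.0, np.linalg.det(U @ Vt)]) @ Vt  # proper rotation
        theta = np.arctan2(Rn[1, 0], Rn[0, 0])
        t = mu_d - mu_m @ Rn.T
    return theta, t
```

Because the outlier bin absorbs probability mass from data points far from every projected model point, those points contribute little weight to the pose update, which is the rejection behaviour the abstract refers to.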