2014 Canadian Conference on Computer and Robot Vision: Latest Publications

Metadata-Weighted Score Fusion for Multimedia Event Detection
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.47
Scott McCloskey, Jingchen Liu
Abstract: We address the problem of multimedia event detection in videos captured "in the wild," in particular the fusion of cues from multiple aspects of the video's content: detected objects, observed motion, audio signatures, etc. We employ score fusion, also known as late fusion, and propose a method that learns local weightings of the various base classifier scores which respect the performance differences arising from video quality. Classifiers working with visual texture features, for instance, are given reduced weight when applied to subsets of the video corpus with high compression, and the weights associated with the other classifiers are adjusted to reflect this lack of confidence. We present a method to automatically partition the video corpus into relevant subsets, and to learn local weightings which optimally fuse scores on a particular subset. Improvements in event detection performance are demonstrated on the TRECVid Multimedia Event Detection (MED) MED Test dataset, and comparisons are provided to several other score fusion methods.
Citations: 1
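The late fusion described in the abstract can be sketched as a per-video weighted average of base classifier scores, with weights chosen per quality subset. The weight values and the two-subset partition below are hypothetical illustrations, not the paper's learned weighting:

```python
import numpy as np

def fuse_scores(scores, weights):
    """Late (score-level) fusion: weighted average of base classifier scores.

    scores  : (n_videos, n_classifiers) per-classifier event scores
    weights : (n_videos, n_classifiers) per-video weights, e.g. assigned by
              the quality subset each video falls into (hypothetical scheme)
    """
    w = weights / weights.sum(axis=1, keepdims=True)  # normalize per video
    return (scores * w).sum(axis=1)

# Two videos with identical base scores; the second is heavily compressed,
# so the texture classifier (column 0) is down-weighted there.
scores = np.array([[0.9, 0.4, 0.6],
                   [0.9, 0.4, 0.6]])
weights = np.array([[1.0, 1.0, 1.0],   # clean subset: uniform weights
                    [0.2, 1.0, 1.0]])  # compressed subset: distrust texture
fused = fuse_scores(scores, weights)
```

Down-weighting the strong-but-unreliable texture score lowers the fused score for the compressed video, which is the behavior the paper's quality-aware weighting aims for.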
Segmenting Objects in Weakly Labeled Videos
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.24
Mrigank Rochan, Shafin Rahman, Neil D. B. Bruce, Yang Wang
Abstract: We consider the problem of segmenting objects in weakly labeled video. A video is weakly labeled if it is associated with a tag (e.g. YouTube videos with tags) describing the main object present in the video. It is weakly labeled because the tag only indicates the presence/absence of the object, but does not give the detailed spatial/temporal location of the object in the video. Given a weakly labeled video, our method can automatically localize the object in each frame and segment it from the background. Our method is fully automatic and does not require any user input. In principle, it can be applied to a video of any object class. We evaluate our proposed method on a dataset with more than 100 video shots. Our experimental results show that our method outperforms other baseline approaches.
Citations: 6
Interactive Teleoperation Interface for Semi-autonomous Control of Robot Arms
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.55
C. P. Quintero, R. T. Fomena, A. Shademan, Oscar A. Ramirez, Martin Jägersand
Abstract: We propose and develop an interactive, semi-autonomous control scheme for robot arms. Our system supports two interactions: (1) a user can naturally control a robot arm through a direct linkage between the arm motion and the tracked human skeleton; (2) an autonomous image-based visual servoing routine can be triggered for precise positioning. Coarse motions are executed by human teleoperation and fine motions by image-based visual servoing. A successful application of the proposed interaction is presented for a WAM arm equipped with an eye-in-hand camera.
Citations: 8
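The fine-positioning phase relies on image-based visual servoing, whose core is a proportional law driving the image feature error to zero. A minimal 2D sketch under simplifying assumptions: a full IBVS controller maps the error through the pseudo-inverse of the interaction matrix, which is omitted here, and the gain value is arbitrary:

```python
import numpy as np

def ibvs_step(feature, target, gain=0.5):
    """One step of a proportional visual servoing law: command a feature
    velocity v = -gain * (s - s*) so the image error decays exponentially.
    Simplified sketch: the interaction-matrix pseudo-inverse of a real
    IBVS controller is not modeled."""
    error = feature - target
    return -gain * error  # commanded feature velocity

# Close the loop on a single tracked image point; the error shrinks
# geometrically by the factor (1 - gain) each iteration.
feature = np.array([0.4, -0.2])
target = np.array([0.0, 0.0])
for _ in range(20):
    feature = feature + ibvs_step(feature, target)
err = np.linalg.norm(feature - target)
```

This exponential error decay is what makes the servoing phase suitable for the precise final approach, after teleoperation has brought the arm coarsely into place.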
The Range Beacon Placement Problem for Robot Navigation
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.28
River Allen, Neil MacMillan, D. Marinakis, R. Nishat, Rayhan Rahman, S. Whitesides
Abstract: Instrumenting an environment with sensors can provide an effective and scalable localization solution for robots. Where GPS is not available, beacons that provide position estimates to a robot must be placed effectively in order to maximize the robot's navigation accuracy and robustness. Sonar range-based beacons are reasonable candidates for low-cost position-estimate sensors. In this paper we explore heuristics derived from computational geometry to estimate the effectiveness of sonar beacon deployments given a predefined mobile robot path. Results from numerical simulations and experimentation demonstrate the effectiveness and scalability of our approach.
Citations: 12
Scale-Space Decomposition and Nearest Linear Combination Based Approach for Face Recognition
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.37
F. A. Hoque, Liang Chen
Abstract: Among the many illumination-robust approaches, scale-space decomposition based methods play an important role in reducing lighting effects in face images. However, most existing scale-space decomposition methods perform recognition based on the illumination-invariant small-scale features only. We propose a scale-space decomposition based face recognition approach that extracts features of different scales through the TV+L1 model and the wavelet transform. The approach represents a subject's face image via a subspace spanned by linear combinations of the features of different scales. To decide the identity of the probe, the nearest neighbor (NN) approach is used to measure the similarities between a probe face image and the subspace representations of gallery face images. Experiments on various benchmarks have demonstrated that the system outperforms many recognition methods in the same category.
Citations: 1
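The nearest-linear-combination idea can be illustrated as a nearest-subspace classifier: each subject spans a subspace with its multi-scale feature vectors, and the probe is assigned to the subject whose subspace reconstructs it with the smallest least-squares residual. The feature vectors below are toy values, not TV+L1 or wavelet outputs:

```python
import numpy as np

def subspace_residual(gallery, probe):
    """Distance from a probe vector to the subspace spanned by the columns
    of `gallery` (one subject's multi-scale features), via least squares.
    Minimal sketch of nearest-linear-combination matching; the paper's
    feature extraction (TV+L1 model, wavelets) is not shown."""
    coeffs, *_ = np.linalg.lstsq(gallery, probe, rcond=None)
    return np.linalg.norm(gallery @ coeffs - probe)

# Two hypothetical subjects, each represented by feature columns.
subject_a = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [0.0, 0.0]])
subject_b = np.array([[0.0, 0.0],
                      [0.0, 0.0],
                      [1.0, 0.0]])
probe = np.array([0.5, 0.5, 0.0])  # lies exactly in subject A's span

residuals = [subspace_residual(g, probe) for g in (subject_a, subject_b)]
match = int(np.argmin(residuals))  # index of the best-matching subject
```

Because the probe is a linear combination of subject A's columns, A's residual is (numerically) zero and the probe is assigned to A.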
Toward a Unified Framework for EMG Signals Processing and Controlling an Exoskeleton
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.46
G. Durandau, W. Suleiman
Abstract: In this paper, we present a method for controlling a robotic system using electromyography (EMG) signals collected by surface EMG electrodes. The EMG signals are analyzed using a neuromusculoskeletal (NMS) model that represents both the muscles and the skeleton of the body. It has the advantage of adding external forces to the model without changing the initial parameters, which is particularly useful for the control of exoskeletons. The algorithm has been validated through experiments consisting of moving the elbow joint, either freely or while handling a barbell with various sets of loads. The results of our algorithm are then compared to the motions obtained by a motion capture system during the same session. The comparison points out the efficiency of our algorithm for predicting and estimating arm motion using only EMG signals.
Citations: 5
Using Gradient Orientation to Improve Least Squares Line Fitting
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.38
T. Petković, S. Lončarić
Abstract: Straight line fitting is an important problem in computer and robot vision. We propose a novel method for least squares line fitting that uses both the point coordinates and the local gradient orientation to fit an optimal line by minimizing the proposed algebraic distance. The inclusion of gradient orientation offers several advantages: (a) one data point is sufficient for the line fit, (b) for the same number of points the fit is more precise due to the inclusion of gradient orientation, and (c) outliers can be rejected based on the gradient orientation or the distance to the line.
Citations: 2
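Property (a) — one data point suffices — follows from the fact that the image gradient at an edge point is normal to the line. A minimal sketch of that geometry, plus the point-to-line distance usable for outlier rejection (property (c)); this illustrates the underlying idea, not the paper's algebraic-distance minimizer:

```python
import math

def line_from_point_and_gradient(px, py, grad_angle):
    """Recover a line n.x = c from a single edge point and its gradient
    orientation: the gradient direction is the line's unit normal, so one
    (point, orientation) pair determines the line."""
    nx, ny = math.cos(grad_angle), math.sin(grad_angle)
    c = nx * px + ny * py
    return nx, ny, c

def point_line_distance(nx, ny, c, qx, qy):
    """Signed distance from point q to the line n.x = c (with |n| = 1)."""
    return nx * qx + ny * qy - c

# Edge point (5, 2) on the horizontal line y = 2; the gradient points
# straight up (angle pi/2), i.e. normal to the line.
nx, ny, c = line_from_point_and_gradient(5.0, 2.0, math.pi / 2)
d = point_line_distance(nx, ny, c, 0.0, 3.0)  # a point one unit above
```

A candidate point whose distance (or whose own gradient orientation) disagrees with the fitted line can then be rejected as an outlier before the final least-squares fit.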
Towards Full Omnidirectional Depth Sensing Using Active Vision for Small Unmanned Aerial Vehicles
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.12
A. Harmat, I. Sharf
Abstract: Collision avoidance for small unmanned aerial vehicles operating in a variety of environments is limited by the types of available depth sensors. Currently, there are no sensors that are lightweight, function outdoors in sunlight, and cover enough of a field of view to be useful in complex environments, although many sensors excel in one or two of these areas. We present a new depth estimation method, based on concepts from multi-view stereo and structured light, that uses only lightweight miniature cameras and a small laser dot-matrix projector to produce measurements in the range of 1-12 meters. The field of view of the system is limited only by the number and type of cameras/projectors used, and can be fully omnidirectional if desired. The sensitivity of the system to design and calibration parameters is tested in simulation, and results from a functional prototype are presented.
Citations: 2
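The geometry behind ranging on projected laser dots is the classic triangulation relation shared with stereo: depth is proportional to focal length times baseline over disparity. A minimal sketch with hypothetical focal-length and baseline values; the paper's multi-camera/projector calibration is not modeled:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation underlying structured-light and stereo ranging:
    depth = focal_length * baseline / disparity. Hypothetical parameter
    values below, for illustration only."""
    return focal_px * baseline_m / disparity_px

# A projected dot observed 10 px from its reference position, with a
# 600 px focal length and a 10 cm camera-projector baseline.
z = depth_from_disparity(600.0, 0.1, 10.0)
```

The inverse relation between disparity and depth also explains the 1-12 m working range: beyond it, the dot displacement shrinks below what the miniature cameras can resolve.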
Optimizing Camera Perspective for Stereo Visual Odometry
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.9
Valentin Peretroukhin, Jonathan Kelly, T. Barfoot
Abstract: Visual odometry (VO) is an integral part of many navigation techniques in mobile robotics. In this work, we investigate how the orientation of the camera affects the overall position estimates recovered from stereo VO. Through simulations and experimental work, we demonstrate that this error can be significantly reduced by changing the perspective of the stereo camera in relation to the moving platform. Specifically, we show that orienting the camera at an oblique angle to the direction of travel can reduce VO error by up to 82% in simulations and up to 59% in experimental data. A variety of parameters are investigated for their effects on this trend, including the frequency of captured images and camera resolution.
Citations: 11
Trajectory Inference Using a Motion Sensing Network
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.29
Doug Cox, Darren Fairall, Neil MacMillan, D. Marinakis, D. Meger, Saamaan Pourtavakoli, Kyle Weston
Abstract: This paper addresses the problem of inferring human trajectories through an environment using low-frequency, low-fidelity data from a sensor network. We present a novel "recombine" proposal for Markov chain construction and use the new proposal to devise a probabilistic trajectory inference algorithm that generates likely trajectories given raw sensor data. We also propose a novel low-power, long-range, 900 MHz IEEE 802.15.4-compliant sensor network that makes outdoor deployment viable. Finally, we present experimental results from our deployment in a retail environment.
Citations: 1