2014 Canadian Conference on Computer and Robot Vision: Latest Publications

N-Gram Based Image Representation and Classification Using Perceptual Shape Features
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.54
Albina Mukanova, Q. Gao, Gang Hu
Abstract: The rapid growth of visual data processing and analysis applications, such as content-based image retrieval, augmented reality, automated inspection and defect detection, medical image understanding, and remote sensing, has made the development of accurate and efficient image representation and classification methods a key research area. This research proposes new higher-level perceptual shape features for image representation based on Gestalt principles of human vision. The concept of the n-gram is adapted from text analysis as a grouping mechanism for coding the global shape content of an image. The proposed perceptual shape features are translation, rotation, and scale invariant. Local shape features and the n-gram grouping scheme are integrated to create a new Perceptual Shape Vocabulary (PSV). Image representations based on PSVs, with and without the n-gram scheme, are applied to an image classification task using a Support Vector Machine (SVM) classifier. The experimental evaluation indicates that n-gram-based perceptual shape features can efficiently represent the global shape information of an image and augment the accuracy of representations built on low-level image features such as SIFT descriptors.
Citations: 5
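As a rough sketch of the n-gram grouping idea described in the abstract above (not the authors' implementation), the snippet below treats each image as a sequence of discrete shape symbols and classifies unigram-plus-bigram histograms with a linear SVM. The shape tokenizer, symbol names, and training data are invented stand-ins.

```python
# Minimal sketch of n-gram coding over local shape tokens followed by SVM
# classification. A hypothetical tokenizer is assumed to map each detected
# local shape feature to a discrete symbol (e.g. "arc", "corner", "line").
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Each image is represented by the sequence of shape symbols found along its
# contours (invented data standing in for real detections).
train_docs = [
    "arc arc corner line arc",       # e.g. a round object
    "corner line corner line line",  # e.g. a box-like object
]
train_labels = ["round", "boxy"]

# Bigrams over shape symbols play the role of the paper's n-gram grouping:
# they encode which local shapes co-occur in sequence, i.e. global layout.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),
    SVC(kernel="linear"),
)
model.fit(train_docs, train_labels)
print(model.predict(["arc corner arc arc line"]))
```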
Trajectory Estimation Using Relative Distances Extracted from Inter-image Homographies
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.39
Mårten Wadenbäck, A. Heyden
Abstract: The main idea of this paper is to use distances between camera positions to recover the trajectory of a mobile robot. We consider a mobile platform equipped with a single fixed camera, using images of the floor and their associated inter-image homographies to find these distances. We show that, under the assumptions that the camera is rigidly mounted with a constant tilt and travels at a constant height above the floor, the distance between two camera positions may be expressed in terms of the condition number of the inter-image homography. Experiments are conducted on synthetic data to verify that the derived distance formula gives distances close to the true ones and is not too sensitive to noise. We also describe how the robot trajectory may be represented as a graph with edge lengths determined by the distances computed using this formula, and present one possible method to construct this graph given some of these distances. The experiments show promising results.
Citations: 3
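The paper's key quantity, the condition number of a 3x3 inter-image homography, is straightforward to compute from its singular values. A minimal sketch follows; the constant `alpha` mapping the condition number to metres is a hypothetical placeholder, not the formula the paper derives under its constant-tilt, constant-height assumptions.

```python
# Condition number of an inter-image homography via SVD.
import numpy as np

def homography_condition_number(H: np.ndarray) -> float:
    """Ratio of largest to smallest singular value of a 3x3 homography."""
    s = np.linalg.svd(H, compute_uv=False)
    return s[0] / s[-1]

H = np.array([[1.02, 0.01, 5.0],
              [-0.01, 0.98, -3.0],
              [1e-4, 2e-4, 1.0]])
kappa = homography_condition_number(H)
alpha = 0.05  # hypothetical calibration constant tying kappa to metres
print(f"condition number = {kappa:.3f}, "
      f"distance estimate ~ {alpha * (kappa - 1):.3f} m")
```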
Decentralized Cooperative Localization for Heterogeneous Multi-robot System Using Split Covariance Intersection Filter
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.30
Thumeera R. Wanasinghe, G. Mann, R. Gosine
Abstract: This study proposes the use of a split covariance intersection filter (Split-CIF) for decentralized multi-robot cooperative localization. In the proposed method, each robot maintains a local extended Kalman filter to estimate its own pose in a pre-defined reference frame. When a robot receives pose information from neighbouring robots, it employs a Split-CIF-based approach to fuse the received measurement with its local belief. For a team of N mobile robots, the processing and communication complexity of the proposed method is linear, O(N), in the number of robots. The method does not require fully connected, synchronous communication channels between robots and can work with asynchronous, partially connected communication networks. Additionally, it gives consistent state updates and handles the independent and interdependent parts of the estimates separately. Numerical simulations validate the proposed algorithm: it outperforms single-robot localization and achieves approximately the same estimation accuracy as a centralized cooperative localization approach, at reduced computational cost.
Citations: 37
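A generic Split-CI fusion step can be sketched as below, assuming each pose estimate carries an independent covariance part Pi and a dependent part Pd, with the weight chosen by trace minimization. This is a minimal illustration of the fusion rule only: the paper's filter wraps it in a local EKF with robot-to-robot measurement models, and the bookkeeping that re-splits the fused covariance for later fusions is omitted.

```python
# Illustrative split covariance intersection fusion of two state estimates
# whose covariances are split as P = Pi (independent) + Pd (possibly
# correlated with the other estimate).
import numpy as np
from scipy.optimize import minimize_scalar

def split_ci_fuse(x1, P1i, P1d, x2, P2i, P2d):
    def fused(w):
        # Inflate only the dependent parts, as in covariance intersection.
        P1 = P1d / w + P1i
        P2 = P2d / (1.0 - w) + P2i
        P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
        x = P @ (np.linalg.solve(P1, x1) + np.linalg.solve(P2, x2))
        return x, P
    # Pick w in (0, 1) minimizing the fused trace, which keeps the result
    # consistent without knowing the cross-correlation.
    res = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                          bounds=(1e-6, 1 - 1e-6), method="bounded")
    return fused(res.x)

x1 = np.array([1.0, 2.0]); x2 = np.array([1.2, 1.9])
P1i, P1d = 0.04 * np.eye(2), 0.02 * np.eye(2)
P2i, P2d = 0.09 * np.eye(2), 0.05 * np.eye(2)
x, P = split_ci_fuse(x1, P1i, P1d, x2, P2i, P2d)
print(x, np.trace(P))
```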
An Integrated Bud Detection and Localization System for Application in Greenhouse Automation
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.53
Cole Tarry, Patrick Wspanialy, M. Veres, M. Moussa
Abstract: This paper presents an integrated system for chrysanthemum bud detection that can be used to automate labour-intensive tasks in floriculture greenhouses. The system detects buds and their 3D locations in order to guide a robot arm to perform selective pruning on each plant. The detection algorithm is based on the radial Hough transform. Testing on several samples showed promising results.
Citations: 8
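A circle detector in the spirit of the radial Hough transform stage can be sketched with OpenCV's Hough gradient method. The parameter values are illustrative guesses rather than the authors' tuned settings, and "buds.jpg" is a placeholder path.

```python
# Minimal circle-detection sketch for roughly round buds in a plant image.
import cv2
import numpy as np

img = cv2.imread("buds.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # suppress leaf texture before voting

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT,
    dp=1.2,        # accumulator resolution relative to the image
    minDist=20,    # minimum spacing between detected bud centres (px)
    param1=100,    # Canny high threshold used internally
    param2=30,     # accumulator threshold: lower = more (weaker) detections
    minRadius=5, maxRadius=40,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)
cv2.imwrite("buds_detected.jpg", img)
```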
Asymmetric Rendezvous Search at Sea
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.31
Malika Meghjani, F. Shkurti, J. A. G. Higuera, A. Kalmbach, David Whitney, G. Dudek
Abstract: In this paper we address the rendezvous problem between an autonomous underwater vehicle (AUV) and a passively floating drifter on the sea surface. The AUV's mission is to keep an estimate of the floating drifter's position while exploring the underwater environment and periodically attempting to rendezvous with it. We are interested in the case where the AUV loses track of the drifter, predicts its location, and searches for it in the vicinity of the predicted location. We parameterize this search problem with respect to both the uncertainty in the drifter's position estimate and the ratio between the drifter's and the AUV's speeds. We examine two search strategies for the AUV, an inward spiral and an outward spiral. We derive conditions under which these patterns are guaranteed to find a drifter, and we empirically analyze them with respect to different parameters in simulation. In addition, we present results from field trials in which an AUV successfully found a drifter after periods of communication loss during which the robot was exploring.
Citations: 12
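The outward spiral pattern analysed in the paper can be sketched as Archimedean spiral waypoints centred on the predicted drifter position. In practice the arm spacing would be tied to the AUV's sensing radius so the swept annuli overlap; the values here are illustrative.

```python
# Waypoints along an Archimedean spiral r = b * theta, sampled at roughly
# constant arc length; reversing the list gives the inward variant.
import numpy as np

def spiral_waypoints(center, arm_spacing, max_radius, step=0.5, outward=True):
    b = arm_spacing / (2 * np.pi)
    pts, theta = [], 0.0
    while b * theta < max_radius:
        r = b * theta
        pts.append(center + r * np.array([np.cos(theta), np.sin(theta)]))
        theta += step / max(r, step)  # smaller angular steps as r grows
    return pts if outward else pts[::-1]

predicted_drifter = np.array([100.0, 50.0])  # last predicted position (m)
waypoints = spiral_waypoints(predicted_drifter, arm_spacing=10.0,
                             max_radius=60.0)
print(len(waypoints), waypoints[0], waypoints[-1])
```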
Autonomous Lecture Recording with a PTZ Camera While Complying with Cinematographic Rules
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.57
D. Hulens, T. Goedemé, Tom Rumes
Abstract: Nowadays, many lectures and presentations are recorded and broadcast for teleteaching applications. When no human camera crew is present, the most obvious choice is a static camera. To enhance the viewing experience, more advanced systems automatically track and steer the camera towards the lecturer. In this paper we propose an even more advanced system that tracks the lecturer while taking cinematographic rules into account. On top of that, the lecturer can be filmed in different types of shots. Our system is able to detect and track the position of the lecturer, even with non-static backgrounds and in difficult illumination. We developed an action axis determination system, needed to apply cinematographic rules and to steer the Pan-Tilt-Zoom (PTZ) camera towards the lecturer.
Citations: 16
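One cinematographic rule such a system enforces is keeping the lecturer on a rule-of-thirds line with lead room in the direction of movement. A toy proportional pan correction is sketched below; the gain, sign convention, and camera interface are hypothetical, since a real PTZ unit would be driven through its own SDK or a protocol such as VISCA or ONVIF.

```python
# Toy rule-of-thirds framing: place the subject on the third opposite the
# direction of motion so that empty "action space" lies ahead of them.
def pan_correction(face_x, moving_right, frame_width=1920, gain=0.01):
    """Return a pan velocity that re-frames the face on the appropriate third."""
    target_x = frame_width / 3 if moving_right else 2 * frame_width / 3
    error_px = face_x - target_x
    return gain * error_px  # positive = pan right (hypothetical convention)

print(pan_correction(face_x=1200, moving_right=True))   # pan right to re-frame
print(pan_correction(face_x=500, moving_right=False))   # pan left
```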
MDS-based Multi-axial Dimensionality Reduction Model for Human Action Recognition
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.42
Redha Touati, M. Mignotte
Abstract: In this paper, we present an original and efficient method for human action recognition in a video sequence. The proposed model is based on generating and fusing a set of prototypes derived from different viewpoints of the video sequence's data cube. More precisely, each prototype is generated using a multidimensional scaling (MDS) based nonlinear dimensionality reduction technique along the temporal axis as well as along the spatial axes (rows and columns) of the binary video sequence of 2D silhouettes. This strategy models each human action in a low-dimensional space, as a trajectory of points or a specific curve, for each viewpoint of the video cube in a complementary way. A simple K-NN classifier is then used to classify the prototype associated with each action for a given viewpoint, and the fusion of the classification results across viewpoints significantly improves the recognition rate. Experiments conducted on the publicly available Weizmann dataset show the sensitivity of the proposed recognition system to each individual viewpoint and the efficiency of our multi-viewpoint fusion approach compared with the best state-of-the-art human action recognition methods recently proposed in the literature.
Citations: 21
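The multi-axial idea can be sketched by running MDS once over the frames of a binary silhouette volume (temporal view) and once over its row slices (one of the spatial views), yielding complementary low-dimensional trajectories per action. Random data stand in for real silhouette volumes below.

```python
# MDS embeddings of a (frames, rows, cols) binary video cube along two axes.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
video = rng.integers(0, 2, size=(40, 64, 48))  # stand-in silhouette cube

def mds_trajectory(points_2d, n_components=2):
    return MDS(n_components=n_components, random_state=0).fit_transform(points_2d)

# Temporal view: one point per frame -> trajectory of the action over time.
temporal = mds_trajectory(video.reshape(40, -1))
# Spatial (row) view: one point per row-slice through the space-time volume.
spatial_rows = mds_trajectory(video.transpose(1, 0, 2).reshape(64, -1))
print(temporal.shape, spatial_rows.shape)  # (40, 2) (64, 2)
```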
Computer Vision-Based Identification of Individual Turtles Using Characteristic Patterns of Their Plastrons
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.35
T. Beugeling, A. Albu
Abstract: The identification of pond turtles is important to scientists who monitor local populations, as it allows them to track the growth and health of subjects over their lifetime. Traditional non-invasive methods for turtle recognition involve the visual inspection of distinctive coloured patterns on their plastron. This visual inspection is time consuming and difficult to scale with a potential growth in the surveyed population. We propose an algorithm for automatic identification of individual turtles based on images of their plastron. Our approach uses a combination of image processing and neural networks. We perform a convexity-concavity analysis of the contours on the plastron. The output of this analysis is combined with additional region-based measurements to compute feature vectors that characterize individual turtles. These features are used to train a neural network. Our goal is to create a neural network which is able to query a database of images of turtles of known identity with an image of an unknown turtle, and which outputs the unknown turtle's identity. The paper provides a thorough experimental evaluation of the proposed approach. Results are promising and point towards future work in the area of standardized image acquisition and image denoising.
Citations: 7
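The two stages described in the abstract, a convexity-concavity analysis of the plastron contour and a neural network over the resulting features, can be sketched with OpenCV and scikit-learn. The masks, feature choices, and network size below are illustrative assumptions, not the authors' exact pipeline.

```python
# Contour convexity-defect features plus region measurements, fed to an MLP.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def contour_features(mask):
    """Depths of the deepest concavities plus simple region measurements."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(c, returnPoints=False)
    defects = cv2.convexityDefects(c, hull)
    depths = np.zeros(8)
    if defects is not None:
        d = np.sort(defects[:, 0, 3])[::-1][:8] / 256.0  # fixed-point depths
        depths[:len(d)] = d
    return np.concatenate([depths,
                           [cv2.contourArea(c), cv2.arcLength(c, True)]])

# Hypothetical gallery: masks of known turtles -> identities.
masks = [np.zeros((200, 200), np.uint8) for _ in range(2)]
cv2.ellipse(masks[0], (100, 100), (60, 90), 0, 0, 360, 255, -1)
cv2.rectangle(masks[1], (40, 30), (160, 170), 255, -1)
X = np.array([contour_features(m) for m in masks])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, ["turtle_A", "turtle_B"])
print(clf.predict(X))
```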
Indoor Scene Recognition with a Visual Attention-Driven Spatial Pooling Strategy
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-06 DOI: 10.1109/CRV.2014.43
Tarek Elguebaly, N. Bouguila
Abstract: Scene recognition is an important research topic in robotics and computer vision. Even though scene recognition has been studied in depth, progress on indoor scene categorization has been slow. Indoor scene recognition is challenging due to high intra-class variability, caused mainly by the intrinsic variety of objects that may be present, and the inter-class similarity of man-made indoor structures. As a result, most scene recognition techniques that work well for outdoor scenes perform poorly on indoor scenes. In this paper, we present a simple yet effective method for indoor scene recognition. Our approach can be summarized as follows. First, we extract dense SIFT descriptors. Then, we combine saliency-driven perceptual pooling with a simple spatial pooling scheme. Once the spatial and saliency-driven encodings have been determined, we use vector quantization to compute histograms of local features from each sub-region. The histograms from all sub-regions are then concatenated to form the final representation of the image. Finally, a model-based mixture classifier, which uses mixture models to characterize class densities, is applied. To address the problem of modeling the non-Gaussian data that are largely present in our final image representation, we use the generalized Gaussian mixture (GGM), which can be a good alternative to the Gaussian thanks to its shape flexibility. Learning of the proposed statistical model is carried out using the rival penalized expectation-maximization (RPEM) algorithm, which performs model selection and parameter learning together in a single step. Furthermore, we address the feature selection problem by determining a set of relevant features for each data cluster, which speeds up the learning algorithm and removes noisy, redundant, or uninformative features. To validate the proposed method, we test it on the MIT Indoor Scenes dataset.
Citations: 5
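The pooling scheme can be sketched as two histogram encodings over quantized local descriptors, one pooled on a fixed spatial grid and one weighted by a saliency map, concatenated into the final image representation. Random stand-ins replace the dense SIFT words and the saliency model, and the GGM/RPEM classifier stage is not shown.

```python
# Grid pooling plus saliency-weighted pooling of visual-word assignments.
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 240, 320, 64                    # image size, codebook size
words = rng.integers(0, K, size=(H, W))   # visual-word id per dense keypoint
saliency = rng.random((H, W))             # stand-in saliency map in [0, 1]

def grid_pool(words, K, grid=(2, 2)):
    hs = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = words[i*H//grid[0]:(i+1)*H//grid[0],
                         j*W//grid[1]:(j+1)*W//grid[1]]
            hs.append(np.bincount(cell.ravel(), minlength=K))
    return np.concatenate(hs)

def saliency_pool(words, saliency, K):
    # Each visual word votes with its saliency instead of a hard count.
    return np.bincount(words.ravel(), weights=saliency.ravel(), minlength=K)

feature = np.concatenate([grid_pool(words, K),
                          saliency_pool(words, saliency, K)])
print(feature.shape)  # (2*2*K + K,) = (320,)
```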
Speed Daemon: Experience-Based Mobile Robot Speed Scheduling
2014 Canadian Conference on Computer and Robot Vision Pub Date: 2014-05-01 DOI: 10.1109/CRV.2014.16
C. Ostafew, Angela P. Schoellig, T. Barfoot, J. Collier
Abstract: A time-optimal speed schedule results in a mobile robot driving along a planned path at or near the limits of the robot's capability. However, deriving models to predict the effect of increased speed can be very difficult. In this paper, we present a speed scheduler that uses previous experience, instead of complex models, to generate time-optimal speed schedules. The algorithm is designed for a vision-based, path-repeating mobile robot and uses experience to ensure reliable localization, low path-tracking errors, and realizable control inputs while maximizing the speed along the path. To our knowledge, this is the first speed scheduler to incorporate experience from previous path traversals in order to address system constraints. The proposed speed scheduler was tested in over 4 km of path traversals in outdoor terrain using a large Ackermann-steered robot travelling between 0.5 m/s and 2.0 m/s. The approach is shown to generate fast speed schedules while remaining within the limits of the robot's capability.
Citations: 14
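A toy version of the experience-based update is sketched below: raise the scheduled speed on path segments where the previous traversal's constraints were comfortably met, and lower it where they were violated. The thresholds and the single tracking-error metric are hypothetical simplifications of the paper's constraints on localization reliability, path-tracking error, and control inputs.

```python
# Per-segment speed update driven by the last traversal's tracking error.
def update_schedule(speeds, track_err, min_speed=0.5, max_speed=2.0,
                    err_ok=0.05, err_bad=0.15, step=0.1):
    """Adjust each segment's speed (m/s) from its measured tracking error (m)."""
    new = []
    for v, e in zip(speeds, track_err):
        if e < err_ok:        # constraint comfortably met: speed up
            v += step
        elif e > err_bad:     # constraint violated: slow down
            v -= step
        new.append(min(max_speed, max(min_speed, v)))
    return new

schedule = [1.0, 1.0, 1.0, 1.0]
errors = [0.02, 0.20, 0.04, 0.10]       # measured on the last traversal
print(update_schedule(schedule, errors))  # -> [1.1, 0.9, 1.1, 1.0]
```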