Latest publications from the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems

Harp plucking robotic finger
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6385720
D. Chadefaux, Jean-Loïc Le Carrou, M. Vitrani, Sylvere Billout, L. Quartier
{"title":"Harp plucking robotic finger","authors":"D. Chadefaux, Jean-Loïc Le Carrou, M. Vitrani, Sylvere Billout, L. Quartier","doi":"10.1109/IROS.2012.6385720","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385720","url":null,"abstract":"This paper describes results about the development of a repeatable and configurable robotic finger designed to pluck harp strings. Eventually, this device will be a tool to study string instruments in playing conditions. We use a classical robot with two degrees of freedom enhanced with silicone fingertips. The validation method requires a comparison with real harpist performance. A specific experimental setup using a high-speed camera combined with an accelerometer was carried out. It provides finger and string trajectories during the whole plucking action and the soundboard vibrations during the string oscillations. A set of vibrational features are then extracted from these signals to compare robotic finger to harpist plucking actions. These descriptors have been analyzed on six fingertips of various shapes and hardnesses. Results allow to select the optimal shape and hardness among the silicone fingertips according to vibrational features.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"121 1","pages":"4886-4891"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79049799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
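As a rough illustration of the feature-extraction step described above, the sketch below computes a few plausible vibrational descriptors (spectral centroid, RMS level, and a crude decay time) from a soundboard acceleration signal. The descriptor choices and the sampling rate are assumptions; the paper's exact feature set is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

def vibrational_descriptors(signal, fs):
    """Simple vibrational features of a soundboard acceleration signal.
    These are hypothetical descriptors, not the paper's exact set."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid (Hz)
    rms = np.sqrt(np.mean(signal ** 2))                     # overall vibration level
    envelope = np.abs(hilbert(signal))                      # analytic envelope
    peak = envelope.argmax()
    # crude decay estimate: time for the envelope to fall below 10% of its peak
    below = np.nonzero(envelope[peak:] < 0.1 * envelope[peak])[0]
    decay = below[0] / fs if below.size else (len(signal) - peak) / fs
    return {"centroid_hz": float(centroid), "rms": float(rms), "decay_s": float(decay)}

# Usage: compare fingertips by their descriptors on recorded plucks.
fs = 51200                                             # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
pluck = np.exp(-6 * t) * np.sin(2 * np.pi * 440 * t)   # synthetic decaying pluck
print(vibrational_descriptors(pluck, fs))
```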
Efficient search for correct and useful topological maps
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6386155
Collin Johnson, B. Kuipers
{"title":"Efficient search for correct and useful topological maps","authors":"Collin Johnson, B. Kuipers","doi":"10.1109/IROS.2012.6386155","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386155","url":null,"abstract":"We present an algorithm for probabilistic topological mapping that heuristically searches a tree of map hypotheses to provide a usable topological map hypothesis online, while still guaranteeing the correct map can always be found. Our algorithm annotates each leaf of the tree with a posterior probability. When a new place is encountered, we expand hypotheses based on their posterior probability, which means only the most probable hypotheses are expanded. By focusing on the most probable hypotheses, we dramatically reduce the number of hypotheses evaluated allowing real-time operation. Additionally, our approach never prunes consistent hypotheses from the tree, which means the correct hypothesis always exists within the tree.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"16 1","pages":"5277-5282"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79233603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
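The expansion strategy is essentially best-first search over the hypothesis tree. The sketch below shows one way to realize it with a heap keyed by negative log posterior; the `children_of` and `posterior` callbacks stand in for the paper's map-extension and probability model and are assumptions, not its actual formulation.

```python
import heapq
from itertools import count

def expand_hypotheses(leaves, new_place, children_of, posterior):
    """Best-first expansion of a map-hypothesis tree when a new place is
    observed. `leaves` holds (neg_log_posterior, id, hypothesis) tuples;
    `children_of` enumerates the consistent ways a hypothesis can explain
    `new_place`; `posterior` scores a hypothesis."""
    ids = count()
    heapq.heapify(leaves)
    expanded = []
    while leaves:
        _, _, hyp = heapq.heappop(leaves)
        for child in children_of(hyp, new_place):
            heapq.heappush(expanded, (-posterior(child), next(ids), child))
        # Stop once the best expanded child is already more probable than
        # every remaining unexpanded leaf: those leaves stay in the tree
        # unpruned (so the correct map is never lost) but need not be
        # expanded yet, which is what makes the search usable online.
        if expanded and leaves and expanded[0][0] <= leaves[0][0]:
            break
    expanded.extend(leaves)   # carry the unexpanded, unpruned leaves forward
    return expanded
```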
A novel spring mechanism to reduce energy consumption of robotic arms
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6385488
M. Plooij, M. Wisse
{"title":"A novel spring mechanism to reduce energy consumption of robotic arms","authors":"M. Plooij, M. Wisse","doi":"10.1109/IROS.2012.6385488","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385488","url":null,"abstract":"Most conventional robotic arms use motors to accelerate the manipulator. This leads to an unnecessary high energy consumption when performing repetitive tasks. This paper presents an approach to reduce energy consumption in robotic arms by performing its repetitive tasks with the help of a parallel spring mechanism. A special non-linear spring characteristic has been achieved by attaching a spring to two connected pulleys. This parallel spring mechanism provides for the accelerations of the manipulator without compromising its ability to vary the task parameters (the time per stroke, the displacement per stroke the grasping time and the payload). The energy consumption of the arm with the spring mechanism is compared to that of the same arm without the spring mechanism. Optimal control studies show that the robotic arm uses 22% less energy due to the spring mechanism. On the 2 DOF prototype, we achieved an energy reduction of 20%. The difference was due to model simplifications. With a spring mechanism, there is an extra energetic cost, because potential energy has to be stored into the spring during startup. This cost is equal to the total energy savings of the 2 DOF arm during 8 strokes. Next, there could have been an energetic cost to position the manipulator outside the equilibrium position. We have designed the spring mechanism in such a way that this holding cost is negligible for a range of start- and end positions. The performed experiments showed that the implementation of the proposed spring mechanism results in a reduction of the energy consumption while the arm is still able to handle varying task parameters.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"85 1","pages":"2901-2908"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79728512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 64
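To make the energy argument concrete, the sketch below compares the motor energy of a single point-to-point stroke with and without a parallel spring tuned to the stroke time, for a 1-DOF link. All numbers are illustrative, and the non-linear characteristic shaped by the paper's two-pulley mechanism is not modeled here.

```python
import numpy as np

I = 0.05                         # link inertia about the joint (kg m^2), assumed
T = 0.5                          # stroke duration (s), assumed
k = I * (np.pi / T) ** 2         # spring tuned so a half-oscillation takes T

t = np.linspace(0.0, T, 2000)
s = t / T
q = -0.5 + 10 * s**3 - 15 * s**4 + 6 * s**5   # minimum-jerk stroke, -0.5 to +0.5 rad
qd = np.gradient(q, t)
qdd = np.gradient(qd, t)

tau_inertial = I * qdd           # torque needed to accelerate the link
tau_spring = -k * q              # parallel spring torque about mid-stroke

def motor_energy(tau):
    # electrical-energy proxy: integral of |tau * qd|, assuming no regeneration
    power = np.abs(tau * qd)
    return float(np.sum(0.5 * (power[:-1] + power[1:]) * np.diff(t)))

print("without spring: %.4f J" % motor_energy(tau_inertial))
print("with spring:    %.4f J" % motor_energy(tau_inertial - tau_spring))
```

With the spring carrying most of the acceleration torque, the motor only covers the mismatch between the minimum-jerk stroke and the spring's natural motion, which is where the reported savings come from.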
A system of automated training sample generation for visual-based car detection
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6386060
Chao Wang, Huijing Zhao, F. Davoine, H. Zha
{"title":"A system of automated training sample generation for visual-based car detection","authors":"Chao Wang, Huijing Zhao, F. Davoine, H. Zha","doi":"10.1109/IROS.2012.6386060","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386060","url":null,"abstract":"This paper presents a system to automatically generate car sample dataset for visual-based car detector training. The dataset contains multi-view car samples labeled with the car's pose, so that a view-discriminative training and car detection is also available. There are mainly two parts in the system: laser-based car detection and tracking generates motion trajectories of on-road cars, and then visual samples are extracted by fusing the detection and tracking results with visual-based detection. A multi-modal sensor system is developed for the omni-directional data collection on a test-bed vehicle. By processing the data of experiment conducted on the freeway of Beijing, a large number of multi-view car samples with pose information were generated. The samples' quality is evaluated by applying it in a visual car detector's training and testing procedure.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"11 1","pages":"4169-4176"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83555253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
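At its core, the fusion step that turns laser tracks into labeled visual samples amounts to projecting a tracked 3D bounding box through the camera model and cropping the image. The sketch below shows that projection under a generic pinhole model; the calibration inputs and box parameterization are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def track_to_bbox(center_xyz, size_lwh, K, R, t_cam):
    """Project a laser-tracked car's 3D bounding box into the image plane to
    obtain a labeled 2D crop. K (intrinsics) and R, t_cam (extrinsics) are
    assumed calibration inputs."""
    l, w, h = size_lwh
    cx, cy, cz = center_xyz
    corners = np.array([[cx + sx * l / 2, cy + sy * w / 2, cz + sz * h / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    cam = R @ corners.T + t_cam.reshape(3, 1)    # world frame -> camera frame
    uv = K @ cam                                 # pinhole projection
    uv = uv[:2] / uv[2]                          # perspective division
    (u0, v0), (u1, v1) = uv.min(axis=1), uv.max(axis=1)
    return int(u0), int(v0), int(u1), int(v1)

# The pose label can come from the track itself, e.g. the heading of the
# trajectory relative to the camera: heading = atan2(dy, dx) between
# consecutive tracked positions.
```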
Robust and fast visual tracking using constrained sparse coding and dictionary learning
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6385459
Tianxiang Bai, Youfu Li, Xiaolong Zhou
{"title":"Robust and fast visual tracking using constrained sparse coding and dictionary learning","authors":"Tianxiang Bai, Youfu Li, Xiaolong Zhou","doi":"10.1109/IROS.2012.6385459","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385459","url":null,"abstract":"We present a novel appearance model using sparse coding with online sparse dictionary learning techniques for robust visual tracking. In the proposed appearance model, the target appearance is modeled via online sparse dictionary learning technique with an “elastic-net constraint”. This scheme allows us to capture the characteristics of the target local appearance, and promotes the robustness against partial occlusions during tracking. Additionally, we unify the sparse coding and online dictionary learning by defining a “sparsity consistency constraint” that facilitates the generative and discriminative capabilities of the appearance model. Moreover, we propose a robust similarity metric that can eliminate the outliers from the corrupted observations. We then integrate the proposed appearance model with the particle filter framework to form a robust visual tracking algorithm. Experiments on publicly available benchmark video sequences demonstrate that the proposed appearance model improves the tracking performance compared with other state-of-the-art approaches.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"55 1","pages":"3824-3829"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81198314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
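The sketch below illustrates the elastic-net coding idea at the heart of such an appearance model: a patch is coded over a learned dictionary, and the reconstruction error serves as a similarity score. It uses scikit-learn's generic `ElasticNet` and `MiniBatchDictionaryLearning` as stand-ins for the authors' online learner; the patch size and penalty weights are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 64))    # stand-in for vectorized target patches

# Learn a dictionary online from target patches (32 atoms x 64 dims).
dico = MiniBatchDictionaryLearning(n_components=32, batch_size=10, random_state=0)
D = dico.fit(patches).components_

def code_and_score(patch, D, alpha=0.05, l1_ratio=0.5):
    """Elastic-net sparse code of `patch` over dictionary D; returns the
    code and the reconstruction error used as a similarity score."""
    enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
    enet.fit(D.T, patch)                    # solve patch ~= D.T @ code
    code = enet.coef_
    residual = patch - D.T @ code
    return code, float(np.linalg.norm(residual))

code, err = code_and_score(patches[0], D)
print("nonzeros in code:", np.count_nonzero(code), "reconstruction error:", err)
```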
Fighting fires with human robot teams
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6386269
E. Martinson, W. Lawson, Samuel Blisard, Anthony M. Harrison, J. Trafton
{"title":"Fighting fires with human robot teams","authors":"E. Martinson, W. Lawson, Samuel Blisard, Anthony M. Harrison, J. Trafton","doi":"10.1109/IROS.2012.6386269","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386269","url":null,"abstract":"This video submission demonstrates cooperative human-robot firefighting. A human team leader guides the robot to the fire using a combination of speech and gesture.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"29 1","pages":"2682-2683"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81207661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 12
An integrated approach of attention control of target human by nonverbal behaviors of robots in different viewing situations
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6385480
M. M. Hoque, Dipankar Das, Tomomi Onuki, Yoshinori Kobayashi, Y. Kuno
{"title":"An integrated approach of attention control of target human by nonverbal behaviors of robots in different viewing situations","authors":"M. M. Hoque, Dipankar Das, Tomomi Onuki, Yoshinori Kobayashi, Y. Kuno","doi":"10.1109/IROS.2012.6385480","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385480","url":null,"abstract":"A major challenge in HRI is to design a social robot that can attract a target human's attention to control his/her attention toward a particular direction in various social situations. If a robot would like to initiate an interaction with a person, it may turn its gaze to him/her for eye contact. However, it is not an easy task for the robot to make eye contact because such a turning action alone may not be enough to initiate an interaction in all situations, especially when the robot and the human are not facing each other or the human intensely attends to his/her task. In this paper, we propose a conceptual model of attention control with four phases: attention attraction, eye contact, attention avoidance, and attention shift. In order to initiate an attention control process, the robot first tries to gain the target participant's attention toward it through head turning, or head shaking action depending on the three viewing situations where the robot is captured in his/her field of view (central field of view, near peripheral field of view, and far peripheral field of view). After gaining her/his attention, the robot makes eye contact only with the target person through showing gaze awareness by blinking its eyes, and directs her/his attention toward an object by turning its eyes and head cues. Moreover, the robot can show attention to aversion behaviors if non-target persons look at it. We design a robot based on the proposed approach, and it is confirmed as effective to control the target participant's attention in experimental evaluation.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"18 1","pages":"1399-1406"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88575610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
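A minimal sketch of the four-phase model as a state machine follows. The assignment of cues to viewing situations and the transition conditions are assumptions based on the abstract, not the authors' implementation.

```python
from enum import Enum, auto

class Phase(Enum):
    ATTRACT = auto()
    EYE_CONTACT = auto()
    AVOID = auto()
    SHIFT = auto()

# Which cue goes with which viewing situation is assumed here; the abstract
# only says head turning or head shaking is chosen depending on the situation.
ATTRACTION_CUE = {
    "central": "turn head toward person",
    "near_peripheral": "head turning",
    "far_peripheral": "head shaking",
}

def step(phase, target_attending, person_is_target):
    """One transition of the four-phase attention-control model."""
    if not person_is_target:
        return Phase.AVOID              # avert attention from non-targets
    if phase is Phase.ATTRACT and target_attending:
        return Phase.EYE_CONTACT        # blink to show gaze awareness
    if phase is Phase.EYE_CONTACT and target_attending:
        return Phase.SHIFT              # turn eyes and head toward the object
    return phase

# Example: robot seen only in the corner of the eye -> use the stronger cue.
print(ATTRACTION_CUE["far_peripheral"], step(Phase.ATTRACT, True, True))
```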
Feature-based terrain classification for LittleDog
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6386042
Paul Filitchkin, Katie Byl
{"title":"Feature-based terrain classification for LittleDog","authors":"Paul Filitchkin, Katie Byl","doi":"10.1109/IROS.2012.6386042","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386042","url":null,"abstract":"Recent work in terrain classification has relied largely on 3D sensing methods and color based classification. We present an approach that works with a single, compact camera and maintains high classification rates that are robust to changes in illumination. Terrain is classified using a bag of visual words (BOVW) created from speeded up robust features (SURF) with a support vector machine (SVM) classifier. We present several novel techniques to augment this approach. A gradient descent inspired algorithm is used to adjust the SURF Hessian threshold to reach a nominal feature density. A sliding window technique is also used to classify mixed terrain images with high resolution. We demonstrate that our approach is suitable for small legged robots by performing real-time terrain classification on LittleDog. The classifier is used to select between predetermined gaits to traverse terrain of varying difficulty. Results indicate that real-time classification in-the-loop is faster than using a single all-terrain gait.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"24 1","pages":"1387-1392"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87229969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 99
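The threshold-adaptation idea can be sketched directly: detect, compare the feature count to a nominal density, and nudge the Hessian threshold in proportion to the error. The constants below are illustrative assumptions, and SURF requires an opencv-contrib build (cv2.xfeatures2d), since it is patented and absent from stock OpenCV.

```python
import cv2

def detect_with_nominal_density(gray, target_per_px=0.0005, thresh=400.0,
                                step=0.3, iters=5):
    """Adjust the SURF Hessian threshold toward a nominal feature density,
    in the spirit of the gradient-descent-style adaptation the paper
    describes (constants here are illustrative, not the paper's)."""
    target = target_per_px * gray.shape[0] * gray.shape[1]
    kps = []
    for _ in range(iters):
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=thresh)
        kps = surf.detect(gray, None)
        error = (len(kps) - target) / max(target, 1.0)
        if abs(error) < 0.1:              # close enough to nominal density
            break
        thresh *= 1.0 + step * error      # too many features -> raise threshold
    return kps, thresh

# Usage with a hypothetical terrain image:
# gray = cv2.imread("terrain.jpg", cv2.IMREAD_GRAYSCALE)
# kps, thr = detect_with_nominal_density(gray)
```

The resulting SURF descriptors would then be quantized against a k-means vocabulary into a BOVW histogram and fed to the SVM, per the abstract.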
Segmentation of unknown objects in indoor environments
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6385661
A. Richtsfeld, Thomas Morwald, J. Prankl, M. Zillich, M. Vincze
{"title":"Segmentation of unknown objects in indoor environments","authors":"A. Richtsfeld, Thomas Morwald, J. Prankl, M. Zillich, M. Vincze","doi":"10.1109/IROS.2012.6385661","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385661","url":null,"abstract":"We present a framework for segmenting unknown objects in RGB-D images suitable for robotics tasks such as object search, grasping and manipulation. While handling single objects on a table is solved, handling complex scenes poses considerable problems due to clutter and occlusion. After pre-segmentation of the input image based on surface normals, surface patches are estimated using a mixture of planes and NURBS (non-uniform rational B-splines) and model selection is employed to find the best representation for the given data. We then construct a graph from surface patches and relations between pairs of patches and perform graph cut to arrive at object hypotheses segmented from the scene. The energy terms for patch relations are learned from user annotated training data, where support vector machines (SVM) are trained to classify a relation as being indicative of two patches belonging to the same object. We show evaluation of the relations and results on a database of different test sets, demonstrating that the approach can segment objects of various shapes in cluttered table top scenes.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"os-39 1","pages":"4791-4796"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87423872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 172
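The sketch below shows the relation-classification idea with a simplified grouping step: an SVM scores each patch pair as same-object or not, and patches are merged accordingly. The union-find merge is a greedy stand-in for the paper's graph cut, and the relation features and training labels are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder relation features for patch pairs (e.g. color similarity,
# normal angle, boundary continuity) with fake same-object labels.
rng = np.random.default_rng(1)
X_train = rng.standard_normal((300, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
svm = SVC(probability=True).fit(X_train, y_train)

def group_patches(n_patches, pair_features):
    """Greedy stand-in for the graph-cut step: merge patches whose relation
    is classified 'same object'. `pair_features` maps (i, j) -> feature vec."""
    parent = list(range(n_patches))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for (i, j), f in pair_features.items():
        p_same = svm.predict_proba([f])[0][1]
        if p_same > 0.5:
            parent[find(i)] = find(j)       # object hypotheses = merged groups
    return [find(i) for i in range(n_patches)]

pairs = {(0, 1): rng.standard_normal(4), (1, 2): rng.standard_normal(4)}
print(group_patches(3, pairs))              # group label per patch
```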
Playmate robots that can act according to a child's mental state
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems Pub Date: 2012-12-24 DOI: 10.1109/IROS.2012.6386037
Kasumi Abe, Akiko Iwasaki, Tomoaki Nakamura, T. Nagai, A. Yokoyama, T. Shimotomai, Hiroyuki Okada, T. Omori
{"title":"Playmate robots that can act according to a child's mental state","authors":"Kasumi Abe, Akiko Iwasaki, Tomoaki Nakamura, T. Nagai, A. Yokoyama, T. Shimotomai, Hiroyuki Okada, T. Omori","doi":"10.1109/IROS.2012.6386037","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386037","url":null,"abstract":"We propose a playmate robot system that can play with a child. Unlike many therapeutic service robots, our proposed playmate system is implemented as a functionality of the domestic service robot with a high degree of freedom. This implies that the robot can play high-level games with children, i.e., beyond therapeutic play, using its physical features. The proposed system currently consists of ten play modules, including a chatbot with eye contact, card playing, and drawing. The algorithms of these modules are briefly discussed in this paper. To sustain the player's interest in the system, we also propose an action-selection strategy based on a transition model of the child's mental state. The robot can estimate the child's state and select an appropriate action in the course of play. A portion of the proposed algorithms was implemented on a real robot platform, and experiments were carried out to design and evaluate the proposed system.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"183 1","pages":"4660-4667"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88178339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
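The action-selection strategy can be sketched as one-step lookahead over an assumed mental-state transition model: keep a belief over the child's state and pick the play module with the highest expected engagement. The states, modules, rewards, and transition matrices below are illustrative placeholders, not the authors' learned model.

```python
import numpy as np

STATES = ["engaged", "neutral", "bored"]
REWARD = np.array([1.0, 0.3, 0.0])         # value of each resulting state

TRANSITIONS = {                             # P(next state | state, action)
    "card_game": np.array([[0.8, 0.15, 0.05],
                           [0.5, 0.30, 0.20],
                           [0.3, 0.40, 0.30]]),
    "drawing":   np.array([[0.7, 0.20, 0.10],
                           [0.6, 0.30, 0.10],
                           [0.4, 0.40, 0.20]]),
}

def select_action(belief):
    """Choose the play module with the highest expected engagement one step
    ahead, given a belief vector over STATES."""
    def expected_value(T):
        return float(belief @ T @ REWARD)   # E[reward of next state]
    return max(TRANSITIONS, key=lambda a: expected_value(TRANSITIONS[a]))

print(select_action(np.array([0.1, 0.3, 0.6])))  # a mostly-bored child
```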