2017 18th International Conference on Advanced Robotics (ICAR): Latest Publications

Cooperative motion planning of redundant rover manipulators on uneven terrains
2017 18th International Conference on Advanced Robotics (ICAR) Pub Date : 2017-07-01 DOI: 10.1109/ICAR.2017.8023502
R. Raja, B. Dasgupta, A. Dutta
Abstract: In this paper we consider the problem of cooperative motion planning for a redundant mobile manipulator on uneven terrain. The approach formulates trajectory planning as a nonlinear constrained minimization of the joint-angle movement of the mobile manipulator at each instant. The main problems to solve are (i) the redundancy in the system, taking the parameters of wheel-terrain interaction into account, (ii) the cooperative behavior of the mobile manipulator while performing the task, and (iii) manipulability issues. To perform a task, the manipulator moves toward the desired location while the mobile robot moves to enlarge the manipulator's task space. Weighting factors define the relative importance of moving each joint of the mobile manipulator, and a quality measure quantifies the capability of the mobile manipulator in a particular configuration. Trajectory planning and redundancy resolution are solved with the Augmented Lagrangian Method (ALM). Several simulations were performed to evaluate the method; the simulation and experimental results show that it produces feasible trajectories and successfully tracks the desired end-effector path.
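The abstract frames redundancy resolution as minimizing weighted joint motion under an end-effector constraint, solved by ALM. The numpy sketch below illustrates that idea on a toy linearized problem; it is not the authors' implementation, and `J`, `W`, `mu`, and the example dimensions are all illustrative assumptions.

```python
import numpy as np

def alm_redundancy_resolution(J, dx, W, mu=100.0, iters=50):
    """Minimize 0.5 * dq^T W dq subject to J dq = dx with an
    augmented Lagrangian iteration (multiplier update lam += mu * c).
    Toy sketch of ALM-style redundancy resolution, not the paper's code."""
    m, n = J.shape
    lam = np.zeros(m)
    dq = np.zeros(n)
    for _ in range(iters):
        # Inner minimization is quadratic, so it has a closed-form step.
        A = W + mu * J.T @ J
        b = J.T @ (mu * dx - lam)
        dq = np.linalg.solve(A, b)
        c = J @ dq - dx          # constraint violation
        lam = lam + mu * c       # multiplier update
    return dq

# Toy example: 2-D task, 4 redundant joints, base joints weighted heavier.
rng = np.random.default_rng(0)
J = rng.standard_normal((2, 4))      # illustrative task Jacobian
dx = np.array([0.1, -0.05])          # desired end-effector displacement
W = np.diag([4.0, 4.0, 1.0, 1.0])    # base joints cost more to move
dq = alm_redundancy_resolution(J, dx, W)
print(np.allclose(J @ dq, dx, atol=1e-6))   # → True
```

The weighting matrix `W` plays the role of the paper's weighting factors: joints with larger weights move less in the resolved motion.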
Citations: 0
Controlled tactile exploration and haptic object recognition
2017 18th International Conference on Advanced Robotics (ICAR) Pub Date : 2017-06-27 DOI: 10.1109/ICAR.2017.8023495
Massimo Regoli, Nawid Jamali, G. Metta, L. Natale
Abstract: In this paper we propose a novel method for in-hand object recognition. The method combines a grasp-stabilization controller with two exploratory behaviours that capture the shape and softness of an object. Grasp stabilization plays an important role in recognizing objects: first, it prevents the object from slipping and facilitates exploration; second, reaching a stable and repeatable pose adds robustness to the learning algorithm and increases invariance to how the robot grasps the object. The stable poses are estimated with a Gaussian mixture model (GMM). We present experimental results showing that with our method the classifier can successfully distinguish 30 objects. We also compare against a benchmark in which grasp stabilization is disabled and show, with statistical significance, that our method outperforms it.
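The abstract estimates stable grasp poses with a GMM. As a self-contained illustration of that modeling step, here is a minimal 1-D EM fit in plain numpy; the data, component count, and quantile initialization are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """Fit a k-component 1-D Gaussian mixture with plain EM;
    returns (weights, means, stds). Minimal sketch, not the paper's GMM."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out init
    sigma = np.full(k, x.std() / k + 1e-9)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        d = (x[:, None] - mu) / sigma
        logp = -0.5 * d**2 - np.log(sigma) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads.
        nk = r.sum(axis=0) + 1e-12
        pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sigma

# Simulated encoder readings clustered around two hypothetical stable poses.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.02, 200), rng.normal(0.8, 0.03, 200)])
_, mu, _ = fit_gmm_1d(x)
print(np.sort(mu).round(2))
```

The fitted component means recover the two simulated pose clusters; in the paper's setting each mixture component would summarize one stable grasp configuration.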
Citations: 10
Independent motion detection with event-driven cameras
2017 18th International Conference on Advanced Robotics (ICAR) Pub Date : 2017-06-27 DOI: 10.1109/ICAR.2017.8023661
Valentina Vasco, Arren J. Glover, Elias Mueggler, D. Scaramuzza, L. Natale, C. Bartolozzi
Abstract: Unlike standard cameras, which send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast, low-power vision algorithms on robots. Visual tracking, for example, is easily achieved even for very fast stimuli, since only moving objects cause brightness changes. However, a camera mounted on a moving robot is non-stationary, and the same tracking problem is confounded by background clutter events caused by the robot's ego-motion. In this paper we propose a method for segmenting the motion of an independently moving object with event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot, achieve a precision of roughly 90%, and show that the method is robust to changes in speed of both the head and the target.
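The detection rule described above, flagging corners whose measured velocity disagrees with the ego-motion prediction, can be sketched in a few lines. The per-corner linear velocity model `J_img`, the threshold, and the toy data are assumptions for illustration; the paper learns these statistics from data.

```python
import numpy as np

def detect_independent_motion(J_img, joint_vel, measured_vel, thresh=0.5):
    """Flag tracked corners whose measured image velocity disagrees with
    the velocity predicted from the robot's own joint motion.
    J_img: (n_corners, 2, n_joints) hypothetical learned velocity model."""
    predicted = J_img @ joint_vel               # (n, 2) ego-motion prediction
    residual = np.linalg.norm(measured_vel - predicted, axis=1)
    return residual > thresh                    # True = independently moving

joint_vel = np.array([0.2, -0.1])               # robot joint velocities
J_img = np.tile(np.eye(2), (3, 1, 1))           # 3 corners, identity model
measured = J_img @ joint_vel                    # background follows ego-motion
measured[2] += np.array([2.0, 0.0])             # corner 2 has extra motion
print(detect_independent_motion(J_img, joint_vel, measured))  # → [False False True]
```

Only the corner with motion unexplained by the joint velocities is flagged, which is the core of the segmentation step.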
Citations: 27
Robot trajectory planning method based on genetic chaos optimization algorithm
2017 18th International Conference on Advanced Robotics (ICAR) Pub Date : 2017-05-31 DOI: 10.1109/ICAR.2017.8023673
Qiwan Zhang, Mingting Yuan, R. Song
Abstract: To smooth the trajectory of the robot end-effector and optimize the robot's running time, this paper presents a new trajectory planning method based on a genetic chaos optimization algorithm. First, quintic polynomials are used to interpolate the position nodes in joint space, modeling the robot's running trajectory. A genetic chaos optimization algorithm combining a genetic algorithm with a chaos algorithm is then introduced. Finally, simulation and analysis show that the method makes the end-effector trajectory smooth and time-optimal under velocity, acceleration, and jerk constraints.
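The quintic interpolation step is standard: a degree-5 polynomial has exactly six coefficients, enough to match position, velocity, and acceleration at both ends of a segment. A minimal numpy version (the boundary values below are illustrative, not from the paper):

```python
import numpy as np

def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Coefficients a0..a5 of q(t) = sum a_i t^i matching position,
    velocity, and acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0,    0,      0,       0,        0],        # q(0)
        [0, 1,    0,      0,       0,        0],        # q'(0)
        [0, 0,    2,      0,       0,        0],        # q''(0)
        [1, T,    T**2,   T**3,    T**4,     T**5],     # q(T)
        [0, 1,    2*T,    3*T**2,  4*T**3,   5*T**4],   # q'(T)
        [0, 0,    2,      6*T,     12*T**2,  20*T**3],  # q''(T)
    ], dtype=float)
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

# Rest-to-rest joint motion from 0 to 1 rad over 2 s.
c = quintic_coeffs(q0=0.0, qf=1.0, T=2.0)
t = np.linspace(0.0, 2.0, 201)
q = sum(ci * t**i for i, ci in enumerate(c))
print(np.allclose([q[0], q[-1]], [0.0, 1.0]))   # → True
```

The genetic chaos optimizer then searches over the segment durations (and node placement) to minimize total time while keeping velocity, acceleration, and jerk within limits; that outer search is not sketched here.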
Citations: 6
A fully end-to-end deep learning approach for real-time simultaneous 3D reconstruction and material recognition
2017 18th International Conference on Advanced Robotics (ICAR) Pub Date : 2017-03-14 DOI: 10.1109/ICAR.2017.8023499
Cheng Zhao, Li Sun, R. Stolkin
Abstract: This paper addresses simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal, etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs, etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing with fully connected Conditional Random Fields (CRFs) to achieve consistent segmentations. In contrast, we propose a deep learning method that performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, our approach is fully end-to-end and requires neither hand-crafted features nor CRF post-processing: we use only learned features, and the CRF segmentation constraints are incorporated inside the end-to-end learned system. We present experiments in which we trained the system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. Run-time performance can be boosted to around 10 Hz on a conventional GPU, which is enough for real-time semantic reconstruction with a 30 fps RGB-D camera. To the best of our knowledge, this is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.
Citations: 38
Non-iterative SLAM
2017 18th International Conference on Advanced Robotics (ICAR) Pub Date : 2017-01-19 DOI: 10.1109/ICAR.2017.8023500
Chen Wang, Junsong Yuan, Lihua Xie
Abstract: The goal of this paper is a new framework for dense SLAM, based on a depth camera and an inertial sensor, that is light enough for micro-robot systems. Feature-based and direct methods are the two mainstreams in visual SLAM; both minimize photometric or reprojection error with iterative solvers, which is computationally expensive. To overcome this, we propose a non-iterative framework that reduces the computational requirement. First, an attitude and heading reference system (AHRS) and axonometric projection are used to decouple the 6-DoF data so that point clouds can be matched in independent spaces. Second, based on single key-frame training, matching is carried out in the frequency domain via the Fourier transform, which provides a closed-form, non-iterative solution. In this manner, the time complexity is reduced to O(n log n), where n is the number of matched points per frame. To the best of our knowledge, this is the first non-iterative, online-trainable approach to data association in visual SLAM. Compared with the state of the art, it runs faster and obtains 3D maps with higher resolution at comparable accuracy.
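Closed-form matching in the frequency domain is typified by phase correlation: one FFT-based correlation recovers a translation without any iterative optimization. The sketch below demonstrates the general technique on a synthetic image; it is not the paper's pipeline, which matches decoupled point-cloud projections.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation taking b to a via the
    normalized cross-power spectrum (one FFT pass, no iterations)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                     # keep phase, drop magnitude
    corr = np.fft.ifft2(F).real                # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts into the signed range.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))   # → (5, -3)
```

Because the answer falls out of a single forward/inverse FFT pair, the cost is O(n log n) in the number of pixels, mirroring the complexity the abstract reports for point matching.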
Citations: 25
PCA-aided fully convolutional networks for semantic segmentation of multi-channel fMRI
2017 18th International Conference on Advanced Robotics (ICAR) Pub Date : 2016-10-06 DOI: 10.1109/ICAR.2017.8023506
L. Tai, Haoyang Ye, Qiong Ye, Ming Liu
Abstract: Semantic segmentation of functional magnetic resonance imaging (fMRI) is valuable for pathology diagnosis and for the decision systems of medical robots. Multi-channel fMRI provides more information about pathological features, but the increased amount of data complicates feature detection. This paper proposes a principal component analysis (PCA)-aided fully convolutional network tailored to multi-channel fMRI. We transfer the learned weights of contemporary classification networks to the segmentation task by fine-tuning, and compare the network's results with various methods, e.g. k-NN. A new labeling strategy is proposed to handle semantic segmentation with unclear boundaries. Even with a small training dataset, the test results demonstrate that our model outperforms other pathological feature detection methods; moreover, forward inference takes only 90 milliseconds for a single set of fMRI data. To our knowledge, this is the first pixel-wise labeling of multi-channel magnetic resonance images using an FCN.
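The PCA-aided step compresses the channel dimension before the FCN sees the data. A minimal numpy sketch of per-pixel channel PCA follows; the array shapes and channel counts are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def pca_reduce_channels(volume, n_keep):
    """Project a multi-channel image (H, W, C) onto its n_keep principal
    channel components, treating every pixel as a C-dimensional sample."""
    H, W, C = volume.shape
    X = volume.reshape(-1, C)
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n_keep].T).reshape(H, W, n_keep)

rng = np.random.default_rng(0)
vol = rng.standard_normal((8, 8, 6))     # toy 6-channel "fMRI" patch
reduced = pca_reduce_channels(vol, n_keep=3)
print(reduced.shape)   # → (8, 8, 3)
```

The reduced volume keeps the highest-variance channel combinations, so a network downstream processes fewer input planes with most of the signal retained.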
Citations: 13
ROSRemote, using ROS on cloud to access robots remotely
2017 18th International Conference on Advanced Robotics (ICAR) DOI: 10.1109/ICAR.2017.8023621
Alyson B. M. Pereira, G. S. Bastos
Abstract: Cloud computing is an area that currently attracts much research and is expanding beyond data processing into robotics. Cloud robotics is becoming a well-known subject, but it is mostly used to process data faster, much like cloud computing itself. In this paper we use the cloud not only for that kind of operation but also to create a framework that helps users work with ROS on a remote master, making it possible to build applications that run remotely. Using SpaceBrew, we do not have to worry about finding the robots' addresses, which makes the application easier to implement because programmers only have to code as if the application were local.
Citations: 14