2020 17th International Conference on Ubiquitous Robots (UR): Latest Publications

Fast In-situ Mesh Generation using Orb-SLAM2 and OpenMVS
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144879
Thomas Wright, Toshihide Hanari, K. Kawabata, B. Lennox
Abstract: In exploratory robotics for nuclear decommissioning, environmental understanding is key. Sites such as Fukushima Daiichi Power Station and Sellafield often use manually controlled or semi-autonomous vehicles for exploration and monitoring of assets. In many cases, robots have limited sensing capabilities, such as a single camera providing video to the operators. These limitations can cause issues: a lack of data about the environment and limited understanding of depth within the image can lead to a misunderstanding of asset state or to damage being caused to the robot or environment. This work aims to aid operators by using the limited sensors provided, i.e. a single monocular camera, to generate estimates of the robot's surrounding environment in situ, without having to offload large amounts of data for processing. This information can then be displayed as a mesh and manipulated in 3D to improve operator awareness. Because the target environment is radioactive and radiation damages electronics, speed is prioritised over accuracy. In well-lit environments, images can be overlaid onto the meshes to improve the operator's understanding and add detail to the mesh. The results show that 3D meshes of an environment or object can be generated in an acceptable time frame of less than 5 minutes. This differs from many current methods, which either require offline processing due to the heavy computational requirements of photogrammetry, or are far less informative, producing raw point clouds that can be hard to interpret. Because of its speed, the proposed technique allows lower-resolution meshes, good enough for avoiding collisions within an environment, to be generated during a mission; however, several issues still need to be solved before such a technique is ready for deployment.
Citations: 3
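To make the pipeline above concrete, the sketch below drives the stock OpenMVS command-line tools from Python. It assumes the ORB-SLAM2 keyframe poses and sparse map have already been exported to an OpenMVS scene file (scene.mvs) by a separate converter, that the OpenMVS binaries are on the PATH, and that the default output file names follow OpenMVS's usual suffix convention; none of this reproduces the authors' exact implementation.

```python
# Hypothetical sketch: running OpenMVS tools on a scene exported from an ORB-SLAM2 run.
# The export step (keyframe poses + sparse map -> scene.mvs) is assumed to exist and is
# not shown; output file names follow OpenMVS's usual defaults and may vary by version.
import subprocess
from pathlib import Path

def build_mesh(scene_mvs: str, work_dir: str, densify: bool = False) -> Path:
    """Turn an exported OpenMVS scene into a textured mesh using the stock tools."""
    work = Path(work_dir)
    scene = Path(scene_mvs)

    def run(tool: str, *args: str) -> None:
        # check=True raises if a stage fails, so a broken scene stops the pipeline early.
        subprocess.run([tool, *args, "-w", str(work)], check=True)

    if densify:
        # Optional densification; skipped when speed matters more than detail.
        run("DensifyPointCloud", str(scene))
        scene = scene.with_name(scene.stem + "_dense.mvs")   # assumed default output name

    run("ReconstructMesh", str(scene))                        # point cloud -> mesh
    meshed = scene.with_name(scene.stem + "_mesh.mvs")
    run("TextureMesh", str(meshed))                           # project images onto the mesh
    return meshed.with_name(meshed.stem + "_texture.mvs")

# Example: build_mesh("scene.mvs", work_dir="./mvs", densify=False)
```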
Persistent Area Coverage for Swarms Utilizing Deployment Entropy with Potential Fields
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144917
John D. Kelly, D. Lofaro, D. Sofge
Abstract: Our work focuses on persistent area coverage using a large number of agents. This is a valuable capability for multi-agent and swarm-based systems. Specifically, we strive to disperse the agents throughout an area of interest such that it is sufficiently and persistently covered by the sensing sweeps of the agents. This capability can be applied to tasks such as surveillance, target tracking, search and rescue, and exploration of unknown areas. Many methods can be implemented as agent behaviors to accomplish this. One strategy involves measuring area coverage using a measure known as deployment entropy, which relies on the area being divided into regions. Deployment entropy expresses the coverage of the area as the uniformity of the number of agents per region across all regions. This strategy is useful due to its low computational complexity, scalability, and potential implementation on decentralized systems. Though previous results are promising, they focus on instantaneous area coverage and are not persistent. This paper proposes that combining the split-region strategy with potential fields retains the benefits of the split-region strategy while increasing the spread of agents and therefore the total area persistently covered by the agents' sensors. This approach is implemented and demonstrated to be effective through simulations with various numbers and densities of agents. Ultimately, these studies show that a greater spread of agents and increased sensor coverage are obtained compared to previous results that did not combine potential fields with deployment entropy.
Citations: 3
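As an illustration of the two ingredients named in the abstract, the sketch below computes a deployment-entropy score over a grid of regions and a simple pairwise repulsive potential-field term. The Shannon-entropy formulation and the 1/d² repulsion law are assumptions for illustration, not the authors' exact definitions.

```python
# Minimal sketch: deployment entropy over grid regions + a pairwise repulsive term.
import numpy as np

def deployment_entropy(positions: np.ndarray, bounds: float, n_regions: int) -> float:
    """Shannon entropy of the agent count per grid region (higher = more uniform)."""
    cell = bounds / n_regions
    idx = np.clip((positions // cell).astype(int), 0, n_regions - 1)
    counts = np.zeros((n_regions, n_regions))
    for ix, iy in idx:
        counts[ix, iy] += 1
    p = counts.flatten() / len(positions)
    p = p[p > 0]                       # 0 * log 0 is taken as 0
    return float(-(p * np.log(p)).sum())

def repulsive_velocities(positions: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Simple potential-field term: each agent is pushed away from every neighbour."""
    diff = positions[:, None, :] - positions[None, :, :]           # pairwise offsets i - j
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(positions))  # +eye: self term stays zero
    force = gain * diff / dist[..., None] ** 3                     # ~1/d^2 magnitude, away from j
    return force.sum(axis=1)

# Example: 20 agents in a 10x10 area split into a 4x4 region grid.
pos = np.random.default_rng(0).uniform(0, 10, size=(20, 2))
print(deployment_entropy(pos, bounds=10.0, n_regions=4))
print(repulsive_velocities(pos).shape)   # (20, 2) velocity commands
```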
A Fusion of CNNs and ICP for 3-D Point Cloud Registration*
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144767
Wen-Chung Chang, Van-Toan Pham, Yang-Cheng Huang
Abstract: 3-D point cloud registration is one of the principal techniques for estimating object pose in 3-D space and is critical to object picking and assembly in automated manufacturing lines. Accordingly, this paper proposes an effective registration architecture aimed at estimating the transformation between a data point cloud and a model point cloud. Specifically, in the first registration stage, a trainable Convolutional Neural Network (CNN) model is developed to learn the pose estimation between two point clouds over the full orientation range from −180° to 180°. To generate the training data set, a descriptor is proposed to extract features from the point clouds, which are then used to train the CNN model. Then, based on the rough estimate from the trained CNN model in the first stage, the two point clouds are aligned precisely in the second stage using the Iterative Closest Point (ICP) algorithm. Finally, the performance of the proposed two-stage registration architecture is verified by experiments in comparison with a baseline method. The experimental results illustrate that the developed algorithm guarantees high precision while significantly reducing the estimation time.
Citations: 0
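The two-stage idea can be sketched with a placeholder for the learned coarse estimator and Open3D's stock point-to-point ICP for the refinement stage. The CNN and descriptor are represented here only by a centroid-alignment stub; everything else follows the public Open3D registration API.

```python
# Sketch of the two-stage idea: a coarse pose (placeholder for the CNN) refined by ICP.
import numpy as np
import open3d as o3d

def coarse_pose_from_cnn(source_pts: np.ndarray, target_pts: np.ndarray) -> np.ndarray:
    """Placeholder for the trained CNN: returns a rough 4x4 transform estimate."""
    guess = np.eye(4)
    guess[:3, 3] = target_pts.mean(axis=0) - source_pts.mean(axis=0)  # centroid alignment only
    return guess

def register(source_pts: np.ndarray, target_pts: np.ndarray, icp_dist: float = 0.05) -> np.ndarray:
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(source_pts)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(target_pts)
    init = coarse_pose_from_cnn(source_pts, target_pts)        # stage 1: rough estimate
    result = o3d.pipelines.registration.registration_icp(      # stage 2: local ICP refinement
        source, target, icp_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return np.asarray(result.transformation)
```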
A Proactive Trajectory Planning Algorithm for Autonomous Mobile Robots in Dynamic Social Environments
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144925
L. A. Nguyen, T. Pham, T. Ngo, Xuan-Tung Truong
Abstract: This paper proposes a proactive trajectory planning algorithm for autonomous mobile robots in dynamic social environments. The main idea of the proposed proactive timed elastic band (PTEB) system is to combine the advantages of the timed elastic band (TEB) technique and the hybrid reciprocal velocity obstacle (HRVO) model by incorporating the potential collision generated by the HRVO model into the objective function of the TEB technique. The output of the proposed PTEB system is an optimal trajectory that enables mobile robots to navigate safely in dynamic social environments. We validate the effectiveness of the proposed model through a series of experiments in simulation environments. The simulation results show that our proposed motion model is capable of driving mobile robots to proactively avoid dynamic obstacles, providing safe navigation for the robots.
Citations: 4
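A heavily simplified stand-in for the PTEB objective is shown below: a TEB-style cost (time plus smoothness) augmented with a penalty whenever the trajectory is predicted to come close to a moving obstacle. Real TEB optimizes many more terms through g2o and real HRVO reasons in velocity space, so the distance-based penalty here only indicates where the HRVO-derived term would enter the objective.

```python
# Simplified illustration of augmenting a TEB-style cost with a social/collision penalty.
import numpy as np

def pteb_cost(traj, dt, obstacles, w_time=1.0, w_smooth=0.1, w_social=5.0, safe_dist=0.8):
    """traj: (T, 2) robot waypoints; obstacles: list of (pos, vel) for moving agents."""
    traj = np.asarray(traj)
    time_cost = w_time * len(traj) * dt                       # prefer short trajectories
    accel = np.diff(traj, n=2, axis=0)
    smooth_cost = w_smooth * np.sum(accel ** 2)               # prefer smooth motion

    social_cost = 0.0
    for k, p in enumerate(traj):                              # predict obstacles forward in time
        for obs_pos, obs_vel in obstacles:
            predicted = np.asarray(obs_pos) + k * dt * np.asarray(obs_vel)
            gap = np.linalg.norm(p - predicted)
            if gap < safe_dist:                               # penalize predicted near-collisions
                social_cost += w_social * (safe_dist - gap) ** 2
    return time_cost + smooth_cost + social_cost

# Example: straight-line candidate trajectory vs. one pedestrian walking across it.
candidate = np.stack([np.linspace(0, 5, 26), np.zeros(26)], axis=1)
print(pteb_cost(candidate, dt=0.2, obstacles=[((2.5, -1.0), (0.0, 0.5))]))
```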
Task Planning with Mixed-Integer Programming for Multiple Cooking Task Using dual-arm Robot
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144803
June-sup Yi, M. Ahn, Hosik Chae, Hyunwoo Nam, Donghun Noh, D. Hong, H. Moon
Abstract: This work proposes a task scheduling method in an optimization framework, with application to a dual-arm cooking robot in a controlled cooking environment. A mixed-integer programming (MIP) framework is used to find an optimal sequence of tasks to be done by each arm. The optimization is fast and simple because a priori information about the tasks to be scheduled reveals dependency and kinematic constraints between them, which significantly reduces the problem size by removing infeasible solutions before optimization. The feasibility of the optimization approach is validated in a series of simulations, and an in-depth scalability analysis is conducted by varying the number of tasks to be done, the dishes to be completed, and the locations where the tasks can be done. Considering the unique configuration of the platform, an analysis is also performed on selecting the tasks that require the minimum time as opposed to the tasks that give the other arm the most flexibility. An example on a real set of tasks is presented to show the optimality of the solution.
Citations: 3
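A toy mixed-integer program in the spirit of the paper is sketched below with PuLP: tasks are assigned to two arms to minimize the busier arm's finishing time, with a reachability restriction standing in for the kinematic constraints. The task names, durations, and solver choice are illustrative; the authors' full model also sequences tasks and encodes dependencies, which is omitted here.

```python
# Toy MIP: assign cooking tasks to two arms, minimizing the makespan (busier arm's load).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks = {"chop": 40, "stir": 25, "pour": 10, "plate": 15, "season": 5}   # durations (s), assumed
arms = ["left", "right"]
only_arm = {"pour": "right"}          # e.g. the bottle is only reachable by the right arm

prob = LpProblem("dual_arm_assignment", LpMinimize)
assign = LpVariable.dicts("assign", (tasks, arms), cat=LpBinary)
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                            # objective: minimize makespan
for t in tasks:
    prob += lpSum(assign[t][a] for a in arms) == 1          # every task done exactly once
for t, a in only_arm.items():
    prob += assign[t][a] == 1                               # kinematic reachability constraint
for a in arms:
    prob += lpSum(tasks[t] * assign[t][a] for t in tasks) <= makespan   # arm workload bound

prob.solve()
for t in tasks:
    chosen = next(a for a in arms if assign[t][a].value() > 0.5)
    print(f"{t:7s} -> {chosen}")
print("makespan:", makespan.value())
```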
A Stiffness-controlled Robotic Palm based on a Granular Jamming Mechanism
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144805
Jeongwon Lee, W. Han, Eunchan Kim, Ingu Choi, Sungwook Yang
Abstract: This paper presents a new type of robotic palm based on a granular jamming mechanism to improve grasping performance. The granular jamming principle is adopted to alter the shape and stiffness of the robotic palm by controlling the transition between the solid and fluid states of the granular material used. The robotic palm incorporates a specifically designed granular chamber that is optimized for dealing with large volume changes. A control system is also developed so that the proposed granular jamming mechanism can be electrically operated without any pneumatic components. In addition, the stiffness of the palm can be precisely regulated by feedback control of the negative pressure applied to the granular chamber. We evaluate the shape-adaptability of the robotic palm for various objects: the palm could accommodate the various shapes of the test objects by conformably altering its shape during contact. Moreover, the stiffness-controllability is investigated for three different sizes of granular materials; the stiffness increases up to 30-fold in the fully jammed state for the smallest grain size. Finally, we evaluate the grasping performance of the robotic palm with a commercially available robot hand. A 1.7 times higher grasping force was attained with the conformably deformed and stiffened surface compared to the flat skin of a rigid palm. Therefore, the stiffness-controlled robotic palm can improve grasping performance through enhanced shape-adaptability and stiffness-controllability.
Citations: 2
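The pressure-feedback idea can be illustrated with a simple PI loop that drives a vacuum pump until the chamber pressure matches a target derived from the commanded stiffness. The stiffness-to-pressure mapping, the sensor and pump interfaces, and the gains below are placeholders, not the authors' controller.

```python
# Hedged sketch: PI regulation of chamber vacuum so palm stiffness tracks a commanded level.
import time

def stiffness_to_vacuum(stiffness_level: float) -> float:
    """Assumed monotone mapping from commanded stiffness (0..1) to target vacuum (kPa)."""
    return -80.0 * max(0.0, min(1.0, stiffness_level))      # 0 kPa (soft) .. -80 kPa (jammed)

def regulate_vacuum(read_pressure, set_pump_duty, stiffness_level,
                    kp=0.02, ki=0.005, dt=0.01, steps=500):
    """PI loop: read_pressure() -> kPa, set_pump_duty(0..1) drives the vacuum pump (placeholders)."""
    target = stiffness_to_vacuum(stiffness_level)
    integral = 0.0
    for _ in range(steps):
        error = read_pressure() - target       # positive error -> not enough vacuum yet
        integral += error * dt
        duty = max(0.0, min(1.0, kp * error + ki * integral))
        set_pump_duty(duty)                    # more duty -> stronger suction -> stiffer palm
        time.sleep(dt)
```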
Pointing Direction Estimation for Attention Target Extraction Using Body-mounted Camera
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144978
Yusei Oozono, H. Yamazoe, Joo-Ho Lee
Abstract: In this paper, we propose a pointing-direction estimation method using a body-mounted camera. Opportunities to capture large amounts of image data in daily life are increasing due to the spread of smartphones and wearable cameras. To efficiently look back at the captured images, we aim to extract attention targets from the image sequences, because attention targets are important for reminding people of their memories. Toward this purpose, we propose a method for estimating the pointing direction from wearable camera images. The proposed method consists of two steps: arm skeleton estimation and pointing direction estimation. We build three types of pointing-direction estimation models and compare their accuracy to evaluate which body parts are important for pointing direction estimation. The experimental results show that the model based on the wrists and elbows had the best results.
Citations: 0
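The best-performing model in the abstract uses wrists and elbows; a minimal version of that idea treats the elbow-to-wrist segment as a pointing ray and intersects it with a plane holding candidate targets. The 3D joint input format is an assumption for illustration.

```python
# Minimal sketch: elbow-to-wrist ray as the pointing direction, intersected with a plane.
import numpy as np

def pointing_ray(elbow: np.ndarray, wrist: np.ndarray):
    """Return (origin, unit direction) of the pointing ray defined by elbow -> wrist."""
    direction = wrist - elbow
    norm = np.linalg.norm(direction)
    if norm < 1e-6:
        raise ValueError("elbow and wrist coincide; cannot define a direction")
    return wrist, direction / norm

def ray_hits_plane(origin, direction, plane_point, plane_normal):
    """Intersect the pointing ray with a plane (e.g. a wall or table holding targets)."""
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-6:
        return None                                   # ray parallel to the plane
    t = float(np.dot(plane_point - origin, plane_normal)) / denom
    return origin + t * direction if t > 0 else None  # only hits in front of the hand

# Example: right arm pointing toward a wall at x = 3 m.
origin, d = pointing_ray(np.array([0.2, 0.0, 1.3]), np.array([0.5, 0.1, 1.4]))
print(ray_hits_plane(origin, d, plane_point=np.array([3.0, 0.0, 0.0]),
                     plane_normal=np.array([1.0, 0.0, 0.0])))
```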
Facial Landmark Localization Robust on the Eyes with Position Regression Network
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144702
Chanwoong Kwak, Jaeyoon Jang, Hosub Yoon
Abstract: Facial landmark localization is essential for robot-human interaction. The human eye is particularly important because it reveals a person's interests. However, traditional methods do not account for eye variation in the dataset, so their limitation is clear. This paper presents a data augmentation method for acquiring varied eye images and a method for creating a robust eye landmark model with two-stage training. Experiments on augmented 300W-LP datasets show that our method outperforms the previous method.
Citations: 3
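One generic way to realize the eye-focused augmentation described above is to perturb only the region around the eye landmarks (darkening, noise, or partial occlusion) while leaving the rest of the face and all landmark labels untouched, as in the sketch below. The paper's actual augmentation procedure is not specified here, so this is an assumed example.

```python
# Assumed example of eye-region augmentation keyed on the eye landmarks.
import numpy as np

def augment_eye_region(image: np.ndarray, eye_landmarks: np.ndarray,
                       pad: int = 10, rng=None) -> np.ndarray:
    """image: HxWx3 uint8; eye_landmarks: (N, 2) pixel (x, y) coords of one eye's landmarks."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    x0, y0 = np.maximum(eye_landmarks.min(axis=0).astype(int) - pad, 0)
    x1 = min(int(eye_landmarks[:, 0].max()) + pad, w - 1)
    y1 = min(int(eye_landmarks[:, 1].max()) + pad, h - 1)

    out = image.copy()
    patch = out[y0:y1, x0:x1].astype(np.float32)
    mode = rng.integers(3)
    if mode == 0:                                   # darken (shadow / closed-lid appearance)
        patch *= rng.uniform(0.3, 0.8)
    elif mode == 1:                                 # additive noise (sensor noise, glare)
        patch += rng.normal(0, 15, patch.shape)
    else:                                           # thin occlusion bar across the eye
        bar = int(rng.integers(y0, max(y0 + 1, y1 - 3)))
        patch[bar - y0: bar - y0 + 3] = rng.integers(0, 255)
    out[y0:y1, x0:x1] = np.clip(patch, 0, 255).astype(np.uint8)
    return out
```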
Fall detection based on CNN models implemented on a mobile robot
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144836
Carlos Menacho, Jhon Ordoñez
Abstract: Fall accidents are serious events that need to be addressed. Elderly people in particular may suffer such accidents, which can lead to injuries or even death. The use of Convolutional Neural Networks (CNNs) has achieved the state of the art for fall detection, but it requires a high computational cost. In this work, we propose an efficient CNN architecture with a reduced number of parameters, applied to fall detection as a service on a mobile robot equipped with resource-constrained hardware (the Nvidia Jetson TX2 platform). Different pre-trained CNN models are also compared to measure their performance in real scenarios, alongside other functions such as person following and navigation. Fall detection is carried out by extracting temporal features from the optical flow computed between two consecutive RGB images. Our results confirm that the proposed network is faster and more suitable for running on resource-constrained hardware. Our model achieves 88.55% accuracy with the proposed architecture and runs at 23.16 FPS on GPU and 10.23 FPS on CPU.
Citations: 7
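The input pipeline described in the abstract can be sketched as follows: dense optical flow between two consecutive frames becomes a two-channel image that a small CNN classifies as fall or no-fall. The Farneback flow call is standard OpenCV; the tiny network is a toy stand-in, not the authors' architecture.

```python
# Sketch: dense optical flow between two frames -> 2-channel input -> toy fall classifier.
import cv2
import numpy as np
import torch
import torch.nn as nn

def flow_tensor(frame_prev: np.ndarray, frame_next: np.ndarray) -> torch.Tensor:
    """Two consecutive BGR frames -> (1, 2, H, W) tensor of horizontal/vertical flow."""
    g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return torch.from_numpy(flow).permute(2, 0, 1).unsqueeze(0).float()

# Toy 2-class classifier over the flow field (fall vs. no fall) -- not the paper's network.
classifier = nn.Sequential(
    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),
)

# Example with random frames standing in for camera input.
f0 = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
f1 = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
logits = classifier(flow_tensor(f0, f1))
print(logits.softmax(dim=1))          # [p(no fall), p(fall)] for the frame pair
```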
A Pneumatic Soft Gripper with Configurable Workspace and Self-sensing
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144922
Qiwen Shao, Ningbin Zhang, Zequn Shen, Guoying Gu
Abstract: In this paper, we present a novel pneumatic soft gripper with a configurable workspace and perception, able to grasp various objects and recognize their sizes. The soft gripper consists of three pneu-net soft fingers embedded with resistive strain sensors and connected by a stretchable palm. The pneu-net soft fingers are fabricated through a lost-wax casting process. Each strain sensor is designed with an ionic hydrogel-elastomer hybrid structure and embedded into a soft finger to sense its deformation. The stretchable palm is designed with an opening-closing parallel mechanism driven by a pneumatic fiber-reinforced soft actuator to modify the grasping workspace of the gripper. Characterization experiments are conducted to demonstrate the excellent performance of the soft gripper. Based on the strain sensor measurements, we propose two grasping strategies for the soft gripper: a traditional finger-bending identification strategy (FBI strategy) without the active palm, and a new palm-closing identification strategy (PCI strategy) with the active palm. Experimental results with an industrial robot demonstrate that our soft gripper with the PCI strategy performs more robust picking tasks and more accurate identification tasks than with the FBI strategy.
Citations: 3
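A minimal stand-in for the PCI identification step is a nearest-reference lookup over calibrated strain-sensor readings taken while the palm is closed around objects of known size. The reference values and sensor interface below are invented for illustration.

```python
# Assumed example: classify object size from three finger strain-sensor readings.
import numpy as np

# Calibrated mean strain-sensor readings (arbitrary units) recorded while holding
# reference objects of known size -- invented values, one row per size class.
REFERENCE = {
    "small":  np.array([0.12, 0.11, 0.13]),
    "medium": np.array([0.25, 0.24, 0.26]),
    "large":  np.array([0.41, 0.39, 0.40]),
}

def identify_size(sensor_readings) -> str:
    """Return the size class whose calibrated reading is closest to the current one."""
    reading = np.asarray(sensor_readings, dtype=float)
    return min(REFERENCE, key=lambda k: np.linalg.norm(REFERENCE[k] - reading))

print(identify_size([0.27, 0.23, 0.25]))   # -> "medium"
```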