2023 IEEE International Conference on Robotics and Automation (ICRA): Latest Publications

Generalization of Impact Response Factors for Proprioceptive Collaborative Robots
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10160613
Carlos Relaño, D. Sanz-Merodio, Miguel López, C. Monje
{"title":"Generalization of Impact Response Factors for Proprioceptive Collaborative Robots","authors":"Carlos Relaño, D. Sanz-Merodio, Miguel López, C. Monje","doi":"10.1109/ICRA48891.2023.10160613","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10160613","url":null,"abstract":"Physical Human-Robot Interaction(pHRI) re-quires taking safety into account from the design board to the collaborative operation of any robot. For collaborative robotic environments, where human and machine are sharing space and interacting physically, the analysis and quantification of impacts becomes very relevant and necessary. Furthermore, analyses of this kind are a valuable source of information for the design of safer, more efficient pHRI. In the definition of the first parameter for dynamic impact analysis, the dynamic impact mitigation capacity was considered for certain configurations of the robot, but the design characteristics of the robot, such as the inertia of actuators, were not included. This paradigm changed when MIT presented the “impact mitigation factor” (IMF) with which, in addition to considering the ability of a certain robot to mitigate impacts for every configuration, it was possible to quantify backdriveability by taking the inertia of actuators into account for the calculation of the factor. However, IMF was proposed as a method to analyse floating robots like. This paper presents the Generalised Impact Absorption Factor (GIAF), suitable for both floating and fixed-base robots. GIAF is a valuable design parameter, as it provides information about the backdriveability of each joint, while allowing the comparison of impact response between floating and fixed-base robotic platforms. In this work, the mathematical definition of GIAF is developed and examples of possible uses of GIAF are presented.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115765718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
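The paper develops GIAF's mathematical definition itself; purely as an illustration of the kind of quantity an impact-mitigation factor captures, here is a minimal sketch comparing the Cartesian apparent inertia of a planar 2R arm with and without reflected rotor inertia. The dynamics, numbers, and the resulting ratio are illustrative assumptions, not the paper's GIAF or MIT's exact IMF.

```python
import numpy as np

def two_link_mass_matrix(q2, m1=2.0, m2=1.5, l1=0.4, l2=0.3):
    """Joint-space mass matrix of a planar 2R arm (point masses at link tips)."""
    a = m1 * l1**2 + m2 * (l1**2 + 2 * l1 * l2 * np.cos(q2) + l2**2)
    b = m2 * (l1 * l2 * np.cos(q2) + l2**2)
    c = m2 * l2**2
    return np.array([[a, b], [b, c]])

def jacobian(q1, q2, l1=0.4, l2=0.3):
    """Linear-velocity Jacobian of the end effector."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def apparent_inertia(M, J, u):
    """Effective Cartesian inertia felt along unit direction u."""
    Lambda_inv = J @ np.linalg.solve(M, J.T)    # J M^-1 J^T
    return 1.0 / float(u @ Lambda_inv @ u)

q1, q2 = 0.3, 1.1
u = np.array([1.0, 0.0])                        # impact direction
M_links = two_link_mass_matrix(q2)
# Reflected rotor inertia (rotor inertia * gear ratio^2) on the diagonal.
M_total = M_links + np.diag([0.08, 0.05])
J = jacobian(q1, q2)

lam_links = apparent_inertia(M_links, J, u)
lam_total = apparent_inertia(M_total, J, u)
# 0 => actuators add no apparent inertia (fully backdrivable); -> 1 => rotor-dominated.
penalty = 1.0 - lam_links / lam_total
print(f"apparent inertia without/with rotors: {lam_links:.3f} / {lam_total:.3f} kg")
print(f"reflected-inertia penalty along u: {penalty:.3f}")
```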
Transparent Objects: A Corner Case in Stereo Matching
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10161385
Zhiyuan Wu, Shuai Su, Qijun Chen, Rui Fan
{"title":"Transparent Objects: A Corner Case in Stereo Matching","authors":"Zhiyuan Wu, Shuai Su, Qijun Chen, Rui Fan","doi":"10.1109/ICRA48891.2023.10161385","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10161385","url":null,"abstract":"Stereo matching is a common technique used in 3D perception, but transparent objects such as reflective and penetrable glass pose a challenge as their disparities are often estimated inaccurately. In this paper, we propose transparency-aware stereo (TA-Stereo), an effective solution to tackle this issue. TA-Stereo first utilizes a semantic segmentation or salient object detection network to identify transparent objects, and then homogenizes them to enable stereo matching algorithms to handle them as non-transparent objects. To validate the effectiveness of our proposed TA-Stereo strategy, we collect 260 images containing transparent objects from the KITTI Stereo 2012 and 2015 datasets and manually label pixel-level ground truth. We evaluate our strategy with six deep stereo networks and two types of transparent object detection methods. Our experiments demonstrate that TA-Stereo significantly improves the disparity accuracy of transparent objects. Our project webpage can be accessed at mias.group/TA-Stereo.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115865386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
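The homogenize-then-match idea lends itself to a short sketch: mask the transparent region (the paper uses a segmentation or saliency network; here the masks are assumed given), overwrite it with surrounding texture, and run an off-the-shelf matcher. OpenCV inpainting and SGBM stand in for the paper's components; all parameters are placeholders.

```python
import cv2
import numpy as np

def ta_stereo_sketch(left, right, mask_left, mask_right):
    """Homogenize transparent regions, then run ordinary stereo matching.

    left/right: uint8 grayscale rectified pair; mask_*: uint8 {0, 255}
    masks of transparent objects.
    """
    # "Homogenize": replace transparent pixels with surrounding texture so the
    # matcher treats the object like an opaque surface.
    left_h = cv2.inpaint(left, mask_left, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    right_h = cv2.inpaint(right, mask_right, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5, P1=8 * 25, P2=32 * 25)
    # SGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_h, right_h).astype(np.float32) / 16.0

# Synthetic demo: random texture with a flat "glass" square in the middle.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (240, 320), dtype=np.uint8)
right = np.roll(left, -12, axis=1)              # constant 12 px disparity
mask = np.zeros_like(left); mask[80:160, 120:200] = 255
disp = ta_stereo_sketch(left, right, mask, np.roll(mask, -12, axis=1))
print("median disparity inside mask:", np.median(disp[mask > 0]))
```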
Collision Detection and Contact Point Estimation Using Virtual Joint Torque Sensing Applied to a Cobot
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10160661
Dario Zurlo, T. Heitmann, M. Morlock, Alessandro De Luca
{"title":"Collision Detection and Contact Point Estimation Using Virtual Joint Torque Sensing Applied to a Cobot","authors":"Dario Zurlo, T. Heitmann, M. Morlock, Alessandro De Luca","doi":"10.1109/ICRA48891.2023.10160661","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10160661","url":null,"abstract":"In physical human-robot interaction (pHRI) it is essential to reliably estimate and localize contact forces between the robot and the environment. In this paper, a complete contact detection, isolation, and reaction scheme is presented and tested on a new 6-dof industrial collaborative robot. We combine two popular methods, based on monitoring energy and generalized momentum, to detect and isolate collisions on the whole robot body in a more robust way. The experimental results show the effectiveness of our implementation on the LARA 5 cobot, that only relies on motor current and joint encoder measurements. For validation purposes, contact forces are also measured using an external GTE CoboSafe sensor. After a successful collision detection, the contact point location is isolated using a combination of the residual method based on the generalized momentum with a contact particle filter (CPF) scheme. We show for the first time a successful implementation of such combination on a real robot, without relying on joint torque sensor measurements.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124345548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
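The generalized-momentum part of the scheme is the classical momentum observer, whose residual converges to the external joint torque. A minimal discrete-time sketch follows, with the robot model supplied as user callbacks M, C, g; the 1-DOF pendulum demo, gains, and threshold are illustrative assumptions, not the paper's LARA 5 values.

```python
import numpy as np

class MomentumObserver:
    """Generalized-momentum residual: r converges to the external joint torque.

    Continuous form: r = K * (p - integral(tau_m + C(q,dq)^T dq - g(q) + r) dt),
    with generalized momentum p = M(q) dq. A collision is flagged when
    ||r|| exceeds a threshold.
    """

    def __init__(self, M, C, g, gain, n_joints, dt):
        self.M, self.C, self.g = M, C, g        # robot-model callbacks
        self.K = np.eye(n_joints) * gain
        self.dt = dt
        self.integral = np.zeros(n_joints)
        self.r = np.zeros(n_joints)

    def update(self, q, dq, tau_m):
        """tau_m: motor torque, e.g. reconstructed from current measurements."""
        p = self.M(q) @ dq
        self.integral += (tau_m + self.C(q, dq).T @ dq - self.g(q) + self.r) * self.dt
        self.r = self.K @ (p - self.integral)
        return self.r

# Demo on a hypothetical 1-DOF pendulum with an impact after t = 0.5 s.
M = lambda q: np.array([[0.1]])
C = lambda q, dq: np.zeros((1, 1))
g = lambda q: np.array([9.81 * 0.1 * np.sin(q[0])])
obs = MomentumObserver(M, C, g, gain=50.0, n_joints=1, dt=1e-3)

q, dq = np.array([0.2]), np.array([0.0])
for k in range(1000):
    tau_ext = np.array([0.5]) if k * 1e-3 > 0.5 else np.zeros(1)
    tau_m = g(q)                                # gravity compensation only
    ddq = np.linalg.solve(M(q), tau_m - g(q) + tau_ext)
    dq = dq + ddq * 1e-3
    q = q + dq * 1e-3
    r = obs.update(q, dq, tau_m)
print(f"residual after impact: {r[0]:.3f} Nm (true external torque 0.5 Nm)")
```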
MagTac: Magnetic Six-Axis Force/Torque Fingertip Tactile Sensor for Robotic Hand Applications
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10161042
Sungwoo Park, Sang-Rok Oh, Donghyun Hwang
{"title":"MagTac: Magnetic Six-Axis Force/Torque Fingertip Tactile Sensor for Robotic Hand Applications","authors":"Sungwoo Park, Sang-Rok Oh, Donghyun Hwang","doi":"10.1109/ICRA48891.2023.10161042","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10161042","url":null,"abstract":"We develop a novel hall-effect-based six-axis force/torque (F/T) tactile sensor integrated into the fingertip of robotic hands. When the robotic hands performs the grasping tasks in an unstructured environment, the visual information plays a main role in sensing the external properties of the objects. However, the various intrinsic properties of the objects such as softness, roughness, mass distribution, and weight cannot be measured properly only with the visual information. To detect the various force information in performing diverse tasks, we aim to implement the six-axis F/T fingertip tactile sensor with hall-effect-based principle. The experimental results demonstrate that the proposed sensor can measure the six-axis F/T with average errors of about 3.3%. In addition, it is observed that the effect of stray field can be shielded by applying a soft magnetic shielding film to the sensor.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124364503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
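The abstract does not spell out the calibration procedure; as a sketch of one standard approach for such sensors, the following fits a linear calibration matrix from raw Hall-element readings to a six-axis wrench by least squares on synthetic data. The 8-element layout and noise level are assumptions, not MagTac's design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration set: N poses, 8 Hall voltages -> 6-axis wrench.
N, n_hall = 500, 8
true_C = rng.normal(size=(6, n_hall))           # unknown sensor matrix
hall = rng.normal(size=(N, n_hall))             # raw Hall-element readings
wrench_ref = hall @ true_C.T + 0.01 * rng.normal(size=(N, 6))  # reference F/T

# Least-squares fit of a linear calibration matrix C: wrench ~= C @ hall.
X, *_ = np.linalg.lstsq(hall, wrench_ref, rcond=None)
C = X.T

# Apply to a new reading and report per-axis error vs. the reference model.
x = rng.normal(size=n_hall)
est, ref = C @ x, true_C @ x
print("estimated wrench:", np.round(est, 3))
print("relative error per axis (%):",
      np.round(100 * np.abs(est - ref) / (np.abs(ref) + 1e-9), 2))
```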
Burst Stimulation for Enhanced Locomotion Control of Terrestrial Cyborg Insects
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10160443
H. D. Nguyen, Hirotaka Sato, T. Vo-Doan
{"title":"Burst Stimulation for Enhanced Locomotion Control of Terrestrial Cyborg Insects","authors":"H. D. Nguyen, Hirotaka Sato, T. Vo-Doan","doi":"10.1109/ICRA48891.2023.10160443","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10160443","url":null,"abstract":"Terrestrial cyborg insects are biohybrid systems integrating living insects as mobile platforms. The insects' locomotion is controlled by the electrical stimulation of their sensory, muscular, or neural systems, in which continuous pulse trains are usually chosen as the stimulation waveform. Although this waveform is easy to generate and can elicit graded responses from the insects, its locomotion control efficiency has not been consistent among existing literature. This study demonstrates an improvement in locomotion control by using a new stimulation protocol, named Burst Stimulation, to stimulate a cyborg beetle's antennae (Zophobas morio). Modulating the continuous pulse train into multiple bursts enhanced the beetle's turning responses. At the same stimulation intensity (amplitude, pulse width, and active duration), the Burst Stimulation improved the turning angle by up to 50% compared to the continuous waveform. Moreover, the beetle's graded response was preserved. Increasing the stimulation frequency from 10 Hz to 40 Hz raised the turning rate by 40 deg/s. In addition, the initial implementation of this protocol in the feedback control-based navigation achieved a success rate of 81%, suggesting its potential use to optimize further the autonomous navigation of terrestrial cyborg insects.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124417245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
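The contrast between the two waveforms is easy to sketch: the same pulse train either runs continuously or is gated into bursts at equal amplitude, pulse width, and active duration. The frequencies and durations below are illustrative, not the paper's protocol values.

```python
import numpy as np

def pulse_train(fs, duration, pulse_freq, pulse_width, amplitude):
    """Continuous rectangular pulse train sampled at fs Hz."""
    t = np.arange(int(fs * duration)) / fs
    phase = t % (1.0 / pulse_freq)
    return amplitude * (phase < pulse_width), t

def burst_train(fs, duration, pulse_freq, pulse_width, amplitude,
                burst_freq, burst_duty):
    """Same pulse train gated on for burst_duty of each burst period."""
    wave, t = pulse_train(fs, duration, pulse_freq, pulse_width, amplitude)
    gate = (t % (1.0 / burst_freq)) < burst_duty / burst_freq
    return wave * gate, t

fs = 100_000                                    # 100 kHz sample rate
cont, t = pulse_train(fs, duration=1.0, pulse_freq=40.0,
                      pulse_width=1e-3, amplitude=2.0)
burst, _ = burst_train(fs, duration=1.0, pulse_freq=40.0,
                       pulse_width=1e-3, amplitude=2.0,
                       burst_freq=5.0, burst_duty=0.5)
print("active samples, continuous vs burst:",
      int((cont > 0).sum()), int((burst > 0).sum()))
```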
Reinforced Learning for Label-Efficient 3D Face Reconstruction
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10161362
H. Mohaghegh, H. Rahmani, Hamid Laga, F. Boussaid, Bennamoun
{"title":"Reinforced Learning for Label-Efficient 3D Face Reconstruction","authors":"H. Mohaghegh, H. Rahmani, Hamid Laga, F. Boussaid, Bennamoun","doi":"10.1109/ICRA48891.2023.10161362","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10161362","url":null,"abstract":"3D face reconstruction plays a major role in many human-robot interaction systems, from automatic face authentication to human-computer interface-based entertainment. To improve robustness against occlusions and noise, 3D face reconstruction networks are often trained on a set of in-the-wild face images preferably captured along different viewpoints of the subject. However, collecting the required large amounts of 3D annotated face data is expensive and time-consuming. To address the high annotation cost and due to the importance of training on a useful set, we propose an Active Learning (AL) framework that actively selects the most informative and representative samples to be labeled. To the best of our knowledge, this paper is the first work on tackling active learning for 3D face reconstruction to enable a label-efficient training strategy. In particular, we propose a Reinforcement Active Learning approach in conjunction with a clustering-based pooling strategy to select informative view-points of the subjects. Experimental results on 300W-LP and AFLW2000 datasets demonstrate that our proposed method is able to 1) efficiently select the most influencing view-points for labeling and outperforms several baseline AL techniques and 2) further improve the performance of a 3D Face Reconstruction network trained on the full dataset.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114913024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
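The clustering-based pooling step admits a simple sketch: cluster the unlabeled pool in some feature space and propose one representative per cluster as candidates for the selector. scikit-learn's KMeans on random embeddings stands in for the paper's features and RL policy, which are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
pool = rng.normal(size=(1000, 64))              # embeddings of unlabeled images

def cluster_representatives(features, n_clusters=10):
    """One candidate per cluster: the sample closest to its centroid."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    reps = []
    for c in range(n_clusters):
        idx = np.flatnonzero(km.labels_ == c)
        d = np.linalg.norm(features[idx] - km.cluster_centers_[c], axis=1)
        reps.append(idx[np.argmin(d)])
    return np.array(reps)

candidates = cluster_representatives(pool)
print("indices proposed for labeling:", candidates)
```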
GPF-BG: A Hierarchical Vision-Based Planning Framework for Safe Quadrupedal Navigation
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10160804
Shiyu Feng, Ziyi Zhou, Justin S. Smith, M. Asselmeier, Ye Zhao, P. Vela
{"title":"GPF-BG: A Hierarchical Vision-Based Planning Framework for Safe Quadrupedal Navigation","authors":"Shiyu Feng, Ziyi Zhou, Justin S. Smith, M. Asselmeier, Ye Zhao, P. Vela","doi":"10.1109/ICRA48891.2023.10160804","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10160804","url":null,"abstract":"Safe quadrupedal navigation through unknown environments is a challenging problem. This paper proposes a hierarchical vision-based planning framework (GPF-BG) integrating our previous Global Path Follower (GPF) navigation system and a gap-based local planner using Bézier curves, so called $B$ézier Gap (BG). This BG-based trajectory synthesis can generate smooth trajectories and guarantee safety for point-mass robots. With a gap analysis extension based on non-point, rectangular geometry, safety is guaranteed for an idealized quadrupedal motion model and significantly improved for an actual quadrupedal robot model. Stabilized perception space improves performance under oscillatory internal body motions that impact sensing. Simulation-based and real experiments under different benchmarking configurations test safe navigation performance. GPF-BG has the best safety outcomes across all experiments.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116976189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
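The Bézier building block of BG can be sketched as De Casteljau evaluation of a curve steered through a gap, followed by a simple clearance check against the gap edges. The control points, gap geometry, and clearance test are made up for illustration and are not the paper's trajectory synthesis or safety guarantee.

```python
import numpy as np

def de_casteljau(ctrl, s):
    """Evaluate a Bézier curve with control points ctrl (n x 2) at s in [0, 1]."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:                         # repeated linear interpolation
        pts = (1 - s) * pts[:-1] + s * pts[1:]
    return pts[0]

# Cubic curve steering a point robot between two gap edge points.
ctrl = np.array([[0.0, 0.0], [0.6, 0.0], [1.0, 0.25], [1.4, 0.9]])
gap_edges = np.array([[1.0, 0.1], [1.0, 0.9]])  # obstacle corners forming the gap

samples = np.array([de_casteljau(ctrl, s) for s in np.linspace(0.0, 1.0, 50)])
clearance = np.linalg.norm(samples[:, None, :] - gap_edges[None, :, :], axis=2).min()
print(f"minimum clearance from the gap edges: {clearance:.3f} m")
```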
Contact-Based Pose Estimation of Workpieces for Robotic Setups
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10161465
Yitaek Kim, Aljaz Kramberger, A. Buch, Christoffer Sloth
{"title":"Contact-Based Pose Estimation of Workpieces for Robotic Setups","authors":"Yitaek Kim, Aljaz Kramberger, A. Buch, Christoffer Sloth","doi":"10.1109/ICRA48891.2023.10161465","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10161465","url":null,"abstract":"This paper presents a method for contact-based pose estimation of workpieces using a collaborative robot. The proposed pose estimation exploits positions and surface normal vectors along an arbitrary path on an object with known geometry, where surface normal vectors are estimated based on contact forces measured by the robot. When data is only available along a single path, it is difficult to find initial correspondences between source data (recorded points and normal vectors) and target data (CAD of an object); hence, a novel weighted incremental spatial search approach for generating correspondences based on point pair features is proposed. Subsequently, robust pose estimation is employed to reduce the effect of erroneous correspondences. The proposed pose estimation is verified in simulation on three paths on two objects and with different levels of noise on the source data to quantify the robustness of the algorithm. Finally, the method is experimentally validated to provide an average pose rotation and translation accuracy of $mathbf{0.55}^{circ}$ and 0.51 mm, respectively, when using the robust estimation cost function Geman-McClure.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116334539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
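The correspondence generation builds on point pair features; below is a minimal sketch of the classic four-dimensional PPF for oriented points (distance plus three angles), the kind of descriptor matched between recorded contacts and the CAD model. The function name and sample values are mine, and the weighted incremental search itself is not shown.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)) for oriented points."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_hat = d / dist

    def angle(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([dist, angle(n1, d_hat), angle(n2, d_hat), angle(n1, n2)])

# Two contact samples: positions from the probe path, normals from contact forces.
p1, n1 = np.array([0.10, 0.02, 0.00]), np.array([0.0, 0.0, 1.0])
p2, n2 = np.array([0.14, 0.05, 0.01]), np.array([0.0, 1.0, 0.0])
print("PPF:", np.round(point_pair_feature(p1, n1, p2, n2), 4))
```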
Contact Based Turning Gait of a Novel Legged-Wheeled Quadruped
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10161241
Alper Yeldan, Abhimanyu Arora, G. Soh
{"title":"Contact Based Turning Gait of a Novel Legged-Wheeled Quadruped","authors":"Alper Yeldan, Abhimanyu Arora, G. Soh","doi":"10.1109/ICRA48891.2023.10161241","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10161241","url":null,"abstract":"How does a wheeled robot move and turn? The answer is straightforward for a conventional wheeled robot, but it is not so easy for a robot with a discrete wheel design. Regular wheeled robots always have four contact points, resulting in static stability during locomotion. However, QuadRunner's novel leg mechanism provides only a semi-circular wheel shape, and proper gait planning is needed to go straight or turn. Therefore, this paper presents a dual frequency gait planning method which controls the robot's gait cycle's duty factor and generates unique turning gait patterns for wheel locomotion. Describing requirements and limitations, we found sets of solutions that can achieve turning. Results show that the smallest turning radius QuadRunner achieved is 1.05m, and the biggest is 1.86m. In addition, detailed experiments were made to observe the performance and stability of straight and turning wheel behaviors. Finally, a gait verification is made using high-speed cameras.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123502859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
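The duty-factor mechanism at the core of the dual-frequency gait can be sketched as a per-leg phase generator: each leg is in stance for a fraction duty of its cycle, with phase offsets staggering the legs. The offsets, frequency, and duty below are generic placeholders, not QuadRunner's gait parameters.

```python
import numpy as np

def leg_states(t, freq, duty, offsets):
    """Stance (True) / swing (False) for each leg at time t.

    freq: gait frequency [Hz]; duty: stance fraction of the cycle;
    offsets: per-leg phase offsets in [0, 1).
    """
    phase = (t * freq + np.asarray(offsets)) % 1.0
    return phase < duty

offsets = [0.0, 0.5, 0.5, 0.0]                  # trot-like diagonal pairing
for t in np.arange(0.0, 1.0, 0.125):
    stance = leg_states(t, freq=2.0, duty=0.6, offsets=offsets)
    print(f"t={t:.3f}s  stance: {stance.astype(int)}")
```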
Deep Underwater Monocular Depth Estimation with Single-Beam Echosounder
2023 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2023-05-29 DOI: 10.1109/ICRA48891.2023.10161439
Haowen Liu, Monika Roznere, Alberto Quattrini Li
{"title":"Deep Underwater Monocular Depth Estimation with Single-Beam Echosounder","authors":"Haowen Liu, Monika Roznere, Alberto Quattrini Li","doi":"10.1109/ICRA48891.2023.10161439","DOIUrl":"https://doi.org/10.1109/ICRA48891.2023.10161439","url":null,"abstract":"Underwater depth estimation is essential for safe Autonomous Underwater Vehicles (AUV) navigation. While there has been recent advances in out-of-water monocular depth estimation, it is difficult to apply these methods to the underwater domain due to the lack of well-established datasets with labelled ground truths. In this paper, we propose a novel method for self-supervised underwater monocular depth estimation by leveraging a low-cost single-beam echosounder (SBES). We also present a synthetic dataset for underwater depth estimation to facilitate visual learning research in the underwater domain, available at https://github.com/hdacnw/sbes-depth. We evaluated our method on the proposed dataset with results outperforming previous methods and tested our method in a dataset we collected with an inexpensive AUV. We further investigated the use of SBES as an additional component in our self-supervised method for up-to-scale depth estimation providing insights on next research directions.","PeriodicalId":360533,"journal":{"name":"2023 IEEE International Conference on Robotics and Automation (ICRA)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121941813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
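One simple way a single SBES range can fix the scale of an up-to-scale monocular prediction is median scaling inside the beam footprint; the sketch below shows that idea on synthetic data. The paper's actual integration of the echosounder into self-supervised training is more involved than this.

```python
import numpy as np

def scale_with_sbes(pred_depth, sbes_range, beam_mask):
    """Rescale an up-to-scale depth map so its median inside the SBES beam
    footprint matches the echosounder range measurement."""
    scale = sbes_range / np.median(pred_depth[beam_mask])
    return pred_depth * scale

# Synthetic up-to-scale prediction: the true scene is 3x the network output.
rng = np.random.default_rng(3)
pred = rng.uniform(0.5, 2.0, size=(120, 160))
beam = np.zeros_like(pred, dtype=bool); beam[55:65, 75:85] = True
true_range = 3.0 * np.median(pred[beam])        # what the SBES would measure

metric = scale_with_sbes(pred, true_range, beam)
print("recovered scale factor:", np.round(np.median(metric / pred), 3))
```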