Autonomous Robots | Pub Date: 2023-04-19 | DOI: 10.1007/s10514-023-10089-6
Tahiya Salam, M. Ani Hsieh
{"title":"Heterogeneous robot teams for modeling and prediction of multiscale environmental processes","authors":"Tahiya Salam, M. Ani Hsieh","doi":"10.1007/s10514-023-10089-6","DOIUrl":"10.1007/s10514-023-10089-6","url":null,"abstract":"<div><p>This paper presents a framework to enable a team of heterogeneous mobile robots to model and sense a multiscale system. We propose a coupled strategy, where robots of one type collect high-fidelity measurements at a slow time scale and robots of another type collect low-fidelity measurements at a fast time scale, for the purpose of fusing measurements together. The multiscale measurements are fused to create a model of a complex, nonlinear spatiotemporal process. The model helps determine optimal sensing locations and predict the evolution of the process. Key contributions are: (i) consolidation of multiple types of data into one cohesive model, (ii) fast determination of optimal sensing locations for mobile robots, and (iii) adaptation of models online for various monitoring scenarios. We illustrate the proposed framework by modeling and predicting the evolution of an artificial plasma cloud. We test our approach using physical marine robots adaptively sampling a process in a water tank.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"353 - 376"},"PeriodicalIF":3.5,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49418994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Robots | Pub Date: 2023-04-12 | DOI: 10.1007/s10514-023-10094-9
Eric Heiden, Miles Macklin, Yashraj Narang, Dieter Fox, Animesh Garg, Fabio Ramos
{"title":"DiSECt: a differentiable simulator for parameter inference and control in robotic cutting","authors":"Eric Heiden, Miles Macklin, Yashraj Narang, Dieter Fox, Animesh Garg, Fabio Ramos","doi":"10.1007/s10514-023-10094-9","DOIUrl":"10.1007/s10514-023-10094-9","url":null,"abstract":"<div><p>Robotic cutting of soft materials is critical for applications such as food processing, household automation, and surgical manipulation. As in other areas of robotics, simulators can facilitate controller verification, policy learning, and dataset generation. Moreover, <i>differentiable</i> simulators can enable gradient-based optimization, which is invaluable for calibrating simulation parameters and optimizing controllers. In this work, we present DiSECt: the first differentiable simulator for cutting soft materials. The simulator augments the finite element method with a continuous contact model based on signed distance fields, as well as a continuous damage model that inserts springs on opposite sides of the cutting plane and allows them to weaken until zero stiffness, enabling crack formation. Through various experiments, we evaluate the performance of the simulator. We first show that the simulator can be calibrated to match resultant forces and deformation fields from a state-of-the-art commercial solver and real-world cutting datasets, with generality across cutting velocities and object instances. We then show that Bayesian inference can be performed efficiently by leveraging the differentiability of the simulator, estimating posteriors over hundreds of parameters in a fraction of the time of derivative-free methods. Next, we illustrate that control parameters in the simulation can be optimized to minimize cutting forces via lateral slicing motions. Finally, we conduct experiments on a real robot arm equipped with a slicing knife to infer simulation parameters from force measurements. By optimizing the slicing motion of the knife, we show on fruit cutting scenarios that the average knife force can be reduced by more than <span>(40%)</span> compared to a vertical cutting motion. We publish code and additional materials on our project website at https://diff-cutting-sim.github.io.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"549 - 578"},"PeriodicalIF":3.5,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10094-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48242007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Robots | Pub Date: 2023-04-12 | DOI: 10.1007/s10514-023-10096-7
Rahaf Rahal, Amir M. Ghalamzan-E., Firas Abi-Farraj, Claudio Pacchierotti, Paolo Robuffo Giordano
{"title":"Haptic-guided grasping to minimise torque effort during robotic telemanipulation","authors":"Rahaf Rahal, Amir M. Ghalamzan-E., Firas Abi-Farraj, Claudio Pacchierotti, Paolo Robuffo Giordano","doi":"10.1007/s10514-023-10096-7","DOIUrl":"10.1007/s10514-023-10096-7","url":null,"abstract":"<div><p>Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared control method for minimising the manipulator torque effort during remote manipulative actions in which an operator is assisted in selecting a suitable grasping pose for then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in a human subjects study. The reported results prove the effectiveness of our shared control vs. a standard teleoperation approach. We also find that haptic-only guidance performs better than visual-only guidance, although combining them together leads to the best overall results.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"405 - 423"},"PeriodicalIF":3.5,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48097502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Robots | Pub Date: 2023-04-11 | DOI: 10.1007/s10514-023-10101-z
Dimitrios Dimou, José Santos-Victor, Plinio Moreno
{"title":"Robotic hand synergies for in-hand regrasping driven by object information","authors":"Dimitrios Dimou, José Santos-Victor, Plinio Moreno","doi":"10.1007/s10514-023-10101-z","DOIUrl":"10.1007/s10514-023-10101-z","url":null,"abstract":"<div><p>We develop a conditional generative model to represent dexterous grasp postures of a robotic hand and use it to generate in-hand regrasp trajectories. Our model learns to encode the robotic grasp postures into a low-dimensional space, called Synergy Space, while taking into account additional information about the object such as its size and its shape category. We then generate regrasp trajectories through linear interpolation in this low-dimensional space. The result is that the hand configuration moves from one grasp type to another while keeping the object stable in the hand. We show that our model achieves higher success rate on in-hand regrasping compared to previous methods used for synergy extraction, by taking advantage of the grasp size conditional variable.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"453 - 464"},"PeriodicalIF":3.5,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10101-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46694831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning instance-level N-ary semantic knowledge at scale for robots operating in everyday environments","authors":"Weiyu Liu, Dhruva Bansal, Angel Daruna, Sonia Chernova","doi":"10.1007/s10514-023-10099-4","DOIUrl":"10.1007/s10514-023-10099-4","url":null,"abstract":"<div><p>Robots operating in everyday environments need to effectively perceive, model, and infer semantic properties of objects. Existing knowledge reasoning frameworks only model binary relations between an object’s class label and its semantic properties, unable to collectively reason about object properties detected by different perception algorithms and grounded in diverse sensory modalities. We bridge the gap between multimodal perception and knowledge reasoning by introducing an n-ary representation that models complex, inter-related object properties. To tackle the problem of collecting n-ary semantic knowledge at scale, we propose transformer neural networks that generalize knowledge from observations of object instances by learning to predict single missing properties or predict joint probabilities of all properties. The learned models can reason at different levels of abstraction, effectively predicting unknown properties of objects in different environmental contexts given different amounts of observed information. We quantitatively validate our approach against prior methods on LINK, a unique dataset we contribute that contains 1457 object instances in different situations, amounting to 15 multimodal properties types and 200 total properties. Compared to the top-performing baseline, a Markov Logic Network, our models obtain a 10% improvement in predicting unknown properties of novel object instances while reducing training and inference time by more than 150 times. Additionally, we apply our work to a mobile manipulation robot, demonstrating its ability to leverage n-ary reasoning to retrieve objects and actively detect object properties. The code and data are available at https://github.com/wliu88/LINK.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"529 - 547"},"PeriodicalIF":3.5,"publicationDate":"2023-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46907553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal embodied attribute learning by robots for object-centric action policies","authors":"Xiaohan Zhang, Saeid Amiri, Jivko Sinapov, Jesse Thomason, Peter Stone, Shiqi Zhang","doi":"10.1007/s10514-023-10098-5","DOIUrl":"10.1007/s10514-023-10098-5","url":null,"abstract":"<div><p>Robots frequently need to perceive object attributes, such as <span>red</span>, <span>heavy</span>, and <span>empty</span>, using multimodal exploratory behaviors, such as <i>look</i>, <i>lift</i>, and <i>shake</i>. One possible way for robots to do so is to learn a classifier for each perceivable attribute given an exploratory behavior. Once the attribute classifiers are learned, they can be used by robots to select actions and identify attributes of new objects, answering questions, such as “<i>Is this object</i> <span>red</span> <i> and</i> <span>empty</span> ?” In this article, we introduce a robot interactive perception problem, called <b>M</b>ultimodal <b>E</b>mbodied <b>A</b>ttribute <b>L</b>earning (<span>meal</span>), and explore solutions to this new problem. Under different assumptions, there are two classes of <span>meal</span> problems. <span>offline-meal</span> problems are defined in this article as learning attribute classifiers from pre-collected data, and sequencing actions towards attribute identification under the challenging trade-off between information gains and exploration action costs. For this purpose, we introduce <b>M</b>ixed <b>O</b>bservability <b>R</b>obot <b>C</b>ontrol (<span>morc</span>), an algorithm for <span>offline-meal</span> problems, that dynamically constructs both fully and partially observable components of the state for multimodal attribute identification of objects. We further investigate a more challenging class of <span>meal</span> problems, called <span>online-meal</span>, where the robot assumes no pre-collected data, and works on both attribute classification and attribute identification at the same time. Based on <span>morc</span>, we develop an algorithm called <b>I</b>nformation-<b>T</b>heoretic <b>R</b>eward <b>S</b>haping (<span>morc</span>-<span>itrs</span>) that actively addresses the trade-off between exploration and exploitation in <span>online-meal</span> problems. <span>morc</span> and <span>morc</span>-<span>itrs</span> are evaluated in comparison with competitive <span>meal</span> baselines, and results demonstrate the superiority of our methods in learning efficiency and identification accuracy.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"505 - 528"},"PeriodicalIF":3.5,"publicationDate":"2023-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46355867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Robots | Pub Date: 2023-03-20 | DOI: 10.1007/s10514-023-10093-w
Manabu Nakanoya, Sai Shankar Narasimhan, Sharachchandra Bhat, Alexandros Anemogiannis, Akul Datta, Sachin Katti, Sandeep Chinchali, Marco Pavone
{"title":"Co-design of communication and machine inference for cloud robotics","authors":"Manabu Nakanoya, Sai Shankar Narasimhan, Sharachchandra Bhat, Alexandros Anemogiannis, Akul Datta, Sachin Katti, Sandeep Chinchali, Marco Pavone","doi":"10.1007/s10514-023-10093-w","DOIUrl":"10.1007/s10514-023-10093-w","url":null,"abstract":"<div><p>Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for <i>human, not robotic</i>, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn <i>task-relevant</i> representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11<span>(times )</span> more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"579 - 594"},"PeriodicalIF":3.5,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10093-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41639268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Robots | Pub Date: 2023-03-20 | DOI: 10.1007/s10514-023-10100-0
Paulo Rezeck, Héctor Azpúrua, Maurício F. S. Corrêa, Luiz Chaimowicz
{"title":"HeRo 2.0: a low-cost robot for swarm robotics research","authors":"Paulo Rezeck, Héctor Azpúrua, Maurício F. S. Corrêa, Luiz Chaimowicz","doi":"10.1007/s10514-023-10100-0","DOIUrl":"10.1007/s10514-023-10100-0","url":null,"abstract":"<div><p>The current state of electronic component miniaturization coupled with the increasing efficiency in hardware and software allow the development of smaller and compact robotic systems. The convenience of using these small, simple, yet capable robots has gathered the research community’s attention towards practical applications of swarm robotics. This paper presents the design of a novel platform for swarm robotics applications that is low cost, easy to assemble using off-the-shelf components, and deeply integrated with the most used robotic framework available today: ROS (Robot Operating System). The robotic platform is entirely open, composed of a 3D printed body and open-source software. We describe its architecture, present its main features, and evaluate its functionalities executing experiments using a couple of robots. Results demonstrate that the proposed mobile robot is capable of performing different swarm tasks, given its small size and reduced cost, being suitable for swarm robotics research and education.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 7","pages":"879 - 903"},"PeriodicalIF":3.5,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91282746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Robots | Pub Date: 2023-03-14 | DOI: 10.1007/s10514-023-10091-y
Nicolás Navarro-Guerrero, Sibel Toprak, Josip Josifovski, Lorenzo Jamone
{"title":"Visuo-haptic object perception for robots: an overview","authors":"Nicolás Navarro-Guerrero, Sibel Toprak, Josip Josifovski, Lorenzo Jamone","doi":"10.1007/s10514-023-10091-y","DOIUrl":"10.1007/s10514-023-10091-y","url":null,"abstract":"<div><p>The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"377 - 403"},"PeriodicalIF":3.5,"publicationDate":"2023-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10091-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46918377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Robots | Pub Date: 2023-03-02 | DOI: 10.1007/s10514-023-10085-w
Tim Yuqing Tang, Daniele De Martini, Paul Newman
{"title":"Point-based metric and topological localisation between lidar and overhead imagery","authors":"Tim Yuqing Tang, Daniele De Martini, Paul Newman","doi":"10.1007/s10514-023-10085-w","DOIUrl":"10.1007/s10514-023-10085-w","url":null,"abstract":"<div><p>In this paper, we present a method for solving the localisation of a ground lidar using overhead imagery only. Public overhead imagery such as Google satellite images are readily available resources. They can be used as the map proxy for robot localisation, relaxing the requirement for a prior traversal for mapping as in traditional approaches. While prior approaches have focused on the metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the resulting point-cloud scanned by a lidar sensor situated near the centre of the overhead image. After both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"595 - 615"},"PeriodicalIF":3.5,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10085-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45125784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}