{"title":"Learning instance-level N-ary semantic knowledge at scale for robots operating in everyday environments","authors":"Weiyu Liu, Dhruva Bansal, Angel Daruna, Sonia Chernova","doi":"10.1007/s10514-023-10099-4","DOIUrl":"10.1007/s10514-023-10099-4","url":null,"abstract":"<div><p>Robots operating in everyday environments need to effectively perceive, model, and infer semantic properties of objects. Existing knowledge reasoning frameworks only model binary relations between an object’s class label and its semantic properties, unable to collectively reason about object properties detected by different perception algorithms and grounded in diverse sensory modalities. We bridge the gap between multimodal perception and knowledge reasoning by introducing an n-ary representation that models complex, inter-related object properties. To tackle the problem of collecting n-ary semantic knowledge at scale, we propose transformer neural networks that generalize knowledge from observations of object instances by learning to predict single missing properties or predict joint probabilities of all properties. The learned models can reason at different levels of abstraction, effectively predicting unknown properties of objects in different environmental contexts given different amounts of observed information. We quantitatively validate our approach against prior methods on LINK, a unique dataset we contribute that contains 1457 object instances in different situations, amounting to 15 multimodal properties types and 200 total properties. Compared to the top-performing baseline, a Markov Logic Network, our models obtain a 10% improvement in predicting unknown properties of novel object instances while reducing training and inference time by more than 150 times. Additionally, we apply our work to a mobile manipulation robot, demonstrating its ability to leverage n-ary reasoning to retrieve objects and actively detect object properties. 
The code and data are available at https://github.com/wliu88/LINK.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"529 - 547"},"PeriodicalIF":3.5,"publicationDate":"2023-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46907553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal embodied attribute learning by robots for object-centric action policies","authors":"Xiaohan Zhang, Saeid Amiri, Jivko Sinapov, Jesse Thomason, Peter Stone, Shiqi Zhang","doi":"10.1007/s10514-023-10098-5","DOIUrl":"10.1007/s10514-023-10098-5","url":null,"abstract":"<div><p>Robots frequently need to perceive object attributes, such as <span>red</span>, <span>heavy</span>, and <span>empty</span>, using multimodal exploratory behaviors, such as <i>look</i>, <i>lift</i>, and <i>shake</i>. One possible way for robots to do so is to learn a classifier for each perceivable attribute given an exploratory behavior. Once the attribute classifiers are learned, they can be used by robots to select actions and identify attributes of new objects, answering questions, such as “<i>Is this object</i> <span>red</span> <i> and</i> <span>empty</span> ?” In this article, we introduce a robot interactive perception problem, called <b>M</b>ultimodal <b>E</b>mbodied <b>A</b>ttribute <b>L</b>earning (<span>meal</span>), and explore solutions to this new problem. Under different assumptions, there are two classes of <span>meal</span> problems. <span>offline-meal</span> problems are defined in this article as learning attribute classifiers from pre-collected data, and sequencing actions towards attribute identification under the challenging trade-off between information gains and exploration action costs. For this purpose, we introduce <b>M</b>ixed <b>O</b>bservability <b>R</b>obot <b>C</b>ontrol (<span>morc</span>), an algorithm for <span>offline-meal</span> problems, that dynamically constructs both fully and partially observable components of the state for multimodal attribute identification of objects. We further investigate a more challenging class of <span>meal</span> problems, called <span>online-meal</span>, where the robot assumes no pre-collected data, and works on both attribute classification and attribute identification at the same time. 
Based on <span>morc</span>, we develop an algorithm called <b>I</b>nformation-<b>T</b>heoretic <b>R</b>eward <b>S</b>haping (<span>morc</span>-<span>itrs</span>) that actively addresses the trade-off between exploration and exploitation in <span>online-meal</span> problems. <span>morc</span> and <span>morc</span>-<span>itrs</span> are evaluated in comparison with competitive <span>meal</span> baselines, and results demonstrate the superiority of our methods in learning efficiency and identification accuracy.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"505 - 528"},"PeriodicalIF":3.5,"publicationDate":"2023-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46355867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Co-design of communication and machine inference for cloud robotics","authors":"Manabu Nakanoya, Sai Shankar Narasimhan, Sharachchandra Bhat, Alexandros Anemogiannis, Akul Datta, Sachin Katti, Sandeep Chinchali, Marco Pavone","doi":"10.1007/s10514-023-10093-w","DOIUrl":"10.1007/s10514-023-10093-w","url":null,"abstract":"<div><p>Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for <i>human, not robotic</i>, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn <i>task-relevant</i> representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11<span>(times )</span> more than competing methods. 
Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"579 - 594"},"PeriodicalIF":3.5,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10093-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41639268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HeRo 2.0: a low-cost robot for swarm robotics research","authors":"Paulo Rezeck, Héctor Azpúrua, Maurício F. S. Corrêa, Luiz Chaimowicz","doi":"10.1007/s10514-023-10100-0","DOIUrl":"10.1007/s10514-023-10100-0","url":null,"abstract":"<div><p>The current state of electronic component miniaturization coupled with the increasing efficiency in hardware and software allow the development of smaller and compact robotic systems. The convenience of using these small, simple, yet capable robots has gathered the research community’s attention towards practical applications of swarm robotics. This paper presents the design of a novel platform for swarm robotics applications that is low cost, easy to assemble using off-the-shelf components, and deeply integrated with the most used robotic framework available today: ROS (Robot Operating System). The robotic platform is entirely open, composed of a 3D printed body and open-source software. We describe its architecture, present its main features, and evaluate its functionalities executing experiments using a couple of robots. Results demonstrate that the proposed mobile robot is capable of performing different swarm tasks, given its small size and reduced cost, being suitable for swarm robotics research and education.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 7","pages":"879 - 903"},"PeriodicalIF":3.5,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91282746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visuo-haptic object perception for robots: an overview","authors":"Nicolás Navarro-Guerrero, Sibel Toprak, Josip Josifovski, Lorenzo Jamone","doi":"10.1007/s10514-023-10091-y","DOIUrl":"10.1007/s10514-023-10091-y","url":null,"abstract":"<div><p>The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. 
Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"377 - 403"},"PeriodicalIF":3.5,"publicationDate":"2023-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10091-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46918377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Point-based metric and topological localisation between lidar and overhead imagery","authors":"Tim Yuqing Tang, Daniele De Martini, Paul Newman","doi":"10.1007/s10514-023-10085-w","DOIUrl":"10.1007/s10514-023-10085-w","url":null,"abstract":"<div><p>In this paper, we present a method for solving the localisation of a ground lidar using overhead imagery only. Public overhead imagery such as Google satellite images are readily available resources. They can be used as the map proxy for robot localisation, relaxing the requirement for a prior traversal for mapping as in traditional approaches. While prior approaches have focused on the metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the resulting point-cloud scanned by a lidar sensor situated near the centre of the overhead image. After both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"595 - 615"},"PeriodicalIF":3.5,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10085-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45125784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust inverse dynamics by evaluating Newton–Euler equations with respect to a moving reference and measuring angular acceleration","authors":"Maximilian Gießler, Bernd Waltersberger","doi":"10.1007/s10514-023-10092-x","DOIUrl":"10.1007/s10514-023-10092-x","url":null,"abstract":"<div><p>Maintaining stability while walking on arbitrary surfaces or dealing with external perturbations is of great interest in humanoid robotics research. Increasing the system’s autonomous robustness to a variety of postural threats during locomotion is the key despite the need to evaluate noisy sensor signals. The equations of motion are the foundation of all published approaches. In contrast, we propose a more adequate evaluation of the equations of motion with respect to an arbitrary moving reference point in a non-inertial reference frame. Conceptual advantages are, e.g., getting independent of global position and velocity vectors estimated by sensor fusions or calculating the imaginary zero-moment point walking on different inclined ground surfaces. Further, we improve the calculation results by reducing noise-amplifying methods in our algorithm and using specific characteristics of physical robots. We use simulation results to compare our algorithm with established approaches and test it with experimental robot data.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"465 - 481"},"PeriodicalIF":3.5,"publicationDate":"2023-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10092-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47285445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated group motion control of magnetically actuated millirobots","authors":"Pouria Razzaghi, Ehab Al Khatib, Yildirim Hurmuzlu","doi":"10.1007/s10514-023-10084-x","DOIUrl":"10.1007/s10514-023-10084-x","url":null,"abstract":"<div><p>Small-size robots offer access to spaces that are inaccessible to larger ones. This type of access is crucial in applications such as drug delivery, environmental detection, and collection of small samples. However, there are some tasks that are not possible to perform using only one robot including assembly and manufacturing at small scales, manipulation of micro- and nano- objects, and robot-based structuring of small-scale materials. In this article, we focus on tasks that can be achieved using a group of small-scale robots like pattern formation. These robots are typically externally actuated due to their size limitation. Yet, one faces the challenge of controlling a group of robots using a single global input. In this study, we propose a control algorithm to position individual members of a group in predefined positions. In our previous work, we presented a small-scaled magnetically actuated millirobot. An electromagnetic coil system applied external force and steered the millirobots in various modes of motion such as pivot walking and tumbling. In this paper, we propose two new designs of these millirobots. In the first design, the magnets are placed at the center of body to reduce the magnetic attraction force between the millirobots. In the second design, the millirobots are of identical length with two extra legs acting as the pivot points and varying pivot separation in design to take advantage of variable speed in pivot walking mode while keeping the speed constant in tumbling mode. This paper presents an algorithm for positional control of <i>n</i> millirobots with different lengths to move them from given initial positions to final desired ones. 
This method is based on choosing a leader that is fully controllable. Then, the motions of other millirobots are regulated by following the leader and determining their appropriate pivot separations in order to implement the intended group motion. Simulations and hardware experiments validate these results.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 7","pages":"865 - 877"},"PeriodicalIF":3.5,"publicationDate":"2023-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10084-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44859563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed swarm collision avoidance based on angular calculations","authors":"SeyedZahir Qazavi, Samaneh Hosseini Semnani","doi":"10.1007/s10514-022-10081-6","DOIUrl":"10.1007/s10514-022-10081-6","url":null,"abstract":"<div><p>Collision avoidance is one of the most important topics in the robotics field. In this problem, the goal is to move the robots from initial locations to target locations such that they follow the shortest non-colliding paths in the shortest time and with the least amount of energy. Robot navigation among pedestrians is an example application of this problem which is the focus of this paper. This paper presents a distributed and real-time algorithm for solving collision avoidance problems in dense and complex 2D and 3D environments. This algorithm uses angular calculations to select the optimal direction for the movement of each robot and it has been shown that these separate calculations lead to a form of cooperative behavior among agents. We evaluated the proposed approach on various simulation and experimental scenarios and compared the results with ORCA one of the most important algorithms in this field. The results show that the proposed approach is at least 25% faster than ORCA while is also more reliable. The proposed method is shown to enable fully autonomous navigation of a swarm of Crazyflies.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"425 - 434"},"PeriodicalIF":3.5,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45297151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigation functions with moving destinations and obstacles","authors":"Cong Wei, Chuchu Chen, Herbert G. Tanner","doi":"10.1007/s10514-023-10088-7","DOIUrl":"10.1007/s10514-023-10088-7","url":null,"abstract":"<div><p>Dynamic environments challenge existing robot navigation methods, and motivate either stringent assumptions on workspace variation or relinquishing of collision avoidance and convergence guarantees. This paper shows that the latter can be preserved even in the absence of knowledge of how the environment evolves, through a navigation function methodology applicable to sphere-worlds with moving obstacles and robot destinations. Assuming bounds on speeds of robot destination and obstacles, and sufficiently higher maximum robot speed, the navigation function gradient can be used produce robot feedback laws that guarantee obstacle avoidance, and theoretical guarantees of bounded tracking errors and asymptotic convergence to the target when the latter eventually stops moving. The efficacy of the gradient-based feedback controller derived from the new navigation function construction is demonstrated both in numerical simulations as well as experimentally.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"435 - 451"},"PeriodicalIF":3.5,"publicationDate":"2023-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45930849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}