A Hierarchical Geometric Framework to Design Locomotive Gaits for Highly Articulated Robots
Baxi Chong, Yasemin Ozkan-Aydin, Guillaume Sartoretti, Jennifer M. Rieser, Chaohui Gong, Haosen Xing, H. Choset, D. Goldman
DOI: 10.15607/RSS.2019.XV.067 (https://doi.org/10.15607/RSS.2019.XV.067). Robotics: Science and Systems XV, June 22, 2019.
Abstract: Motion planning for mobile robots with many degrees of freedom (DoF) is challenging due to their high-dimensional configuration spaces. To manage this curse of dimensionality, this paper proposes a new hierarchical framework that decomposes the system into sub-systems (based on shared capabilities of DoFs), for which we can design and coordinate motions. Instead of constructing a high-dimensional configuration space, we establish a hierarchy of two-dimensional spaces on which we can visually design gaits using geometric mechanics tools. We then coordinate motions among the two-dimensional spaces in a pairwise fashion to obtain the desired robot locomotion. Further geometric analysis of the two-dimensional spaces allows us to visualize the contribution of each sub-system to the locomotion, as well as the contribution of the coordination among the sub-systems. We demonstrate our approach by designing gaits for quadrupedal robots with different morphologies, and experimentally validate our findings on a robot with a long actuated back and intermediate-sized legs.
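The visual gait design described above has a compact numeric core: a gait is a closed loop in a two-dimensional shape space, and the net displacement per cycle is approximately the curvature of the local connection integrated over the area the loop encloses, which is why gaits can be designed on 2-D plots. A minimal sketch under an invented local connection (not any robot's):

```python
import numpy as np

def A(r):
    # toy local connection: body displacement dx = A(r) . dr for shape velocity dr
    return np.array([-r[1], r[0]])   # the curl of this field is constant, = 2

phases = np.linspace(0.0, 2.0 * np.pi, 1001)
loop = np.stack([0.5 * np.cos(phases), 0.5 * np.sin(phases)], axis=1)  # circular gait in shape space

dx = 0.0
for i in range(len(loop) - 1):       # line integral of A along the closed loop
    mid = 0.5 * (loop[i] + loop[i + 1])
    dx += A(mid) @ (loop[i + 1] - loop[i])

area = np.pi * 0.5 ** 2              # enclosed area; by Stokes, dx ~ curl * area
```

The loop integral matches curl times enclosed area, so the height-function plots the abstract mentions let one pick loops that enclose large (signed) curvature, and pairwise coordination amounts to choosing such loops jointly across the 2-D spaces.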
Learning to Plan with Logical Automata
Brandon Araki, Kiran Vodrahalli, Thomas Leech, C. Vasile, Mark Donahue, D. Rus
DOI: 10.15607/RSS.2019.XV.064 (https://doi.org/10.15607/RSS.2019.XV.064). Robotics: Science and Systems XV, June 22, 2019.
Abstract: This paper introduces the Logic-based Value Iteration Network (LVIN) framework, which combines imitation learning and logical automata to enable agents to learn complex behaviors from demonstrations. We address two problems with learning from expert knowledge: (1) how to generalize learned policies for a task to larger classes of tasks, and (2) how to account for erroneous demonstrations. Our LVIN model solves finite gridworld environments by instantiating a recurrent, convolutional neural network as a value iteration procedure over a learned Markov Decision Process (MDP) that factors into two MDPs: a small finite state automaton (FSA) corresponding to logical rules, and a larger MDP corresponding to motions in the environment. The parameters of LVIN (value function, reward map, FSA transitions, large MDP transitions) are approximately learned from expert trajectories. Since the model represents the learned rules as an FSA, the model is interpretable; since the FSA is integrated into planning, the behavior of the agent can be manipulated by modifying the FSA transitions. We demonstrate these abilities in several domains of interest, including a lunchbox-packing manipulation task and a driving domain.
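The product-MDP planning that LVIN learns can be illustrated with a hand-built toy (the automaton, grid, and rewards below are all invented): value iteration over (FSA state, cell) pairs yields a policy that pursues subgoals in the order the automaton demands.

```python
import numpy as np

GRID = 5                       # 1-D gridworld, cells 0..4
A, B = 0, 4                    # proposition cells: the task is "visit A, then B"
gamma = 0.95
ACTIONS = [-1, 0, +1]          # move left, stay, move right

def fsa_step(q, cell):
    # tiny finite state automaton: q0 = "need A", q1 = "need B", q2 = "done"
    if q == 0 and cell == A:
        return 1
    if q == 1 and cell == B:
        return 2
    return q

V = np.zeros((3, GRID))        # value over product states (fsa_state, cell)
for _ in range(100):           # value iteration on the product MDP
    V_new = np.full_like(V, -np.inf)
    for q in range(3):
        for s in range(GRID):
            for a in ACTIONS:
                s2 = min(max(s + a, 0), GRID - 1)
                q2 = fsa_step(q, s2)
                r = 1.0 if (q == 1 and q2 == 2) else 0.0   # reward on completion
                V_new[q, s] = max(V_new[q, s], r + gamma * V[q2, s2])
    V = V_new

def greedy(q, s):
    # policy extracted from the converged product-MDP values
    vals = []
    for a in ACTIONS:
        s2 = min(max(s + a, 0), GRID - 1)
        q2 = fsa_step(q, s2)
        r = 1.0 if (q == 1 and q2 == 2) else 0.0
        vals.append(r + gamma * V[q2, s2])
    return ACTIONS[int(np.argmax(vals))]
```

From automaton state q0 the policy heads left toward A even when B is nearer, then reverses once the FSA advances; editing `fsa_step` and re-solving changes the behavior, mirroring the manipulability-by-editing-FSA-transitions claim (here the components are hand-specified, whereas LVIN learns them from demonstrations).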
Autonomous Tool Construction Using Part Shape and Attachment Prediction
Lakshmi Nair, N. S. Srikanth, Zackory M. Erickson, S. Chernova
DOI: 10.15607/RSS.2019.XV.009 (https://doi.org/10.15607/RSS.2019.XV.009). Robotics: Science and Systems XV, June 22, 2019.
Abstract: This work explores the problem of robot tool construction: creating tools from parts available in the environment. We advance the state of the art in robotic tool construction by introducing an approach that enables the robot to construct a wider range of tools with greater computational efficiency. Specifically, given an action that the robot wishes to accomplish and a set of building parts available to the robot, our approach reasons about the shape of the parts and potential ways of attaching them, generating a ranking of part combinations that the robot then uses to construct and test the target tool. We validate our approach on the construction of five tools using a physical 7-DOF robot arm.
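The ranking step described above might look roughly like the following sketch; the parts, scores, and the attachment predictor are all invented stand-ins for the paper's learned components.

```python
from itertools import permutations

# Hypothetical candidate parts and per-part scores: how well each part's shape
# fits the tool's action end, and how well it serves as a graspable handle.
parts = ["scraper_head", "metal_rod", "foam_block", "bowl"]
shape_fit = {"scraper_head": 0.9, "metal_rod": 0.3, "foam_block": 0.2, "bowl": 0.6}
graspable = {"scraper_head": 0.2, "metal_rod": 0.9, "foam_block": 0.7, "bowl": 0.3}

def attach_prob(action_part, handle):
    # stand-in for a learned attachment predictor (e.g. pierce or grasp attachments)
    return 0.8 if handle in ("metal_rod", "foam_block") else 0.3

# score every (action part, handle) combination and rank best-first, so the
# robot constructs and physically tests the most promising tools first
ranked = sorted(
    ((shape_fit[a] * graspable[h] * attach_prob(a, h), a, h)
     for a, h in permutations(parts, 2)),
    reverse=True,
)
best_score, action_part, handle = ranked[0]
```

Ranking rather than committing to a single combination matters because attachment can fail on the real robot; the ranked list gives a principled order in which to fall back.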
PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Estimation
Xinke Deng, Arsalan Mousavian, Yu Xiang, Fei Xia, T. Bretl, D. Fox
DOI: 10.15607/RSS.2019.XV.049 (https://doi.org/10.15607/RSS.2019.XV.049). Robotics: Science and Systems XV, June 22, 2019.
Abstract: Tracking 6D poses of objects from videos provides rich information to a robot performing different tasks such as manipulation and navigation. In this work, we formulate the 6D object pose tracking problem in the Rao-Blackwellized particle filtering framework, where the 3D rotation and the 3D translation of an object are decoupled. This factorization allows our approach, called PoseRBPF, to efficiently estimate the 3D translation of an object along with the full distribution over the 3D rotation. This is achieved by discretizing the rotation space in a fine-grained manner, and training an auto-encoder network to construct a codebook of feature embeddings for the discretized rotations. As a result, PoseRBPF can track objects with arbitrary symmetries while still maintaining adequate posterior distributions. Our approach achieves state-of-the-art results on two 6D pose estimation benchmarks.
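The core factorization is easy to sketch: sample particles over translation only, and let each particle carry a full discrete distribution over rotations that is updated analytically. The likelihood functions below are invented stand-ins for the paper's image-crop comparison and learned codebook similarities.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES, N_ROT = 100, 36          # particles; discretized rotation bins
true_t = np.array([0.2, -0.1, 1.0])   # hypothetical ground-truth translation
true_rot = 10                         # index of the true rotation bin

particles = rng.normal(0.0, 0.5, size=(N_PARTICLES, 3))  # translation samples
rot_dist = np.full((N_PARTICLES, N_ROT), 1.0 / N_ROT)    # per-particle rotation posterior

def trans_likelihood(t):
    # stand-in for comparing the projected object crop against the image
    return np.exp(-20.0 * np.sum((t - true_t) ** 2))

def rot_likelihood():
    # stand-in for codebook similarities from the learned auto-encoder
    sims = np.exp(-0.5 * ((np.arange(N_ROT) - true_rot) / 2.0) ** 2)
    return sims / sims.sum()

for _ in range(50):                   # filtering steps on a static scene
    particles += rng.normal(0.0, 0.03, size=particles.shape)  # motion noise
    rot_dist *= rot_likelihood()      # analytic (Rao-Blackwellized) rotation update
    rot_dist /= rot_dist.sum(axis=1, keepdims=True)
    w = np.array([trans_likelihood(t) for t in particles])
    w /= w.sum()
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)      # resample translations
    particles, rot_dist = particles[idx], rot_dist[idx]

est_t = particles.mean(axis=0)
est_rot = int(np.argmax(rot_dist.mean(axis=0)))
```

Because the rotation posterior is a full distribution rather than a point, a symmetric object would simply keep several high-probability bins, which is the mechanism behind the symmetry-handling claim.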
DESPOT-Alpha: Online POMDP Planning with Large State and Observation Spaces
Neha P. Garg, David Hsu, Wee Sun Lee
DOI: 10.15607/RSS.2019.XV.006 (https://doi.org/10.15607/RSS.2019.XV.006). Robotics: Science and Systems XV, June 22, 2019.
Bayesian Estimator for Partial Trajectory Alignment
Przemyslaw A. Lasota, J. Shah
DOI: 10.15607/RSS.2019.XV.080 (https://doi.org/10.15607/RSS.2019.XV.080). Robotics: Science and Systems XV, June 22, 2019.
Abstract: The problem of temporal alignment of time series is common across many fields of study. Within the domain of robotics, human motion trajectories are one type of time series that is often utilized for recognition and prediction of human intent. In these applications, online temporal alignment of partial trajectories to a full representative trajectory is of particular interest, as it is desirable to make accurate intent prediction decisions early in a motion in order to enable proactive robot behavior. This is a particularly difficult problem, however, due to the potential for overlapping trajectory regions and temporary stops, both of which can degrade the performance of existing alignment techniques. Furthermore, it is desirable to not only provide the most likely alignment but also characterize the uncertainty around it, which current methods are unable to accomplish. To address these difficulties and drawbacks, we present BEST-PTA, a framework that combines optimization, supervised learning, and unsupervised learning components in order to build a Bayesian model that outputs distributions over likely correspondence points based on observed partial trajectory data. Through an evaluation incorporating multiple datasets, we show that BEST-PTA outperforms previous alignment techniques; furthermore, we demonstrate that this improvement can significantly boost human motion prediction performance, and we discuss the implications of these results for improving the quality of human-robot interaction.
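As a stand-in for the learned model, even a fixed-rate window matcher with a Gaussian noise model illustrates the paper's key output: a distribution over correspondence points rather than a single alignment. Everything below (trajectory shape, noise level) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = np.sin(np.linspace(0.0, np.pi, 60))      # full reference 1-D trajectory
T = 25
obs = ref[:T] + 0.01 * rng.normal(size=T)      # noisy partial trajectory so far

sigma = 0.05                                   # assumed observation noise
cands = np.arange(T - 1, len(ref))             # candidate "current index" values
log_post = np.array([
    # Gaussian log-likelihood of the partial trajectory against each window
    -0.5 * np.sum(((obs - ref[c - T + 1 : c + 1]) / sigma) ** 2)
    for c in cands
])
post = np.exp(log_post - log_post.max())       # stabilized exponentiation
post /= post.sum()                             # posterior over correspondence points
best = int(cands[np.argmax(post)])
```

The normalized `post` is the useful object: a downstream intent predictor can marginalize over it instead of trusting one alignment, which is what makes early (and honest) predictions possible. The real framework additionally handles varying execution speed, overlapping regions, and temporary stops, which this fixed-rate sketch does not.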
Motion Planning, Design Optimization and Fabrication of Ferromagnetic Swimmers
J. Grover, Daniel Vedova, Nalini Jain, M. Travers, H. Choset
DOI: 10.15607/RSS.2019.XV.079 (https://doi.org/10.15607/RSS.2019.XV.079). Robotics: Science and Systems XV, June 22, 2019.
Abstract: Small-scale robots have the potential to impact many areas of medicine and manufacturing, including targeted drug delivery, telemetry, and micromanipulation. This paper develops an algorithmic framework for regulating external magnetic fields to induce motion in millimeter-scale robots in a viscous liquid, to simulate the physics of swimming at the micrometer scale. Our approach for planning motions for these swimmers is based on tools from geometric mechanics that provide a novel means to design periodic changes in the physical shape of a robot that propel it in a desired direction. Using these tools, we are able to derive new motion primitives for generating locomotion in these swimmers. We use these primitives to optimize swimming efficiency as a function of the swimmer's internal magnetization, and describe a principled approach to encode the best magnetization distributions in the swimmers. We validate this procedure experimentally and conclude by implementing the newly computed motion primitives on several magnetic swimmer prototypes, including two-link and three-link swimmers.
On the Merits of Joint Space and Orientation Representations in Learning the Forward Kinematics in SE(3)
R. Grassmann, J. Burgner-Kahrs
DOI: 10.15607/RSS.2019.XV.017 (https://doi.org/10.15607/RSS.2019.XV.017). Robotics: Science and Systems XV, June 22, 2019.
Abstract: This paper investigates the influence of different joint space and orientation representations on the approximation of the forward kinematics. We consider all degrees of freedom in three-dimensional space SE(3) and in the robot's joint space Q. In order to approximate the forward kinematics, different shallow artificial neural networks with ReLU (rectified linear unit) activation functions are designed. The number of weights and biases in each network is normalized. The results show that quaternion/vector-pairs outperform other SE(3) representations with respect to approximation capability, which is demonstrated on two robot types: a Stanford Arm and a concentric tube continuum robot. For the latter, experimental measurements from a robot prototype are used as well. On measured data, if quaternion/vector-pairs are used, the approximation is found to be seven times more accurate with respect to translation and three times more accurate with respect to rotation. By utilizing a four-parameter orientation representation, the tip position error is less than 0.8% with respect to the robot length on measured data, showing higher accuracy than state-of-the-art modeling (1.5%) for concentric tube continuum robots. Other three-parameter representations of SO(3) cannot achieve this, for instance sets of Euler angles (at best 3.5% with respect to the robot length).
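The representation at issue is easy to make concrete: a quaternion/vector pair as the regression target. The toy planar arm below (not one of the paper's robots) computes such pairs, e.g. as training targets for a forward-kinematics network.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions stored as [w, x, y, z]
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q, v):
    # rotate vector v by unit quaternion q via q * (0, v) * conj(q)
    qv = np.concatenate(([0.0], v))
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), qc)[1:]

def fk_quat_vec(thetas, link=1.0):
    # toy forward kinematics: planar revolute joints about z, unit link lengths;
    # the pose is accumulated directly as a quaternion/vector pair (q, t)
    q = np.array([1.0, 0.0, 0.0, 0.0])
    t = np.zeros(3)
    for th in thetas:
        qi = np.array([np.cos(th / 2), 0.0, 0.0, np.sin(th / 2)])
        q = quat_mul(q, qi)
        t = t + quat_rotate(q, np.array([link, 0.0, 0.0]))
    return q, t
```

Unlike Euler angles, the four-parameter quaternion target is singularity-free and stays near unit norm under small network errors, which is consistent with the accuracy advantage the abstract reports.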
Game Theoretic Planning for Self-Driving Cars in Competitive Scenarios
Mingyu Wang, Zijian Wang, J. Talbot, J. C. Gerdes, M. Schwager
DOI: 10.15607/RSS.2019.XV.048 (https://doi.org/10.15607/RSS.2019.XV.048). Robotics: Science and Systems XV, June 22, 2019.
Abstract: We propose a nonlinear receding-horizon game-theoretic planner for autonomous cars in competitive scenarios with other cars. The online planner is specifically formulated for a two-car autonomous racing game in which each car tries to advance along a given track as far as possible with respect to the other car. The algorithm extends previous work on game-theoretic planning for single-integrator agents to autonomous cars in the following ways: (i) by representing the trajectory as a piecewise polynomial, (ii) by incorporating bicycle kinematics into the trajectory, and (iii) by enforcing constraints on path curvature and acceleration. The game-theoretic planner iteratively plans a trajectory for the ego vehicle, then for the other vehicle, until convergence. Crucially, the trajectory optimization includes a sensitivity term that allows the ego vehicle to reason about how much the other vehicle will yield to it to avoid collisions. The resulting trajectories for the ego vehicle exhibit rich game strategies such as blocking, faking, and opportunistic overtaking. The game-theoretic planner is shown to significantly outperform a baseline planner using Model Predictive Control, which does not take interaction into account. The performance is validated in high-fidelity numerical simulations, in experiments with two scale-model autonomous cars, and in experiments with a full-scale autonomous car racing against a simulated vehicle.
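The iterative plan-and-replan coordination can be sketched with scalar "plans" (lateral offsets) instead of full trajectories; the lane grid, preferences, and collision penalty below are invented, and the paper's sensitivity term and vehicle dynamics are omitted.

```python
import numpy as np

lanes = np.linspace(-1.0, 1.0, 21)        # candidate lateral offsets

def cost(y_self, y_other, y_pref):
    # deviation from the preferred racing line, plus a soft collision penalty
    c = (y_self - y_pref) ** 2
    if abs(y_self - y_other) < 0.3:
        c += 5.0 * (0.3 - abs(y_self - y_other))
    return c

y = {"ego": 0.0, "opp": 0.0}              # both start on the centerline
pref = {"ego": 0.1, "opp": -0.1}          # slightly conflicting preferences

for _ in range(20):                       # iterated best response
    prev = dict(y)
    for me, other in (("ego", "opp"), ("opp", "ego")):
        # re-optimize my plan while holding the other vehicle's plan fixed
        costs = [cost(l, y[other], pref[me]) for l in lanes]
        y[me] = float(lanes[int(np.argmin(costs))])
    if y == prev:
        break                             # fixed point: mutual best responses
```

At the fixed point neither car can improve unilaterally, i.e. the plans approximate a Nash equilibrium of the simplified game; the actual planner runs the same loop over constrained piecewise-polynomial trajectories in receding horizon.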
Conditional Neural Movement Primitives
M. Seker, Mert Imre, J. Piater, Emre Ugur
DOI: 10.15607/RSS.2019.XV.071 (https://doi.org/10.15607/RSS.2019.XV.071). Robotics: Science and Systems XV, June 22, 2019.
Abstract: Conditional Neural Movement Primitives (CNMPs) is a learning-from-demonstration framework designed as a robotic movement learning and generation system built on top of a recent deep neural architecture, Conditional Neural Processes (CNPs). Based on CNPs, CNMPs extract prior knowledge directly from the training data by sampling observations from it, and use it to predict a conditional distribution over any other target points. CNMPs learn complex temporal, multi-modal sensorimotor relations in connection with external parameters and goals; produce movement trajectories in joint or task space; and execute these trajectories through a high-level feedback control loop. Conditioned on an external goal encoded in the sensorimotor space of the robot, the CNMP generates the sensorimotor trajectory expected to be observed during successful execution of the task, and the corresponding motor commands are executed. In order to detect and react to unexpected events during action execution, the CNMP is further conditioned on the actual sensor readings at each time step. Through simulations and real robot experiments, we showed that CNMPs can learn the nonlinear relations between low-dimensional parameter spaces and complex movement trajectories from few demonstrations, and that they can also model the associations between high-dimensional sensorimotor spaces and complex motions using a large number of demonstrations. The experiments further showed that even when the task parameters were not explicitly provided to the system, the robot could learn their influence by associating the learned sensorimotor representations with the movement trajectories. The robot, for example, learned the influence of object weights and shapes by exploiting its sensorimotor space, which includes proprioception and force measurements, and was able to change the movement trajectory on the fly when one of these factors was changed through external intervention.
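The CNMP itself is a trained network; as a plainly labeled stand-in for its interface (condition on a target point, get a distribution over the whole trajectory), kernel regression over a few synthetic demonstrations behaves analogously. The demonstrations and bandwidth below are invented.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)
goals = np.array([0.5, 1.0, 1.5, 2.0])   # end-point heights of the demos
# synthetic reaching demonstrations: each rises smoothly to its goal at t = 1
demos = np.stack([g * np.sin(np.pi * t / 2) ** 2 for g in goals])

def condition(goal, bw=0.25):
    # weight demonstrations by closeness of their goal to the conditioning goal
    w = np.exp(-0.5 * ((goals - goal) / bw) ** 2)
    w /= w.sum()
    mean = w @ demos                      # predicted full trajectory
    var = w @ (demos - mean) ** 2         # predictive spread at each time step
    return mean, var

mean, var = condition(1.2)                # "reach height 1.2" was never demonstrated
```

Conditioning on an unseen goal yields an interpolated trajectory with per-timestep uncertainty, which is the shape of output a CNMP produces; the real model learns the conditioning end-to-end and can condition on any sensorimotor dimension (e.g. a force reading), not just the goal.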