{"title":"Online IMU Intrinsic Calibration: Is It Necessary?","authors":"Yulin Yang, Patrick Geneva, Xingxing Zuo, G. Huang","doi":"10.15607/rss.2020.xvi.026","DOIUrl":"https://doi.org/10.15607/rss.2020.xvi.026","url":null,"abstract":"This paper addresses the problem of visual-inertial self-calibration while focusing on the necessity of online IMU intrinsic calibration. To this end, we perform observability analysis for visual-inertial navigation systems (VINS) with four different inertial model variants containing intrinsic parameters that encompass one commonly used IMU model for low-cost inertial sensors. The analysis theoretically confirms what is intuitively believed in the literature, that is, that the IMU intrinsics are observable given fully-excited 6-axis motion. Moreover, we, for the first time, identify 6 primitive degenerate motions for IMU intrinsic calibration. Each degenerate motion profile causes a set of intrinsic parameters to be unobservable, and any combination of these degenerate motions is still degenerate. This result holds for all four inertial model variants and has significant implications for the necessity of performing online IMU intrinsic calibration in many robotic applications. Extensive simulations and real-world experiments are performed to validate both our observability analysis and degenerate motion analysis.","PeriodicalId":231005,"journal":{"name":"Robotics: Science and Systems XVI","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121220652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
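The abstract above refers to a "commonly used IMU model for low-cost inertial sensors" with intrinsic parameters. A minimal illustrative sketch of such a model (this is an assumption about the model family, not the paper's exact parameterization): the raw accelerometer reading relates to the true specific force via a per-axis scale, an axis-misalignment matrix, and a bias, and calibration amounts to estimating these so the model can be inverted.

```python
import numpy as np

# Hypothetical sketch of a low-cost accelerometer intrinsic model:
#   a_meas = D_a @ a_true + b_a
# where D_a combines per-axis scale factors and axis misalignment,
# and b_a is a constant bias. Names and values here are illustrative.

def apply_accel_intrinsics(a_true, scale, misalignment, bias):
    """Map a true acceleration to a raw measurement: a_meas = D_a @ a_true + b_a."""
    D_a = np.diag(scale) + misalignment  # misalignment has a zero diagonal
    return D_a @ a_true + bias

def correct_accel(a_meas, scale, misalignment, bias):
    """Invert the intrinsic model to recover the true acceleration."""
    D_a = np.diag(scale) + misalignment
    return np.linalg.solve(D_a, a_meas - bias)

a_true = np.array([0.0, 0.0, 9.81])            # gravity along z
scale = np.array([1.01, 0.99, 1.02])           # per-axis scale factors
mis = np.array([[0.0,   0.0,   0.0],
                [0.002, 0.0,   0.0],
                [0.001, -0.003, 0.0]])         # small misalignment terms
bias = np.array([0.05, -0.02, 0.1])

a_meas = apply_accel_intrinsics(a_true, scale, mis, bias)
a_rec = correct_accel(a_meas, scale, mis, bias)  # recovers a_true
```

The degenerate-motion result in the paper then concerns when such parameters become unobservable: under insufficiently exciting motion, different (scale, misalignment, bias) combinations explain the same measurements.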
{"title":"Grounding Language to Non-Markovian Tasks with No Supervision of Task Specifications","authors":"Roma Patel, Ellie Pavlick, Stefanie Tellex","doi":"10.15607/rss.2020.xvi.016","DOIUrl":"https://doi.org/10.15607/rss.2020.xvi.016","url":null,"abstract":"Natural language instructions often exhibit sequential constraints rather than being simply goal-oriented, for example “go around the lake and then travel north until the intersection”. Existing approaches map these kinds of natural language expressions to Linear Temporal Logic (LTL) expressions but require an expensive dataset of LTL expressions paired with English sentences. We introduce an approach that can learn to map from English to LTL expressions given only pairs of English sentences and trajectories, enabling a robot to understand commands with sequential constraints. We use formal methods of LTL progression to reward the produced logical forms by progressing each LTL logical form against the ground-truth trajectory, represented as a sequence of states, so that no annotated LTL expressions are needed during training. We evaluate in two ways: on the SAIL dataset, a benchmark artificial environment of 3,266 trajectories and language commands, as well as on 10 newly collected real-world environments of roughly the same size. We show that our model correctly interprets natural language commands with 76.9% accuracy on average. We demonstrate the end-to-end process in real-time in simulation, starting with only a natural language instruction and an initial robot state, producing a logical form from the model trained with trajectories, and finding a trajectory that satisfies sequential constraints with an LTL planner in the environment.","PeriodicalId":231005,"journal":{"name":"Robotics: Science and Systems XVI","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115865553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
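The key mechanism in the abstract above, LTL progression, rewrites a temporal formula after observing each state so that it expresses what must still hold for the remainder of the trajectory; a formula that progresses to true was satisfied, which yields a training reward without any annotated LTL. A minimal sketch for a small LTL fragment (atoms, conjunction, disjunction, "eventually", "until"); the tuple encoding and function names are illustrative, not the paper's implementation:

```python
# Formulas as tuples: ('prop', p), ('and', f, g), ('or', f, g),
# ('eventually', f), ('until', f, g). States are sets of true propositions.
TRUE, FALSE = ('true',), ('false',)

def _and(a, b):
    if a == FALSE or b == FALSE: return FALSE
    if a == TRUE: return b
    if b == TRUE: return a
    return ('and', a, b)

def _or(a, b):
    if a == TRUE or b == TRUE: return TRUE
    if a == FALSE: return b
    if b == FALSE: return a
    return ('or', a, b)

def progress(f, state):
    """One step of LTL progression: what must hold from the next state on."""
    op = f[0]
    if op in ('true', 'false'):
        return f
    if op == 'prop':
        return TRUE if f[1] in state else FALSE
    if op == 'and':
        return _and(progress(f[1], state), progress(f[2], state))
    if op == 'or':
        return _or(progress(f[1], state), progress(f[2], state))
    if op == 'eventually':      # F f  ->  prog(f) or F f
        return _or(progress(f[1], state), f)
    if op == 'until':           # f U g  ->  prog(g) or (prog(f) and f U g)
        return _or(progress(f[2], state), _and(progress(f[1], state), f))
    raise ValueError(f"unknown operator {op!r}")

def trajectory_reward(formula, trajectory):
    """Reward 1.0 iff progressing the formula along the trajectory yields true."""
    f = formula
    for state in trajectory:
        f = progress(f, state)
    return 1.0 if f == TRUE else 0.0
```

For example, `('until', ('prop', 'safe'), ('prop', 'goal'))` earns reward 1.0 on a trajectory whose states satisfy `safe` until one satisfies `goal`, and 0.0 if `safe` is violated first.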
{"title":"The Dark Side of Embodiment - Teaming Up With Robots VS Disembodied Agents","authors":"Filipa Correia, Samuel Gomes, S. Mascarenhas, Francisco S. Melo, Ana Paiva","doi":"10.15607/rss.2020.xvi.010","DOIUrl":"https://doi.org/10.15607/rss.2020.xvi.010","url":null,"abstract":"In recent years, research on the embodiment of interactive social agents has focused on comparisons between robots and virtually-displayed agents. Our work contributes to this line of research by comparing social robots with disembodied agents, exploring the role of embodiment within group interactions. We conducted a user study where participants formed a team with two agents to play a Collective Risk Dilemma (CRD). Besides having two levels of embodiment (physically embodied and disembodied) as a between-subjects variable, we also manipulated the agents’ degree of cooperation as a within-subjects variable: one of the agents used a prosocial strategy and the other a selfish strategy. Our results show that while trust levels were similar between the two conditions of embodiment, participants identified more with the team of embodied agents. Surprisingly, when the agents were disembodied, the prosocial agent was rated more positively and the selfish agent more negatively than when they were embodied. The obtained results suggest that embodied interactions might improve how humans relate to agents in team settings. However, if its social aspects can mask selfish behaviours, as our results suggest, a dark side of embodiment may emerge.","PeriodicalId":231005,"journal":{"name":"Robotics: Science and Systems XVI","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131499180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling data-driven robotics with reward sketching and batch reinforcement learning","authors":"Serkan Cabi, Sergio Gomez Colmenarejo, Alexander Novikov, Ksenia Konyushkova, Scott E. Reed, Rae Jeong, Konrad Zolna, Y. Aytar, D. Budden, Mel Vecerík, Oleg O. Sushkov, David Barker, Jonathan Scholz, Misha Denil, Nando de Freitas, Ziyun Wang","doi":"10.15607/rss.2020.xvi.076","DOIUrl":"https://doi.org/10.15607/rss.2020.xvi.076","url":null,"abstract":"We present a framework for data-driven robotics that makes use of a large dataset of recorded robot experience and scales to several tasks using learned reward functions. We show how to apply this framework to accomplish three different object manipulation tasks on a real robot platform. Given demonstrations of a task together with task-agnostic recorded experience, we use a special form of human annotation as supervision to learn a reward function, which enables us to deal with real-world tasks where the reward signal cannot be acquired directly. Learned rewards are used in combination with a large dataset of experience from different tasks to learn a robot policy offline using batch RL. We show that using our approach it is possible to train agents to perform a variety of challenging manipulation tasks including stacking rigid objects and handling cloth.","PeriodicalId":231005,"journal":{"name":"Robotics: Science and Systems XVI","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134254573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
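The pipeline in the abstract above is: collect task-agnostic experience, obtain per-frame human annotations ("reward sketches"), fit a reward model, relabel the whole dataset with predicted rewards, then run batch RL offline on the relabeled data. A minimal sketch of the reward-learning and relabeling steps, assuming a linear reward model over per-frame features (the feature representation and function names are illustrative; the paper's reward model is learned, not specified here):

```python
import numpy as np

# Sketch (not the paper's implementation): fit a reward model to human
# per-frame annotations, then use it to relabel task-agnostic experience
# so that batch RL can train on it offline.

def fit_reward_model(features, sketched_rewards):
    """Least-squares linear reward model: r(s) ~= w @ phi(s) + b."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, sketched_rewards, rcond=None)
    return w

def relabel(w, features):
    """Predict rewards for every frame in a (possibly unannotated) dataset."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ w

rng = np.random.default_rng(0)
phi = rng.normal(size=(200, 4))        # per-frame features of recorded experience
true_w = np.array([0.5, -1.0, 0.3, 0.0])
sketches = phi @ true_w + 0.2          # synthetic, noiseless annotations

w = fit_reward_model(phi, sketches)
rewards = relabel(w, phi)              # rewards for the full dataset
```

With every transition relabeled this way, an off-the-shelf batch RL algorithm can learn a policy from the fixed dataset without further environment interaction.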