{"title":"A 2-Approximation Algorithm for the Online Tethered Coverage Problem","authors":"Gokarna Sharma, Pavan Poudel, Ayan Dutta, Vala Zeinali, T. T. Khoei, Jong-Hoon Kim","doi":"10.15607/RSS.2019.XV.025","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.025","url":null,"abstract":"We consider the problem of covering a planar environment, possibly containing unknown obstacles, using a robot of square size D×D attached to a fixed point S by a cable of finite length L. The environment is discretized into 4-connected grid cells with resolution proportional to the robot size. Starting at S, the task of the robot is to visit each cell in the environment that are not occupied by obstacles and return to S with the cable fully retracted. Our goal is to minimize the total distance traveled by the robot to fully cover the unknown environment while avoiding tangling of the cable. In this paper, we present a novel online algorithm to solve this problem that achieves 2-approximation for the total distance traveled by the robot compared to the minimum distance that needs to be traveled. Our algorithm significantly improves the 2L/D-approximation achieved by the best previously known online algorithm designed for this problem. The approximation bound is also validated using rigorous simulated experiments.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127457184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Remote Telemanipulation with Adapting Viewpoints in Visually Complex Environments","authors":"D. Rakita, Bilge Mutlu, Michael Gleicher","doi":"10.15607/RSS.2019.XV.068","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.068","url":null,"abstract":"In this paper, we introduce a novel method to support remote telemanipulation tasks in complex environments by providing operators with an enhanced view of the task environment. Our method features a novel viewpoint adjustment algorithm designed to automatically mitigate occlusions caused by workspace geometry, supports visual exploration to provide operators with situation awareness in the remote environment, and mediates context-specific visual challenges by making viewpoint adjustments based on sparse input from the user. Our method builds on the dynamic camera telemanipulation viewing paradigm, where a user controls a manipulation robot, and a camera-in-hand robot alongside the manipulation robot servos to provide a sufficient view of the remote environment. We discuss the real-time motion optimization formulation used to arbitrate the various objectives in our shared-control-based method, particularly highlighting how our occlusion avoidance and viewpoint adaptation approaches fit within this framework. We present results from an empirical evaluation of our proposed occlusion avoidance approach as well as a user study that compares our telemanipulation shared-control method against alternative telemanipulation approaches. We discuss the implications of our work for future shared-control research and robotics applications.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125571814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct Drive Hands: Force-Motion Transparency in Gripper Design","authors":"Ankita Bhatia, Aaron M. Johnson, M. T. Mason","doi":"10.15607/RSS.2019.XV.053","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.053","url":null,"abstract":"The Direct Drive Hand (DDHand) project is exploring an alternative design philosophy for grippers. The conventional approach is to prioritize clamping force, leading to high gear ratios, slow motion, and poor transmission of force/motion signals. Instead, the DDHand prioritizes transparency: we view the gripper as a signal transmission channel, and seek high-bandwidth, highfidelity transmission of force and motion signals in both directions. The resulting design has no gears and no springs, occupying a new quadrant in the servo gripper design space. This paper presents the direct drive gripper design philosophy, compares the performance of different design choices, describes our current design and implementation, and demonstrates a fly-by “smack and snatch” grasping motion to show the gripper’s ability to safely detect and respond quickly to variations in the task environment.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116148722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segment2Regress: Monocular 3D Vehicle Localization in Two Stages","authors":"Jaesung Choe, Kyungdon Joo, François Rameau, Gyumin Shim, I. Kweon","doi":"10.15607/RSS.2019.XV.016","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.016","url":null,"abstract":"High-quality depth information is required to perform 3D vehicle detection, consequently, there exists a large performance gap between camera and LiDAR-based approaches. In this paper, our monocular camera-based 3D vehicle localization method alleviates the dependency on high-quality depth maps by taking advantage of the commonly accepted assumption that the observed vehicles lie on the road surface. We propose a two-stage approach that consists of a segment network and a regression network, called Segment2Regress. For a given single RGB image and a prior 2D object detection bounding box, the two stages are as follows: 1) The segment network activates the pixels under the vehicle (modeled as four line segments and a quadrilateral representing the area beneath the vehicle projected on the image coordinate). These segments are trained to lie on the road plane such that our network does not require full depth estimation. Instead, the depth is directly approximated from the known ground plane parameters. 2) The regression network takes the segments fused with the plane depth to predict the 3D location of a car at the ground level. To stabilize the regression, we introduce a coupling loss that enforces structural constraints. The efficiency, accuracy, and robustness of the proposed technique are highlighted through a series of experiments and ablation assessments. These tests are conducted on the KITTI bird’s eye view dataset where Segment2Regress demonstrates state-of-the-art performance. Further results are available at https://github.com/LifeBeyondExpectations/Segment2Regress","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121402596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Human Interpretations of Affect and Valence in a Social Robot","authors":"D. McNeill, C. Kennington","doi":"10.15607/RSS.2019.XV.041","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.041","url":null,"abstract":"In this paper we seek to understand how people interpret a social robot’s performance of an emotion, what we term ‘affective display,’ and the positive or negative valence of that affect. To this end, we tasked annotators with observing the Anki Cozmo robot perform its over 900 pre-scripted behaviors and labeling those behaviors with 16 possible affective display labels (e.g., interest, boredom, disgust, etc.). In our first experiment, we trained a neural network to predict annotated labels given multimodal information about the robot’s movement, face, and audio. The results suggest that pairing affects to predict the valence between them is more informative, which we confirmed in a second experiment. Both experiments show that certain modalities are more useful for predicting displays of affect and valence. For our final experiment, we generated novel robot behaviors and tasked human raters with assigning scores to valence pairs instead of applying labels, then compared our model’s predictions of valence between the affective pairs and compared the results to the human ratings. We conclude that some modalities have information that can be contributory or inhibitive when considered in conjunction with other modalities, depending on the emotional valence pair being considered.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122251186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Dynamical System Approach to Motion and Force Generation in Contact Tasks","authors":"Walid Amanhoud, Mahdi Khoramshahi, A. Billard","doi":"10.15607/RSS.2019.XV.021","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.021","url":null,"abstract":"—Many tasks require the robot to enter in contact with surfaces, be it to take support, to polish or to grasp an object. It is crucial that the robot controls forces both upon making contact and while in contact. While many solutions exist to control for contact, none offer the required robustness to adapt to real-world uncertainties, such as sudden displacement of the object prior and once in contact. To adapt to such disturbances require to re-plan on the fly both the trajectory and the force. Dynamical systems (DS) offer a framework for instant re-planning of trajectories. They are however limited to control of motions. We extend this framework here to enable generating contact forces and trajectories through DS. The framework allows also to modulate the impedance so as to show rigidity to maintain contact, and compliance to ensure safe interaction with humans. We validate the approach in single and dual arm setting using KUKA LWR 4+ robotic arms. We show that the approach allows 1) to make smooth contact while applying large forces, 2) to maintain desired contact force when scanning non-linear surfaces, even when the surface is moved, and 3) to grasp and lift smoothly an object in the air, and to re-balance forces on the fly to maintain the grasp even when subjected to strong external disturbances.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122788187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymptotically Optimal Planning for Non-Myopic Multi-Robot Information Gathering","authors":"Y. Kantaros, Brent Schlotfeldt, Nikolay A. Atanasov, George J. Pappas","doi":"10.15607/RSS.2019.XV.062","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.062","url":null,"abstract":"This paper proposes a novel highly scalable sampling-based planning algorithm for multi-robot active information acquisition tasks in complex environments. Active information gathering scenarios include target localization and tracking, active SLAM, surveillance, environmental monitoring and others. The objective is to compute control policies for sensing robots which minimize the accumulated uncertainty of a dynamic hidden state over an a priori unknown horizon. To address this problem, we propose a new sampling-based algorithm that simultaneously explores both the robot motion space and the reachable information space. Unlike relevant samplingbased approaches, we show that the proposed algorithm is probabilistically complete, asymptotically optimal and is supported by convergence rate bounds. Moreover, we demonstrate that by introducing bias in the sampling process towards informative areas, the proposed method can quickly compute sensor policies that achieve desired levels of uncertainty in large-scale estimation tasks that may involve large sensor teams, workspaces, and dimensions of the hidden state. We provide extensive simulation results that corroborate the theoretical analysis and show that the proposed algorithm can address large-scale estimation tasks which were previously infeasible.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115739193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact-Friendly Robust Control Design with Task-Space Quadratic Optimization","authors":"Yuquan Wang, A. Kheddar","doi":"10.15607/RSS.2019.XV.032","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.032","url":null,"abstract":"Almost all known robots fear impacts. Unlike humans , robots keep guarded motions to near zero-velocity prior to establishing contacts with their surroundings. This significantly slows down robotic tasks involving physical interaction. Two main ingredients are necessary to remedy this limitation: impact-friendly hardware design, and impact-friendly controllers. Our work focuses on the controller aspect. Task-space controllers formulated as quadratic programming (QP) are widely used in robotics to generate modular and reactive motion for a large range of task specifications under various constraints. We explicitly introduce discrete impact dynamics model into the QP-based controllers to generate robot motions that are robust to impact-induced state jumps in the joint velocities and joint torques. Our simulations, validate that our proposed impact-friendly QP controller is robust to contact impacts, shall they be expected or not. Therefore, we can exploit it for establishing contacts with high velocities, and explicitly generate task-purpose impulsive forces.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"328 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122712966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Risk Contours Map for Risk Bounded Motion Planning under Perception Uncertainties","authors":"A. Jasour, B. Williams","doi":"10.15607/RSS.2019.XV.056","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.056","url":null,"abstract":"In this paper, we introduce ”risk contours map” that contains the risk information of different regions in uncertain environments. Risk is defined as the probability of collision of robots with obstacles in presence of probabilistic uncertainties in location, size, and geometry of obstacles. We use risk contours to obtain safe paths for robots with guaranteed bounded risk. We formulate the problem of obtaining risk contours as a chance constrained optimization. We leverage the theory of moments and nonnegative polynomials to provide a convex optimization in the form of sum of squares optimization. Provided approach deals with nonconvex obstacles and probabilistic bounded and unbounded uncertainties. We demonstrate the performance of the provided approach by solving risk bounded motion planning problems.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127045233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Modal Learning Filters for RGB-Neuromorphic Wormhole Learning","authors":"A. Zanardi, Andreas Aumiller, J. Zilly, A. Censi, Emilio Frazzoli","doi":"10.15607/RSS.2019.XV.045","DOIUrl":"https://doi.org/10.15607/RSS.2019.XV.045","url":null,"abstract":"Robots that need to act in an uncertain, populated, and varied world need heterogeneous sensors to be able to perceive and act robustly. For example, self-driving cars currently on the road are equipped with dozens of sensors of several types (lidar, radar, sonar, cameras, . . . ). All of this existing and emerging complexity opens up many interesting questions regarding how to deal with multi-modal perception and learning. The recently developed technique of “wormhole learning” shows that even temporary access to a different sensor with complementary invariance characteristics can be used to enlarge the operating domain of an existing object detector without the use of additional training data. For example, an RGB object detector trained with daytime data can be updated to function at night time by using a “wormhole” jump through a different modality that is more illumination invariant, such as an IR camera. It turns out that having an additional sensor improves performance, even if you subsequently lose it. In this work we extend wormhole learning to allow it to cope with sensors that are radically different, such as RGB cameras and event-based neuromorphic sensors. Their profound differences imply that we need a more careful selection of which samples to transfer, thus we design “cross-modal learning filters”. We will walk in a relatively unexplored territory of multi-modal observability that is not usually considered in machine learning. We show that wormhole learning increases performance even though the intermediate neuromorphic modality is on average much worse at the task. These results suggest that multi-modal learning for perception is still an early field and there might be many opportunities to improve the perception performance by accessing a rich set of heterogeneous sensors (even if some are not actually deployed on the robot).","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126318237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}