{"title":"Towards co-creative drawing with a robot","authors":"Chipp Jansen, E. Sklar","doi":"10.31256/ll3do2j","DOIUrl":"https://doi.org/10.31256/ll3do2j","url":null,"abstract":"—This paper describes research into the development of a co-creative human-robot drawing system. Based on a pilot user study to survey the drawing practices of artists, various interaction factors have been identified that define example roles that a robot might take as a co-creative drawing partner. A research prototype system which observes an artist drawing with physical media—on paper—through the use of a drawing tablet and multiple cameras. The robotic system observes and captures data in real-time, as the artist draws. The longterm goal is to generate a data-backed model of the artist’s drawing process, which in future will respond through projected visual interactions upon the drawn surface. The design and technical details of the observational system are described in this short paper.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115850855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Pre-processing vs. Transfer Learning for Visual Route Navigation","authors":"William H. B. Smith, Y. Pétillot, R. Fisher","doi":"10.31256/nh4vy4l","DOIUrl":"https://doi.org/10.31256/nh4vy4l","url":null,"abstract":"This paper investigates image pre-processing and triplet learning for place recognition in route navigation. The first contribution combines image pre-processing and ImageNet pre-trained neural networks for generating improved image descriptors. The second contribution is a fast, compact ‘FullDrop’ layer that can be appended to an ImageNet pre-trained network and taught to generate invariant image descriptors with triplet learning. The proposals decrease inference time by 8x and parameters by 30x while keeping comparable performance to NetVLAD, the state of the art for this task","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129127375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards the development of a deposition technology for an automated rail repair system","authors":"D. Becker, J. Dobrzanski, M. Goh, L. Justham","doi":"10.31256/vz2jt4i","DOIUrl":"https://doi.org/10.31256/vz2jt4i","url":null,"abstract":"—The work presented in this paper explored the use of a laser line scanner to generate robotic deposition paths for the repair portion of an automated rail repair system. Currently surface defects cost the UK around £4 million per annum [1] , with little traceability being available throughout the repair process. This paper proposes a robotic repair system primarily focussed on the development of the deposition system. The deposition system utilised two different deposition strategies, the first extracted the weld prep from the point cloud to generate the deposition paths for the robot and the second measured the height of the previously deposited material and adjusted the generated path. This paper focusses on the use of the two algorithms and the testing completed on a representative geometry, utilising a caulking gun as a reusable material replacement for the additive welding system.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121284979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Automatic Design Tool for Fluid Elastomer Actuators","authors":"Olivia Bridgewater-Smith, Gabriele Maurizi, S. Fichera, David A. Marquez-Gamez, Andrew I. Cooper, P. Paoletti","doi":"10.31256/xu9da6q","DOIUrl":"https://doi.org/10.31256/xu9da6q","url":null,"abstract":"—Soft robotic actuators are a very promising tech- nology to enable use of robotic manipulators in scenarios that are inaccessible to traditional robots. However, their design and fabrication is a laborious process, especially for users with little knowledge of CAD software and 3D printing. The skills and time necessary for making the moulds used to create such actuators, leaves the process open for human errors and design variations, making accurate and repeatable experimental testing difficult to achieve. To reach a better understanding of this new technology, extensive and detailed experimental work should be undertaken, but this is currently hindered by the time-consuming design process. The design software presented in this paper provides the soft robotic community with a user-friendly design tool for generating 3D printed moulds that the users can customise to their needs. The tool aims to simplify the design process for soft robotics and to also make this technology accessible to users without extensive engineering background.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122284140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Unsupervised Natural Language Grounding through Explicit Teaching","authors":"Oliver Roesler","doi":"10.31256/bf9vw8c","DOIUrl":"https://doi.org/10.31256/bf9vw8c","url":null,"abstract":"—In this paper, a grounding framework is proposed that combines unsupervised and supervised grounding by extending an unsupervised grounding model with a mechanism to learn from explicit human teaching. To investigate whether explicit teaching improves the sample efficiency of the original model, both models are evaluated through an interaction experiment between a human tutor and a robot in which synonymous shape, color, and action words are grounded through geometric object characteristics, color histograms, and kinematic joint features. The results show that explicit teaching improves the sample efficiency of the unsupervised baseline model.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122814932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic ignition systems for oil fields","authors":"M. Lami, L. Alboul, I. Abed","doi":"10.31256/ln9sf6f","DOIUrl":"https://doi.org/10.31256/ln9sf6f","url":null,"abstract":"In the oil extraction industry, igniting the flare stacks is an essential operation. Oil sites have two kinds of flares, ground flares and flares that installed on towers. The ignition systems generate electrical sparks to burn the gases blowing out of the flares. Due to the permanent high operating temperature and the need for special thermal isolation, classical igniters have low reliability and high cost. In this work, two novel ignition systems have been implemented, the first is the robotic ignition system for ground flares, it utilises a mobile robot which moves toward the flare, avoiding the obstacles in its way and stops after detecting the gas, then it starts igniting the flare before heading to a safe point with no gas and low temperature. The second solution is the automated ignition system to light up the flares on the towers, which is a car that moves on a rail vertically, and begins igniting once it arrives at the tip of the tower, then it comes back to its starting point. As the igniters in both suggested systems are movable, so the system will be exposed to the heat generated by the flame within a very short time, this new feature increases the reliability of the igniter and reduces the complexity and the cost of the system. Keywords—igniters, microcontrollers, mobile robots, stack flares","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115834913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating PID Control for Station Keeping ROVs","authors":"Kyle L. Walker, A. Stokes, A. Kiprakis, F. Giorgio-Serchi","doi":"10.31256/ky3xg3b","DOIUrl":"https://doi.org/10.31256/ky3xg3b","url":null,"abstract":"For controlling Unmanned Underwater Vehicles (UUVs) in deep water, Proportional-Integral-Derivative (PID) control has previously been proposed. Disturbances due to waves are minimal at high depths, so PID provides an acceptable level of control for performing tasks such as station-keeping. In shallow water, disturbances from waves are considerably larger and thus station-keeping performance naturally degrades. By means of simulation, this letter details the performance of PID control when station keeping in a typical shallow-wave operating environment, such as that encountered during inspection of marine renewable energy devices. Using real wave data, a maximum positional error of 0.635m in the x-direction and 0.537m in the z-direction at a depth of 15 m is seen whilst subjected to a wave train with a significant wave height of 5.404m. Furthermore, estimates of likely displacements of a Remotely Operated Vehicle (ROV) are given for a variety of significant wave heights while operating at various depths. Our analysis provides a range of operational conditions within which hydrodynamic disturbances don’t preclude employment of UUVs and identify the conditions where PID-controlled station keeping becomes impractical and unsafe.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124185171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Game Theory For Self-Driving Cars","authors":"F. Camara, Charles W. Fox","doi":"10.31256/sk9zg2d","DOIUrl":"https://doi.org/10.31256/sk9zg2d","url":null,"abstract":"Pedestrian behaviour understanding is of utmost importance for autonomous vehicles (AVs). Pedestrian behaviour is complex and harder to model and predict than other road users such as drivers and cyclists. In this paper, we present an overview of our ongoing work on modelling AV-human interactions using game theory for autonomous vehicles control.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128923396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Human-Chatbot Interaction: A Virtual Assistant for the Ramp-up Process","authors":"Melanie Zimmer, Ali Al-Yacoub, P. Ferreira, N. Lohse","doi":"10.31256/QX5DT5V","DOIUrl":"https://doi.org/10.31256/QX5DT5V","url":null,"abstract":"Nowadays, we are surrounded by virtual assistants in everyday life. But one domain that is assumed to massively benefit from virtual assistants, is manufacturing. In particular, where activities are reliant on human expertise and knowledge, a virtual assistant could help support the human. The vision of this work is inspired by the need for bringing an assembly system more rapidly to an operational state. To achieve this vision, a decision-support framework that aims to better integrate the human operator into the ramp-up activity is proposed. As part of this framework, natural language processing tools are applied to allow the development of a virtual assistant for the ramp-up process. This paper provides an overview of the current work in progress, which is part of a PhD research undertaken at the Intelligent Automation Centre at Loughborough University. It outlines the initial efforts and future steps that have been completed and are planned.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121371255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feasibility Study of In-Field Phenotypic Trait Extraction for Robotic Soft-Fruit Operations","authors":"Raymond Kirk, M. Mangan, Grzegorz Cielniak","doi":"10.31256/uk4td6i","DOIUrl":"https://doi.org/10.31256/uk4td6i","url":null,"abstract":"There are many agricultural applications that would benefit from robotic monitoring of soft-fruit, examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable digitisation of horticultural processes in-field reducing labour demand and increasing efficiency through con- tinuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive/invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision- based modalities for precise, cheap, and real time computation of phenotypic traits: mass and volume of strawberries from planar RGB slices and optionally point data. Our best method achieves a marginal error of 3.00cm3 for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.","PeriodicalId":393014,"journal":{"name":"UKRAS20 Conference: \"Robots into the real world\" Proceedings","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125112453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}