{"title":"Upper limb motion intent recognition using tactile sensing","authors":"Thekla Stefanou, Allie J. Turton, A. Lenz, S. Dogramadzi","doi":"10.1109/IROS.2017.8206573","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206573","url":null,"abstract":"Focusing on upper limb rehabilitation of weak stroke patients, this pilot study explores how motion intent can be detected using force-sensitive resistors (FSRs). This is part of a larger project which will see the actuation and control of an intent-driven exoskeleton. The limited time stroke survivors have with their therapists means that they often cannot get enough training. During active-assisted training, therapists guide the paralysed limb through a movement only after detecting visual or haptic cues of the motion intent from the patient. Aiming to replicate therapist practices of recognising patients' intention to move, a pilot study of a tactile system is performed. The system will perform consistently even with patients who have low muscle strength and control ability. Currently available devices for detecting muscle activity do not offer the necessary robustness and performance; Electromyography (EMG) sensors, a well-established method, are affected by factors such as skin moisture, and BCIs (Brain-Computer Interfaces) have slow response times. The proposed tactile sensing system is a simple yet robust solution from both a sensing and a usability point of view. Pilot experiments have been performed with a healthy subject emulating low muscle activation conditions.
An overall accuracy of 80.45% is achieved when detecting forearm and arm muscle contractions and hence motion intent.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"97 1","pages":"6601-6608"},"PeriodicalIF":0.0,"publicationDate":"2017-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81735912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Soft fluidic rotary actuator with improved actuation properties","authors":"J. Fras, Y. Noh, H. Wurdemann, K. Althoefer","doi":"10.1109/IROS.2017.8206448","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206448","url":null,"abstract":"The constantly increasing number of machines operating in the vicinity of humans makes it necessary to rethink the design approach for such machines to ensure that they are safe when interacting with humans. Traditional mechanisms are rigid and heavy and as such considered unsuitable, even dangerous, when controlled physical contact with humans is desired. A huge improvement in terms of safe human-robot interaction has been achieved by a radically new approach to robotics — soft material robotics. These new robots are made of compliant materials that render them safe when compared to conventional rigid-link robots. This undeniable advantage of compliance and softness is paired with a number of drawbacks. One of them is that a complex and sophisticated controller is required to move a soft robot into the desired positions or along a desired trajectory, especially when external forces are present. In this paper we propose an improved soft fluidic rotary actuator composed of silicone rubber and fiber-based reinforcement. The actuator is cheap and easy to manufacture, and provides near-linear actuation properties compared to pneumatic actuators presented elsewhere. The paper presents the actuator design, the manufacturing process and a mathematical model of the actuator behavior, as well as an experimental validation of the model.
Four different actuator types are compared including a square-shaped and three differently reinforced cylindrical actuators.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"32 1","pages":"5610-5615"},"PeriodicalIF":0.0,"publicationDate":"2017-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87029503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct visual SLAM fusing proprioception for a humanoid robot","authors":"Raluca Scona, S. Nobili, Y. Pétillot, M. Fallon","doi":"10.1109/IROS.2017.8205943","DOIUrl":"https://doi.org/10.1109/IROS.2017.8205943","url":null,"abstract":"In this paper we investigate the application of semi-dense visual Simultaneous Localisation and Mapping (SLAM) to the humanoid robotics domain. Challenges of visual SLAM applied to humanoids include the type of dynamic motion executed by the robot, a lack of features in man-made environments and the presence of dynamics in the scene. Previous research on humanoid SLAM focused mostly on feature-based methods which result in sparse environment reconstructions. Instead, we investigate the application of a modern direct method to obtain a semi-dense visually interpretable map which can be used for collision free motion planning. We tackle the challenge of using direct visual SLAM on a humanoid by proposing a more robust pose tracking method. This is formulated as an optimisation problem over a cost function which combines information from the stereo camera and a low-drift kinematic-inertial motion prior. Extensive experimental demonstrations characterise the performance of our method using the NASA Valkyrie humanoid robot in a laboratory environment equipped with a Vicon motion capture system. Our experiments demonstrate pose tracking robustness to challenges such as sudden view change, motion blur in the image, change in illumination and tracking through sequences of featureless areas in the environment. 
Finally, we provide a qualitative evaluation of our stereo reconstruction against a LIDAR map.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"13 1","pages":"1419-1426"},"PeriodicalIF":0.0,"publicationDate":"2017-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78407116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive perception: Learning from sensory predictions to extract object shape with a biomimetic fingertip","authors":"Uriel Martinez-Hernandez, T. Prescott","doi":"10.1109/IROS.2017.8206590","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206590","url":null,"abstract":"In this work, we present an adaptive perception method to improve the accuracy and speed of a tactile exploration task. This work extends our previous studies on sensorimotor control strategies for active tactile perception in robotics. First, we present the active Bayesian perception method, which actively repositions a robot to accumulate evidence from better locations and reduce uncertainty. Second, we describe the adaptive perception method that, based on a forward model and a predicted information gain approach, allows the robot to analyse 'what would have happened' had a different decision been made at a previous decision time. This approach makes it possible to adapt the active Bayesian perception process to improve the accuracy and reaction time of an exploration task. Our methods are validated with a contour-following exploratory procedure with a touch sensor.
The results show that the adaptive perception method allows the robot to make sensory predictions and autonomously adapt, improving the performance of the exploration task.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"84 1","pages":"6735-6740"},"PeriodicalIF":0.0,"publicationDate":"2017-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91380193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Underwater 3D structures as semantic landmarks in SONAR mapping","authors":"Thomas Guerneve, K. Subr, Y. Pétillot","doi":"10.1109/IROS.2017.8202215","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202215","url":null,"abstract":"SONAR mapping of underwater environments leads to dense point-clouds. These maps have large memory footprints, are inherently noisy and consist of raw data with no semantic information. This paper presents an approach to underwater semantic mapping where known man-made structures that appear in multibeam SONAR data are automatically recognised. From a set of SONAR images acquired by an Autonomous Underwater Vehicle (AUV) and a catalogue of 'a-priori' 3D CAD models of structures that may potentially be found in the data, our algorithm proceeds in two phases. First, we recognise objects using an efficient, rotation-invariant 2D descriptor combined with a histogram matching method. Then, we determine pose using a 6 degree-of-freedom registration of the 3D object to the local scene using a fast 2D correlation, refined with an iterative closest point (ICP)-based method. Once the structures are located and identified, we build a semantic representation of the world based on the initial CAD models, resulting in a lightweight yet accurate world model. We demonstrate the applicability of our method on field data acquired by an AUV in Loch Linnhe, Scotland.
Our method proves to be suitable for online semantic mapping of a partially man-made underwater environment such as a typical oil field.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"19 1","pages":"614-619"},"PeriodicalIF":0.0,"publicationDate":"2017-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88960412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear velocity from commotion motion","authors":"Wenbo Dong, Volkan Isler","doi":"10.1109/IROS.2017.8206187","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206187","url":null,"abstract":"Most Unmanned Aerial Vehicle (UAV) controllers require linear velocities as input. An effective method to obtain linear velocity is to place a downward facing camera and to estimate the velocity from the optical flow. However, this technique fails in outdoor environments when the ground is covered with grass or other objects which move due to winds such as those caused by the propellers. We present a novel method to estimate the linear velocities from stereo images even in the presence of disorderly motion of image features. We validate the approach using imagery obtained from a UAV flying through orchard rows.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"11 1","pages":"3467-3472"},"PeriodicalIF":0.0,"publicationDate":"2017-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77141806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Insertion method for minimizing fine needle deflection in bowel insertion based on experimental analysis","authors":"R. Tsumura, K. Shitashima, H. Iwata","doi":"10.1109/IROS.2017.8202156","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202156","url":null,"abstract":"Accurate insertion of fine needles is difficult due to needle deflection. Needle deflection in the lower abdomen is particularly complex, as the needle has to pass through various tissues. As the area of the bowel is dominant in lower abdominal insertion, it is important to analyze the deflection during bowel insertion and to control the needle to minimize deflection. Few studies have focused on bowel insertion. We performed a fundamental deflection analysis of needle insertion in the bowel. Moreover, we propose an insertion method for minimizing needle deflection based on our analysis. First, we performed needle insertion at various insertion positions and insertion angles into the hollow-shaped bowel, and determined the trend of the needle deflection during bowel insertion. The results revealed that needle deflection increased with insertion angle, and therefore the insertion angle should be minimized as much as possible. However, during actual bowel needle insertion, there are many situations in which the ideal path cannot be selected due to the arrangement of the bowel loops. We propose an insertion method that can eliminate the total needle deflection during bowel insertion by controlling the needle tip direction as the needle breaches each bowel wall. The results suggest that needle deflection can be minimized by selecting the insertion path in which the sum of the insertion angles is zero.
We verified the results of this insertion method in multiple bowels in in vivo experiments, and showed that it has the potential to be used in clinical practice.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"187-192"},"PeriodicalIF":0.0,"publicationDate":"2017-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78532560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive trajectory tracking control for the ball-pendulum system with time-varying uncertainties","authors":"Yang Bai, M. Svinin, Motoji Yamamoto","doi":"10.1109/IROS.2017.8206026","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206026","url":null,"abstract":"An adaptive trajectory tracking problem for a spherical rolling robot driven by a 2DOF pendulum is considered in this paper. A feedback controller is proposed for the goal of tracking the trajectory for the full configuration of the spherical robot. To deal with time-varying uncertainty of the system dynamics, an adaptation mechanism is included in the trajectory tracking controller by parameterizing the system uncertainty with a polynomial function, weighted by unknown constant parameters adjustable by the defined update law. The constructed controller is first tested for the planar hoop-pendulum and then applied to the ball-pendulum system. The convergence to the desired trajectories under the control law is proved and then verified by simulations for tracking circular motions under perturbation.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"30 1","pages":"2083-2090"},"PeriodicalIF":0.0,"publicationDate":"2017-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76441317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards MRI-guided and actuated tetherless milli-robots: Preoperative planning and modeling of control","authors":"Thibault Kensicher, J. Leclerc, Daniel Biediger, D. Shah, I. Seimenis, Aaron T. Becker, N. Tsekos","doi":"10.1109/IROS.2017.8206550","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206550","url":null,"abstract":"Image-guided and robot-assisted surgical procedures are rapidly evolving due to their potential to improve patient management and cost effectiveness. Magnetic Resonance Imaging (MRI) is used for pre-operative planning and is also investigated for real-time intra-operative guidance. A new type of technology is emerging that uses the magnetic field gradients of the MR scanner to maneuver ferromagnetic agents for local delivery of therapeutics. With this approach, MRI acts both as a sensor and as part of a closed-loop controlled entity that behaves as a robot (we refer to these as MRbots). The objective of this paper is to introduce a computational framework for preoperative planning using MRI and modeling of MRbot maneuvering inside tortuous blood vessels. This platform generates a virtual corridor that represents a safety zone inside the vessel, which is then used to assess the safety of the MRbot maneuvering. In addition, to improve safety we introduce a controller that sets speed based on the local curvature of the vessel. The functionality of the framework was then tested on a realistic operational scenario of accessing a neurological lesion, a meningioma. This virtual case study demonstrated the functionality and potential of MRbots, and also revealed two primary challenges: real-time MRI (during propulsion) and the need for very strong gradients to maneuver small MRbots inside narrow cerebral vessels.
Our ongoing research focuses on further developing the computational core, MR tracking methods, and on-line interfacing to the MR scanner.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"78 1","pages":"6440-6447"},"PeriodicalIF":0.0,"publicationDate":"2017-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83236778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of microphone-array-embedded UAV for search and rescue task","authors":"K. Nakadai, M. Kumon, Hiroshi G. Okuno, Kotaro Hoshiba, Mizuho Wakabayashi, Kai Washizaki, Takahiro Ishiki, D. Gabriel, Yoshiaki Bando, Takayuki Morito, Ryosuke Kojima, Osamu Sugiyama","doi":"10.1109/IROS.2017.8206494","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206494","url":null,"abstract":"This paper addresses online outdoor sound source localization using a microphone array embedded in an unmanned aerial vehicle (UAV). In addition to sound source localization, sound source enhancement and a robust communication method are also described. This system is one deployment of our continuously developed open-source software for robot audition called HARK (Honda Research Institute Japan Audition for Robots with Kyoto University). To improve robustness against outdoor acoustic noise, we propose combining two sound source localization methods based on MUSIC (multiple signal classification) to cope with the trade-off between latency and noise robustness. The standard eigenvalue decomposition based MUSIC (SEVD-MUSIC) has smaller latency but less noise robustness, whereas the incremental generalized singular value decomposition based MUSIC (iGSVD-MUSIC) has higher noise robustness but larger latency. A UAV operator can use the appropriate method according to the situation. A sound enhancement method called online robust principal component analysis (ORPCA) enables the operator to detect a target sound source more easily. To improve the stability of wireless communication and the robustness of the UAV system against weather changes, we developed data compression based on the free lossless audio codec (FLAC), extended to support a 16-channel audio data stream via UDP, and developed a water-resistant microphone array.
The resulting system successfully worked in an outdoor search and rescue task in ImPACT Tough Robotics Challenge in November 2016.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"5985-5990"},"PeriodicalIF":0.0,"publicationDate":"2017-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89545308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}