{"title":"Magnetic laser scanner for endoscopic microsurgery","authors":"Alperen Acemoglu, L. Mattos","doi":"10.1109/ICRA.2017.7989485","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989485","url":null,"abstract":"Scanning lasers increase the quality of the laser microsurgery enabling fast tissue ablation with less thermal damage. However, the possibility to perform scanning laser microsurgery in confined workspaces is restricted by the large size of currently available actuators, which are typically located outside the patient and require direct line-of-sight to the microsurgical area. Here, a magnetic scanner tool is designed to allow endoscopic scanning laser microsurgery. The tool consists of two miniature electromagnetic coil pairs and permanent magnets attached to a flexible optical fiber. The actuation mechanism is based on the interaction between the electromagnetic field and the permanent magnets. Controlled and high-speed laser scanning is achieved by bending of the optical fiber with magnetic torque. Results demonstrate the achievement of a 3×3 mm2 scanning range within the laser spot is controlled with 35μm precision. The system is also capable of automatically executing high-speed laser scanning operations over customized trajectories with a root-mean-squared-error (RMSE) in the order of 75μm. Furthermore, it can be teleoperated in real-time using any appropriate user interface device. This new technology enables laser scanning in narrow and difficult to reach workspaces, promising to bring the benefits of scanning laser microsurgery to laparoscopic or even flexible endoscopic procedures. In addition, the same technology can be potentially used for optical fiber based imaging, enabling for example the creation of new family of scanning endoscopic OCT or hyperspectral probes.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"127 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116373138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overlap-based ICP tuning for robust localization of a humanoid robot","authors":"S. Nobili, Raluca Scona, Marco Caravagna, M. Fallon","doi":"10.1109/ICRA.2017.7989547","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989547","url":null,"abstract":"State estimation techniques for humanoid robots are typically based on proprioceptive sensing and accumulate drift over time. This drift can be corrected using exteroceptive sensors such as laser scanners via a scene registration procedure. For this procedure the common assumption of high point cloud overlap is violated when the scenario and the robot's point-of-view are not static and the sensor's field-of-view (FOV) is limited. In this paper we focus on the localization of a robot with limited FOV in a semi-structured environment. We analyze the effect of overlap variations on registration performance and demonstrate that where overlap varies, outlier filtering needs to be tuned accordingly. We define a novel parameter which gives a measure of this overlap. In this context, we propose a strategy for robust non-incremental registration. The pre-filtering module selects planar macro-features from the input clouds, discarding clutter. Outlier filtering is automatically tuned at run-time to allow registration to a common reference in conditions of non-uniform overlap. An extensive experimental demonstration is presented which characterizes the performance of the algorithm using two humanoids: the NASA Valkyrie, in a laboratory environment, and the Boston Dynamics Atlas, during the DARPA Robotics Challenge Finals.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125171011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application-oriented design space exploration for SLAM algorithms","authors":"Sajad Saeedi, Luigi Nardi, Edward Johns, Bruno Bodin, P. Kelly, A. Davison","doi":"10.1109/ICRA.2017.7989673","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989673","url":null,"abstract":"In visual SLAM, there are many software and hardware parameters, such as algorithmic thresholds and GPU frequency, that need to be tuned; however, this tuning should also take into account the structure and motion of the camera. In this paper, we determine the complexity of the structure and motion with a few parameters calculated using information theory. Depending on this complexity and the desired performance metrics, suitable parameters are explored and determined. Additionally, based on the proposed structure and motion parameters, several applications are presented, including a novel active SLAM approach which guides the camera in such a way that the SLAM algorithm achieves the desired performance metrics. Real-world and simulated experimental results demonstrate the effectiveness of the proposed design space and its applications.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124145604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit gaze-assisted adaptive motion scaling for highly articulated instrument manipulation","authors":"Gauthier Gras, K. Leibrandt, Piyamate Wisanuvej, P. Giataganas, C. Seneci, Menglong Ye, Jianzhong Shang, Guang-Zhong Yang","doi":"10.1109/ICRA.2017.7989488","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989488","url":null,"abstract":"Traditional robotic surgical systems rely entirely on robotic arms to triangulate articulated instruments inside the human anatomy. This configuration can be ill-suited for working in tight spaces or during single access approaches, where little to no triangulation between the instrument shafts is possible. The control of these instruments is further obstructed by ergonomic issues: The presence of motion scaling imposes the use of clutching mechanics to avoid the workspace limitations of master devices, and forces the user to choose between slow, precise movements, or fast, less accurate ones. This paper presents a bi-manual system using novel self-triangulating 6-degrees-of-freedom (DoF) tools through a flexible elbow, which are mounted on robotic arms. The control scheme for the resulting 9-DoF system is detailed, with particular emphasis placed on retaining maximum dexterity close to joint limits. Furthermore, this paper introduces the concept of gaze-assisted adaptive motion scaling. By combining eye tracking with hand motion and instrument information, the system is capable of inferring the user's destination and modifying the motion scaling accordingly. This safe, novel approach allows the user to quickly reach distant locations while retaining full precision for delicate manoeuvres. The performance and usability of this adaptive motion scaling is evaluated in a user study, showing a clear improvement in task completion speed and in the reduction of the need for clutching.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128740597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Room layout estimation from rapid omnidirectional exploration","authors":"Robert Lukierski, Stefan Leutenegger, A. Davison","doi":"10.1109/ICRA.2017.7989747","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989747","url":null,"abstract":"A new generation of practical, low-cost indoor robots is now using wide-angle cameras to aid navigation, but usually this is limited to position estimation via sparse feature-based SLAM. Such robots usually have little global sense of the dimensions, demarcation or identities of the rooms they are in, information which would be very useful to enable behaviour with much more high level intelligence. In this paper we show that we can augment an omni-directional SLAM pipeline with straightforward dense stereo estimation and simple and robust room model fitting to obtain rapid and reliable estimation of the global shape of typical rooms from short robot motions. We have tested our method extensively in real homes, offices and on synthetic data. We also give examples of how our method can extend to making composite maps of larger rooms, and detecting room transitions.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132192217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"O-POCO: Online point cloud compression mapping for visual odometry and SLAM","authors":"Luis Contreras, W. Mayol-Cuevas","doi":"10.1109/ICRA.2017.7989523","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989523","url":null,"abstract":"This paper presents O-POCO, a visual odometry and SLAM system that makes online decisions regarding what to map and what to ignore. It takes a point cloud from classical SfM and aims to sample it on-line by selecting map features useful for future 6D relocalisation. We use the camera's traveled trajectory to compartamentalize the point cloud, along with visual and spatial information to sample and compress the map. We propose and evaluate a number of different information layers such as the descriptor information's relative entropy, map-feature occupancy grid, and the point cloud's geometry error. We compare our proposed system against both SfM, and online and offline ORB-SLAM using publicly available datasets in addition to our own. Results show that our online compression strategy is capable of outperforming the baseline even for conditions when the number of features per key-frame used for mapping is four times less.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121369940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compression of topological models and localization using the global appearance of visual information","authors":"L. Payá, W. Mayol, Sergio Cebollada, Ó. Reinoso","doi":"10.1109/ICRA.2017.7989661","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989661","url":null,"abstract":"In this work, a clustering approach to obtain compact topological models of an environment is developed and evaluated. The usefulness of these models is tested by studying their utility to solve the robot localization problem subsequently. Omnidirectional visual information and global appearance descriptors are used both to create and compress the models and to estimate the position of the robot. Comparing to the methods based on the extraction and description of landmarks, global appearance approaches permit building models that can be handled and interpreted more intuitively and using relatively straightforward algorithms to estimate the position of the robot. The proposed algorithms are tested with a set of panoramic images captured with a catadioptric vision sensor in a large environment under real working conditions. The results show that it is possible to compress substantially the visual information contained in topological models to arrive to a balance between the computational cost and the accuracy of the localization process.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127372000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-eye model-based gaze estimation from a Kinect sensor","authors":"Xiaolong Zhou, Haibin Cai, Youfu Li, Honghai Liu","doi":"10.1109/ICRA.2017.7989194","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989194","url":null,"abstract":"In this paper, we present an effective and accurate gaze estimation method based on two-eye model of a subject with the tolerance of free head movement from a Kinect sensor. To accurately and efficiently determine the point of gaze, i) we employ two-eye model to improve the estimation accuracy; ii) we propose an improved convolution-based means of gradients method to localize the iris center in 3D space; iii) we present a new personal calibration method that only needs one calibration point. The method approximates the visual axis as a line from the iris center to the gaze point to determine the eyeball centers and the Kappa angles. The final point of gaze can be calculated by using the calibrated personal eye parameters. We experimentally evaluate the proposed gaze estimation method on eleven subjects. Experimental results demonstrate that our gaze estimation method has an average estimation accuracy around 1.99°, which outperforms many leading methods in the state-of-the-art.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1997 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123551058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning task constraints in operational space formulation","authors":"Hsiu-Chin Lin, Prabhakar Ray, M. Howard","doi":"10.1109/ICRA.2017.7989039","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989039","url":null,"abstract":"Many human skills can be described in terms of performing a set of prioritised tasks. While a number of tools have become available that recover the underlying control policy from constrained movements, few have explicitly considered learning how constraints should be imposed in order to perform the control policy. In this paper, a method for learning the self-imposed constraints present in movement observations is proposed. The problem is formulated into the operational space control framework, where the goal is to estimate the constraint matrix and its null space projection that decompose the task space and any redundant degrees of freedom. The proposed method requires no prior knowledge about either the dimensionality of the constraints nor the underlying control policies. The techniques are evaluated on a simulated three degree-of-freedom arm and on the AR10 humanoid hand.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130617531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual stability prediction for robotic manipulation","authors":"Wenbin Li, A. Leonardis, Mario Fritz","doi":"10.1109/ICRA.2017.7989304","DOIUrl":"https://doi.org/10.1109/ICRA.2017.7989304","url":null,"abstract":"Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way — bypassing the need for an explicit simulation at run-time. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. We first evaluate the approach on synthetic data and compared the results to human judgments on the same stimuli. Further, we extend this approach to reason about future states of such towers that in return enables successful stacking.","PeriodicalId":195122,"journal":{"name":"2017 IEEE International Conference on Robotics and Automation (ICRA)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123983772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}