{"title":"Omnidirectional stereovision system for occupancy grid","authors":"F. Corrêa, J. Okamoto","doi":"10.1109/ICAR.2005.1507474","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507474","url":null,"abstract":"Vision systems provide a large amount of data to robots about their environment. In particular, an omnidirectional vision system provides information in all directions in a single image. By processing a pair of omnidirectional images it is possible to obtain distances between the robot and the objects in its working environment. Using only an omnidirectional stereovision system as the source of information to create a stochastic representation of the environment, known as an occupancy grid, a robot can determine the probability of occupation of the space and navigate autonomously. This article presents a stereo algorithm with matching restricted by the geometrical properties of the vision system, operating directly on the omnidirectional image, together with a model of that sensor to update an occupancy grid","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132517508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Torque distribution and slip minimization in an omnidirectional mobile base","authors":"Yuan Ping Li, D. Oetomo, M. Ang, Chee Wang Lim","doi":"10.1109/ICAR.2005.1507465","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507465","url":null,"abstract":"Two forward kinematic models used in the control of an omnidirectional mobile base are evaluated. These two models result in different sensitivities to joint position error. The analysis and experimental results in this paper demonstrate the capability of the dynamic model to improve the sensitivity of the forward kinematic model, resulting in a more even distribution of joint torques and minimizing the amount of slip between wheels","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134261183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic intercept and manipulation of objects using a novel pneumatic robot hand","authors":"Yanfei Liu, A. Hoover, I. Walker","doi":"10.1109/ICAR.2005.1507402","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507402","url":null,"abstract":"Most current dexterous robot hands are expensive and heavy, which limits the possibility of mounting them on industrial robots, since the maximum payload an industrial robot can carry is small. Additionally, current robot hands are not very adaptable. Most grasping research does not include vision; the few works that incorporate visual sensing concentrate on stationary objects. In this work, we present a novel pneumatic three-finger hand and use our earlier work in model-based dynamic intercept to allow the hand to grasp semi-randomly moving objects. Experimental results show that this novel three-finger hand is simple, light and effective","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134538219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On a primitive skill-based supervisory robot control architecture","authors":"G. Milighetti, H. Kuntze, C. Frey, B. Diestel-Feddersen, J. Balzer","doi":"10.1109/ICAR.2005.1507404","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507404","url":null,"abstract":"Smart interaction of humanoid robots in complex public, private or industrial environments requires the introduction of primitive skill-based discrete-continuous supervisory control concepts. The proposed hierarchical robot supervisory control architecture captures both the hierarchy required for representing complex skills and the mechanisms for detecting failures during their execution. First, the actual motion phase or fault event is continuously diagnosed by means of several complementary (e.g. internal, optical, tactile or acoustic) sensors and neuro-fuzzy-based fusion of the relevant sensor features. Depending on the identified motion phase or random fault event, the discrete-continuous control strategy that best copes with the corresponding situation is selected and executed. First experimental and simulation results are reported in this paper","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115238083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotics and autonomous technology for asteroid sample return mission","authors":"T. Kubota, S. Sawai, T. Hashimoto, J. Kawaguchi","doi":"10.1109/ICAR.2005.1507387","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507387","url":null,"abstract":"The MUSES-C mission is the world's first attempt to return a sample from a near-Earth asteroid. In deep space, it is hard to navigate, guide, and control a spacecraft in real time remotely from the Earth, mainly due to the communication delay, so autonomy is required for the final approach and landing on an unknown body. It is important to navigate and guide a spacecraft to the landing point without hitting rocks or big stones. In the final descent phase, cancellation of the horizontal speed relative to the surface of the landing site is essential. This paper describes the various robotics technologies applied to the MUSES-C mission. A global mapping method, an autonomous descent scheme, a novel sample-collection method, and an asteroid exploration robot are proposed and presented in detail. The validity and effectiveness of the proposed methods are confirmed and evaluated by numerical simulations and experiments","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114523370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fault detection of tool/load grasping for telerobotics using neural networks","authors":"Sewoong Kim, W. Hamel","doi":"10.1109/ICAR.2005.1507508","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507508","url":null,"abstract":"For the safe and reliable execution of tasks, the tool grasping conditions of the manipulator must be checked in real time to determine whether the tool has been grasped in the desired manner. Especially in the case of telerobotics, grasping errors are critical to the completion of tasks since the human operator cannot access the hazardous and remote work environment. This paper proposes a time-delayed neural network to identify the load of manipulators in real time. The developed scheme is applied to a two-link manipulator, and the simulation results show the feasibility of the approach for grasping fault detection","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116821077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention shifts during action sequence recognition for social robots","authors":"B. Khadhouri, Y. Demiris","doi":"10.1109/ICAR.2005.1507451","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507451","url":null,"abstract":"Human action understanding is an important component of our research towards social robots that can operate among humans. A crucial element of this component is visual attention - where should a robot direct its limited visual and computational resources during the perception of a human action? In this paper, we propose a computational model of an attention mechanism that combines the saliency of top-down elements, based on multiple hypotheses about the demonstrated action, with the saliency of bottom-up components. We implement our attention mechanism on a robot, and examine its performance during the observation of object-directed human actions. Furthermore, we propose a method for resetting this model that allows it to work on multiple behaviours observed in a sequence. We also implement and investigate this method's performance on the robot","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129655391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-localization through color features detection","authors":"M. Castelnovi, A. Sgorbissa, R. Zaccaria","doi":"10.1109/ICAR.2005.1507421","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507421","url":null,"abstract":"Self-localization plays a fundamental role in all the activities of a service mobile robot, from simple point-to-point navigation to complex fetch-and-carry tasks. In particular, in the presence of an environment which changes dynamically, a trade-off must be found between apparently opposite characteristics: uniqueness (i.e. the ability to univocally recognize every location in the environment) and ductility (i.e. the ability to recognize a location of the environment in spite of small changes). The paper shows a vision-based approach which exploits color analysis and clustering to match perceptions with a pre-stored model of the environment, and relies on a Markovian model to update a probability density over the robot's possible configurations","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127317724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RoboTalk: controlling arms, bases and androids through a single motion interface","authors":"A.Y. Yang, H. González-Baños, V. Ng-Thow-Hing, J. Davis","doi":"10.1109/ICAR.2005.1507425","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507425","url":null,"abstract":"Despite several successful humanoid robot projects from both industry and academia, generic motion interfaces for higher-level applications are still absent. Direct robot driver access proves to be either very difficult due to the complexity of humanoid robots, very unstable due to constant robot hardware upgrade and re-design, or inaccessible due to proprietary software and hardware. Motion interfaces do exist, but these are either hardware-specific designs, or generic interfaces that support very simple robots (non-humanoids). Thus, this paper introduces RoboTalk, a new motion interface for controlling robots. From the ground up, our design model considers three factors: mechanism-independence to abstract the hardware from higher-level applications, a versatile network support mechanism to enable both remote and local motion control, and an easy-to-manage driver interface to facilitate the incorporation of features by hardware developers. The interface is based on a motion specification that supports a wide range of robotic mechanisms, from mobile bases such as a Pioneer 2 to humanoid robots. The specification allows us to construct interfaces from basic blocks, such as wheeled bases, robot arms and legs. We have tested and implemented our approach on the Honda ASIMO robot and a Pioneer 2 mobile robot","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130013686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D and 2D finite element analysis in soft tissue cutting for haptic display","authors":"T. Chanthasopsephan, J. Desai, A. Lau","doi":"10.1109/ICAR.2005.1507436","DOIUrl":"https://doi.org/10.1109/ICAR.2005.1507436","url":null,"abstract":"Real-time medical simulation for robotic surgery planning and surgery training requires realistic yet computationally fast models of the mechanical behavior of soft tissue. This paper presents a study to develop such a model to enable fast haptics display in simulation of soft-tissue cutting. An apparatus was developed and experiments were conducted to generate force-displacement data for cutting of soft tissue such as pig liver. The force-displacement curve of cutting pig liver revealed a characteristic pattern: the overall curve is formed by repeating units consisting of a local deformation segment followed by a local crack-growth segment. The modeling effort reported here focused on characterizing the tissue in the local deformation segment in a way suitable for fast haptic display. The deformation resistance of the tissue was quantified in terms of the local effective modulus (LEM) consistent with experimental force-displacement data. An algorithm was developed to determine LEM by solving an inverse problem with iterative finite element models. To enable faster simulation of cutting of a three-dimensional (3D) liver specimen of naturally varying thickness, three levels of model order reduction were studied. Firstly, a 3D quadratic-element model reduced to uniform thickness but otherwise haptics-equivalent (have identical force-displacement feedback) to a 3D model with varying thickness matching that of the liver was used. Next, haptics-equivalent 2D quadratic-element models were used. Finally, haptics-equivalent 2D linear-element models were used. These three models had a model reduction in the ratio of 1.0:0.3:0.04 but all preserved the same input-output (displacement, force) behavior measured in the experiments. The values of the LEM determined using the three levels of model reduction were close to one another. Additionally, the variation of the LEM with cutting speed was determined. The values of LEM decreased as the cutting speed increased","PeriodicalId":428475,"journal":{"name":"ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130025085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}