{"title":"Spacecraft attitude estimation with the aid of Locally Linear Neurofuzzy models and multi sensor data fusion approaches","authors":"M. Mirmomeni, K. Rahmani, C. Lucas","doi":"10.1109/ICARA.2000.4803983","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4803983","url":null,"abstract":"In this paper, Locally Linear Neurofuzzy (LLNF) models combined with a data fusion approach are used to solve the spacecraft attitude estimation problem based on magnetometer and sun-sensor observations. The LLNF model trained with the Locally Linear Model Tree (LoLiMoT) incremental learning algorithm is a well-established method for nonlinear system identification and estimation. The efficiency of the LLNF estimator is verified through numerical simulation of a fully actuated rigid body with three sun sensors and a three-axis magnetometer (TAM). For comparison, the Kalman filter (KF), a well-known method in spacecraft attitude estimation, as well as MLP and RBF neural networks, are used to evaluate the performance of the LLNF. The results presented in this paper clearly demonstrate that the LLNF is superior to the other methods in coping with the nonlinear model.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131867710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotional control of inverted pendulum system: A soft switching from imitative to emotional learning","authors":"M. J. Roshtkhari, A. Arami, C. Lucas","doi":"10.1109/ICARA.2000.4803996","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4803996","url":null,"abstract":"Model-free control of unidentified systems with unstable equilibria poses serious problems. To surmount these difficulties, an existing model-based controller is first used as a mentor for the emotional-learning controller. This learning phase trains the controller to behave like the mentor while preventing any instability. Next, the controller is softly switched from the model-based controller to the emotional one using a fuzzy inference system (FIS). The emotional stress is likewise softly switched from the mentor-imitator output difference to a combination of objectives generated by a FIS that attentionally modulates the stresses. To evaluate the proposed model-free controller, a laboratory inverted pendulum is employed.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123471845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Localization of a high-speed mobile robot using global features","authors":"Seungkeun Cho, Jangmyung Lee","doi":"10.1017/S0263574710000536","DOIUrl":"https://doi.org/10.1017/S0263574710000536","url":null,"abstract":"This paper proposes a new localization algorithm for a fast-moving mobile robot, which utilizes only one beacon and the global features of a differential-driving mobile robot. It takes a relatively long time to localize a mobile robot with active beacon sensors, since the distance to the beacon is measured from the flight time of the ultrasonic signal. When the mobile robot is moving slowly, the measurement time does not cause a large error. At higher speeds, however, the localization error becomes too large to use for mobile robot navigation. Therefore, for high-speed operation, instead of using two or more active beacons for localization, this research uses a single active beacon together with the global features of the mobile robot. The global features of mobile robots are as follows: (1) the speed of the mobile robot does not change rapidly, and (2) the curvature of the mobile robot's motion is instantaneously constant. This new approach resolves the large localization error caused by the speed of the mobile robot. The performance of the new localization algorithm has been verified in experiments with a high-speed mobile robot.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122078612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ELA+: Goal-oriented navigation with obstacle avoidance for rescue robots","authors":"H. Jeong, K. Hyun, Y. Kwak","doi":"10.1109/ICARA.2000.4803911","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4803911","url":null,"abstract":"This paper describes the development and demonstration of an obstacle avoidance algorithm termed ELA+ that is adaptable to rescue robots. ELA+ consists of two routines: one related to goal-oriented intention and the other to standing rotation for steering. In ELA+, autonomous navigation based on ELA (Emergency Level Around) proceeds until a goal is reached, and goal-oriented navigation with ELA then follows. The tested scenario was assumed to be similar to an actual disaster situation, and ELA+ was shown to be able to avoid obstacles located in a 2D virtual space. Simulation results show that ELA+ is able to guide a robot successfully to a goal using only bearing information, even when the distance to the goal and localization information are unavailable.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125018675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualization of clusters in very large rectangular dissimilarity data","authors":"L. Park, J. Bezdek, C. Leckie","doi":"10.1109/ICARA.2000.4803948","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4803948","url":null,"abstract":"D is an m×n matrix of pairwise dissimilarities between m row objects O_r and n column objects O_c, which, taken together, comprise m+n objects O = {o_1,…,o_m,o_{m+1},…,o_{m+n}}. There are four clustering problems associated with O: (P1) amongst the row objects O_r; (P2) amongst the column objects O_c; (P3) amongst the union of the row and column objects O = O_r ∪ O_c; and (P4) amongst the union of the row and column objects that contain at least one object of each type (co-clusters). The coVAT algorithm, which builds images for visual assessment of clustering tendency for these problems, is limited to m×n ≈ O(10^4×10^4). We develop a scalable version of coVAT that approximates coVAT images when D is very large. Two examples are given to illustrate and evaluate the new method.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127071100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic automation system for steel beam assembly in building construction","authors":"Baeksuk Chu, Kyungmo Jung, Youngsu Chu, D. Hong, M. Lim, Shinsuk Park, Yongkwun Lee, Sung-Uk Lee, Min Chul Kim, K. Ko","doi":"10.1109/ICARA.2000.4803937","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4803937","url":null,"abstract":"In building construction, steel beam assembly is considered one of the most dangerous manual operations. This paper discusses applying robotic technologies to the steel beam assembly task. Employing robotic systems to automate such construction tasks offers the following advantages: saving construction time and cost, enhancing operator safety, and improving overall quality. The automated robotic assembly system presented in this paper consists of a robotic bolting device, a robotic mobile mechanism, and a bolting control system including a human-machine interface. The robotic bolting device, which includes a bolting end-effector and a robotic manipulator, performs the actual bolting operation. Using the robotic mobile mechanism, composed of a rail-sliding boom mechanism and a scissors-jack-type mobile manipulator, the robotic assembly system can be transported to a target position for the bolting operation. The bolting control system safely and efficiently operates the robotic assembly system, employing a hole recognition system based on vision technology and a haptic-based HMI (Human-Machine Interface) system. This paper describes the major components of the entire robot system that have been built and future plans to integrate them.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129093394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A scheme for an embodied artificial intelligence","authors":"R. Flemmer","doi":"10.1109/ICARA.2000.4804031","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4804031","url":null,"abstract":"A method is presented for constructing an embodied artificial intelligence. The method grows out of a system for object recognition in artificial vision and relies upon such a capability. Examples from the biosphere are extensively discussed in arriving at the method, whose central tenet is that all intelligence is fundamentally related to objects. All the aspects needed by an embodied intelligence are developed, including consciousness, memory, and volition.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132871790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition of lawn information for mowing robots","authors":"Kensuke Tsubata, Keiji Suzuki, S. Mikami, Eiichi Osawa","doi":"10.1109/ICARA.2000.4803975","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4803975","url":null,"abstract":"In this paper, a method for recognizing lawn information for mowing robots (Robomower) is proposed. Mowing robots are expected to work in open areas such as parks. To work in an open area, a robot must ensure public safety, so it is necessary to collect adequate information about its surroundings. The Robomower is a small, low-power robot that uses GPS information to obtain the coordinates of its current position. It carries many sensors: GPS, a terrestrial magnetism direction sensor, ultrasonic sensors, tactile bumper switches, etc. However, the robot previously had no means of measuring the height of the grass, and reflection-type sensors and image analysis systems are not effective for this task. Recognizing the lawn is necessary to complete the mowing. We succeeded in measuring grass height with a simple, inexpensive sensor that estimates the height of the turf using a transmission-type photointerrupter, and its effectiveness was confirmed by experiment. Collecting information from the real environment is necessary to decide the robot's actions. The purpose of past research was an efficiency gain for the task as a whole, and mowing in a real environment was not a goal; overall efficiency is more useful for cooperative robot work than actual mowing work, but actual mowing work requires a means of acquiring appropriate environmental information. The lawn information necessary for mowing is the location of areas with tall turf. To acquire this information, the transmission-type sensor was developed for recognizing lawn environments. With this method, a simple, inexpensive sensor made situation acquisition possible, and the output results are presented.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130941374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bisection method for evaluation of attraction region of passive dynamic walking","authors":"Peijie Zhang, Yantao Tian, Zhenze Liu","doi":"10.1109/ICARA.2000.4803965","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4803965","url":null,"abstract":"A new numerical method for calculating the attraction region of the limit cycle in passive dynamic walking is proposed in this paper. The cell-mapping method is usually used to evaluate the attraction region, but its calculation is complex and time-consuming. Existing studies show that the attraction region of a passive dynamic walking gait is a continuous region that can be determined by its edge. The new method, called the bisection method, approximates the basin of attraction simply by searching for its edge. Using the proposed bisection method, the basins of attraction are calculated for passive dynamic walkers with and without knees, and the results are compared with the cell-mapping method. Compared with the commonly used cell-mapping method, the bisection method locates the attraction region with much higher accuracy while requiring much less computation.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125397644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive game AI for Gomoku","authors":"Kuan Liang Tan, Chin Hiong Tan, K. C. Tan, A. Tay","doi":"10.1109/ICARA.2000.4804026","DOIUrl":"https://doi.org/10.1109/ICARA.2000.4804026","url":null,"abstract":"The field of game intelligence has seen an increase in player-centric research; that is, machine learning techniques are employed in games with the objective of providing an entertaining and satisfying game experience for the human player. This paper proposes an adaptive game AI that can scale its level of difficulty according to the human player's capability for the game of freestyle Gomoku. The proposed algorithm scales the level of difficulty during a game and between games based on how well the human player is performing, so that the game is neither too easy nor too difficult. The adaptive game AI was sent to 50 human respondents as a feasibility study. It was observed that the adaptive AI was able to successfully scale its level of difficulty to match that of the human player, and the human players found it enjoyable to play at a level similar to their own.","PeriodicalId":435769,"journal":{"name":"2009 4th International Conference on Autonomous Robots and Agents","volume":"512 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123063220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}