{"title":"Reaction Force Inspection System Using Neural Network Classifier","authors":"Y. Yamada, Y. Komura","doi":"10.1109/ROBOT.2005.1570252","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570252","url":null,"abstract":"People recognize the quality of a product while in operation by hands or fingers. The operation feeling by hands or fingers is one of the important indexes for the high-grade products. However, skilled inspectors are used to inspect some products because automatic inspection is technologically difficult or too high in cost. This paper looks at a system for inspection of the quality of a product’s reaction force characteristics. This system, until now considered difficult to realize, automates the inspection method utilizing the touching of an inspector's finger. Neural network classifier is applied to the system for products to learn an inspector's finger judgment. We provide an input layer of a neural network classifier with nodes corresponding to time-and frequency-domain features of reaction forces of a product and an output layer with three nodes corresponding to a judgment; being one of non-defective, defective, or unable to judge. From experimental results, the effectiveness of the proposed neural network classifier has been clarified.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116970783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Use of UML for Modeling Physical Systems","authors":"C. Secchi, C. Fantuzzi, M. Bonfè","doi":"10.1109/ROBOT.2005.1570731","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570731","url":null,"abstract":"The aim of this paper is to provide a unified language for modeling both control software and physical plants in real time control systems. This is done by embedding the bond graph modeling language for physical systems into the UML-RT framework, widely used to model distributed real-time software.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117003651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensor Fusion based 3D Target Visual Tracking for Autonomous Vehicles with IMM","authors":"Zhen Jia, Arjuna Balasuriya, S. Challa","doi":"10.1109/ROBOT.2005.1570379","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570379","url":null,"abstract":"This paper proposes an approach for object identification and tracking for autonomous vehicle application. In this scheme, data from the vehicle’s onboard vision and motion sensors are fused to identify the target 3D dynamic features in the world coordinate. Here several simple and basic linear dynamic models are combined to make the approximation of the target’s unpredicted or complex motion properties. With these basic linear dynamic models a detailed description of the 3D target tracking system with the interacting multiple models (IMM) for Extended Kalman Filtering is presented. The target’s final state estimates are obtained as a weighted combination of the outputs from each different model. Performance of the proposed interacting multiple dynamic model tracking algorithm is demonstrated through experimental results.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"28 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123441929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Biomimetic Design of the Berkeley Lower Extremity Exoskeleton (BLEEX)","authors":"A. Chu, H. Kazerooni, A. Zoss","doi":"10.1109/ROBOT.2005.1570789","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570789","url":null,"abstract":"Many places in the world are too rugged or enclosed for vehicles to access. Even today, material transport to such areas is limited to manual labor and beasts of burden. Modern advancements in wearable robotics may make those methods obsolete. Lower extremity exoskeletons seek to supplement the intelligence and sensory systems of a human with the significant strength and endurance of a pair of wearable robotic legs that support a payload. This paper outlines the use of Clinical Gait Analysis data as the framework for the design of such a system at UC Berkeley.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123640714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Micro Manipulators for Intrauterine Fetal Surgery in an Open MRI","authors":"K. Harada, K. Tsubouchi, T. Chiba, M. Fujie","doi":"10.1109/ROBOT.2005.1570168","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570168","url":null,"abstract":"We propose a new surgical robotic system for intrauterine fetal surgery in an Open MRI. The target disease of the fetal surgery is spina bifida or myelomeningocele that is incomplete closure in the spinal column and one of the common fetal diseases. In the proposed surgical process, the abdominal wall and uterine wall would not widely be opened but rather surgical instruments inserted through the small holes in both walls to perform minimally invasive surgery. In this paper, a prototype of the micro manipulator of diameter is 2.4mm and bending radius 2.45 mm is presented. The diameter and bending radius of this manipulator is one of the smallest ever developed among surgical robots to the best of the knowledge of the investigating authors. The mechanism of the manipulator includes two ball joints and is driven using four wires able to bend through 90 degrees in any direction. The features of the mechanism include a small diameter, small bending radius, ease of fabrication, high rigidity and applicability for other surgical applications. Although the manipulator is not yet MRI compatible, the feature of the prototype demonstrated the feasibility of robotic intrauterine fetal surgery.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116838777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing a Control Architecture for Multiple Unmanned Aerial Vehicles to Search and Localize RF Time-Varying Mobile Targets: Part I","authors":"D. Pack, G. York","doi":"10.1109/ROBOT.2005.1570725","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570725","url":null,"abstract":"In this paper, we present a control architecture that allows multiple Unmanned Aerial Vehicles (UAVs) to cooperatively detect mobile RF (Radio Frequency) emitting ground targets. The architecture is developed under the premise that UAVs are controlled as a distributed system. The distributed system-based technique maximizes the search and detection capabilities of multiple UAVs. We use a hybrid approach that combines a set of intentional cooperative rules with emerging properties of a swarm to accomplish the objective. The UAVs are equipped only with low-precision RF direction finding sensors and we assume the targets may emit signals randomly with variable duration. Once a target is detected, each UAV optimizes a cost function to determine whether to participate in a cooperative localization task. The cost function balances between the completion of detecting all targets (global search) in the search space and increasing the precision of cooperatively locating already detected targets. A search function for each UAV determines the collective search patterns of collaborating UAVs. Two functions used by each UAV determine (1) the optimal number of UAVs involved in locating targets, (2) the search pattern to detect all targets, and (3) the UAV flight path for an individual UAV. We show the validity of our algorithm using simulation results. Hardware implementation of the strategies is planned for this coming year.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124014418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision SLAM in the Measurement Subspace","authors":"John Folkesson, P. Jensfelt, H. Christensen","doi":"10.1109/ROBOT.2005.1570092","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570092","url":null,"abstract":"In this paper we describe an approach to feature representation for simultaneous localization and mapping, SLAM. It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows for the features to be added to the map with partial initialization. This is an important property when using oriented vision features where angle information can be used before their full pose is known. The number of the dimensions for a feature can grow with time as more information is acquired. At the same time as the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited to allow SLAM algorithms to be interchanged as well as choice of sensors and features. In other words the SLAM implementation need not be changed at all when changing sensors and features and vice versa. Experimental results both with vision and range data and combinations thereof are presented.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124409713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Task Assignment with Dynamic Perception and Constrained Tasks in a Multi-Robot System","authors":"A. Farinelli, L. Iocchi, D. Nardi, V. Ziparo","doi":"10.1109/ROBOT.2005.1570330","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570330","url":null,"abstract":"In this paper we present an asynchronous distributed mechanism for allocating tasks in a team of robots. Tasks to be allocated are dynamically perceived from the environment and can be tied by execution constraints. Conflicts among team mates arise when an uncontrolled number of robots execute the same task, resulting in waste of effort and spatial conflicts. The critical aspect of task allocation in Multi Robot Systems is related to conflicts generated by limited and noisy perception capabilities of real robots. This requires significant extensions to the task allocation techniques developed for software agents. The proposed approach is able to successfully allocate roles to robots avoiding conflicts among team mates and maintaining low communication overhead. We implemented our method on AIBO robots and performed quantitative analysis in a simulated environment.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125798953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reachability Analysis of Sampling Based Planners","authors":"Roland Geraerts, M. Overmars","doi":"10.1109/ROBOT.2005.1570152","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570152","url":null,"abstract":"The last decade, sampling based planners like the Probabilistic Roadmap Method have proved to be successful in solving complex motion planning problems. We give a reachability based analysis for these planners which leads to a better understanding of the success of the approach and enhancements of the techniques suggested. This also enables us to study the effect of using new local planners.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125837596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Geometric Theory for Synthesis and Analysis of Sub-6 DoF Parallel Manipulators","authors":"J. Meng, Guanfeng Liu, Zexiang Li","doi":"10.1109/ROBOT.2005.1570560","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570560","url":null,"abstract":"This paper presents a rigorous and precise geometric theory for the analysis and synthesis of sub-6 DoF parallel manipulators. We give a rigorous definition for the parallel manipulator synthesis problem, and introduce a general method for specifying the corresponding subchains which will result in the desired parallel manipulator. Following this, a procedure for solving the parallel manipulator synthesis problem is proposed when the set of desired end-effector motions is in the form of Lie subgroup or a regular submanifold of SE(3). Numerous examples are used to illustrate the generality and effectiveness of the proposed synthesis method.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"330 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124662295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}