Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97 (latest publications)
{"title":"Development of power assist system with individual compensation ratios for gravity and dynamic load","authors":"Y. Hayashibara, K. Tanie, H. Arai, H. Tokashiki","doi":"10.1109/IROS.1997.655079","DOIUrl":"https://doi.org/10.1109/IROS.1997.655079","url":null,"abstract":"This paper presents the design concept of a power assist system. In such a system, when the controller is designed without considering the maximum torque of the actuators, the actuators can sometimes become saturated, resulting in a loss of stability and manoeuvrability. We propose a method for dealing with this problem. The load force is divided into gravitational and dynamic components, and each component is attenuated by an individual ratio. These ratios are determined considering the maximum power of the operator and the actuators.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125890666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
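The load-splitting idea in the abstract above can be sketched in a few lines. This is an illustrative toy, not the authors' controller; the function and parameter names (and the example values) are assumptions.

```python
# Toy sketch of individual compensation ratios: the load is split into a
# gravitational (static) component and a dynamic (inertial) component, and
# each is attenuated by its own ratio before being commanded as assist force.

G = 9.81  # gravitational acceleration [m/s^2]

def assist_force(mass, accel, ratio_gravity, ratio_dynamic):
    """Assist force for a load of `mass` [kg] undergoing `accel` [m/s^2].

    Keeping the ratios below 1.0 is what lets the designer trade assistance
    against the actuators' maximum torque.
    """
    gravity_load = mass * G      # static component
    dynamic_load = mass * accel  # inertial component
    return ratio_gravity * gravity_load + ratio_dynamic * dynamic_load

# e.g. a 2 kg load accelerating at 1 m/s^2, with 60% gravity
# and 30% dynamic compensation:
force = assist_force(2.0, 1.0, 0.6, 0.3)
```

Because the two ratios are independent, the static holding force and the transient inertial force can be tuned separately, which is the paper's central point.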
{"title":"Visual learning and object verification with illumination invariance","authors":"K. Ohba, Yoichi Sato, K. Ikeuchi","doi":"10.1109/IROS.1997.655139","DOIUrl":"https://doi.org/10.1109/IROS.1997.655139","url":null,"abstract":"This paper describes a method for recognizing partially occluded objects, to realize a bin-picking task under different levels of illumination brightness, using eigenspace analysis. In the proposed method, a measured color in the RGB color space is transformed into the HSV color space. Then, the hue of the measured color, which is invariant to changes in illumination brightness and direction, is used for recognizing multiple objects under different illumination conditions. The proposed method was applied to real images of multiple objects under different illumination conditions, and the objects were recognized and localized successfully.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123685916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
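The hue-invariance property the abstract relies on is easy to demonstrate: scaling an RGB color by a brightness factor leaves its HSV hue unchanged. A minimal sketch (not the authors' code, which additionally uses eigenspace analysis for recognition):

```python
# Hue is stable under illumination-brightness changes: multiplying all three
# RGB channels by the same factor does not move the hue component of HSV.
import colorsys

def hue(r, g, b):
    """Hue component of an RGB color, in [0, 1)."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return h

bright = hue(0.8, 0.2, 0.2)  # a reddish patch under bright light
dim = hue(0.4, 0.1, 0.1)     # the same patch at half brightness
# bright == dim: the hue survives the brightness change,
# which is why it is a useful feature across illumination levels.
```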
{"title":"Experiments on depth from magnification and blurring","authors":"S. Ahn, Sukhan Lee, A. Meyyappan, P. Schenker","doi":"10.1109/IROS.1997.655092","DOIUrl":"https://doi.org/10.1109/IROS.1997.655092","url":null,"abstract":"A new method of extracting depth from the blurring and magnification of objects or a local scene is presented. Assuming no active illumination, the images are taken at two camera positions separated by a small displacement, using a single standard camera with a telecentric lens. Thus, the depth extraction method is simple in structure and efficient in computation. By fusing the two disparate sources of depth information, magnification and blurring, the proposed method provides more accurate and robust depth estimation. This paper describes the experiments performed to validate this concept and the work done in this area to date. The experimental results show less than 1% error over an optimal depth range. The ultimate aim of this concept is the construction of dense 3D maps of objects and real-time continuous estimation of depth.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115487767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual tracking of an end-effector by adaptive kinematic prediction","authors":"A. Ruf, M. Tonko, R. Horaud, H. Nagel","doi":"10.1109/IROS.1997.655115","DOIUrl":"https://doi.org/10.1109/IROS.1997.655115","url":null,"abstract":"Presents results of a model-based approach to visual tracking and pose estimation for a moving polyhedral tool in position-based visual servoing. This enables the control of a robot in look-and-move mode to achieve six-degree-of-freedom goal configurations. Robust solutions of the correspondence problem, known as \"matching\" in the static case and \"tracking\" in the dynamic one, are crucial to the feasibility of such an approach in real-world environments. The object's motion along an arbitrary trajectory in space is tracked using visual pose estimates through consecutive images. Subsequent positions are predicted from robot joint angle measurements. To deal with inaccurate models and to relax calibration requirements, adaptive online calibration of the kinematic chain is proposed. The kinematic predictions enable unambiguous feature matching by a pessimistic algorithm. The performance of the suggested algorithms and the robustness of the proposed system are evaluated on real image sequences of a moving gripper. The results fulfill the requirements of visual servoing, and the computational demands are sufficiently low to allow for real-time implementation.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115570656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavioral expression by an expressive mobile robot-expressing vividness, mental distance, and attention","authors":"H. Mizoguchi, Katsuyuki Takagi, Y. Hatamura, M. Nakao, Tomomasa Sato","doi":"10.1109/IROS.1997.649070","DOIUrl":"https://doi.org/10.1109/IROS.1997.649070","url":null,"abstract":"This paper proposes the idea that a mobile robot can display behavioral expressions through its motion. Behavioral expressions are expressions of vividness, sense of distance, and attention. To confirm the idea concretely, an expressive mobile robot has been designed and implemented to display the behavioral expressions. Utilizing the robot, psychological experiments have been conducted to evaluate impressions of three items: 1) velocity changing pattern, 2) distance between human and robot, and 3) various poses. The experimental results indicate: firstly, there is a proper speed pattern for the expression of vividness, the pattern being triangular along the time axis; secondly, there is a proper distance range between human and robot for the expression of mental distance between them, its average value being about 2.5 m; thirdly, when the robot faces the human, the impression of attention is increased when the robot tilts its head to one side or raises its hands. The implemented expressive mobile robot is puppy-sized and has 2 DOFs for motion, 2 DOFs for two swingable arms, and 3 DOFs for pan, tilt and yaw of its head. The experimental results prove the feasibility of the proposed idea of behavioral expression by the robot.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116084906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"G/sup en/oM: a tool for the specification and the implementation of operating modules in a distributed robot architecture","authors":"S. Fleury, M. Herrb, R. Chatila","doi":"10.1109/IROS.1997.655108","DOIUrl":"https://doi.org/10.1109/IROS.1997.655108","url":null,"abstract":"This paper presents a general methodology for the specification and the integration of functional modules in a distributed reactive robot architecture. The approach is based on a hybrid architecture composed of two levels: a lower distributed functional level controlled by a centralized decisional level. With this methodology, synchronous or asynchronous operating capabilities (servo-control, data processing, event monitoring) can be easily added to the functional level. They are encapsulated into modules, built according to a generic model, that are seen by the decisional level as homogeneous, programmable, reactive and robust communicating services. Each module is simply described with a specific language and is automatically produced by a generator of modules (G/sup en/oM) according to the generic model. G/sup en/oM also produces an interactive test program and interface libraries to control the module and to read the resulting data, which allow one to directly integrate the module into the architecture.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116186577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-based object tracking in cluttered scenes with occlusions","authors":"F. Jurie","doi":"10.1109/IROS.1997.655114","DOIUrl":"https://doi.org/10.1109/IROS.1997.655114","url":null,"abstract":"We propose an efficient method for tracking 3D modelled objects in cluttered scenes. Rather than tracking objects in the image, our approach relies on the object recognition aspect of tracking. Candidate matches between image and model features define volumes in the space of transformations. The volumes of the pose space satisfying the maximum number of correspondences are those that best align the model with the image. Object motion defines a trajectory in the pose space. We give some results showing that the presented method allows tracking of objects even when they are totally occluded for a short while, without supposing any motion model and with a low computational cost (below 200 ms per frame on a basic workstation). Furthermore, this algorithm can also be used to initialize the tracking.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122703838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquisition of statistical motion patterns in dynamic environments and their application to mobile robot motion planning","authors":"E. Kruse, R. Gutsche, F. Wahl","doi":"10.1109/IROS.1997.655089","DOIUrl":"https://doi.org/10.1109/IROS.1997.655089","url":null,"abstract":"In recent papers we (1996, 1997) have proposed a new path planning approach for mobile robots: statistical motion planning with respect to typical obstacle behavior in order to improve pre-planning in dynamic environments. In this paper, we present our experimental system: in a real environment, cameras observe the workspace in order to detect obstacle motions and to derive statistical data. We have developed new techniques based on stochastic trajectories to model obstacle behavior. Collision probabilities are calculated for polygonal objects moving on piecewise linear trajectories. The statistical data can be applied directly, thus the entire chain from raw sensor data to a stochastic assessment of robot trajectories is closed. Finally, some new work regarding different applications of statistical motion planning is outlined, including road-map approaches for pre-planning, expected time to reach the goal, and reactive behaviors.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122729656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
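The stochastic-assessment step described in the abstract above can be illustrated with a toy Monte Carlo estimate. This is a deliberately simplified 1-D sketch under assumed names and distributions, not the paper's polygonal-object computation:

```python
# Toy sketch: sample obstacle trajectories from an observed (here: Gaussian)
# velocity distribution and estimate the probability that the obstacle violates
# a safety radius around the robot's planned straight-line motion.
import random

def collision_probability(robot_speed, obstacle_start, speed_mean, speed_std,
                          safety_radius, trials=10000, seed=0):
    """Fraction of sampled obstacle trajectories that breach the safety radius.

    1-D model: the robot starts at 0 moving right at `robot_speed`; the
    obstacle starts at `obstacle_start` moving left at a sampled speed.
    Both travel for 1 s, checked in 0.1 s steps.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(trials):
        speed = rng.gauss(speed_mean, speed_std)
        if any(abs(robot_speed * t - (obstacle_start - speed * t)) < safety_radius
               for t in (i * 0.1 for i in range(11))):
            hits += 1
    return hits / trials
```

A planner in the spirit of the paper would evaluate such probabilities for candidate trajectories and prefer the ones with low expected collision risk.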
{"title":"Autonomous navigation in ill-structured outdoor environment","authors":"Josep Fernández, A. Casals","doi":"10.1109/IROS.1997.649093","DOIUrl":"https://doi.org/10.1109/IROS.1997.649093","url":null,"abstract":"Presents a methodology for autonomous navigation in weakly structured outdoor environments such as dirt roads or mountain ways. The main problem to solve is the detection of an ill-defined structure (the way) and of the obstacles in the scene, when working in variable lighting conditions. First, we discuss the road description requirements for performing autonomous navigation in this kind of environment and propose a simple sensor configuration based on vision. A simplified road description is generated from the analysis of a sequence of color images, considering the constraints imposed by the model of ill-structured roads. This environment description is done in three steps: region segmentation, obstacle detection and coherence evaluation.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122563080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Camera calibration from multiple views of a 2D object, using a global nonlinear minimization method","authors":"M. Devy, V. Garric, J. Orteu","doi":"10.1109/IROS.1997.656569","DOIUrl":"https://doi.org/10.1109/IROS.1997.656569","url":null,"abstract":"An important task in most 3D vision systems is camera calibration. Many camera models, numerical methods and experimental set-ups have been proposed in the literature to solve the calibration problem. We have analysed and tried many methods, and we conclude that the main problems lie in the choice of the numerical method and of the calibration object. We propose in this paper a method which is based on a camera model that incorporates lens distortion, and involves a nonlinear minimization technique which can be performed using multiple views of a single 2D object and subpixel feature extraction. We present an application for which only a 2D calibration object can be used.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"213 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122662244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}