{"title":"Desargues theorem for augmented reality applications","authors":"C. Maaoui, R. Chellali, J. Fontaine","doi":"10.1109/IROS.2005.1545510","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545510","url":null,"abstract":"In this paper, we propose a new approach for some augmented reality applications by exploiting minimal geometric knowledge and using Desargues theorem. The idea underlying our approach is to use a generalization of Desargues theorem in uncalibrated images context. This approach allows the realization of three applications that includes: points matching, novel view synthesis and adding a virtual object to real scene. Examples on real and synthetic images are presented.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124406507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bipedal walking pattern design based on synchronization of the motions in sagittal and lateral planes","authors":"C. Zhu, Y. Tomizawa, A. Kawamura","doi":"10.1109/IROS.2005.1545397","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545397","url":null,"abstract":"In this paper, a new design approach of bipedal walking pattern based on the synchronization of the motions in sagittal and lateral planes are presented and two walking patterns of ZMP fixed and ZMP variable cases are developed. Based on our previous work, bipedal walking is separated into the initial acceleration, double support, deceleration, and acceleration phases; consequently, the nature that bipedal walking is in fact a continuous acceleration and deceleration motion is revealed. With the discusses on the motions both in the sagittal and lateral planes, the fact that the motions in these two planes are tightly coupled together is clarified. The motion parameters such as the walking velocity, walking time, and phase stride can be easily changed simply by altering the swinging amplitude in lateral plane that is determined by the double support phase. The constraint conditions of the phase stride, velocity and swinging amplitude are investigated. Therefore, an approach for adjusting walking velocity by controlling the swinging amplitude is naturally developed. The motion planning is also presented and a numerical example is given out.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114510908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new redundancy formalism for avoidance in visual servoing","authors":"N. Mansard, F. Chaumette","doi":"10.1109/IROS.2005.1545222","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545222","url":null,"abstract":"The paper presents a new approach to construct a control law that realizes a main task and simultaneously takes supplementary constraints into account. Classically, this is done by using the redundancy formalism. If the main task does not constrain all the motions of the robot, a secondary task can be achieved by using only the remaining degrees of freedom (DOF). We propose a new general method that frees up some of the DOF constrained by the main task in addition of the remaining DOF. The general idea is to enable the motions produced by the secondary control law that help the main task to be completed faster. The main advantage is to enhance the performance of the secondary task by enlarging the number of available DOF. In a formal framework, a projection operator is built which ensures that the secondary control law does not disturb the main task. A control law can be then easily computed from the two tasks considered. Experiments that implement and validate this approach are proposed. The visual servoing framework is used to position a 6-DOF robot while simultaneously avoiding occlusions and joint limits.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117345801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The obstacle-restriction method for robot obstacle avoidance in difficult environments","authors":"J. Minguez","doi":"10.1109/IROS.2005.1545546","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545546","url":null,"abstract":"This paper addresses the obstacle avoidance problem in difficult scenarios that usually are dense, complex and cluttered. The proposal is a method called the obstacle-restriction. At each iteration of the control cycle, this method addresses the obstacle avoidance in two steps. First there is procedure to compute instantaneous subgoals in the obstacle structure (obtained by the sensors). The second step associates a motion restriction to each obstacle, which are managed next to compute the most promising motion direction. The advantage of this technique is that it avoids common limitations of previous obstacle avoidance methods, improving their navigation performance in difficult scenarios. Furthermore, we obtain similar results to the recent methods that achieve navigation in troublesome scenarios. However, the new method improves their behavior in open spaces. The performance of this method is illustrated with experimental results obtained with a robotic wheelchair vehicle.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116288697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart resource reconfiguration by exploiting dynamics in perceptual tasks","authors":"Deepak R. Karuppiah, R. Grupen, A. Hanson, E. Riseman","doi":"10.1109/IROS.2005.1545247","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545247","url":null,"abstract":"In robot and sensor networks, one of the key challenges is to decide when and where to deploy sensory resources to gather information of optimal value. The problem is essentially one of planning, scheduling and controlling the sensors in the network to acquire data from an environment that is constantly varying. The dynamic nature of the problem precludes the use of traditional rule-based strategies that can handle only quasi-static context changes. Automatic context derivation procedures are thus essential for providing fault recovery and fault pre-emption in such systems. We posit that the quality of a sensor network configuration depends on sensor coverage and geometry, sensor allocation policies, and the dynamic processes in the environment. In this paper, we show how these factors can be manipulated in an adaptive framework for robust run-time resource management. We demonstrate our ideas in a people tracking application using a network of multiple cameras. The task specification for our multi-camera network is one of allocating a camera pair that can best localize a human subject given the current context. The system automatically derives policies for switching between camera pairs that enable robust tracking while being attentive to performance measures. Our approach is unique in that we do not make any a priori assumptions about the scene or the activities that take place in the scene. Models of motion dynamics in the scene and the camera network configuration steer the policies to provide robust tracking.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116303228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Walking up and down stairs carrying a human by a biped locomotor with parallel mechanism","authors":"Y. Sugahara, A. Ohta, K. Hashimoto, H. Sunazuka, M. Kawase, C. Tanaka, Hun-ok Lim, A. Takanishi","doi":"10.1109/IROS.2005.1545500","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545500","url":null,"abstract":"This paper describes the means of tuning-up method of the walking parameters to go up and down stairs for a biped robot with leg mechanisms using Stewart platforms. It has been confirmed that the stroke range of use could be reduced by tuning up the waist yaw trajectory and preset ZMP trajectories for motion pattern generation. By using the developed method, a walking experiment involving movement up and down a stair with the rise of 250 mm and certain walking experiments ascending and descending stairs carrying a human were successfully completed. Through these experiments, the effectiveness of the proposed method was confirmed.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116455797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating near minimal spanning control sets for constrained motion planning in discrete state spaces","authors":"M. Pivtoraiko, A. Kelly","doi":"10.1109/IROS.2005.1545046","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545046","url":null,"abstract":"We propose a principled method to create a search space for constrained motion planning, which efficiently encodes only feasible motion plans. The space of possible paths is encoded implicitly in the connections between states, but only feasible and only local connections are allowed. Furthermore, we propose a systematic method to generate a near-minimal set of spatially distinct motion alternatives. This set of motion primitives preserves the connectivity of the representation while eliminating redundancy - leading to a very efficient structure for motion planning at the chosen resolution.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117200200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online visual motion estimation using FastSLAM with SIFT features","authors":"T. Barfoot","doi":"10.1109/IROS.2005.1545444","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545444","url":null,"abstract":"This paper describes a technique to estimate the 3D motion of a vehicle using odometric sensors and a stereo camera. The algorithm falls into the category of simultaneous localization and mapping as a large database of visual landmarks is created. The algorithm has been field tested online on a rover traversing loose terrain in the presence of obstacles. The resulting position estimation errors are between 0.5% and 4% of distance travelled, a significant improvement over odometry alone.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124606159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A case study of 3D stereoscopic vs. 2D monoscopic tele-reality in real-time dexterous teleoperation","authors":"W. Fung, W. Lo, Yunhui Liu, N. Xi","doi":"10.1109/IROS.2005.1545299","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545299","url":null,"abstract":"This paper reports a case study of using single 3D stereoscopic visual feedback for real-time teleoperation of dexterous tasks. In traditional teleoperation systems, real-time visual feedbacks of multiple monoscopic views of the robot workspace are provided for remote operator. However, it is difficult for the operator to control remote robot to perform dexterous tasks by looking at multiple video feedbacks at the same time. During teleoperation, remote operators usually find multiple 2D visual feedbacks confusing, especially when performing dexterous tasks that require accurate positioning and orientating of robot end-effectors. In this paper, we propose to provide single real-time 3D stereoscopic visual feedback for remote operators so that they perceive remote robot workspace with the sense of depth. This sense of 3D empowers remote operators to accurately position and orient robot end-effector with confidence. Experiments have been conducted to reveal the usefulness of real-time 3D stereoscopic video feedback over multiple monoscopic video feedback in real-time teleoperation.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124638468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Random sampling algorithm for multi-agent cooperation planning","authors":"Shotaro Kamio, H. Iba","doi":"10.1109/IROS.2005.1545219","DOIUrl":"https://doi.org/10.1109/IROS.2005.1545219","url":null,"abstract":"The cooperation of several robots is needed for complex tasks. The cooperation methods for multiple robots generally require exact goal or sub-goal positions. However, it is difficult to direct the goal or sub-goal positions to multiple robots for the sake of cooperation with each other. Planning algorithms reduce the burden for this purpose. In this paper, we propose a multi-agent planning algorithm based on a random sampling method. This method doesn't require the exact sub-goal positions nor the times at which cooperation occurs. The effectiveness of this approach is empirically shown by simulation results.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129769662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}