{"title":"Design and calibration of single-camera catadioptric omnistereo system for miniature aerial vehicles (MAVs)","authors":"Ling Guo, I. Labutov, Jizhong Xiao","doi":"10.1109/IROS.2010.5650276","DOIUrl":"https://doi.org/10.1109/IROS.2010.5650276","url":null,"abstract":"Stereo systems play an important role in the navigation of MAVs. In this paper, we design a single-camera catadioptric omnistereo system for MAVs, which consists of one hyperboloidal mirror, one hyperboloidal-planar combined mirror, and one conventional camera. System parameters are optimized based on an analysis of the constraints and of each parameter's influence on performance. The projective model of this system is derived, providing a foundation for a sphere-based calibration algorithm that calibrates not only the conventional camera parameters but also the mirror parameters. We also prove that a minimum of two spheres is needed to calibrate the seven parameters.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116782250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a variable impedance differential actuator for wearable robotics applications","authors":"N. L. Tagliamonte, F. Sergi, G. Buttazzo, D. Accoto, E. Guglielmelli","doi":"10.1109/IROS.2010.5649982","DOIUrl":"https://doi.org/10.1109/IROS.2010.5649982","url":null,"abstract":"In the design of wearable robots, the possibility of dynamically regulating the mechanical output impedance is crucial to achieve an efficient and safe human-robot interaction and to produce useful emergent dynamical behaviors.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116911277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bringing simulation to life: A mixed reality autonomous intersection","authors":"Michael J. Quinlan, T. Au, Jesse Zhu, Nicolae Stiurca, P. Stone","doi":"10.1109/IROS.2010.5651993","DOIUrl":"https://doi.org/10.1109/IROS.2010.5651993","url":null,"abstract":"Fully autonomous vehicles are technologically feasible with the current generation of hardware, as demonstrated by recent robot car competitions. Dresner and Stone proposed a new intersection control protocol called Autonomous Intersection Management (AIM) and showed that with autonomous vehicles it is possible to make intersection control much more efficient than traditional control mechanisms such as traffic signals and stop signs. The protocol, however, has only been tested in simulation and has not been evaluated with real autonomous vehicles. To realistically test the protocol, we implemented a mixed reality platform on which an autonomous vehicle can interact with multiple virtual vehicles in a simulation at a real intersection in real time. From this platform we validated realistic parameters for our autonomous vehicle to safely traverse an intersection in AIM. We present several techniques to improve efficiency and show that the AIM protocol can still outperform traffic signals and stop signs even if the cars are not as precisely controllable as has been assumed in previous studies.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117235390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating IMU and landmark sensors for 3D SLAM and the observability analysis","authors":"Farhad Aghili","doi":"10.1109/IROS.2010.5650359","DOIUrl":"https://doi.org/10.1109/IROS.2010.5650359","url":null,"abstract":"This paper investigates 3-dimensional Simultaneous Localization and Mapping (SLAM) and the corresponding observability analysis by fusing data from landmark sensors and a strap-down Inertial Measurement Unit (IMU) in an adaptive Kalman filter (KF). In addition to the vehicle's states and landmark positions, the self-tuning filter estimates the IMU calibration parameters as well as the covariance of the measurement noise. Examining the observability of the 3D SLAM system leads to the conclusion that the system remains observable provided that at least one of these conditions is satisfied: i) two known landmarks whose connecting line is not collinear with the acceleration vector are observed, or ii) three known landmarks that are not placed in a straight line are observed.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127546178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the initialization of statistical optimum filters with application to motion estimation","authors":"L. Kneip, D. Scaramuzza, R. Siegwart","doi":"10.1109/IROS.2010.5652200","DOIUrl":"https://doi.org/10.1109/IROS.2010.5652200","url":null,"abstract":"The present paper focuses on the initialization of statistical optimum filters for motion estimation in robotics. It shows that if certain conditions concerning the stability of a system are fulfilled, and some knowledge about the mean of the state is available, an initial error covariance matrix that is optimal with regard to the convergence behavior of the filter estimate can be obtained analytically. Simple algorithms for the n-dimensional continuous and discrete cases are presented. The applicability to non-linear systems is also pointed out. The convergence of a standard Kalman filter is analyzed in simulation using the discrete model of a theoretical example.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125044381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A visual exploration algorithm using semantic cues that constructs image based hybrid maps","authors":"Aravindhan K. Krishnan, K. Krishna","doi":"10.1109/IROS.2010.5649870","DOIUrl":"https://doi.org/10.1109/IROS.2010.5649870","url":null,"abstract":"A vision-based exploration algorithm that invokes semantic cues for constructing a hybrid map of images - a combination of semantic and topological maps - is presented in this paper. At the top level the map is a graph of semantic constructs. Each node in the graph is a semantic construct or label, such as a room or a corridor, and each edge is represented by a transition region, such as a doorway, that links two semantic constructs. Each semantic node embeds within it a topological graph that constitutes the map at the middle level. The topological graph is a set of nodes, each representing an image of the higher semantic construct. At the low level the topological graph embeds metric values and relations: each node embeds the pose of the robot from which the image was taken, and any two nodes in the graph are related by a transformation consisting of a rotation and a translation. The exploration algorithm explores a semantic construct completely before moving or branching onto a new construct. Within each semantic construct it uses a local feature based exploration algorithm that combines local and global decisions to choose the next best place to move. While exploring a semantic construct it identifies transition regions that serve as gateways from that construct to another. The exploration is deemed complete when all transition regions are marked visited. Loop detection happens at transition regions, and graph relaxation techniques are used to close detected loops to obtain a consistent metric embedding of the robot poses. Semantic constructs are labeled using a visual bag-of-words (VBOW) representation with a probabilistic SVM classifier.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125124939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Innovative kinematics and control to improve robot spatial resolution","authors":"J. Brethé","doi":"10.1109/IROS.2010.5650352","DOIUrl":"https://doi.org/10.1109/IROS.2010.5650352","url":null,"abstract":"The paper presents innovative kinematics and control of a planar redundant robot designed to improve spatial resolution by a factor of 5. This result is obtained in a restricted area of the workspace using position information from external sensors. This innovation results from a clearer understanding of the factors that influence the robot's micrometric behavior: the axes' control resolution generates a set of attainable points in the robot workspace, and different spatial resolution patterns appear when introducing redundancy, depending on the final axes chosen to correct the position.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125772429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imposing joint kinematic constraints with an upper limb exoskeleton without constraining the end-point motion","authors":"V. Crocher, A. Sahbani, G. Morel","doi":"10.1109/IROS.2010.5650961","DOIUrl":"https://doi.org/10.1109/IROS.2010.5650961","url":null,"abstract":"One of the key features of upper limb exoskeletons is their ability to take advantage of the human arm's kinematic redundancy in order to modify the subject's joint dynamics without affecting his/her hand motion. This is of particular interest in the field of neurorehabilitation, when an exoskeleton is used to interact with a patient who suffers from joint motion desynchronization, resulting e.g. from brain damage following a stroke.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126096704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic evaluation of region of interest for intelligent robot","authors":"M. Rokunuzzaman, K. Sekiyama, T. Fukuda","doi":"10.1109/IROS.2010.5652064","DOIUrl":"https://doi.org/10.1109/IROS.2010.5652064","url":null,"abstract":"This paper introduces the concept of semantic evaluation of a Region of Interest (ROI) for intelligent robots. An intelligent robot must be capable of understanding situations. The first step in understanding a situation is determining where to focus and how to behave. Focusing on a particular area or region requires selecting the objects of interaction relevant to the context. Moreover, the focused area must be semantically evaluated to quantify the semantic relations. In this paper, we first detect interacting objects based on dynamic interaction. Then we recognize probable objects using Dynamic Bayesian Networks. Using the probable objects and a mutual supplementation model, we determine the contextual object. We form ROIs based on possible combinations of objects and the contextual object. Finally, we semantically evaluate each ROI. Various experimental results are provided to illustrate our method.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123243148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A bio-plausible design for visual pose stabilization","authors":"Shuo Han, A. Censi, A. Straw, R. Murray","doi":"10.1109/IROS.2010.5652857","DOIUrl":"https://doi.org/10.1109/IROS.2010.5652857","url":null,"abstract":"We consider the problem of purely visual pose stabilization (also known as servoing) of a second-order rigid-body system with six degrees of freedom: how to choose forces and torques, based on the current view and a memorized goal image, to steer the pose towards a desired one. Emphasis has been given to the bio-plausibility of the computation, in the sense that the control laws could be in principle implemented on the neural substrate of simple insects. We show that stabilizing laws can be realized by bilinear/quadratic operations on the visual input. This particular computational structure has several numerically favorable characteristics (sparse, local, and parallel), and thus permits an efficient engineering implementation. We show results of the control law tested on an indoor helicopter platform.","PeriodicalId":420658,"journal":{"name":"2010 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"21 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123559687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}