{"title":"Calibration of a network of Kinect sensors for robotic inspection over a large workspace","authors":"R. Macknojia, A. Chávez-Aragón, P. Payeur, R. Laganière","doi":"10.1109/WORV.2013.6521936","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521936","url":null,"abstract":"This paper presents an approach for calibrating a network of Kinect devices used to guide robotic arms with rapidly acquired 3D models. The method takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy within the range of the depth measurements accuracy provided by this technology. The internal calibration of the sensor in between the color and depth measurement is also presented. The resulting system is developed to inspect large objects, such as vehicles, positioned within an enlarged field of view created by the network of RGB-D sensors.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130356153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensitivity evaluation of embedded code detection in imperceptible structured light sensing","authors":"Jingwen Dai, R. Chung","doi":"10.1109/WORV.2013.6521910","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521910","url":null,"abstract":"We address the use of pre-trained primitive-shape detectors for identifying embedded codes in imperceptible structured light (ISL) sensing. The accuracy of the whole sensing system is determined by the performance of such detectors. In training-based methods, generalization of the training results is often an issue, and it is especially so when the work scenario could have substantial variation between the training stage and the operation stage. This paper presents sensitivity evaluation results of embedded code detection in ISL sensing, together with the associated statistical analysis. They show that the scheme of embedding imperceptible codes into normal video projection can be maintained effective despite possible variations on sensing distance, projection-surface orientation, projection-surface shape, projection-surface texture and hardware configuration. The finding indicates the feasibility of integrating the ISL method into robotic systems for operation over a wide domain of circumstances.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115415573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trinocular visual odometry for divergent views with minimal overlap","authors":"Jaeheon Jeong, J. Mulligan, N. Correll","doi":"10.1109/WORV.2013.6521943","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521943","url":null,"abstract":"We present a visual odometry algorithm for trinocular systems with divergent views and minimal overlap. Whereas the bundle adjustment is the preferred method for multi-view visual odometry problems, it is infeasible if the number of features in the images-such as in HD videos-is large. We propose a divide and conquer approach, which reduces the trinocular visual odometry problem to five monocular visual odometry problems, one for each individual camera sequence and two more using features matched temporally from consecutive images from the center to the left and right cameras, respectively. Unlike the bundle adjustment method, whose computational complexity is O(n3), the proposed approach allows to match features only between neighboring cameras and can therefore be executed in O(n2). Assuming constant motion of the cameras, temporal tracking therefore allows us to make up for the missing overlap between cameras as objects from the center view eventually appear in the left or right camera. The scale factors that cannot be determined by monocular visual odometry are computed by constructing a system of equations based on known relative camera pose and the five monocular VO estimates. The system is solved using a weighted least squares scheme and remains over-defined even when the camera path follows a straight line. We evaluate the resulting system using synthetic and real video sequences that were recorded for a virtual exercise environment.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125862109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dense range images from sparse point clouds using multi-scale processing","authors":"L. Do, Lingni Ma, P. D. De with","doi":"10.1109/WORV.2013.6521928","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521928","url":null,"abstract":"Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling, robot navigation etc. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such applications. Our proposal addresses the problem of sparse data, mixed-pixels at the discontinuities and occlusions by combining multi-scale range images. The visual results show that our algorithm can create high-resolution dense range images with sharp discontinuities, while preserving the topology of objects even for environments that contain occlusions. To demonstrate the effectiveness of our approach, we propose an iterative perspective-to-point algorithm that aligns the edges between the color image and the range image from various viewpoints. The experimental results from 46 viewpoints show that the camera pose can be corrected when using high-accuracy dense range images, so that 3D reconstruction or 3D rendering can obtain a clearly higher quality.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126679441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time Collision Risk Estimation based on Pearson's Correlation Coefficient","authors":"A. Miranda Neto, A. Victorino, I. Fantoni, J. V. Ferreira","doi":"10.1109/WORV.2013.6521911","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521911","url":null,"abstract":"The perception of the environment is a major issue in autonomous robots. In our previous works, we have proposed a visual perception system based on an automatic image discarding method as a simple solution to improve the performance of a real-time navigation system. In this paper, we take place in the obstacle avoidance context for vehicles in dynamic and unknown environments, and we propose a new method for Collision Risk Estimation based on Pearson's Correlation Coefficient (PCC). Applying the PCC to real-time CRE has not been done yet, making the concept unique. This paper provides a novel way of calculating collision risk and applying it for object avoidance using the PCC. This real-time perception system has been evaluated from real data obtained by our intelligent vehicle.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127697747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A wireless robotic video laparo-endoscope for minimal invasive surgery","authors":"A. Alqassis, C. A. Castro, S. Smith, T. Ketterl, Yu Sun, P. P. Savage, R. Gitlin","doi":"10.1109/WORV.2013.6521931","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521931","url":null,"abstract":"This paper describes the design, prototype and deployment of a network of wireless Miniature Anchored Robotic Videoscopes for Expedited Laparoscopy (MARVEL). The MARVEL robotic Camera Modules (CMs) remove the need for a dedicated trocar port for an external laparoscope, additional incisions for surgical instrumentation, camera cabling for power, video and xenon light, and an assistant in the operating room to hold and position the laparoscope. The system includes: (1) Multiple MARVEL CMs that feature a wireless controlled pan/tilt camera platform, which provides a full hemisphere field of view inside the abdominal cavity from different angles, wirelessly controlled focus and a wireless illumination control system, (2) a Master Control Module (MCM) that provides a near-zero latency video wireless communications link, independent wireless control for multiple MARVEL CMs, digital zoom, manual focus, and a wireless Human-Machine Interface (HMI) that provides the surgeon with full control over all the functions of the CMs. In-vivo experiments on a porcine subject were carried out to test the performance of the system.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117272192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Near surface light source estimation from a single view image","authors":"Wu Yuan Xie, C. Chung","doi":"10.1109/WORV.2013.6521920","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521920","url":null,"abstract":"Several techniques have been developed for estimating light source position in indoor or outdoor environment. However, those techniques assume that the light source can be approximated by a point, which cannot be applied safely to, for example, some case of Photometric Stereo reconstruction, when the light source is placed quite close to a small-size target, and hence the size of light source cannot be ignored. In this paper, we present a novel approach for estimating light source from single image of a scene that is illuminated by near surface light source. We propose to employ a shiny sphere and a Lambertion plate as light probe to locate light source position, where albedo variance of the Lambertian plate is used as the basis of the object function. We also illustrate the convexity of this object function and propose an efficient way to search the optimal value, i.e. source position. We test our calibration results on real images by means of Photometric Stereo reconstruction and image rendering, and both testing results show the accuracy of our estimation framework.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124891960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active view planing for human observation through a RGB-D camera","authors":"Jianhao Du, W. Sheng","doi":"10.1109/WORV.2013.6521923","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521923","url":null,"abstract":"Human sensing is always an important topic for robotic applications. In this paper, we proposed an active view planning approach for human observation on a mobile robot platform with sensor data processing. The sensor adopted in our research is an inexpensive RGB-D camera. A new measure based on distance and orientation information is introduced to evaluate the quality of the viewpoint when the robot detects the human subject. The result shows that the robot can move to the best viewpoint based on the proposed approach.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122569733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Meal support system with spoon using laser range finder and manipulator","authors":"Yuichi Kobayashi, Yutaro Ohshima, T. Kaneko, A. Yamashita","doi":"10.2316/Journal.206.2016.3.206-4342","DOIUrl":"https://doi.org/10.2316/Journal.206.2016.3.206-4342","url":null,"abstract":"This paper presents an autonomous meal support robot system that can handle non-rigid solid food. The robot system is equipped with a laser range finder (LRF) and a manipulator holding a spoon. The LRF measures the 3D coordinates of surface points belonging to food on a plate. Then the robot determines the position of food surface to scoop, and the manipulator moves according to the calculated trajectory. The system has an advantage that preparation of food cutting in bite-size is not required. The proposed scooping control was implemented and verified in experiment with two kinds of non-rigid solid foods. It was shown that the robot can scoop foods for the most part with high success rate.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122715861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous navigation and sign detector learning","authors":"L. Ellis, N. Pugeault, K. Ofjall, J. Hedborg, R. Bowden, M. Felsberg","doi":"10.1109/WORV.2013.6521929","DOIUrl":"https://doi.org/10.1109/WORV.2013.6521929","url":null,"abstract":"This paper presents an autonomous robotic system that incorporates novel Computer Vision, Machine Learning and Data Mining algorithms in order to learn to navigate and discover important visual entities. This is achieved within a Learning from Demonstration (LfD) framework, where policies are derived from example state-to-action mappings. For autonomous navigation, a mapping is learnt from holistic image features (GIST) onto control parameters using Random Forest regression. Additionally, visual entities (road signs e.g. STOP sign) that are strongly associated to autonomously discovered modes of action (e.g. stopping behaviour) are discovered through a novel Percept-Action Mining methodology. The resulting sign detector is learnt without any supervision (no image labeling or bounding box annotations are used). The complete system is demonstrated on a fully autonomous robotic platform, featuring a single camera mounted on a standard remote control car. The robot carries a PC laptop, that performs all the processing on board and in real-time.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128730443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}