2013 IEEE Workshop on Robot Vision (WORV): Latest Publications

Calibration of a network of Kinect sensors for robotic inspection over a large workspace
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521936
Authors: R. Macknojia, A. Chávez-Aragón, P. Payeur, R. Laganière
Abstract: This paper presents an approach for calibrating a network of Kinect devices used to guide robotic arms with rapidly acquired 3D models. The method takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and achieves registration accuracy within the range of the depth accuracy provided by this technology. The internal calibration of each sensor, between its color and depth cameras, is also presented. The resulting system is developed to inspect large objects, such as vehicles, positioned within an enlarged field of view created by the network of RGB-D sensors.
Citations: 50
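The registration described above brings several Kinect sensors into a single workspace frame. The paper's own procedure is not reproduced in this listing; the sketch below shows only one standard ingredient such a calibration typically relies on, a closed-form rigid alignment (Kabsch/SVD) of corresponding 3D points seen by two depth sensors. Function names and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. calibration
    target corners observed by two different depth sensors, in metres.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                  # proper rotation, det(R) = +1
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known pose between two "sensors" from noisy matches.
rng = np.random.default_rng(0)
pts_a = rng.uniform(-1.0, 1.0, size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 2.0])
pts_b = pts_a @ R_true.T + t_true + 0.002 * rng.normal(size=pts_a.shape)
R_est, t_est = rigid_transform(pts_a, pts_b)
print(np.allclose(R_est, R_true, atol=0.01), np.round(t_est, 3))
```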
Sensitivity evaluation of embedded code detection in imperceptible structured light sensing
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521910
Authors: Jingwen Dai, R. Chung
Abstract: We address the use of pre-trained primitive-shape detectors for identifying embedded codes in imperceptible structured light (ISL) sensing. The accuracy of the whole sensing system is determined by the performance of such detectors. In training-based methods, generalization of the training results is often an issue, especially when the working scenario can vary substantially between the training stage and the operation stage. This paper presents sensitivity evaluation results for embedded code detection in ISL sensing, together with the associated statistical analysis. The results show that the scheme of embedding imperceptible codes into normal video projection remains effective despite variations in sensing distance, projection-surface orientation, projection-surface shape, projection-surface texture, and hardware configuration. This finding indicates the feasibility of integrating the ISL method into robotic systems operating over a wide range of circumstances.
Citations: 0
Trinocular visual odometry for divergent views with minimal overlap
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521943
Authors: Jaeheon Jeong, J. Mulligan, N. Correll
Abstract: We present a visual odometry algorithm for trinocular systems with divergent views and minimal overlap. Whereas bundle adjustment is the preferred method for multi-view visual odometry problems, it is infeasible when the number of features in the images, such as in HD videos, is large. We propose a divide-and-conquer approach that reduces the trinocular visual odometry problem to five monocular visual odometry problems: one for each individual camera sequence, and two more using features matched temporally from consecutive images from the center to the left and right cameras, respectively. Unlike bundle adjustment, whose computational complexity is O(n^3), the proposed approach matches features only between neighboring cameras and can therefore be executed in O(n^2). Assuming constant motion of the cameras, temporal tracking makes up for the missing overlap between cameras, as objects from the center view eventually appear in the left or right camera. The scale factors that cannot be determined by monocular visual odometry are computed by constructing a system of equations based on the known relative camera poses and the five monocular VO estimates. The system is solved using a weighted least-squares scheme and remains over-determined even when the camera path follows a straight line. We evaluate the resulting system using synthetic and real video sequences recorded for a virtual exercise environment.
Citations: 7
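The abstract describes recovering the scale factors that monocular VO cannot determine by solving an over-determined linear system with weighted least squares. The exact equations are not given in this listing, so the sketch below only illustrates the weighted least-squares step on a made-up system with five unknown scales; the matrix entries stand in for quantities that would come from the known inter-camera poses and the five monocular estimates.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_s || W^(1/2) (A s - b) ||^2 for the scale vector s.

    A : (m, n) design matrix relating the unknown per-sequence scales to
        quantities derived from the known relative camera poses.
    b : (m,)   right-hand side.
    w : (m,)   per-equation weights (e.g. inverse measurement variance).
    """
    W_sqrt = np.sqrt(w)[:, None]
    return np.linalg.lstsq(W_sqrt * A, np.sqrt(w) * b, rcond=None)[0]

# Toy over-determined example with five unknown scales, matching the five
# monocular VO estimates in the paper (all numbers below are made up).
rng = np.random.default_rng(1)
s_true = np.array([1.0, 0.9, 1.1, 1.05, 0.95])
A = rng.normal(size=(12, 5))
b = A @ s_true + 0.01 * rng.normal(size=12)
w = np.full(12, 1.0)
print(np.round(weighted_least_squares(A, b, w), 3))
```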
Dense range images from sparse point clouds using multi-scale processing
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521928
Authors: L. Do, Lingni Ma, P. de With
Abstract: Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling and robot navigation. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such applications. Our proposal addresses the problems of sparse data, mixed pixels at discontinuities, and occlusions by combining multi-scale range images. The visual results show that our algorithm can create high-resolution dense range images with sharp discontinuities, while preserving the topology of objects even in environments that contain occlusions. To demonstrate the effectiveness of our approach, we propose an iterative perspective-to-point algorithm that aligns the edges between the color image and the range image from various viewpoints. Experimental results from 46 viewpoints show that the camera pose can be corrected when using high-accuracy dense range images, so that 3D reconstruction or 3D rendering obtains clearly higher quality.
Citations: 0
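As a rough illustration of how a sparse cloud becomes a dense range image, the sketch below projects points into a depth map with a pinhole model and then fills holes from progressively coarser block averages. This is a generic multi-scale fill under assumed intrinsics, not the authors' algorithm, which additionally handles mixed pixels at discontinuities and occlusions.

```python
import numpy as np

def splat_depth(points, K, shape):
    """Project 3D points (camera frame, metres) into a sparse depth image.

    Assumes a pinhole model with zero skew; K is the 3x3 intrinsic matrix.
    """
    h, w = shape
    depth = np.zeros(shape)
    z = points[:, 2]
    pts, z = points[z > 0], z[z > 0]
    uv = (pts[:, :2] / z[:, None]) @ K[:2, :2].T + K[:2, 2]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[keep], v[keep], z[keep]
    order = np.argsort(-z)            # write far points first so near ones win
    depth[v[order], u[order]] = z[order]
    return depth

def fill_from_coarse(depth, levels=4):
    """Fill empty pixels from progressively coarser block-averaged maps.

    Assumes the image size is divisible by 2**levels (e.g. 480 x 640).
    """
    out = depth.copy()
    for lvl in range(1, levels + 1):
        s = 2 ** lvl
        blocks = depth.reshape(depth.shape[0] // s, s, depth.shape[1] // s, s)
        counts = (blocks > 0).sum(axis=(1, 3))
        coarse = blocks.sum(axis=(1, 3)) / np.maximum(counts, 1)
        up = np.kron(coarse, np.ones((s, s)))   # nearest-neighbour upsample
        out = np.where(out > 0, out, up)
    return out

# Toy usage: a random sparse cloud splatted into a VGA map and densified.
rng = np.random.default_rng(2)
K = np.array([[525.0, 0.0, 319.5], [0.0, 525.0, 239.5], [0.0, 0.0, 1.0]])
pts = np.column_stack([rng.uniform(-2, 2, 5000),
                       rng.uniform(-1.5, 1.5, 5000),
                       rng.uniform(1, 4, 5000)])
sparse = splat_depth(pts, K, (480, 640))
dense = fill_from_coarse(sparse)
print((sparse > 0).mean(), (dense > 0).mean())
```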
Real-time Collision Risk Estimation based on Pearson's Correlation Coefficient
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521911
Authors: A. Miranda Neto, A. Victorino, I. Fantoni, J. V. Ferreira
Abstract: Perception of the environment is a major issue for autonomous robots. In previous work, we proposed a visual perception system based on an automatic image-discarding method as a simple way to improve the performance of a real-time navigation system. In this paper, we address obstacle avoidance for vehicles in dynamic and unknown environments and propose a new method for collision risk estimation (CRE) based on Pearson's correlation coefficient (PCC). Applying the PCC to real-time CRE has not been done before, making the concept unique. This paper provides a novel way of calculating collision risk and applying it to obstacle avoidance using the PCC. The real-time perception system has been evaluated on real data obtained with our intelligent vehicle.
Citations: 6
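The abstract does not spell out here which image signals the Pearson coefficient is computed between, so the sketch below makes one plausible assumption: it correlates a region of interest ahead of the vehicle across consecutive grayscale frames and maps low correlation (rapid change) to a higher risk score. The mapping and the region choice are illustrative, not the paper's formulation.

```python
import numpy as np

def pearson_corr(a, b):
    """Pearson's correlation coefficient between two equally sized arrays."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def collision_risk(prev_roi, curr_roi):
    """Map correlation in [-1, 1] to a crude risk score in [0, 1]:
    a rapidly changing region ahead (low correlation) scores as higher risk."""
    return 0.5 * (1.0 - pearson_corr(prev_roi, curr_roi))

# Toy usage with two synthetic frames of the region ahead of the vehicle.
rng = np.random.default_rng(3)
frame_prev = rng.integers(0, 256, size=(120, 160))
frame_curr = np.roll(frame_prev, 12, axis=1)    # apparent lateral motion
print(round(collision_risk(frame_prev, frame_curr), 3))
```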
A wireless robotic video laparo-endoscope for minimal invasive surgery
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521931
Authors: A. Alqassis, C. A. Castro, S. Smith, T. Ketterl, Yu Sun, P. P. Savage, R. Gitlin
Abstract: This paper describes the design, prototyping and deployment of a network of wireless Miniature Anchored Robotic Videoscopes for Expedited Laparoscopy (MARVEL). The MARVEL robotic Camera Modules (CMs) remove the need for a dedicated trocar port for an external laparoscope, additional incisions for surgical instrumentation, camera cabling for power, video and xenon light, and an assistant in the operating room to hold and position the laparoscope. The system includes: (1) multiple MARVEL CMs featuring a wirelessly controlled pan/tilt camera platform, which provides a full-hemisphere field of view inside the abdominal cavity from different angles, wirelessly controlled focus, and a wireless illumination control system; and (2) a Master Control Module (MCM) that provides a near-zero-latency wireless video link, independent wireless control of multiple MARVEL CMs, digital zoom, manual focus, and a wireless Human-Machine Interface (HMI) that gives the surgeon full control over all functions of the CMs. In-vivo experiments on a porcine subject were carried out to test the performance of the system.
Citations: 2
Near surface light source estimation from a single view image
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521920
Authors: Wu Yuan Xie, C. Chung
Abstract: Several techniques have been developed for estimating the light source position in indoor or outdoor environments. However, those techniques assume that the light source can be approximated by a point, an assumption that cannot be applied safely to, for example, some cases of photometric stereo reconstruction in which the light source is placed so close to a small target that its size cannot be ignored. In this paper, we present a novel approach for estimating the light source from a single image of a scene illuminated by a near surface light source. We propose to employ a shiny sphere and a Lambertian plate as the light probe for locating the light source position, where the albedo variance over the Lambertian plate is used as the basis of the objective function. We also illustrate the convexity of this objective function and propose an efficient way to search for its optimum, i.e., the source position. We test the calibration results on real images by means of photometric stereo reconstruction and image rendering, and both tests confirm the accuracy of our estimation framework.
Citations: 2
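A minimal sketch of the albedo-variance idea from the abstract, under simplifying assumptions: the source is treated as a single near point light, the plate is horizontal with a known normal, and the camera term is folded into the observed intensities. The paper's actual probe combines a shiny sphere with the Lambertian plate and handles a surface (non-point) source.

```python
import numpy as np

def albedo_variance(source_pos, pts, normal, intensity):
    """Variance-of-albedo objective sketched from the abstract's idea.

    For a candidate source position, invert a near-light Lambertian model
    I = rho * max(n . l, 0) / r^2 at every plate point and return the
    variance of the recovered albedos rho (zero at the true position,
    for noise-free data under this model).
    """
    d = source_pos - pts                       # point-to-source vectors
    r2 = (d * d).sum(axis=1)
    shading = np.clip((d / np.sqrt(r2)[:, None]) @ normal, 1e-6, None) / r2
    return (intensity / shading).var()

# Toy usage: recover the source height above a horizontal Lambertian plate.
rng = np.random.default_rng(4)
pts = np.column_stack([rng.uniform(-0.1, 0.1, 200),
                       rng.uniform(-0.1, 0.1, 200),
                       np.zeros(200)])
normal = np.array([0.0, 0.0, 1.0])
true_src = np.array([0.02, -0.03, 0.25])
d = true_src - pts
r2 = (d * d).sum(axis=1)
intensity = 0.7 * np.clip((d / np.sqrt(r2)[:, None]) @ normal, 0, None) / r2

heights = np.linspace(0.1, 0.5, 41)
scores = [albedo_variance(np.array([0.02, -0.03, h]), pts, normal, intensity)
          for h in heights]
print(round(float(heights[int(np.argmin(scores))]), 3))   # expected: 0.25
```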
Active view planning for human observation through an RGB-D camera
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521923
Authors: Jianhao Du, W. Sheng
Abstract: Human sensing is an important topic for robotic applications. In this paper, we propose an active view planning approach for human observation on a mobile robot platform, together with the associated sensor data processing. The sensor adopted in our research is an inexpensive RGB-D camera. A new measure based on distance and orientation information is introduced to evaluate the quality of the viewpoint when the robot detects the human subject. The results show that the robot can move to the best viewpoint based on the proposed approach.
Citations: 2
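The paper's measure combines distance and orientation to score candidate viewpoints; its exact form is not given in this listing. The sketch below is an assumed stand-in: a Gaussian preference for a comfortable sensing distance multiplied by a frontal-view preference, with all constants made up.

```python
import numpy as np

def viewpoint_quality(distance_m, facing_angle_rad,
                      preferred_dist=2.0, dist_sigma=0.75):
    """Illustrative viewpoint score in [0, 1]; the constants and the exact
    form are assumptions, not the measure defined in the paper.

    distance_m       : robot-to-person distance in metres.
    facing_angle_rad : angle between the person's facing direction and the
                       person-to-camera direction (0 = frontal view).
    """
    dist_term = np.exp(-0.5 * ((distance_m - preferred_dist) / dist_sigma) ** 2)
    orient_term = 0.5 * (1.0 + np.cos(facing_angle_rad))   # 1 frontal, 0 behind
    return dist_term * orient_term

# Compare a few candidate viewpoints and pick the best-scoring one.
candidates = [(1.0, np.pi / 2), (2.1, 0.2), (3.5, 0.0)]
best = max(candidates, key=lambda c: viewpoint_quality(*c))
print(best, round(viewpoint_quality(*best), 3))
```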
Meal support system with spoon using laser range finder and manipulator
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.2316/Journal.206.2016.3.206-4342
Authors: Yuichi Kobayashi, Yutaro Ohshima, T. Kaneko, A. Yamashita
Abstract: This paper presents an autonomous meal support robot system that can handle non-rigid solid food. The robot system is equipped with a laser range finder (LRF) and a manipulator holding a spoon. The LRF measures the 3D coordinates of surface points belonging to the food on a plate. The robot then determines the position on the food surface to scoop, and the manipulator moves along the calculated trajectory. The system has the advantage that the food does not need to be cut into bite-sized pieces in advance. The proposed scooping control was implemented and verified in experiments with two kinds of non-rigid solid food. The results show that the robot can scoop most of the food with a high success rate.
Citations: 18
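The abstract says the LRF measures 3D surface points of the food and the robot then picks a position to scoop. The selection rule below is a hypothetical heuristic (highest surface point inside the plate rim), included only to make the data flow concrete; the paper's criterion for non-rigid food may differ.

```python
import numpy as np

def scoop_target(points, plate_center, plate_radius):
    """Pick a scooping point from LRF surface points (N, 3), metres, z up.

    Heuristic sketch only: keep points inside the plate rim and return the
    highest remaining surface point as the spoon's approach target.
    """
    in_plate = np.linalg.norm(points[:, :2] - plate_center, axis=1) < plate_radius
    food = points[in_plate]
    if food.size == 0:
        return None
    return food[np.argmax(food[:, 2])]

# Toy usage: a mound of "food" sampled on a 0.12 m radius plate.
rng = np.random.default_rng(5)
xy = rng.uniform(-0.12, 0.12, size=(2000, 2))
z = np.clip(0.04 - 2.0 * (xy ** 2).sum(axis=1), 0, None)
print(np.round(scoop_target(np.column_stack([xy, z]), np.zeros(2), 0.12), 3))
```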
Autonomous navigation and sign detector learning
2013 IEEE Workshop on Robot Vision (WORV) | Pub Date: 2013-05-30 | DOI: 10.1109/WORV.2013.6521929
Authors: L. Ellis, N. Pugeault, K. Ofjall, J. Hedborg, R. Bowden, M. Felsberg
Abstract: This paper presents an autonomous robotic system that incorporates novel computer vision, machine learning and data mining algorithms in order to learn to navigate and to discover important visual entities. This is achieved within a Learning from Demonstration (LfD) framework, where policies are derived from example state-to-action mappings. For autonomous navigation, a mapping is learnt from holistic image features (GIST) onto control parameters using Random Forest regression. Additionally, visual entities (road signs, e.g. a STOP sign) that are strongly associated with autonomously discovered modes of action (e.g. stopping behaviour) are discovered through a novel Percept-Action Mining methodology. The resulting sign detector is learnt without any supervision (no image labels or bounding-box annotations are used). The complete system is demonstrated on a fully autonomous robotic platform featuring a single camera mounted on a standard remote-control car. The robot carries a laptop that performs all processing on board and in real time.
Citations: 9
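For the navigation part, the abstract states that control parameters are regressed from holistic GIST features with a Random Forest. The sketch below shows that mapping with scikit-learn, using random vectors in place of real GIST descriptors and a synthetic steering signal; only the regressor choice comes from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-ins for demonstration data: rows would be GIST descriptors computed
# from camera frames, targets the steering commands recorded during driving.
rng = np.random.default_rng(6)
gist_features = rng.normal(size=(500, 512))           # 512-D holistic descriptors
steering = np.tanh(gist_features[:, :8].sum(axis=1))  # synthetic demonstrated control

# Learn the state-to-action mapping from the demonstrations.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(gist_features, steering)

# At run time, each new frame's descriptor is mapped straight to a command.
new_frame_descriptor = rng.normal(size=(1, 512))
print(float(model.predict(new_frame_descriptor)[0]))
```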