Vision recognition using shape context for autonomous underwater sampling

K. McBryan, D. Akin
DOI: 10.1109/AUV.2012.6380730
Published in: 2012 IEEE/OES Autonomous Underwater Vehicles (AUV), 2012-12-13
Citations: 1

Abstract

The ocean floor is one of the few remaining unexplored places on the planet. Underwater vehicles, both teleoperated and autonomous, have been built to take images of the ocean floor. The depth that a teleoperated vehicle can achieve is limited by its tether. Autonomous vehicles are able to study the deepest parts of the ocean without a complex tether system. These vehicles, while being great at mapping the ocean floor, are not able to autonomously retrieve samples. In order to retrieve samples the vehicle must: know what objects look like, correctly identify new instances of the target object, estimate the pose so the manipulator can grab it, and retrieve its coordinates in 3D space. Color filtering, shape context and the use of stereovision have been used to autonomously locate, identify, and estimate the pose of objects. Color filtering allows the image to be filtered so that only objects of similar color remain and extraneous information can be disregarded. Shape context matches the shape, as defined by the edge pixels, of each potential target to a known object. Shape context uses a costing function to determine if the potential target is a match to the known object. The costing function takes into account the amount of 'bending energy' it takes to make the shape of the potential target conform to that of the known object. This gives a metric of how well the match is between the potential target and a known object and is done for both the left and right cameras. Once objects have been identified in each image, calibration parameters can be used to retrieve the 3D position of the object. This allows a manipulator on an underwater vehicle to autonomously sample targets.
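The color-filtering step described in the abstract can be sketched in a few lines of NumPy: keep only pixels close to a known target color and discard everything else. The target color and tolerance below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def color_mask(image, target_rgb, tol=40):
    """Boolean mask keeping pixels whose RGB values are all within `tol` of the target."""
    diff = np.abs(image.astype(int) - np.array(target_rgb, dtype=int))
    return np.all(diff <= tol, axis=-1)

# tiny synthetic "image": one yellow-ish pixel on a blue background
img = np.array([[[0, 0, 200], [220, 210, 30]],
                [[0, 10, 190], [0, 0, 205]]], dtype=np.uint8)
mask = color_mask(img, target_rgb=(230, 220, 20), tol=50)
# only the yellow pixel at row 0, column 1 survives the filter
```

In practice a perceptual color space such as HSV is more robust underwater than raw RGB, since attenuation shifts brightness far more than hue; the RGB threshold here is only the simplest possible sketch.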
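The shape-context matching step can be illustrated with a minimal descriptor and a chi-squared matching cost, following the general log-polar formulation of Belongie et al. The bin counts and radial edges are illustrative choices, and the thin-plate-spline "bending energy" term the paper adds to its costing function is omitted here for brevity:

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """One log-polar histogram per point over the relative positions of all other points."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = pts[None, :, :] - pts[:, None, :]            # pairwise offset vectors
    r = np.linalg.norm(d, axis=-1)
    theta = np.arctan2(d[..., 1], d[..., 0]) % (2 * np.pi)
    mean_r = r[r > 0].mean()                         # scale-normalize by mean distance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r)
    descs = np.zeros((n, n_r, n_theta))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rb = np.searchsorted(r_edges, r[i, j] / mean_r)
            if rb >= n_r:
                continue                             # outside the outermost ring
            tb = int(theta[i, j] / (2 * np.pi) * n_theta) % n_theta
            descs[i, rb, tb] += 1
    return descs.reshape(n, -1)

def chi2_cost(h1, h2, eps=1e-9):
    """Chi-squared matching cost between two shape-context histograms (0 = identical)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# translated copies of a shape produce identical descriptors (translation invariance)
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
descs_a = shape_context(square)
descs_b = shape_context(square + 10.0)
cost = chi2_cost(descs_a[0], descs_b[0])
```

A full matcher would solve a bipartite assignment between the two point sets (e.g. the Hungarian algorithm) and sum the per-correspondence costs; the function above shows only the per-point descriptor comparison that such a matcher is built on.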
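Once the target has been identified in both the left and right images, a calibrated, rectified stereo pair lets the 3D position be recovered by simple triangulation from the horizontal disparity. The intrinsics below (focal length, baseline, principal point) are made-up illustrative values, not the paper's calibration:

```python
def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """3D point in the left-camera frame from a match in a rectified stereo pair.

    f: focal length in pixels; baseline: camera separation in metres;
    (cx, cy): principal point. Assumes the match lies on the same image row.
    """
    disparity = u_left - u_right          # horizontal shift between the two views
    Z = f * baseline / disparity          # depth from similar triangles
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z

# hypothetical calibration: 500 px focal length, 10 cm baseline, 640x480 center
X, Y, Z = triangulate(u_left=420, u_right=400, v=240,
                      f=500.0, baseline=0.1, cx=320.0, cy=240.0)
```

With a 20-pixel disparity this places the target 2.5 m ahead and 0.5 m to the side, the kind of camera-frame coordinate that would then be transformed into the manipulator's frame for grasping.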