Integration of semantic vision techniques for an autonomous robot platform
Charles M. Felps, Michael H. Fick, Keegan R. Kinkade, Jeremy Searock, J. Piepmeier
2010 42nd Southeastern Symposium on System Theory (SSST), pp. 243-247. Published 2010-03-07. DOI: 10.1109/SSST.2010.5442826
Citations: 1
Abstract
The Semantic Robot Vision Challenge is a research competition designed to advance the ability of agents to automatically acquire knowledge and use that knowledge to identify objects in an unknown, unstructured environment. In this paper, we present the complete design and implementation of a robotic system built to compete in the Semantic Robot Vision Challenge. The system takes a text document listing specific objects and searches an online visual database to find a training image for each one. It then autonomously navigates through a cluttered environment, captures images of objects in the area, and uses the training images to identify objects in the captured images. The system is complete and robust, and it achieved first place in the 2009 competition.
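The identification stage described above (matching captured images against downloaded training images) can be sketched in miniature. The paper's actual vision algorithm is not specified in the abstract, so the snippet below is a hypothetical stand-in: it compares grayscale intensity histograms via histogram intersection and reports the best-matching training label, or no match if nothing is similar enough. The function names (`histogram`, `identify`), the bin count, and the similarity threshold are all illustrative assumptions, not the authors' method.

```python
from collections import Counter

def histogram(pixels, bins=8):
    """Quantize 0-255 grayscale pixel values into a normalized histogram."""
    counts = Counter(min(p * bins // 256, bins - 1) for p in pixels)
    total = len(pixels)
    return [counts.get(i, 0) / total for i in range(bins)]

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def identify(captured_pixels, training_set, threshold=0.7):
    """Return the label of the best-matching training image,
    or None if no training histogram clears the threshold."""
    h = histogram(captured_pixels)
    best_label, best_score = None, threshold
    for label, train_pixels in training_set.items():
        score = similarity(h, histogram(train_pixels))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A real system in this setting would use a far more discriminative representation (e.g. local feature descriptors with geometric verification), but the control flow, comparing each captured image against every training image and keeping the best score above a rejection threshold, follows the same shape.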