{"title":"实时姿态确定和现实注册系统","authors":"C. Cohen, G. Beach, D. Haanpaa, C. Jacobus","doi":"10.1117/12.384867","DOIUrl":null,"url":null,"abstract":"We have developed and demonstrated a vision-based pose determination and reality registration system for identifying objects in an unstructured visual environment. A wire-frame template of the object to be identified is compared to the input images form one or more cameras. If the object is found, an output of the object's position and orientation is computed. The placement of the template can be performed by a human in-the-loop, or through an automated real-time front end system. The three steps for classification and pose determination are comprised of two estimation modules and a module which refines the estimates to determine an answer. The first module in the sequence uses input images and models to generate a coarse pose estimate for the object. The second module in the sequence uses the estimates from the coarse pose estimation module, input images, and the model to further refine the pose. The last module in the sequence uses the fine pose estimation, the images, and the model to determine an exact match between the model and the image.","PeriodicalId":354140,"journal":{"name":"Applied Imaging Pattern Recognition","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Real-time pose determination and reality registration system\",\"authors\":\"C. Cohen, G. Beach, D. Haanpaa, C. Jacobus\",\"doi\":\"10.1117/12.384867\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We have developed and demonstrated a vision-based pose determination and reality registration system for identifying objects in an unstructured visual environment. A wire-frame template of the object to be identified is compared to the input images form one or more cameras. If the object is found, an output of the object's position and orientation is computed. The placement of the template can be performed by a human in-the-loop, or through an automated real-time front end system. The three steps for classification and pose determination are comprised of two estimation modules and a module which refines the estimates to determine an answer. The first module in the sequence uses input images and models to generate a coarse pose estimate for the object. The second module in the sequence uses the estimates from the coarse pose estimation module, input images, and the model to further refine the pose. 
The last module in the sequence uses the fine pose estimation, the images, and the model to determine an exact match between the model and the image.\",\"PeriodicalId\":354140,\"journal\":{\"name\":\"Applied Imaging Pattern Recognition\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-05-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Imaging Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.384867\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Imaging Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.384867","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Real-time pose determination and reality registration system
We have developed and demonstrated a vision-based pose determination and reality registration system for identifying objects in an unstructured visual environment. A wire-frame template of the object to be identified is compared to the input images from one or more cameras. If the object is found, its position and orientation are computed and output. The placement of the template can be performed by a human in the loop or by an automated real-time front-end system. The classification and pose determination process consists of three steps: two estimation modules followed by a module that refines the estimates to determine the final answer. The first module in the sequence uses the input images and models to generate a coarse pose estimate for the object. The second module uses the estimate from the coarse pose estimation module, the input images, and the model to further refine the pose. The last module uses the fine pose estimate, the images, and the model to determine an exact match between the model and the image.
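The abstract describes a coarse-to-fine, three-stage pipeline. The sketch below is a minimal, hypothetical rendering of that data flow in Python; the names (Pose, coarse_pose_estimate, fine_pose_estimate, verify_match, determine_pose) and the placeholder stage bodies are assumptions for illustration and do not come from the paper.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# Function and class names are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from typing import Optional, Sequence

import numpy as np


@dataclass
class Pose:
    """Object position (x, y, z) and orientation (roll, pitch, yaw)."""
    position: np.ndarray      # shape (3,)
    orientation: np.ndarray   # shape (3,), Euler angles in radians


def coarse_pose_estimate(images: Sequence[np.ndarray],
                         wireframe_model: np.ndarray) -> Pose:
    """Stage 1: produce a rough pose from the input images and the
    wire-frame model (e.g. from a human-placed template or an automated
    real-time front end)."""
    # Placeholder: a real system would compare the projected wire-frame
    # against image features to obtain an initial guess.
    return Pose(position=np.zeros(3), orientation=np.zeros(3))


def fine_pose_estimate(coarse: Pose,
                       images: Sequence[np.ndarray],
                       wireframe_model: np.ndarray) -> Pose:
    """Stage 2: refine the coarse estimate using the images and model."""
    # Placeholder refinement step.
    return coarse


def verify_match(fine: Pose,
                 images: Sequence[np.ndarray],
                 wireframe_model: np.ndarray) -> bool:
    """Stage 3: decide whether the refined pose yields an exact match
    between the model and the image."""
    # Placeholder acceptance test.
    return True


def determine_pose(images: Sequence[np.ndarray],
                   wireframe_model: np.ndarray) -> Optional[Pose]:
    """Run the coarse -> fine -> verification sequence and return the
    pose only if the final match succeeds."""
    coarse = coarse_pose_estimate(images, wireframe_model)
    fine = fine_pose_estimate(coarse, images, wireframe_model)
    return fine if verify_match(fine, images, wireframe_model) else None
```

The point of the sketch is only the staged data flow: each module consumes the previous module's estimate together with the same images and model, which is how the abstract describes the refinement sequence.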