{"title":"场景线框图素描的无人机","authors":"R. Santos, X. López, Xosé R. Fernández-Vidal","doi":"10.1109/AIPR.2017.8457938","DOIUrl":null,"url":null,"abstract":"This paper introduces novel insights to improve the state of the art line-based unsupervised observation and abstraction models of urban environments. The scene observation is performed by an UAV, using self-detected and matched straight segments from streamed video frames. The increasing use of autonomous UAV s inside buildings and human built structures demands new accurate and comprehensive representations for their environment. Most of the 3D scene abstraction methods published are using invariant feature point matching, nevertheless some sparse 3D point clouds do not concisely represent the structure of the environment. Likewise, line clouds constructed by short and redundant segments with unaccurate directions will limit the understanding of the objective scenes, that include environments with no texture, or whose texture resembles a repetitive pattern. The presented approach is based on observation and representation models using the straight line segments, whose resemble the limits of an urban indoor or outdoor environment. The goal of the work is to get a better 3D representation for future autonomous UAV.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Scene wireframes sketching for UAVs\",\"authors\":\"R. Santos, X. López, Xosé R. Fernández-Vidal\",\"doi\":\"10.1109/AIPR.2017.8457938\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper introduces novel insights to improve the state of the art line-based unsupervised observation and abstraction models of urban environments. The scene observation is performed by an UAV, using self-detected and matched straight segments from streamed video frames. The increasing use of autonomous UAV s inside buildings and human built structures demands new accurate and comprehensive representations for their environment. Most of the 3D scene abstraction methods published are using invariant feature point matching, nevertheless some sparse 3D point clouds do not concisely represent the structure of the environment. Likewise, line clouds constructed by short and redundant segments with unaccurate directions will limit the understanding of the objective scenes, that include environments with no texture, or whose texture resembles a repetitive pattern. The presented approach is based on observation and representation models using the straight line segments, whose resemble the limits of an urban indoor or outdoor environment. 
The goal of the work is to get a better 3D representation for future autonomous UAV.\",\"PeriodicalId\":128779,\"journal\":{\"name\":\"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"volume\":\"2 3\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2017.8457938\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2017.8457938","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper introduces novel insights to improve state-of-the-art line-based unsupervised observation and abstraction models of urban environments. The scene observation is performed by a UAV, using straight segments that are self-detected and matched across streamed video frames. The increasing use of autonomous UAVs inside buildings and other human-built structures demands new, accurate, and comprehensive representations of their environment. Most published 3D scene abstraction methods rely on invariant feature point matching; however, sparse 3D point clouds do not concisely represent the structure of the environment. Likewise, line clouds built from short, redundant segments with inaccurate directions limit the understanding of the target scenes, which include environments with no texture or whose texture resembles a repetitive pattern. The presented approach is based on observation and representation models using straight line segments, which resemble the boundaries of an urban indoor or outdoor environment. The goal of this work is to obtain a better 3D representation for future autonomous UAVs.
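The abstract does not specify which segment detector or matcher the authors use, so the following is only a minimal illustrative sketch of the first stage of such a pipeline: reading streamed video frames and detecting straight segments per frame. It uses OpenCV's probabilistic Hough transform as a stand-in detector; the input filename, parameter values, and the helper function name are hypothetical.

```python
# Illustrative sketch only: the paper's own segment detector and matcher are not
# described in the abstract. OpenCV's probabilistic Hough transform stands in here
# to show the general per-frame detection step of a line-based pipeline.
import cv2
import numpy as np

def detect_segments(frame, canny_lo=50, canny_hi=150, min_len=40, max_gap=5):
    """Detect straight line segments in one video frame.

    Returns an (N, 4) array of segments as (x1, y1, x2, y2),
    or an empty array when nothing is found.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=min_len, maxLineGap=max_gap)
    return lines.reshape(-1, 4) if lines is not None else np.empty((0, 4))

if __name__ == "__main__":
    cap = cv2.VideoCapture("uav_stream.mp4")  # hypothetical UAV video stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        segments = detect_segments(frame)
        # Segments from consecutive frames would then be matched and
        # triangulated into a 3D wireframe; that stage is paper-specific
        # and omitted here.
        print(f"segments in frame: {len(segments)}")
    cap.release()
```

In practice, a dedicated line segment detector (e.g. LSD or a fast line detector) and a descriptor-based matcher would likely replace the Hough step, since short, redundant, or misdirected segments are exactly the artifacts the paper argues against.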