M. Sanfourche, B. L. Saux, A. Plyer, G. L. Besnerais
Environment mapping & interpretation by drone
2015 Joint Urban Remote Sensing Event (JURSE), published 2015-06-11. DOI: 10.1109/JURSE.2015.7120454
In this paper we present the processing chain for geometric and semantic mapping of a drone's environment, developed for search-and-rescue purposes. A precise 3D model of the environment is computed from video and, if available, Lidar data captured during the drone flight. Semantic mapping is then performed by interactive learning on the model, enabling generic object detection. Finally, moving objects are tracked in the video stream and localized in the 3D model, giving a global view of the situation. We assess our system on real data captured at various test locations.
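The three-stage chain in the abstract (3D modelling, interactive semantic labelling, tracking with localization) can be sketched as follows. This is a minimal illustrative skeleton only: every function name and data structure here is an assumption for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of the three-stage processing chain from the abstract.
# All names and representations are illustrative assumptions.

def build_3d_model(video_frames, lidar_points=None):
    """Stage 1: fuse video frames (and optional Lidar) into one point model.
    Each point is tagged with its source frame index, or 'lidar'."""
    points = [(f, x) for f, frame in enumerate(video_frames) for x in frame]
    if lidar_points:
        points.extend(("lidar", p) for p in lidar_points)
    return points

def semantic_map(model, user_labels):
    """Stage 2: interactive learning stand-in - attach operator-provided
    class labels to model points; unlabeled points stay generic."""
    return {pt: user_labels.get(pt, "unlabeled") for pt in model}

def track_objects(detections_per_frame):
    """Stage 3: group per-frame detections of the same object id into
    tracks of (frame_time, position), localizable in the 3D model."""
    tracks = {}
    for t, detections in enumerate(detections_per_frame):
        for obj_id, pos in detections.items():
            tracks.setdefault(obj_id, []).append((t, pos))
    return tracks
```

A run would chain the stages: build the model once after the flight, label it interactively, then feed per-frame detections into the tracker to obtain time-stamped trajectories in the common 3D frame.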