{"title":"一种基于视觉的自主着陆方法","authors":"A. Cabrera-Ponce, J. Martínez-Carranza","doi":"10.1109/RED-UAS.2017.8101655","DOIUrl":null,"url":null,"abstract":"Monocular vision is frequently used in Micro Air Vehicles for many tasks such autonomous navigation, tracking, search and autonomous landing. To address this problem and in the context of autonomous landing of a MAV on a platform, we use a template-based matching in an image pyramid scheme in combination with an edge detector. Thus, the landing zone is localised via image processing in a frame-to-frame basis. Images are captured by the MAV's onboard camera of the MAV and processed with a multi-scale image processing strategy to detect the landing zone at different scales. We assessed our approach in real-time experiments using a Parrot Bebop 2.0 in outdoors and at different heights.","PeriodicalId":299104,"journal":{"name":"2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"A vision-based approach for autonomous landing\",\"authors\":\"A. Cabrera-Ponce, J. Martínez-Carranza\",\"doi\":\"10.1109/RED-UAS.2017.8101655\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Monocular vision is frequently used in Micro Air Vehicles for many tasks such autonomous navigation, tracking, search and autonomous landing. To address this problem and in the context of autonomous landing of a MAV on a platform, we use a template-based matching in an image pyramid scheme in combination with an edge detector. Thus, the landing zone is localised via image processing in a frame-to-frame basis. Images are captured by the MAV's onboard camera of the MAV and processed with a multi-scale image processing strategy to detect the landing zone at different scales. We assessed our approach in real-time experiments using a Parrot Bebop 2.0 in outdoors and at different heights.\",\"PeriodicalId\":299104,\"journal\":{\"name\":\"2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RED-UAS.2017.8101655\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RED-UAS.2017.8101655","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Monocular vision is frequently used in Micro Air Vehicles (MAVs) for many tasks, such as autonomous navigation, tracking, search, and autonomous landing. To address this problem, in the context of the autonomous landing of a MAV on a platform, we use template-based matching in an image pyramid scheme in combination with an edge detector. Thus, the landing zone is localised via image processing on a frame-to-frame basis. Images are captured by the MAV's onboard camera and processed with a multi-scale image processing strategy to detect the landing zone at different scales. We assessed our approach in real-time experiments using a Parrot Bebop 2.0 outdoors and at different heights.
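The abstract does not give implementation details, so the following is only a minimal sketch of how edge-based template matching over an image pyramid could be realised with OpenCV; the function name detect_landing_zone, the scale set, and the matching threshold are illustrative assumptions, not values from the paper.

```python
import cv2

def detect_landing_zone(frame, template, scales=(1.0, 0.75, 0.5, 0.25), threshold=0.6):
    """Sketch: locate a landing-pad template in a camera frame by matching
    edge maps at several scales (hypothetical parameters, not from the paper)."""
    # Match on edge maps so the result is less sensitive to illumination changes.
    frame_edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
    tmpl_edges = cv2.Canny(cv2.cvtColor(template, cv2.COLOR_BGR2GRAY), 50, 150)

    best = None  # (score, (x, y, w, h)) of the best match found so far
    for s in scales:
        # Resize the template to emulate the different apparent sizes of the
        # landing zone at different flight heights.
        t = cv2.resize(tmpl_edges, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        if t.shape[0] > frame_edges.shape[0] or t.shape[1] > frame_edges.shape[1]:
            continue
        res = cv2.matchTemplate(frame_edges, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val >= threshold and (best is None or max_val > best[0]):
            x, y = max_loc
            best = (max_val, (x, y, t.shape[1], t.shape[0]))
    return best  # None if no landing zone was detected in this frame
```

Run frame-to-frame on the onboard video stream, the returned bounding box would give the image-plane position of the platform that a landing controller could then track.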