Barrett Edwards, J. Archibald, Wade S. Fife, Dah-Jye Lee
{"title":"一种MAV精确目标着陆视觉系统","authors":"Barrett Edwards, J. Archibald, Wade S. Fife, Dah-Jye Lee","doi":"10.1109/CIRA.2007.382912","DOIUrl":null,"url":null,"abstract":"A field programmable gate array (FPGA) system implementation capable of being mounted onboard a micro aerial vehicle (MAV) (less than 5 pounds) that can perform the processing tasks necessary to identify and track a marked target landing site in real-time is presented. This implementation was designed to be an image processing subsystem that is mounted on a MAV to assist an autopilot system with vision-related tasks. This paper describes the FPGA vision system architecture and algorithms implemented to segment and locate a colored cloth target that specifies the exact landing location. Once the target landing site is identified, the exact location of the landing site is transmitted to the autopilot, which then implements the trajectory adjustments required to autonomously land the MAV on the target. Results of two flight test situations are presented. In the first situation, the MAV lands on a static target. The second situation includes a moving target, which in our tests was the back of a moving vehicle. This FPGA system is an application-specific configuration of the helios robotic vision platform developed at Brigham Young University.","PeriodicalId":301626,"journal":{"name":"2007 International Symposium on Computational Intelligence in Robotics and Automation","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"A Vision System for Precision MAV Targeted Landing\",\"authors\":\"Barrett Edwards, J. Archibald, Wade S. 
Fife, Dah-Jye Lee\",\"doi\":\"10.1109/CIRA.2007.382912\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A field programmable gate array (FPGA) system implementation capable of being mounted onboard a micro aerial vehicle (MAV) (less than 5 pounds) that can perform the processing tasks necessary to identify and track a marked target landing site in real-time is presented. This implementation was designed to be an image processing subsystem that is mounted on a MAV to assist an autopilot system with vision-related tasks. This paper describes the FPGA vision system architecture and algorithms implemented to segment and locate a colored cloth target that specifies the exact landing location. Once the target landing site is identified, the exact location of the landing site is transmitted to the autopilot, which then implements the trajectory adjustments required to autonomously land the MAV on the target. Results of two flight test situations are presented. In the first situation, the MAV lands on a static target. The second situation includes a moving target, which in our tests was the back of a moving vehicle. 
This FPGA system is an application-specific configuration of the helios robotic vision platform developed at Brigham Young University.\",\"PeriodicalId\":301626,\"journal\":{\"name\":\"2007 International Symposium on Computational Intelligence in Robotics and Automation\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-06-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2007 International Symposium on Computational Intelligence in Robotics and Automation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIRA.2007.382912\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 International Symposium on Computational Intelligence in Robotics and Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIRA.2007.382912","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Vision System for Precision MAV Targeted Landing
A field-programmable gate array (FPGA) system implementation capable of being mounted onboard a micro aerial vehicle (MAV) (less than 5 pounds) that can perform the processing tasks necessary to identify and track a marked target landing site in real time is presented. This implementation was designed to be an image processing subsystem, mounted on a MAV, that assists an autopilot system with vision-related tasks. This paper describes the FPGA vision system architecture and the algorithms implemented to segment and locate a colored cloth target that marks the exact landing location. Once the target landing site is identified, its exact location is transmitted to the autopilot, which then makes the trajectory adjustments required to autonomously land the MAV on the target. Results of two flight test situations are presented. In the first, the MAV lands on a static target. The second involves a moving target, which in our tests was the back of a moving vehicle. This FPGA system is an application-specific configuration of the Helios robotic vision platform developed at Brigham Young University.
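The color-segmentation step the abstract describes — isolating the colored cloth target and reporting its image location — can be illustrated with a minimal sketch. This is not the paper's actual FPGA algorithm or its parameters; the thresholding scheme, target color, and tolerance below are all illustrative assumptions.

```python
def segment_target(frame, target_rgb=(200, 40, 40), tol=60):
    """Hypothetical color-threshold segmentation: return the (x, y)
    centroid of pixels within `tol` of `target_rgb` per channel, or
    None if no pixels match. `frame` is a list of rows of (r, g, b)
    tuples. Thresholds and target color are illustrative, not the
    paper's values."""
    xs, ys = [], []
    tr, tg, tb = target_rgb
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # Accept a pixel if every channel is close to the target color.
            if abs(r - tr) <= tol and abs(g - tg) <= tol and abs(b - tb) <= tol:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    # Centroid of the matching region approximates the landing-site center.
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Tiny synthetic frame: a dark background with a red patch.
W, H = 8, 6
frame = [[(0, 0, 0)] * W for _ in range(H)]
for y in range(3, 5):
    for x in range(5, 7):
        frame[y][x] = (210, 30, 30)

print(segment_target(frame))  # → (5.5, 3.5)
```

In the paper's system, a location like this centroid would be streamed to the autopilot each frame so it can adjust the approach trajectory; the real implementation performs this segmentation in FPGA logic at frame rate rather than in software.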