Research on Application of Computer Vision Assist Technology in High-precision UAV Navigation and Positioning
Yu Cao, Hongyu Liang, Yujie Fang, Wenliang Peng
2020 IEEE 3rd International Conference on Information Systems and Computer Aided Education (ICISCAE), September 27, 2020
DOI: 10.1109/ICISCAE51034.2020.9236821
Traditional UAV human-computer interaction requires specialized equipment and professional training, so convenient and novel interaction methods are often more popular; using ordinary cameras, UAV gesture control systems based on computer vision and deep learning have been studied. Against this background, this paper proposes a new method for extracting navigation information during the terminal phase of UAV autonomous landing, based on integrated monocular vision / INS navigation. The method only needs to detect two feature points on the runway; combined with the attitude angle information from the inertial navigation system, the position vector of the aircraft relative to the airport can be computed directly. Because straight lines are highly observable, when the feature points are difficult to extract directly, the two runway edge lines and the landing threshold line can be detected instead, and the intersections of the threshold line with the runway edges taken as the feature points. The algorithm is derived in detail from the geometric principles of imaging. Finally, a landing experiment shows that the feature extraction and coordinate conversion of the method are feasible.
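The core geometry the abstract describes — two known runway points observed in a monocular image, attitude angles supplied by the INS, and a directly computed position vector — can be sketched as follows. This is a minimal illustration of that principle, not the authors' algorithm: the function names, the ZYX Euler convention, and the pinhole intrinsics matrix K are all assumptions. With the rotation fixed by the INS, each image observation of a known world point constrains the camera center to a single ray, so two points yield a small linear system that can be solved directly.

```python
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """Camera-to-world rotation from attitude angles (radians),
    assuming a ZYX (yaw-pitch-roll) convention as an illustration."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def line_intersection_2d(l1, l2):
    """Intersection of two image lines in homogeneous form (a, b, c),
    i.e. ax + by + c = 0 -- how a threshold-line / runway-edge crossing
    could be turned into a feature point."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def camera_position(p1_px, p2_px, P1_w, P2_w, K, R_wc):
    """Recover the camera center C from two pixel observations of known
    world points, given intrinsics K and the camera-to-world rotation R_wc
    (from the INS attitude). Each observation gives C + t_i * d_i = P_i,
    a 6-equation, 5-unknown linear system solved by least squares."""
    A = np.zeros((6, 5))
    b = np.zeros(6)
    for i, (p_px, P_w) in enumerate([(p1_px, P1_w), (p2_px, P2_w)]):
        d_cam = np.linalg.inv(K) @ np.array([p_px[0], p_px[1], 1.0])
        d_w = R_wc @ d_cam                   # viewing ray in the world frame
        A[3 * i:3 * i + 3, :3] = np.eye(3)   # unknown camera center C
        A[3 * i:3 * i + 3, 3 + i] = d_w      # plus t_i times the ray direction
        b[3 * i:3 * i + 3] = P_w             # equals the known world point
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]
```

Note that the system is mildly overdetermined (six equations for five unknowns), which matches the abstract's claim that two feature points suffice once the attitude is known; the redundancy absorbs small measurement noise.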