Multiclass Geospatial Object Detection using Machine Learning-Aviation Case Study
D. Dhulipudi, Rajan Ks
2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC), October 11, 2020
DOI: 10.1109/DASC50938.2020.9256771
Citations: 1
Abstract
There is growing interest in autonomous taxiing systems that can sense their environment and maneuver safely with little or no human input. This technology is similar to that developed for driverless cars, which synthesize information from multiple sensors to detect the road surface, lanes, obstacles, and signage. This paper presents an application of computer vision and machine learning to an autonomous method for the surface movement of an air vehicle. We present a system and method that uses pattern recognition to aid unmanned aircraft systems (UAS) and to enhance manned air vehicle landing and taxiing. Encouraged by our previous results [1], we extend our research to include multiple objects relevant to taxiing. The objective of the current project is to build a training dataset of annotated objects acquired from an overhead perspective. Such a dataset is useful for training a deep neural network to detect and count specific airport objects in a video or image. This paper details the procedure and parameters used to create a training dataset for running convolutional neural networks (CNNs) on a set of aerial images for efficient and automated object recognition. In this method, a dataset of airport surface signage extracted from satellite images is used to train the network for pattern recognition. The trained system identifies and locates important visual references from imaging sensors and could support decision making during the taxiing phase.
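The abstract describes building a training dataset of annotated airport objects so that a CNN can learn to detect and count them in aerial imagery. As a minimal illustration of what "annotated objects" and per-class counting mean in practice, the sketch below parses a Pascal VOC-style XML annotation and tallies objects by class. The annotation format, file name, and class labels (`taxiway_sign`, `hold_line`) are assumptions for illustration only; the paper does not specify its annotation schema.

```python
# Minimal sketch: counting annotated airport objects in one aerial image tile.
# Pascal VOC-style XML is assumed here purely for illustration; the paper
# does not state which annotation format was used.
import xml.etree.ElementTree as ET
from collections import Counter

def count_objects(xml_text: str) -> Counter:
    """Return per-class counts of annotated objects in one image annotation."""
    root = ET.fromstring(xml_text.strip())
    return Counter(obj.findtext("name") for obj in root.iter("object"))

# Hypothetical annotation for one satellite image tile (illustrative values).
sample = """
<annotation>
  <filename>runway_tile_001.png</filename>
  <object><name>taxiway_sign</name>
    <bndbox><xmin>34</xmin><ymin>50</ymin><xmax>60</xmax><ymax>72</ymax></bndbox>
  </object>
  <object><name>taxiway_sign</name>
    <bndbox><xmin>120</xmin><ymin>88</ymin><xmax>150</xmax><ymax>110</ymax></bndbox>
  </object>
  <object><name>hold_line</name>
    <bndbox><xmin>10</xmin><ymin>200</ymin><xmax>300</xmax><ymax>215</ymax></bndbox>
  </object>
</annotation>
"""

counts = count_objects(sample)
print(counts)  # Counter({'taxiway_sign': 2, 'hold_line': 1})
```

Per-class counts like these are a routine sanity check when assembling a detection dataset, since class imbalance across image tiles directly affects CNN training.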