Multiclass Geospatial Object Detection using Machine Learning-Aviation Case Study

D. Dhulipudi, Rajan Ks
{"title":"Multiclass Geospatial Object Detection using Machine Learning-Aviation Case Study","authors":"D. Dhulipudi, Rajan Ks","doi":"10.1109/DASC50938.2020.9256771","DOIUrl":null,"url":null,"abstract":"There is growing interest to explore the autonomous taxiing that can sense its environment and maneuver safely with little or no human input. This technology is like the one developed for driver less cars that synthesize information from multiple sensors, which sense surrounding environment to detect road surface, lanes, obstacles and signage. This paper presents application of computer vision and machine learning to autonomous method for the surface movement of an air vehicle. We present a system and method that uses pattern recognition which aids unmanned aircraft system (UAS) and enhance the manned air vehicle landing and taxiing. Encouraged with our previous results [1], we extend upon our research to include multiple object relevant to taxiing. The objective of the current project is to build training dataset of annotated objects acquired from overhead perspective. It is useful for training a deep neural network to learn to detect, count specific airport objects in a video or image. This paper details the procedure and parameters used to create training dataset for running convolutional neural networks (CNNs) on a set of aerial images for efficient and automated object recognition. In this method, multiple airport surface signage dataset from satellite images are subjected to training for pattern recognition. This trained system learns and then identifies and locates important visual references from imaging sensors and could help in decision making during taxiing phase.","PeriodicalId":112045,"journal":{"name":"2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DASC50938.2020.9256771","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

There is growing interest in autonomous taxiing systems that can sense their environment and maneuver safely with little or no human input. The technology is similar to that developed for driverless cars, which synthesize information from multiple sensors that perceive the surrounding environment to detect the road surface, lanes, obstacles, and signage. This paper presents an application of computer vision and machine learning to an autonomous method for the surface movement of an air vehicle. We present a system and method that uses pattern recognition to aid unmanned aircraft systems (UAS) and to enhance landing and taxiing of manned air vehicles. Encouraged by our previous results [1], we extend our research to include multiple objects relevant to taxiing. The objective of the current project is to build a training dataset of annotated objects acquired from an overhead perspective, which is useful for training a deep neural network to detect and count specific airport objects in a video or image. This paper details the procedure and parameters used to create a training dataset for running convolutional neural networks (CNNs) on a set of aerial images for efficient and automated object recognition. In this method, multiple airport surface signage datasets derived from satellite images are used to train a pattern-recognition model. The trained system identifies and locates important visual references from imaging sensors and could support decision making during the taxiing phase.
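The paper itself does not include code. As a rough illustration of the kind of pipeline the abstract describes (annotated overhead tiles fed to a CNN detector for multiclass airport objects), the sketch below fine-tunes an off-the-shelf Faster R-CNN from torchvision. The class list, annotation format, and file paths are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumed, not the authors' implementation): fine-tune a
# pretrained Faster R-CNN on overhead tiles annotated with airport signage.
import json
from pathlib import Path

import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical multiclass label map for taxiing-relevant objects.
CLASSES = ["background", "runway_marking", "taxiway_sign", "hold_short_line", "aircraft"]

class AerialTiles(Dataset):
    """Tiles cropped from satellite images; each tile has a JSON sidecar with at
    least one box: [{"bbox": [x1, y1, x2, y2], "label": 2}, ...] (assumed format)."""
    def __init__(self, root):
        self.images = sorted(Path(root).glob("*.png"))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = read_image(str(self.images[idx]), mode=ImageReadMode.RGB).float() / 255.0
        ann = json.loads(self.images[idx].with_suffix(".json").read_text())
        boxes = torch.tensor([a["bbox"] for a in ann], dtype=torch.float32)
        labels = torch.tensor([a["label"] for a in ann], dtype=torch.int64)
        return img, {"boxes": boxes, "labels": labels}

def collate(batch):
    # Detection models take lists of variable-sized images and targets.
    return tuple(zip(*batch))

def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the classifier head for the airport-specific classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))
    model.to(device).train()

    loader = DataLoader(AerialTiles("tiles/train"), batch_size=2,
                        shuffle=True, collate_fn=collate)
    optim = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    for epoch in range(10):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)   # dict of detection losses in train mode
            loss = sum(losses.values())
            optim.zero_grad()
            loss.backward()
            optim.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")

if __name__ == "__main__":
    main()
```

At inference time, counting a specific object class in an image amounts to running the model in eval mode and tallying detections of that class whose scores exceed a chosen threshold; this is one plausible way to realize the "detect and count" step mentioned in the abstract.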