Drones: Latest Articles

MFEFNet: A Multi-Scale Feature Information Extraction and Fusion Network for Multi-Scale Object Detection in UAV Aerial Images
Drones Pub Date: 2024-05-08 | DOI: 10.3390/drones8050186
Liming Zhou, Shuai Zhao, Ziye Wan, Yang Liu, Yadi Wang, Xianyu Zuo
Abstract: Unmanned aerial vehicles (UAVs) are now widely used in many fields. Due to the randomness of UAV flight height and shooting angle, UAV images usually have the following characteristics: many small objects, large changes in object scale, and complex backgrounds. Object detection in UAV aerial images is therefore a very challenging task. To address these challenges, this paper proposes a novel UAV image object detection method based on global feature aggregation and context feature extraction, named the multi-scale feature information extraction and fusion network (MFEFNet). Specifically, to extract object feature information more effectively from complex backgrounds, we first propose an efficient spatial information extraction module (SIEM), which combines residual connections to build long-distance feature dependencies and effectively extracts the most useful feature information by building contextual feature relations around objects. Secondly, to improve feature fusion efficiency and reduce the burden brought by redundant feature fusion networks, we propose a global aggregation progressive feature fusion network (GAFN). This network adopts a three-level adaptive feature fusion method, which can adaptively fuse multi-scale features according to the importance of different feature layers and reduce unnecessary intermediate redundant features by utilizing the adaptive feature fusion module (AFFM). Furthermore, we use the MPDIoU loss function as the bounding-box regression loss, which not only enhances model robustness to noise but also simplifies the calculation and improves the final detection efficiency. Finally, the proposed MFEFNet was tested on the VisDrone and UAVDT datasets, where the mAP0.5 value increased by 2.7% and 2.2%, respectively.
Citations: 0
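The bounding-box regression loss named here, MPDIoU, has a compact closed form. As a hedged illustration (not code from the paper), the sketch below computes the published MPDIoU formulation for one box pair: IoU minus the squared distances between the two boxes' top-left and bottom-right corners, each normalized by the squared image diagonal. The box coordinates and image size in the example are made up.

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """L = 1 - MPDIoU, where MPDIoU = IoU minus the squared top-left and
    bottom-right corner distances, each normalized by (img_w^2 + img_h^2).
    Boxes are (x1, y1, x2, y2) in pixels."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # Plain IoU
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / (union + 1e-9)
    # Corner-distance penalties
    d1 = (px1 - gx1) ** 2 + (py1 - gy1) ** 2
    d2 = (px2 - gx2) ** 2 + (py2 - gy2) ** 2
    norm = img_w ** 2 + img_h ** 2
    return 1.0 - (iou - d1 / norm - d2 / norm)

# Hypothetical prediction/ground-truth pair on a 640x640 image
print(mpdiou_loss((10, 10, 50, 50), (12, 12, 48, 52), 640, 640))
```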
Intelligent Packet Priority Module for a Network of Unmanned Aerial Vehicles Using Manhattan Long Short-Term Memory
Drones Pub Date: 2024-05-07 | DOI: 10.3390/drones8050183
Dino Budi Prakoso, J. H. Windiatmaja, Agus Mulyanto, Riri Fitri Sari, R. Nordin
Abstract: Unmanned aerial vehicles (UAVs) are becoming more common in wireless communication networks. Using UAVs can lead to network problems; an issue arises when the UAVs operate in a network-access-limited environment with nodes causing interference, which could hinder UAV network connectivity. This paper introduces an intelligent packet priority module (IPPM) to minimize network latency. The study analyzed Network Simulator-3 (NS-3) network modules utilizing Manhattan long short-term memory (MaLSTM) for packet classification of critical UAV, ground control station (GCS), or interfering nodes. To minimize the network latency and packet delivery ratio (PDR) issues caused by interfering nodes, packets from prioritized nodes are transmitted first. Simulation results and evaluation show that the proposed IPPM outperformed previous approaches: the MaLSTM-based priority packet module led to lower network delay and a higher packet delivery ratio, averaging 62.2 ms network delay and a 0.97 PDR, while the MaLSTM peaked at 97.5% accuracy. Upon further evaluation, the stability of LSTM Siamese models was observed to be consistent across diverse similarity functions, including cosine and Euclidean distances.
Citations: 0
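The packet classifier is built on Manhattan LSTM (MaLSTM), a Siamese LSTM whose output is the similarity exp(-||h_a - h_b||_1) between two encoded sequences. The PyTorch sketch below shows only that core mechanism under assumed feature and hidden sizes; it is not the authors' NS-3-integrated implementation.

```python
import torch
import torch.nn as nn

class MaLSTMSimilarity(nn.Module):
    """Siamese LSTM with the Manhattan similarity exp(-||h_a - h_b||_1),
    the core of a MaLSTM-style classifier (feature sizes are assumptions)."""
    def __init__(self, feat_dim=16, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def encode(self, x):
        _, (h, _) = self.lstm(x)          # h: (1, batch, hidden_dim)
        return h.squeeze(0)

    def forward(self, seq_a, seq_b):
        h_a, h_b = self.encode(seq_a), self.encode(seq_b)
        return torch.exp(-torch.sum(torch.abs(h_a - h_b), dim=1))  # in (0, 1]

model = MaLSTMSimilarity()
a = torch.randn(4, 10, 16)   # e.g. feature sequences from two packet flows
b = torch.randn(4, 10, 16)
print(model(a, b))           # one similarity score per pair
```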
UAV-Mounted RIS-Aided Mobile Edge Computing System: A DDQN-Based Optimization Approach
Drones Pub Date: 2024-05-07 | DOI: 10.3390/drones8050184
Min Wu, Shibing Zhu, Changqing Li, Jiao Zhu, Yudi Chen, Xiangyu Liu, Rui Liu
Abstract: Unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs) are increasingly employed in mobile edge computing (MEC) systems to flexibly modify the signal transmission environment. This is achieved through the active manipulation of the wireless channel facilitated by the mobile deployment of UAVs and the intelligent reflection of signals by RISs. However, these technologies are subject to inherent limitations such as the restricted range of UAVs and limited RIS coverage, which hinder their broader application. The integration of UAVs and RISs into UAV–RIS schemes presents a promising approach to surmounting these limitations by leveraging the strengths of both technologies. Motivated by the above observations, we contemplate a novel UAV–RIS-aided MEC system, wherein UAV–RIS plays a pivotal role in facilitating communication between terrestrial vehicle users and MEC servers. To address this challenging non-convex problem, we propose an energy-constrained approach to maximize the system's energy efficiency based on a double-deep Q-network (DDQN), which is employed to realize joint control of the UAVs, passive beamforming, and resource allocation for MEC. Numerical results demonstrate that the proposed optimization scheme significantly enhances the system efficiency of the UAV–RIS-aided time division multiple access (TDMA) network.
Citations: 0
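The joint control of the UAVs, passive beamforming, and MEC resource allocation is learned with a double-deep Q-network (DDQN). As a generic reference only (the paper's state, action, and reward design is not reproduced), the sketch below shows the defining DDQN step: the online network selects the next action and the target network evaluates it, which curbs the over-estimation bias of plain DQN. The 6-D state and 4 discrete actions are placeholders.

```python
import torch

def ddqn_targets(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    """Double-DQN bootstrap: the online network picks the next action,
    the target network evaluates it."""
    with torch.no_grad():
        next_actions = q_online(next_states).argmax(dim=1, keepdim=True)
        next_q = q_target(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * next_q * (1.0 - dones)

# Hypothetical tiny Q-networks over a 6-D state with 4 discrete actions
q_online = torch.nn.Sequential(torch.nn.Linear(6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
q_target = torch.nn.Sequential(torch.nn.Linear(6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
batch = 8
targets = ddqn_targets(q_online, q_target,
                       rewards=torch.rand(batch),
                       next_states=torch.randn(batch, 6),
                       dones=torch.zeros(batch))
print(targets.shape)  # torch.Size([8])
```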
Vision-Guided Tracking and Emergency Landing for UAVs on Moving Targets
Drones Pub Date: 2024-05-03 | DOI: 10.3390/drones8050182
Yisak Debele, Hayoung Shi, Assefinew Wondosen, H. Warku, T. Ku, Beom-Soo Kang
Abstract: This paper presents a vision-based adaptive tracking and landing method for multirotor Unmanned Aerial Vehicles (UAVs), designed for safe recovery amid propulsion system failures that reduce maneuverability and responsiveness. The method addresses challenges posed by external disturbances such as wind and agile target movements, specifically by considering maneuverability and control limitations caused by propulsion system failures. Building on our previous research in actuator fault detection and tolerance, our approach employs a modified adaptive pure pursuit guidance technique with an extra adaptation parameter to account for reduced maneuverability, thus ensuring safe tracking of moving objects. Additionally, we present an adaptive landing strategy that adapts to tracking deviations and minimizes off-target landings caused by lateral tracking errors and delayed responses, using a lateral offset-dependent vertical velocity control. Our system employs vision-based tag detection to ascertain the position of the Unmanned Ground Vehicle (UGV) in relation to the UAV. We implemented this system in a mid-mission emergency landing scenario, which includes actuator health monitoring of emergency landings. Extensive testing and simulations demonstrate the effectiveness of our approach, significantly advancing the development of safe tracking and emergency landing methods for UAVs with compromised control authority due to actuator failures.
Citations: 0
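The guidance combines a pursuit-style horizontal command, scaled by an adaptation parameter that reflects reduced maneuverability, with a descent rate that depends on the lateral offset. The toy sketch below illustrates that idea only; the gains, the "health" adaptation factor, and the offset threshold are invented placeholders, not the paper's adaptive pure pursuit law.

```python
import numpy as np

def landing_velocity_cmd(uav_xy, target_xy, target_vel_xy,
                         k_pursuit=0.8, health=1.0,
                         v_desc_max=1.0, offset_tol=0.3):
    """Toy guidance in the spirit of the abstract: a pursuit-style horizontal
    command toward the moving target, scaled by a 'health' adaptation factor
    (reduced maneuverability), plus a descent rate that shrinks as the
    lateral offset grows. All gains and units are illustrative assumptions."""
    offset = np.asarray(target_xy) - np.asarray(uav_xy)
    dist = np.linalg.norm(offset)
    # Horizontal command: feed-forward target velocity + adapted pursuit term
    v_xy = np.asarray(target_vel_xy) + health * k_pursuit * offset
    # Vertical command: descend only when roughly above the target
    v_z = -v_desc_max * (1.0 - dist / offset_tol) if dist < offset_tol else 0.0
    return v_xy, v_z

v_xy, v_z = landing_velocity_cmd(uav_xy=[0.5, -0.2], target_xy=[0.0, 0.0],
                                 target_vel_xy=[1.0, 0.0], health=0.6)
print(v_xy, v_z)
```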
Fine-Grained Feature Perception for Unmanned Aerial Vehicle Target Detection Algorithm
Drones Pub Date: 2024-05-03 | DOI: 10.3390/drones8050181
Shi Liu, Meng Zhu, Rui Tao, Honge Ren
Abstract: Unmanned aerial vehicle (UAV) aerial images often present challenges such as small target sizes, high target density, varied shooting angles, and dynamic poses. Existing target detection algorithms exhibit a noticeable performance decline when confronted with UAV aerial images compared to general scenes. This paper proposes a small target detection algorithm for UAVs, named Fine-Grained Feature Perception YOLOv8s-P2 (FGFP-YOLOv8s-P2), based on the YOLOv8s-P2 architecture, focusing on improving detection accuracy while meeting real-time requirements. First, we enhance the targets' pixel information by utilizing slice-assisted training and inference techniques, thereby reducing missed detections. Then, we propose a feature extraction module with deformable convolutions. Decoupling the learning process of offset and modulation scalar enables better adaptation to variations in the size and shape of diverse targets. In addition, we introduce a large kernel spatial pyramid pooling module. By cascading convolutions, we leverage the advantages of large kernels to flexibly adjust the model's attention to various regions of high-level feature maps, better adapting to complex visual scenes and circumventing the cost drawbacks associated with large kernels. To match the excellent real-time detection performance of the baseline model, we propose an improved Random FasterNet Block. This block introduces randomness during convolution and captures spatial features of non-linear transformation channels, enriching feature representations and enhancing model efficiency. Extensive experiments and comprehensive evaluations on the VisDrone2019 and DOTA-v1.0 datasets demonstrate the effectiveness of FGFP-YOLOv8s-P2. This achievement provides robust technical support for efficient small target detection by UAVs in complex scenarios.
Citations: 0
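Slice-assisted training and inference tiles the high-resolution aerial frame into overlapping windows so small targets occupy more pixels per forward pass. The sketch below covers only the window-generation step under an assumed slice size and overlap ratio; detections from each window are then shifted back to full-image coordinates and merged with NMS, which is omitted here. It is an illustration of the general technique, not the authors' code.

```python
def make_slices(img_w, img_h, slice_size=640, overlap=0.2):
    """Generate overlapping slice windows (x1, y1, x2, y2) over a large aerial
    image: the idea behind slice-assisted training/inference is that small
    objects occupy more pixels inside each slice than in the full frame."""
    step = int(slice_size * (1.0 - overlap))
    xs = list(range(0, max(img_w - slice_size, 0) + 1, step))
    ys = list(range(0, max(img_h - slice_size, 0) + 1, step))
    # Make sure the right and bottom borders are covered
    if xs[-1] + slice_size < img_w:
        xs.append(img_w - slice_size)
    if ys[-1] + slice_size < img_h:
        ys.append(img_h - slice_size)
    return [(x, y, min(x + slice_size, img_w), min(y + slice_size, img_h))
            for y in ys for x in xs]

windows = make_slices(4000, 3000)             # e.g. one high-resolution frame
print(len(windows), windows[0], windows[-1])  # per-window detections are later
                                              # shifted back and NMS-merged
```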
Dual-Driven Learning-Based Multiple-Input Multiple-Output Signal Detection for Unmanned Aerial Vehicle Air-to-Ground Communications
Drones Pub Date: 2024-05-02 | DOI: 10.3390/drones8050180
Haihan Li, Yongming He, Shuntian Zheng, Fan Zhou, Hongwen Yang
Abstract: Unmanned aerial vehicle (UAV) air-to-ground (AG) communication plays a critical role in the evolving space–air–ground integrated network of the upcoming sixth-generation cellular network (6G). The integration of massive multiple-input multiple-output (MIMO) systems has become essential for ensuring optimally performing communication technologies. This article presents a novel dual-driven learning-based network for millimeter-wave (mm-wave) massive MIMO symbol detection in UAV AG communications. Our main contribution is that the proposed approach combines a data-driven symbol-correction network with a model-driven orthogonal approximate message passing network (OAMP-Net). Through joint training, the dual-driven network reduces symbol detection errors propagated through each iteration of the model-driven OAMP-Net. The numerical results demonstrate the superiority of the dual-driven detector over the conventional minimum mean square error (MMSE), orthogonal approximate message passing (OAMP), and OAMP-Net detectors at various noise powers and channel estimation errors. The dual-driven MIMO detector exhibits a 2–3 dB lower signal-to-noise ratio (SNR) requirement than the MMSE and OAMP-Net detectors to achieve a bit error rate (BER) of 1 × 10⁻² when the channel estimation error is −30 dB. Moreover, the dual-driven MIMO detector exhibits an increased tolerance to channel estimation errors by 2–3 dB to achieve a BER of 1 × 10⁻³.
Citations: 0
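Among the baselines, the linear MMSE detector has a one-line closed form, x_hat = (H^H H + sigma^2 I)^(-1) H^H y. The sketch below shows that conventional baseline (not the proposed dual-driven network) on a toy 4x4 channel with QPSK symbols; the dimensions, modulation, and noise level are assumptions.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE MIMO detection: x_hat = (H^H H + sigma^2 I)^(-1) H^H y."""
    n_tx = H.shape[1]
    G = np.conj(H).T @ H + noise_var * np.eye(n_tx)
    return np.linalg.solve(G, np.conj(H).T @ y)

# Toy 4x4 Rayleigh channel with unit-energy QPSK symbols
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = (rng.choice([-1, 1], 4) + 1j * rng.choice([-1, 1], 4)) / np.sqrt(2)
noise_var = 0.01
y = H @ x + np.sqrt(noise_var / 2) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
x_hat = mmse_detect(H, y, noise_var)
print(np.round(x_hat, 2))   # soft symbol estimates, close to the transmitted x
```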
Model-Free RBF Neural Network Intelligent-PID Control Applying Adaptive Robust Term for Quadrotor System
Drones Pub Date: 2024-05-01 | DOI: 10.3390/drones8050179
Sung-Jae Kim, Jinho Suh
Abstract: This paper proposes a quadrotor system control scheme using an intelligent proportional–integral–derivative (I-PID)-based controller augmented with a radial basis function (RBF) neural network and the proposed adaptive robust term. The I-PID controller, similar to the PID controller widely utilized in quadrotor systems, demonstrates notable robustness. To enhance this robustness further, the time-delay estimation error is compensated with an RBF neural network. Additionally, an adaptive robust term is proposed to address the shortcomings of the neural network system, thereby constructing a more robust controller. This supplementary control input integrates an adaptation term to address significant signal changes and is combined with a reverse saturation filter to remove unnecessary control input during a steady state. The adaptive law of the proposed controller is designed based on Lyapunov stability to guarantee control system stability. To verify the control system, simulations were conducted on a quadrotor system maneuvering along a spiral path in a disturbed environment. The simulation results demonstrate that the proposed controller achieves high tracking performance across all six axes. Therefore, the controller proposed in this paper can be configured similarly to a conventional PID controller while showing satisfactory performance.
Citations: 0
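An intelligent-PID (I-PID) law acts on the ultra-local model y_dot = F + alpha*u, with F estimated by time-delay estimation from the previous control input. The sketch below shows one step of that core law only; the RBF-network compensation of the TDE error and the adaptive robust term described in the abstract are omitted, and all gains and signal values are illustrative.

```python
def ipid_step(y_dot_meas, y_dot_ref, e, e_int, e_dot, u_prev,
              alpha=1.0, kp=2.0, ki=0.1, kd=0.5):
    """One step of an intelligent-PID law on the ultra-local model
    y_dot = F + alpha*u. F is estimated by time-delay estimation from the
    previous control input and the measured derivative. The error is
    e = y_ref - y. The RBF-network and adaptive robust compensation from the
    paper are not included in this toy sketch."""
    F_hat = y_dot_meas - alpha * u_prev          # time-delay estimation of F
    pid = kp * e + ki * e_int + kd * e_dot       # classical PID on the error
    return (-F_hat + y_dot_ref + pid) / alpha    # new control input

# Hypothetical single step with made-up signal values
u = ipid_step(y_dot_meas=0.2, y_dot_ref=0.0, e=0.1, e_int=0.05,
              e_dot=-0.02, u_prev=0.15)
print(u)
```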
Enhancing UAV Aerial Docking: A Hybrid Approach Combining Offline and Online Reinforcement Learning
Drones Pub Date: 2024-04-24 | DOI: 10.3390/drones8050168
Yuting Feng, Tao Yang, Yushu Yu
Abstract: In our study, we explore the task of performing docking maneuvers between two unmanned aerial vehicles (UAVs) using a combination of offline and online reinforcement learning (RL) methods. This task requires a UAV to accomplish external docking while maintaining stable flight control, representing two distinct types of objectives at the task execution level. Direct online RL training could lead to catastrophic forgetting, resulting in training failure. To overcome these challenges, we design a rule-based expert controller and accumulate an extensive dataset. Based on this, we concurrently design a series of rewards and train a guiding policy through offline RL. Then, we conduct comparative verification on different RL methods, ultimately selecting online RL to fine-tune the model trained offline. This strategy effectively combines the efficiency of offline RL with the exploratory capabilities of online RL. Our approach improves the success rate of the UAV's aerial docking task, increasing it from 40% under the expert policy to 95%.
Citations: 0
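The training pipeline pre-trains a guiding policy offline on expert data and then fine-tunes it online. The sketch below is only a structural skeleton of that two-phase idea, using a behavior-cloning loss and a simple policy-gradient update on a dummy environment; the paper's actual offline RL algorithm, reward design, and docking dynamics are not reproduced, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

# Policy network shared by both phases (state/action sizes are assumptions)
policy = nn.Sequential(nn.Linear(12, 64), nn.Tanh(), nn.Linear(64, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# --- Phase 1: offline pre-training on expert (state, action) pairs ----------
# Stand-in for the rule-based expert dataset described in the abstract.
states = torch.randn(256, 12)
expert_actions = torch.randn(256, 4)
for _ in range(200):
    loss = ((policy(states) - expert_actions) ** 2).mean()  # behavior-cloning loss
    opt.zero_grad(); loss.backward(); opt.step()

# --- Phase 2: online fine-tuning with a policy-gradient step ----------------
def rollout(policy, steps=50):
    """Placeholder environment: returns log-probs and rewards of one episode."""
    s = torch.randn(12)
    logps, rewards = [], []
    for _ in range(steps):
        dist = torch.distributions.Normal(policy(s), 0.1)
        a = dist.sample()
        logps.append(dist.log_prob(a).sum())
        rewards.append(-torch.norm(a).item())    # dummy reward, not the paper's
        s = torch.randn(12)                      # dummy transition
    return torch.stack(logps), rewards

for _ in range(20):
    logps, rewards = rollout(policy)
    ret = sum(rewards)
    loss = -(logps * ret).mean()                 # REINFORCE-style update
    opt.zero_grad(); loss.backward(); opt.step()
```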
Early Drought Detection in Maize Using UAV Images and YOLOv8+
Drones Pub Date: 2024-04-24 | DOI: 10.3390/drones8050170
Shanwei Niu, Zhigang Nie, Guang Li, Wenyu Zhu
Abstract: The escalating global climate change significantly impacts the yield and quality of maize, a vital staple crop worldwide, especially during seedling-stage droughts. Traditional detection methods are limited by their single-scenario approach, requiring substantial human labor and time, and lack accuracy in the real-time monitoring and precise assessment of drought severity. In this study, a novel early drought detection method for maize based on unmanned aerial vehicle (UAV) images and Yolov8+ is proposed. In the Backbone section, the C2F-Conv module is adopted to reduce model parameters and deployment costs, while incorporating the CA attention mechanism module to effectively capture tiny feature information in the images. The Neck section utilizes the BiFPN fusion architecture and spatial attention mechanism to enhance the model's ability to recognize small and occluded targets. The Head section introduces an additional 10 × 10 output, integrates loss functions, enhances accuracy by 1.46%, reduces training time by 30.2%, and improves robustness. The experimental results demonstrate that the improved Yolov8+ model achieves precision and recall rates of approximately 90.6% and 88.7%, respectively. The mAP@50 and mAP@50:95 reach 89.16% and 71.14%, respectively, representing increases of 3.9% and 3.3% compared to the original Yolov8. The UAV image detection speed of the model is up to 24.63 ms, with a model size of 13.76 MB, optimized by 31.6% and 28.8% compared to the original model, respectively. In comparison with the Yolov8, Yolov7, and Yolov5s models, the proposed method exhibits varying degrees of superiority in mAP@50, mAP@50:95, and other metrics, utilizing drone imagery and deep learning techniques to truly propel agricultural modernization.
Citations: 0
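The Neck uses a BiFPN fusion architecture, whose characteristic element is fast normalized feature fusion: each input pyramid level gets a learnable non-negative weight, and the weights are normalized before summing. The sketch below shows that fusion element alone, with assumed channel and spatial sizes; it is not the full improved Yolov8+ neck.

```python
import torch
import torch.nn as nn

class FastNormalizedFusion(nn.Module):
    """BiFPN-style weighted feature fusion: each input map gets a learnable,
    non-negative weight, normalized so the weights sum to roughly one."""
    def __init__(self, n_inputs):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))

    def forward(self, feats):            # feats: list of same-shape tensors
        w = torch.relu(self.w)
        w = w / (w.sum() + 1e-4)
        return sum(wi * f for wi, f in zip(w, feats))

fuse = FastNormalizedFusion(3)
p = [torch.randn(1, 64, 40, 40) for _ in range(3)]   # hypothetical pyramid levels
print(fuse(p).shape)                                  # torch.Size([1, 64, 40, 40])
```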
A Control-Theoretic Spatio-Temporal Model for Wildfire Smoke Propagation Using UAV-Based Air Pollutant Measurements
Drones Pub Date: 2024-04-24 | DOI: 10.3390/drones8050169
Prabhash Ragbir, A. Kaduwela, Xiaodong Lan, Adam Watts, Zhaodan Kong
Abstract: Wildfires have the potential to cause severe damage to vegetation, property and, most importantly, human life. In order to minimize these negative impacts, it is crucial that wildfires are detected at the earliest possible stages. A potential solution for early wildfire detection is to utilize unmanned aerial vehicles (UAVs) that are capable of tracking the chemical concentration gradient of smoke emitted by wildfires. A spatiotemporal model of wildfire smoke plume dynamics can allow for efficient tracking of the chemicals by utilizing both real-time information from sensors as well as future information from the model predictions. This study investigates a spatiotemporal modeling approach based on subspace identification (SID) to develop a data-driven smoke plume dynamics model for the purposes of early wildfire detection. The model was learned using CO2 concentration data which were collected using an air quality sensor package onboard a UAV during two prescribed burn experiments. Our model was evaluated by comparing the predicted values to the measured values at random locations and showed mean errors of 6.782 ppm and 30.01 ppm from the two experiments. Additionally, our model was shown to outperform the commonly used Gaussian puff model (GPM), which showed mean errors of 25.799 ppm and 104.492 ppm, respectively.
Citations: 0
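The comparison baseline, the Gaussian puff model, treats the smoke as instantaneous puffs advected by the wind and spreading as Gaussians with ground reflection. The sketch below evaluates a single puff at one point with fixed, invented dispersion coefficients; the paper's actual GPM configuration (and the proposed SID model) is not reproduced.

```python
import numpy as np

def gaussian_puff(x, y, z, t, Q=1.0, u=2.0, H=2.0,
                  sigma_x=5.0, sigma_y=5.0, sigma_z=3.0):
    """Concentration from a single instantaneous puff of mass Q released at the
    origin at height H, advected downwind at speed u, with ground reflection.
    Dispersion coefficients are fixed here; in practice they grow with travel
    time and stability class. All parameter values are illustrative."""
    norm = Q / ((2 * np.pi) ** 1.5 * sigma_x * sigma_y * sigma_z)
    along = np.exp(-((x - u * t) ** 2) / (2 * sigma_x ** 2))
    cross = np.exp(-(y ** 2) / (2 * sigma_y ** 2))
    vert = (np.exp(-((z - H) ** 2) / (2 * sigma_z ** 2)) +
            np.exp(-((z + H) ** 2) / (2 * sigma_z ** 2)))
    return norm * along * cross * vert

print(gaussian_puff(x=20.0, y=0.0, z=2.0, t=10.0))  # concentration at one point
```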