Artificial Intelligence in Agriculture: Latest Publications

Comparing YOLOv8 and Mask R-CNN for instance segmentation in complex orchard environments
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2024-07-16 DOI: 10.1016/j.aiia.2024.07.001
Abstract: Instance segmentation, an important image processing operation for automation in agriculture, is used to precisely delineate individual objects of interest within images, which provides foundational information for various automated or robotic tasks such as selective harvesting and precision pruning. This study compares the one-stage YOLOv8 and the two-stage Mask R-CNN machine learning models for instance segmentation under varying orchard conditions across two datasets. Dataset 1, collected in the dormant season, includes images of dormant apple trees, which were used to train multi-object segmentation models delineating tree branches and trunks. Dataset 2, collected in the early growing season, includes images of apple tree canopies with green foliage and immature (green) apples (also called fruitlets), which were used to train single-object segmentation models delineating only immature green apples. The results showed that YOLOv8 performed better than Mask R-CNN, achieving good precision and near-perfect recall across both datasets at a confidence threshold of 0.5. Specifically, for Dataset 1, YOLOv8 achieved a precision of 0.90 and a recall of 0.95 for all classes, whereas Mask R-CNN demonstrated a precision of 0.81 and a recall of 0.81 on the same dataset. With Dataset 2, YOLOv8 achieved a precision of 0.93 and a recall of 0.97; Mask R-CNN, in this single-class scenario, achieved a precision of 0.85 and a recall of 0.88. Additionally, the inference times for YOLOv8 were 10.9 ms for multi-class segmentation (Dataset 1) and 7.8 ms for single-class segmentation (Dataset 2), compared to 15.6 ms and 12.8 ms for Mask R-CNN, respectively. These findings show YOLOv8's superior accuracy and efficiency over two-stage models, specifically Mask R-CNN, and suggest its suitability for developing smart and automated orchard operations, particularly where real-time performance is needed, as in robotic harvesting and robotic immature green fruit thinning.
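A minimal sketch of how such segmentation inference can be run with the off-the-shelf ultralytics YOLOv8 API at the 0.5 confidence threshold reported above; the checkpoint path and image file are hypothetical, and this is an illustration rather than the authors' actual training pipeline.

```python
# Minimal sketch of instance-segmentation inference with the ultralytics YOLOv8 API,
# assuming a custom-trained segmentation checkpoint (path and image are hypothetical).
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # hypothetical trained weights

# Run inference at the confidence threshold used in the paper (0.5).
results = model.predict("orchard_canopy.jpg", conf=0.5)

for r in results:
    if r.masks is None:
        continue
    # r.masks.data holds one binary mask per detected instance
    # (e.g., branch, trunk, or immature green apple).
    for cls_id, mask in zip(r.boxes.cls.tolist(), r.masks.data):
        print(model.names[int(cls_id)], mask.shape)
```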
Citations: 0
A comprehensive survey on weed and crop classification using machine learning and deep learning
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2024-06-26 DOI: 10.1016/j.aiia.2024.06.005
Faisal Dharma Adhinata, Wahyono, Raden Sumiharto
Abstract: Machine learning and deep learning are subsets of artificial intelligence that have revolutionized object detection and classification in images and videos. This technology plays a crucial role in facilitating the transition from conventional to precision agriculture, particularly in the context of weed control. Precision agriculture, which previously relied on manual efforts, has now embraced the use of smart devices for more efficient weed detection. However, several challenges are associated with weed detection, including the visual similarity between weeds and crops, occlusion and lighting effects, as well as the need for early-stage weed control. Therefore, this study aimed to provide a comprehensive review of the application of both traditional machine learning and deep learning, as well as the combination of the two methods, for weed detection across different crop fields. The results of this review show the advantages and disadvantages of using machine learning and deep learning. Generally, deep learning produced superior accuracy compared to machine learning under various conditions. Machine learning required the selection of the right combination of features to achieve high accuracy in classifying weeds and crops, particularly under conditions involving lighting and early-growth effects, and a precise segmentation stage is required in cases of occlusion. Machine learning has the advantage of achieving real-time processing by producing smaller models than deep learning, thereby eliminating the need for additional GPUs. However, GPU technology is currently developing rapidly, so researchers increasingly use deep learning for more accurate weed identification.
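As an illustration of the classical machine-learning route the survey contrasts with deep learning, below is a minimal sketch that pairs a handcrafted colour feature with a conventional classifier; the dataset loader, file paths, and labels are hypothetical placeholders, not a method taken from any surveyed paper.

```python
# Minimal sketch of handcrafted-feature weed/crop classification: an HSV colour
# histogram per image plus a Random Forest. Dataset access is hypothetical.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def colour_histogram(path, bins=8):
    """HSV colour histogram as a simple handcrafted feature vector."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# image_paths and labels (0 = crop, 1 = weed) would come from an annotated dataset;
# load_dataset() is a hypothetical helper standing in for that step.
image_paths, labels = load_dataset()

X = np.array([colour_histogram(p) for p in image_paths])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("weed/crop accuracy:", clf.score(X_test, y_test))
```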
Citations: 0
Computer vision in smart agriculture and precision farming: Techniques and applications
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2024-06-25 DOI: 10.1016/j.aiia.2024.06.004
Sumaira Ghazal, Arslan Munir, Waqar S. Qureshi
Abstract: The transformation of age-old farming practices through the integration of digitization and automation has sparked a revolution in agriculture driven by cutting-edge computer vision and artificial intelligence (AI) technologies. This transformation not only promises increased productivity and economic growth, but also has the potential to address important global issues such as food security and sustainability. This survey paper aims to provide a holistic understanding of the integration of vision-based intelligent systems in various aspects of precision agriculture. By providing a detailed discussion of key areas in the digital life cycle of crops, this survey contributes to a deeper understanding of the complexities associated with implementing vision-guided intelligent systems in challenging agricultural environments. The focus of this survey is to explore widely used imaging and image analysis techniques utilized for precision farming tasks. The paper first discusses salient crop metrics used in digital agriculture, then illustrates the use of imaging and computer vision techniques in the phases of the digital life cycle of crops in precision agriculture, such as image acquisition, image stitching and photogrammetry, image analysis, decision making, treatment, and planning. After establishing a thorough understanding of the terms and techniques involved in implementing vision-based intelligent systems for precision agriculture, the survey concludes by outlining the challenges associated with deploying generalized computer vision models for real-time operation of fully autonomous farms.
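For the image stitching and photogrammetry phase mentioned above, here is a minimal OpenCV sketch that mosaics overlapping plot images; the file names are hypothetical, and the survey itself does not prescribe this specific routine.

```python
# Minimal sketch of field-image mosaicking with OpenCV's high-level Stitcher.
# File names are hypothetical placeholders for overlapping images of one plot.
import cv2

paths = ["plot_001.jpg", "plot_002.jpg", "plot_003.jpg"]
images = [cv2.imread(p) for p in paths]

# SCANS mode suits flat, nadir imagery such as drone passes over a field.
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("plot_mosaic.jpg", mosaic)
else:
    print("Stitching failed with status", status)
```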
Citations: 0
An artificial neuronal network coupled with a genetic algorithm to optimise the production of unsaturated fatty acids in Parachlorella kessleri
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2024-06-21 DOI: 10.1016/j.aiia.2024.06.003
Pablo Fernández Izquierdo, Leslie Cerón Delagado, Fedra Ortiz Benavides
Abstract: In this study, an Artificial Neural Network-Genetic Algorithm (ANN-GA) approach was successfully applied to optimise the physicochemical factors influencing the synthesis of unsaturated fatty acids (UFAs) in the microalga P. kessleri UCM 001. The optimised model recommended specific cultivation conditions: glucose at 29 g/L, NaNO3 at 2.4 g/L, K2HPO4 at 0.4 g/L, red LED light at an intensity of 1000 lx, and an 8:16-h light-dark cycle. Through ANN-GA optimisation, a remarkable 66.79% increase in UFA production in P. kessleri UCM 001 was achieved compared to previous studies, underscoring the potential of this technology for enhancing valuable lipid production. Sequential variations in the application of physicochemical factors during microalgal culture under mixotrophic conditions, as optimised by ANN-GA, induced alterations in UFA production and composition in P. kessleri UCM 001. This suggests the feasibility of tailoring the lipid profile of microalgae to obtain specific lipids for diverse industrial applications. The microalgae were isolated from a high-mountain lake in Colombia, highlighting their adaptation to extreme conditions and their potential for sustainable lipid and biomaterial production. This study demonstrates the effectiveness of using ANN-GA technology to optimise UFA production in microalgae, offering a promising avenue for obtaining valuable lipids. The microalgae's unique origin in a high-mountain environment in Colombia emphasises the importance of exploring and harnessing microbial resources in distinctive geographical regions for biotechnological applications.
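A minimal sketch of the general ANN-GA pattern described above: a neural-network surrogate is trained on culture experiments, and a simple genetic algorithm then searches the factor space for the predicted optimum. The training data, factor ranges, and GA settings below are hypothetical placeholders, not the study's actual model or dataset.

```python
# Minimal sketch of an ANN surrogate coupled with a genetic algorithm.
# All data, bounds, and GA parameters are placeholders for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Factors: glucose (g/L), NaNO3 (g/L), K2HPO4 (g/L), light intensity (lx)
bounds = np.array([[5.0, 40.0], [0.5, 3.0], [0.1, 0.6], [200.0, 2000.0]])

# X_train (culture conditions) and y_train (measured UFA yield) would come from
# laboratory experiments; random placeholders are used here.
X_train = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 4))
y_train = rng.uniform(10, 60, size=60)

surrogate = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(50, 4))     # initial GA population
for _ in range(100):                                            # generations
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-25:]]                    # keep the fitter half
    mates = parents[rng.integers(0, len(parents), size=(50, 2))]
    alpha = rng.random((50, 4))
    children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]  # blend crossover
    children += rng.normal(0, 0.02, children.shape) * (bounds[:, 1] - bounds[:, 0])  # mutation
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax(surrogate.predict(pop))]
print("Predicted optimum (glucose, NaNO3, K2HPO4, lux):", best)
```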
Citations: 0
Image classification on smart agriculture platforms: Systematic literature review
Artificial Intelligence in Agriculture Pub Date : 2024-06-08 DOI: 10.1016/j.aiia.2024.06.002
Juan Felipe Restrepo-Arias, John W. Branch-Bedoya, Gabriel Awad
Abstract: In recent years, smart agriculture has gained strength due to the application of Industry 4.0 technologies in agriculture. As a result, efforts to propose artificial vision applications to solve many problems are increasing; however, many of these applications are developed separately, and many academic works have proposed solutions integrating image classification techniques through IoT platforms. For this reason, this paper aims to answer the following research questions: (1) What are the main problems to be solved with smart farming IoT platforms that incorporate images? (2) What are the main strategies for incorporating image classification methods in smart agriculture IoT platforms? and (3) What are the main image acquisition, preprocessing, transmission, and classification technologies used in smart agriculture IoT platforms? This study adopts a Systematic Literature Review (SLR) approach. We searched the Scopus, Web of Science, IEEE Xplore, and Springer Link databases from January 2018 to July 2022, from which we identified five domains: (1) disease and pest detection, (2) crop growth and health monitoring, (3) irrigation and crop protection management, (4) intrusion detection, and (5) fruit and plant counting. There are three types of strategies to integrate image data into smart agriculture IoT platforms: (1) classification at the edge, (2) classification in the cloud, and (3) a combined approach. The main advantage of the first is obtaining data in real time, and its main disadvantage is the cost of implementation. The main advantage of the second is the ability to process high-resolution images, and its main disadvantage is the need for high-bandwidth connectivity. Finally, the mixed strategy can bring significant benefits in terms of infrastructure investment, but most works remain experimental.
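To make the edge/cloud/combined strategies concrete, here is a minimal sketch of the combined pattern: a compact on-device model classifies first, and only low-confidence images are forwarded to a cloud service. The TFLite model file, input format, and endpoint URL are hypothetical and not taken from any reviewed platform.

```python
# Minimal sketch of a combined edge/cloud classification strategy.
# The model file and cloud endpoint are hypothetical; the edge model is assumed
# to take a float32 image tensor and return class probabilities.
import numpy as np
import requests
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="crop_disease_edge.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

CLOUD_URL = "https://example.org/classify"  # hypothetical cloud endpoint

def classify(image_path, threshold=0.8):
    # Edge inference on a downscaled image.
    img = tf.io.decode_image(tf.io.read_file(image_path), channels=3)
    img = tf.image.resize(img, inp["shape"][1:3]) / 255.0
    interpreter.set_tensor(inp["index"], np.expand_dims(img.numpy().astype(np.float32), 0))
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    if probs.max() >= threshold:
        return int(probs.argmax()), "edge"
    # Low confidence: send the full-resolution image to the cloud classifier.
    with open(image_path, "rb") as f:
        resp = requests.post(CLOUD_URL, files={"image": f})
    return resp.json()["class_id"], "cloud"
```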
Citations: 0
Estimation of flea beetle damage in the field using a multistage deep learning-based solution
Artificial Intelligence in Agriculture Pub Date : 2024-06-06 DOI: 10.1016/j.aiia.2024.06.001
Arantza Bereciartua-Pérez, María Monzón, Daniel Múgica, Greta De Both, Jeroen Baert, Brittany Hedges, Nicole Fox, Jone Echazarra, Ramón Navarra-Mestre
Abstract: Estimation of damage in plants is a key issue for crop protection. Currently, experts in the field assess the plots manually, a time-consuming task that can be automated thanks to the latest computer vision (CV) technology. Image-based systems, and more recently deep learning-based systems, have provided good results in several agricultural applications. These image-based applications outperform expert evaluation in controlled environments, and they are now being progressively adopted in non-controlled field applications.

A novel solution based on deep learning techniques in combination with image processing methods is proposed to tackle the estimation of plant damage in the field. The proposed solution is a two-stage algorithm. In the first stage, the single plants in the plots are detected by a YOLO-based object detection model; a regression model is then applied to estimate the damage of each individual plant. The solution has been developed and validated on oilseed rape plants to estimate the damage caused by flea beetle.

The crop detection model achieves a mean average precision of 91%, with a mAP@0.5 of 0.99 and a mAP of 0.91 for oilseed rape specifically. The regression model, estimating up to 60% damage degree in single plants, achieves an MAE of 7.11 and an R² of 0.46 in comparison with manual plant-by-plant evaluations by experts. The models are deployed in a Docker container and, with a REST API communication protocol, can be inferred directly on images acquired in the field from a mobile device.
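A minimal sketch of the two-stage idea described above (plant detection followed by per-plant damage regression); the model files, input size, and regressor interface are hypothetical stand-ins rather than the authors' deployed models.

```python
# Minimal sketch of a detect-then-regress pipeline: a YOLO model locates single plants,
# and a per-plant regressor estimates damage from each crop. Model paths are hypothetical.
import cv2
import torch
from ultralytics import YOLO

detector = YOLO("plant_detector.pt")                 # hypothetical stage-1 weights
regressor = torch.jit.load("damage_regressor.pt")    # hypothetical stage-2 weights
regressor.eval()

image = cv2.imread("oilseed_rape_plot.jpg")          # hypothetical field image
detections = detector.predict(image, conf=0.5)[0]

for box in detections.boxes.xyxy.tolist():
    x1, y1, x2, y2 = map(int, box)
    crop = cv2.resize(image[y1:y2, x1:x2], (224, 224))
    tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        damage = regressor(tensor).item()            # predicted flea-beetle damage (%)
    print(f"plant at ({x1},{y1}): {damage:.1f}% damage")
```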
Citations: 0
Cross-comparative review of Machine learning for plant disease detection: apple, cassava, cotton and potato plants
Artificial Intelligence in Agriculture Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.04.002
James Daniel Omaye, Emeka Ogbuju, Grace Ataguba, Oluwayemisi Jaiyeoba, Joseph Aneke, Francisca Oladipo
Abstract: Plant disease detection has played a significant role in combating plant diseases that pose a threat to global agriculture and food security. Detecting these diseases early can help mitigate their impact and ensure healthy crop yields. Machine learning algorithms have emerged as powerful tools for accurately identifying and classifying a wide range of plant diseases from trained image datasets of affected crops. These algorithms, including deep learning algorithms, have shown remarkable success in recognizing disease patterns and early signs of plant disease. Besides early detection, machine learning algorithms offer other potential benefits for overall plant disease management, such as predicting soil and climatic conditions for plants, pest identification, proximity detection, and more. Over the years, research has focused on using machine learning algorithms for plant disease detection; nevertheless, little is known about the extent to which the research community has explored machine learning algorithms to cover other significant areas of plant disease management. In view of this, we present a cross-comparative review of machine learning algorithms and applications designed for plant disease detection, with a specific focus on four economically important plants: apple, cassava, cotton, and potato. We conducted a systematic review of articles published between 2013 and 2023 to explore trends in the research community over the years. After filtering articles based on our inclusion criteria, including only articles that report individual prediction accuracy for the disease classes associated with the selected plants, 113 articles were considered relevant. From these articles, we analyzed the state-of-the-art techniques, challenges, and future prospects of using machine learning for disease identification in the selected plants. Results from our review show that deep learning and other algorithms performed significantly well in detecting plant diseases. In addition, we found only a few references to plant disease management covering prevention, diagnosis, control, and monitoring, and little or no work has explored predicting the recovery of affected plants. Hence, we propose opportunities for developing machine learning-based technologies to cover prevention, diagnosis, control, monitoring, and recovery.
Citations: 0
Hyperparameter optimization of YOLOv8 for smoke and wildfire detection: Implications for agricultural and environmental safety
Artificial Intelligence in Agriculture Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.05.003
Leo Ramos, Edmundo Casas, Eduardo Bendek, Cristian Romero, Francklin Rivas-Echeverría
Abstract: In this study, we extensively evaluated the viability of the state-of-the-art YOLOv8 architecture for object detection tasks, specifically tailored to smoke and wildfire identification with a focus on agricultural and environmental safety. All available versions of YOLOv8 were initially fine-tuned on a domain-specific dataset that included a variety of scenarios crucial for comprehensive agricultural monitoring. The 'large' version (YOLOv8l) was selected for further hyperparameter tuning based on its performance metrics. This model underwent a detailed hyperparameter optimization using the One Factor At a Time (OFAT) methodology, concentrating on key parameters such as learning rate, batch size, weight decay, epochs, and optimizer. Insights from the OFAT study were used to define search spaces for a subsequent Random Search (RS). The final model derived from RS demonstrated significant improvements over the initial fine-tuned model, increasing overall precision by 1.39%, recall by 1.48%, F1-score by 1.44%, mAP@0.5 by 0.70%, and mAP@0.5:0.95 by 5.09%. We validated the enhanced model's efficacy on a diverse set of real-world images reflecting various agricultural settings, confirming its robustness in detecting smoke and fire. These results underscore the model's reliability and effectiveness in scenarios critical to agricultural safety and environmental monitoring. This work, representing a significant advancement in the field of fire and smoke detection through machine learning, lays a strong foundation for future research and solutions aimed at safeguarding agricultural areas and natural environments.
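A minimal sketch of the Random Search stage described above, sampling hyperparameter combinations and fine-tuning YOLOv8l with the ultralytics API; the dataset YAML, search ranges, and trial budget are hypothetical placeholders informed only by the parameter list in the abstract.

```python
# Minimal sketch of random search over YOLOv8l training hyperparameters.
# Dataset file, ranges, and trial count are hypothetical placeholders.
import random
from ultralytics import YOLO

search_space = {
    "lr0": (1e-4, 1e-2),            # initial learning rate
    "weight_decay": (1e-5, 1e-3),
    "batch": [8, 16, 32],
    "epochs": [50, 100, 150],
    "optimizer": ["SGD", "Adam", "AdamW"],
}

best_map, best_cfg = 0.0, None
for trial in range(20):             # trial budget chosen arbitrarily here
    cfg = {
        "lr0": random.uniform(*search_space["lr0"]),
        "weight_decay": random.uniform(*search_space["weight_decay"]),
        "batch": random.choice(search_space["batch"]),
        "epochs": random.choice(search_space["epochs"]),
        "optimizer": random.choice(search_space["optimizer"]),
    }
    model = YOLO("yolov8l.pt")
    model.train(data="smoke_wildfire.yaml", **cfg)   # hypothetical dataset YAML
    metrics = model.val()                            # validate the fine-tuned weights
    score = metrics.box.map50                        # mAP@0.5 on the validation split
    if score > best_map:
        best_map, best_cfg = score, cfg

print("best mAP@0.5:", best_map, "with", best_cfg)
```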
Citations: 0
Hazelnut mapping detection system using optical and radar remote sensing: Benchmarking machine learning algorithms
Artificial Intelligence in Agriculture Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.05.001
Daniele Sasso, Francesco Lodato, Anna Sabatini, Giorgio Pennazza, Luca Vollero, Marco Santonico, Mario Merone
Abstract: Mapping hazelnut orchards can facilitate land planning and utilization policies, supporting the development of cooperative precision farming systems. The present work addresses the detection of hazelnut crops using optical and radar remote sensing data, presenting a comparative study of machine learning techniques. The proposed system utilizes multi-temporal data from the Sentinel-1 and Sentinel-2 datasets, extracted over several years and processed with cloud tools. We provide a dataset of 62,982 labeled samples, with 16,561 samples belonging to the 'hazelnut' class and 46,421 samples belonging to the 'other' class, collected in 8 heterogeneous geographical areas of the Viterbo province. Two comparative tests are conducted: first, we use a nested 5-fold cross-validation methodology to train, optimize, and compare different machine learning algorithms on a single area; in a second experiment, the algorithms are trained on one area and tested on the remaining seven geographical areas. The study demonstrates that AI analysis applied to Sentinel-1 and Sentinel-2 data is a valid technology for hazelnut mapping. The results show that Random Forest is the classifier with the highest generalizability, achieving the best performance in the second test with an accuracy of 96% and an F1 score of 91% for the 'hazelnut' class.
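A minimal sketch of nested 5-fold cross-validation with a Random Forest, the evaluation pattern and best-performing classifier named above; the feature matrix and labels are random placeholders standing in for the multi-temporal Sentinel-1/Sentinel-2 samples.

```python
# Minimal sketch of nested 5-fold CV: the inner loop tunes hyperparameters,
# the outer loop estimates generalization. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24))      # placeholder multi-temporal band/backscatter features
y = rng.integers(0, 2, size=1000)    # placeholder labels: 1 = hazelnut, 0 = other

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid,
                      cv=inner, scoring="f1")

scores = cross_val_score(search, X, y, cv=outer, scoring="f1")
print("nested-CV F1 per outer fold:", scores, "mean:", scores.mean())
```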
Citations: 0
InstaCropNet: An efficient Unet-Based architecture for precise crop row detection in agricultural applications
Artificial Intelligence in Agriculture Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.05.002
Zhiming Guo, Yuhang Geng, Chuan Wang, Yi Xue, Deng Sun, Zhaoxia Lou, Tianbao Chen, Tianyu Geng, Longzhe Quan
Abstract: Autonomous navigation in farmland is one of the key technologies for achieving autonomous management in maize fields. Among various navigation techniques, visual navigation using widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually devised heuristic rules, limiting their scalability. To simplify the solution and enhance its universality, we propose an innovative crop row annotation strategy. By modelling the strip-like structure of the crop row's central area, this strategy effectively avoids interference from the lateral growth of crop leaves. Based on this, we developed a deep learning network with a dual-branch architecture, InstaCropNet, which achieves end-to-end segmentation of crop row instances. Subsequently, through a row anchor segmentation technique, we accurately locate the positions of the different crop row instances and perform line fitting. Experimental results demonstrate that our method has an average angular deviation of no more than 2°, and the accuracy of crop row detection reaches 96.5%.
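A minimal sketch of the final line-fitting step described above: given a binary mask for one crop-row instance, sample an anchor point at regular image-row intervals and fit a straight line through the anchors. The mask here is synthetic rather than output from InstaCropNet, and the anchor spacing is an arbitrary choice.

```python
# Minimal sketch of row-anchor sampling and line fitting on one crop-row instance mask.
# The mask is synthetic; in practice it would come from the segmentation network.
import numpy as np

mask = np.zeros((480, 640), dtype=np.uint8)           # placeholder instance mask
rows = np.arange(100, 400)
mask[rows, (0.5 * rows + 50).astype(int)] = 1         # synthetic slanted "crop row"

anchors = []
for v in range(0, mask.shape[0], 10):                 # one anchor every 10 pixel rows
    cols = np.flatnonzero(mask[v])
    if cols.size:
        anchors.append((cols.mean(), v))              # (u, v) centre of the row at height v

anchors = np.array(anchors)
slope, intercept = np.polyfit(anchors[:, 1], anchors[:, 0], 1)   # u = slope * v + intercept
angle_deg = np.degrees(np.arctan(slope))
print(f"fitted crop row: u = {slope:.3f}*v + {intercept:.1f}, angle {angle_deg:.2f} deg")
```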
Citations: 0