Computers and Electronics in Agriculture: Latest Articles

TrackPlant3D: 3D organ growth tracking framework for organ-level dynamic phenotyping
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-19 | DOI: 10.1016/j.compag.2024.109435
Abstract: The extraction of dynamic plant phenotypes is highly important for understanding the process of plant phenotype formation and formulating growth management plans. Although rapid progress has been made in the efficiency and throughput of static phenotype analysis, dynamic growth tracking methods remain a key bottleneck for dynamic phenotyping. The major challenges in organ growth tracking include the nonrigid deformation of organ morphology during growth, the high frequency of growth events, and a lack of spatiotemporal datasets. Inspired by the way a human naturally compares two similar three-dimensional objects by overlapping and aligning them, this study proposes an automatic organ growth tracking framework, TrackPlant3D, for time-series crop point clouds. The unsupervised framework takes crop point clouds at multiple growth stages with organ instance labels as input and produces point clouds with consistent organ labels as organ-level growth tracking outputs. Compared with two other state-of-the-art organ tracking methods, TrackPlant3D has better tracking performance and greater adaptability across species. In an experiment on maize, the long-term and short-term tracking accuracies of TrackPlant3D both reached 100%. For sorghum, tobacco and tomato crops, the long-term tracking accuracies were 81.25%, 64.13% and 86.75%, respectively, and the short-term tracking accuracies were all greater than 85.00%, demonstrating satisfactory tracking performance. Moreover, TrackPlant3D is robust against frequent organ growth events and adaptable to different types of segmentation inputs as well as to inputs with inclination and rotation disturbances. We also demonstrate that the TrackPlant3D framework has the potential to be incorporated into a fully automatic dynamic phenotyping pipeline that integrates organ segmentation, organ tracking, and dynamic monitoring of phenotypic traits such as individual leaf length and leaf area. This study may contribute to the development of dynamic phenotyping, digital agriculture, and the factory production of plants.
Citations: 0
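The core idea described above, matching organs across growth stages by spatial overlap, can be illustrated with a toy label-propagation routine. The sketch below matches organ centroids between two stages with the Hungarian algorithm; it is a simplified illustration under assumed inputs, not the TrackPlant3D implementation.

```python
# A minimal sketch of the "overlap and align" tracking idea: organs segmented at two
# growth stages are matched by centroid distance so labels stay consistent over time.
# Illustrative simplification only; the point clouds and labels are placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

def organ_centroids(points, labels):
    """Return per-organ centroids as a dict {organ_id: (3,) centroid}."""
    return {k: points[labels == k].mean(axis=0) for k in np.unique(labels)}

def propagate_labels(prev_points, prev_labels, curr_points, curr_labels):
    """Relabel organs at the current stage so they match the previous stage."""
    prev_c = organ_centroids(prev_points, prev_labels)
    curr_c = organ_centroids(curr_points, curr_labels)
    prev_ids, curr_ids = list(prev_c), list(curr_c)
    # Cost matrix: Euclidean distance between organ centroids across stages.
    cost = np.array([[np.linalg.norm(prev_c[i] - curr_c[j]) for j in curr_ids]
                     for i in prev_ids])
    rows, cols = linear_sum_assignment(cost)
    mapping = {curr_ids[c]: prev_ids[r] for r, c in zip(rows, cols)}
    # Unmatched organs (new growth events) receive a fresh label.
    next_id = max(prev_ids) + 1
    relabelled = curr_labels.copy()
    for cid in curr_ids:
        new_id = mapping.get(cid)
        if new_id is None:
            new_id, next_id = next_id, next_id + 1
        relabelled[curr_labels == cid] = new_id
    return relabelled

# Toy usage with random point clouds standing in for two growth stages.
rng = np.random.default_rng(0)
p0 = rng.normal(size=(200, 3)); l0 = rng.integers(0, 3, 200)
p1 = p0 + 0.05; l1 = (l0 + 1) % 3          # same organs, permuted labels
print(np.unique(propagate_labels(p0, l0, p1, l1)))
```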
Camouflaged cotton bollworm instance segmentation based on PVT and Mask R-CNN
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-18 | DOI: 10.1016/j.compag.2024.109450
Abstract: Many pests change their appearance color to blend seamlessly with the surrounding environment in agricultural ecosystems, rendering themselves virtually invisible. When a pest's color and texture resemble the background, accurately identifying and detecting it becomes challenging. In this study, we construct a new dataset focusing on the cotton bollworm and conduct in-depth optimization and improvement of an instance segmentation model based on the Pyramid Vision Transformer (PVT) and Mask R-CNN. To better capture the features of camouflaged organisms, the proposed model uses the PVT as a feature extraction network and Mask R-CNN for instance segmentation. We also introduce an overlapping image patch embedding structure and further incorporate a feed-forward network with depthwise separable convolution. These improvements enhance the PVT's capability to capture global and intricate features and significantly boost the accuracy of instance segmentation. Considering the computational efficiency demands of real-time agricultural applications, we introduce a linear spatial-reduction attention mechanism that effectively reduces computational complexity. The experimental results show that the model achieves a detection accuracy of 89.7% and a segmentation accuracy of 89.2% for camouflaged cotton bollworms.
Citations: 0
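One of the architectural changes named in the abstract, a feed-forward network with depthwise separable convolution, can be sketched as a small PyTorch module. Layer sizes and the interface below are illustrative assumptions rather than the paper's code.

```python
# A hedged sketch of a transformer feed-forward block with a depthwise separable
# convolution inserted between its two linear layers; dimensions are illustrative.
import torch
import torch.nn as nn

class DWConvFFN(nn.Module):
    def __init__(self, dim=64, hidden_dim=256):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        # Depthwise (groups=channels) followed by pointwise 1x1 convolution.
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1, groups=hidden_dim)
        self.pwconv = nn.Conv2d(hidden_dim, hidden_dim, 1)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, h, w):
        # x: (batch, h*w, dim) token sequence from a PVT-style stage.
        x = self.fc1(x)
        b, n, c = x.shape
        x = x.transpose(1, 2).reshape(b, c, h, w)      # tokens -> feature map
        x = self.pwconv(self.act(self.dwconv(x)))
        x = x.flatten(2).transpose(1, 2)                # feature map -> tokens
        return self.fc2(self.act(x))

tokens = torch.randn(2, 16 * 16, 64)                    # toy 16x16 feature map
print(DWConvFFN()(tokens, 16, 16).shape)                # torch.Size([2, 256, 64])
```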
Path planning of manure-robot cleaners using grid-based reinforcement learning
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-18 | DOI: 10.1016/j.compag.2024.109456
Abstract: The use of a robot cleaner for manure removal improves housing conditions for dairy cows in the face of labor shortages. However, current robot cleaners follow pre-programmed fixed routes without considering the dynamic behaviors of cows. This cleaning approach is less efficient and leads to more cow-robot encounters or collisions, thus affecting animal welfare. To address these issues, this paper (1) developed heatmap models of cow locations and defecation behaviors; (2) proposed a dynamic path planning approach for the manure robot cleaner using grid-based reinforcement learning; (3) incorporated cow location information and defecation behavior into the path planning process; and (4) compared the performance of the proposed approach with two other cleaning methods: the fixed programmed cleaning currently used in practice and the ideal path produced by simulated annealing for the traveling salesman problem. The simulations mimic the situation in a barn at the Dairy Campus of Wageningen Livestock Research in Leeuwarden (the Netherlands). As expected, the best performance was achieved when the route was executed without cows present, resulting in no cow-robot collisions. With cows present, the proposed dynamic path planning strategy achieved a 67.6% reduction in cow-robot encounters while maintaining 85.4% of the cleaning performance of the current programmed fixed routes. Compared to the ideal path produced by simulated annealing for the traveling salesman problem, the proposed dynamic path planning approach achieved 5% better cleaning performance, at the cost of 25% more cow-robot encounters due to its longer working path. We conclude that the proposed grid-based reinforcement learning solution allows manure robots to clean barns efficiently with the least interference with cow traffic.
Citations: 0
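A toy version of grid-based reinforcement learning for a cleaning route is sketched below: tabular Q-learning on a small grid where visiting manure cells is rewarded and probabilistic cow encounters are penalised. The grid size, reward values, and cow-occupancy map are hypothetical, not the barn model used in the study.

```python
# Minimal tabular Q-learning sketch for a cleaning route; all quantities are toy values.
import numpy as np

rng = np.random.default_rng(1)
H, W = 5, 5
manure = rng.random((H, W)) < 0.3          # cells needing cleaning
cow_prob = rng.random((H, W)) * 0.2        # chance a cow occupies a cell
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((H, W, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    r, c = 0, 0
    cleaned = np.zeros_like(manure)
    for step in range(60):
        a = rng.integers(4) if rng.random() < eps else int(Q[r, c].argmax())
        dr, dc = actions[a]
        nr, nc = min(max(r + dr, 0), H - 1), min(max(c + dc, 0), W - 1)
        reward = 1.0 if (manure[nr, nc] and not cleaned[nr, nc]) else -0.05
        if rng.random() < cow_prob[nr, nc]:
            reward -= 2.0                   # cow-robot encounter penalty
        cleaned[nr, nc] |= manure[nr, nc]
        Q[r, c, a] += alpha * (reward + gamma * Q[nr, nc].max() - Q[r, c, a])
        r, c = nr, nc

print("greedy action grid:\n", Q.argmax(axis=2))
```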
Immersive human-machine teleoperation framework for precision agriculture: Integrating UAV-based digital mapping and virtual reality control
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-17 | DOI: 10.1016/j.compag.2024.109444
Abstract: In agricultural settings, the unstructured nature of certain production environments, along with the high complexity and inherent risks of production tasks, poses significant challenges to achieving full automation and effective on-site machine control. Remote control technology, which leverages human intelligence and precise machine movements, ensures operator safety and boosts productivity. Recently, virtual reality (VR) has shown promise in remote control applications by overcoming single-view limitations and providing three-dimensional information, yet most studies have not focused on agricultural settings. To bridge this gap, this study proposes a large-scale digital mapping and immersive human-machine teleoperation framework specifically designed for precision agriculture. A DJI unmanned aerial vehicle (UAV) was used for data collection, and a novel video segmentation approach based on feature points was introduced. To accommodate the variability of complex textures, the method uses an enhanced Structure from Motion (SfM) approach that integrates the open Multiple View Geometry (OpenMVG) framework with Local Features from Transformers (LoFTR). The enhanced SfM produces a point cloud map, which is further processed through Multi-View Stereo (MVS) to generate a complete map model. For control, a closed-loop system using TCP/IP for VR control and positioning of agricultural machinery was introduced. This system offers a fully vision-based method for immersive control, allowing operators to use VR technology for remote operations. The experimental results demonstrate that the digital map reconstruction algorithm developed in this study offers superior detail reconstruction, along with enhanced robustness and convenience. The user-friendly remote control method also shows advantages over traditional video-streaming-based remote operation, providing operators with a more comprehensive and immersive experience and a higher level of situational awareness.
Citations: 0
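The closed-loop TCP/IP control idea can be sketched as a minimal client that sends a command and reads back the machine state. The host, port, and JSON message fields below are hypothetical placeholders; the paper does not publish its protocol.

```python
# Hedged sketch of a TCP command/state round trip; message format is assumed, not the paper's.
import json
import socket

def send_command(host: str, port: int, steering: float, throttle: float) -> dict:
    """Send one control message and return the machine's reported state."""
    with socket.create_connection((host, port), timeout=1.0) as sock:
        msg = json.dumps({"steering": steering, "throttle": throttle}) + "\n"
        sock.sendall(msg.encode())
        reply = sock.makefile().readline()   # e.g. '{"x": ..., "y": ..., "yaw": ...}'
        return json.loads(reply)

# Example call (requires a listener implementing the same toy protocol):
# state = send_command("192.168.1.50", 9000, steering=0.1, throttle=0.4)
```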
Improving soil moisture prediction with deep learning and machine learning models
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-14 | DOI: 10.1016/j.compag.2024.109414
Abstract: Reliable soil moisture (SM) data are critical for effective water resources management, yet accurate measurement and prediction of SM remain challenging. This study developed a deep learning regression network for sub-hourly SM prediction and compared its performance with traditional machine learning models, including the eXtreme gradient boosting (XGB), light gradient boosting (LGB), cat boosting (CB), random forest (RF), k-nearest neighbors (kNN), and long short-term memory (LSTM) models. Sub-hourly SM, electrical conductivity (EC), soil temperature (ST), and weather parameters were collected during research experiments conducted over two years (2020-2021 and 2021-2022) at the Tropical Research and Education Center (TREC), University of Florida. A network of SM sensors and a weather station were installed at the experimental site, which comprised 24 plots of green beans and sweet corn under full and three deficit irrigation treatments with three replications. Model performance metrics such as the coefficient of determination (r²) and the global performance indicator (GPI) were used to evaluate the models. Results showed that all ML and DL models performed more than satisfactorily in simulating the SM of green bean and sweet corn plots. The average testing r² and GPI of the ML models were 0.83 and 0.02 (green beans) and 0.85 and 0.02 (sweet corn). However, the XGB and LGB models outperformed the remaining ML and DL models. The testing r² and GPI of XGB were 0.86 and 0.014 for green beans, and 0.88 and 0.015 for sweet corn. The r² and GPI values for LGB were 0.85 and 0.014 for green beans, and 0.88 and 0.015 for sweet corn. Although the DL model took longer and required more resources to train, its performance was not as accurate as that of the XGB and LGB models; it did, however, outperform the LSTM model. The r² and RMSE of the LSTM model were 0.68 and 0.02 cm³ cm⁻³ for green beans and 0.75 and 0.02 cm³ cm⁻³ for sweet corn, respectively, whereas the r² and RMSE of the DL model were 0.84 and 0.015 cm³ cm⁻³ (green beans) and 0.85 and 0.02 cm³ cm⁻³ (sweet corn). The ML and DL models performed better in simulating the SM of sweet corn plots than that of green beans. Overall, these results confirm that ML and DL models can serve as alternative tools for SM prediction in agricultural fields, with potential applications in irrigation scheduling and water resources management.
Citations: 0
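A minimal sketch of the gradient-boosting setup that performed best in the study is shown below, using synthetic stand-ins for the sensor features (EC, soil temperature, weather) and illustrative hyperparameters rather than the tuned values from the paper.

```python
# Hedged sketch: XGBoost regression on synthetic features standing in for sensor data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))                       # EC, ST, and weather features (synthetic)
y = 0.25 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.01 * rng.normal(size=2000)  # SM (cm3/cm3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"r2={r2_score(y_te, pred):.3f}  rmse={mean_squared_error(y_te, pred) ** 0.5:.4f}")
```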
Spectral-based estimation of chlorophyll content and determination of background interference mechanisms in low-coverage rice
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-13 | DOI: 10.1016/j.compag.2024.109442
Abstract: The chlorophyll content is a vital indicator of rice growth and nutritional status. However, estimating rice chlorophyll content using spectral-based techniques at the early tillering stage is challenging because of background interference. Using the energy conservation principle, this study explains the spectral variation and background interference mechanisms of clear, muddy, and green algae-covered backgrounds. We developed mathematical interference models for the three types of backgrounds and determined their degree of interference and mode of influence. We developed rice chlorophyll content estimation models for unclassified and classified (clear, muddy, and green algae-covered) backgrounds using 12 preprocessing, four wavelength selection, and three modeling methods, and we explored the importance of background classification. The optimal chlorophyll content estimation model for the clear background was SS+UVE+CNN, with R² and RMSE values of 0.786 and 13.191 in the training set and 0.741 and 15.327 in the test set, respectively; that for the muddy background was MSC+GA+CNN, with R² and RMSE values of 0.914 and 10.425 in the training set and 0.660 and 16.844 in the test set, respectively; and that for the green algae-covered background was DC+GA+CNN, with R² and RMSE values of 0.904 and 9.111 in the training set and 0.688 and 17.694 in the test set, respectively. Our study provides valuable insights into reducing and correcting background interference during proximal remote sensing data collection.
Citations: 0
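One of the preprocessing methods named in the optimal models, multiplicative scatter correction (MSC), can be sketched in a few lines. The spectra below are random placeholders; wavelength selection (UVE/GA) and the CNN regressor are omitted.

```python
# Hedged sketch of multiplicative scatter correction applied to a synthetic spectral matrix.
import numpy as np

def msc(spectra: np.ndarray) -> np.ndarray:
    """Multiplicative scatter correction: regress each spectrum on the mean spectrum."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, deg=1)
        corrected[i] = (s - intercept) / slope
    return corrected

spectra = np.random.default_rng(3).random((50, 200))   # 50 samples x 200 wavelengths
print(msc(spectra).shape)                               # (50, 200)
```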
Zero-shot image segmentation for monitoring thermal conditions of individual cage-free laying hens
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-13 | DOI: 10.1016/j.compag.2024.109436
Abstract: Body temperature is a critical indicator of the health and productivity of egg-laying chickens and other domesticated animals. Recent advancements in thermography allow precise surface temperature measurement without physical contact, reducing animal stress from human handling. Gold-standard temperature analysis via thermography requires manual selection of a limited set of points on the object of interest, which can be time-consuming and inadequate for representing the comprehensive thermal profile of a chicken's body. The objective of this study was to leverage and optimize a zero-shot artificial intelligence technology for the automatic segmentation of individual cage-free laying hens in thermal images, providing insights into their overall thermal conditions. A zero-shot image segmentation model (Segment Anything, "SAM") was modified by replacing manual selection of target points with automatic selection of the initial point using pre-processing techniques (e.g., thresholding) in each thermal image. The model also incorporated post-processing techniques integrated with a machine learning classifier to improve segmentation accuracy. Three versions of modified SAM models (SAM, FastSAM, and MobileSAM), two common instance segmentation algorithms (YOLOv8 and Mask R-CNN), and two foundation segmentation models (U²-Net and ISNet) were comparatively evaluated to determine the optimal one for bird segmentation from thermal images. A total of 1,917 thermal images were collected from cage-free laying hens (Hy-Line W-36) at 77-80 weeks of age. The image dataset exhibited considerable variations, such as feathers, bird movement, body gestures, and the specific conditions of cage-free facilities. The experimental results demonstrate that the modified SAM not only surpassed the six other models (YOLOv8, Mask R-CNN, FastSAM, MobileSAM, U²-Net, and ISNet) but also outperformed the other modified SAM-based models (modified FastSAM and modified MobileSAM) in hen detection performance, achieving a success rate of 84.4%, and in segmentation performance, with an intersection over union of 85.5%, recall of 91.0%, and an F1 score of 92.3%. The optimal model, the modified SAM, was pipelined to extract weekly statistics of the surface body temperature of individual laying hens in thermal images, including the averages (°C) of the mean (27.03, 27.04, 28.53, 26.68), median (26.27, 26.84, 28.28, 26.78), 25th percentile (25.33, 25.61, 27.26, 25.53), and 75th percentile (28.04, 27.95, 29.22, 27.55). Additional statistics of hen body surface temperature can be extracted from the segmentation results. The developed pipeline is a useful tool for automatically evaluating the thermal conditions of individual birds.
Citations: 0
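A simplified stand-in for the pipeline's automatic prompting and temperature summarisation is sketched below: the hottest pixel serves as the point prompt and a thresholded connected component serves as a surrogate for the SAM mask. The synthetic image and threshold are placeholders; the real pipeline passes the prompt to a modified SAM model.

```python
# Hedged sketch: automatic point prompt via thresholding, surrogate mask, temperature stats.
import numpy as np
from scipy import ndimage

temps = np.random.default_rng(7).normal(24.0, 1.0, size=(120, 160))
temps[40:80, 60:110] += 4.0                                  # warm region standing in for a hen

prompt = np.unravel_index(np.argmax(temps), temps.shape)     # automatic point prompt
mask = temps > 26.5                                          # surrogate segmentation (not SAM)
labels, _ = ndimage.label(mask)
bird = labels == labels[prompt]                              # component containing the prompt

body = temps[bird]
print({"mean": round(float(body.mean()), 2),
       "median": round(float(np.median(body)), 2),
       "p25": round(float(np.percentile(body, 25)), 2),
       "p75": round(float(np.percentile(body, 75)), 2)})
```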
A review of aquaculture: From single modality analysis to multimodality fusion
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-13 | DOI: 10.1016/j.compag.2024.109367
Abstract: Efficient management and accurate monitoring are crucial for the sustainable development of the aquaculture industry. Traditionally, monitoring methods have relied on single-modality approaches (e.g., physical sensors, vision, and audio). However, these methods are limited by environmental interference and an inability to comprehensively capture the complex characteristics of aquatic organisms, leading to data bias, low identification accuracy, and poor model portability across different settings. In contrast, multimodal fusion technologies have emerged as a promising solution for intelligent aquaculture due to their strong environmental adaptability, information complementarity, and high generalization ability. Despite this potential, there is a lack of comprehensive literature reviewing the transition from single-modal to multimodal systems in aquaculture. This paper addresses this gap by presenting a systematic review of both single-modal and multimodal fusion technologies in aquaculture over the past two decades. We analyze the strengths and limitations of each approach, focusing on four key areas: water quality monitoring, feeding behavior analysis, disease prediction, and biomass estimation. Through this comprehensive analysis, we provide theoretical and practical insights into the application of multimodal fusion technology in aquaculture, highlighting its potential to enhance efficiency and sustainability while overcoming current limitations.
Citations: 0
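The late-fusion idea surveyed in this review can be illustrated with a small PyTorch module that encodes two modalities separately and concatenates them before a shared head. Dimensions, layers, and the classification task are illustrative assumptions only.

```python
# Hedged sketch of late fusion for two modalities (e.g., water-quality sensors and image features).
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, sensor_dim=8, image_dim=128, hidden=64, n_classes=3):
        super().__init__()
        self.sensor_enc = nn.Sequential(nn.Linear(sensor_dim, hidden), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, sensors, image_feats):
        fused = torch.cat([self.sensor_enc(sensors), self.image_enc(image_feats)], dim=-1)
        return self.head(fused)

model = LateFusion()
logits = model(torch.randn(4, 8), torch.randn(4, 128))
print(logits.shape)                                   # torch.Size([4, 3])
```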
Determining optimal nitrogen concentration intervals throughout lettuce growth using fluorescence parameters
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-12 | DOI: 10.1016/j.compag.2024.109438
Abstract: The commonly used universal nutrient solution formula for facility-grown lettuce leads to excessive nitrogen fertilizer usage, low utilization efficiency, and severe environmental pollution. This formula keeps the nitrogen fertilizer concentration consistently high throughout the growth stages of lettuce, which is not conducive to lettuce growth because its nitrogen needs vary across developmental stages. To address these inefficiencies, this study introduces a method for determining appropriate interval limits for nitrogen concentration regulation in greenhouse lettuce cultivation based on chlorophyll fluorescence parameters. A single-factor experiment was designed to gather a dataset of chlorophyll fluorescence and biomass parameters at varying nitrogen concentrations and growth stages. Initial findings using maximal information coefficient correlation analysis indicated that no single fluorescence parameter alone was sufficient for optimal regulation. Thus, the analytic hierarchy process was employed to dynamically determine the weights for comprehensive fluorescence parameters. The U-chord curvature method was then used to calculate the upper and lower interval limits of the response curve. The Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) confirmed the rationality of the nitrogen concentration intervals for the different stages, which achieved the highest comprehensive scores. Implementing these intervals led to a 49.7% reduction in nitrogen fertilizer usage with no significant difference in dry weight at the lower limit, and a 36.2% reduction with a 9.4% increase in dry weight at the upper limit, compared with the universal nutrient solution formula. This approach significantly reduces the use of ineffective nitrogen fertilizer while maintaining crop yield, offering a more environmentally friendly and efficient method of nitrogen management for lettuce cultivation in greenhouses.
Citations: 0
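The analytic hierarchy process step mentioned above can be sketched as computing weights from the principal eigenvector of a pairwise comparison matrix, together with a consistency check. The 3x3 matrix below is a hypothetical example, not the paper's comparison data.

```python
# Hedged sketch of AHP weighting: principal eigenvector of a pairwise comparison matrix.
import numpy as np

pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])          # hypothetical comparisons of 3 parameters

eigvals, eigvecs = np.linalg.eig(pairwise)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency check (random index 0.58 for a 3x3 matrix).
ci = (eigvals.real.max() - 3) / (3 - 1)
print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
```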
Cyber security in smart agriculture: Threat types, current status, and future trends
IF 7.7 | Zone 1 (Agricultural and Forestry Sciences)
Computers and Electronics in Agriculture | Pub Date: 2024-09-12 | DOI: 10.1016/j.compag.2024.109401
Abstract: Smart agriculture (SA), which combines the Internet of Things (IoT) with a variety of smart devices including unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and computing systems, is an emerging technology that shows how far the agricultural sector has progressed. The use of edge computing devices on farms has grown over the past decades, increasing yields by improving resource use efficiency through the utilization of temporal, spatial, and individual farm data. With the growing adoption of digital technology, the agricultural sector now offers tools and services for retaining, storing, and analyzing the vast amounts of data produced by smart agricultural systems. However, this industry is increasingly vulnerable to cyber security risks due to its growing reliance on technology. This article presents a comprehensive assessment of the state of the art in SA and current cyber security concerns. In addition, it examines the structural framework of SA, thoroughly addressing the major security threats at each layer. The study also provides a complete overview of major developments and future research directions in agricultural cyber security for SA. These insights are intended to encourage cyber security researchers to propose more creative and innovative ideas in the future.
Citations: 0