{"title":"Speed control of an autonomous electric vehicle for orchard spraying","authors":"Yoshitomo Yamasaki, Kazunobu Ishii, Noboru Noguchi","doi":"10.1016/j.compag.2025.110419","DOIUrl":"10.1016/j.compag.2025.110419","url":null,"abstract":"<div><div>We developed an autonomous electric vehicle for orchard spraying, termed a spraying robot. Traveling resistance varies depending on vehicle weight, the front sideslip angle, and surface slope. The vehicle weight must change while traveling, especially for the spraying robot. To adapt to changes in those resistances, it is necessary to develop a speed controller. This research focused on rolling and slope resistance as a traveling resistance, which depends on the vehicle weight. We modeled the resistance and developed a feedforward controller with a proportional-integral-derivative (PID) feedback controller. The developed controller (FF-PID) was compared with a simple PID controller in simulation. The FF-PID was verified to be more rapid and stable response than the PID. Moreover, the FF-PID responded adaptively when the vehicle weight changed. Compared to the PID, the FF-PID reduced the error to the target speed by 50 % during sideslip angle changes and by 48 % during slope angle changes. Finally, we simulated a spraying task based on actual traveling data in a vineyard, factoring in the vehicle weight, steering angle, and slope angle change. The results showed that the FF-PID reduced error by 32 %. This research improved the performance of the spraying robot’s speed controller by modeling traveling resistance in an orchard environment.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110419"},"PeriodicalIF":7.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting invasive insects using Uncrewed Aerial Vehicles and Variational AutoEncoders","authors":"Henry Medeiros , Amy Tabb , Scott Stewart , Tracy Leskey","doi":"10.1016/j.compag.2025.110362","DOIUrl":"10.1016/j.compag.2025.110362","url":null,"abstract":"<div><div>Invasive insect pests, such as the brown marmorated stink bug (BMSB), cause significant economic and environmental damage to agricultural crops. To mitigate damage, entomological research to characterize insect behavior in the affected regions is needed. A component of this research is tracking insect movement with mark-release-recapture (MRR) methods. A common type of MRR requires marking insects with a fluorescent powder, releasing the insects into the wild, and searching for the marked insects using direct observations aided by ultraviolet (UV) flashlights at suspected destination locations. This involves a significant amount of labor and has a low recapture rate. Automating the insect search step can improve recapture rates, reducing the amount of labor required in the process and improving the quality of the data. We propose a new MRR method that uses an uncrewed aerial vehicle (UAV) to collect video data of the area of interest. Our system uses a UV illumination array and a digital camera mounted on the bottom of the UAV to collect nighttime images of previously marked and released insects. We propose a novel unsupervised computer vision method based on a Convolutional Variational Auto Encoder (CVAE) to detect insects in these videos. We then associate insect observations across multiple frames using ByteTrack and project these detections to the ground plane using the UAV’s flight log information. This allows us to accurately count the real-world insects. Our experimental results show that our system can detect BMSBs with an average precision of 0.86 and average recall of 0.87, substantially outperforming the current state of the art.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110362"},"PeriodicalIF":7.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AngusRecNet: Multi-module cooperation for facial anti-occlusion recognition in single-stage Angus cattle","authors":"Lijun Hu , Xu Li , Guoliang Li , Zhongyuan Wang","doi":"10.1016/j.compag.2025.110456","DOIUrl":"10.1016/j.compag.2025.110456","url":null,"abstract":"<div><div>In the context of the booming development of modern precision livestock farming, traditional cattle recognition methods exhibit clear limitations when faced with interference from feed residues, dirt, and other obstructions on the face. To address this, this study proposes an innovative deep learning framework, AngusRecNet, aimed at solving the facial recognition problem of Angus cattle under occlusion scenarios. The backbone network of AngusRecNet includes the innovatively designed Occlusion-Robust Feature Extraction Module (ORFEM) and the Vision AeroStack Module (VASM). By combining Asymmetric convolutions and fine spatial sampling, it effectively captures facial features. The neck structure is integrated with the Mamba architecture and the core ideas of DySample, leading to the design of the State Space Dynamic Sampling Feature Pyramid Network (SS-DSFPN), which enhances multi-scale feature extraction and fusion capabilities under occlusion scenarios. Additionally, the proposed Mish-Driven Channel-Spatial Transformer Head (MCST-Head), combining Channel Spatial Fusion Transformer (CSFT) and Smooth Depth Convolution (SDConv), optimizes feature representation and spatial perception in deep learning networks, significantly improving robustness and bounding box regression performance under complex backgrounds and occlusion conditions. Testing on the newly constructed AngusFace dataset demonstrates that AngusRecNet achieves a mAP50 of 94.2% in facial recognition tasks, showcasing its immense potential for application in precision livestock farming. The code can be obtained on GitHub: <span><span>https://github.com/HLJ11235/AngusRecNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110456"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Modal sensing for soil moisture mapping: Integrating drone-based ground penetrating radar and RGB-thermal imaging with deep learning","authors":"Milad Vahidi, Sanaz Shafian, William Hunter Frame","doi":"10.1016/j.compag.2025.110423","DOIUrl":"10.1016/j.compag.2025.110423","url":null,"abstract":"<div><div>Precise estimation of soil moisture is vital for refining irrigation practices, enhancing crop productivity, and promoting sustainable water use management. This study integrates Ground Penetrating Radar (GPR) and RGB-Thermal imaging datasets to enhance soil moisture prediction throughout the maize growing season, assessing moisture content at 10 and 30-cm soil depths. By leveraging the complementary strengths of GPR for subsurface moisture detection and RGB-Thermal imagery for surface and canopy analysis, we addressed common issues such as underestimation and overestimation often encountered with standalone datasets, including the weaknesses of GPR signal and its attenuation for deeper soil moisture monitoring as well as RGB-Thermal sensor lack in dealing with canopy, covering the<!--> <!-->soil surface. Advanced machine learning models, including ANN, AdaBoost, and PLS, were applied to evaluate the effects of thermal, structural and spectral variables on accuracy of moisture estimation. The best-performing model, the ANN trained with variables extracted from the 1D-CNN network, achieved an R<sup>2</sup> of 0.83 and an RMSE of 1.9 % at 10 cm depth. At 30 cm, the same model achieved an R<sup>2</sup> of 0.79 and an RMSE of 3.2 %, showing robust performance even at deeper soil levels. These results demonstrate the significant improvement in model performance when GPR data is integrated with RGB-Thermal data, reducing prediction errors in both high and low moisture regimes. Thermal variables, particularly Land Surface Temperature, exhibited a strong correlation with moisture content, especially at shallower depths. However, GPR variables were essential for detecting subsurface moisture at 30 cm depth, where RGB-Thermal data alone showed limitations. The integration of GPR and RGB-Thermal data resulted in a more comprehensive and accurate soil moisture estimation model, offering significant potential for optimizing water use in agricultural systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110423"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RTFVE-YOLOv9: Real-time fruit volume estimation model integrating YOLOv9 and binocular stereo vision","authors":"Wenlong Yi , Shuokang Xia , Sergey Kuzmin , Igor Gerasimov , Xiangping Cheng","doi":"10.1016/j.compag.2025.110401","DOIUrl":"10.1016/j.compag.2025.110401","url":null,"abstract":"<div><div>This study proposes a real-time fruit volume estimation model based on YOLOv9 (RTFVE-YOLOv9) and binocular stereo vision technology to address the challenges of low automation and insufficient accuracy in fruit volume measurement in complex orchard environments, particularly in scenarios with diverse canopy structures and severe branch-leaf occlusion. The model achieves effective recognition of occluded fruits through the innovative design of a Dual-Scale and Global–Local Sequence (DSGLSeq) module while incorporating a Multi-Head and Multi-Scale Self-Interaction (MHMSI) module to improve the detection performance of small fruit targets. Systematic validation experiments conducted on major economic fruit tree varieties, including apples, pears, pomelos, and kiwifruit, demonstrate that RTFVE-YOLOv9 improved the mean Average Precision (mAP) by 2.1%, 1.6%, 4%, and 3.8% respectively on the four fruit datasets compared to the baseline YOLOv9-c model. The model’s internal working mechanisms were thoroughly revealed through multi-dimensional evaluation, including ablation experiments, Heatmap Analysis, and Effective Receptive Field (ERF) analysis, providing a theoretical foundation for subsequent optimization. The research findings enrich the application theory of computer vision in smart agriculture and provide reliable technical support for achieving precise orchard management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110401"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143869863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart environments in digital agriculture: a systematic review and taxonomy","authors":"Flavio Rocha de Avila , Jorge Luis Victória Barbosa","doi":"10.1016/j.compag.2025.110393","DOIUrl":"10.1016/j.compag.2025.110393","url":null,"abstract":"<div><div>This paper presents a systematic review on smart environments in digital agriculture and highlights the increasing interest in integrating advanced technologies throughout the agricultural life cycle. The study emphasizes the importance of these environments in productivity enhancement and agriculture sustainability. Despite the advancements, there are challenges that remain unaddressed, though. Issues related to security, high implementation costs, and the availability of skilled human resources. These gaps seem to prevent the widespread adoption of smart agricultural technologies. The primary aim of this study is to identify current trends and advancements in smart environments for digital agriculture, as well as, to analyze the enabling technologies that appear to facilitate their implementation. The review was conducted by analyzing articles from five academic databases (ACM, IEEE, Scopus, ScienceDirect, and Springer), between 2019 and December 2024. Initially, 758 articles were identified, and after applying a selection protocol, 71 articles were retained for detailed analysis. The findings reveal that smart environments can significantly enhance agricultural productivity, reduce costs, and promote environmental sustainability. Key technologies identified include sensors, the Internet of Things (IoT), big data analytics, artificial intelligence, and communication networks. The study concludes that while there are promising advancements in smart agriculture, the challenges rely mostly on technology costs and the need of more supportive government policies to foster innovation and nourish its adoption in the sector. The integration of technologies is also crucial for creating efficient, sustainable, and productive agricultural practices.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110393"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143869864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CDIP-ChatGLM3: A dual-model approach integrating computer vision and language modeling for crop disease identification and prescription","authors":"Changqing Yan , Zeyun Liang , Han Cheng , Shuyang Li , Guangpeng Yang , Zhiwei Li , Ling Yin , Junjie Qu , Jing Wang , Genghong Wu , Qi Tian , Qiang Yu , Gang Zhao","doi":"10.1016/j.compag.2025.110442","DOIUrl":"10.1016/j.compag.2025.110442","url":null,"abstract":"<div><div>Deep learning (DL) models have shown exceptional accuracy in plant disease identification, yet their practical utility for farmers remains limited due to a lack of professional and actionable guidance. To bridge this gap, we developed CDIP-ChatGLM3, an innovative framework that synergizes a state-of-the-art DL-based computer vision model with a fine-tuned large language model (LLM), designed specifically for Crop Disease Identification and Prescription (CDIP). EfficientNet-B2, evaluated among 10 DL models across 48 diseases and 13 crops, achieved top performance with 97.97 % ± 0.16 % accuracy at a 95 % confidence level. Building on this, we fine-tuned the widely used ChatGLM3-6B LLM using Low-Rank Adaptation (LoRA) and Freeze-tuning, optimizing its ability to deliver precise disease management prescriptions. We compared two training strategies—multi-task learning (MTL) and Dual-stage Mixed Fine-Tuning (DMT)—using a different combination of domain-specific and general datasets. Freeze-tuning with DMT led to substantial performance gains, achieving a 33.16 % improvement in BLEU-4 and a 27.04 % increase in the Average ROUGE F-score, surpassing the original model and state-of-the-art competitors such as Qwen-max, Llama-3.1-405B-Instruct, and GPT-4o. The dual-model architecture of CDIP-ChatGLM3 leverages the complementary strengths of computer vision for image-based disease detection and LLMs for contextualized, domain-specific text generation, offering unmatched specialization, interpretability, and scalability. Unlike resource-intensive multimodal models that blend modalities, our dual-model approach maintains efficiency while achieving superior performance in both disease identification and actionable prescription generation.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110442"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate recognition and segmentation of northern corn leaf blight in drone RGB Images: A CycleGAN-augmented YOLOv5-Mobile-Seg lightweight network approach","authors":"Fei Wen , Hua Wu , XingXing Zhang , YanMin Shuai , JiaPeng Huang , Xin Li , JunYao Huang","doi":"10.1016/j.compag.2025.110433","DOIUrl":"10.1016/j.compag.2025.110433","url":null,"abstract":"<div><div>Northern corn leaf blight seriously threatens the health of maize crops in Northeast China. The complexity of field environments, coupled with variations in lighting conditions, poses significant challenges for accurate recognition and segmentation of this disease. To address these issues, this study employs CycleGAN networks and other methods to enhance the diversity of the dataset and proposes a lightweight neural network, Yolov5-Mobile-Seg, for the recognition and segmentation of lesion areas caused by Northern corn leaf blight. The Yolov5-Mobile-Seg network uses Mobilev2 as the backbone, integrating the Convolutional Block Attention Module (CBAM) and Fused MobileNet Bottleneck Convolution Module (FusedMBConv). This design enhances the network’s ability to capture critical information from images while minimizing the number of parameters. Additionally, by incorporating the Free Anchors mechanism, the algorithm’s adaptability to varying sizes of lesion areas is enhanced. Experimental results show that this network outperforms other approaches in identifying northern corn leaf blight, achieving an average precision (AP) of 88.8% in the recognition task and 88.0% in the segmentation task. Compared to the original network, the proposed network reduces the number of parameters by 30.6%, while improving the AP of both the recognition and segmentation tasks by 5.1%. This approach facilitates accurate recognition and efficient segmentation of lesion areas, significantly enhancing the precision and speed of damage assessment for northern corn leaf blight in maize fields.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110433"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital assessment of plant diseases: A critical review and analysis of optical sensing technologies for early plant disease diagnosis","authors":"Mafalda Reis Pereira , Renan Tosin , Filipe Neves dos Santos , Fernando Tavares , Mário Cunha","doi":"10.1016/j.compag.2025.110443","DOIUrl":"10.1016/j.compag.2025.110443","url":null,"abstract":"<div><div>The present critical literature review describes the state-of-the-art innovative proximal (ground-based) solutions for plant disease diagnosis, suitable for promoting more precise and efficient phytosanitary measures. Research and development of new sensors for this purpose are currently a challenge. Present procedures and diagnosis techniques depend on visual characteristics and symptoms to be initiated and applied, compromising an early intervention. Also, these methods were designed to confirm the presence of pathogens, which did not have the required high throughput and speed to support real-time agronomic decisions in field extensions. Proximal sensor-based systems are a reasonable tool for an efficient and economic disease assessment. This work focused on identifying the application of optical and spectroscopic sensors as a tool for disease diagnosis. Biophoton emission, fluorescence spectroscopy, laser-induced breakdown spectroscopy, multi- and hyperspectral spectroscopy (HS), nuclear magnetic resonance spectroscopy, Raman spectroscopy, RGB imaging, thermography, volatile organic compounds assessment, and X-ray fluorescence were described due to their relevant potential. Nevertheless, some techniques revealed a low technology readiness level (TRL). The main conclusions identify HS, single and multi-spatial point observation, as the most applied methods for early plant disease diagnosis studies (88%), combined with distinct feature selection (FeS), dimensionality reduction (DR), and modeling techniques. Vegetation indices (28%) and principal component analysis (19%) were the most popular FeS and DR approaches, highlighting the most relevant wavelengths contributing to disease diagnosis. In modeling, classification was the most applied technique (80%), used mainly for binary and multi-class health status identification. Regression was used in the remaining (21%) scientific works screened. The data was collected primarily in laboratory conditions (62%), and a few works were performed in field conditions (21%). Regarding the study’s etiological agent responsible for causing the disease, fungi (53%) and viruses (23%) were the most analyzed group of pathogens found in the literature. Overall, proximal sensors are suitable for early plant disease diagnosis before and after symptom appearance, presenting classification accuracies mostly superior to 71% and regression coefficients superior to 61%. Nevertheless, additional research regarding the study of specific host-pathogen interactions is necessary.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110443"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143869865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The application of integrated deep learning models with the Assistance of meteorological factors in forecasting major tobacco diseases","authors":"Yuan Chen , Changcheng Li , Can Wang , Yansong Xiao , Tianbo Liu , Jiaying Li , Kai Teng , Hailin Cai , Zhipeng Xiao , Hong Zhou , Xiangping Zhou , Weiai Zeng , Yongjun Du , Zheming Yuan , Qianjun Tang , Shaolong Wu","doi":"10.1016/j.compag.2025.110429","DOIUrl":"10.1016/j.compag.2025.110429","url":null,"abstract":"<div><div>Tobacco is a crucial economic crop that is highly susceptible to various diseases during its growth cycle. Developing accurate predictive models is essential for devising effective disease management strategies and reducing pesticide use. This study investigated the occurrence of four major tobacco diseases—tobacco mosaic virus, black shank, bacterial wilt, and brown spot—in seven key tobacco-producing regions of Hunan Province from 2009 to 2015. An ensemble deep learning method, BCNN-LSTM, was developed by integrating Bidirectional Convolutional Neural Networks with Long Short-Term Memory networks. The predictive performance of the BCNN-LSTM model was evaluated across three temporal prediction scales including one-step prediction, short-term prediction, and long-term prediction. In one-step prediction, the average <em>R</em><sup>2</sup> of BCNN‐LSTM reached 0.940—a 14.5 % improvement over the next-best model, CNN1D‐LSTM (0.821). For short‐term prediction, the BCNN‐LSTM attained an average <em>R</em><sup>2</sup> of 0.822, outperforming CNN1D‐LSTM by 11.7 % (0.736). In long‐term prediction, the BCNN‐LSTM achieved an average <em>R</em><sup>2</sup> of 0.838, marking a 34.2 % enhancement compared to the top competing Temporal Convolutional Network (TCN) model (0.625). Furthermore, across all three prediction scales, BCNN‐LSTM consistently delivered lower average Root Mean Squared Error (<em>RMSE</em>) values—3.880, 9.974, and 10.610, respectively. These results demonstrate that, relative to conventional forecasting models, the incorporation of a bidirectional convolution module in BCNN‐LSTM enables effective capture of the personalized temporal effects of meteorological factors and their interactions, thereby bolstering the model’s ability to represent nonlinear and dynamic characteristics. Notably, as a lightweight model (<10 MB), BCNN‐LSTM exhibits excellent scalability and adaptability, making it well-suited for integration into intelligent agricultural systems for large-scale monitoring of tobacco diseases.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110429"},"PeriodicalIF":7.7,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143870635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}