{"title":"Winter wheat yield estimation based on multisource remote sensing data: A dual-branch TCN-Transformer model and analysis of growth-stage feature transition mechanisms","authors":"Lei Zhang , Changchun Li , Guangsheng Zhang , Xifang Wu , Longfei Zhou , Lulu Chen , Yinghua Jiao , Guodong Liu , Wenyan Hei","doi":"10.1016/j.compag.2025.111014","DOIUrl":"10.1016/j.compag.2025.111014","url":null,"abstract":"<div><div>Timely and accurate acquisition of winter wheat yield information is crucial for ensuring food security and formulating agricultural policies. Although deep learning methods have become increasingly prominent in crop yield estimation, they often face challenges in simultaneously capturing both fine-grained local patterns and long-term temporal dependencies in time series data. By utilizing the enhanced vegetation index (EVI), leaf area index (LAI), and fraction of photosynthetically active radiation (FPAR) from MODIS, along with temperature (TEM) and precipitation (PRE) data from ERA5-Land, we propose a novel dual-branch hybrid model named TCN–Transformer (TCT), which synergistically integrates temporal convolutional network (TCN) and transformer architectures to concurrently capture both localized temporal patterns and long-term dependencies. Bayesian optimization was employed for automated hyperparameter tuning, enabling accurate estimation of winter wheat yield under diverse agricultural management conditions. The experimental results demonstrate that the proposed TCT model achieves optimal performance in estimating county-level winter wheat yields across North China on the test set (R2 = 0.80, RMSE = 645.75 kg/ha). It significantly outperforms the individual temporal models (the TCN, LSTM, and transformer) and other comparative models, including traditional machine learning methods (Ridge, RF, LightGBM, and XGBoost) and an advanced hybrid model (CNN-BiLSTM).
Specifically, compared with the individual models, the TCT improved R2 by 0.03 to 0.1 and reduced the RMSE by 29.33 to 156.07 kg/ha. It also outperformed CNN-BiLSTM (R2 = 0.78, RMSE = 668.23 kg/ha), achieving lower errors and more robust bias control. To elucidate the decision-making mechanism of the model, the Shapley additive explanations (SHAP) method was employed to analyze feature importance values across the study region and temporal feature weights at 8-day intervals. The results reveal that the EVI is the most representative feature, with the model accurately identifying the critical growth stages from T20 (February 26) to T28 (May 1), corresponding to the greening through milk phases. The feature contribution dynamics were further visualized, revealing a transition from FPAR dominance during early greening (T20–T22) to EVI dominance during jointing (T23–T25), EVI–PRE interactions during heading–milk (T26–T29), and finally LAI–PRE dominance at maturity (T30–T32). Furthermore, one-year leave-one-out cross-validation confirms the robustness of the TCT model; the simulated spatial distribution of yield for unseen years is consistent with the official yield data.
Additionally, the proposed interpretability framework not only performed excellently in this study but also demonstrated strong generalizability and flexibility, indicating its broad application potential in other crop types and agricultural domains.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 111014"},"PeriodicalIF":8.9,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
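The dual-branch idea behind the TCT — one branch for short-range temporal patterns, one for long-range dependencies, fused per time step before a regression head — can be sketched in a few lines. This is an illustrative numpy sketch, not the authors' implementation: the convolution kernel, dimensions, and fusion scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_conv1d(x, kernel):
    """Local branch: causal 1-D convolution over a (T, F) series (TCN-style)."""
    T, F = x.shape
    out = np.zeros_like(x)
    for t in range(T):
        for i, w in enumerate(kernel):
            if t - i >= 0:
                out[t] += w * x[t - i]  # only past/present time steps contribute
    return out

def self_attention(x):
    """Global branch: single-head scaled dot-product self-attention."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x

# A toy growing-season series: T=32 eight-day composites, F=5 features
# (EVI, LAI, FPAR, TEM, PRE), standardized.
x = rng.standard_normal((32, 5))

local = causal_conv1d(x, kernel=[0.5, 0.3, 0.2])   # short-range patterns
longrange = self_attention(x)                      # long-range dependencies
fused = np.concatenate([local, longrange], axis=1) # dual-branch fusion
print(fused.shape)
```

In the paper, each branch is a learned deep network and the fused sequence feeds a yield-regression head; the point here is only the shape of the computation — both branches consume the same (T, F) series and their outputs are concatenated per time step.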
{"title":"Multi-modal feature integration from UAV-RGB imagery for high-precision cotton phenotyping: A paradigm shift toward cost-effective agricultural remote sensing","authors":"Xiaoyu Zhi , Qiaomin Chen , Yingchun Han , Beifang Yang , Yaru Wang , Fengqi Wu , Shiwu Xiong , Yahui Jiao , Yunzhen Ma , Shilong Shang , Tao Lin , Yaping Lei , Yabing Li","doi":"10.1016/j.compag.2025.111002","DOIUrl":"10.1016/j.compag.2025.111002","url":null,"abstract":"<div><div>Cost-effective remote sensing solutions are critically needed to democratize precision agriculture technologies. While hyperspectral and LiDAR systems deliver high accuracy, their prohibitive costs limit widespread adoption. This study demonstrates that systematic multi-modal feature integration transforms standard UAV-based RGB imagery into a powerful phenotyping instrument, achieving crop trait prediction accuracy comparable to systems costing 10–50 times more. We developed a comprehensive framework integrating spectral indices, geometric parameters, and texture metrics from commodity RGB sensors to predict five critical cotton traits: leaf area index (LAI), intercepted photosynthetically active radiation (IPAR), above-ground biomass, lint yield, and seed cotton yield. The progressive integration approach employed Random Forest regression with four feature configurations: baseline color indices (CI<sub>base</sub>), refined color indices (CI<sub>ref</sub>), geometric parameters (CI<sub>ref</sub> + GP), and texture metrics (CI<sub>ref</sub> + GP + T). Field experiments across three trials over two growing seasons (2022–2023) with varying genotypes, planting densities, and sowing dates provided 2,126 ground truth measurements for model development and validation. 
The optimal multi-modal model achieved R<sup>2</sup> = 0.97 for IPAR (rRMSE = 6 %), R<sup>2</sup> = 0.91 for LAI (rRMSE = 15 %), and R<sup>2</sup> = 0.85 for biomass (rRMSE = 32 %), with lint yield and seed cotton yield demonstrating R<sup>2</sup> values of 0.92 and 0.77, respectively. Variance partitioning analysis revealed texture features as the dominant contributor (16.2 % ± 7.1 %), followed by spectral indices (9.1 % ± 4.2 %) and geometric parameters (8.0 % ± 2.8 %), with substantial shared variance (45–65 %) indicating strong feature complementarity. Phenological analysis demonstrated that flowering-stage imagery outperformed boll opening stage measurements, while stage-general models showed superior robustness. Cross-temporal validation confirmed model generalizability, with trial-general models achieving R<sup>2</sup> values of 0.91–0.97 for IPAR across diverse environmental conditions. The framework enables sub-meter spatial resolution trait mapping while maintaining operational simplicity and cost-effectiveness, demonstrating that systematic feature engineering can democratize high-precision phenotyping technologies for broader agricultural applications.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 111002"},"PeriodicalIF":8.9,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
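The progressive feature-integration protocol — a baseline feature set, then adding geometric and texture groups, with Random Forest regression at each step — can be illustrated on synthetic data. A hedged sketch: the feature groups, coefficients, and trait below are invented stand-ins for the paper's CI/GP/T configurations.

```python
# Progressive feature-set comparison with Random Forest regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
color = rng.normal(size=(n, 3))      # stand-in colour/spectral indices
geometry = rng.normal(size=(n, 2))   # stand-in canopy-height/cover metrics
texture = rng.normal(size=(n, 2))    # stand-in GLCM-style texture metrics

# Synthetic trait (e.g. LAI) that draws on all three modalities.
y = 2 * color[:, 0] + 1.5 * geometry[:, 0] + texture[:, 0] + 0.1 * rng.normal(size=n)

configs = {
    "CI": color,
    "CI+GP": np.hstack([color, geometry]),
    "CI+GP+T": np.hstack([color, geometry, texture]),
}
scores = {}
for name, X in configs.items():
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
    scores[name] = r2_score(yte, model.predict(Xte))

print(scores)  # adding modalities raises test R² on this synthetic trait
```

Because the synthetic trait genuinely depends on the geometric and texture columns, the held-out R² climbs as feature groups are added — the same qualitative pattern the study reports for real cotton traits.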
{"title":"A semi-supervised method for detecting female cucumber flowers in greenhouses based on unmanned aerial vehicle images","authors":"Xiangying Xu , Jing Jin , Li Xu , Zijian Zheng , Yonglong Zhang , Zhiping Zhang","doi":"10.1016/j.compag.2025.111009","DOIUrl":"10.1016/j.compag.2025.111009","url":null,"abstract":"<div><div>Flower quantity and flowering time often serve as reliable indicators of the growth and nutritional status of cucumber plants. In particular, the number of female flowers reflects the yield potential of the crop. Therefore, timely identification and accurate counting of female flowers are important measures in cucumber cultivation. To satisfy practical detection requirements, especially in greenhouses with high planting density, we present an efficient and accurate cucumber flower recognition framework, MYTS (Mamba-YOLO-based Teacher-Student model). A mini UAV (unmanned aerial vehicle) was used to enhance the convenience and efficiency of acquiring cucumber images. After data augmentation, we employed a three-stage deep learning pipeline to train our model, i.e., supervised learning, self-distillation, and semi-supervised learning. Experimental results demonstrate that female flower detection with MYTS achieves precision, recall, and mAP values of 90.6%, 89.7%, and 95.2%, respectively. The mAP value outperforms leading models such as YOLO and Mamba-YOLO by 5.7% and 5.1%, respectively. Ablation experiments reveal that multiple attention mechanisms significantly improve model performance in stage 1, by 3.5% in mAP. Additionally, compared to the original supervised models, the three-stage pipeline enhances the mAP values of all investigated models by 1.4% to 6.2%. Specifically, the MYTS model shows a 1.6% improvement over its supervised counterpart.
In the future, further exploration should be conducted to apply the MYTS model to cucumber yield estimation.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 111009"},"PeriodicalIF":8.9,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
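The semi-supervised stage of such a three-stage pipeline typically follows the familiar teacher–student pseudo-labelling pattern: a teacher trained on the labelled data labels the unlabelled pool, and only its confident predictions are kept to train the student. A generic scikit-learn sketch on toy data — the classifiers, confidence threshold, and data are illustrative assumptions, not the Mamba-YOLO-based MYTS model itself.

```python
# Teacher–student pseudo-labelling on synthetic, well-separated blobs.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier

X, y = make_blobs(n_samples=600, centers=[[-4, -4], [4, 4]],
                  cluster_std=1.5, random_state=1)
X_lab, y_lab = X[:60], y[:60]   # small labelled set
X_unlab = X[60:]                # large unlabelled pool

# Stage 1: supervised teacher on the labelled data only.
teacher = RandomForestClassifier(random_state=0).fit(X_lab, y_lab)

# Semi-supervised stage: keep only confident pseudo-labels.
proba = teacher.predict_proba(X_unlab)
conf = proba.max(axis=1)
keep = conf > 0.9
X_pseudo, y_pseudo = X_unlab[keep], proba.argmax(axis=1)[keep]

# Student trains on labelled + confident pseudo-labelled samples.
student = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_lab, X_pseudo]), np.concatenate([y_lab, y_pseudo]))

acc = (student.predict(X) == y).mean()
print(f"student accuracy: {acc:.2f}")
```

The confidence gate is the crux: admitting low-confidence pseudo-labels would feed the student the teacher's mistakes, which is why detection pipelines like MYTS pair this step with distillation and data augmentation.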
{"title":"Integrated measurement method for field surface topography and tillage depth in rotary tillage operations","authors":"Gaolong Chen, Yuqi Chen, Zhicheng Huang, Jingting Wang, Yufei Deng, Pei Wang, Runmao Zhao, Lian Hu","doi":"10.1016/j.compag.2025.111000","DOIUrl":"10.1016/j.compag.2025.111000","url":null,"abstract":"<div><div>Field surface topography and tillage depth provide crucial information for guiding crop production. However, measuring field surface topography and tillage depth separately increases production costs. To address these issues, this study proposes an integrated measurement method for field surface topography and tillage depth in rotary tillage operations. Based on the operational characteristics of the rotary tiller, a simultaneous measurement method for the field surface and tillage bottom-layer topography (FS-TBLSM) was proposed. Building on this, a method for arranging grid points, referred to as the directional adaptive gridding method in plane topography (DAG-PT), was developed, and a sample-approximated Gaussian process regression (SA-GPR) algorithm was used to estimate the field surface topography height and tillage depth at a given grid point. The accuracy of these methods was evaluated through verification and field tests. The verification results showed that the FS-TBLSM method achieved a static root mean square error (RMSE) of less than 15.00 mm along all three axes, with dynamic RMSEs below 20.00 mm, confirming its effectiveness and good dynamic tracking capability. Further field test results indicated that the field surface topography measured using the FS-TBLSM method aligned with the true topography. The measured field surface topography height exhibited an average absolute error (AAE) of 18.13 mm and an RMSE of 20.58 mm, validating the accuracy and reliability of this method for field surface topography measurement.
Using true surface topographic height and tillage depth at 20 points as references, an AAE and RMSE of 17.13 and 17.95 mm, respectively, were obtained for surface topographic height estimation; estimated tillage depth exhibited an AAE and RMSE of 14.52 and 16.49 mm, respectively. These results demonstrate that the SA-GPR algorithm can accurately estimate the field surface topography height and tillage depth after rotary tillage operations. The integrated measurement method performs the measurement in a single operation, reducing the number of field operations by 50 %, saving an estimated 15.57 kg/ha in fuel consumption. Additionally, this study provides key inputs for leveling operations, including setting the base height, calculating earthwork volume, and planning paths. It also supports active control of seeding depth and provides references for yield assessment.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 111000"},"PeriodicalIF":8.9,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
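The grid-point estimation step can be mimicked with an off-the-shelf Gaussian process regressor: fit surface and bottom-layer heights from scattered along-path samples, predict both at the grid points, and difference them for tillage depth. This is a minimal stand-in for SA-GPR — the sample approximation and the DAG-PT grid layout are abstracted away, and the synthetic field, kernel, and noise levels are assumptions.

```python
# GP interpolation of surface/bottom-layer height and derived tillage depth.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

# Scattered measurements along the machine's path: (x, y) in metres, height in mm.
pts = rng.uniform(0, 10, size=(120, 2))
surface = 50 * np.sin(pts[:, 0] / 3) + 30 * np.cos(pts[:, 1] / 4) + rng.normal(0, 5, 120)
bottom = surface - 150 + rng.normal(0, 8, 120)  # ~150 mm nominal tillage depth

kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=25.0)
gp_surf = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pts, surface)
gp_bot = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pts, bottom)

# Regular grid points (a square grid stands in for the DAG-PT arrangement).
gx, gy = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
grid = np.column_stack([gx.ravel(), gy.ravel()])

h_surf = gp_surf.predict(grid)
h_bot = gp_bot.predict(grid)
depth = h_surf - h_bot  # tillage depth = surface height minus bottom-layer height
print(f"mean estimated tillage depth: {depth.mean():.1f} mm")
```

Differencing two smoothed fields rather than the raw noisy samples is what keeps the depth estimate stable at grid points that were never directly measured.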
{"title":"PRINCE: Advanced classification algorithm for rice grain recognition in clustered images","authors":"Bidong Chen , Lingui Li , Han Zhu , Meijuan Tan , Guanhua Liu , Haiyang Chi , Xu Yang , Yapeng Wang","doi":"10.1016/j.compag.2025.110949","DOIUrl":"10.1016/j.compag.2025.110949","url":null,"abstract":"<div><div>With the rapid development of agriculture, the number of paddy (Oryza sativa L.) varieties is increasing. However, accurately recognizing the variety of rice grain (dehusked paddy) remains a significant challenge due to occlusion and visual similarity in image recognition. To address the rice grain recognition problem in clustered images, we propose a novel precision rice grain identification and classification engine (PRINCE) architecture for high-similarity clustered rice grain images. First, we pioneer the exploration and implementation of the SAM model in rice grain analysis, achieving zero-shot semantic segmentation of clustered rice grain images with diverse morphological masks. Second, we design a dual-layer filter (D-Filter), where Filter-I is a threshold-controlled quantitative analysis method for discrete rice grain morphology that calibrates the morphological integrity of rice grain masks, and Filter-II is a neural network classifier that selects complete rice grain mask images from complex mask data. Finally, we integrate dual transfer learning and pre-trained model fine-tuning (D-FTL) to train a classification model that accurately recognizes twelve visually indistinguishable discrete rice grain varieties, achieving a weighted F1-score of 82.29%, Top-1 accuracy of 82.238%, and an area under the curve (AUC) of 0.99. Extensive experimental results show that the proposed PRINCE architecture outperforms seven existing mainstream classification models in terms of accuracy, precision, and recall.
Our research demonstrates practical significance in rice variety identification, cooking parameter optimization, and adulteration detection, establishing a novel framework for intelligent grain assessment and optimal cooking outcomes.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 110949"},"PeriodicalIF":8.9,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
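Filter-I-style integrity screening reduces to simple morphology: score each candidate mask by area and a compactness ratio, and keep only plausibly complete grains. A hypothetical sketch — the thresholds and the bounding-box fill ratio below are stand-ins for the paper's quantitative morphology criteria, not its actual rules.

```python
# Rule-based screening of binary grain masks by area and bbox fill ratio.
import numpy as np

def mask_stats(mask):
    """Return (area in pixels, bounding-box fill ratio) of a binary mask."""
    area = int(mask.sum())
    if area == 0:
        return 0, 0.0
    ys, xs = np.nonzero(mask)
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return area, area / bbox

def is_complete_grain(mask, min_area=50, min_fill=0.5):
    """A complete grain is large enough and fills most of its bounding box."""
    area, fill = mask_stats(mask)
    return area >= min_area and fill >= min_fill

# A solid ellipse-like blob (plausible grain) vs. a thin sliver (fragment).
yy, xx = np.mgrid[0:40, 0:40]
grain = ((yy - 20) ** 2 / 180 + (xx - 20) ** 2 / 80) < 1.0
sliver = np.zeros((40, 40), dtype=bool)
sliver[5, 5:35] = True

print(is_complete_grain(grain), is_complete_grain(sliver))
```

Masks passing this cheap geometric gate would then go to Filter-II's learned classifier, so the neural network only has to arbitrate the ambiguous cases.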
{"title":"Smart irrigation systems in agriculture: An overview","authors":"Vikas Sharma , Gurleen Kaur , Sreethu S. , Vandna Chhabra , Rajeev Kashyap","doi":"10.1016/j.compag.2025.111008","DOIUrl":"10.1016/j.compag.2025.111008","url":null,"abstract":"<div><div>Smart irrigation systems represent a transformative solution to the pressing challenges of water scarcity, climate variability, and the demand for sustainable agricultural intensification. By integrating advanced technologies such as the Internet of Things (IoT), Wireless Sensor Networks (WSNs), cloud computing, and Artificial Intelligence (AI), these systems enable real-time, data-driven monitoring and control of irrigation practices. This review provides a comprehensive overview of the architecture, core technologies, and communication protocols that support smart irrigation, with a specific emphasis on their role in enhancing crop productivity, improving water use efficiency, and fostering climate-resilient agricultural systems. The integration of AI and Machine Learning (ML) models in irrigation scheduling is critically examined, highlighting commonly used algorithms, their applications, accuracy, and associated limitations. Furthermore, the review discusses key practical challenges, including the selection criteria and limitations of various technologies, particularly in the context of smallholder farming systems. 
Through recent innovations and case studies, this work underscores the potential of smart irrigation systems to revolutionize water management in agriculture.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 111008"},"PeriodicalIF":8.9,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nextv2-DETR: lightweight real-time classification model of potatoes based on improved RT-DETR for mobile deployment","authors":"Xiang Kong, Fei Liu, Yingsi Wu, Lihe Wang, Wenxue Dong, Xuan Zhao","doi":"10.1016/j.compag.2025.110996","DOIUrl":"10.1016/j.compag.2025.110996","url":null,"abstract":"<div><div>To address the challenge of quickly and accurately identifying and localizing potatoes in a complex production environment, this study proposes a lightweight potato classification algorithm, Nextv2-DETR, with enhanced feature extraction capabilities. The backbone of the model employs a lightweight ConNextv2 and incorporates DSC to reduce the number of parameters while extracting potato features. The C2f module was enhanced with the BiFormer_attention mechanism, and a lightweight feature fusion network was designed. These modifications improved the extraction of potato features while maintaining a lightweight architecture. The average precision of the network rose from 93.3 % to 96.3 % after these refinements were implemented. Concurrently, the model’s size, the floating-point operations required to process a single image, and the detection time were reduced to 43.8 %, 31.1 %, and 94.4 % of their initial values, respectively. Mainstream object detection algorithms were also trained on the dataset, and their results were compared with those of the proposed algorithm. Compared to leading detection algorithms such as FAST-RCNN, YOLOv7x, and YOLOXs, and lightweight models like YOLOv5s and YOLOv8s, the proposed method achieves superior accuracy with substantially fewer parameters.
Operating at 70 FPS, the model provides a robust solution for integrating visual robotics into potato harvesting, demonstrating the potential to enhance agricultural productivity and efficiency.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 110996"},"PeriodicalIF":8.9,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic measurement method of sheep body size based on 3D reconstruction and point cloud segmentation","authors":"Yuxing Wei , Lina Zhang , Fan Yang , Xinhua Jiang , Jue Zhang , Lin Zhu , Meijia Yu , Maoguo Gong","doi":"10.1016/j.compag.2025.110978","DOIUrl":"10.1016/j.compag.2025.110978","url":null,"abstract":"<div><div>Sheep body measurement data can comprehensively reflect body size, structural characteristics, growth status, and developmental relationships between different body parts. Automatically obtaining sheep body size parameters represents a critical technological advancement towards digital intelligent livestock farming. Currently, computer vision-based livestock body measurement techniques have garnered significant research interest due to their non-contact and efficient nature. However, sheep’s collective behavior and highly flexible body postures present challenges to vision-based measurement methodologies. To address these challenges, this paper proposes an automatic body size measurement method for sheep utilizing 3D reconstruction and point cloud segmentation technologies. Three KinectV2 cameras were strategically positioned in the passageway leading to the activity field to capture multi-view sheep images. Through multi-view image alignment, a 3D reconstruction was achieved to reproduce the sheep’s spatial morphology. The PointNet++ deep learning model was trained to develop an automatic sheep body segmentation model. Based on local pose normalization, keypoints were automatically detected by utilizing morphometric features, and body size parameters were then computed. Farm experiment results demonstrated the measurement accuracy: average relative errors for wither height, chest width, rump height, rump width, body length, and chest girth were 1.67 %, 3.63 %, 1.14 %, 2.71 %, 3.57 %, and 3.71 %, respectively.
Experimental validation demonstrates the proposed approach reduces hardware requirements for automated body‑size measurement while maintaining the accuracy of body measurements. This enhancement facilitates its broader adoption and improves animal welfare. Meanwhile, experimental outcomes revealed that irregular posture challenges the efficiency of automated body size measurements, highlighting the necessity of integrating pose estimation techniques to further enhance measurement precision in future research.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 110978"},"PeriodicalIF":8.9,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
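Once segmentation and keypoint detection have done their work, the body-size parameters themselves are plain geometry on 3-D keypoints. A toy sketch with hypothetical keypoint names and coordinates (metres, z up) — real keypoints come from the morphometric detection step described above.

```python
# Body-size parameters as simple geometry over detected 3-D keypoints.
import numpy as np

def body_measurements(kp):
    """kp: dict mapping keypoint name -> 3-D position (metres, ground at z = 0)."""
    return {
        # heights are vertical distances from the ground plane
        "wither_height": float(kp["withers"][2]),
        "rump_height": float(kp["rump_top"][2]),
        # body length: straight-line withers-to-pin-bone distance
        "body_length": float(np.linalg.norm(kp["withers"] - kp["pin_bone"])),
        # chest width: distance between left/right chest extremes
        "chest_width": float(np.linalg.norm(kp["chest_left"] - kp["chest_right"])),
    }

# Hypothetical keypoints for one pose-normalized sheep.
kp = {name: np.array(p) for name, p in {
    "withers": (0.00, 0.00, 0.72),
    "rump_top": (0.65, 0.00, 0.70),
    "pin_bone": (0.70, 0.00, 0.68),
    "chest_left": (0.15, -0.12, 0.45),
    "chest_right": (0.15, 0.12, 0.45),
}.items()}

m = body_measurements(kp)
print({k: round(v, 3) for k, v in m.items()})
```

Girth-type measures (e.g. chest girth) need more than two keypoints — typically a cross-sectional slice of the point cloud — which is where the segmentation quality discussed above matters most.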
{"title":"AI-driven advanced flexible pressure sensor arrays for smart animal husbandry: Response characteristics, optimization strategies, innovative applications","authors":"Dongsheng Jiang , Mengjie Zhang , Jiahao Yu , Qinan Zhao , Marija Brkic Bakaric , Kaikang Chen , Xiaoshuan Zhang","doi":"10.1016/j.compag.2025.110988","DOIUrl":"10.1016/j.compag.2025.110988","url":null,"abstract":"<div><div>Flexible pressure sensor arrays, characterized by high sensitivity, stretchability, and biocompatibility, have emerged as a transformative technology across multiple disciplines, including robotics, healthcare, and biomedical applications. Despite significant advances in many aspects of flexible pressure sensors, comprehensive reviews of their role in smart animal husbandry remain scarce. This paper systematically evaluates the transmission mechanisms, key performance indices, optimization strategies, manufacturing methods, and system integration of flexible pressure sensor arrays. Cutting-edge and innovative applications of flexible pressure sensor arrays in animal husbandry are introduced, emphasizing how real-time monitoring and data analysis can improve production efficiency, optimize resource allocation, and enhance animal welfare.
The paper also explores the challenges and prospects of flexible pressure sensor arrays, with the expectation that this work will provide a valuable reference for researchers, engineers, and policymakers seeking to advance flexible sensing technologies in smart animal husbandry.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 110988"},"PeriodicalIF":8.9,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Spodoptera litura F. using an electronic nose: A novel approach for monitoring vegetable crop pests","authors":"Atirach Noosidum , Rattanawadee Onwong , Jarunee Phittayanivit , Chatchaloem Arkhan , Pisit Poolprasert , Benjakhun Sangtongpraow , Chatchawal Wongchoosuk","doi":"10.1016/j.compag.2025.110984","DOIUrl":"10.1016/j.compag.2025.110984","url":null,"abstract":"<div><div>The common cutworm, <em>Spodoptera litura</em>, is an economically important insect pest worldwide. However, outbreaks and control failures are frequently reported, with infestations often detected by farmers only after severe damage has occurred. This study aimed to apply an electronic nose (e-nose) based on eight metal oxide semiconductor (MOS) gas sensors to detect insect odors emitted from different stages of <em>S. litura</em> and volatile organic compounds (VOCs) released from their infested host plants. In laboratory tests, a prototype e-nose equipped with eight TGS sensors successfully detected various insect odors from different developmental stages and numbers of individuals (0.31–41.04 %). Older larvae and higher numbers of insects induced higher sensor responses. The e-nose also distinguished between two closely related species, <em>S. litura</em> and <em>S. exigua</em>. Additionally, plant VOCs released from pak choi (<em>Brassica rapa</em>) leaves with more than 5 % damage caused by <em>S. litura</em> larvae were more readily detected by the e-nose compared to leaves cut with scissors or undamaged plants. The e-nose also detected insect odors from <em>S. litura</em> larvae and plant VOCs from infested plants at distances of up to 40 cm. In greenhouse tests, the e-nose began detecting differences in insect odors and plant VOCs measurements between <em>S. litura</em>-infested and healthy plants when neonate larvae invaded plant leaves. This study demonstrates the potential of the e-nose equipped with eight TGS sensors as a detection tool for <em>S. 
litura</em> infestations in crop production.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"239 ","pages":"Article 110984"},"PeriodicalIF":8.9,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145060095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}