Artificial Intelligence in Agriculture: Latest Articles

A perspective analysis of imaging-based monitoring systems in precision viticulture: Technologies, intelligent data analyses and research challenges

IF 12.4 | Artificial Intelligence in Agriculture | Pub Date: 2025-09-09 | DOI: 10.1016/j.aiia.2025.08.001
Authors: Annaclaudia Bono, Cataldo Guaragnella, Tiziana D'Orazio

Abstract: This paper presents a comprehensive review of recent advancements in intelligent monitoring systems within the precision viticulture sector. These systems have the potential to make agricultural production more efficient and ensure the adoption of sustainable practices to increase food production and meet growing global demand while maintaining high-quality standards. The review examines core components of non-destructive imaging-based monitoring systems in vineyards, focusing on sensors, tasks, and data processing methodologies. Particular emphasis is placed on solutions designed for practical, in-field deployment. The analysis reveals that the most commonly used sensors are RGB cameras and that the most widespread analysis focuses on grape bunches, as they provide information on both the quality and quantity of the harvest. Among image processing methods, those based on deep learning are the most widely adopted. In addition, a detailed analysis highlights the main technical and practical limitations in real-world scenarios, such as the management of computational resources, the need for large datasets, and the difficulty of interpreting results. The paper concludes with an in-depth discussion of the challenges and open research questions, providing insights into potential future directions for intelligent monitoring systems in precision viticulture. These include the continued exploration of sensors to balance ease of use and accuracy, the development of generalizable methods, experimentation in real-world scenarios, and collaboration between experts on practical solutions.

Volume 16, Issue 1, Pages 62-84 | Cited by: 0

Development of an enhanced hybrid attention YOLOv8s small object detection method for phenotypic analysis of root nodules

IF 12.4 | Artificial Intelligence in Agriculture | Pub Date: 2025-07-21 | DOI: 10.1016/j.aiia.2025.07.001
Authors: Ya Zhao, Wen Zhang, Liangxiao Zhang, Xiaoqian Tang, Du Wang, Qi Zhang, Peiwu Li

Abstract: Nodule formation and involvement in biological nitrogen fixation are critical features of leguminous plants, with phenotypic characteristics closely linked to plant growth and nitrogen fixation efficiency. However, the phenotypic analysis of root nodules remains technically challenging due to their small size, weak texture, dense clustering, and occlusion. To address these challenges, this study constructed a scanner-based imaging platform and optimized data acquisition conditions for high-resolution, high-consistency root nodule images under field conditions. In addition, a hybrid small-object detection method, SCO-YOLOv8s, was proposed, integrating Swin Transformer and CBAM attention mechanisms into the YOLOv8s framework to enhance global and local feature representation. Furthermore, an Otsu segmentation-based post-processing module was incorporated to validate and refine detection results based on geometric features, boundary sharpness, and image entropy, effectively reducing false positives and enhancing robustness in complex scenes. Using this integrated approach, over 3375 nodules were identified from a single plant sample in under 1 min, and phenotypic features such as diameter, color, and texture were extracted. A total of 10,879 high-quality annotated images were collected from 39 peanut varieties across 14 provinces and 31 soybean varieties across 12 provinces in China, addressing the current lack of large-scale datasets for legume root nodules. The SCO-YOLOv8s model achieved a precision of 97.29 %, a mAP of 98.23 %, and an overall identification accuracy of 95.83 %. This integrated approach provides a practical and scalable solution for high-throughput nodule phenotyping and may contribute to a deeper understanding of nitrogen fixation mechanisms.

Volume 16, Issue 1, Pages 12-43 | Cited by: 0

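Below is a minimal PyTorch sketch of the CBAM attention block named in the abstract above, assuming the standard channel-then-spatial formulation; the module layout and parameter choices are illustrative, not the authors' SCO-YOLOv8s code.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Pool across the channel axis, then learn a 2D attention map.
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial refinement."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```

In a YOLOv8s-style backbone, a block like this would typically sit after a stage's final convolution so the refined features feed the detection neck.
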
Automatic body temperature detection of group-housed piglets based on infrared and visible image fusion

IF 8.2 | Artificial Intelligence in Agriculture | Pub Date: 2025-07-07 | DOI: 10.1016/j.aiia.2025.06.008
Authors: Kaixuan Cuan, Feiyue Hu, Xiaoshuai Wang, Xiaojie Yan, Yanchao Wang, Kaiying Wang

Abstract: Rapid and accurate measurement of body temperature is essential for early disease detection, as it is a key indicator of piglet health. Infrared thermography (IRT) is a widely used, convenient, non-intrusive, and efficient non-contact temperature measurement technology. However, the activity and clustering of group-housed piglets make it challenging to measure individual body temperature using IRT. This study proposes a method for detecting body temperature in group-housed piglets using infrared-visible image fusion. The infrared and visible images were automatically captured by cameras mounted on a robot. An improved YOLOv8-PT model was proposed to detect both piglets and their key body regions (ears, abdomen, and hip) in visible images. Subsequently, the Oriented FAST and Rotated BRIEF (ORB) image registration method and the U2Fusion image fusion network were employed to extract temperatures from the detected body parts. Finally, a core body temperature (CBT) estimation model was developed, with actual rectal temperature serving as the gold standard. The temperatures of the three body parts detected by infrared thermography were used to estimate CBT, and the maximum estimated temperature based on these body parts (EBT-Max) was selected as the final result. In the experiment, the YOLOv8-PT model achieved a mAP@0.5 of 93.6 %, precision of 93.3 %, recall of 88.9 %, and F1 score of 91.05 %. The average detection time per image was 4.3 ms, enabling real-time detection. Additionally, the mean absolute error (MAE) and correlation coefficient between EBT-Max and actual rectal temperature are 0.40 °C and 0.6939, respectively. Therefore, this method provides a feasible and efficient approach for body temperature detection in group-housed piglets and offers a reference for the development of automated pig health monitoring systems.

Volume 16, Issue 1, Pages 1-11 | Cited by: 0

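The abstract pairs ORB registration with U2Fusion to align the two modalities before temperature extraction. The following sketch shows the ORB registration step with OpenCV; the feature count, match cutoff, and RANSAC threshold are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def register_visible_to_infrared(vis_gray: np.ndarray, ir_gray: np.ndarray) -> np.ndarray:
    """Warp a visible-light image into the infrared frame using ORB features."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_vis, des_vis = orb.detectAndCompute(vis_gray, None)
    kp_ir, des_ir = orb.detectAndCompute(ir_gray, None)

    # Hamming distance suits ORB's binary descriptors; cross-check adds robustness.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_vis, des_ir), key=lambda m: m.distance)[:200]

    src = np.float32([kp_vis[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ir[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences between the two modalities.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ir_gray.shape[:2]
    return cv2.warpPerspective(vis_gray, H, (w, h))
```
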
VMGP: A unified variational auto-encoder based multi-task model for multi-phenotype, multi-environment, and cross-population genomic selection in plants

IF 8.2 | Artificial Intelligence in Agriculture | Pub Date: 2025-06-24 | DOI: 10.1016/j.aiia.2025.06.007
Authors: Xiangyu Zhao, Fuzhen Sun, Jinlong Li, Dongfeng Zhang, Qiusi Zhang, Zhongqiang Liu, Changwei Tan, Hongxiang Ma, Kaiyi Wang

Abstract: Plant breeding stands as a cornerstone of agricultural productivity and the safeguarding of food security. The advent of genomic selection heralds a new epoch in breeding, characterized by its capacity to harness whole-genome variation for genomic prediction. This approach transcends the need for prior knowledge of genes associated with specific traits. Nonetheless, the vast dimensionality of genomic data juxtaposed with the relatively limited number of phenotypic samples often leads to the "curse of dimensionality", where traditional statistical, machine learning, and deep learning methods are prone to overfitting and suboptimal predictive performance. To surmount this challenge, we introduce a unified Variational auto-encoder based Multi-task Genomic Prediction model (VMGP) that integrates self-supervised genomic compression and reconstruction with multiple prediction tasks. This approach provides a robust solution, offering a formidable predictive framework that has been rigorously validated across public datasets for wheat, rice, and maize. Our model demonstrates exceptional capabilities in multi-phenotype and multi-environment genomic prediction, successfully navigating the complexities of cross-population genomic selection and underscoring its unique strengths and utility. Furthermore, by integrating VMGP with model interpretability, we can effectively triage relevant single nucleotide polymorphisms, thereby enhancing prediction performance and proposing potential cost-effective genotyping solutions. The VMGP framework, with its simplicity, stable predictive prowess, and open-source code, is exceptionally well-suited for broad dissemination within plant breeding programs. It is particularly advantageous for breeders who prioritize phenotype prediction yet may not possess extensive knowledge of deep learning or proficiency in parameter tuning.

Volume 15, Issue 4, Pages 829-842 | Cited by: 0

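As a rough illustration of combining self-supervised genomic compression with multiple prediction tasks, here is a sketch of a VAE with per-trait regression heads; the layer sizes, loss weights, and single shared latent space are assumptions for exposition, not the published VMGP architecture.

```python
import torch
import torch.nn as nn

class MultiTaskVAE(nn.Module):
    """Compress SNP genotypes to a latent code, reconstruct them, and
    predict several phenotypes from the same code."""
    def __init__(self, n_snps: int, latent_dim: int = 64, n_traits: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_snps, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, n_snps))
        # One regression head per phenotype enables multi-task prediction.
        self.heads = nn.ModuleList(nn.Linear(latent_dim, 1) for _ in range(n_traits))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.decoder(z)
        preds = torch.cat([head(z) for head in self.heads], dim=1)
        return recon, preds, mu, logvar

def vae_multitask_loss(x, y, recon, preds, mu, logvar, beta=1e-3, alpha=1.0):
    # Self-supervised reconstruction + KL regularizer + supervised trait loss.
    recon_loss = nn.functional.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    trait_loss = nn.functional.mse_loss(preds, y)
    return recon_loss + beta * kl + alpha * trait_loss
```

The reconstruction term is what lets the encoder learn from genotypes alone, which is one way such designs sidestep the small-phenotype-sample problem the abstract describes.
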
Recognizing and localizing chicken behaviors in videos based on spatiotemporal feature learning

IF 8.2 | Artificial Intelligence in Agriculture | Pub Date: 2025-06-21 | DOI: 10.1016/j.aiia.2025.06.006
Authors: Yilei Hu, Jinyang Xu, Zhichao Gou, Di Cui

Abstract: Timely acquisition of chicken behavioral information is crucial for assessing chicken health status and production performance. Video-based behavior recognition has emerged as a primary technique for obtaining such information due to its accuracy and robustness. Video-based models generally predict a single behavior from a single video segment of fixed duration. However, during periods of high activity in poultry, behavior transitions may occur within a video segment, and existing models often fail to capture such transitions effectively. This limitation highlights the insufficient temporal resolution of video-based behavior recognition models. This study presents a chicken behavior recognition and localization model, CBLFormer, based on spatiotemporal feature learning. The model was designed to recognize behaviors that occur before and after transitions in video segments and to localize the corresponding time interval for each behavior. An improved transformer block, the cascade encoder-decoder network (CEDNet), a transformer-based head, and a weighted distance intersection over union (WDIoU) loss were integrated into CBLFormer to enhance the model's ability to distinguish between behavior categories and to locate behavior boundaries. For training and testing, a dataset was created by collecting videos from 320 chickens across different ages and rearing densities. The results showed that CBLFormer achieved a mAP@0.5:0.95 of 98.34 % on the test set. The integration of CEDNet contributed the most to the performance improvement of CBLFormer. The visualization results confirmed that the model effectively captured the behavioral boundaries of chickens and correctly recognized behavior categories. The transfer learning results demonstrated that the model is applicable to chicken behavior recognition and localization tasks in real-world poultry farms. The proposed method handles cases where poultry behavior transitions occur within a video segment and improves the temporal resolution of video-based behavior recognition models.

Volume 15, Issue 4, Pages 816-828 | Cited by: 0

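The paper's WDIoU loss is not spelled out in the abstract; the sketch below shows a generic distance-IoU loss for 1D temporal intervals in the same spirit, without the authors' weighting scheme.

```python
import torch

def temporal_diou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Distance-IoU loss for temporal intervals given as (start, end), shape (N, 2)."""
    inter = (torch.min(pred[:, 1], target[:, 1])
             - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
    union = (pred[:, 1] - pred[:, 0]) + (target[:, 1] - target[:, 0]) - inter
    iou = inter / union.clamp(min=1e-6)

    # Penalize the normalized distance between interval centers, as DIoU does
    # for boxes, so non-overlapping segments still receive useful gradients.
    center_dist = (pred.mean(dim=1) - target.mean(dim=1)) ** 2
    enclose = (torch.max(pred[:, 1], target[:, 1])
               - torch.min(pred[:, 0], target[:, 0])).clamp(min=1e-6)
    return (1 - iou + center_dist / enclose ** 2).mean()
```
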
FGPointKAN++ point cloud segmentation and adaptive key cutting plane recognition for cow body size measurement

IF 8.2 | Artificial Intelligence in Agriculture | Pub Date: 2025-06-18 | DOI: 10.1016/j.aiia.2025.06.003
Authors: Guoyuan Zhou, Wenhao Ye, Sheng Li, Jian Zhao, Zhiwen Wang, Guoliang Li, Jiawei Li

Abstract: Accurate and efficient body size measurement is essential for health assessment and production management in modern animal husbandry. To realize pixel-level point cloud segmentation and accurate body size calculation for dairy cows in different postures, a segmentation model (FGPointKAN++) and an adaptive key cutting plane recognition (AKCPR) model are developed. FGPointKAN++ introduces an FGE module and a KAN that enhance local feature extraction and geometric consistency, significantly improving dairy cow part segmentation accuracy. AKCPR utilizes adaptive plane fitting and dynamic orientation calibration to optimize the key body size measurements. The dairy cow body size parameters are then calculated from the plane geometry features. The experimental results show mIoU scores of 82.92 % and 83.24 % for dairy cow pixel-level point cloud segmentation. The calculated mean absolute percentage errors (MAPE) of wither height (WH), body width (BW), chest circumference (CC), and abdominal circumference (AC) are 2.07 %, 3.56 %, 2.24 %, and 1.42 %, respectively. This method enables precise segmentation and automatic body size measurement of dairy cows in various walking postures, showing considerable potential for practical applications. It provides technical support for unmanned, intelligent, and precision farming, thereby enhancing animal welfare and improving economic efficiency.

Volume 15, Issue 4, Pages 783-801 | Cited by: 0

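To illustrate how a fitted cutting plane supports girth measurements such as chest circumference, here is a generic least-squares plane fit plus a convex-hull perimeter estimate on a thin slice of points; the paper's adaptive fitting and dynamic orientation calibration are not reproduced.

```python
import numpy as np
from scipy.spatial import ConvexHull

def fit_plane(points: np.ndarray):
    """Least-squares plane fit to an (N, 3) point cloud via SVD."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the best-fit plane through the centroid.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def section_circumference(points: np.ndarray) -> float:
    """Approximate a girth by projecting a thin slice of points onto its
    fitted plane and summing the convex-hull edge lengths."""
    centroid, normal = fit_plane(points)
    # Build an orthonormal basis (u, v) spanning the plane.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:          # normal parallel to the z-axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    coords = (points - centroid) @ np.stack([u, v], axis=1)
    ring = coords[ConvexHull(coords).vertices]
    return float(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1).sum())
```
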
Application of artificial intelligence in insect pest identification - A review

IF 12.4 | Artificial Intelligence in Agriculture | Pub Date: 2025-06-16 | DOI: 10.1016/j.aiia.2025.06.005
Authors: Sourav Chakrabarty, Chandan Kumar Deb, Sudeep Marwaha, Md. Ashraful Haque, Deeba Kamil, Raju Bheemanahalli, Pathour Rajendra Shashank

Abstract: The increasing danger of insect pests to agriculture and ecosystems calls for quick and precise diagnosis. Conventional techniques that depend on human observation and taxonomic knowledge are frequently labour-intensive and time-consuming. Incorporating artificial intelligence (AI) into detection has emerged as an effective approach in agriculture, including entomology. AI-based detection methods use machine learning, deep learning algorithms, and computer vision techniques to automate and improve the identification of insects. Deep learning algorithms, particularly convolutional neural networks (CNNs), are the mainstay of AI-powered insect pest identification, categorizing insects based on their visual features through image-based classification. These methods have revolutionized insect identification by analyzing large databases of insect images and identifying distinct patterns and features linked to different species. AI-powered systems can further improve insect pest identification by utilizing other data modalities. However, there are obstacles to overcome, such as the scarcity of high-quality labelled datasets and issues of scalability and affordability. Despite these challenges, there is significant potential for AI-powered insect pest identification and pest management. Cooperation among researchers, practitioners, and policymakers is necessary to fully utilize AI in pest management. AI technology is transforming the field of entomology by enabling high-precision identification of insect pests, leading to more efficient and eco-friendly pest management strategies. This can enhance food safety and reduce the need for continuous insecticide spraying, ensuring the purity and safety of food supply chains. This review surveys AI-powered insect pest identification, covering its significance, methods, challenges, and prospects.

Volume 16, Issue 1, Pages 44-61 | Cited by: 0

EU-GAN: A root inpainting network for improving 2D soil-cultivated root phenotyping

IF 8.2 | Artificial Intelligence in Agriculture | Pub Date: 2025-06-11 | DOI: 10.1016/j.aiia.2025.06.004
Authors: Shangyuan Xie, Jiawei Shi, Wen Li, Tao Luo, Weikun Li, Lingfeng Duan, Peng Song, Xiyan Yang, Baoqi Li, Wanneng Yang

Abstract: Beyond its fundamental roles in nutrient uptake and plant anchorage, the root system critically influences crop development and stress tolerance. A rhizobox enables in situ, non-destructive phenotypic detection of roots in soil, serving as a cost-effective root imaging method. However, the opacity of the soil often results in intermittent gaps in the root images, which reduces the accuracy of root phenotype calculations. We present a root inpainting method built upon a Generative Adversarial Network (GAN) architecture. In addition, we built a hybrid root inpainting dataset (HRID) that contains 1206 cotton root images with real gaps and 7716 rice root images with generated gaps. Compared with computer-simulated root images, our dataset provides real root system architecture (RSA) and root texture information. Our method avoids cropping during training by instead utilizing downsampled images to provide the overall root morphology. The model is trained using binary cross-entropy loss to distinguish between root and non-root pixels, and Dice loss is employed to mitigate the challenge of imbalanced data distribution. Additionally, we remove the skip connections in U-Net and introduce an edge attention module (EAM) to capture more detailed information. Compared with other methods, our approach significantly improves the recall rate from 17.35 % to 35.75 % on the test dataset of 122 cotton root images, revealing improved inpainting capability. The trait error reduction rates (TERRs) for root area, root length, convex hull area, and root depth are 76.07 %, 68.63 %, 48.64 %, and 88.28 %, respectively, enabling a substantial improvement in the accuracy of root phenotyping. The EU-GAN code and the 8922 labeled images are open-access and can be reused by researchers in other AI-related work. This method establishes a robust solution for root phenotyping, thereby increasing breeding program efficiency and advancing our understanding of root system dynamics.

Volume 15, Issue 4, Pages 770-782 | Cited by: 0

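The abstract names binary cross-entropy and Dice losses for the root/non-root pixel imbalance; a standard combined formulation is sketched below, with the weighting between the two terms as an assumption.

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                  dice_weight: float = 1.0, eps: float = 1.0) -> torch.Tensor:
    """Combined BCE and Dice loss for binary root masks of shape (N, 1, H, W)."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    # Dice compares overlap to total mass, so it is largely insensitive to
    # the overwhelming number of background (non-root) pixels.
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - (2 * inter + eps) / (denom + eps)
    return bce + dice_weight * dice.mean()
```
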
Improving accuracy and generalization in single kernel oil characteristics prediction in maize using NIR-HSI and a knowledge-injected spectral tabtransformer

IF 8.2 | Artificial Intelligence in Agriculture | Pub Date: 2025-06-11 | DOI: 10.1016/j.aiia.2025.05.007
Authors: Anran Song, Xinyu Guo, Weiliang Wen, Chuanyu Wang, Shenghao Gu, Xiaoqian Chen, Juan Wang, Chunjiang Zhao

Abstract: Near-infrared hyperspectral imaging (NIR-HSI) is widely used for seed component prediction due to its non-destructive and rapid nature. However, existing models often suffer from limited generalization, particularly when trained on small datasets, and there is a lack of effective deep learning (DL) models for spectral data analysis. To address these challenges, we propose the Knowledge-Injected Spectral TabTransformer (KIT-Spectral TabTransformer), an innovative adaptation of the traditional TabTransformer specifically designed for maize seeds. By integrating domain-specific knowledge, this approach enhances model training efficiency and predictive accuracy while reducing reliance on large datasets. The generalization capability of the model was rigorously validated through ten-fold cross-validation (10-CV). Compared with traditional machine learning methods, the attention-based CNN (ACNNR), and the Oil Characteristics Predictor Transformer (OCP-Transformer), the KIT-Spectral TabTransformer demonstrated superior performance in oil mass prediction, achieving R_p^2 = 0.9238 ± 0.0346 and RMSE_p = 0.1746 ± 0.0401. For oil content, it achieved R_p^2 = 0.9602 ± 0.0180 and RMSE_p = 0.5301 ± 0.1446 on a dataset with oil content ranging from 0.81 % to 13.07 %. On the independent validation set, our model achieved R^2 values of 0.7820 and 0.7586, along with RPD values of 2.1420 and 2.0355 on the two tasks, highlighting its strong prediction capability and potential for real-world application. These findings offer a potential method and direction for single-seed oil prediction and related crop component analysis.

Volume 15, Issue 4, Pages 802-815 | Cited by: 0

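For readers unfamiliar with the reported metrics, the sketch below computes R^2, RMSE, and RPD as commonly defined in NIR chemometrics; note that RPD definitions vary slightly across papers, and here it is taken as the standard deviation of the reference values divided by the RMSE of prediction.

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """R^2, RMSE, and RPD for a prediction set."""
    residuals = y_true - y_pred
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    rpd = float(np.std(y_true, ddof=1) / rmse)
    return {"R2": r2, "RMSE": rmse, "RPD": rpd}

# Example with made-up oil-content values; an RPD above ~2 is often read
# as adequate for screening applications.
y_ref = np.array([3.1, 4.8, 7.2, 9.5, 11.0])
y_hat = np.array([3.4, 4.5, 7.8, 9.0, 10.6])
print(regression_metrics(y_ref, y_hat))
```
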
Rapid detection and visualization of physiological signatures in cotton leaves under Verticillium wilt stress

IF 8.2 | Artificial Intelligence in Agriculture | Pub Date: 2025-06-06 | DOI: 10.1016/j.aiia.2025.06.002
Authors: Na Wu, Pan Gao, Jie Wu, Yun Zhao, Xing Xu, Chu Zhang, Erik Alexandersson, Juan Yang, Qinlin Xiao, Yong He

Abstract: Verticillium wilt poses a severe threat to cotton growth and significantly impacts cotton yield, so timely detection of Verticillium wilt stress is of significant importance. In this study, the effects of Verticillium wilt stress on the microstructure and physiological indicators (SOD, POD, CAT, MDA, Chl_a, Chl_b, Chl_ab, Car) of cotton leaves were investigated, and the feasibility of using hyperspectral imaging to estimate the physiological indicators of cotton leaves was explored. The results showed that Verticillium wilt stress induced alterations in cotton leaf cell morphology, leading to the disruption and decomposition of chloroplasts and mitochondria. In addition, compared with healthy leaves, infected leaves exhibited significantly higher activities of SOD and POD, along with increased MDA content, while chlorophyll and carotenoid levels were notably reduced. Furthermore, rapid detection models for the cotton physiological indicators were constructed, with the R_p of the optimal models ranging from 0.809 to 0.975. Based on these models, visual distribution maps of the physiological signatures across cotton leaves were created. These results indicate that the physiological phenotype of cotton leaves can be effectively detected by hyperspectral imaging, providing a solid theoretical basis for the rapid detection of Verticillium wilt stress.

Volume 15, Issue 4, Pages 757-769 | Cited by: 0

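The abstract does not name the regression method behind its detection models; a common baseline for mapping leaf spectra to a physiological indicator such as chlorophyll content is partial least squares, sketched here on synthetic data purely for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 120 leaf spectra with 200 bands, and a scalar
# physiological indicator generated from a hidden linear relationship.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
true_coef = rng.normal(size=200) * 0.05
y = X @ true_coef + rng.normal(scale=0.1, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PLS projects the high-dimensional, collinear spectra onto a few latent
# components that covary with the indicator before regressing.
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
print("R^2 on held-out spectra:", pls.score(X_te, y_te))
```
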