Artificial Intelligence in Agriculture: Latest Articles

Multi-scale cross-modal feature fusion and cost-sensitive loss function for differential detection of occluded bagging pears in practical orchards
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-05-18 DOI: 10.1016/j.aiia.2025.05.002
Shengli Yan, Wenhui Hou, Yuan Rao, Dan Jiang, Xiu Jin, Tan Wang, Yuwei Wang, Lu Liu, Tong Zhang, Arthur Genis
In practical orchards, fruit overlap and occlusion by branches and leaves significantly impede automated picking, particularly for bagging pears. To address this issue, this paper introduces MCCNet, a network combining multi-scale cross-modal feature fusion with a cost-sensitive classification loss, designed to accurately detect bagging pears across occlusion categories. The network uses a dual-stream convolutional neural network backbone to extract multi-modal features in parallel. A novel lightweight cross-modal fusion method enhances the features shared between modalities while extracting modality-specific features from the RGB and depth streams; fusing this complementary information from multimodal bagging-pear image pairs improves the model's perceptual capability. The classification loss is further reformulated as a cost-sensitive loss to improve classification efficiency and reduce missed and false detections during picking. On a bagging pear dataset, MCCNet achieves mAP@0.5 and mAP@0.5:0.95 values of 97.3 % and 80.3 %, improvements of 3.6 % and 6.3 % over the classical YOLOv10m model. Benchmarked against several state-of-the-art detectors, MCCNet has only 19.5 million parameters while maintaining superior inference speed. (Artificial Intelligence in Agriculture, 15(4), pp. 573–589.)
Citations: 0
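The abstract does not spell out the cost-sensitive formulation, but one common realization is per-class weighting of the cross-entropy loss. A minimal PyTorch sketch under that assumption; the occlusion class names and cost values are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

# Hypothetical occlusion categories; the actual classes come from the paper's dataset.
CLASSES = ["unoccluded", "leaf_occluded", "branch_occluded", "fruit_overlap"]

# Illustrative misclassification costs: harder occlusion classes get larger
# weights so errors on them are penalized more heavily.
costs = torch.tensor([1.0, 2.0, 2.5, 3.0])

# PyTorch's cross-entropy accepts per-class weights, one standard way to make
# a classification loss cost-sensitive.
criterion = nn.CrossEntropyLoss(weight=costs)

logits = torch.randn(8, len(CLASSES))           # 8 detections, 4 class scores each
targets = torch.randint(0, len(CLASSES), (8,))  # ground-truth occlusion class
loss = criterion(logits, targets)
print(loss.item())
```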
Accurate Orah fruit detection method using lightweight improved YOLOv8n model verified by optimized deployment on edge device
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-05-14 DOI: 10.1016/j.aiia.2025.05.001
Hongwei Li, Yongmei Mo, Jiasheng Chen, Jiqing Chen, Jiabao Li
Replacing the personal-computer terminal with an edge device is a portable and cost-effective route to miniaturized equipment and flexible in-field robotic fruit harvesting. This study proposes a lightweight improved You Only Look Once version 8n (YOLOv8n) model for detecting Orah fruits and deploys it on an edge device. First, the model size was reduced while maintaining detection accuracy by introducing ADown modules. A Concentrated-Comprehensive Dual Convolution (C3_DualConv) module combining dual convolutional bottlenecks was then proposed to better capture features of Orah fruits obscured by branches and leaves, further reducing model size. A Bidirectional Feature Pyramid Network (BiFPN) with a pyramid-level-2 high-resolution layer was employed for more efficient multi-scale feature fusion, and three Coordinate Attention (CA) modules were added to improve the recognition and capture of Orah fruit features. Finally, a more focused minimum-points-distance intersection-over-union loss was adopted to boost detection of densely occluded fruits. Experiments showed that the improved YOLOv8n model accurately detected Orah fruits in complex orchard environments, achieving a precision of 97.7 %, an Average Precision at IoU threshold 0.5 (mAP@0.5) of 98.8 %, and an F1 score of 96.69 %, with a compact model size of 4.1 MB on a Windows-based terminal. Deployed on an Nvidia Jetson Orin Nano via the TensorRT Python Application Programming Interface (API), the average inference speed exceeds 30 fps, indicating real-time detection ability. This study provides technical support for edge-device-based Orah fruit robotic harvesting. (Artificial Intelligence in Agriculture, 15(4), pp. 707–723.)
Citations: 0
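For readers wanting to reproduce the deployment route, the stock Ultralytics API covers the train-then-export-to-TensorRT workflow the paper describes. A minimal sketch, assuming the off-the-shelf yolov8n.pt weights and a hypothetical orah.yaml dataset file; the paper's ADown, C3_DualConv, BiFPN, and CA modifications are not part of the stock package:

```python
from ultralytics import YOLO

# Stock YOLOv8n as a stand-in; the paper's custom modules would require
# modifying the model definition, which is out of scope for this sketch.
model = YOLO("yolov8n.pt")

# Train on a hypothetical Orah dataset described by a YAML file (path is illustrative).
model.train(data="orah.yaml", epochs=100, imgsz=640)

# Export to a TensorRT engine for Jetson-class edge devices, mirroring the
# paper's deployment via the TensorRT Python API; half=True requests FP16.
model.export(format="engine", half=True)
```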
Decoding canola and oat crop health and productivity under drought and heat stress using bioelectrical signals and machine learning
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-30 DOI: 10.1016/j.aiia.2025.04.006
Guoqi Wen, Bao-Luo Ma
Abiotic stresses such as heat and drought often reduce crop yields by harming plant health. Plants have evolved complex signaling networks to mitigate environmental impacts, making in-situ biosignal monitoring a promising tool for assessing plant health in real time. In this study, needle-like sensors measured electrical-potential changes in oat and canola plants under heat and drought stress. Signals were recorded over a 30-min period and segmented into intervals of 1, 5, 10, 20, and 30 min. Machine learning algorithms, including Random Forest, K-Nearest Neighbors, and Support Vector Machines, were applied to classify stress conditions and estimate biomass from 14 extracted bioelectrical features, such as signal amplitude and entropy. Results showed that heat stress primarily altered signal patterns, whereas drought stress affected signal intensity, possibly due to a reduced flow of charged ions. The Random Forest classifier identified over 85 % of stressed crops within 30 min of signal recording. These signals also explained 58–95 % of the variation in aboveground and root biomass, depending on stress intensity and crop genotype. The study demonstrates the potential of bioelectrical sensing as a rapid, efficient tool for stress detection and biomass estimation; future work should explore capturing genetic variability with biosensors and combining them with remote sensing and other emerging precision-agriculture technologies. (Artificial Intelligence in Agriculture, 15(4), pp. 696–706.)
Citations: 0
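A minimal sketch of the feature-then-classify pipeline, using two of the paper's 14 feature types (signal amplitude and entropy) and scikit-learn's Random Forest; the synthetic signals and window length stand in for the real needle-sensor recordings:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.ensemble import RandomForestClassifier

def window_features(sig):
    """Two of the paper's 14 feature types: amplitude and Shannon entropy."""
    amplitude = sig.max() - sig.min()
    hist, _ = np.histogram(sig, bins=32, density=True)
    hist = hist[hist > 0]              # drop empty bins before entropy
    return [amplitude, entropy(hist)]  # entropy() normalizes the histogram

rng = np.random.default_rng(0)
# Synthetic stand-ins for electrical-potential windows; label 0 = control,
# 1 = heat stress, 2 = drought stress. Real data comes from the sensors.
X = np.array([window_features(rng.normal(scale=s, size=600))
              for s in (1.0, 2.0, 3.0) for _ in range(30)])
y = np.repeat([0, 1, 2], 30)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```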
Enhancing maize LAI estimation accuracy using unmanned aerial vehicle remote sensing and deep learning techniques
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-25 DOI: 10.1016/j.aiia.2025.04.008
Zhen Chen, Weiguang Zhai, Qian Cheng
The leaf area index (LAI) is crucial for precision agriculture management, and UAV remote sensing technology is widely applied for LAI estimation. Although spectral features are widely used, their performance is often constrained in complex agricultural scenarios by soil background reflectance, variations in lighting conditions, and vegetation heterogeneity. This study therefore evaluates the potential of multi-source feature fusion and convolutional neural networks (CNN) for estimating maize LAI. Field experiments on maize were conducted in Xinxiang City and Xuzhou City, China. Spectral features, texture features, and crop height were extracted from multi-spectral remote sensing data to construct a multi-source feature dataset, and LAI estimation models were developed using multiple linear regression, gradient boosting decision trees, and CNN. The results showed that: (1) multi-source feature fusion integrating spectral features, texture features, and crop height gave the highest accuracy, with R² of 0.70–0.83, RMSE of 0.44–0.60, and rRMSE of 10.79–14.57 %, and adapted well across growth environments (Xinxiang: R² 0.76–0.88, RMSE 0.35–0.50, rRMSE 8.73–12.40 %; Xuzhou: R² 0.60–0.83, RMSE 0.46–0.71, rRMSE 10.96–17.11 %); (2) the CNN model outperformed traditional machine learning algorithms in most cases, and combining spectral features, texture features, and crop height with the CNN achieved the highest accuracy, with R² of 0.83–0.88, RMSE of 0.35–0.46, and rRMSE of 8.73–10.96 %. (Artificial Intelligence in Agriculture, 15(3), pp. 482–495.)
Citations: 0
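The three reported metrics are standard and easy to compute. A minimal sketch, assuming the common definition of rRMSE as RMSE divided by the mean of the observed values, in percent; the sample values are illustrative, not the study's data:

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

def lai_metrics(y_true, y_pred):
    """R-squared, RMSE, and relative RMSE (%) as reported in the study."""
    r2 = r2_score(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    rrmse = rmse / np.mean(y_true) * 100  # relative RMSE, in percent
    return r2, rmse, rrmse

# Illustrative values; real inputs would be measured vs. CNN-estimated maize LAI.
y_true = np.array([2.1, 3.4, 4.0, 4.8, 5.5])
y_pred = np.array([2.3, 3.1, 4.2, 4.6, 5.9])
print(lai_metrics(y_true, y_pred))
```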
Mapping of soil sampling sites using terrain and hydrological attributes
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-25 DOI: 10.1016/j.aiia.2025.04.007
Tan-Hanh Pham, Kristopher Osterloh, Kim-Doang Nguyen
Efficient soil sampling is essential for effective soil management and soil-health research, but traditional site-selection methods are labor-intensive and fail to capture soil variability comprehensively. This study introduces a deep learning tool that automates soil-sampling site selection from spectral images. The framework has two components: an extractor and a predictor. The extractor, a convolutional neural network (CNN), derives features from spectral images, while the predictor uses self-attention to assess feature importance and generate prediction maps. The model processes multiple spectral images and addresses class imbalance in soil segmentation. It was trained on a dataset from 20 fields in eastern South Dakota, collected via drone-mounted LiDAR with high-precision GPS. On a test set it achieved a mean intersection over union (mIoU) of 69.46 % and a mean Dice coefficient (mDc) of 80.35 %, demonstrating strong segmentation performance. Compared with existing state-of-the-art methods, the approach improves accuracy and efficiency, providing an advanced site-selection tool for producers and soil scientists. (Artificial Intelligence in Agriculture, 15(3), pp. 470–481.)
Citations: 0
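mIoU and the Dice coefficient are standard segmentation metrics. A minimal sketch over binary masks, with toy masks standing in for predicted vs. labelled sampling-site pixels:

```python
import numpy as np

def iou_and_dice(pred, target):
    """Per-class IoU and Dice coefficient for binary masks (0/1 arrays)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0     # empty masks count as perfect
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# Toy 4x4 masks; mIoU/mDc in the paper average these over classes and images.
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0]])
print(iou_and_dice(pred, target))
```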
Multimodal behavior recognition for dairy cow digital twin construction under incomplete modalities: A modality mapping completion network approach
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-14 DOI: 10.1016/j.aiia.2025.04.005
Yi Zhang, Yu Zhang, Meng Gao, Xinjie Wang, Baisheng Dai, Weizheng Shen
Recognizing dairy cow behavior is essential for health management, reproductive efficiency, production performance, and animal welfare. This paper addresses modality loss in multimodal behavior-recognition algorithms, which can be caused by sensor or video signal disturbances arising from interference, harsh environmental conditions, extreme weather, network fluctuations, and other complexities of farm environments. The study introduces a modality mapping completion network that maps incomplete sensor and video data so that a multimodal recognition algorithm can still identify five behaviors (drinking, feeding, lying, standing, and walking) when a modality is missing. Under various comprehensive missing coefficients (λ), the method achieves an average accuracy of 97.87 % ± 0.15 %, an average precision of 95.19 % ± 0.4 %, and an average F1 score of 94.685 % ± 0.375 %, with an overall accuracy of 94.67 % ± 0.37 %. The approach improves the robustness and applicability of multimodal cow-behavior recognition under modality loss, resolving practical issues in building digital twins of cow behavior and supporting intelligent, precise farm management. (Artificial Intelligence in Agriculture, 15(3), pp. 459–469.)
Citations: 0
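The abstract does not detail the completion architecture; as a heavily hedged illustration, a mapping network can be as simple as an MLP projecting the surviving modality's embedding into the missing modality's feature space, so the downstream multimodal classifier can still run. The dimensions and the MSE training note below are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class ModalityMapper(nn.Module):
    """Sketch: map features of the surviving modality (e.g. sensor embeddings)
    into the space of the missing modality (e.g. video embeddings)."""
    def __init__(self, sensor_dim=64, video_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensor_dim, 256), nn.ReLU(),
            nn.Linear(256, video_dim),
        )

    def forward(self, sensor_feat):
        return self.net(sensor_feat)

mapper = ModalityMapper()
sensor_feat = torch.randn(4, 64)      # batch of sensor embeddings
video_feat_hat = mapper(sensor_feat)  # reconstructed video embeddings
# Training would minimize e.g. MSE against real video embeddings while both
# modalities are present, then substitute video_feat_hat when video is lost.
print(video_feat_hat.shape)
```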
Joint optimization of AI large and small models for surface temperature and emissivity retrieval using knowledge distillation
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-12 DOI: 10.1016/j.aiia.2025.03.009
Wang Dai, Kebiao Mao, Zhonghua Guo, Zhihao Qin, Jiancheng Shi, Sayed M. Bateni, Liurui Xiao
The rapid advance of artificial intelligence in domains such as natural language processing has catalyzed AI research across many fields. This study introduces AutoKeras-Knowledge Distillation (AK-KD), a strategy that jointly optimizes large and small models for retrieving surface temperature and emissivity from thermal infrared remote sensing. To address the limited accuracy of surface-temperature retrieval, a high-performance large model developed with AutoKeras serves as the teacher and enhances a less accurate small model through knowledge distillation; the resulting student model is then interactively integrated with the large model to further improve specificity and generalization. Theory and application both validate the strategy. A large model trained on simulated ASTER data achieved a Pearson correlation coefficient (PCC) of 0.999 and a mean absolute error (MAE) of 0.348 K in surface-temperature retrieval, and a PCC of 0.967 with an MAE of 0.685 K in practical application. Although the large model's average accuracy is high, its precision drops in complex terrain, so it is used as a teacher to raise the small model's local accuracy: in surface-temperature retrieval the student's average PCC improved from 0.978 to 0.979 and MAE fell from 1.065 K to 0.724 K, while in emissivity retrieval the average PCC rose from 0.827 to 0.898 and MAE fell from 0.0076 to 0.0054. The work provides robust technological support for thermal infrared temperature and emissivity retrieval and offers a reference for building universal models for other geophysical parameter retrievals. (Artificial Intelligence in Agriculture, 15(3), pp. 407–425.)
Citations: 0
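Classification-style distillation usually softens logits with a temperature, but temperature/emissivity retrieval is regression, so a natural analogue is an MSE blend of ground-truth and teacher targets. A minimal sketch; the alpha weight and sample values are illustrative, not from the paper:

```python
import torch
import torch.nn as nn

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Regression-style distillation: the student fits both the ground truth
    and the teacher's predictions; alpha balances the two terms."""
    mse = nn.functional.mse_loss
    return alpha * mse(student_out, target) + (1 - alpha) * mse(student_out, teacher_out)

# Toy surface-temperature batch (kelvin); the teacher stands in for the
# AutoKeras large model's output on the same pixels.
target = torch.tensor([290.1, 301.4, 285.0])
teacher_out = torch.tensor([290.3, 301.0, 285.4])
student_out = torch.tensor([289.0, 302.1, 286.0], requires_grad=True)
loss = distillation_loss(student_out, teacher_out, target)
loss.backward()  # gradients would update the small model's parameters
print(loss.item())
```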
Effective methods for mitigating the impact of light occlusion on the accuracy of online cabbage recognition in open fields
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-11 DOI: 10.1016/j.aiia.2025.04.002
Hao Fu, Xueguan Zhao, Haoran Tan, Shengyu Zheng, Changyuan Zhai, Liping Chen
To address the low recognition accuracy of open-field vegetables under light occlusion, this study focused on cabbage and developed an online deep learning recognition model. Using YOLOv8n as the base network, the authors propose a method to mitigate the effect of light occlusion on online cabbage recognition: a combination of cabbage image filters designed to eliminate light-occlusion effects, together with an adaptive learning module for the filter parameters, both embedded into the YOLOv8n detection network. This integration enables precise real-time recognition of cabbage under light occlusion. Recognition accuracy reached 97.5 % on a normal-lighting dataset, 93.1 % on a light-occlusion dataset, and 95.0 % on a mixed dataset; for images with a light-occlusion degree above 0.4, accuracy improved by 9.9 % and 13.7 % over the YOLOv5n and YOLOv8n models. The model also achieved 99.3 % on a Chinese cabbage dataset and 98.3 % on a broccoli dataset. Deployed on an Nvidia Jetson Orin NX edge computing device, it processed 26.32 frames per second; field trials showed 96.0 % accuracy under normal lighting and 91.2 % under light occlusion. The model enables real-time recognition and localization of cabbage in complex open-field environments, offering technical support for target-oriented spraying. (Artificial Intelligence in Agriculture, 15(3), pp. 449–458.)
Citations: 0
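One way to read "image filter combination with adaptive parameter learning" is a bank of differentiable filters whose parameters a small network predicts per image. A heavily hedged single-filter sketch (learnable gamma correction); the head architecture and gamma range are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class LearnableGamma(nn.Module):
    """One differentiable filter from a hypothetical filter bank: gamma
    correction whose exponent is predicted per image, loosely mirroring
    an adaptive filter-parameter module."""
    def __init__(self):
        super().__init__()
        # Tiny head predicting gamma from mean brightness; illustrative only.
        self.head = nn.Linear(1, 1)

    def forward(self, img):  # img: (B, 3, H, W), values in [0, 1]
        brightness = img.mean(dim=(1, 2, 3)).unsqueeze(1)          # (B, 1)
        gamma = torch.sigmoid(self.head(brightness)) * 2 + 0.25    # in (0.25, 2.25)
        return img.clamp(min=1e-6) ** gamma.view(-1, 1, 1, 1)      # per-image gamma

filt = LearnableGamma()
img = torch.rand(2, 3, 64, 64)
out = filt(img)  # same shape, differentiable w.r.t. the head's weights
print(out.shape)
```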
Assessing particle application in multi-pass overlapping scenarios with variable rate centrifugal fertilizer spreaders for precision agriculture
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-10 DOI: 10.1016/j.aiia.2025.04.003
Shi Yinyan, Zhu Yangxu, Wang Xiaochan, Zhang Xiaolei, Zheng Enlai, Zhang Yongnian
Environmental impacts and economic demands are driving the development of variable rate fertilization (VRF) technology for precision agriculture. Despite their simple structure, low cost, and high efficiency, centrifugal fertilizer spreaders are held back by poor spreading uniformity, so the particle-application characteristics of centrifugal VRF spreaders under multi-pass overlapped spreading urgently need to be explored to improve distribution uniformity and working accuracy. In this study, a self-developed centrifugal VRF spreader driven by real-time rice and wheat growth information was tested using the collection-tray methods prescribed in ISO 5690 and ASAE S341.2, and the coefficient of variation (CV) of fertilizer mass in standard pans was used to evaluate the uniformity of spreading patterns. The effective application widths were 21.05, 22.58, and 23.67 m at application rates of 225, 300, and 375 kg/ha, respectively. Actual application rates under multi-pass overlapped spreading were generally higher than the targets, and the particle-distribution CVs within the effective spreading widths were 11.51, 9.25, and 11.28 % for the respective target rates. In field tests, the average difference between actual and target application was 4.54 %, and the average particle-distribution CV within the operating width was 11.94 %, meeting the transverse-distribution requirements for centrifugal spreaders. The findings provide a theoretical reference for the technical development of centrifugal VRF spreaders and are of practical significance for implementing precision agriculture. (Artificial Intelligence in Agriculture, 15(3), pp. 395–406.)
Citations: 0
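The CV used to score spreading uniformity is the standard deviation of tray masses relative to their mean. A minimal sketch with illustrative, not measured, tray masses:

```python
import numpy as np

def coefficient_of_variation(tray_masses):
    """CV (%) of fertilizer mass across collection trays: std / mean * 100.
    Lower CV means a more uniform transverse distribution."""
    masses = np.asarray(tray_masses, dtype=float)
    return masses.std(ddof=1) / masses.mean() * 100  # sample standard deviation

# Illustrative tray masses (g) across a spreading width.
print(round(coefficient_of_variation([12.1, 11.4, 13.0, 12.6, 11.9, 12.3]), 2))
```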
Transformer-based audio-visual multimodal fusion for fine-grained recognition of individual sow nursing behaviour
IF 8.2
Artificial Intelligence in Agriculture Pub Date: 2025-04-08 DOI: 10.1016/j.aiia.2025.03.006
Yuqing Yang, Chengguo Xu, Wenhao Hou, Alan G. McElligott, Kai Liu, Yueju Xue
Nursing behaviour and the calling-to-nurse sound are crucial indicators for assessing sow maternal behaviour and nursing status, but accurately identifying them for individual sows in complex indoor pig housing is challenging because of variable lighting, rail obstructions, and interference from other sows' calls. Multimodal fusion of audio and visual data is an effective way to improve accuracy and robustness in such scenarios. The authors designed an audio-visual acquisition system comprising a camera for synchronised audio and video capture and a custom sound source localisation system that uses a sound sensor to track sound direction, and they propose a novel transformer-based audio-visual multimodal fusion (TMF) framework for recognising fine-grained sow nursing behaviour with or without the calling-to-nurse sound. A unimodal self-attention enhancement (USE) module first augments video and audio features with global contextual information; an audio-visual interaction enhancement (AVIE) module then compresses relevant information and reduces noise using the information bottleneck principle; and an adaptive dynamic decision-fusion strategy focuses the model on the most relevant features in each modality. Finally, fine-grained nursing behaviours are identified by integrating the audio and fused information with angle information from the real-time sound source localisation system, which determines whether sound cues originate from the target sow. The method achieves 98.42 % accuracy for general nursing behaviour and 94.37 % for fine-grained behaviour, covering nursing with and without the calling-to-nurse sound and non-nursing behaviours. This fine-grained information offers a more nuanced view of sow health and lactation willingness, enhancing management practices in pig farming. (Artificial Intelligence in Agriculture, 15(3), pp. 363–376.)
Citations: 0
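A minimal sketch of one transformer-style fusion step consistent with the abstract's description: audio tokens attend to video tokens via cross-attention with a residual connection. Token counts and dimensions are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Sketch of cross-modal fusion: audio queries attend over video
    keys/values, one direction of a two-way audio-visual exchange."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_tokens, video_tokens):
        # Sound cues pull in the visual context they need.
        fused, _ = self.attn(audio_tokens, video_tokens, video_tokens)
        return self.norm(audio_tokens + fused)  # residual + layer norm

fusion = CrossModalAttention()
audio = torch.randn(2, 20, 128)    # 2 clips, 20 audio tokens each
video = torch.randn(2, 50, 128)    # 2 clips, 50 video tokens each
print(fusion(audio, video).shape)  # -> (2, 20, 128)
```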