Hilmy Baja , Michiel G.J. Kallenberg , Herman N.C. Berghuijs , Ioannis N. Athanasiadis
{"title":"基于约束强化学习的自适应肥料管理优化氮素利用效率","authors":"Hilmy Baja , Michiel G.J. Kallenberg , Herman N.C. Berghuijs , Ioannis N. Athanasiadis","doi":"10.1016/j.compag.2025.110554","DOIUrl":null,"url":null,"abstract":"<div><div>Optimizing nitrogen use efficiency (NUE) in crop production is crucial for sustainable agriculture, balancing the need to maximize yield while minimizing environmental impacts such as nitrogen loss and soil nutrient depletion. Reinforcement learning (RL) emerges as a potent, data-driven approach for achieving optimal farm management decisions, particularly in the context of fertilization, thereby facilitating optimal NUE. Previous literature of RL in crop management have predominantly focused on optimizing yield, profit, or nitrogen loss reduction. However, optimizing NUE has been largely overlooked despite its significance in preventing soil nutrient mining. In this study, we develop an RL environment in various aspects to investigate the capability of RL to optimize NUE through crop growth model simulations. We develop an RL agent with a novel NUE reward function and incorporates action constrains. We compare its performance against baseline methods and other RL agents trained with reward functions from previous literature. Additionally, we evaluate the robustness of our RL agent across various soil conditions, including different initial nitrogen content and drought-(in)sensitive soils. We find that the RL agent trained with our novel reward function is close to the optimal policy, although generalization to different soil texture scenarios prove to be challenging to the RL agent. 
Further, we identify several open challenges for future work pertaining to RL in crop management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"237 ","pages":"Article 110554"},"PeriodicalIF":8.9000,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive fertilizer management for optimizing nitrogen use efficiency with constrained reinforcement learning\",\"authors\":\"Hilmy Baja , Michiel G.J. Kallenberg , Herman N.C. Berghuijs , Ioannis N. Athanasiadis\",\"doi\":\"10.1016/j.compag.2025.110554\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Optimizing nitrogen use efficiency (NUE) in crop production is crucial for sustainable agriculture, balancing the need to maximize yield while minimizing environmental impacts such as nitrogen loss and soil nutrient depletion. Reinforcement learning (RL) emerges as a potent, data-driven approach for achieving optimal farm management decisions, particularly in the context of fertilization, thereby facilitating optimal NUE. Previous literature of RL in crop management have predominantly focused on optimizing yield, profit, or nitrogen loss reduction. However, optimizing NUE has been largely overlooked despite its significance in preventing soil nutrient mining. In this study, we develop an RL environment in various aspects to investigate the capability of RL to optimize NUE through crop growth model simulations. We develop an RL agent with a novel NUE reward function and incorporates action constrains. We compare its performance against baseline methods and other RL agents trained with reward functions from previous literature. Additionally, we evaluate the robustness of our RL agent across various soil conditions, including different initial nitrogen content and drought-(in)sensitive soils. 
We find that the RL agent trained with our novel reward function is close to the optimal policy, although generalization to different soil texture scenarios prove to be challenging to the RL agent. Further, we identify several open challenges for future work pertaining to RL in crop management.</div></div>\",\"PeriodicalId\":50627,\"journal\":{\"name\":\"Computers and Electronics in Agriculture\",\"volume\":\"237 \",\"pages\":\"Article 110554\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers and Electronics in Agriculture\",\"FirstCategoryId\":\"97\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S016816992500660X\",\"RegionNum\":1,\"RegionCategory\":\"农林科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AGRICULTURE, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Electronics in Agriculture","FirstCategoryId":"97","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S016816992500660X","RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Adaptive fertilizer management for optimizing nitrogen use efficiency with constrained reinforcement learning
Optimizing nitrogen use efficiency (NUE) in crop production is crucial for sustainable agriculture, balancing the need to maximize yield while minimizing environmental impacts such as nitrogen loss and soil nutrient depletion. Reinforcement learning (RL) has emerged as a potent, data-driven approach for optimal farm management decisions, particularly fertilization, and thereby for improving NUE. Previous literature on RL in crop management has predominantly focused on optimizing yield, profit, or nitrogen loss reduction; optimizing NUE, however, has been largely overlooked despite its significance in preventing soil nutrient mining. In this study, we develop an RL environment to investigate, through crop growth model simulations, the capability of RL to optimize NUE. We develop an RL agent with a novel NUE reward function that incorporates action constraints, and we compare its performance against baseline methods and other RL agents trained with reward functions from previous literature. Additionally, we evaluate the robustness of our RL agent across various soil conditions, including different initial nitrogen contents and drought-(in)sensitive soils. We find that the RL agent trained with our novel reward function comes close to the optimal policy, although generalization to different soil texture scenarios proves challenging. Further, we identify several open challenges for future work on RL in crop management.
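The abstract does not give the paper's actual reward formulation, but the two ingredients it names, an NUE-based reward and hard action constraints, can be sketched minimally. The sketch below is a hypothetical illustration only: it assumes NUE is approximated as nitrogen exported in harvested yield divided by total nitrogen supplied, and the constraint parameters (`max_applications`, `max_dose_kg_ha`) are invented for the example.

```python
# Hypothetical sketch of an NUE-style reward with action constraints.
# This is NOT the paper's formulation; the NUE definition and the
# constraint parameters below are assumptions made for illustration.

def nue_reward(yield_n_uptake_kg_ha: float,
               n_applied_kg_ha: float,
               n_soil_supply_kg_ha: float = 10.0) -> float:
    """Reward shaped around nitrogen use efficiency.

    Assumes NUE ~ N exported in harvested yield divided by total N input
    (fertilizer plus a baseline soil N supply).
    """
    n_input = n_applied_kg_ha + n_soil_supply_kg_ha
    return yield_n_uptake_kg_ha / n_input if n_input > 0 else 0.0


def constrained_action(proposed_dose_kg_ha: float,
                       doses_so_far: int,
                       max_applications: int = 4,
                       max_dose_kg_ha: float = 60.0) -> float:
    """Clip a proposed fertilization action to satisfy simple hard constraints:
    a cap on the number of applications per season and on the per-event dose."""
    if doses_so_far >= max_applications:
        return 0.0  # no further applications allowed this season
    return min(max(proposed_dose_kg_ha, 0.0), max_dose_kg_ha)
```

In a gym-style training loop, `constrained_action` would wrap the agent's raw output before it reaches the crop growth simulator, so infeasible fertilization schedules are never executed, while `nue_reward` would be computed at harvest from the simulator's nitrogen balance.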
Journal description:
Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics such as agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.