Computers and Electronics in Agriculture — Latest Articles

An open-source data-driven automatic road extraction framework for diverse farmland application scenarios
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-26 DOI: 10.1016/j.compag.2025.110330
Jing Shen, Yawen He, Jian Peng, Tang Liu, Chenghu Zhou
The narrow contours of farmland roads, the lack of clear boundary features separating them from surrounding objects, and the complexity and variability of their appearance limit the applicability of existing supervised extraction algorithms. Meanwhile, visual segmentation models such as SAM (Segment Anything Model) can achieve zero-shot generalization with appropriate prompts but struggle to capture linear objects effectively. This study introduces OSAM (OpenStreetMap SAM), which fine-tunes SAM on historical open-source datasets to enhance its ability to detect linear features. The OSAM framework then dynamically generates prompts from the open geographic database OpenStreetMap to activate SAM, enabling autonomous detection of farmland roads without additional manual annotations or assisted interactions. Experiments demonstrate that OSAM performs exceptionally well in scenarios with sparse farmland road distributions and delivers robust results even with limited training data. Specifically, OSAM achieves an F1 score of 71.91% and an IoU of 58.53% when trained on the full dataset, significantly outperforming DLinkNet (IoU: 56.42%) and SegFormer (IoU: 41.65%). Even with only 1% of the original training samples, OSAM maintains robust performance (F1: 62.26%, IoU: 47.02%), whereas supervised methods such as SegFormer, SIINet, and UNet suffer significant performance degradation under extreme data constraints. Furthermore, evaluations on remote sensing images with varying data distributions, spatial resolutions, and agricultural environments confirm that OSAM achieves high extraction accuracy and strong generalization ability. This framework significantly reduces reliance on large, well-balanced labeled datasets while maintaining high accuracy, making farmland road extraction more efficient and cost-effective across diverse scenarios.
Citations: 0
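The reported F1 and IoU figures follow directly from pixel-level confusion counts, and the two metrics are interconvertible (IoU = F1 / (2 − F1)). A minimal sketch for readers reproducing such an evaluation; the counts below are hypothetical, not the paper's data:

```python
def f1_and_iou(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Pixel-level F1 score and intersection-over-union from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # equivalently: f1 / (2 - f1)
    return f1, iou

# Hypothetical pixel counts, chosen only to illustrate the F1/IoU relation
f1, iou = f1_and_iou(tp=720, fp=280, fn=283)
print(f"F1 = {f1:.3f}, IoU = {iou:.3f}")  # ≈ 0.719 and 0.561
```

This is why an F1 near 72% and an IoU near 59%, as reported for OSAM, are consistent with each other rather than independent results.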
Autonomous Cellular-Networked surveillance system for coconut rhinoceros beetle
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-26 DOI: 10.1016/j.compag.2025.110310
Mohsen Paryavi, Keith Weiser, Michael Melzer, Reza Ghorbani, Daniel Jenkins
A biological invasion of the coconut rhinoceros beetle (CRB; Oryctes rhinoceros) on the island of Oahu was discovered in late 2013, posing a threat to palm trees on the island and a risk of accidental export to the other Hawaiian Islands and to subtropical palm-growing regions of California and Florida. Delineating populations by physical trapping in remote, undeveloped areas is a critical part of the containment and eradication program. Continuous surveillance near ports of entry is especially important for eliminating incipient populations rapidly and mitigating the risk of human-assisted transport. Traditional trap monitoring for the CRB is labor-intensive, costly, and temporally inadequate. We have developed an autonomous trap surveillance framework that uses electronic sensors together with remote cloud front and back ends to monitor CRB trap contents. The customized system incorporates a camera and a digital microphone, and communicates data over a cellular network using Category-M (CAT-M) Low-Power Wide-Area Network (LPWAN) technology, with an integrated GNSS chip for precise geolocation of catches. Hourly monitoring data from early deployments have demonstrated that adult CRB are crepuscular: over two-thirds of catches occurred after sunset within three hours of twilight, and fewer than 1% occurred unambiguously during daylight. The system represents a significant advance in trap monitoring and can prove valuable for identifying biological behaviors that might be exploited for more effective control.
Citations: 0
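The crepuscular finding comes from bucketing hourly catch timestamps against sunset. A toy sketch of that kind of analysis, assuming fixed sunrise/sunset times and invented catch times (a real analysis would use a per-day solar ephemeris):

```python
from datetime import datetime, timedelta

def bucket_catches(catch_times, sunset, sunrise):
    """Group catch timestamps into crude daylight / post-sunset (<= 3 h) / other-night
    buckets. `sunset` and `sunrise` are fixed datetimes for one day, which is a
    simplifying assumption for illustration only."""
    buckets = {"daylight": 0, "post_sunset_3h": 0, "other_night": 0}
    for t in catch_times:
        if sunrise <= t < sunset:
            buckets["daylight"] += 1
        elif sunset <= t <= sunset + timedelta(hours=3):
            buckets["post_sunset_3h"] += 1
        else:
            buckets["other_night"] += 1
    return buckets

# Hypothetical hourly catch log (illustration only)
sunrise = datetime(2024, 6, 1, 6, 0)
sunset = datetime(2024, 6, 1, 19, 0)
catches = [datetime(2024, 6, 1, h, 30) for h in (10, 19, 20, 21, 23)]
counts = bucket_catches(catches, sunset, sunrise)
print(counts)
```

With data like the paper's, the `post_sunset_3h` bucket would hold over two-thirds of the total and `daylight` under 1%.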
Recent advances in pig behavior detection based on information perception technology
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-26 DOI: 10.1016/j.compag.2025.110327
Jinyang Xu, Yibin Ying, Dihua Wu, Yilei Hu, Di Cui
Global demand for meat is increasing with population growth, driving the expansion of pig farming worldwide. For farming enterprises, production efficiency is paramount, and pig behavior reflects information about health, welfare, and growth status that indirectly affects that efficiency; detecting daily pig behaviors is therefore essential. With advances in sensing and artificial intelligence, various information perception technologies have been applied to pig behavior detection. This paper provides a comprehensive review of recent progress in the area. The merits and drawbacks of different information perception technologies for pig target perception and behavior detection are analyzed first; different pig behavior detection systems are then compared, and the public pig behavior datasets are summarized. Based on these findings, the study identifies key persistent challenges: the limited data dimensionality of single sensing modalities, the difficulty of accurately perceiving individual behavior under group housing, the uneven research focus across behavior types, the limited variety and scale of publicly available pig behavior datasets, and the heavy reliance on manual data annotation. To address these issues, future research should integrate multiple sensing modalities to enrich data quality and dimensionality, develop target extraction and behavior detection models that balance accuracy with computational complexity, broaden the scope of studied behaviors to include those previously overlooked, construct more diverse and sufficiently large datasets, and adopt semi-supervised or unsupervised annotation strategies. This work will facilitate large-scale commercial applications of pig behavior detection and lay a critical foundation for welfare-oriented pig farming.
Citations: 0
DenseDFFNet: Dense connected dual-stream feature fusion network for calf manure segmentation and diarrhea recognition
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-25 DOI: 10.1016/j.compag.2025.110328
Liuru Pu, Yongjie Zhao, Haoyu Kang, Xiangfeng Kong, Xiaopeng Du, Huaibo Song
Neonatal calf diarrhea is a globally prevalent disease, accounting for 57% of pre-weaning calf mortality. Early detection and intervention are critical for reducing morbidity and mortality while improving breeding efficiency. In intensive farming environments it is difficult for staff to identify diarrhea symptoms in calves promptly and reliably, and automated recognition methods for calf diarrhea remain underdeveloped. To address this, a non-contact calf diarrhea recognition method based on DenseDFFNet was developed. Employing the multi-modal segmentation model Grounded-Segment-Anything (G-SAM) for manure segmentation significantly reduced annotation effort and achieved a fecal segmentation accuracy of 96.45% in complex backgrounds. To mitigate abrupt pixel-value transitions at segmented image boundaries, a Parallel Convolutional Squeeze-and-Excitation (ParallelConvSE) module was designed, integrating local and global features through parallel standard convolution and Squeeze-and-Excitation (SE) attention and thereby enhancing the model's overall performance and generalization. For diarrhea classification, the DenseDFFNet module achieved a test accuracy of 95.87% on fecal classification; when validated on video data, recognition accuracies for diarrhea and normal states reached 93.92% and 91.21%, respectively. Additionally, a self-propelled data collection system was developed to enable efficient diarrhea recognition in complex commercial farming scenarios, offering a novel solution for calf health monitoring and early diagnosis. With its non-contact, efficient, and objective characteristics, the proposed method significantly reduces labor intensity and provides a robust technical solution for calf diarrhea recognition.
Citations: 0
Star-YOLO: A lightweight and efficient model for weed detection in cotton fields using advanced YOLOv8 improvements
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-25 DOI: 10.1016/j.compag.2025.110306
Zheng Lu, Zhu Chengao, Liu Lu, Yang Yan, Wang Jun, Xia Wei, Xu Ke, Tie Jun
Effective weed management in cotton fields is crucial for preventing crop loss and maintaining agricultural productivity, but the complexity and high computational demands of deep-learning models hinder deployment on resource-constrained devices. This study therefore proposes a lightweight deep-learning model based on an improved YOLOv8 architecture. First, the backbone and C2f modules are restructured using Star Blocks, together with a designed lightweight detection head (a lightweight shared convolutional separable BN detection head), effectively reducing the model's parameters and computational overhead. To better capture global weed information, the LSK attention mechanism expands the receptive field, enhancing detection performance. Additionally, the dynamic upsampling operator DySample replaces conventional upsampling, further improving detection speed. Compared with YOLOv8, the proposed model reduces parameters, computation, and model size by 50.0%, 39.0%, and 47.0%, respectively, while achieving mAP@50 and mAP@50-95 scores of 98.0% and 95.4%. The model also strikes the best balance of accuracy, lightweight design, and detection speed among mainstream lightweight backbone networks and architectures, demonstrating superior performance on the public CottonWeedDet12 and CottonWeedDet3 datasets. With TensorRT integration, the model's detection speed increases ninefold, a significant step toward an efficient weed-detection system for real-world agricultural applications. Because the model can be integrated into automated weeding equipment, fully automated weed detection and weeding operations become realizable, enhancing the efficiency and precision of agricultural tasks.
Citations: 0
Biomass phenotyping of oilseed rape through UAV multi-view oblique imaging with 3DGS and SAM model
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-25 DOI: 10.1016/j.compag.2025.110320
Yutao Shen, Hongyu Zhou, Xin Yang, Xuqi Lu, Ziyue Guo, Lixi Jiang, Yong He, Haiyan Cen
Biomass estimation of oilseed rape is crucial for optimizing crop productivity and breeding strategies. While UAV-based imaging has advanced high-throughput phenotyping, current methods often rely on orthophoto images, which struggle with overlapping leaves and incomplete structural information in complex field environments. This study integrates 3D Gaussian Splatting (3DGS) with the Segment Anything Model (SAM) for precise 3D reconstruction and biomass estimation of oilseed rape. UAV multi-view oblique images from 36 angles were used for 3D reconstruction, with the SAM module enhancing point cloud segmentation. The segmented point clouds were converted into point cloud volumes, which were fitted to ground-measured biomass using linear regression. The results showed that 3DGS (7k and 30k iterations) provided high accuracy, with peak signal-to-noise ratios (PSNR) of 27.43 and 29.53 and training times of 7 and 49 min, respectively, exceeding the performance of structure from motion (SfM) and mipmap Neural Radiance Fields (Mip-NeRF) and demonstrating superior efficiency. The SAM module achieved high segmentation accuracy, with a mean intersection over union (mIoU) of 0.961 and an F1 score of 0.980. A comparison of biomass extraction models found the point cloud volume model to be the most accurate, with a coefficient of determination (R²) of 0.976, a root mean square error (RMSE) of 2.92 g/plant, and a mean absolute percentage error (MAPE) of 6.81%, outperforming both the plot crop volume and individual crop volume models. This study highlights the potential of combining 3DGS with multi-view UAV imaging for improved biomass phenotyping.
Citations: 0
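The volume-to-biomass step above is ordinary one-variable least squares scored with R², RMSE, and MAPE. A self-contained sketch of that pipeline; the volume/biomass numbers are invented for illustration, not the study's measurements:

```python
import math

def fit_linear(x, y):
    """Closed-form ordinary least squares for y ≈ a*x + b (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def regression_metrics(y, y_hat):
    """R², RMSE, and MAPE, the three scores reported in the abstract."""
    my = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_hat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / len(y))
    mape = 100 * sum(abs((yi - pi) / yi) for yi, pi in zip(y, y_hat)) / len(y)
    return r2, rmse, mape

# Hypothetical point-cloud volumes (cm^3) and measured biomass (g)
vol = [120.0, 150.0, 180.0, 210.0, 260.0]
mass = [18.5, 23.1, 27.0, 32.2, 39.6]
a, b = fit_linear(vol, mass)
r2, rmse, mape = regression_metrics(mass, [a * v + b for v in vol])
```

The same `regression_metrics` shape applies to any of the volume models the paper compares; only the predictor changes.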
Aggressive behavior recognition and welfare monitoring in yellow-feathered broilers using FCTR and wearable identity tags
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-25 DOI: 10.1016/j.compag.2025.110284
Hongcheng Xue, Jie Ma, Yakun Yang, Hao Qu, Longhe Wang, Lin Li
Aggressive behavior and individual identification in chickens have long attracted attention in animal welfare farming and genetic breeding. Existing methods rely predominantly on manual observation, which is limited by subjectivity and slow response times, and matching chicken identity with behavior requires considerable human and material resources. To address these challenges, we propose FCTR, a Fast Chicken aggressive behavior recognition model based on TRansformer, and introduce a wearable identity tag for chickens. FCTR demonstrates robust recognition performance on a free-range yellow-feathered broiler dataset and establishes an identity matching verification method, refining behavioral quantification at the individual level for precision farming. To evaluate the approach, the ChickenFight-2024 dataset was collected and constructed. Multiple experiments confirm that the method can effectively identify both chicken identities and aggressive behaviors from video surveillance images. The proposed model achieved mAP values of 89.81%, 85.76%, 90.14%, 93.19%, and 87.27% for fight, tread, peck, eat, and drink behaviors, respectively, with an mAP of 77.39% for identity information. The identity matching verification method achieved a 94.88% matching rate, highlighting the model's potential for commercial farming applications and offering new insights and solutions for efficient genetic breeding.
Citations: 0
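The per-behavior mAP values above are means of per-class average precision. For readers unfamiliar with how a single class's AP is accumulated from ranked detections, here is a small all-point (VOC-style) sketch; the detections and ground-truth count are hypothetical, not the authors' evaluation code:

```python
def average_precision(detections, n_gt):
    """All-point average precision for one class.

    `detections`: (confidence, is_true_positive) pairs for one behavior class;
    `n_gt`: number of ground-truth instances. Precision is sampled at each
    recall step (each true positive) and averaged over n_gt.
    """
    tp = fp = 0
    ap = 0.0
    for _, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
            ap += tp / (tp + fp)  # precision at this recall step
        else:
            fp += 1
    return ap / n_gt

# Hypothetical ranked detections for one class
dets = [(0.9, True), (0.8, False), (0.7, True)]
ap = average_precision(dets, n_gt=2)  # (1/1 + 2/3) / 2 ≈ 0.833
```

mAP is then simply the mean of these per-class AP values (fight, tread, peck, eat, drink, identity).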
A point-supervised algorithm with multiscale semantic enhancement for counting multiple crop plants from aerial imagery
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-24 DOI: 10.1016/j.compag.2025.110289
Huibin Li, Huaiyang Liu, Wenbo Wang, Haozhou Wang, Qiangyi Yu, Jianping Qian, Wenbin Wu, Yun Shi, Changxing Geng
Counting crop plants is important for agricultural activities such as crop breeding and yield prediction. Numerous studies have developed methods for counting individual crop plants or plants with similar morphological characteristics, but these methods often suffer low accuracy and poor generalization when counting multiple crop types with significant scale variations in complex backgrounds. We therefore propose MCPCNet, a point-supervised algorithm that enhances multiscale semantics for counting multiple crop plants from aerial imagery, and construct the first multicategory crop plant counting dataset (MCPC-Dataset). We developed a concurrent spatial group enhancement module and a residual dynamic dilated convolution module, and introduced a contextual transformer module with a self-attention mechanism; these modules respectively reduce the impact of background, adapt to scale variations across crops, and enhance the robustness of the algorithm. Experiments on the MCPC-Dataset show that MCPCNet achieves excellent performance, with a mean absolute error (MAE) of 2.577, a mean square error (MSE) of 14.289, and a coefficient of determination (R²) of 0.991, giving it a clear advantage over the state-of-the-art (SOTA) point-supervised counting algorithm. In conclusion, MCPCNet provides a robust solution for high-precision counting of multiple crop plants and a valuable reference for future research.
Citations: 0
Advancing biomass estimation in hydroponic lettuce using RGB-depth imaging and morphometric descriptors with machine learning
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-24 DOI: 10.1016/j.compag.2025.110299
Jonathan S. Cardenas-Gallegos, Lorena Nunes Lacerda, Paul M. Severns, Alicia Peduzzi, Pavel Klimeš, Rhuanito Soranz Ferrarezi
By capturing the intricate structural and spectral variations of the plant canopy, we can model and predict dynamic parameters such as biomass with greater precision; imaging preserves the plants for continuous monitoring and provides a scalable, efficient alternative to traditional destructive techniques. The objective of this study was to examine the potential of image-derived color and geometric plant features for accurately predicting three biomass accumulation parameters (leaf fresh weight, leaf dry weight, and leaf area) in single-plant monitoring. Top-view images of hydroponic 'Chicarita' romaine lettuce (Lactuca sativa), captured with a color-and-depth sensor, were fed to a multi-plant image processing workflow that extracted plant height and canopy morphometric and color traits at the individual-plant level. Two destructive harvest rounds across two crop cycles provided the observed values for each biomass response. The image-derived traits served as predictors for a baseline simple linear regression and for two supervised machine learning models, random forest and least absolute shrinkage and selection operator (LASSO) regression. Using the extracted depth information, vertical height per plant was estimated with a mean absolute error of 1.51 cm. Random forest regression yielded the most accurate predictions in the first harvest round for all three biomass parameters, with R² values of 0.74, 0.80, and 0.67 and mean absolute percentage errors (MAPE) of 11.77%, 10.16%, and 12.50%. LASSO regression outperformed the other models in the second harvest round, with R² values of 0.72, 0.65, and 0.79 and MAPE of 7.79%, 7.76%, and 7.06% for leaf fresh weight, leaf dry weight, and leaf area, respectively. These results suggest that a selection of canopy descriptors can improve non-destructive biomass estimation across a lettuce crop cycle, enabling remote monitoring and real-time harvest projections.
Citations: 0
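The per-plant height estimate comes from a top-view depth map: the bench (background) distance minus the canopy-top distance. A toy sketch of the idea on a flattened depth image, assuming depth values are distances from the sensor; this is the general principle, not the authors' workflow:

```python
def plant_height_cm(depth_cm, canopy_mask, top_percentile=5):
    """Estimate plant height from a flattened top-view depth map.

    Non-canopy pixels give the bench distance; the canopy top is taken at a
    low percentile of canopy depths, which is more robust to noise than the
    single nearest pixel.
    """
    bg = [d for d, m in zip(depth_cm, canopy_mask) if not m]
    canopy = sorted(d for d, m in zip(depth_cm, canopy_mask) if m)
    bench = sum(bg) / len(bg)                      # mean background distance
    idx = max(0, int(len(canopy) * top_percentile / 100) - 1)
    return bench - canopy[idx]                     # taller plant => smaller depth

# Toy 14-pixel "image": 10 bench pixels at 100 cm, 4 canopy pixels at 80-83 cm
depth = [100.0] * 10 + [80.0, 81.0, 82.0, 83.0]
mask = [False] * 10 + [True] * 4
h = plant_height_cm(depth, mask)  # 100 - 80 = 20.0 cm
```

In practice the canopy mask would come from the color segmentation step described in the abstract, and bench depth would be calibrated per tray.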
Cotton3DGaussians: Multiview 3D Gaussian Splatting for boll mapping and plant architecture analysis
IF 7.7 · Q1 · Agricultural & Forestry Sciences
Computers and Electronics in Agriculture Pub Date: 2025-03-24 DOI: 10.1016/j.compag.2025.110293
Lizhi Jiang, Jin Sun, Peng W. Chee, Changying Li, Longsheng Fu
Cotton is an economically important crop cultivated worldwide for textile production, and breeding programs focus on selecting genotypes with favorable traits for high yields. This study introduced 3D Gaussian Splatting (3DGS) to reconstruct high-fidelity three-dimensional (3D) models and developed a segmentation workflow, Cotton3DGaussians, to analyze cotton bolls and extract architectural traits from single plants. Cotton plants were scanned 360° with a smartphone, and photogrammetry was used to estimate camera parameters and reconstruct a sparse point cloud, which was then optimized into a 3DGS model. In Cotton3DGaussians, 2D boll masks segmented from four views were mapped into 3D space, and redundant bolls were removed through cross-view clustering. YOLOv11x and the foundation model Segment Anything Model (SAM) were compared for producing 2D masks, with YOLOv11x achieving an F1 score 5.9% higher than SAM. Phenotypic traits such as boll number, boll volume, plant height, and canopy size were estimated. The 3DGS model exhibited superior rendering quality, achieving a peak signal-to-noise ratio (PSNR) 6.91 higher than NeRF. Cotton3DGaussians effectively segmented 3D bolls across views, with mean absolute percentage errors (MAPE) of 9.23% for boll number, 3.66% for canopy size, 2.38% for plant height, and 8.17% for boll volume against LiDAR ground truth. Regression between convex boll volume and boll weight showed a 19.3% weight error per plant. This study demonstrates the potential of 3DGS for low-cost, high-fidelity 3D modeling, enabling high-resolution phenotyping and advancing cotton breeding programs. The methodology can also be applied to other crops for improved 3D trait measurement and enhanced productivity.
Citations: 0
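The cross-view clustering step, removing bolls detected in several of the four views, amounts to merging nearby 3D centroids. A greedy, order-dependent toy stand-in for that deduplication (the merge radius and centroids below are invented for illustration; the paper's clustering may differ):

```python
def dedupe_centroids(centroids, merge_radius=2.0):
    """Keep one representative per cluster of 3D centroids closer than
    `merge_radius` (same units as the coordinates). Greedy first-come
    assignment: each centroid is kept only if it is farther than the radius
    from every centroid already kept."""
    kept = []
    for c in centroids:
        far_from_all = all(
            sum((a - b) ** 2 for a, b in zip(c, k)) ** 0.5 > merge_radius
            for k in kept
        )
        if far_from_all:
            kept.append(c)
    return kept

# Two views detect the same boll near the origin; a third boll sits 10 units away
bolls = [(0.0, 0.0, 0.0), (0.5, 0.1, 0.0), (10.0, 0.0, 0.0)]
unique = dedupe_centroids(bolls)  # the two near-origin detections collapse to one
```

The final boll count is then `len(unique)`, which is what the 9.23% boll-number MAPE is scored against.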