Automated detection and segmentation of baby kale crowns using Grounding DINO and SAM for data-scarce agricultural applications

Gianmarco Goycochea Casas, Zool Hilmi Ismail, Mohd Ibrahim Shapiai, Ettikan Kandasamy Karuppiah

Smart Agricultural Technology, Volume 11, Article 100903. Published 2025-03-20. DOI: 10.1016/j.atech.2025.100903
https://www.sciencedirect.com/science/article/pii/S2772375525001364
Citations: 0
Abstract
This research addresses the significant challenge of data scarcity in agriculture by introducing an automated pipeline for plant detection and segmentation. The primary objective was to detect and segment the crown area of baby kale (Brassica oleracea var. sabellica) during its early growth stages without relying on extensive training data or manual annotations, providing an alternative for scenarios with insufficient data. A dataset comprising aerial images of baby kale plants was gathered over a three-week period in a controlled environment. The pipeline was run on an NVIDIA GeForce RTX 4060 GPU. Grounding DINO was employed for plant detection based on textual prompts, and bounding boxes were generated to locate the central plant in each image. The detected regions were then processed using the Segment Anything Model (SAM) to extract precise segmentation masks of the plant crown. The segmentation results were validated by comparing the automated method with manually annotated ground truth using statistical metrics, including Spearman's correlation, RMSE%, and the Wilcoxon signed-rank test. The automated approach demonstrated a strong correlation (ρ = 0.956) with manual annotations across all weeks, with RMSE% decreasing as plants matured. While Week 1 exhibited lower agreement (ρ = 0.581, RMSE% = 56.246) due to segmentation challenges at early growth stages, performance improved significantly in Week 2 (ρ = 0.945, RMSE% = 24.834) and Week 3 (ρ = 0.996, RMSE% = 11.733). The statistical validation confirmed a significant difference between manual and automated annotations; however, the automated method consistently captured the growth trend of the plants. In conclusion, while the pipeline offers a promising approach for plant detection and segmentation in data-scarce environments, its limitations, especially in early growth stages, should be considered.
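The abstract describes a two-stage flow: Grounding DINO proposes text-prompted boxes, the most central box selects the target plant, and SAM turns that box into a crown mask. The model calls themselves are not reproduced here; the sketch below illustrates only the two deterministic steps around them, central-box selection and pixel-to-area conversion. The function names and the pixels-per-centimetre calibration are illustrative assumptions, not the authors' code.

```python
import numpy as np

def central_box(boxes, image_size):
    """Of N candidate [x0, y0, x1, y1] boxes, keep the one whose centre
    lies closest to the image centre (the pipeline targets the central
    plant in each aerial image)."""
    boxes = np.asarray(boxes, dtype=float)
    centres = (boxes[:, :2] + boxes[:, 2:]) / 2.0      # (cx, cy) per box
    target = np.asarray(image_size, dtype=float) / 2.0  # image centre (w/2, h/2)
    d2 = np.sum((centres - target) ** 2, axis=1)
    return boxes[int(np.argmin(d2))]

def crown_area_cm2(mask, px_per_cm):
    """Convert a binary segmentation mask to a physical crown area,
    assuming a known calibration of px_per_cm pixels per centimetre."""
    return float(np.count_nonzero(mask)) / (px_per_cm ** 2)
```

In use, the box returned by `central_box` would be passed to SAM as a box prompt, and the resulting mask to `crown_area_cm2` to obtain the crown area compared against manual annotations.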
The study contributes by demonstrating a practical approach to overcoming data scarcity in agriculture using multimodal AI models capable of zero-shot and few-shot learning. This approach paves the way for more adaptive AI-driven agricultural monitoring systems, addressing data scarcity challenges in precision farming.
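The validation described above (Spearman's ρ, RMSE%, and the Wilcoxon signed-rank test) can be reproduced on any pair of manual and automated area series with SciPy. A minimal sketch, where the function name, the synthetic data, and the normalisation of RMSE% by the manual mean are assumptions rather than the paper's exact protocol:

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

def validate(manual, auto):
    """Compare automated crown areas against manual ground truth using
    the three statistics reported in the abstract."""
    manual = np.asarray(manual, dtype=float)
    auto = np.asarray(auto, dtype=float)
    rho, _ = spearmanr(manual, auto)                    # rank correlation
    rmse = np.sqrt(np.mean((auto - manual) ** 2))
    rmse_pct = 100.0 * rmse / np.mean(manual)           # RMSE as % of manual mean
    _, p = wilcoxon(manual, auto)                       # paired signed-rank test
    return {"spearman_rho": rho, "rmse_pct": rmse_pct, "wilcoxon_p": p}

# Hypothetical example: automated areas biased 10 % high preserve the
# ranking (rho = 1) yet differ significantly in the paired test.
result = validate([10, 12, 15, 20, 22, 30],
                  [11, 13.2, 16.5, 22, 24.2, 33])
```

A consistent bias yields a perfect rank correlation alongside a significant Wilcoxon difference, which mirrors the paper's finding that the automated method tracks the growth trend even where it differs significantly from manual annotation.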