An open-source data-driven automatic road extraction framework for diverse farmland application scenarios

IF 7.7 · CAS Tier 1 (Agricultural Sciences) · JCR Q1, Agriculture, Multidisciplinary
Jing Shen, Yawen He, Jian Peng, Tang Liu, Chenghu Zhou
DOI: 10.1016/j.compag.2025.110330
Computers and Electronics in Agriculture, Volume 235, Article 110330
Published: 2025-03-26 (Journal Article)
Citations: 0

Abstract

The narrow contours of farmland roads, the lack of clear boundary features distinguishing them from surrounding objects, and the complexity and variability of their appearance limit the applicability of existing supervised extraction algorithms. Meanwhile, visual segmentation models such as SAM (Segment Anything Model) can achieve zero-shot generalization with appropriate prompts but struggle to capture linear objects effectively. This study introduces OSAM (OpenStreetMap SAM), which fine-tunes SAM on historical open-source datasets to enhance its ability to detect linear features. The OSAM framework then dynamically generates prompts from the open geographic database OpenStreetMap to activate SAM, enabling autonomous detection of farmland roads without additional manual annotations or assisted interactions. Experiments demonstrate that OSAM performs exceptionally well in scenarios with sparse farmland road distributions and delivers robust results even with limited training data. Specifically, OSAM achieves an F1 of 71.91% and an IoU of 58.53% when trained on the full dataset, significantly outperforming DLinkNet (IoU: 56.42%) and SegFormer (IoU: 41.65%). Even with only 1% of the original training samples, OSAM maintains robust performance (F1: 62.26%, IoU: 47.02%), whereas supervised learning methods such as SegFormer, SIINet, and UNet suffer significant performance degradation under such extreme data constraints. Furthermore, evaluations on remote sensing images with varying data distributions, spatial resolutions, and agricultural environments confirm that OSAM achieves high extraction accuracy and strong generalization ability. This framework significantly reduces reliance on large, well-balanced labeled datasets while maintaining high accuracy, making farmland road extraction more efficient and cost-effective across diverse scenarios.
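The prompting step the abstract describes — deriving SAM prompts from OpenStreetMap road geometries — can be illustrated with a small sketch. The function below samples point prompts at roughly equal arc-length intervals along a road centerline polyline; the function name, the `spacing` parameter, and the coordinate handling are illustrative assumptions, not details taken from the paper, and OSAM's actual prompt-generation logic may differ.

```python
import math


def sample_prompt_points(polyline, spacing):
    """Sample points at roughly equal arc-length intervals along an
    OSM-style road centerline, e.g. for use as SAM point prompts.

    polyline: list of (x, y) vertices; spacing: desired gap between
    prompts in the same units as the coordinates (an assumed parameter).
    """
    # Cumulative arc length at each vertex of the polyline.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]

    # Number of prompts: at least both endpoints, denser if the road is long.
    n = max(2, int(total // spacing) + 1)
    targets = [i * total / (n - 1) for i in range(n)]

    points, seg = [], 0
    for t in targets:
        # Advance to the segment that contains arc length t.
        while seg < len(dists) - 2 and dists[seg + 1] < t:
            seg += 1
        seg_len = dists[seg + 1] - dists[seg]
        f = 0.0 if seg_len == 0 else (t - dists[seg]) / seg_len
        (x0, y0), (x1, y1) = polyline[seg], polyline[seg + 1]
        # Linear interpolation within the segment.
        points.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return points
```

In a full pipeline these points would be passed to a SAM predictor as foreground point prompts; that step is omitted here since it requires the model weights.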


Source Journal
Computers and Electronics in Agriculture (Engineering & Technology – Computer Science: Interdisciplinary Applications)
CiteScore: 15.30
Self-citation rate: 14.50%
Annual publications: 800
Review time: 62 days
Journal description: Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics like agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.