Investigating the use of deep learning models for land cover classification from street-level imagery

Impact Factor: 1.7 · JCR Q3 (Ecology) · CAS Tier 4 (Environmental Science & Ecology)
Narumasa Tsutsumida, Jing Zhao, Naho Shibuya, Kenlo Nasahara, Takeo Tadono
{"title":"Investigating the use of deep learning models for land cover classification from street-level imagery","authors":"Narumasa Tsutsumida,&nbsp;Jing Zhao,&nbsp;Naho Shibuya,&nbsp;Kenlo Nasahara,&nbsp;Takeo Tadono","doi":"10.1111/1440-1703.12470","DOIUrl":null,"url":null,"abstract":"<p>Land cover classification mapping is the process of assigning labels to different types of land surfaces based on overhead imagery. However, acquiring reference samples through fieldwork for ground truth can be costly and time-intensive. Additionally, annotating high-resolution satellite images poses challenges, as certain land cover types are difficult to discern solely from nadir images. To address these challenges, this study examined the feasibility of using street-level imagery to support the collection of reference samples and identify land cover. We utilized 18,022 images captured in Japan, with 14 different land cover classes. Our approach involved using convolutional neural networks based on Inception-v4 and DenseNet, as well as Transformer-based Vision and Swin Transformers, both with and without pre-trained weights and fine-tuning techniques. Additionally, we explored explainability through Gradient-Weighted Class Activation Mapping (Grad-CAM). Our results indicate that using a Vision Transformer was the most effective method, achieving an overall accuracy of 86.12% and allowing for full explainability of land cover targets within an image. This paper proposes a promising solution for land cover classification from street-level imagery, which can be used for semi-automatic reference sample collection from geo-tagged street-level photos.</p>","PeriodicalId":11434,"journal":{"name":"Ecological Research","volume":"39 5","pages":"757-765"},"PeriodicalIF":1.7000,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/1440-1703.12470","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ecological Research","FirstCategoryId":"93","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/1440-1703.12470","RegionNum":4,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ECOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Land cover classification mapping is the process of assigning labels to different types of land surfaces based on overhead imagery. However, acquiring reference samples through fieldwork for ground truth can be costly and time-intensive. Additionally, annotating high-resolution satellite images poses challenges, as certain land cover types are difficult to discern solely from nadir images. To address these challenges, this study examined the feasibility of using street-level imagery to support the collection of reference samples and identify land cover. We utilized 18,022 images captured in Japan, with 14 different land cover classes. Our approach involved using convolutional neural networks based on Inception-v4 and DenseNet, as well as Transformer-based Vision and Swin Transformers, both with and without pre-trained weights and fine-tuning techniques. Additionally, we explored explainability through Gradient-Weighted Class Activation Mapping (Grad-CAM). Our results indicate that using a Vision Transformer was the most effective method, achieving an overall accuracy of 86.12% and allowing for full explainability of land cover targets within an image. This paper proposes a promising solution for land cover classification from street-level imagery, which can be used for semi-automatic reference sample collection from geo-tagged street-level photos.
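
The abstract summarises the workflow: fine-tune ImageNet-pretrained backbones (Inception-v4, DenseNet, Vision Transformer, Swin Transformer) on 18,022 street-level photos labelled with 14 land cover classes, then inspect the predictions with Grad-CAM. The sketch below is not the authors' code; it shows how such a fine-tuning step could look in PyTorch with the timm model zoo. The model name, hyperparameters, and the per-class folder layout under data/street_level/ are assumptions for illustration.

```python
# Minimal fine-tuning sketch (illustrative, not the authors' implementation).
# Assumes street-level photos arranged as data/street_level/train/<class>/<img>.jpg
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_CLASSES = 14  # land cover classes, as in the paper

# ImageNet-pretrained ViT; timm also provides swin_*, densenet*, and inception_v4,
# so the other backbones compared in the paper can be swapped in by name.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Standard ImageNet preprocessing; the paper's exact augmentation pipeline is not given.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("data/street_level/train", transform=preprocess)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # epoch count is an assumption
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

For the Grad-CAM explainability step mentioned in the abstract, a third-party library such as pytorch-grad-cam (imported as pytorch_grad_cam) can produce the class activation maps; for a ViT the patch-token sequence has to be reshaped back into a 2-D grid before the map is computed. Again a hedged sketch, with the target layer and input chosen only for illustration:

```python
# Grad-CAM sketch for the fine-tuned ViT (reuses the model and loader above).
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

def reshape_transform(tensor, height=14, width=14):
    # Drop the class token and fold the 196 patch tokens back into a 14x14 grid,
    # then move channels to the second dimension like a CNN feature map.
    result = tensor[:, 1:, :].reshape(tensor.size(0), height, width, tensor.size(2))
    return result.permute(0, 3, 1, 2)

model.eval()
cam = GradCAM(model=model,
              target_layers=[model.blocks[-1].norm1],  # last transformer block
              reshape_transform=reshape_transform)

images, _ = next(iter(train_loader))
input_tensor = images[:1].to(device)
pred_class = int(model(input_tensor).argmax(dim=1))
heatmap = cam(input_tensor=input_tensor,
              targets=[ClassifierOutputTarget(pred_class)])  # (1, 224, 224) array
```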


Source journal: Ecological Research (Environmental Science / Ecology)
CiteScore: 4.40
Self-citation rate: 5.00%
Articles published: 87
Review time: 5.6 months
About the journal: Ecological Research has been published in English by the Ecological Society of Japan since 1986. It publishes original papers on all aspects of ecology, in both aquatic and terrestrial ecosystems.