Multi-class rice seed recognition based on deep space and channel residual network combined with double attention mechanism.

IF 2.9 · JCR Q1, MULTIDISCIPLINARY SCIENCES · CAS Zone 3, Comprehensive Journals
PLoS ONE · Pub Date: 2025-05-16 · eCollection Date: 2025-01-01 · DOI: 10.1371/journal.pone.0322699
Tingyuan Zhang, Changsheng Zhang, Zhongyi Yang, Meng Wang, Fujie Zhang, Dekai Li, Sen Yang

Abstract

Accurately recognizing rice seed varieties poses significant challenges due to their diverse morphological characteristics and complex classification requirements. Traditional image recognition methods often struggle with both accuracy and efficiency in this context. To address these limitations, this study proposes the Deep Space and Channel Residual Network with Double Attention Mechanism (RSCD-Net) to enhance the recognition accuracy of 36 rice seed varieties. The core innovation of RSCD-Net is the introduction of the Space and Channel Feature Extraction Residual Block (SCR-Block), which improves inter-class differentiation while minimizing redundant features, thereby optimizing computational efficiency. The RSCD-Net architecture consists of 16 SCR-Blocks, structured into four convolutional stages with 3, 4, 6, and 3 units, respectively. Additionally, a Double Attention Mechanism (A2Net) is incorporated to enlarge the network's global receptive field, improving its capacity to distinguish subtle variations among seed types. Experimental results on a self-collected dataset demonstrate that RSCD-Net achieves an average accuracy of 81.94%, surpassing the baseline model by 4.16%. Compared with state-of-the-art models such as InceptionResNetV2, ConvNeXt, MobileNetV3, and Swin Transformer, RSCD-Net improves accuracy by 1.17%, 3%, 24.72%, and 13.22%, respectively, showcasing its superior performance. These findings confirm that RSCD-Net provides an effective and efficient solution for rice seed classification, offering a promising reference for addressing similar fine-grained recognition challenges in agricultural applications.
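To make the double-attention idea concrete, the sketch below shows a generic A²-Net-style gather-and-distribute step in plain NumPy. This is not the authors' code: the shapes, projection matrices (`w_a`, `w_b`, `w_v`), and the number of global descriptors `m` are illustrative assumptions, and the per-block details of RSCD-Net's SCR-Blocks are omitted.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def double_attention(x, w_a, w_b, w_v):
    """Schematic A2-Net-style double attention.

    x: (c, n) feature map flattened over n = h*w spatial positions.
    w_a, w_b, w_v: (m, c) illustrative 1x1-conv-like projections.
    Returns a (m, n) feature map; a real block would project back to c
    channels and add a residual connection.
    """
    a = w_a @ x                      # (m, n) features to be gathered
    b = softmax(w_b @ x, axis=1)     # (m, n) gathering attention over positions
    v = softmax(w_v @ x, axis=0)     # (m, n) distribution attention over descriptors
    g = a @ b.T                      # (m, m) global descriptors (second-order pooling)
    z = g @ v                        # (m, n) descriptors redistributed to each position
    return z

rng = np.random.default_rng(0)
c, m, n = 8, 4, 16                   # channels, descriptors, spatial positions (toy sizes)
x = rng.standard_normal((c, n))
w_a = rng.standard_normal((m, c))
w_b = rng.standard_normal((m, c))
w_v = rng.standard_normal((m, c))
z = double_attention(x, w_a, w_b, w_v)
print(z.shape)  # (4, 16)
```

Because every output position mixes in the pooled global descriptors `g`, the block gives each location a global receptive field, which is the property the abstract attributes to the A2Net component.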

Source journal: PLoS ONE (Biology)
CiteScore: 6.20 · Self-citation rate: 5.40% · Articles per year: 14242 · Review time: 3.7 months
About the journal: PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online, authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage