SRU-Net: a novel spatiotemporal attention network for sclera segmentation and recognition

Impact Factor 3.7 · CAS Quartile 4 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Tara Mashayekhbakhsh, Saeed Meshgini, Tohid Yousefi Rezaii, Somayeh Makouei
{"title":"SRU-Net: a novel spatiotemporal attention network for sclera segmentation and recognition","authors":"Tara Mashayekhbakhsh, Saeed Meshgini, Tohid Yousefi Rezaii, Somayeh Makouei","doi":"10.1007/s10044-024-01301-z","DOIUrl":null,"url":null,"abstract":"<p>Segmenting sclera images for effective recognition under non-cooperative conditions poses a significant challenge due to the prevalent noise. While U-Net-based methods have shown success, their limitations in accurately segmenting objects with varying shapes necessitate innovative approaches. This paper introduces the spatiotemporal residual encoding and decoding network (SRU-Net), featuring multi-spatiotemporal feature integration (Ms-FI) modules and attention-pool mechanisms to enhance segmentation accuracy and robustness. Ms-FI modules within SRU-Net’s encoders and decoders identify salient feature regions and prune responses, while attention-pool modules improve segmentation robustness. To assess the proposed SRU-Net, we conducted experiments using six datasets, employing precision, recall, and F1-score metrics. The experimental results demonstrate the superiority of SRU-Net over state-of-the-art methods. Specifically, SRU-Net achieves F1-score values of 94.58%, 98.31%, 98.49%, 97.52%, 95.3%, 97.47%, and 93.11% for MSD, MASD, SVBPI, MASD+MSD, UBIRIS.v1, UBIRIS.v2, and MICHE, respectively. Further evaluation in recognition tasks, with metrics such as AUC, EER, VER@0.1%FAR, and VER@1%FAR considered for the six datasets. The proposed pipeline, comprising SRU-Net and auto encoders (AE), outperforms previous research for all datasets. Particularly noteworthy is the comparison of EER, where SRU-Net + AE exhibits the best recognition results, achieving an EER of 9.42%, 3.81%, and 5.73% for MSD, MASD, and MICHE datasets, respectively.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"67 1","pages":""},"PeriodicalIF":3.7000,"publicationDate":"2024-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Analysis and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10044-024-01301-z","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Segmenting sclera images for effective recognition under non-cooperative conditions poses a significant challenge due to the prevalent noise. While U-Net-based methods have shown success, their limitations in accurately segmenting objects with varying shapes necessitate innovative approaches. This paper introduces the spatiotemporal residual encoding and decoding network (SRU-Net), featuring multi-spatiotemporal feature integration (Ms-FI) modules and attention-pool mechanisms to enhance segmentation accuracy and robustness. Ms-FI modules within SRU-Net’s encoders and decoders identify salient feature regions and prune responses, while attention-pool modules improve segmentation robustness. To assess the proposed SRU-Net, we conducted experiments on six datasets, using precision, recall, and F1-score metrics. The experimental results demonstrate the superiority of SRU-Net over state-of-the-art methods. Specifically, SRU-Net achieves F1-score values of 94.58%, 98.31%, 98.49%, 97.52%, 95.3%, 97.47%, and 93.11% for MSD, MASD, SVBPI, MASD+MSD, UBIRIS.v1, UBIRIS.v2, and MICHE, respectively. Recognition performance was evaluated further using metrics such as AUC, EER, VER@0.1%FAR, and VER@1%FAR on the six datasets. The proposed pipeline, comprising SRU-Net and autoencoders (AE), outperforms previous research on all datasets. Particularly noteworthy is the comparison of EER, where SRU-Net + AE exhibits the best recognition results, achieving EERs of 9.42%, 3.81%, and 5.73% for the MSD, MASD, and MICHE datasets, respectively.
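The abstract describes a U-Net-style encoder-decoder whose pooling steps are attention-weighted, but no implementation details are published with this page. The PyTorch fragment below is a minimal sketch of what a residual encoder stage with attention-weighted pooling could look like; the class names (`AttentionPool`, `EncoderBlock`), layer choices, and sizes are illustrative assumptions, not the authors' Ms-FI or attention-pool modules.

```python
# Illustrative sketch only: the real SRU-Net modules (Ms-FI, attention-pool)
# are not published with this abstract; names and layer sizes are assumptions.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Downsample a feature map with a learned spatial attention mask
    instead of plain max/average pooling (hypothetical stand-in for the
    paper's attention-pool mechanism)."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.pool = nn.AvgPool2d(kernel_size=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.attn(x)          # (N, 1, H, W) spatial attention map
        return self.pool(x * weights)   # attention-weighted downsampling

class EncoderBlock(nn.Module):
    """One residual encoder stage: two 3x3 convolutions with a 1x1 skip path,
    followed by attention-weighted pooling."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.down = AttentionPool(out_ch)

    def forward(self, x: torch.Tensor):
        feat = torch.relu(self.conv(x) + self.skip(x))  # residual encoding
        return feat, self.down(feat)                    # skip features + pooled output

# Example: one encoder stage on a 3-channel 256x256 sclera image.
block = EncoderBlock(3, 32)
skip, pooled = block(torch.randn(1, 3, 256, 256))
print(skip.shape, pooled.shape)  # (1, 32, 256, 256), (1, 32, 128, 128)
```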

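The reported numbers are standard pixel-level segmentation scores (precision, recall, F1) and verification scores derived from genuine/impostor match distributions (AUC, EER, VER@FAR). As a reference for how such metrics are typically computed, here is a small NumPy sketch; the masks and score arrays at the bottom are random placeholders, not data from the paper.

```python
# Sketch of the evaluation metrics named in the abstract; the inputs
# below are random placeholders, not results from the paper.
import numpy as np

def f1_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Pixel-level F1 between two binary masks."""
    tp = np.logical_and(pred, target).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(target.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

def eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Equal error rate: threshold where false accept rate ~= false reject rate."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # accepted impostors
    frr = np.array([(genuine < t).mean() for t in thresholds])    # rejected genuines
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2

pred = np.random.rand(256, 256) > 0.5
target = np.random.rand(256, 256) > 0.5
print("F1:", f1_score(pred, target))
print("EER:", eer(np.random.normal(0.8, 0.1, 500), np.random.normal(0.4, 0.1, 500)))
```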

Source journal
Pattern Analysis and Applications (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 7.40
Self-citation rate: 2.60%
Articles published: 76
Review time: 13.5 months
Journal description: The journal publishes high-quality articles in areas of fundamental research in intelligent pattern analysis and applications in computer science and engineering. It aims to provide a forum for original research which describes novel pattern analysis techniques and industrial applications of the current technology. In addition, the journal also publishes articles on pattern analysis applications in medical imaging. The journal solicits articles that detail new technology and methods for pattern recognition and analysis in applied domains including, but not limited to, computer vision and image processing, speech analysis, robotics, multimedia, document analysis, character recognition, knowledge engineering for pattern recognition, fractal analysis, and intelligent control. The journal publishes articles on the use of advanced pattern recognition and analysis methods including statistical techniques, neural networks, genetic algorithms, fuzzy pattern recognition, machine learning, and hardware implementations which are either relevant to the development of pattern analysis as a research area or detail novel pattern analysis applications. Papers proposing new classifier systems or their development, pattern analysis systems for real-time applications, fuzzy and temporal pattern recognition, and uncertainty management in applied pattern recognition are particularly solicited.