Hybrid User-Independent and User-Dependent Offline Signature Verification with a Two-Channel CNN

M. Yilmaz, Kagan Ozturk
{"title":"Hybrid User-Independent and User-Dependent Offline Signature Verification with a Two-Channel CNN","authors":"M. Yilmaz, Kagan Ozturk","doi":"10.1109/CVPRW.2018.00094","DOIUrl":null,"url":null,"abstract":"Signature verification task needs relevant signature representations to achieve low error rates. Many signature representations have been proposed so far. In this work we propose a hybrid user-independent/dependent offline signature verification technique with a two-channel convolutional neural network (CNN) both for verification and feature extraction. Signature pairs are input to the CNN as two channels of one image, where the first channel always represents a reference signature and the second channel represents a query signature. We decrease the image size through the network by keeping the convolution stride parameter large enough. Global average pooling is applied to decrease the dimensionality to 200 at the end of locally connected layers. We utilize the CNN as a feature extractor and report 4.13% equal error rate (EER) considering 12 reference signatures with the proposed 200-dimensional representation, compared to 3.66% of a recently proposed technique with 2048-dimensional representation using the same experimental protocol. When the two methods are combined at score level, more than 50% improvement (1.76% EER) is achieved demonstrating the complementarity of them. Sensitivity of the model to gray-level and binary images is investigated in detail. One model is trained using gray-level images and the other is trained using binary images. It is shown that the availability of gray-level information in train and test data decreases the EER e.g. from 11.86% to 4.13%.","PeriodicalId":150600,"journal":{"name":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"36","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW.2018.00094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 36

Abstract

The signature verification task needs relevant signature representations to achieve low error rates, and many such representations have been proposed so far. In this work we propose a hybrid user-independent/user-dependent offline signature verification technique with a two-channel convolutional neural network (CNN) used both for verification and for feature extraction. Signature pairs are input to the CNN as two channels of one image, where the first channel always represents a reference signature and the second channel represents a query signature. We decrease the image size through the network by keeping the convolution stride parameter large enough. Global average pooling is applied at the end of the locally connected layers to reduce the dimensionality to 200. Using the CNN as a feature extractor, we report a 4.13% equal error rate (EER) with 12 reference signatures and the proposed 200-dimensional representation, compared to 3.66% for a recently proposed technique with a 2048-dimensional representation under the same experimental protocol. When the two methods are combined at the score level, an improvement of more than 50% (1.76% EER) is achieved, demonstrating their complementarity. The sensitivity of the model to gray-level and binary images is investigated in detail: one model is trained using gray-level images and the other using binary images. It is shown that the availability of gray-level information in the training and test data decreases the EER, e.g., from 11.86% to 4.13%.
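The architecture described above can be made concrete with a short sketch. The PyTorch snippet below is a minimal, illustrative reconstruction rather than the authors' released code: the layer counts, channel widths, kernel sizes, and input resolution are assumptions, and the paper's locally connected layers are approximated here with ordinary convolutions. It only mirrors the stated ideas: a reference and a query signature stacked as the two channels of one input image, strided convolutions instead of pooling to shrink the spatial size, global average pooling down to a 200-dimensional embedding, and a small head that scores the pair for verification.

```python
# Minimal sketch of a two-channel signature verification CNN (illustrative only).
# All layer sizes are assumptions; locally connected layers are approximated
# with ordinary convolutions for brevity.
import torch
import torch.nn as nn


class TwoChannelSignatureNet(nn.Module):
    def __init__(self, embedding_dim: int = 200):
        super().__init__()
        # Strided convolutions reduce the image size instead of pooling layers.
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=7, stride=3, padding=3),   # 2 channels: reference + query
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, embedding_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Global average pooling collapses each feature map to one value,
        # yielding the 200-dimensional representation used for feature extraction.
        self.gap = nn.AdaptiveAvgPool2d(1)
        # Verification head: scores whether the pair is genuine or forged.
        self.classifier = nn.Linear(embedding_dim, 1)

    def forward(self, reference: torch.Tensor, query: torch.Tensor):
        # reference, query: (batch, 1, H, W) gray-level or binary signature images.
        pair = torch.cat([reference, query], dim=1)        # (batch, 2, H, W)
        emb = self.gap(self.features(pair)).flatten(1)     # (batch, 200)
        score = self.classifier(emb)                       # raw verification logit
        return score, emb


if __name__ == "__main__":
    net = TwoChannelSignatureNet()
    ref = torch.rand(4, 1, 155, 220)   # placeholder signature image size
    qry = torch.rand(4, 1, 155, 220)
    logits, embeddings = net(ref, qry)
    print(logits.shape, embeddings.shape)  # torch.Size([4, 1]) torch.Size([4, 200])
```

The score-level combination mentioned in the abstract would then amount to fusing this network's verification score with the score of the other method, for example via a weighted sum after normalization; the exact fusion rule used by the authors is not specified here.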