The Unconstrained Ear Recognition Challenge 2019

Ž. Emeršič, S. V. A. Kumar, B. Harish, Weronika Gutfeter, J. Khiarak, A. Pacut, E. Hansley, Maurício Pamplona Segundo, Sudeep Sarkar, Hyeon-Nam Park, G. Nam, Ig-Jae Kim, S. G. Sangodkar, Umit Kacar, M. Kirci, Li Yuan, Jishou Yuan, Haonan Zhao, Fei Lu, Junying Mao, Xiaoshuang Zhang, Dogucan Yaman, Fevziye Irem Eyiokur, Kadir Bulut Özler, H. K. Ekenel, D. P. Chowdhury, Sambit Bakshi, B. Majhi, P. Peer, V. Štruc
{"title":"The Unconstrained Ear Recognition Challenge 2019","authors":"Ž. Emeršič, S. V. A. Kumar, B. Harish, Weronika Gutfeter, J. Khiarak, A. Pacut, E. Hansley, Maurício Pamplona Segundo, Sudeep Sarkar, Hyeon-Nam Park, G. Nam, Ig-Jae Kim, S. G. Sangodkar, Umit Kacar, M. Kirci, Li Yuan, Jishou Yuan, Haonan Zhao, Fei Lu, Junying Mao, Xiaoshuang Zhang, Dogucan Yaman, Fevziye Irem Eyiokur, Kadir Bulut Özler, H. K. Ekenel, D. P. Chowdhury, Sambit Bakshi, B. Majhi, P. Peer, V. Štruc","doi":"10.1109/ICB45273.2019.8987337","DOIUrl":null,"url":null,"abstract":"This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze performance of the technology from various viewpoints, such as generalization abilities to unseen data characteristics, sensitivity to rotations, occlusions and image resolution and performance bias on sub-groups of subjects, selected based on demographic criteria, i.e. gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on ensemble based methods combining either representations from multiple deep models or hand-crafted with learned image descriptors. Our analysis shows that methods incorporating deep learning models clearly outperform techniques relying solely on hand-crafted descriptors, even though both groups of techniques exhibit similar behavior when it comes to robustness to various covariates, such presence of occlusions, changes in (head) pose, or variability in image resolution. 
The results of the challenge also show that there has been considerable progress since the first UERC in 2017, but that there is still ample room for further research in this area.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Biometrics (ICB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICB45273.2019.8987337","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 20

Abstract

This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze the performance of the technology from various viewpoints, such as generalization ability to unseen data characteristics; sensitivity to rotations, occlusions, and image resolution; and performance bias on sub-groups of subjects selected based on demographic criteria, i.e., gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches, ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on ensemble-based methods combining either representations from multiple deep models or hand-crafted descriptors with learned ones. Our analysis shows that methods incorporating deep learning models clearly outperform techniques relying solely on hand-crafted descriptors, even though both groups of techniques exhibit similar behavior when it comes to robustness to various covariates, such as the presence of occlusions, changes in (head) pose, or variability in image resolution. The results of the challenge also show that there has been considerable progress since the first UERC in 2017, but that there is still ample room for further research in this area.
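The ensemble-based methods mentioned above combine matching scores or representations from multiple descriptors. As a minimal illustration of the general idea (not the specific pipeline of any UERC participant), the sketch below fuses a learned (deep) embedding with a hand-crafted descriptor at the score level using weighted cosine similarity; all descriptor dimensions and the fusion weight are hypothetical placeholders.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two 1-D feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(deep_a, deep_b, hand_a, hand_b, w=0.7):
    """Score-level fusion of a deep embedding and a hand-crafted
    descriptor; w weights the deep-model score (value is illustrative)."""
    s_deep = cosine_similarity(deep_a, deep_b)
    s_hand = cosine_similarity(hand_a, hand_b)
    return w * s_deep + (1.0 - w) * s_hand

# Toy probe/gallery comparison with random stand-ins for real features
# (e.g., a 128-D CNN embedding and a 59-D LBP-style histogram).
rng = np.random.default_rng(0)
probe_deep, gallery_deep = rng.normal(size=128), rng.normal(size=128)
probe_hand, gallery_hand = rng.normal(size=59), rng.normal(size=59)
score = fused_score(probe_deep, gallery_deep, probe_hand, gallery_hand)
```

In practice, a recognition system would rank all gallery identities by this fused score and report metrics such as rank-1 accuracy over the probe set.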