Accurate iris segmentation in non-cooperative environments using fully convolutional networks

Nianfeng Liu, Haiqing Li, Man Zhang, Jing Liu, Zhenan Sun, T. Tan

2016 International Conference on Biometrics (ICB), published 2016-06-13. DOI: 10.1109/ICB.2016.7550055
Conventional iris recognition requires controlled conditions (e.g., a close acquisition distance and a stop-and-stare scheme) and a high degree of user cooperation during image acquisition. Non-cooperative acquisition environments introduce many adverse factors, such as blur, off-axis gaze, occlusions, and specular reflections, which challenge existing iris segmentation approaches. In this paper, we present two iris segmentation models, hierarchical convolutional neural networks (HCNNs) and multi-scale fully convolutional networks (MFCNs), for noisy iris images acquired at a distance and on the move. Both models locate iris pixels automatically, without handcrafted features or rules, and their features and classifiers are jointly optimized. They are end-to-end models that require no further pre- or post-processing and outperform other state-of-the-art methods. Compared with HCNNs, MFCNs accept input of arbitrary size and produce correspondingly sized output without sliding-window prediction, which makes them more efficient. MFCNs combine shallow, fine layers with deep, global layers to capture both the texture details and the global structure of iris patterns. Experimental results show that MFCNs are more robust to noise than HCNNs and improve on the current state of the art by 25.62% and 13.24% on the UBIRIS.v2 and CASIA.v4-distance databases, respectively.
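The shallow/deep fusion described in the abstract follows the general fully convolutional network idea: per-pixel class scores from a coarse, deep branch are upsampled and summed with scores from a fine, shallow branch, so the network accepts any input size and emits a same-sized segmentation map with no sliding window. The PyTorch sketch below illustrates only this mechanism; the class name MiniMFCN, the layer widths and depths, and the bilinear fusion scheme are illustrative assumptions, not the paper's exact architecture.

# Minimal sketch of multi-scale fully convolutional fusion for iris
# segmentation. Not the authors' architecture: sizes and fusion are
# assumptions chosen to keep the example short.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MiniMFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shallow branch: fine, high-resolution texture features (stride 2).
        self.shallow = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Deep branch: coarse features capturing global structure (stride 8).
        self.deep = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolutions turn each branch into per-pixel class scores
        # (2 classes: iris vs. background).
        self.score_shallow = nn.Conv2d(16, 2, 1)
        self.score_deep = nn.Conv2d(64, 2, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        s = self.shallow(x)
        d = self.deep(s)
        # FCN-style skip connection: upsample the coarse scores to the
        # shallow resolution and fuse by summation, then restore input size.
        fused = self.score_shallow(s) + F.interpolate(
            self.score_deep(d), size=s.shape[-2:],
            mode="bilinear", align_corners=False)
        return F.interpolate(fused, size=(h, w),
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    net = MiniMFCN()
    # Arbitrary input size in, correspondingly sized output out: no
    # sliding-window prediction is needed, which is the efficiency gain
    # the abstract attributes to MFCNs over HCNNs.
    for hw in [(120, 160), (300, 400)]:
        logits = net(torch.randn(1, 1, *hw))  # shape (1, 2, H, W)
        assert logits.shape[-2:] == hw

Because every layer is convolutional, the same weights apply to any image resolution; only the output size changes, which is what makes the model end-to-end for at-a-distance captures of varying size.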