Miguel De-la-Torre, Eric Granger, P. Radtke, R. Sabourin, D. Gorodnichy
{"title":"Incremental update of biometric models in face-based video surveillance","authors":"Miguel De-la-Torre, Eric Granger, P. Radtke, R. Sabourin, D. Gorodnichy","doi":"10.1109/IJCNN.2012.6252658","DOIUrl":null,"url":null,"abstract":"Video-based face recognition of individuals involves matching facial regions captured in video sequences against the model of individuals enrolled to a face recognition system. Due to a limited control over operational conditions, classification systems applied to face matching are confronted with complex pattern recognition environments that change over time. Therefore, the facial model of an individual tends to diverge from the underlying data distribution. Although a limited amount of reference data is often collected during initial enrollment, new samples often become available over time to update and refine models. In this paper, an adaptive ensemble of classifiers is proposed to update facial models in response to new reference samples. To avoid knowledge corruption linked to incremental learning of monolithic classifiers, and maintain a high level of performance, this ensemble exploits a learn-and-combine approach. In response to new reference samples, a new 2-class Probabilistic Fuzzy ARTMAP classifier is trained and combined to previously-trained classifiers in the ROC space. Iterative Boolean Combination is employed for fusion of 2-class classifiers of each individual in the decision space. Performance is assessed in terms of AUC accuracy and resource requirements under different incremental learning scenarios with new data extracted from the Faces in Action data set. Simulation results indicate that the proposed system significantly outperforms reference classifiers and ensembles for incremental learning.","PeriodicalId":287844,"journal":{"name":"The 2012 International Joint Conference on Neural Networks (IJCNN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2012 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2012.6252658","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12
Abstract
Video-based face recognition involves matching facial regions captured in video sequences against the models of individuals enrolled in a face recognition system. Due to limited control over operational conditions, classification systems applied to face matching are confronted with complex pattern recognition environments that change over time. As a result, the facial model of an individual tends to diverge from the underlying data distribution. Although only a limited amount of reference data is typically collected during initial enrollment, new samples often become available over time to update and refine models. In this paper, an adaptive ensemble of classifiers is proposed to update facial models in response to new reference samples. To avoid the knowledge corruption linked to incremental learning of monolithic classifiers, and to maintain a high level of performance, this ensemble exploits a learn-and-combine approach. In response to new reference samples, a new 2-class Probabilistic Fuzzy ARTMAP classifier is trained and combined with previously-trained classifiers in the ROC space. Iterative Boolean Combination is employed to fuse each individual's 2-class classifiers in the decision space. Performance is assessed in terms of AUC accuracy and resource requirements under different incremental learning scenarios, with new data extracted from the Faces in Action data set. Simulation results indicate that the proposed system significantly outperforms reference classifiers and ensembles for incremental learning.
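To make the learn-and-combine idea concrete, the sketch below shows an ensemble that, for one enrolled individual, trains a new 2-class classifier each time a batch of reference samples arrives and fuses the members' decisions with a Boolean rule. This is only an illustration under stated assumptions: scikit-learn's LogisticRegression stands in for the Probabilistic Fuzzy ARTMAP classifier, a fixed OR rule over thresholded scores stands in for the full Iterative Boolean Combination in ROC space, and the feature vectors are synthetic; it is not the authors' implementation.

# Minimal sketch of a learn-and-combine ensemble update for one individual.
# Assumptions (not from the paper): LogisticRegression replaces Probabilistic
# Fuzzy ARTMAP, and OR-fusion of thresholded scores replaces Iterative
# Boolean Combination.
import numpy as np
from sklearn.linear_model import LogisticRegression


class LearnAndCombineEnsemble:
    """Pool of 2-class classifiers for one enrolled individual."""

    def __init__(self, threshold=0.5):
        self.classifiers = []        # one classifier per reference batch
        self.threshold = threshold   # operating point applied to each member

    def update(self, X_new, y_new):
        """Train a new 2-class classifier on the latest batch and keep it."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_new, y_new)        # y_new: 1 = target individual, 0 = non-target
        self.classifiers.append(clf)

    def predict(self, X):
        """OR-fusion: accept a probe if any ensemble member accepts it."""
        votes = np.zeros(len(X), dtype=bool)
        for clf in self.classifiers:
            scores = clf.predict_proba(X)[:, 1]
            votes |= scores >= self.threshold
        return votes.astype(int)


# Usage with synthetic feature vectors (hypothetical data):
rng = np.random.default_rng(0)
ensemble = LearnAndCombineEnsemble(threshold=0.5)
for _ in range(3):                                     # three enrollment updates
    X_target = rng.normal(loc=1.0, size=(20, 16))      # samples of the individual
    X_nontarget = rng.normal(loc=-1.0, size=(20, 16))  # samples of other people
    X_batch = np.vstack([X_target, X_nontarget])
    y_batch = np.array([1] * 20 + [0] * 20)
    ensemble.update(X_batch, y_batch)
print(ensemble.predict(rng.normal(loc=1.0, size=(5, 16))))

Because each update adds a classifier instead of retraining a single monolithic model, previously acquired knowledge is preserved; the trade-off, as the paper's resource analysis suggests, is that memory and prediction cost grow with the number of stored classifiers.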