Junjun Jiang, Yi Yu, Suhua Tang, Jiayi Ma, Guo-Jun Qi, Akiko Aizawa
2017 IEEE International Conference on Multimedia and Expo (ICME), July 2017
DOI: 10.1109/ICME.2017.8019459
Citations: 18
Context-patch based face hallucination via thresholding locality-constrained representation and reproducing learning
Face hallucination, the problem of predicting a High-Resolution (HR) face image from an observed Low-Resolution (LR) one, is challenging. Most state-of-the-art methods employ a local face structure prior, estimating the optimal representation for each patch from training patches at the same position, and achieve good reconstruction performance. However, they do not take into account the contextual information of an image patch, which is very useful for expressing the human face. Departing from these position-patch based methods, in this paper we leverage contextual information and develop a robust and efficient context-patch face hallucination algorithm, called Thresholding Locality-constrained Representation with Reproducing learning (TLcR-RL). In TLcR-RL, we use a thresholding strategy to enhance both the stability of the patch representation and the reconstruction accuracy. Additionally, we develop reproducing learning, which iteratively refines the estimated result by adding the estimated HR face to the training set. Experiments demonstrate that our proposed framework substantially outperforms state-of-the-art methods, including a recently proposed deep-learning-based method.
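The core idea of a thresholding locality-constrained representation can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, the threshold parameter `tau` (fraction of nearest training patches kept), and the locality penalty `lam` are our own assumptions. Each LR input patch is represented as a weighted combination of its nearest training LR patches (an LLC-style closed-form solution with a sum-to-one constraint), and the same weights are then applied to the paired HR patches.

```python
import numpy as np

def tlcr_reconstruct(lr_patch, lr_dict, hr_dict, tau=0.5, lam=1e-4):
    """Reconstruct an HR patch from an LR patch (illustrative sketch).

    lr_dict: (N, d_lr) flattened training LR patches
    hr_dict: (N, d_hr) paired training HR patches
    """
    # Distances from the input patch to every training LR patch.
    d = np.linalg.norm(lr_dict - lr_patch, axis=1)
    # Thresholding: keep only the nearest fraction `tau` of the dictionary,
    # discarding dissimilar patches that would destabilize the representation.
    k = max(1, int(tau * len(d)))
    idx = np.argsort(d)[:k]
    D, dk = lr_dict[idx], d[idx]
    # Locality-constrained least squares with a sum-to-one constraint:
    # minimize ||lr_patch - w @ D||^2 + lam * ||dk * w||^2, s.t. sum(w) = 1.
    z = D - lr_patch                              # shift patches to the origin
    C = z @ z.T + lam * np.diag(dk ** 2)          # local covariance + locality penalty
    C += 1e-8 * np.eye(k)                         # tiny ridge for numerical safety
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                  # enforce sum-to-one
    # Map the same weights onto the paired HR patches.
    return w @ hr_dict[idx]
```

The reproducing-learning step would then wrap this in a loop: hallucinate the HR face, append it (with its downsampled LR counterpart) to the training set, and re-run the reconstruction until the estimate stabilizes.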