Gao Xinwei, Deng Haibo, Guo Yaoyao, Gu Chen-chen, S. Yongfang, Gao Anlin, Guo Licai, Mao Xunan, Lv Jing
{"title":"基于约束时空模型的低照度实时移动通信视频增强","authors":"Gao Xinwei, Deng Haibo, Guo Yaoyao, Gu Chen-chen, S. Yongfang, Gao Anlin, Guo Licai, Mao Xunan, Lv Jing","doi":"10.1109/ISCAS.2017.8050384","DOIUrl":null,"url":null,"abstract":"Video quality in real-time mobile communication is often influenced by the ambient light. In the low-lighting condition, videos in real-time mobile communication are usually dark and lack details. To solve the above problem, we propose a fast low-lighting video enhancement algorithm by exploiting the constrained spatial-temporal model, in which three terms including luminance accuracy, contrast accuracy and temporal consistency are all considered. For the first term, the average luminance level of each frame is taken into account of a set of the off-line trained luminance enhancement functions. For the second term, the adjustment of the luminance range in the frame based statistical histogram is obtained for the adaptive contrast enhancement function. For the third term, the temporal consistency is utilized by considering the current enhancement function and these of the previous frames to avoid the flicker between these adjacent frames. Furthermore, the proposed method has been already implemented on Wechat, a social application that connects 800 million people with chat, calls and more. Extensive experiments demonstrate that the proposed method achieves better enhancement results than many current state-of-the-art methods.","PeriodicalId":91083,"journal":{"name":"IEEE International Symposium on Circuits and Systems proceedings. IEEE International Symposium on Circuits and Systems","volume":"1 1","pages":"1-4"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Low-lighting video enhancement using constrained spatial-temporal model for real-time mobile communication\",\"authors\":\"Gao Xinwei, Deng Haibo, Guo Yaoyao, Gu Chen-chen, S. Yongfang, Gao Anlin, Guo Licai, Mao Xunan, Lv Jing\",\"doi\":\"10.1109/ISCAS.2017.8050384\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Video quality in real-time mobile communication is often influenced by the ambient light. In the low-lighting condition, videos in real-time mobile communication are usually dark and lack details. To solve the above problem, we propose a fast low-lighting video enhancement algorithm by exploiting the constrained spatial-temporal model, in which three terms including luminance accuracy, contrast accuracy and temporal consistency are all considered. For the first term, the average luminance level of each frame is taken into account of a set of the off-line trained luminance enhancement functions. For the second term, the adjustment of the luminance range in the frame based statistical histogram is obtained for the adaptive contrast enhancement function. For the third term, the temporal consistency is utilized by considering the current enhancement function and these of the previous frames to avoid the flicker between these adjacent frames. Furthermore, the proposed method has been already implemented on Wechat, a social application that connects 800 million people with chat, calls and more. Extensive experiments demonstrate that the proposed method achieves better enhancement results than many current state-of-the-art methods.\",\"PeriodicalId\":91083,\"journal\":{\"name\":\"IEEE International Symposium on Circuits and Systems proceedings. 
IEEE International Symposium on Circuits and Systems\",\"volume\":\"1 1\",\"pages\":\"1-4\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE International Symposium on Circuits and Systems proceedings. IEEE International Symposium on Circuits and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCAS.2017.8050384\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE International Symposium on Circuits and Systems proceedings. IEEE International Symposium on Circuits and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCAS.2017.8050384","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Low-lighting video enhancement using constrained spatial-temporal model for real-time mobile communication
Video quality in real-time mobile communication is often affected by ambient light. Under low-lighting conditions, videos in real-time mobile communication are usually dark and lack detail. To address this problem, we propose a fast low-lighting video enhancement algorithm that exploits a constrained spatial-temporal model considering three terms: luminance accuracy, contrast accuracy, and temporal consistency. For the first term, the average luminance level of each frame is used to select from a set of off-line trained luminance enhancement functions. For the second term, the luminance range is adjusted according to the frame-based statistical histogram to obtain an adaptive contrast enhancement function. For the third term, temporal consistency is enforced by combining the current enhancement function with those of the previous frames, avoiding flicker between adjacent frames. Furthermore, the proposed method has already been deployed in WeChat, a social application that connects 800 million people through chat, calls, and more. Extensive experiments demonstrate that the proposed method achieves better enhancement results than many current state-of-the-art methods.
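To make the three terms of the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how a per-frame enhancement curve could be selected by average luminance, adapted to the frame histogram, and temporally smoothed to suppress flicker. The curve bank (LUT_BANK), the gamma values, the luminance thresholds, the blending weight, and the smoothing factor alpha are all hypothetical stand-ins for the paper's off-line trained functions and constraints.

```python
import numpy as np

def gamma_lut(gamma, size=256):
    """Build a gamma-curve lookup table as a stand-in for one
    off-line trained luminance enhancement function (hypothetical)."""
    x = np.arange(size, dtype=np.float32) / (size - 1)
    return np.clip(x ** gamma * (size - 1), 0, size - 1).astype(np.float32)

# Hypothetical bank of pre-trained enhancement curves; darker frames
# are mapped to stronger curves.
LUT_BANK = [gamma_lut(g) for g in (0.9, 0.7, 0.5, 0.4)]

def select_lut(frame_y):
    """Pick a curve from the bank based on the frame's average
    luminance (illustrating the 'luminance accuracy' term)."""
    mean_y = frame_y.mean()
    if mean_y > 120:
        return LUT_BANK[0]
    if mean_y > 80:
        return LUT_BANK[1]
    if mean_y > 50:
        return LUT_BANK[2]
    return LUT_BANK[3]

def stretch_contrast(lut, frame_y, low_pct=1, high_pct=99):
    """Adapt the curve to the frame's statistical histogram by
    stretching the occupied luminance range (the 'contrast accuracy' term)."""
    lo, hi = np.percentile(frame_y, [low_pct, high_pct])
    x = np.arange(256, dtype=np.float32)
    stretched = np.clip((x - lo) / max(hi - lo, 1.0) * 255.0, 0, 255)
    # Blend the stretched mapping with the luminance curve (weight is a guess).
    return 0.5 * lut + 0.5 * stretched

def enhance_stream(frames_y, alpha=0.8):
    """Enhance a sequence of luminance (Y) frames, smoothing the
    per-frame curve over time (the 'temporal consistency' term)."""
    prev_lut = None
    for frame_y in frames_y:
        lut = stretch_contrast(select_lut(frame_y), frame_y)
        if prev_lut is not None:
            # Exponential smoothing with the previous frames' curve
            # suppresses flicker between adjacent frames.
            lut = alpha * prev_lut + (1.0 - alpha) * lut
        prev_lut = lut
        yield lut[frame_y.astype(np.uint8)].astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Ten synthetic dark 64x64 Y-channel frames.
    dark = [rng.integers(0, 60, (64, 64)).astype(np.uint8) for _ in range(10)]
    for out in enhance_stream(dark):
        print(out.mean())
```

In this sketch, only the lookup table is recomputed per frame, so the per-pixel cost is a single table lookup; this is one way a constrained spatial-temporal model could remain fast enough for real-time mobile communication, though the actual trained functions and constraints in the paper may differ.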