{"title":"一种新的感知编码的不明显编码失真模型","authors":"Shengyang Xu, Mei Yu, G. Jiang, Shuqing Fang","doi":"10.1109/DMIAF.2016.7574928","DOIUrl":null,"url":null,"abstract":"With the aim of improving the efficiency and perceptual quality in video coding, this paper proposes a novel just-noticeable coding distortion (JNCD) model that considers human visual perception redundancy and unreasonable factors of existing just-noticeable distortion (JND) models in the coding process. First, we design a psycho-physical experiment to analyze the just-noticeable gradient difference (JNGD) and build a JNGD model to filter the gradient components that are imperceptible to human eyes. We use total variation (TV) to decompose an image into a structural image and a textural image, and calculate their gradients. Then, we use JNGD to filter out imperceptible gradient components in each gradient image. Second, human visual sensitivity to different gradient magnitudes is analyzed to model the relationship between the human visual perceptible gradient magnitude and JNCD. Finally, considering the perceived difference of human eye perception in edge, flat, and textural regions of an image, we adjust the JNCD value in each region and establish a JNCD model of the whole image. To verify the efficiency of the proposed JNCD model, we compare it with the classic JND model and test it on the high-efficiency video coding (HEVC) platform. 
The proposed model has advantages in subjective visual effects, meaning that it is helpful in analysis of human visual perception redundancy and the relevant perceptual video coding.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"New just noticeable coding distortion model for perceptual coding\",\"authors\":\"Shengyang Xu, Mei Yu, G. Jiang, Shuqing Fang\",\"doi\":\"10.1109/DMIAF.2016.7574928\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the aim of improving the efficiency and perceptual quality in video coding, this paper proposes a novel just-noticeable coding distortion (JNCD) model that considers human visual perception redundancy and unreasonable factors of existing just-noticeable distortion (JND) models in the coding process. First, we design a psycho-physical experiment to analyze the just-noticeable gradient difference (JNGD) and build a JNGD model to filter the gradient components that are imperceptible to human eyes. We use total variation (TV) to decompose an image into a structural image and a textural image, and calculate their gradients. Then, we use JNGD to filter out imperceptible gradient components in each gradient image. Second, human visual sensitivity to different gradient magnitudes is analyzed to model the relationship between the human visual perceptible gradient magnitude and JNCD. Finally, considering the perceived difference of human eye perception in edge, flat, and textural regions of an image, we adjust the JNCD value in each region and establish a JNCD model of the whole image. To verify the efficiency of the proposed JNCD model, we compare it with the classic JND model and test it on the high-efficiency video coding (HEVC) platform. 
The proposed model has advantages in subjective visual effects, meaning that it is helpful in analysis of human visual perception redundancy and the relevant perceptual video coding.\",\"PeriodicalId\":404025,\"journal\":{\"name\":\"2016 Digital Media Industry & Academic Forum (DMIAF)\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 Digital Media Industry & Academic Forum (DMIAF)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DMIAF.2016.7574928\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 Digital Media Industry & Academic Forum (DMIAF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DMIAF.2016.7574928","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
New just noticeable coding distortion model for perceptual coding
With the aim of improving the efficiency and perceptual quality of video coding, this paper proposes a novel just-noticeable coding distortion (JNCD) model that accounts for human visual perception redundancy and addresses shortcomings of existing just-noticeable distortion (JND) models in the coding process. First, we design a psychophysical experiment to analyze the just-noticeable gradient difference (JNGD) and build a JNGD model that filters out gradient components imperceptible to the human eye. We use total variation (TV) to decompose an image into a structural image and a textural image, and calculate their gradients. Then, we use the JNGD model to filter out imperceptible gradient components in each gradient image. Second, human visual sensitivity to different gradient magnitudes is analyzed to model the relationship between the perceptible gradient magnitude and JNCD. Finally, considering the differences in human visual sensitivity across edge, flat, and textured regions of an image, we adjust the JNCD value in each region and establish a JNCD model for the whole image. To verify the efficiency of the proposed JNCD model, we compare it with a classic JND model and test it on the High Efficiency Video Coding (HEVC) platform. The proposed model shows advantages in subjective visual quality, which makes it useful for analyzing human visual perception redundancy and for the related perceptual video coding.
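The JNGD filtering step described in the abstract can be illustrated with a minimal sketch: compute an image's gradient field and suppress components whose magnitude falls below a visibility threshold. Note this is an illustrative approximation only; the function name `jngd_filter` and the threshold value are placeholders, not the paper's experimentally measured JNGD model or its TV-based structure/texture decomposition.

```python
import numpy as np

def jngd_filter(image, jngd_threshold=4.0):
    """Suppress gradient components below a visibility threshold.

    Illustrative sketch of the JNGD idea: gradient components whose
    magnitude is below the just-noticeable gradient difference are
    treated as imperceptible and zeroed out. The threshold here is a
    placeholder, not the paper's measured value.
    """
    gy, gx = np.gradient(image.astype(np.float64))  # row- and column-direction gradients
    magnitude = np.hypot(gx, gy)                    # per-pixel gradient magnitude
    visible = magnitude >= jngd_threshold           # mask of perceptible components
    return gx * visible, gy * visible, magnitude

# Toy example: a weak luminance ramp (imperceptible gradient)
# superimposed on a strong vertical edge (clearly perceptible).
img = np.zeros((8, 8))
img[:, 4:] = 100.0                     # strong edge at column 4
img += np.arange(8)[:, None] * 0.5     # weak ramp down the rows
gx, gy, mag = jngd_filter(img, jngd_threshold=4.0)
# The weak ramp's gradients are filtered out; the edge's survive.
```

The same mask-and-suppress pattern would apply separately to the structural and textural gradient images produced by the TV decomposition.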