Title: Context-based video coding
Authors: Richard George Vigars, A. Calway, D. Bull
Venue: 2013 IEEE International Conference on Image Processing (ICIP)
Published: 15 September 2013
DOI: 10.1109/ICIP.2013.6738402
Citations: 8

Abstract: We present a video CODEC framework that exploits extrinsic scene knowledge to condition a perspective motion model. An approximate textural-geometric model of the scene is prepared prior to coding. During coding, the locations of planar surfaces in the scene are tracked, facilitating the computation of accurate perspective motion warp parameters. These algorithms are integrated with H.264 into a hybrid CODEC framework, achieving bit-rate savings of up to 48% at equivalent visual quality.
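The perspective motion model the abstract refers to is commonly expressed as a 3x3 homography that maps pixel coordinates on a tracked planar surface from the reference frame to the current frame. The sketch below illustrates that mapping in NumPy; the matrix values and function name are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Illustrative homography H (8 free parameters; bottom-right fixed at 1).
# A planar surface's pixels in the reference frame map to the current
# frame via p' ~ H p in homogeneous coordinates.
H = np.array([
    [1.02, 0.01,  3.0],   # mild zoom/shear plus horizontal shift
    [0.00, 0.99, -2.0],   # vertical scale plus vertical shift
    [1e-4, 0.00,  1.0],   # perspective (foreshortening) terms
])

def warp_points(H, pts):
    """Map an Nx2 array of pixel coordinates through homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T                              # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]             # divide out w

# Warp the corners of a 640x480 reference block.
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
print(warp_points(H, corners))
```

Because the perspective terms in the third row are nonzero, parallel edges of the block converge slightly after warping, which a purely affine (6-parameter) motion model cannot represent.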