{"title":"UMCTF框架中的内容自适应过滤","authors":"D. Turaga, M. Schaar","doi":"10.1109/ICASSP.2003.1199551","DOIUrl":null,"url":null,"abstract":"Unconstrained motion compensated temporal filtering (UMCTF) is a very general and flexible framework for temporal filtering. It allows the selection of many different filters as well as decomposition structures to allow easy adaptation to video content, bandwidth variations, complexity requirements, and in conjunction with embedded coding can provide spatio-temporal-SNR scalability. In this paper we demonstrate the content-adaptive filter selection provided within the UMCTF framework. We show improvements in coding efficiency as well as in decoded visual quality using content-adaptive filters, at different granularities.","PeriodicalId":104473,"journal":{"name":"2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03).","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":"{\"title\":\"Content-adaptive filtering in the UMCTF framework\",\"authors\":\"D. Turaga, M. Schaar\",\"doi\":\"10.1109/ICASSP.2003.1199551\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Unconstrained motion compensated temporal filtering (UMCTF) is a very general and flexible framework for temporal filtering. It allows the selection of many different filters as well as decomposition structures to allow easy adaptation to video content, bandwidth variations, complexity requirements, and in conjunction with embedded coding can provide spatio-temporal-SNR scalability. In this paper we demonstrate the content-adaptive filter selection provided within the UMCTF framework. We show improvements in coding efficiency as well as in decoded visual quality using content-adaptive filters, at different granularities.\",\"PeriodicalId\":104473,\"journal\":{\"name\":\"2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03).\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-07-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"20\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03).\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICASSP.2003.1199551\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03).","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSP.2003.1199551","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unconstrained motion compensated temporal filtering (UMCTF) is a very general and flexible framework for temporal filtering. It supports the selection of many different filters and decomposition structures, enabling easy adaptation to video content, bandwidth variations, and complexity requirements, and, in conjunction with embedded coding, it can provide spatio-temporal-SNR scalability. In this paper we demonstrate the content-adaptive filter selection provided within the UMCTF framework. Using content-adaptive filters at different granularities, we show improvements in coding efficiency as well as in decoded visual quality.
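The abstract does not spell out the filter-selection rule, so the sketch below is only a rough illustration of what content-adaptive temporal filtering of this kind can look like: for each block of the current frame, one generic criterion is to pick whichever temporal predictor (forward-only, Haar-like, or bidirectional average, 5/3-like) yields the smaller residual energy. All names, the block size, and the residual-energy criterion are assumptions for illustration, and motion compensation is omitted to keep the example short; this is not the paper's actual UMCTF method.

```python
# Illustrative sketch only: the paper's actual UMCTF filter-selection rule is not
# given in the abstract. Each block chooses between a Haar-like (forward-only) and
# a 5/3-like (bidirectional average) temporal predictor based on residual energy.
# Motion compensation is omitted (blocks are co-located) for brevity.

import numpy as np

BLOCK = 16  # hypothetical block size


def temporal_highpass(prev, curr, next_, block=BLOCK):
    """One level of prediction-only temporal filtering with per-block filter choice.

    Returns the high-pass (residual) frame and a map of the chosen filter per block
    (0 = forward/Haar-like, 1 = bidirectional average).
    """
    h = np.empty_like(curr, dtype=np.float64)
    rows = curr.shape[0] // block
    cols = curr.shape[1] // block
    choice = np.zeros((rows, cols), dtype=np.uint8)

    for by in range(rows):
        for bx in range(cols):
            ys, xs = by * block, bx * block
            c = curr[ys:ys + block, xs:xs + block].astype(np.float64)
            p = prev[ys:ys + block, xs:xs + block].astype(np.float64)
            n = next_[ys:ys + block, xs:xs + block].astype(np.float64)

            res_fwd = c - p             # Haar-like: predict from the previous frame only
            res_bi = c - 0.5 * (p + n)  # 5/3-like: predict from the average of both neighbours

            # Content-adaptive choice: keep the predictor with lower residual energy.
            if np.sum(res_bi ** 2) < np.sum(res_fwd ** 2):
                h[ys:ys + block, xs:xs + block] = res_bi
                choice[by, bx] = 1
            else:
                h[ys:ys + block, xs:xs + block] = res_fwd

    return h, choice


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.integers(0, 256, size=(3, 64, 64))
    hp, filt_map = temporal_highpass(frames[0], frames[1], frames[2])
    print("fraction of blocks choosing the bidirectional filter:", filt_map.mean())
```

In a full MCTF coder the chosen filter indices would be signalled to the decoder along with motion vectors, and the low-pass frames would be updated accordingly; the per-block choice here stands in for the finer or coarser selection granularities the paper evaluates.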