{"title":"学习压缩使用深度自动编码器","authors":"Qing Li, Yang Chen","doi":"10.1109/ALLERTON.2019.8919866","DOIUrl":null,"url":null,"abstract":"A novel deep learning framework for lossy compression is proposed. The framework is based on Deep AutoEncoder (DAE) stacked of Restricted Boltzmann Machines (RBMs), which form Deep Belief Networks (DBNs). The proposed DAE compression scheme is one variant of the known fixed-distortion scheme, where the distortion is fixed and the compression rate is left to optimize. The fixed distortion is achieved by the DBN Blahut-Arimoto algorithm to approximate the Nth-order rate distortion approximating posterior. The trained DBNs are then unrolled to create a DAE, which produces an encoder and a reproducer. The unrolled DAE is fine-tuned with back-propagation through the whole autoencoder to minimize reconstruction errors.","PeriodicalId":120479,"journal":{"name":"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Learning to Compress Using Deep AutoEncoder\",\"authors\":\"Qing Li, Yang Chen\",\"doi\":\"10.1109/ALLERTON.2019.8919866\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A novel deep learning framework for lossy compression is proposed. The framework is based on Deep AutoEncoder (DAE) stacked of Restricted Boltzmann Machines (RBMs), which form Deep Belief Networks (DBNs). The proposed DAE compression scheme is one variant of the known fixed-distortion scheme, where the distortion is fixed and the compression rate is left to optimize. The fixed distortion is achieved by the DBN Blahut-Arimoto algorithm to approximate the Nth-order rate distortion approximating posterior. The trained DBNs are then unrolled to create a DAE, which produces an encoder and a reproducer. The unrolled DAE is fine-tuned with back-propagation through the whole autoencoder to minimize reconstruction errors.\",\"PeriodicalId\":120479,\"journal\":{\"name\":\"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ALLERTON.2019.8919866\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2019.8919866","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A novel deep learning framework for lossy compression is proposed. The framework is based on a Deep AutoEncoder (DAE) built by stacking Restricted Boltzmann Machines (RBMs), which together form a Deep Belief Network (DBN). The proposed DAE compression scheme is a variant of the fixed-distortion scheme, in which the distortion is held fixed and the compression rate is left to be optimized. The fixed distortion is enforced by a DBN-based Blahut-Arimoto algorithm that approximates the Nth-order rate-distortion-achieving posterior. The trained DBNs are then unrolled to create a DAE, which yields an encoder and a reproducer. The unrolled DAE is fine-tuned with back-propagation through the whole autoencoder to minimize reconstruction error.
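The abstract describes the generic stack-unroll-fine-tune pipeline; below is a minimal sketch of that pipeline only, not the authors' implementation. It assumes Bernoulli-Bernoulli RBMs pretrained with one-step contrastive divergence (CD-1), tied-transpose decoder weights, and toy layer sizes; the paper's DBN Blahut-Arimoto step for fixing the distortion is not implemented here.

```python
# Sketch (assumed, not from the paper): stack RBMs into a DBN, unroll into a
# deep autoencoder, then fine-tune end-to-end on reconstruction error.
import torch
import torch.nn as nn


class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = torch.randn(n_visible, n_hidden) * 0.01
        self.b_v = torch.zeros(n_visible)
        self.b_h = torch.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return torch.sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return torch.sigmoid(h @ self.W.t() + self.b_v)

    def cd1_step(self, v0):
        # Positive phase
        h0 = self.hidden_probs(v0)
        h0_sample = torch.bernoulli(h0)
        # Negative phase: one Gibbs step down and back up
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Gradient-style updates from the positive/negative statistics
        batch = v0.shape[0]
        self.W += self.lr * (v0.t() @ h0 - v1.t() @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(0)
        self.b_h += self.lr * (h0 - h1).mean(0)


def pretrain_dbn(data, layer_sizes, epochs=5):
    """Greedy layer-wise pretraining: each RBM trains on the activations of the layer below."""
    rbms, x = [], data
    for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
        rbm = RBM(n_vis, n_hid)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # feed activations upward to the next RBM
    return rbms


def unroll_to_dae(rbms):
    """Unroll the DBN: encoder reuses the RBM weights, decoder uses their transposes."""
    enc, dec = [], []
    for rbm in rbms:
        enc += [nn.Linear(*rbm.W.shape), nn.Sigmoid()]
        enc[-2].weight.data = rbm.W.t().clone()
        enc[-2].bias.data = rbm.b_h.clone()
    for rbm in reversed(rbms):
        dec += [nn.Linear(rbm.W.shape[1], rbm.W.shape[0]), nn.Sigmoid()]
        dec[-2].weight.data = rbm.W.clone()
        dec[-2].bias.data = rbm.b_v.clone()
    return nn.Sequential(*enc), nn.Sequential(*dec)


if __name__ == "__main__":
    data = torch.bernoulli(torch.rand(256, 64))          # toy binary source blocks
    rbms = pretrain_dbn(data, layer_sizes=[64, 32, 8])   # 8-unit code layer (illustrative)
    encoder, decoder = unroll_to_dae(rbms)

    # Fine-tune the whole unrolled autoencoder with back-propagation
    # to minimize reconstruction error.
    dae = nn.Sequential(encoder, decoder)
    opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
    for _ in range(20):
        recon = dae(data)
        loss = nn.functional.binary_cross_entropy(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("reconstruction loss:", loss.item())
```

In this sketch the encoder output plays the role of the compressed representation and the decoder that of the reproducer; rate control (e.g., via the fixed-distortion Blahut-Arimoto posterior) would replace the plain reconstruction objective in the actual scheme.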