Deep Autoencoder-Based Image Compression using Multi-Layer Perceptrons

G. Bandara, R. Siyambalapitiya
International Journal of Soft Computing and Engineering, 2020-05-05. DOI: 10.35940/ijsce.e3357.039620
Citations: 3

Abstract

Artificial neural networks are among the most widely used approaches for solving complex problems in machine learning and deep learning. This research proposes a deep autoencoder-based multi-layer feed-forward neural network for image compression. The network splits a large image into small blocks, and each block is normalized as a preprocessing step. Because the network is an autoencoder, each normalized block of pixels serves as both the input and the target output. The network was trained with the backpropagation algorithm for various block sizes and saving percentages across several kinds of images. The output of the middle hidden layer is the compressed representation of each block. The model was implemented in Python using Keras with the TensorFlow backend.
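The pipeline the abstract describes can be sketched in plain NumPy: split the image into blocks, normalize pixel values, then train a network whose input doubles as its target, so the bottleneck activations become the compressed representation. This is only an illustrative sketch, not the authors' implementation: the 8×8 block size and 16-unit bottleneck (25% of the original 64 values) are assumed for the example, and a single linear hidden layer trained by gradient descent stands in for the paper's deeper Keras model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical settings: the paper evaluates several block sizes and
# saving percentages; 8x8 blocks with a 16-unit bottleneck are chosen
# here only for illustration.
BLOCK = 8
HIDDEN = 16

def split_into_blocks(image, block=BLOCK):
    """Split a grayscale image into non-overlapping block x block tiles,
    each flattened to a vector of length block*block."""
    h, w = image.shape
    h, w = h - h % block, w - w % block  # drop any ragged edge
    return (image[:h, :w]
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2)
            .reshape(-1, block * block))

def normalize(blocks):
    """Scale 8-bit pixel values into [0, 1] as the preprocessing step."""
    return blocks.astype(np.float64) / 255.0

def train_autoencoder(x, hidden=HIDDEN, lr=0.01, epochs=300):
    """Train a single-hidden-layer linear autoencoder by gradient
    descent (backpropagation); each block is both input and target.
    Returns the encoder/decoder weights and per-epoch losses."""
    n, d = x.shape
    w_enc = rng.normal(0.0, 0.05, (d, hidden))
    w_dec = rng.normal(0.0, 0.05, (hidden, d))
    losses = []
    for _ in range(epochs):
        code = x @ w_enc            # bottleneck: the compressed blocks
        recon = code @ w_dec
        err = recon - x
        losses.append(float(np.mean(err ** 2)))
        grad_dec = code.T @ err / n
        grad_enc = x.T @ (err @ w_dec.T) / n
        w_dec -= lr * grad_dec
        w_enc -= lr * grad_enc
    return w_enc, w_dec, losses
```

After training, compressing a block is a single matrix product, `code = x @ w_enc`, mirroring the paper's use of the middle hidden layer's output as the compressed representation; `code @ w_dec` reconstructs the block.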