{"title":"基于深度神经网络的图像字幕编解码器框架实现","authors":"Md. Mijanur Rahman, A. Uzzaman, S. Sami","doi":"10.1109/SPICSCON54707.2021.9885414","DOIUrl":null,"url":null,"abstract":"This study is concerned with the development of a deep neural network-based framework, including a “convolutional neural network (CNN)” encoder and a “Long Short-Term Memory (LSTM)” decoder in an automatic image captioning application. The proposed model percepts information points in a picture and their relationship to one another in the viewpoint. Firstly, a CNN encoder excels at retaining spatial information and recognizing objects in images by extracting features to produce vocabulary that describes the photos. Secondly, an LSTM network decoder is used for predicting words and creating meaningful sentences from the built keywords. Thus, in the proposed neural network-based system, the VGG-19 model is presented for defining the proposed model as an image feature extractor and sequence processor, and then the LSTM model provides a fixed-length output vector as a final prediction. A variety of images from several open-source datasets, such as Flickr 8k, Flickr 30k, and MS COCO, were explored and used for training as well as testing the proposed model. The experiment was done on Python with Keras and TensorFlow backend. It demonstrated the automatic image captioning and evaluated the performance of the proposed model using the BLEU (BiLingual Evaluation Understudy) metric.","PeriodicalId":159505,"journal":{"name":"2021 IEEE International Conference on Signal Processing, Information, Communication & Systems (SPICSCON)","volume":"347 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Implementing Deep Neural Network Based Encoder-Decoder Framework for Image Captioning\",\"authors\":\"Md. Mijanur Rahman, A. Uzzaman, S. 
Sami\",\"doi\":\"10.1109/SPICSCON54707.2021.9885414\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This study is concerned with the development of a deep neural network-based framework, including a “convolutional neural network (CNN)” encoder and a “Long Short-Term Memory (LSTM)” decoder in an automatic image captioning application. The proposed model percepts information points in a picture and their relationship to one another in the viewpoint. Firstly, a CNN encoder excels at retaining spatial information and recognizing objects in images by extracting features to produce vocabulary that describes the photos. Secondly, an LSTM network decoder is used for predicting words and creating meaningful sentences from the built keywords. Thus, in the proposed neural network-based system, the VGG-19 model is presented for defining the proposed model as an image feature extractor and sequence processor, and then the LSTM model provides a fixed-length output vector as a final prediction. A variety of images from several open-source datasets, such as Flickr 8k, Flickr 30k, and MS COCO, were explored and used for training as well as testing the proposed model. The experiment was done on Python with Keras and TensorFlow backend. 
It demonstrated the automatic image captioning and evaluated the performance of the proposed model using the BLEU (BiLingual Evaluation Understudy) metric.\",\"PeriodicalId\":159505,\"journal\":{\"name\":\"2021 IEEE International Conference on Signal Processing, Information, Communication & Systems (SPICSCON)\",\"volume\":\"347 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Signal Processing, Information, Communication & Systems (SPICSCON)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SPICSCON54707.2021.9885414\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Signal Processing, Information, Communication & Systems (SPICSCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPICSCON54707.2021.9885414","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Implementing Deep Neural Network Based Encoder-Decoder Framework for Image Captioning
This study concerns the development of a deep neural network-based framework, comprising a convolutional neural network (CNN) encoder and a Long Short-Term Memory (LSTM) decoder, for automatic image captioning. The proposed model perceives salient points of information in a picture and their relationships to one another within the scene. First, the CNN encoder, which excels at retaining spatial information and recognizing objects in images, extracts features that yield a vocabulary describing the photos. Second, the LSTM decoder predicts words and composes meaningful sentences from these keywords. In the proposed system, the VGG-19 model serves as the image feature extractor, and the LSTM model processes the word sequence and produces a fixed-length output vector as the final prediction. Images drawn from several open-source datasets, including Flickr 8k, Flickr 30k, and MS COCO, were used to train and test the proposed model. The experiments were implemented in Python with Keras on a TensorFlow backend; they demonstrated automatic image captioning and evaluated the model's performance using the BLEU (BiLingual Evaluation Understudy) metric.
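The abstract describes a VGG-19 encoder feeding an LSTM decoder. A minimal Keras sketch of that encoder-decoder arrangement is shown below; the layer sizes, vocabulary size, and maximum caption length are illustrative assumptions, not the authors' exact values, and the VGG-19 weights are left uninitialized here (in practice they would be the pre-trained ImageNet weights).

```python
# Sketch of a VGG-19 + LSTM captioning model in Keras (TensorFlow backend),
# following the encoder-decoder structure described in the abstract.
# VOCAB_SIZE, MAX_LEN, and the 256-unit layer widths are assumed values.
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add

VOCAB_SIZE = 5000   # assumed vocabulary size built from the training captions
MAX_LEN = 34        # assumed maximum caption length (in tokens)

# Encoder: VGG-19 with its softmax classifier removed, so the 4096-d
# activations of the last fully connected layer serve as image features.
base = VGG19(weights=None)  # 'imagenet' in practice; None keeps this sketch offline
encoder = Model(inputs=base.input, outputs=base.layers[-2].output)

# Decoder: the image feature vector and the partial caption are merged,
# and an LSTM-driven head predicts the next word over the vocabulary.
img_in = Input(shape=(4096,))
img_feat = Dense(256, activation="relu")(Dropout(0.5)(img_in))

seq_in = Input(shape=(MAX_LEN,))
embedded = Embedding(VOCAB_SIZE, 256, mask_zero=True)(seq_in)
seq_feat = LSTM(256)(Dropout(0.5)(embedded))

merged = Dense(256, activation="relu")(add([img_feat, seq_feat]))
out = Dense(VOCAB_SIZE, activation="softmax")(merged)

caption_model = Model(inputs=[img_in, seq_in], outputs=out)
caption_model.compile(loss="categorical_crossentropy", optimizer="adam")
```

At inference time, captioning would proceed word by word: the encoder is run once per image, and the decoder is called repeatedly with the growing partial caption until an end-of-sequence token is produced, yielding the fixed-length prediction vector at each step.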
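The evaluation uses the BLEU metric. A small sketch of corpus-level BLEU scoring with NLTK follows; the reference and candidate captions are toy examples, not data from the paper, and the smoothing choice is an assumption to keep short captions from scoring zero.

```python
# Sketch of BLEU evaluation for generated captions using NLTK's
# corpus_bleu. The captions below are illustrative toy examples.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Each test image has a list of tokenized reference captions...
references = [
    [["a", "dog", "runs", "on", "the", "beach"],
     ["a", "brown", "dog", "is", "running", "on", "sand"]],
]
# ...and one tokenized caption generated by the model.
candidates = [["a", "dog", "is", "running", "on", "the", "beach"]]

smooth = SmoothingFunction().method1  # avoids zero scores on short captions
bleu1 = corpus_bleu(references, candidates,
                    weights=(1.0, 0, 0, 0), smoothing_function=smooth)
bleu4 = corpus_bleu(references, candidates, smoothing_function=smooth)
```

BLEU-1 rewards unigram overlap with the references, while the default BLEU-4 also requires matching 4-grams, so BLEU-4 is the stricter score for fluency of the generated sentences.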