Image Summarizer for the Visually Impaired Using Deep Learning
V. V. N. V. Phani Kumar, V. Phani Teja, A. R. Kumar, V. Harshavardhan, U. Sahith
2021 International Conference on System, Computation, Automation and Networking (ICSCAN), published 2021-07-30
DOI: 10.1109/ICSCAN53069.2021.9526465
Visual impairment and learning disabilities are among the leading problems facing humanity, particularly in an era in which information and people are interconnected through text messages. To address this, we provide a solution that summarizes any input image into meaningful data: we analyse the image, identify the objects in it, generate a vocabulary from them, and construct meaningful sentences accordingly. This is achieved with a multilayer Convolutional Neural Network (CNN), here ResNet50, which generates vocabulary describing the image, covering living as well as non-living things. A Long Short-Term Memory (LSTM) network then constructs meaningful sentences from the generated keywords; that is, the LSTM uses the information from the CNN to generate a description of the image. We call this system the Image Summarizer. We use the gTTS library to produce the audio output, spoken in the user's preferred language.
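The pipeline the abstract describes (ResNet50 encoder → LSTM caption decoder → gTTS speech) can be sketched at the level of tensor shapes. The sketch below is illustrative only: it uses random weights and a hypothetical six-word vocabulary rather than the authors' trained model, stands in a random 2048-dimensional vector for the ResNet50 pooled feature, and implements a single-layer LSTM cell in NumPy to show how the image feature conditions each decoding step.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 2048  # size of ResNet50's pooled feature vector
HIDDEN = 64      # toy hidden size (the paper does not state theirs)
VOCAB = ["<start>", "<end>", "a", "dog", "on", "grass"]  # hypothetical vocabulary


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class ToyLSTMDecoder:
    """Minimal LSTM cell that decodes a caption from a CNN feature vector."""

    def __init__(self, feat_dim, hidden, vocab_size):
        in_dim = feat_dim + vocab_size  # input = image feature + one-hot word
        # One stacked weight matrix for the input, forget, cell, output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden, in_dim + hidden))
        self.b = np.zeros(4 * hidden)
        self.W_out = rng.normal(0.0, 0.1, (vocab_size, hidden))
        self.hidden = hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

    def decode(self, feature, max_len=5):
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        word = VOCAB.index("<start>")
        caption = []
        for _ in range(max_len):
            one_hot = np.zeros(len(VOCAB))
            one_hot[word] = 1.0
            # The image feature is fed in at every step alongside the last word.
            h, c = self.step(np.concatenate([feature, one_hot]), h, c)
            word = int(np.argmax(self.W_out @ h))  # greedy decoding
            if VOCAB[word] == "<end>":
                break
            caption.append(VOCAB[word])
        return caption


feature = rng.normal(size=FEAT_DIM)  # stands in for a ResNet50 feature
decoder = ToyLSTMDecoder(FEAT_DIM, HIDDEN, len(VOCAB))
caption = decoder.decode(feature)    # untrained, so the words are arbitrary
```

In the real system the feature would come from a pretrained ResNet50 and the weights from training on captioned images; the final caption string would then be spoken with gTTS, whose public API takes the text and a language code (e.g. `gTTS(text=" ".join(caption), lang="en").save("caption.mp3")`).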