{"title":"Akshi:一个使用机器学习的视觉障碍辅助系统","authors":"Aakash Jain, Ritik Verma, Gurtej Singh Khokhar, Madhulika Bhadauria","doi":"10.1109/AIST55798.2022.10064996","DOIUrl":null,"url":null,"abstract":"This work focuses on is emotion recognition. Emotion shows crucial data about human communication. It’s general to utilize face expressions to convey feelings throughout a discussion, and personal communication is only possible through facial expressions. The goal of this study is for offering a machine learning-based emotion recognition structure for people who are impaired visually. We present a CNN-based solution approach to manage this challenge, for training and testing we used FER2013 database which consisted of 7 facial expression and a total of 35,685 images out of which we selected 3 facial expression happy, sad, and neutral comprising of 21264 images and achieved an accuracy of 81%.It has some limitations that it needs a person to operate and sometimes mix up of expressions so gives wrong results. Likewise, CNN we also implemented Transfer learning model i.e., Mobile Net for facial expression detection with similar dataset and achieved an accuracy of around 80%. To improvise our overall accuracy firstly we and modified the CNN design and achieved an overall accuracy of 91.65% which was superior to previous two implementations. The primary utility of the model is to help visually impaired people for better communication.","PeriodicalId":360351,"journal":{"name":"2022 4th International Conference on Artificial Intelligence and Speech Technology (AIST)","volume":"156 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Akshi: An Assistance system for visually challenged using Machine Learning\",\"authors\":\"Aakash Jain, Ritik Verma, Gurtej Singh Khokhar, Madhulika Bhadauria\",\"doi\":\"10.1109/AIST55798.2022.10064996\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work focuses on is emotion recognition. Emotion shows crucial data about human communication. It’s general to utilize face expressions to convey feelings throughout a discussion, and personal communication is only possible through facial expressions. The goal of this study is for offering a machine learning-based emotion recognition structure for people who are impaired visually. We present a CNN-based solution approach to manage this challenge, for training and testing we used FER2013 database which consisted of 7 facial expression and a total of 35,685 images out of which we selected 3 facial expression happy, sad, and neutral comprising of 21264 images and achieved an accuracy of 81%.It has some limitations that it needs a person to operate and sometimes mix up of expressions so gives wrong results. Likewise, CNN we also implemented Transfer learning model i.e., Mobile Net for facial expression detection with similar dataset and achieved an accuracy of around 80%. To improvise our overall accuracy firstly we and modified the CNN design and achieved an overall accuracy of 91.65% which was superior to previous two implementations. 
The primary utility of the model is to help visually impaired people for better communication.\",\"PeriodicalId\":360351,\"journal\":{\"name\":\"2022 4th International Conference on Artificial Intelligence and Speech Technology (AIST)\",\"volume\":\"156 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 4th International Conference on Artificial Intelligence and Speech Technology (AIST)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIST55798.2022.10064996\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Artificial Intelligence and Speech Technology (AIST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIST55798.2022.10064996","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
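The paper does not reproduce its architecture here, but the described setup (a CNN classifying FER2013-style 48x48 grayscale faces into happy, sad, and neutral) can be illustrated with a minimal sketch. The layer sizes, dropout rate, and optimizer below are assumptions for illustration, not the design reported in the paper.

```python
# Minimal sketch of a 3-class facial expression CNN for FER2013-style
# 48x48 grayscale images. All hyperparameters are illustrative
# assumptions, not the architecture from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # happy, sad, neutral (a subset of FER2013's 7 classes)

def build_cnn(input_shape=(48, 48, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                      # 48 -> 24
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                      # 24 -> 12
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                      # 12 -> 6
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                        # rate is an assumption
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    # categorical_crossentropy assumes one-hot encoded labels
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
```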
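The MobileNet transfer-learning variant the abstract mentions can be sketched the same way: freeze an ImageNet-pretrained backbone and train a small classification head. The input size, preprocessing, and head layers below are assumptions (MobileNet expects 3-channel input, so grayscale frames would need to be replicated to RGB and resized).

```python
# Minimal sketch of MobileNet transfer learning for the same 3-class
# task. Input size and head layers are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3),  # assumed resize of the 48x48 frames to RGB
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze ImageNet features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # happy, sad, neutral
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```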