Usman Hassan, Hamza Ahmed Khan, Hamdah Khan, M. Owais
{"title":"Communication System For Non-Verbal Paralyzed Patients Using Computer Vision","authors":"Usman Hassan, Hamza Ahmed Khan, Hamdah Khan, M. Owais","doi":"10.1109/ICCIS54243.2021.9676383","DOIUrl":null,"url":null,"abstract":"This paper presents a solution for assisting the paralyzed patients in their day to day lives, by integrating a system that would be controlled by their eyes. The presented system contains a set of different messages in a variety of languages that the user can select from without any speech or physical movement, other than eyes. This system uses computer vision to translate the movement of eye-ball to the cursor on the screen, and eye-blinking to register clicks on the computer to trigger the desired output. The technique used to implement object detection in the following system is called HOG (Histogram of Oriented Gradient). The system is tested primarily on two datasets, one of which is an open-source dataset containing pictures of people, obtained from Kaggle, and the other one is collected locally at DHA Suffa University.","PeriodicalId":165673,"journal":{"name":"2021 4th International Conference on Computing & Information Sciences (ICCIS)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 4th International Conference on Computing & Information Sciences (ICCIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCIS54243.2021.9676383","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper presents a solution for assisting paralyzed patients in their day-to-day lives through a system controlled entirely by their eyes. The system offers a set of messages in a variety of languages that the user can select without speech or any physical movement other than eye motion. Computer vision translates eye-ball movement into cursor movement on the screen, and eye blinks into mouse clicks that trigger the desired output. Object detection in the system is implemented with the HOG (Histogram of Oriented Gradients) technique. The system is tested primarily on two datasets: an open-source dataset of pictures of people obtained from Kaggle, and a dataset collected locally at DHA Suffa University.
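The paper itself does not include code; as an illustration of the HOG technique the abstract names, the following is a minimal sketch of the core HOG operation, a magnitude-weighted orientation histogram for a single image cell. The function name, cell size, and bin count are illustrative choices, not details taken from the paper (a full HOG descriptor would additionally tile the image into cells and block-normalize the histograms).

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned-gradient orientation histogram for one grayscale cell.

    cell: 2D float array (an image patch).
    Returns a length-n_bins histogram over orientations in [0, 180)
    degrees, with each pixel's vote weighted by its gradient magnitude.
    """
    # Central-difference gradients ([-1, 0, 1] filter), zero at borders.
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]

    magnitude = np.hypot(gx, gy)
    # Unsigned orientation folded into [0, 180) degrees.
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0

    bin_width = 180.0 / n_bins
    hist = np.zeros(n_bins)
    idx = np.minimum((orientation / bin_width).astype(int), n_bins - 1)
    np.add.at(hist, idx.ravel(), magnitude.ravel())
    return hist
```

For example, a patch containing a vertical step edge has purely horizontal gradients, so all of its histogram mass lands in the first (0-degree) orientation bin.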
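The abstract states that eye blinks register clicks. One common way to detect a blink from eye landmarks (a standard approach in the literature, not necessarily the authors' exact method) is the eye aspect ratio (EAR): the ratio of the eye's vertical openings to its horizontal width, which drops sharply when the eye closes. A minimal sketch, assuming six landmark points per eye in the usual p1..p6 ordering:

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six (x, y) landmarks around one eye.

    Assumed ordering: eye[0]/eye[3] are the horizontal corners,
    eye[1]/eye[2] the upper lid, eye[5]/eye[4] the lower lid.
    A ratio near zero indicates a closed eye (a blink when transient).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)
```

In use, the EAR is computed per video frame and compared against a threshold (values around 0.2 are typical); a short run of below-threshold frames is treated as a deliberate blink and mapped to a click.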