Prathik B. Jain, Sandeep Bhat, Gourav Pujari, Vismitha Hiremath, Deepti. C
Title: Eye Typing-Vision Based Human Activity Control
DOI: 10.1109/icdcece53908.2022.9792928
Published in: 2022 IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE)
Publication date: 2022-04-23

Abstract: In real-world applications, the computer is an extension of the human body, used to complete tasks quickly and efficiently, and the primary task performed by a computer user is typing. People with severe motor disabilities such as paralysis cannot communicate effectively using their hands. The aim of the work described in this paper is to provide an on-screen keyboard on which a user can type and communicate simply by coordinating eye blinks. This is achieved through face detection followed by localization of the eye region in the facial image, which enables the eye-gaze coordinates to be captured and mapped to the corresponding key on the keyboard. The DLib toolkit is used for this purpose: it provides pre-trained models for face detection and facial landmark detection based on a Support Vector Machine algorithm, and a transfer learning approach is applied to achieve efficiency. The outcome of the work is the integration of the eye-typing platform with a variety of other features, which has improved software usability and productivity.
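The abstract does not include code, but the blink detection it describes is commonly implemented on top of DLib's 68-point facial landmarks via the eye aspect ratio (EAR): the ratio of the eye's vertical landmark distances to its horizontal width collapses when the eyelid closes. A minimal sketch, assuming six eye landmarks in DLib's ordering and an illustrative blink threshold of 0.2 (the paper does not state its threshold):

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six landmark points.

    `eye` is a list of six (x, y) tuples in the order used by dlib's
    68-point facial landmark model (corner, two upper-lid points,
    corner, two lower-lid points). EAR drops sharply when the eye
    closes, so a blink can be detected by thresholding it.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # vertical distances between upper- and lower-eyelid landmarks
    v1 = dist(eye[1], eye[5])
    v2 = dist(eye[2], eye[4])
    # horizontal distance between the two eye corners
    h = dist(eye[0], eye[3])
    return (v1 + v2) / (2.0 * h)

def is_blink(ear, threshold=0.2):
    # Below the threshold the eye is treated as closed; the threshold
    # value is an assumption and is typically tuned per user and camera.
    return ear < threshold

# Synthetic landmarks for illustration (not from the paper):
# an open eye has large vertical gaps relative to its width ...
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
# ... while a nearly closed eye has small vertical gaps.
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
```

In a live system the landmark coordinates would come from `dlib.shape_predictor` applied to each video frame; a blink registered while the gaze rests on a key would trigger that key press.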
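The other step the abstract names, mapping captured gaze coordinates to a key on the on-screen keyboard, amounts to a hit test against the keyboard layout. A sketch under assumed layout dimensions (a 1000x300-pixel region split into three equal rows; the paper does not specify its layout):

```python
# Hypothetical 3-row on-screen keyboard layout, for illustration only.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(x, y, width=1000, height=300):
    """Map a gaze coordinate to the key under it on a simple grid layout.

    The keyboard region of size width x height is split into equal rows,
    and each row is split evenly among its keys. The dimensions are
    assumptions for illustration, not values taken from the paper.
    """
    if not (0 <= x < width and 0 <= y < height):
        return None  # gaze falls outside the keyboard area
    row = ROWS[int(y * len(ROWS) / height)]
    col = int(x * len(row) / width)
    return row[col]
```

Combined with blink detection, the control loop is: estimate the gaze point, look up `key_at(x, y)`, and emit that character when a blink (or a dwell timeout) confirms the selection.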