Deep learning-based eye sign communication system for people with speech impairments.

IF 2.2 · JCR Q2 (REHABILITATION) · Region 4, Medicine
Rajesh Kannan Megalingam, Sakthiprasad Kuttankulangara Manoharan, Gokul Riju, Shree Rajesh Raagul Vadivel
Journal: Disability and Rehabilitation-Assistive Technology, pp. 1-22
DOI: 10.1080/17483107.2025.2532698
Published: 2025-07-20 (Journal Article)
Citations: 0

Abstract


Objective: People with motor difficulties and speech impairments often struggle to communicate their needs and views. Augmentative and Alternative Communication (AAC) offers solutions through gestures, body language, or specialized equipment. However, eye gaze and eye signs remain the sole communication method for some individuals. While existing eye-gaze devices leverage deep learning, their pre-calibration techniques can be unreliable and susceptible to lighting conditions. On the other hand, research into eye sign-based communication is still at a very early stage.

Methods: In this research, we propose an eye sign-based communication system that operates on deep learning principles and accepts eye sign patterns from speech-impaired or paraplegic individuals via a standard webcam. The system converts the eye signs into alphabets, words, or sentences and displays the resulting text visually on the screen. In addition, it provides a vocal prompt for the user and the caretaker. It functions effectively in various lighting conditions without requiring calibration and integrates a text prediction function for user convenience.

Impact: Experiments conducted with participants aged between 18 and 35 years yielded average accuracy rates of 98%, 99%, and 99% for alphabet, word, and sentence formation, respectively. These results demonstrate the system's robustness and potential to significantly benefit individuals with speech impairments.
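The abstract describes a pipeline in which classified eye signs are converted into letters and assembled into words with text prediction. The paper does not disclose its sign encoding or prediction logic, so the sketch below is purely illustrative: the two-sign character codes, the vocabulary, and the prefix-matching prediction rule are all hypothetical stand-ins for whatever the authors' trained model and dictionary actually use.

```python
# Illustrative decoding stage only (the deep-learning classifier that labels
# each webcam frame as an eye sign is assumed to run upstream). All codes and
# the vocabulary below are hypothetical, not from the paper.

SIGN_CODES = {
    ("left", "left"): "a",
    ("left", "right"): "b",
    ("right", "left"): "c",
    ("blink", "blink"): " ",  # hypothetical word separator
}

VOCAB = ["cab", "cat", "call"]  # hypothetical prediction dictionary


def decode_signs(signs):
    """Pair up classified eye signs and map each pair to a character."""
    chars = []
    for i in range(0, len(signs) - 1, 2):
        ch = SIGN_CODES.get((signs[i], signs[i + 1]))
        if ch is not None:
            chars.append(ch)
    return "".join(chars)


def predict_word(prefix, vocab=VOCAB):
    """Suggest the first vocabulary word matching the typed prefix."""
    for word in vocab:
        if word.startswith(prefix):
            return word
    return prefix


# A sequence of classified signs spelling "cab" pair by pair:
signs = ["right", "left", "left", "left", "left", "right"]
typed = decode_signs(signs)
print(typed, predict_word(typed[:2]))
```

In a full system, the text prediction step would let a user stop after a partial spelling and accept a suggested completion, which is what makes word- and sentence-level entry faster than letter-by-letter input.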

Source journal metrics: CiteScore 5.70 · self-citation rate 13.60% · 128 articles published