Doaa E. Elmatary, Doaa M. Maher, Areeg Tarek Ibrahim
{"title":"有效数据预处理的智能手语多语言实时预测系统","authors":"Doaa E. Elmatary, Doaa M. Maher, Areeg Tarek Ibrahim","doi":"10.4236/jcc.2023.1110008","DOIUrl":null,"url":null,"abstract":"A multidisciplinary approach for developing an intelligent sign multi-language recognition system to greatly enhance deaf-mute communication will be discussed and implemented. This involves designing a low-cost glove-based sensing system, collecting large and diverse datasets, preprocessing the data, and using efficient machine learning models. Furthermore, the glove is integrated with a user-friendly mobile application called “Life-sign” for this system. The main goal of this work is to minimize the processing time of machine learning classifiers while maintaining higher accuracy performance. This is achieved by using effective preprocessing algorithms to handle noisy and inconsistent data. Testing and iterating approaches have been applied to various classifiers to refine and improve their accuracy in the recognition process. Additionally, the Extra Trees (ET) classifier has been identified as the best algorithm, with results proving successful gesture prediction at an average accuracy of about 99.54%. A smart optimization feature has been implemented to control the size of data transferred via Bluetooth, allowing for fast recognition of consecutive gestures. Real-time performance has been measured through extensive experimental testing on various consecutive gestures, specifically referring to Arabic Sign Language (ArSL). The results have demonstrated that the system guarantees consecutive gesture recognition with a lower delay of 50 milliseconds.","PeriodicalId":67799,"journal":{"name":"电脑和通信(英文)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Intelligent Sign Multi-Language Real-Time Prediction System with Effective Data Preprocessing\",\"authors\":\"Doaa E. Elmatary, Doaa M. Maher, Areeg Tarek Ibrahim\",\"doi\":\"10.4236/jcc.2023.1110008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A multidisciplinary approach for developing an intelligent sign multi-language recognition system to greatly enhance deaf-mute communication will be discussed and implemented. This involves designing a low-cost glove-based sensing system, collecting large and diverse datasets, preprocessing the data, and using efficient machine learning models. Furthermore, the glove is integrated with a user-friendly mobile application called “Life-sign” for this system. The main goal of this work is to minimize the processing time of machine learning classifiers while maintaining higher accuracy performance. This is achieved by using effective preprocessing algorithms to handle noisy and inconsistent data. Testing and iterating approaches have been applied to various classifiers to refine and improve their accuracy in the recognition process. Additionally, the Extra Trees (ET) classifier has been identified as the best algorithm, with results proving successful gesture prediction at an average accuracy of about 99.54%. A smart optimization feature has been implemented to control the size of data transferred via Bluetooth, allowing for fast recognition of consecutive gestures. Real-time performance has been measured through extensive experimental testing on various consecutive gestures, specifically referring to Arabic Sign Language (ArSL). 
The results have demonstrated that the system guarantees consecutive gesture recognition with a lower delay of 50 milliseconds.\",\"PeriodicalId\":67799,\"journal\":{\"name\":\"电脑和通信(英文)\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"电脑和通信(英文)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4236/jcc.2023.1110008\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"电脑和通信(英文)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4236/jcc.2023.1110008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Intelligent Sign Multi-Language Real-Time Prediction System with Effective Data Preprocessing
This paper discusses and implements a multidisciplinary approach to developing an intelligent multi-language sign recognition system that greatly enhances deaf-mute communication. The work involves designing a low-cost glove-based sensing system, collecting a large and diverse dataset, preprocessing the data, and applying efficient machine learning models. The glove is integrated with a user-friendly mobile application called “Life-sign”. The main goal of this work is to minimize the processing time of machine learning classifiers while maintaining high accuracy, which is achieved by using effective preprocessing algorithms to handle noisy and inconsistent data. Iterative testing was applied to various classifiers to refine and improve their recognition accuracy, and the Extra Trees (ET) classifier was identified as the best algorithm, predicting gestures with an average accuracy of about 99.54%. A smart optimization feature controls the amount of data transferred via Bluetooth, allowing fast recognition of consecutive gestures. Real-time performance was measured through extensive experimental testing on various consecutive gestures in Arabic Sign Language (ArSL). The results demonstrate that the system recognizes consecutive gestures with a delay as low as 50 milliseconds.
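The abstract does not include code, so the following is only a minimal illustrative sketch of the kind of pipeline it describes: smoothing and normalizing noisy glove-sensor readings, then training an Extra Trees classifier with scikit-learn. The channel count, filter window, gesture labels, and synthetic data are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): preprocess noisy glove-sensor
# samples, then classify gestures with an Extra Trees model.
import numpy as np
from scipy.signal import medfilt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def preprocess(samples: np.ndarray) -> np.ndarray:
    """Smooth each sensor channel and normalize features to [0, 1]."""
    # Median-filter each channel (column) to suppress spikes from noisy readings.
    smoothed = np.apply_along_axis(lambda ch: medfilt(ch, kernel_size=5), 0, samples)
    # Min-max normalize so all sensor channels share a common scale.
    mins, maxs = smoothed.min(axis=0), smoothed.max(axis=0)
    return (smoothed - mins) / np.maximum(maxs - mins, 1e-9)

# Synthetic stand-in for the glove dataset: 1000 samples x 10 assumed sensor channels.
rng = np.random.default_rng(0)
X = preprocess(rng.normal(size=(1000, 10)))
y = rng.integers(0, 5, size=1000)  # 5 hypothetical gesture classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real, well-preprocessed gesture data in place of the synthetic arrays, the same structure would apply; the reported 99.54% average accuracy refers to the authors' own dataset and tuning, not to this sketch.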