ASL champ!: A virtual reality game with deep-learning driven sign recognition

Md Shahinur Alam, Jason Lamberton, Jianye Wang, Carly Leannah, Sarah Miller, Joseph Palagano, Myles de Bastion, Heather L. Smith, Melissa Malzkuhn, Lorna C. Quandt
Computers & Education: X Reality, Volume 4, Article 100059
Publication date: 2024-01-01
DOI: 10.1016/j.cexr.2024.100059
Article URL: https://www.sciencedirect.com/science/article/pii/S2949678024000096
Citations: 0

Abstract


We developed an American Sign Language (ASL) learning platform in a Virtual Reality (VR) environment to facilitate immersive interaction and real-time feedback for ASL learners. We describe the first game to use an interactive teaching style in which users learn from a fluent signing avatar and the first implementation of ASL sign recognition using deep learning within the VR environment. Advanced motion-capture technology powers an expressive ASL teaching avatar within an immersive three-dimensional environment. The teacher demonstrates an ASL sign for an object, prompting the user to copy the sign. Upon the user’s signing, a third-party plugin executes the sign recognition process alongside a deep learning model. Depending on the accuracy of a user’s sign production, the avatar repeats the sign or introduces a new one. We gathered a 3D VR ASL dataset from fifteen diverse participants to power the sign recognition model. The proposed deep learning model’s training, validation, and test accuracy are 90.12%, 89.37%, and 86.66%, respectively. The functional prototype can teach sign language vocabulary and be successfully adapted as an interactive ASL learning platform in VR.
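The abstract describes a teach-and-check loop: the avatar demonstrates a sign, the learner copies it, a recognition model scores the attempt, and the avatar either repeats the sign or advances to a new one. The sketch below illustrates that control flow only; all names (`SignLesson`, `recognize`, the 0.85 acceptance threshold) are hypothetical stand-ins, not the paper's actual plugin or model.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SignLesson:
    """Cycle through vocabulary: repeat a sign until the learner's
    production is accepted, then introduce the next sign."""
    vocabulary: List[str]
    recognize: Callable[[str, list], float]  # returns model confidence in [0, 1]
    threshold: float = 0.85                  # hypothetical acceptance cutoff
    index: int = 0                           # which sign is being taught
    history: List[str] = field(default_factory=list)

    def submit_attempt(self, motion_capture_frames: list) -> str:
        """Score the learner's captured signing against the current target
        and decide whether the avatar repeats it or moves on."""
        target = self.vocabulary[self.index]
        confidence = self.recognize(target, motion_capture_frames)
        self.history.append(target)
        if confidence >= self.threshold:
            # accurate production: advance to the next sign (stay on last)
            self.index = min(self.index + 1, len(self.vocabulary) - 1)
            return "advance"
        return "repeat"
```

In the paper's system the confidence would come from the deep-learning recognizer operating on VR hand-tracking data; here it is injected as a plain callable so the loop logic stands alone.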
