{"title":"采用 MPEG-4 虚拟人物方法的聋哑儿童手语表达系统","authors":"Itimad Raheem Ali, Hoshang Kolivand","doi":"10.1007/s12652-024-04842-7","DOIUrl":null,"url":null,"abstract":"<p>Children with language impairments during their significant developmental periods within childhood are exposed to cognitive risk, social impairments, along with language. This is difficult with children born deaf from hearing parents who own little or no experience of communicating in sign language. This system presents the sign language in the context of British Sign Language (BSL) for producing utterances through virtual characters. In capturing, Kinect sensors use a motion capture sensor for motion actors. The connection uses sensors to read data, connect to high-quality 3D scans, and then use these high-quality scans of the animated MPEG-4 face and hand models. The main challenges of this system are the simultaneous capture of data for the whole hand and the development of the MPEG-4 approach considering the animation engines with descriptive sign language features. After synchronizing motion data from motion capture results with Kinect, the combined hand character adjusts points, frames, and time with virtual characters based on the motion of character actors. This study demonstrates the skills of this sign language system instrumental in presenting an assessment by users, highlighting the importance of the hand part in creating new accents and signs in BSL. We have validated this system by testing the reliability and functionality of the virtual characters..</p>","PeriodicalId":14959,"journal":{"name":"Journal of Ambient Intelligence and Humanized Computing","volume":"4 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Expressive sign language system for deaf kids with MPEG-4 approach of virtual human character\",\"authors\":\"Itimad Raheem Ali, Hoshang Kolivand\",\"doi\":\"10.1007/s12652-024-04842-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Children with language impairments during their significant developmental periods within childhood are exposed to cognitive risk, social impairments, along with language. This is difficult with children born deaf from hearing parents who own little or no experience of communicating in sign language. This system presents the sign language in the context of British Sign Language (BSL) for producing utterances through virtual characters. In capturing, Kinect sensors use a motion capture sensor for motion actors. The connection uses sensors to read data, connect to high-quality 3D scans, and then use these high-quality scans of the animated MPEG-4 face and hand models. The main challenges of this system are the simultaneous capture of data for the whole hand and the development of the MPEG-4 approach considering the animation engines with descriptive sign language features. After synchronizing motion data from motion capture results with Kinect, the combined hand character adjusts points, frames, and time with virtual characters based on the motion of character actors. This study demonstrates the skills of this sign language system instrumental in presenting an assessment by users, highlighting the importance of the hand part in creating new accents and signs in BSL. 
We have validated this system by testing the reliability and functionality of the virtual characters..</p>\",\"PeriodicalId\":14959,\"journal\":{\"name\":\"Journal of Ambient Intelligence and Humanized Computing\",\"volume\":\"4 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Ambient Intelligence and Humanized Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s12652-024-04842-7\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Ambient Intelligence and Humanized Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12652-024-04842-7","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
Expressive sign language system for deaf kids with MPEG-4 approach of virtual human character
Children whose language is impaired during the critical developmental periods of childhood face cognitive and social risks in addition to the language delay itself. The problem is particularly acute for deaf children born to hearing parents who have little or no experience of communicating in sign language. This system produces utterances in British Sign Language (BSL) through virtual characters. During capture, a Kinect sensor is used as the motion-capture device to record the performances of signing actors. The captured sensor data are combined with high-quality 3D scans, which are then used to drive the animated MPEG-4 face and hand models. The main challenges of the system are capturing data for the whole hand simultaneously and developing an MPEG-4 approach whose animation engine incorporates descriptive sign-language features. After the Kinect motion-capture data are synchronized, the combined hand model aligns points, frames, and timing with the virtual character according to the motion of the recorded actors. A user assessment demonstrates the capabilities of the sign-language system and highlights the importance of the hand component in creating new accents and signs in BSL. We validated the system by testing the reliability and functionality of the virtual characters.
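The pipeline summarised above (motion capture with Kinect, temporal synchronization of the captured streams, and retargeting onto MPEG-4 face and hand models) can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration, not the authors' implementation: the names (CaptureFrame, resample_to_rate, frame_to_params) and the indices in JOINT_TO_PARAM are assumptions, and it shows only the resampling of captured joint angles to a fixed animation frame rate and their conversion into integer MPEG-4-style animation-parameter values.

```python
# Hypothetical sketch: retargeting captured joint data onto MPEG-4-style
# animation parameters for a virtual signing character. All names and
# parameter indices are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class CaptureFrame:
    """One frame of motion-capture data (timestamp in seconds)."""
    timestamp: float
    joint_angles: Dict[str, float]   # e.g. {"r_wrist_flex": 0.42, ...}


def resample_to_rate(frames: List[CaptureFrame], fps: float) -> List[CaptureFrame]:
    """Linearly resample irregular capture timestamps to a fixed animation rate,
    so hand and face streams can be played back in sync on the virtual character."""
    t = np.array([f.timestamp for f in frames])
    names = frames[0].joint_angles.keys()
    grid = np.arange(t[0], t[-1], 1.0 / fps)
    out = []
    for ts in grid:
        angles = {n: float(np.interp(ts, t, [f.joint_angles[n] for f in frames]))
                  for n in names}
        out.append(CaptureFrame(float(ts), angles))
    return out


# Illustrative mapping from captured joint names to MPEG-4-style parameter IDs
# (hypothetical indices chosen only for the example).
JOINT_TO_PARAM = {
    "r_wrist_flex": 30,    # assumed index for right wrist flexion
    "r_index_flex1": 41,   # assumed index for the first index-finger joint
    "jaw_open": 3,         # assumed index for jaw opening
}


def frame_to_params(frame: CaptureFrame, scale: float = 1e5) -> Dict[int, int]:
    """Convert one resampled frame into integer parameter values, roughly
    mimicking the fixed-point units used by MPEG-4 animation engines."""
    return {JOINT_TO_PARAM[n]: int(a * scale)
            for n, a in frame.joint_angles.items() if n in JOINT_TO_PARAM}


if __name__ == "__main__":
    raw = [CaptureFrame(0.00, {"r_wrist_flex": 0.0, "r_index_flex1": 0.1, "jaw_open": 0.0}),
           CaptureFrame(0.07, {"r_wrist_flex": 0.2, "r_index_flex1": 0.3, "jaw_open": 0.1}),
           CaptureFrame(0.15, {"r_wrist_flex": 0.5, "r_index_flex1": 0.6, "jaw_open": 0.2})]
    for f in resample_to_rate(raw, fps=25.0):
        print(round(f.timestamp, 3), frame_to_params(f))
```

Resampling to a fixed rate before retargeting is one way to keep separately captured hand and face data aligned on the same animation timeline, which matches the synchronization step the abstract describes.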
Journal introduction:
The purpose of JAIHC is to provide a high-profile, leading-edge forum for academics, industrial professionals, educators and policy makers involved in the field to contribute and to disseminate the most innovative research and developments in all aspects of ambient intelligence and humanized computing, such as intelligent/smart objects, environments/spaces, and systems. The journal discusses various technical, safety, personal, social, physical, political, artistic and economic issues. The research topics covered by the journal include (but are not limited to):
Pervasive/Ubiquitous Computing and Applications
Cognitive wireless sensor network
Embedded Systems and Software
Mobile Computing and Wireless Communications
Next Generation Multimedia Systems
Security, Privacy and Trust
Service and Semantic Computing
Advanced Networking Architectures
Dependable, Reliable and Autonomic Computing
Embedded Smart Agents
Context awareness, social sensing and inference
Multimodal interaction design
Ergonomics and product prototyping
Intelligent and self-organizing transportation networks & services
Healthcare Systems
Virtual Humans & Virtual Worlds
Wearable sensors and actuators