Consideration of artificial intelligence applications for interpreting communicative movements by individuals with visual and/or motor disabilities

Krista M Wilkinson, Dawn J Sowers, Lynn R Elko, Mitchell Case, Sharon Redmon, Kevin Williams, Tanuj Namdeo, Syed Billah

Augmentative and Alternative Communication, pp. 1-13. Published 2025-05-15. DOI: 10.1080/07434618.2025.2495905
Abstract
Access to speech-language therapies that promote optimal communication outcomes is a fundamental right outlined by Article 19 of the United Nations Convention on the Rights of Persons with Disabilities, as well as the Communication Bill of Rights of the National Joint Committee on the Communication Needs of Persons with Severe Disabilities. For many individuals, augmentative and alternative communication (AAC) offers an important set of supports for realizing this critical human right. Yet despite a strong evidence base, many AAC systems are not tailored to the support needs of individuals with visual and/or motor impairments. In this paper we explore the potential for artificial intelligence to interpret body-based communicative movements (including gestures) and translate them into speech output. The team, which includes individuals with motor impairments, developed and tested a prototype application that shifts the burden of access off the AAC user and onto the application through the use of body-based sensors. The effort demonstrated proof of concept that the prototype could learn meaningful idiosyncratic gestures and ignore unintentional ones. Movement sensors interpreted the gestures regardless of the user's or communication device's position, reducing constraints on usability and potentially increasing the flexibility of AAC access. Dedicated research is needed to confirm this possibility.
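To make the general idea concrete, the sketch below illustrates one way a system of this kind could be structured: windows of wearable-sensor data are summarized as features, a classifier is trained on a user's own idiosyncratic gesture examples, and a confidence threshold is used to ignore movements that do not match any learned gesture. This is not the authors' prototype; the sensor format, window length, feature set, classifier choice, threshold value, and the gesture-to-phrase mapping (PHRASES) are all illustrative assumptions, and the training data here is synthetic so the example runs without hardware.

```python
# Illustrative sketch only: NOT the prototype described in the paper. It assumes
# a generic 3-axis accelerometer streaming fixed-length movement windows, and a
# confidence threshold to separate learned gestures from unintentional movement.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 50  # samples per movement window (hypothetical segmentation)

def features(window: np.ndarray) -> np.ndarray:
    """Summarize a (WINDOW, 3) accelerometer window with simple statistics."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def synthetic_gesture(label: int, n: int = 40) -> np.ndarray:
    """Stand-in for recorded training examples of one user-specific gesture."""
    rng = np.random.default_rng(label)
    base = np.sin(np.linspace(0, (label + 1) * np.pi, WINDOW))[:, None]
    return np.stack([base * rng.uniform(0.5, 1.5) + rng.normal(0, 0.1, (WINDOW, 3))
                     for _ in range(n)])

# Hypothetical mapping from learned gesture labels to speech output.
PHRASES = {0: "I need help", 1: "Yes", 2: "No"}

# Build a training set from the (synthetic) per-gesture examples.
X, y = [], []
for label in PHRASES:
    for window in synthetic_gesture(label):
        X.append(features(window))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def interpret(window: np.ndarray, threshold: float = 0.8) -> str | None:
    """Return a phrase for a confident match; otherwise produce no output."""
    probs = clf.predict_proba([features(window)])[0]
    best = int(clf.classes_[np.argmax(probs)])
    if probs.max() < threshold:
        return None  # treated as unintentional movement: no speech output
    return PHRASES[best]

# A new window resembling gesture 1 should map to "Yes"; arbitrary noise is
# more likely to fall below the threshold and be ignored.
print(interpret(synthetic_gesture(1, n=1)[0]))
print(interpret(np.random.default_rng(99).normal(0, 1, (WINDOW, 3))))
```

Because the classifier is trained only on the individual user's own movement samples, the same pipeline can in principle accommodate idiosyncratic gestures without requiring a standard gesture vocabulary; the rejection threshold is the simplest possible stand-in for distinguishing intentional from unintentional movement.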
Journal Introduction
As the official journal of the International Society for Augmentative and Alternative Communication (ISAAC), Augmentative and Alternative Communication (AAC) publishes scientific articles related to the field of augmentative and alternative communication that report research concerning assessment, treatment, rehabilitation, and education of people who use or have the potential to use AAC systems, or that discuss theory, technology, and systems development relevant to AAC. The broad range of topics included in the Journal reflects the development of this field internationally. Manuscripts submitted to AAC should fall within one of the following categories and must comply with the associated page maximums listed on page 3 of the Manuscript Preparation Guide.
Research articles (full peer review): These manuscripts report the results of original empirical research, including studies using qualitative and quantitative methodologies, with both group and single-case experimental research designs (e.g., Binger et al., 2008; Petroi et al., 2014).
Technical, research, and intervention notes (full peer review): These are brief manuscripts that address methodological, statistical, technical, or clinical issues or innovations that are of relevance to the AAC community and are designed to bring the research community’s attention to areas that have been minimally or poorly researched in the past (e.g., research note: Thunberg et al., 2016; intervention notes: Laubscher et al., 2019).