{"title":"Framing the Design Space of Multimodal Mid-Air Gesture and Speech-Based Interaction With Mobile Devices for Older People","authors":"O. Mich, G. Schiavo, Michela Ferron, N. Mana","doi":"10.4018/ijmhci.2020010102","DOIUrl":null,"url":null,"abstract":"Multimodal human–computer interaction has been sought to provide not only more compelling interactive experiences, but also more accessible interfaces to mobile devices. With the advance in mobile technology and in affordable sensors, multimodal research that leverages and combines multiple interaction modalities (such as speech, touch, vision, and gesture) has become more and more prominent. This article provides a framework for the key aspects in mid-air gesture and speech-based interaction for older adults. It explores the literature on multimodal interaction and older adults as technology users and summarises the main findings for this type of users. Building on these findings, a number of crucial factors to take into consideration when designing multimodal mobile technology for older adults are described. The aim of this work is to promote the usefulness and potential of multimodal technologies based on mid-air gestures and voice input for making older adults' interaction with mobile devices more accessible and inclusive.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":0.2000,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Mobile Human Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/ijmhci.2020010102","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 5
Abstract
Multimodal human–computer interaction has long been pursued as a way to provide not only more compelling interactive experiences, but also more accessible interfaces to mobile devices. With advances in mobile technology and affordable sensors, multimodal research that leverages and combines multiple interaction modalities (such as speech, touch, vision, and gesture) has become increasingly prominent. This article provides a framework for the key aspects of mid-air gesture and speech-based interaction for older adults. It explores the literature on multimodal interaction and on older adults as technology users, and summarises the main findings for this user group. Building on these findings, it describes a number of crucial factors to take into consideration when designing multimodal mobile technology for older adults. The aim of this work is to promote the usefulness and potential of multimodal technologies based on mid-air gestures and voice input for making older adults' interaction with mobile devices more accessible and inclusive.